starcite.ai

The session log for multi-agent AI

Your AI product streams messages to users in real time. When they switch tabs, refresh, or drop WiFi, the stream dies — and the session can vanish. starcite is the session log underneath: every message, tool call, and status update is persisted before the write returns. Your UI reads from a cursor — catch-up and live streaming are the same operation.

"I switched tabs and my response vanished" · "The AI just stopped mid-sentence" · Tab switch kills SSE stream · "I cancelled and lost everything" · No session history after refresh · Redis OOM at 3am
"My two tabs show different things" · "The thinking steps showed up after the answer" · No total order across agents · Session unrecoverable after disconnect · "Everything broke after your update" · 600 lines of reconnect logic
"I see the same message twice" · No cursor, no resume point · Mobile background loses session · Multi-device state diverges · SSE drops, no way to catch up · Agent handoff loses context

Recognise yourself? We help you onboard & migrate.

Book a call
# Create a session
$ starcite create --title "Draft contract"
sess_a4f2e1
# Each agent appends
$ starcite append sess_a4f2e1 --agent researcher --text "Found 8 relevant cases…"
$ starcite append sess_a4f2e1 --agent drafter --text "Drafting clause 4.2…"
$ starcite append sess_a4f2e1 --agent reviewer --text "Clause 4.2 approved."
# Tail one agent
$ starcite tail sess_a4f2e1 --agent drafter
[drafter] Drafting clause 4.2…

Persisted before acknowledgement. Every append returns only after the message is durably stored. If the connection drops, every message written so far is in the log.

Same order, every reader. Multiple agents write concurrently. Every consumer — browser, dashboard, webhook — sees the same sequence.

Cursor-based resume. tail(cursor=0) replays from the start. tail(cursor=N) picks up where you left off. Same API, whether you're catching up or streaming live.

Many writers, many readers. Agents append from any node without sticky sessions. Any number of clients can tail the same session concurrently.

Works with any framework. If your stack can call append(), starcite fits. Our SDKs work with Next.js, the Vercel AI SDK, and custom runtimes.
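The append/tail/cursor model above can be sketched in a few lines. This is a hypothetical in-memory stand-in, not the real starcite SDK: the point is only that appends return a durable position, and that replay and resume are the same read with different cursors.

```typescript
// Minimal in-memory model of the session-log semantics described above.
// Names (SessionLog, append, tail) are illustrative, not the real SDK API.

type SessionEvent = { seq: number; agent: string; text: string };

class SessionLog {
  private events: SessionEvent[] = [];

  // Append returns only once the event is stored; here the "store" is an
  // array, and the returned seq is the event's durable position.
  append(agent: string, text: string): number {
    const seq = this.events.length + 1;
    this.events.push({ seq, agent, text });
    return seq;
  }

  // tail(cursor) returns every event after `cursor`, in total order.
  // cursor=0 replays from the start; cursor=N resumes after event N.
  tail(cursor: number): SessionEvent[] {
    return this.events.filter((e) => e.seq > cursor);
  }
}

const log = new SessionLog();
log.append("researcher", "Found 8 relevant cases");
log.append("drafter", "Drafting clause 4.2");
log.append("reviewer", "Clause 4.2 approved.");

const replay = log.tail(0); // full history
const resume = log.tail(2); // only the reviewer's message
```

Catch-up and live streaming share one code path: a fresh client calls tail(0), a reconnecting client calls tail with whatever cursor it last rendered.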

“I built this myself too many times. Duct-taped Redis, Postgres, and SSE together and still ended up with constant pain. After watching three other teams do the exact same thing, I got pissed and figured we could do better.” — Sebastian Lund, Founder

Three agents, one session, total order. Tail it live or replay from the beginning with cursor=0.

tailing sess_d2f43

2841  14:22:01.203  message     planner    Found 3 pending migrations. Handing off.
2842  14:22:01.917  handoff     planner    planner → validator reason="migrations detected"
2843  14:22:02.051  status      validator  validator attached cursor=2842
2844  14:22:02.412  message     validator  Running dry-run on 3 migrations…
2845  14:22:03.801  checkpoint  validator  3/3 migrations OK schema_version=47
2846  14:22:04.002  message     validator  Running test suite (parallel=4)…
2847  14:22:06.118  checkpoint  validator  47/47 passing coverage=94.2%
2848  14:22:06.201  message     validator  All checks green. Safe to deploy.
2849  14:22:06.302  handoff     validator  validator → deployer reason="all checks passed"
2850  14:22:06.488  status      deployer   deployer attached cursor=2849
2851  14:22:06.501  message     deployer   Deploying staging canary (v2.4)…
2852  14:22:08.204  checkpoint  deployer   canary live traffic=10% instances=2/20
2853  14:22:09.101  message     deployer   Running health check…
2854  14:22:09.812  artifact    deployer   p99=42ms error_rate=0.00% status=healthy

FAQ

How does starcite fit into my existing stack?

starcite sits underneath your orchestrator and transport as the session persistence and replay layer. Add append() where your agents produce output and use tail() for reads, then stream those reads into your UI however you like.

How is this different from SSE or WebSockets?

SSE and WebSockets are transport layers. starcite is session infrastructure: it persists your session data, assigns a total order, and lets any client resume from any position. When your UI needs updates, call tail() and stream them. You can render those updates in your own UI pipeline, or proxy reads yourself—your API usage stays the same.

Why do messages disappear or duplicate after refresh, tab switching, or reconnect?

Streams are not a source of truth. When the connection drops, you miss events, and when you reconnect you often replay from the wrong spot and show duplicates. starcite persists each event to an ordered log, so the UI can reconnect and tail from its last cursor to catch up exactly. If you use our SDKs, that cursor handling is the default. For the full technical breakdown, see Why Agent UIs Lose Messages on Refresh.
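The catch-up pattern above can be sketched as a small loop. This is an illustrative model, not the real SDK: fetchEvents stands in for a tail call against the log, and the client tracks only one piece of state, the last cursor it rendered.

```typescript
// Sketch of reconnect-and-catch-up against an ordered log.
// fetchEvents stands in for a starcite tail() call; names are illustrative.

type SessionEvent = { seq: number; text: string };

// Pretend server-side log; in practice this lives behind the API.
const serverLog: SessionEvent[] = [
  { seq: 1, text: "Found 3 pending migrations." },
  { seq: 2, text: "Running dry-run on 3 migrations…" },
  { seq: 3, text: "All checks green." },
];

function fetchEvents(cursor: number): SessionEvent[] {
  return serverLog.filter((e) => e.seq > cursor);
}

let lastCursor = 0;
const rendered: string[] = [];

function catchUp(): void {
  // Tail from the last rendered cursor: no gaps, no duplicates,
  // even if this runs again after a dropped connection.
  for (const e of fetchEvents(lastCursor)) {
    rendered.push(e.text);
    lastCursor = e.seq;
  }
}

catchUp(); // initial load renders everything
catchUp(); // "reconnect" renders nothing new — no duplicates
```

Because the cursor advances only as events are rendered, rerunning catchUp after any failure is safe by construction.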

How do I resume SSE/EventSource after reconnect?

Treat EventSource auto-reconnect (and Last-Event-ID) as a transport hint, not your resume mechanism. Persist the last starcite cursor you rendered, then reconnect by tailing from that cursor. You catch up from the log and avoid both gaps and duplicates. Our SDKs persist the cursor and reconnect automatically by default. For a deeper dive, read the six failure modes post.
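Concretely, the client persists the cursor itself rather than trusting Last-Event-ID. The sketch below is hypothetical: the storage key and URL shape are made up for illustration, and a Map stands in for localStorage.

```typescript
// Sketch: persist the last-rendered cursor, resume from it on reconnect.
// Storage key and URL shape are illustrative, not the real starcite API.

const store = new Map<string, string>(); // stand-in for localStorage

function saveCursor(session: string, cursor: number): void {
  store.set(`starcite:cursor:${session}`, String(cursor));
}

function resumeUrl(session: string): string {
  // A client with no saved cursor falls back to 0 — i.e. full replay.
  const cursor = Number(store.get(`starcite:cursor:${session}`) ?? "0");
  return `/sessions/${session}/tail?cursor=${cursor}`;
}

saveCursor("sess_a4f2e1", 2849);
const url = resumeUrl("sess_a4f2e1"); // resumes mid-session
const fresh = resumeUrl("sess_unknown"); // fresh client replays from 0
```

The transport can still be SSE or WebSockets; the cursor in the URL is what makes the reconnect exact.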

Should I use Redis Streams or Kafka, or build my own event broker?

Redis Streams and Kafka can work if you already run them, but you still end up owning ordering, retention, replay, and reconnect edge cases. If you just want reliable sessions, starcite gives you the ordered log and cursor-based catch-up without building and operating a bespoke stream system.

Can multiple clients observe the same session?

Any number of clients can tail any session. A browser, a second tab, a mobile device, a monitoring dashboard, a webhook consumer — they all call tail(session, cursor) and see the same totally ordered history. Each reader owns its own cursor.
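"Each reader owns its own cursor" can be shown with two readers over one log. The log shape here is an illustrative stand-in; the point is that readers share the history but never share read state.

```typescript
// Sketch: two readers tailing the same session with independent cursors.
// Event contents are illustrative.

const session = [
  { seq: 1, text: "planner: handing off" },
  { seq: 2, text: "validator: checks green" },
  { seq: 3, text: "deployer: canary live" },
];

function tail(cursor: number) {
  return session.filter((e) => e.seq > cursor);
}

// A browser that has been connected all along sits at the head:
const browserCursor = 3;
// A dashboard that just attached wants full history:
const dashboardCursor = 0;

const browserSees = tail(browserCursor); // nothing new
const dashboardSees = tail(dashboardCursor); // the same totally ordered history
```

Neither read affects the other: adding a third reader (a webhook consumer, a second tab) is just another cursor.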

How is starcite persistence implemented?

By default, events are persisted in S3-compatible object storage. That gives us an append-only, durable log with total-order keys that sessions can resume from at any time.

How does ordering work with multiple agents?

Every event gets a monotonic sequence number on write, so every consumer sees the same total order.
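A minimal sketch of that guarantee, under the assumption (true of any single ordered log) that sequence numbers are assigned by the log at write time, not by the agents:

```typescript
// Sketch: the log assigns a monotonic seq on write, so interleaved
// writers still yield one total order. Names are illustrative.

let nextSeq = 0;
const log: { seq: number; agent: string }[] = [];

function append(agent: string): number {
  const seq = ++nextSeq; // monotonic, assigned at write time
  log.push({ seq, agent });
  return seq;
}

// Two agents interleave their writes in arbitrary order…
append("planner");
append("validator");
append("planner");

// …but every consumer reads the same sequence.
const order = log.map((e) => e.seq);
```

Because ordering is decided at the log, not the client, no consumer can ever observe the thinking steps after the answer or two tabs in different states.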

What's the difference between replay and resume?

They're the same operation. tail(cursor=0) replays the full session history. tail(cursor=N) resumes from message N. There is no separate replay API.

What can I store in a session?

Any typed message: user messages, assistant responses, tool calls, tool results, status updates, thinking steps, agent handoffs, checkpoints, artifacts. starcite is schema-flexible — you define the types that matter to your product.

Is starcite an agent framework?

No. starcite does not route, plan, or execute agents. It does not run tools. It persists the messages your agents produce and lets any client read them back in order. Your orchestrator stays yours.

Is this open source?

Yes. You can view the source at github.com/fastpaca/starcite.

Want to chat?

We help you onboard & migrate. 30 min call, no pitch deck.

We're currently prioritizing a few early-stage partners for bespoke integrations and production migration support.

Book a call

© 2026 Anor AI Limited
