You Don't Need to Shard Your Database for AI Agents
Every time someone builds an AI agent, the instinct is to reach for Postgres, Redis, or a distributed database cluster. The reasoning sounds prudent: "what if we need to scale?" But for most agent workloads, a single SQLite file per session handles everything just fine.
Why SQLite Works for Agents
An AI agent session is fundamentally a single-writer workload. One agent runs, makes decisions, stores state, and moves on. There is no concurrent write pressure. There is no multi-region replication requirement. There is just one process writing structured data.
SQLite gives you:
- Zero configuration - no server process, no connection pooling, no network hops
- Single file backup - copy the file and you have a complete snapshot
- Sub-millisecond reads - the data is local, not crossing a network
- Session isolation - each agent session gets its own database file
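As a concrete sketch of what "zero configuration" means in practice, the whole setup fits in a few lines of Python's built-in `sqlite3` module. The helper name `open_session_db` and the `events` table are illustrative, not part of any particular library:

```python
import sqlite3
import uuid
from pathlib import Path

def open_session_db(base_dir: str = "sessions") -> sqlite3.Connection:
    """Hypothetical helper: give each agent session its own database file."""
    Path(base_dir).mkdir(exist_ok=True)
    db_path = Path(base_dir) / f"session-{uuid.uuid4().hex}.db"
    conn = sqlite3.connect(db_path)          # no server, no connection string
    conn.execute("PRAGMA journal_mode=WAL")  # fast local writes, readers never block
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events ("
        " id INTEGER PRIMARY KEY,"
        " kind TEXT NOT NULL,"    # e.g. 'decision', 'tool_call', 'result'
        " payload TEXT NOT NULL)" # JSON blob
    )
    return conn

conn = open_session_db()
conn.execute(
    "INSERT INTO events (kind, payload) VALUES (?, ?)",
    ("decision", '{"action": "search_docs"}'),
)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 1
```

There is nothing to deploy or configure here; `sqlite3.connect` creates the file on first use, and a fresh UUID-named file per session gives you isolation for free.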
When This Breaks Down
You need something bigger when multiple agents share state simultaneously, when you need real-time cross-session queries, or when your data exceeds what fits on a single disk. But that threshold is much higher than most people assume.
A single SQLite file can grow enormous - the documented limit is about 281 terabytes. Most agent sessions produce kilobytes to megabytes of state data.
The Practical Pattern
Start each agent session with a new SQLite file. Store decisions, tool calls, intermediate results, and final outputs. When the session ends, archive the file. If you need to query across sessions later, build a simple aggregation step that reads the archived files.
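The full lifecycle above can be sketched in Python, again with hypothetical names (`run_session`, `count_events_by_kind`) standing in for whatever your agent actually records. "Archive" here is literally just moving the file, and cross-session aggregation is a loop over the archived files:

```python
import json
import shutil
import sqlite3
from pathlib import Path

LIVE = Path("live_sessions")        # one .db file per active session
ARCHIVE = Path("archived_sessions") # completed sessions end up here
LIVE.mkdir(exist_ok=True)
ARCHIVE.mkdir(exist_ok=True)

def run_session(session_id: str, events) -> None:
    """Write one session's events to its own file, then archive the file."""
    db_path = LIVE / f"{session_id}.db"
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events ("
        " id INTEGER PRIMARY KEY,"
        " kind TEXT NOT NULL,"    # 'decision', 'tool_call', 'result', 'output'
        " payload TEXT NOT NULL)" # JSON blob
    )
    conn.executemany(
        "INSERT INTO events (kind, payload) VALUES (?, ?)",
        [(kind, json.dumps(payload)) for kind, payload in events],
    )
    conn.commit()
    conn.close()
    # Archiving a session is a single file move - no export step needed.
    shutil.move(str(db_path), str(ARCHIVE / db_path.name))

def count_events_by_kind() -> dict:
    """Simple cross-session aggregation: read each archived file in turn."""
    totals: dict = {}
    for db_file in sorted(ARCHIVE.glob("*.db")):
        conn = sqlite3.connect(db_file)
        for kind, n in conn.execute(
            "SELECT kind, COUNT(*) FROM events GROUP BY kind"
        ):
            totals[kind] = totals.get(kind, 0) + n
        conn.close()
    return totals

run_session("s1", [("decision", {"pick": "plan_a"}), ("tool_call", {"tool": "search"})])
run_session("s2", [("tool_call", {"tool": "fetch"})])
print(sorted(count_events_by_kind().items()))  # [('decision', 1), ('tool_call', 2)]
```

The aggregation step is deliberately naive; if it ever becomes a bottleneck, that is the moment to reach for a bigger database, not before.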
This pattern eliminates an entire category of infrastructure problems. No database server to maintain. No connection strings to manage. No scaling decisions to make until you actually have a scaling problem.
The best database strategy is the one that lets you focus on building the agent instead of managing infrastructure.
Fazm is an open source macOS AI agent, available on GitHub.