The Consensus Illusion - When Multiple AI Agents Work on the Same Codebase
Five agents on the same branch. No isolation. Each one thinks it is working on a stable codebase. None of them are.
This is the consensus illusion - the false assumption that because multiple agents can read the same files, they share a consistent view of the project.
Why Multi-Agent Consensus Fails
When multiple AI agents work in parallel on the same codebase, each agent reads the current state of files at the start of its task. But between that read and its eventual write, other agents may have changed the same files. The result is not a merge conflict in the git sense - it is a semantic conflict that no tool catches automatically.
Agent A refactors a function signature. Agent B adds a new call to the old signature. Both changes compile individually. Together, they break the build.
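To make the semantic conflict concrete, here is a minimal Python sketch (the function names are hypothetical). In a dynamically typed language the break surfaces at call time rather than at build time, but the mechanism is the same: each edit is valid against the code its author read, and only the merged result is broken.

```python
# State before either agent starts: send(msg) took one argument.

# Agent A's edit: refactor the signature to require a retry count.
def send(msg, retries):
    # Deliver `msg`, retrying up to `retries` times (illustrative stub).
    return f"sent {msg!r} after up to {retries} retries"

# Agent B's edit: add a call site written against the OLD one-argument
# signature. It was correct at the moment Agent B read the file.
def notify_user():
    return send("hello")

# Merged together, notify_user() is broken: send() now requires `retries`.
```

Calling `notify_user()` after the merge raises a `TypeError` that neither agent could see in isolation. A type checker run over the merged tree would catch this; neither agent's individual diff would.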
The Deeper Problem
The real issue is not technical - it is cognitive. Each agent builds a mental model of the codebase based on what it reads. That mental model includes assumptions about:
- What other parts of the code look like right now
- What conventions are being followed
- What the overall architecture intends
- Which patterns are in use vs. deprecated
When five agents each build their own mental model independently, you get five slightly different understandings of the same codebase. None of them is wrong individually, but together they are incompatible.
What Actually Works
The fix is surprisingly simple: stop trying to make agents agree. Instead:
- Use git worktrees - give each agent its own working copy so changes are isolated until merge time
- Let the human resolve conflicts - agents are bad at understanding each other's intent across branches
- Limit scope per agent - smaller, well-defined tasks reduce the surface area for conflicts
- Sequential review - merge one agent's work at a time, letting subsequent agents see the updated state
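The worktree setup above can be sketched in a few commands, assuming a fresh demo repository (the branch and directory names are illustrative):

```shell
# Create a throwaway repo to demonstrate the layout.
git init --quiet demo && cd demo
git -c user.name=demo -c user.email=demo@example.com \
    commit --allow-empty -m "initial commit" --quiet

# Give each agent its own working copy on its own branch.
# Changes stay isolated until a human merges them.
git worktree add -b agent/a ../agent-a
git worktree add -b agent/b ../agent-b

# List the checkouts: the main repo plus one directory per agent.
git worktree list
```

Each agent then runs inside its own directory (`../agent-a`, `../agent-b`). Merging `agent/a` first, then reviewing `agent/b` against the updated main branch, gives the sequential review described above.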
Consensus among AI agents is not a goal worth pursuing. Isolation and human-mediated integration are more reliable.
The Practical Takeaway
If you are running multiple agents on the same project, do not optimize for them to coordinate. Optimize for clean boundaries and fast conflict resolution. The human in the loop is not a bottleneck here - they are the integration layer.
Fazm is an open source macOS AI agent, available on GitHub.