Anchoring Bias in Multi-Agent Systems - When One Agent's Output Biases All the Others
If you run multiple AI agents on the same codebase, you have probably noticed something subtle and frustrating. When one agent sees another agent's partial output, it anchors to that approach - even if it is a bad one.
This is the classic anchoring bias from behavioral psychology, but playing out in silicon instead of neurons.
How It Happens
Say you have three agents working on a refactor. Agent A starts first and writes a rough draft of a new function. Agent B spins up, sees Agent A's file, and instead of evaluating the problem independently, it builds on top of Agent A's approach. Agent C does the same. Now all three agents have converged on whatever direction Agent A happened to pick - regardless of whether it was optimal.
The result? You get three agents producing variations of the same mediocre solution instead of three genuinely independent approaches.
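The failure mode above comes down to how each agent's context is assembled. A minimal sketch, in Python with illustrative names (build_prompt and TASK are hypothetical, not a real agent API): when later agents' prompts include earlier drafts, every agent after the first is conditioned on Agent A's direction, whereas isolated agents all start from the same clean task.

```python
# Illustrative sketch of anchored vs. isolated prompt assembly.
# build_prompt and TASK are hypothetical names, not a real agent API.

TASK = "Refactor parse_config() to support nested sections."

def build_prompt(task: str, shared_drafts: list[str]) -> str:
    """Assemble the context window an agent would see."""
    context = "\n\n".join(shared_drafts)
    return f"{context}\n\nTask: {task}" if context else f"Task: {task}"

# Anchored setup: each agent's prompt includes all earlier drafts.
drafts: list[str] = []
anchored_prompts: list[str] = []
for agent in ("A", "B", "C"):
    anchored_prompts.append(build_prompt(TASK, drafts))
    drafts.append(f"# Agent {agent}'s partial draft...")

# Isolated setup: every agent sees only the task itself.
isolated_prompts = [build_prompt(TASK, []) for _ in ("A", "B", "C")]
```

Agents B and C end up with Agent A's draft baked into their context, while the isolated prompts are identical and anchor-free.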
Why This Matters More Than You Think
In human teams, anchoring bias is well-documented. The first number thrown out in a negotiation sets the range. The first design mockup shapes all future iterations. But humans at least have the ability to consciously reject an anchor when prompted.
LLMs do not have that ability. They are pattern-completion machines. When they see existing code or partial solutions in their context window, they complete the pattern rather than questioning it. The anchor is not just influential - it all but determines the output.
Practical Mitigations
The fix is isolation. Each agent needs its own workspace where it cannot see what others are doing:
- Git worktrees: Give each agent its own branch and working directory
- Context isolation: Do not share intermediate outputs between agents until they have each produced a complete solution
- Merge at the end: Let agents work independently, then compare their outputs and pick the best approach
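The three rules above can be sketched as a small fan-out-and-merge orchestrator. This is a simulation, not a real implementation: run_agent and score_solution are hypothetical stand-ins for your actual agent runner and evaluation step, and the per-agent workdirs stand in for git worktrees.

```python
# Sketch of isolate-then-merge orchestration. run_agent() and
# score_solution() are hypothetical stubs standing in for a real
# agent runner and a real evaluation step (tests, review, benchmarks).

from concurrent.futures import ThreadPoolExecutor

def run_agent(agent_id: str, task: str, workdir: str) -> str:
    """Stand-in for invoking one agent in its own workspace (e.g. a git
    worktree at `workdir`). It receives only the task - never another
    agent's partial output."""
    return f"solution from {agent_id} for: {task}"

def score_solution(solution: str) -> int:
    """Stand-in for the comparison step at merge time."""
    return len(solution)  # placeholder metric

def fan_out_and_merge(task: str, agent_ids: list[str]) -> str:
    # Isolation: each agent gets its own working directory / branch.
    workdirs = {a: f".worktrees/{a}" for a in agent_ids}
    with ThreadPoolExecutor(max_workers=len(agent_ids)) as pool:
        futures = {a: pool.submit(run_agent, a, task, workdirs[a])
                   for a in agent_ids}
        # Merge at the end: only here do the outputs ever meet.
        solutions = {a: f.result() for a, f in futures.items()}
    return max(solutions.values(), key=score_solution)

best = fan_out_and_merge("refactor parse_config()", ["A", "B", "C"])
```

The key property is that no agent's prompt or workspace ever contains another agent's intermediate output; comparison happens once, after all solutions are complete.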
The biggest quality improvement in multi-agent workflows is not a better model. It is making sure your agents genuinely cannot see each other's work until you are ready to compare.
Fazm is an open-source macOS AI agent, available on GitHub.