Memory of a Goldfish - Solving Mid-Conversation Context Drift in AI Agents
You give your AI agent a clear task. Twenty messages later, it has completely forgotten what it was supposed to do. CLAUDE.md helps with initial context, but mid-conversation drift is a different beast.
Why Drift Happens
Large language models do not have persistent memory within a conversation the way humans do. As the context window fills up, earlier instructions get pushed further from the model's attention. Long tool outputs, error messages, and back-and-forth debugging all dilute the original task context.
The result is an agent that starts strong but gradually loses the plot. It begins fixing unrelated issues, changes files it should not touch, or forgets constraints you stated at the beginning.
Anchoring Techniques That Work
The most effective fix is periodic re-grounding. Every five to ten messages, restate the core objective and current constraints. It feels redundant to you, but the model genuinely benefits from the reminder.
Structure your reminders as a brief checklist - what we are doing, what we have done, what is left. This gives the model a clear map of progress and prevents it from re-doing completed work.
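The re-grounding cadence and checklist format above can be sketched in a few lines of Python. Everything here is hypothetical scaffolding, not any agent framework's actual API: the interval, the function names, and the checklist layout are all illustrative choices.

```python
REMIND_EVERY = 5  # re-ground every five messages (tune to taste)

def build_reminder(objective, done, remaining):
    """Format the core objective and progress as a brief checklist."""
    lines = [f"Objective: {objective}", "Done:"]
    lines += [f"- [x] {item}" for item in done]
    lines.append("Remaining:")
    lines += [f"- [ ] {item}" for item in remaining]
    return "\n".join(lines)

def maybe_reground(turn, message, objective, done, remaining):
    """Prepend the checklist to every REMIND_EVERY-th outgoing message."""
    if turn > 0 and turn % REMIND_EVERY == 0:
        return build_reminder(objective, done, remaining) + "\n\n" + message
    return message
```

The explicit "done" list matters as much as the objective: it is what stops the model from re-doing completed work.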
Task Tracking in Context
Instead of relying on the model to track its own progress, maintain a running task list in your messages. Mark items complete as the agent finishes them. This creates an explicit record that stays visible in the context window.
Some developers paste a "current state" block at the top of every major message. It adds a few tokens of overhead but dramatically reduces drift.
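One minimal way to maintain that "current state" block, assuming a simple dict that maps task names to a completion flag (the names and rendering are illustrative, not a standard format):

```python
def state_block(tasks):
    """Render a 'current state' block to paste at the top of a message.

    tasks: dict mapping task description -> True if complete.
    """
    done = [t for t, ok in tasks.items() if ok]
    todo = [t for t, ok in tasks.items() if not ok]
    return ("Current state:\n"
            + "".join(f"- [x] {t}\n" for t in done)
            + "".join(f"- [ ] {t}\n" for t in todo))

tasks = {"add retry logic": True, "update tests": False}
print(state_block(tasks))
```

A dozen lines like this cost a few tokens per message but keep progress explicitly visible in the context window instead of relying on the model's recall.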
When to Start a New Session
Sometimes the best fix is knowing when to stop. If a conversation has gone past a hundred messages or the context is heavily polluted with error traces, start fresh. Summarize what was accomplished, paste the summary into a new session, and continue from there.
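The handoff step can be templated so nothing is lost between sessions. A minimal sketch, with hypothetical field names; what matters is that the summary states the objective, the completed work, and the open issues, and explicitly tells the fresh session not to repeat finished work:

```python
def handoff_summary(objective, completed, open_issues):
    """Build a summary to paste as the first message of a new session."""
    return (
        "Continuing from a previous session.\n"
        f"Objective: {objective}\n"
        f"Completed: {', '.join(completed)}\n"
        f"Open issues: {', '.join(open_issues)}\n"
        "Do not re-do completed work."
    )
```

Starting fresh with a summary like this trades a long, polluted context for a short, clean one that contains only the signal.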
Fazm is an open source macOS AI agent, available on GitHub.