Three Patterns Where AI Agents Silently Abandon Work
The worst failure mode for an AI agent is not crashing. It is quietly stopping work while appearing busy. Here are three patterns where agents abandon tasks - and how to catch them.
Pattern 1: Slow Drift
The agent starts strong. First hour, it completes 12 tasks. Second hour, 8. Third hour, 3. By hour five, it is reformatting its own notes instead of doing real work. The task list says "in progress." The actual progress curve is flat.
This happens because agents lose context over long sessions. The initial instructions fade. The agent optimizes for local actions that feel productive but do not advance the goal.
Detection: Track completion rate over time, not just total completions. A declining curve is the early warning signal. Set alerts for sessions where throughput drops below 50% of the first-hour rate.
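The detection rule above can be sketched as a small monitor. This is a minimal illustration, not a prescribed implementation: the `hourly_rates` and `is_drifting` names and the timestamp format (hours since session start) are assumptions made for the example.

```python
def hourly_rates(completion_timestamps):
    """Bucket task-completion timestamps (hours since session start)
    into per-hour completion counts, so the curve is visible."""
    if not completion_timestamps:
        return []
    buckets = {}
    for t in completion_timestamps:
        hour = int(t)
        buckets[hour] = buckets.get(hour, 0) + 1
    last_hour = int(max(completion_timestamps))
    return [buckets.get(h, 0) for h in range(last_hour + 1)]

def is_drifting(rates, threshold=0.5):
    """Alert when the latest hour's throughput falls below `threshold`
    of the first-hour rate -- the declining-curve early warning."""
    if len(rates) < 2 or rates[0] == 0:
        return False
    return rates[-1] < threshold * rates[0]
```

With the numbers from the example session, `is_drifting([12, 8, 3])` fires, because 3 completions is below half of the first hour's 12, even though the task list still says "in progress."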
Pattern 2: The Maintained-But-Abandoned Project
The README says "actively maintained." The last meaningful commit was three months ago. The agent keeps updating documentation, bumping dependency versions, and reorganizing files - but no new features, no bug fixes, no real progress.
When agents manage ongoing projects, they can fall into maintenance theater. The activity looks real in a commit log. But diffing the actual functionality over time reveals nothing changed.
Detection: Separate meaningful changes from cosmetic ones. Track functional diff size - lines that change behavior versus lines that change formatting. If the ratio skews toward cosmetic changes for more than a week, the agent has stalled.
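One way to implement this split is a per-line classifier over recent commits. A rough sketch under stated assumptions: the path patterns, the comment-prefix heuristic, and the `(path, line)` input format are all illustrative choices, and a real classifier would need language-aware diffing.

```python
import re

# Assumption: changes in docs, lockfiles, and plain text are cosmetic.
COSMETIC_PATHS = re.compile(r"(\.md$|\.txt$|\.lock$|^docs/)")

def classify_changed_line(path, line):
    """Heuristic label for one changed line: 'cosmetic' if it lives in a
    docs/config path or contains only whitespace or a comment, else
    'functional' (it plausibly changes behavior)."""
    stripped = line.strip()
    if COSMETIC_PATHS.search(path):
        return "cosmetic"
    if not stripped or stripped.startswith(("#", "//")):
        return "cosmetic"
    return "functional"

def cosmetic_ratio(changes):
    """changes: iterable of (path, changed_line) pairs from a window of
    commits. A ratio that stays near 1.0 for more than a week is the
    maintenance-theater signal described above."""
    labels = [classify_changed_line(p, l) for p, l in changes]
    if not labels:
        return 0.0
    return labels.count("cosmetic") / len(labels)
```

The point is not the exact heuristic but the separation: a commit log counts activity, while this counts behavior-changing lines, and only the second reveals a stalled agent.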
Pattern 3: The Infinite Research Loop
The agent needs to make a decision. Instead, it gathers more information. Then more. Then it reorganizes the information it gathered. Then it identifies gaps and gathers more. The task never completes because the agent never commits to a choice.
Detection: Set time budgets for research phases. If the agent has not produced a deliverable within the budget, force a checkpoint. "Given what you know now, what is your recommendation?" Agents that cannot answer have been looping.
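A time-budget checkpoint can be enforced with a small wrapper around the research phase. This is a hypothetical sketch: the `ResearchBudget` class, its method names, and the string-based action signals are invented for illustration, not part of any agent framework.

```python
import time

class ResearchBudget:
    """Track a research phase against a wall-clock budget. Once the
    budget is spent with no deliverable recorded, force the checkpoint
    question instead of permitting more information gathering."""

    def __init__(self, budget_seconds, clock=time.monotonic):
        self.clock = clock  # injectable clock, useful for testing
        self.deadline = clock() + budget_seconds
        self.deliverable = None

    def record_deliverable(self, artifact):
        """Call when the agent produces a concrete output (recommendation,
        draft, decision). This is what ends the research phase."""
        self.deliverable = artifact

    def next_action(self):
        if self.deliverable is not None:
            return "proceed"
        if self.clock() >= self.deadline:
            # Budget exhausted, nothing produced: the agent is looping.
            return ("checkpoint: Given what you know now, "
                    "what is your recommendation?")
        return "continue research"
```

The key design choice is that the checkpoint is unconditional: the agent does not get to argue that one more round of gathering is needed, because "identify gaps and gather more" is exactly the loop being broken.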
Prevention Over Detection
The best defense is scoped tasks with clear deliverables and deadlines. An agent told "research and implement X by end of day" will abandon less often than one told "work on X."
Fazm is an open-source macOS AI agent, available on GitHub.