
The Lossy Handoff Problem - When AI Agents Transfer Context via Git Diff

Fazm Team · 3 min read
handoff · context-loss · git-diff · ai-agent · knowledge-transfer · architecture

The Lossy Handoff Problem

An AI agent spends 50 turns exploring a problem. It considers three approaches, rejects two, hits a dead end, backtracks, and finally lands on a solution. The handoff to the human? A git diff.

All of that reasoning - the alternatives considered, the trade-offs weighed, the dead ends discovered - is compressed into "what changed." The why is gone.

Why Git Diffs Fail for Context Transfer

A git diff is a perfect record of what changed in the code. For straightforward changes - fix a typo, update a dependency, add a field - that is enough. The diff is the context.

But for architectural decisions, the diff is the tip of the iceberg. It shows you the chosen path without revealing:

  • What other approaches were considered and why they were rejected
  • What constraints shaped the decision
  • What assumptions the agent made about future requirements
  • What trade-offs were accepted and which were avoided
  • What the agent tried that did not work

The Information Asymmetry

When a human developer makes architectural decisions, the context lives in their head. They can explain it in a PR description, discuss it in a review, or answer questions about it later.

When an AI agent makes the same decisions, the context exists only in the conversation history - which is typically discarded after the session ends. The human inherits the code but not the reasoning.

Better Handoff Mechanisms

To close the context gap, agents should produce more than diffs:

  1. Decision logs - a brief record of each significant choice and the reasoning behind it
  2. Rejected alternatives - what was considered and specifically why it was not chosen
  3. Assumption documentation - what the agent assumed about requirements, constraints, and future direction
  4. Confidence markers - which parts of the solution the agent is confident about vs. which feel like guesses
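The four artifacts above can be captured in a small structured record. Here is a minimal sketch in Python; the `Decision` and `DecisionLog` names and fields are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    """One significant choice made during an agent session."""
    choice: str                                         # what was decided
    reasoning: str                                      # why it was decided
    rejected: list[str] = field(default_factory=list)   # alternatives and why not
    assumptions: list[str] = field(default_factory=list)
    confidence: str = "high"                            # "high" or "guess"

@dataclass
class DecisionLog:
    """All decisions from one session, handed off alongside the diff."""
    decisions: list[Decision]

    def summary(self) -> str:
        """Render the log as the short bullet list a human would read."""
        lines: list[str] = []
        for d in self.decisions:
            marker = "" if d.confidence == "high" else " (low confidence)"
            lines.append(f"- {d.choice}{marker}: {d.reasoning}")
            lines.extend(f"  rejected: {r}" for r in d.rejected)
            lines.extend(f"  assumes: {a}" for a in d.assumptions)
        return "\n".join(lines)
```

A log like this costs the agent a few sentences per decision but preserves exactly the context a diff drops: the rejected paths, the assumptions, and where the agent itself is unsure.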

The Practical Fix

The simplest approach: require agents to write a structured summary alongside every non-trivial change. Not a novel-length explanation - just the key decisions and their reasoning in a few bullet points. Store it in the commit message or a companion file.

A 200-word decision log saves hours of reverse-engineering intent from a diff.

Fazm is an open-source macOS AI agent, available on GitHub.
