AI Tickets Need Way More Context Than Human Tickets
Writing Tickets for AI Is Different
The biggest surprise when delegating coding tasks to AI agents isn't that they make mistakes - it's that they make exactly the mistakes you'd expect from someone who reads your ticket completely literally. Human developers fill gaps with context, experience, and common sense. AI agents fill gaps with assumptions that look reasonable but are often wrong.
The Inference Gap
A human developer reading "add error handling to the payment flow" knows to check for network errors, validation failures, duplicate charges, and edge cases specific to your payment provider. They've seen payment systems before. They infer scope from experience.
An AI agent reading the same ticket might add try-catch blocks around the obvious happy path and call it done. It did exactly what you asked - added error handling. It just didn't infer the depth you expected.
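The gap can be sketched in code. This is an illustrative sketch only: the client, function names, and idempotency-key parameter are hypothetical, not a real payment provider's API.

```python
# Sketch of the inference gap with a hypothetical payment client.
# All names (FakeClient, charge, idempotency_key) are illustrative.

class FakeClient:
    """Stub provider: the first network call times out, the retry succeeds."""
    def __init__(self):
        self.calls = 0

    def charge(self, amount, card, idempotency_key=None):
        self.calls += 1
        if self.calls == 1:
            raise TimeoutError("simulated network failure")
        return {"status": "ok", "amount": amount}

def charge_literal(client, amount, card):
    """What a literal reading of 'add error handling' often produces:
    one try/except around the happy path that swallows every failure."""
    try:
        return client.charge(amount, card)
    except Exception:
        return None

def charge_expected(client, amount, card, idempotency_key):
    """The depth a human would infer: validate inputs, retry network
    errors, and send an idempotency key to avoid duplicate charges."""
    if amount <= 0:
        raise ValueError("amount must be positive")
    for attempt in range(2):
        try:
            return client.charge(amount, card, idempotency_key=idempotency_key)
        except TimeoutError:
            if attempt == 1:
                raise  # give up after one retry

print(charge_literal(FakeClient(), 25, "card-123"))        # None: the timeout was silently swallowed
print(charge_expected(FakeClient(), 25, "card-123", "k1")) # {'status': 'ok', 'amount': 25}
```

Both versions "add error handling," and only one of them is what you meant. The ticket has to carry that difference, because the agent won't infer it.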
What Good AI Tickets Look Like
Effective AI tickets include context that humans take for granted. Specify which files to modify. List the error cases to handle. Reference similar patterns elsewhere in the codebase. Describe what "done" looks like with concrete acceptance criteria.
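As a sketch, a ticket following those guidelines might look like this. Every file path, pattern reference, and case below is hypothetical, standing in for details from your own codebase:

```markdown
## Add error handling to the payment flow

Files: src/payments/charge.ts (modify), src/payments/errors.ts (new)

Handle these cases:
- Network timeouts: retry once with the same idempotency key
- Validation failures: reject non-positive amounts before calling the provider
- Duplicate charges: pass an idempotency key on every provider call

Follow the retry pattern in src/orders/retry.ts.

Done means: each case above has a test in src/payments/charge.test.ts
```

Note that each line answers a question the agent would otherwise guess at: where to work, what to handle, what to imitate, and when to stop.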
The extra upfront effort pays off immediately. A well-specified ticket takes an AI agent from "plausible but incomplete" to "actually correct" on the first attempt, saving you the review-reject-retry cycle.
Codebase Context Is Everything
The most effective approach is giving AI agents automatic access to codebase context - existing patterns, conventions, related code. When an agent can see how error handling works elsewhere in the codebase, it follows those patterns. Without that context, it invents its own approach, which may or may not match what you want.
This is why tools that embed codebase awareness into the agent workflow produce dramatically better results than copy-pasting code into a chat window.
Fazm is an open-source macOS AI agent, available on GitHub.