Explicit Acceptance Criteria in CLAUDE.md to Stop Premature Victory
Claude Code has a habit of declaring victory before the job is actually done. The fix is embarrassingly simple: put explicit acceptance criteria in your CLAUDE.md file.
The Premature Victory Problem
Without clear criteria, Claude will implement a feature, see that it compiles, and announce it is done. But "it compiles" is not the same as "it works." The tests might not pass. The feature might break an existing workflow. The output file might not exist where it should.
This gets worse with parallel agents. If one agent declares victory prematurely and another agent depends on its output, you get cascading failures that are hard to debug.
What Acceptance Criteria Look Like
The CLAUDE.md rules that actually work are specific and verifiable:
```markdown
## Acceptance Criteria for All Changes

- All existing tests must pass after your changes
- New features must include at least one test
- The build must succeed with zero warnings
- If you create a new file, verify it exists and is not empty
- If you modify an API, verify the callers still compile
- Do not mark a task as complete until you have run the test suite
```
These are not aspirational guidelines. They are checkboxes that the agent can verify before claiming it is finished.
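Because the criteria are verifiable, they can also be mechanized. Here is a minimal sketch of a pre-completion gate an agent (or a CI hook) could run before marking a task done; the specific checks and the commented-out `make test` placeholder are assumptions, so substitute your project's real build and test commands:

```shell
#!/bin/sh
# Sketch of a "definition of done" gate. Each function enforces one
# acceptance criterion; any failure aborts with a NOT DONE message.
set -u

fail() { echo "NOT DONE: $1" >&2; exit 1; }

# Criterion: a newly created file must exist and be non-empty.
require_nonempty() { [ -s "$1" ] || fail "expected non-empty file: $1"; }

# Criterion: the test suite must pass (placeholder command).
# make test || fail "test suite failed"

# Demonstrate the file check on a throwaway artifact.
tmp=$(mktemp)
echo "build artifact" > "$tmp"
require_nonempty "$tmp"
rm -f "$tmp"

echo "all acceptance checks passed"
```

Running the gate as the final step of a task turns "I think it's done" into "the checks passed."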
Why This Works
Claude Code follows explicit instructions better than implicit expectations. When your CLAUDE.md says "run tests before declaring done," the agent runs tests. When it says nothing about tests, the agent optimistically assumes everything is fine.
The pattern extends beyond testing. You can add criteria for documentation, for commit message format, for file naming: anything you care about. The more specific you are, the less time you spend correcting the agent after the fact.
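As a sketch of how that might look, here is a hypothetical extension of the earlier CLAUDE.md block; the specific conventions (Conventional Commits, kebab-case) are illustrative assumptions, not requirements:

```markdown
## Additional Acceptance Criteria

- Public functions must have doc comments
- Commit messages must follow Conventional Commits (e.g. `fix: ...`, `feat: ...`)
- New source files must use kebab-case names
```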
The Convergence Effect
Once you add acceptance criteria, something interesting happens: all your parallel agents converge on the same quality bar. Instead of five agents with five different definitions of "done," you get five agents all verifying against the same checklist. That consistency is worth the five minutes it takes to write the criteria.
Fazm is an open-source macOS AI agent, available on GitHub.