Your AI Agent Needs Better Taste, Not More Autonomy
The instinct when an AI agent produces mediocre output is to give it more autonomy - more tools, more freedom, more decision-making power. But the problem is rarely capability. It is taste.
What Taste Means for Agents
Taste is the ability to distinguish between technically correct and actually good. An agent can write code that compiles, but does it write code that is readable? An agent can draft an email that conveys information, but does it draft one that sounds human? An agent can automate a workflow, but does it automate it in a way that is maintainable?
These are judgment calls. They require knowing not just what works, but what works well. And that is exactly where most agents fall short.
Why Abstract Guidelines Fail
The natural approach is to write guidelines. "Write clean code." "Keep emails concise." "Prefer simple solutions." These directives sound clear to a human but give an agent almost nothing to work with. What counts as clean? How concise is concise? Simple compared to what?
Abstract guidelines create the illusion of quality control without actually controlling quality. The agent pattern-matches on the vague instruction and produces output that superficially complies while missing the point entirely.
Concrete Examples Work Better
Instead of telling the agent "write good commit messages," show it ten examples of good commit messages and ten examples of bad ones. The agent extracts the pattern from concrete instances far more reliably than from abstract descriptions.
This is how taste gets encoded - not through rules but through examples. A style guide that says "be conversational" is less useful than five paragraphs that demonstrate conversational writing. A coding standard that says "handle errors gracefully" is less useful than three code snippets that show graceful error handling.
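As a minimal sketch of this idea, the snippet below turns paired good/bad examples into a few-shot prompt section. The commit messages and the `build_few_shot_prompt` helper are illustrative assumptions, not from any particular agent framework:

```python
# Hypothetical example pairs -- the point is the structure, not the content.
GOOD_COMMITS = [
    "fix: prevent race condition when two workers claim the same job",
    "refactor: extract retry logic into a shared backoff helper",
]

BAD_COMMITS = [
    "fixed stuff",
    "wip do not merge",
]

def build_few_shot_prompt(good, bad, task):
    """Encode taste as contrasting examples instead of an abstract rule."""
    lines = ["Good commit messages:"]
    lines += [f"  + {g}" for g in good]
    lines.append("Bad commit messages:")
    lines += [f"  - {b}" for b in bad]
    lines.append(f"Task: {task}")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    GOOD_COMMITS, BAD_COMMITS,
    "Write a commit message for the attached diff.",
)
print(prompt)
```

The contrast is what does the work: the agent infers the boundary between the two sets, which a one-line rule like "write good commit messages" never specifies.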
The Practical Approach
Build a library of examples for every quality dimension you care about. Good and bad. Annotated with what makes each one good or bad. Feed these examples into the agent's context when it is working on related tasks.
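One possible shape for such a library, sketched below: each entry records a quality dimension, whether it is a good or bad example, and an annotation explaining why. The `Example` class, the sample entries, and `context_for` are all hypothetical, shown only to make the structure concrete:

```python
from dataclasses import dataclass

@dataclass
class Example:
    dimension: str  # quality dimension, e.g. "error-handling"
    text: str       # the example itself
    good: bool      # good or bad instance
    note: str       # annotation: what makes it good or bad

# A tiny illustrative library; a real one would cover every
# quality dimension you care about, good and bad.
LIBRARY = [
    Example("error-handling", "except Exception: pass", False,
            "swallows every error silently"),
    Example("error-handling",
            "except TimeoutError as e: log_and_retry(e)", True,
            "catches a specific error and responds to it"),
]

def context_for(dimension, library):
    """Render annotated examples for one dimension into agent context."""
    parts = []
    for ex in (e for e in library if e.dimension == dimension):
        label = "GOOD" if ex.good else "BAD"
        parts.append(f"[{label}] {ex.text}\n  why: {ex.note}")
    return "\n\n".join(parts)

print(context_for("error-handling", LIBRARY))
```

Filtering by dimension keeps the context focused: the agent only sees reference points relevant to the task at hand, annotations included.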
This is more work than writing a one-line guideline. It is also dramatically more effective. Your agent does not need more freedom to make decisions. It needs better reference points for what good decisions look like.
Autonomy without taste produces confident mediocrity at scale.
- Code Quality via CLAUDE.md Standards and Pre-Commit Hooks
- AI Agent Code Looks Plausible
- Context Engineering - CLAUDE.md Is the Most Important File
Fazm is an open-source macOS AI agent, available on GitHub.