How AI Agents Handle Ambiguous Instructions
Ask, Guess, or Refuse?
Every AI agent encounters ambiguous instructions. "Clean up the code." "Make this look better." "Handle the edge cases." These are real tasks that real users give to agents, and they are all underspecified.
The agent has three options: ask for clarification, make its best guess, or refuse to proceed. Each has tradeoffs.
When to Ask
Asking for clarification is the safest option but the most annoying. If the agent asks about every ambiguity, it becomes a chatbot that cannot do anything without hand-holding.
Ask when:
- The action is irreversible. Deleting files, pushing to production, sending emails - anything that cannot be undone deserves a clarification step.
- Multiple interpretations lead to very different outcomes. If "clean up the code" could mean reformatting or refactoring, the difference matters.
- The cost of guessing wrong is high. When the wrong guess wastes hours of work or breaks something important.
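The three criteria above can be pictured as a simple predicate. The `Action` record, its fields, and the cost threshold below are all illustrative assumptions, not part of any real agent framework:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical fields a real agent would derive from task analysis.
    irreversible: bool       # e.g. deletes files, pushes to prod, sends email
    interpretations: int     # plausible readings with materially different outcomes
    wrong_guess_cost: float  # estimated hours lost if the guess is wrong

def should_ask(action: Action, cost_threshold: float = 2.0) -> bool:
    """Ask for clarification when any of the three conditions holds."""
    return (
        action.irreversible
        or action.interpretations > 1
        or action.wrong_guess_cost >= cost_threshold
    )

# A reversible, unambiguous, cheap action can proceed without asking.
print(should_ask(Action(False, 1, 0.5)))  # False
print(should_ask(Action(True, 1, 0.5)))   # True: irreversible
```

Any one condition is enough to trigger a question; the point is that asking is the exception path, not the default.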
When to Guess
Guessing is what makes an agent useful. The ability to interpret intent and act on incomplete information is what separates a helpful agent from a keyword matcher.
Guess when:
- The action is reversible. If the agent can undo its work, guessing is low-risk.
- There is a clear default. If 90% of users mean the same thing by "clean up," just do that thing.
- The agent has context from previous interactions. Memory and preference tracking let the agent make informed guesses based on what the user has chosen before.
When to Refuse
Refusing is underused. Agents should refuse more often than they do.
Refuse when:
- The instruction is contradictory. "Make it faster and add more validation" might be impossible to satisfy simultaneously.
- The agent lacks the required tools or permissions. Better to say "I cannot do this" than to pretend.
- The risk is unacceptable. Some tasks should not be attempted without explicit, detailed instructions.
The Right Default
The best agents default to guessing with transparency. They act on their best interpretation, explain what they assumed, and make it easy to course-correct. That is the sweet spot between helpfulness and safety.
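The transparent-guess default amounts to a three-step loop: act on the most likely interpretation, state the assumption, and keep an undo handle ready. Everything in this sketch (the log, the callbacks, the report wording) is illustrative:

```python
from typing import Callable

def act_with_transparency(
    request: str,
    interpretation: str,
    do: Callable[[], None],
    undo: Callable[[], None],
) -> tuple[str, Callable[[], None]]:
    """Act on the best guess, report the assumption, return an undo handle."""
    do()
    report = (f"Done. I interpreted {request!r} as {interpretation!r}; "
              "say 'undo' to revert.")
    return report, undo

log: list[str] = []
report, undo = act_with_transparency(
    "clean up the code", "reformat with the project style",
    do=lambda: log.append("reformatted"),
    undo=lambda: log.pop(),
)
print(report)
undo()
print(log)  # []: the guess was cheap to reverse
```

Because the assumption is surfaced and the undo is cheap, a wrong guess costs one correction rather than hours of rework.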
Fazm is an open source macOS AI agent, available on GitHub.