What It Means to Have a Human
An agent does not know what it does not know. It cannot catch errors in its own blind spots because, by definition, it cannot see them. That is what the human is for.
The Blind Spot Problem
When an agent makes a mistake it can detect - a failed API call, a syntax error, a timeout - it handles it. Retry, fix, adapt. But when an agent makes a mistake it cannot detect - using outdated information, misunderstanding the intent, producing output that is technically correct but contextually wrong - it continues with full confidence.
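The asymmetry can be sketched in a few lines. This is a toy illustration, not any particular agent framework: `FlakyCall`, `run_with_retries`, and `stale_lookup` are all hypothetical names.

```python
class FlakyCall:
    """Stand-in for an external API that fails loudly twice, then succeeds."""
    def __init__(self):
        self.attempts = 0

    def __call__(self):
        self.attempts += 1
        if self.attempts < 3:
            raise TimeoutError("request timed out")
        return "result"

def run_with_retries(task, max_retries=3):
    # Detectable errors surface as exceptions, so the agent can retry them.
    last_err = None
    for _ in range(max_retries):
        try:
            return task()
        except TimeoutError as err:
            last_err = err
    raise last_err

def stale_lookup():
    # A silent failure: outdated output returns normally, no exception
    # is raised, so no retry or repair logic ever fires.
    return "population figure from a 2019 snapshot"
```

The retry loop recovers from the timeout because the failure announces itself; `stale_lookup` sails through the same machinery untouched, which is exactly the class of error the human is there to catch.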
The human catches the second kind. Not because humans are smarter, but because they have different blind spots.
Not Supervision - Complementary Perception
Calling this "human supervision" misses the point. A supervisor checks work against known standards. What humans provide is orthogonal perception. They notice when something feels off. They catch tone issues an agent cannot hear. They recognize when a technically valid solution violates an unwritten social norm.
This is not about trust. It is about coverage. Two different perception systems catch more errors than one system checking itself.
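The coverage argument can be made concrete with a toy probability model. Assume the agent misses a fraction of errors and the human independently misses a different fraction; the numbers below are illustrative, and real blind spots are only partially independent.

```python
agent_miss = 0.10   # fraction of errors the agent fails to detect (assumed)
human_miss = 0.20   # fraction the human fails to detect (assumed)

# If the two reviewers' blind spots are independent, an error slips
# through only when both miss it.
combined_miss = agent_miss * human_miss

coverage_agent_alone = 1 - agent_miss      # 0.90
coverage_combined = 1 - combined_miss      # 0.98
```

The gain comes precisely from the blind spots being different: if the human missed the same errors the agent does, the product would collapse toward the larger miss rate and the second reviewer would add nothing.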
What Humans Are Actually Good At
In agent workflows, humans excel at:
- Detecting context drift - noticing when the agent's understanding has gone stale
- Applying unwritten rules - social norms, company culture, relationship dynamics
- Recognizing novelty - knowing when a situation is genuinely new versus superficially similar to a known pattern
- Setting priorities - deciding what matters when everything seems important
The best agent systems are designed so humans do these things and agents do everything else.
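That division of labor can be expressed as a routing rule. A minimal sketch, assuming a `Task` record with hypothetical flags; real systems would infer these signals rather than receive them as booleans.

```python
from dataclasses import dataclass

@dataclass
class Task:
    description: str
    is_novel: bool        # no close match to a known pattern
    touches_norms: bool   # social norms or relationship dynamics involved

def route(task: Task) -> str:
    """Toy triage: judgment calls go to the human, the rest to the agent."""
    if task.is_novel or task.touches_norms:
        return "human"
    return "agent"
```

For example, `route(Task("draft apology email", is_novel=False, touches_norms=True))` escalates to the human, while a routine data-cleanup task stays with the agent.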
Fazm is an open-source macOS AI agent, available on GitHub.