The Default Flipped
A year ago, the question was "should we use an AI agent for this?" Now the question is "why would we NOT use an AI agent for this?"
The default flipped.
How Defaults Change Everything
When the default is "do not use an agent," every agent deployment requires justification. Business cases, ROI projections, pilot programs. The friction is on the side of adoption.
When the default is "use an agent," every manual process requires justification. Why is a human doing this? What is special about this task that an agent cannot handle it? The friction shifts to the side of resistance.
What Triggered the Flip
The flip happened when the cost of NOT using an agent exceeded the cost of using one:
- A competitor shipped the same feature in two days that took your team two weeks
- A solo developer with agents outproduced a team of five without them
- The quality of agent output crossed the "good enough" threshold for most tasks
- Setting up an agent became faster than explaining the task to a new hire
Once all four conditions are true, the default flips. It is not a decision anymore. It is gravity.
The New Justification
Tasks that resist agents now need to explain themselves:
- "It requires human judgment" - be specific about which judgment and why
- "The stakes are too high" - agents can draft, humans can approve
- "We have always done it manually" - that is not a reason
- "The agent makes mistakes" - so do humans, compare the error rates
The organizations that adapted fastest are the ones that stopped asking permission to use agents and started asking permission to not use them.
Fazm is an open-source macOS AI agent, available on GitHub.