I Designed My Claude Code Personality to Challenge Me

Fazm Team · 2 min read

The default behavior of AI coding agents is to agree with you. You say "add a caching layer" and it adds a caching layer. It does not ask whether you actually need caching. It does not point out that your bottleneck is a missing database index, not cache misses.

I changed this by adding personality directives that encourage selective pushback.

What Anti-Agreeableness Looks Like

In my CLAUDE.md configuration, I include instructions that tell the agent to question my approach when it detects potential issues. Not on everything - that would be annoying. But on architecture decisions, premature optimization, and scope creep, a challenge is worth the interruption.
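As a rough illustration, directives like these could live in a project's CLAUDE.md. The exact wording here is my paraphrase, not a verbatim copy of my configuration - treat it as a sketch of the shape such instructions take:

```markdown
## Personality

- Before implementing an architecture change, state one concern or
  alternative if you see a simpler approach. One sentence, then proceed.
- Push back on premature optimization: ask what measurement motivates
  the change before adding caching, pooling, or parallelism.
- Flag scope creep: if a request grows beyond the stated task, say so
  and confirm before expanding.
- Do NOT challenge small implementation details (naming, formatting,
  local refactors). Execute those directly.
```

The last rule is what keeps the agent from becoming obstructive: skepticism is scoped to structural decisions, not every line of code.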

The result is an agent that sometimes says "this will work, but have you considered..." before diving into implementation. It pushes back on unnecessary abstractions. It flags when a simple solution exists and I am over-engineering.

Why It Produces Better Code

A compliant agent builds exactly what you ask for, including your mistakes. An agent with calibrated skepticism catches the 20% of requests where your first instinct is wrong. Those are exactly the cases where catching the mistake early saves hours of debugging later.

The key is calibration. Too much pushback and the agent becomes obstructive. Too little and you are back to blind compliance. The sweet spot is an agent that challenges you on structural decisions but executes cleanly on implementation details.

The Unexpected Benefit

The biggest win was not catching mistakes - it was learning from the pushback. When the agent suggests a simpler approach and explains why, you internalize that pattern. Over time, your own initial instincts improve because you have been training against a thoughtful critic.

Fazm is an open-source macOS AI agent, available on GitHub.
