Stripping Personality from AI Agent Config for 7 Days - The Token Cost of Personality

Fazm Team · 2 min read

We ran an experiment: strip every personality instruction from our agent's configuration and run it for seven days. No "be friendly," no "respond with enthusiasm," no "use a conversational tone." Just raw, functional responses.

What We Removed

Our agent config had accumulated personality directives over months. "Be helpful and approachable." "Use casual language when appropriate." "Add brief acknowledgments before answering." "Show empathy when users express frustration." Each directive seemed harmless. Together, they added roughly 200 tokens to the system prompt and inflated every response by 15-30%.

Those extra tokens in every response do not seem like much until you multiply them across thousands of daily interactions. Personality is a luxury tax on every single interaction.
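To make the multiplication concrete, here is a back-of-envelope sketch. The interaction volume, average response length, and exact inflation rate are illustrative assumptions; the post only reports the roughly 200-token prompt overhead and 15-30% response inflation.

```python
def personality_overhead(interactions_per_day: int,
                         prompt_overhead_tokens: int,
                         avg_response_tokens: int,
                         inflation_rate: float) -> float:
    """Extra tokens per day attributable to personality directives."""
    # Every interaction pays the prompt bloat in full...
    prompt_cost = interactions_per_day * prompt_overhead_tokens
    # ...plus the proportional inflation of its response.
    response_cost = interactions_per_day * avg_response_tokens * inflation_rate
    return prompt_cost + response_cost

# Hypothetical volume: 5,000 daily interactions, 200-token prompt bloat,
# 300-token average responses inflated by 20%.
extra = personality_overhead(5000, 200, 300, 0.20)
print(f"{extra:,.0f} extra tokens per day")  # 1,300,000
```

At that volume the overhead crosses a million wasted tokens a day before anyone notices a single directive.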

The 7-Day Results

Token usage dropped measurably. Responses were shorter, more direct, and - this surprised us - often clearer. Without the filler phrases that personality directives encourage ("Great question! Let me help you with that..."), the agent got straight to the answer.

Task completion time improved because the agent spent fewer tokens on pleasantries and more on actual work. For cron-scheduled background tasks where no human sees the output, the personality tokens were pure waste.

When Personality Matters

This is not an argument to strip all personality from every agent. User-facing conversational agents benefit from a warm tone. But the cost should be intentional, not accidental.

The problem is that personality instructions accumulate. Each one is small. Nobody audits their total impact. Over time, your system prompt becomes a personality essay that inflates every interaction.

The Fix

Separate your agent configs by context. User-facing interactions get personality directives. Background tasks, cron jobs, and inter-agent communication get stripped-down functional configs. Audit your system prompts quarterly for personality bloat.
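One way to enforce the split is to select the system prompt by execution context, so background jobs never pay the personality tax. This is a hypothetical sketch, not Fazm's actual config mechanism; the prompt strings and context names are illustrative.

```python
# Base prompt shared by every context: purely functional.
FUNCTIONAL_PROMPT = "You are a task agent. Answer directly and concisely."

# Personality directives live in one place, so their cost is easy to audit.
PERSONALITY_DIRECTIVES = (
    "Be helpful and approachable. "
    "Use casual language when appropriate. "
    "Show empathy when users express frustration."
)

def system_prompt(context: str) -> str:
    """User-facing chat gets personality; cron jobs, background tasks,
    and inter-agent calls get the stripped-down functional config."""
    if context == "user_chat":
        return FUNCTIONAL_PROMPT + " " + PERSONALITY_DIRECTIVES
    return FUNCTIONAL_PROMPT

assert "empathy" not in system_prompt("cron")
```

Keeping the directives in a single named constant also makes the quarterly audit trivial: there is exactly one string to measure.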

Every token your agent spends on personality is a token it is not spending on the task. Make that tradeoff deliberately, not by default.

Fazm is an open-source macOS AI agent, available on GitHub.
