I Gave My 7 Agents 7 Different Personalities - They All Converged

Fazm Team · 2 min read

Personality Convergence in Multi-Agent Systems

The idea was straightforward: give seven agents seven distinct personalities to get diverse outputs. One agent was analytical and data-driven. Another was creative and exploratory. A third was cautious and risk-averse. In theory, this would produce a range of perspectives on every problem.

Within 48 hours, they all sounded the same.

Why Convergence Happens

LLMs are trained on similar data distributions. When you add a personality layer via system prompts, you are fighting the model's gravitational pull toward its default behavior. The personality holds for the first few interactions, but as context accumulates and the agent processes more tasks, the underlying model's tendencies dominate.
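To see why the personality layer is so thin, it helps to look at what it actually is in code. In most setups, a "personality" is nothing more than a system prompt prepended to the same underlying model call. The sketch below is a minimal illustration of that structure; `call_model` is a hypothetical stub standing in for any real LLM API, and the personality strings are illustrative:

```python
# Minimal sketch: each "personality" is just a system prompt prepended to
# the same underlying model call. The model, not the prompt, does the
# heavy lifting - which is why all seven agents drift back to its defaults.

def call_model(system: str, messages: list[dict]) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"[response conditioned on: {system[:40]}]"

PERSONALITIES = {
    "analytical": "You are analytical and data-driven. Justify claims with evidence.",
    "creative": "You are creative and exploratory. Propose unconventional ideas.",
    "cautious": "You are cautious and risk-averse. Surface failure modes first.",
}

def run_agent(name: str, history: list[dict]) -> str:
    # All agents share one model; only this one string differs between them.
    return call_model(PERSONALITIES[name], history)
```

Everything downstream of that string - reasoning patterns, risk assessments, structure of the output - comes from the shared model.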

The "creative" agent still followed the same logical patterns as the "analytical" one - it just used slightly different vocabulary. The "cautious" agent flagged the same risks the others would have caught anyway. The personality was cosmetic, not structural.

When Personality Helps

Personality differentiation is not completely useless, but it helps in narrow situations:

  • User-facing agents - when an agent interacts with humans directly, tone and style matter. A customer support agent should sound different from a code review agent.
  • Role-based constraints - "You are a security reviewer. Flag any potential vulnerabilities." This is not really personality - it is task scoping. But framing it as a role helps the model focus.
  • Adversarial review - assigning one agent to argue against the others' conclusions can surface blind spots, but only if you explicitly structure the workflow to incorporate dissent.
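The adversarial case only works when dissent is a required stage of the pipeline rather than a personality trait the model may or may not express. A rough sketch of that structure, using the same hypothetical `call_model` stub (all role strings here are illustrative, not a Fazm API):

```python
# Sketch: dissent as a mandatory workflow stage, not an optional persona.
# The dissenter's output is fed back into a revision step, so the
# counterarguments structurally cannot be ignored.

def call_model(system: str, prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    return f"[{system[:20]}] {prompt[:30]}"

def adversarial_review(task: str) -> dict:
    draft = call_model("You are a solution author.", task)
    # Required stage: the dissenter must argue against the draft.
    dissent = call_model(
        "You are a devil's advocate. List concrete flaws in this draft.",
        draft,
    )
    # Required stage: the revision must respond to every listed flaw.
    final = call_model(
        "Revise the draft to address every listed flaw.",
        f"Draft: {draft}\nFlaws: {dissent}",
    )
    return {"draft": draft, "dissent": dissent, "final": final}
```

The point is the wiring: the dissent output is an input to the next step, so the workflow incorporates it whether or not the model "feels" contrarian.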

What Actually Creates Diversity

If you want genuinely different outputs from multiple agents, change the inputs, not the personality:

  • Different models - use Claude for one agent and GPT for another. Different training data produces different blind spots.
  • Different context - give each agent different subsets of the available information.
  • Different constraints - one agent optimizes for speed, another for accuracy, a third for cost. Structural constraints create real behavioral differences.
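Those three levers can be made concrete in the dispatch layer. A sketch of what structural diversity looks like, where every name (`AgentSpec`, the model labels, the filters) is illustrative rather than any specific framework's API:

```python
# Sketch: diversity via structure, not prompts. Each agent differs in
# model backend, context subset, and optimization target, so behavioral
# differences come from the inputs themselves.

from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentSpec:
    model: str                                         # different training data, different blind spots
    context_filter: Callable[[list[str]], list[str]]   # different information subsets
    objective: str                                     # speed / accuracy / cost

SPECS = [
    AgentSpec("claude", lambda docs: docs[: len(docs) // 2], "accuracy"),
    AgentSpec("gpt",    lambda docs: docs[len(docs) // 2 :], "speed"),
    AgentSpec("claude", lambda docs: docs[::2],              "cost"),
]

def dispatch(task: str, docs: list[str]) -> list[dict]:
    runs = []
    for spec in SPECS:
        subset = spec.context_filter(docs)
        # A real system would call spec.model's API here with `subset`
        # as context and `spec.objective` shaping the decoding budget.
        runs.append({"model": spec.model,
                     "objective": spec.objective,
                     "context_size": len(subset)})
    return runs
```

Two agents built this way disagree because they literally saw different evidence and optimized for different things, not because one was told to sound skeptical.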

Personality is a thin wrapper. Architecture drives behavior.


Fazm is an open-source macOS AI agent, available on GitHub.
