Position Sizing for Agents Without Human Override
In trading, position sizing prevents a single bad trade from wiping out your account. The concept is simple: never risk more than a fixed percentage of your capital on any one decision. AI agents operating autonomously need the same guardrail, and almost none of them have it.
The Missing Guardrail
An agent with access to your email can send a message to your entire contact list. An agent with file system access can delete your project directory. An agent with browser access can submit a form that commits you to a contract. Each of these is a single action with catastrophic potential.
Most agent frameworks have no concept of action magnitude. Sending one email and sending a thousand emails are the same operation repeated. Deleting a temp file and deleting a production database are the same tool call with different parameters. There is no built-in sense of "this action is too large."
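One way to introduce a sense of magnitude is to estimate the blast radius of a tool call before dispatching it. The sketch below is hypothetical: the tool names (`send_email`, `delete_files`) and the counting heuristics are illustrative assumptions, not any framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolCall:
    name: str
    args: dict = field(default_factory=dict)

def action_magnitude(call: ToolCall) -> int:
    """Rough count of external effects a tool call would produce.

    Hypothetical heuristics: recipients for email, paths for deletion,
    and a default of one effect for anything unrecognized.
    """
    if call.name == "send_email":
        return len(call.args.get("recipients", []))
    if call.name == "delete_files":
        return len(call.args.get("paths", []))
    return 1

# With this in place, one email and a thousand emails are no longer
# indistinguishable operations:
small = ToolCall("send_email", {"recipients": ["a@example.com"]})
large = ToolCall("send_email",
                 {"recipients": [f"user{i}@example.com" for i in range(1000)]})
```

Even a crude estimate like this gives the dispatcher something to compare against a limit, which is all position sizing requires.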
Implementing Position Limits
Position limits for agents work like trading limits. Define maximum blast radius per action: no more than five emails per batch without approval. No file deletions outside designated directories. No form submissions involving money without confirmation.
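The three limits above can be sketched as pre-dispatch checks. Everything here is an assumption for illustration: the limit values, the designated directory, and the money-related field names would all be policy decisions in a real deployment.

```python
from pathlib import Path

EMAIL_BATCH_LIMIT = 5  # assumed policy: max recipients without approval
DELETABLE_ROOT = Path("/tmp/agent-scratch").resolve()  # assumed safe zone
MONEY_FIELDS = {"amount", "price", "payment"}  # assumed indicators of cost

class LimitExceeded(Exception):
    """Raised when an action exceeds its position limit."""

def check_send_email(recipients: list) -> None:
    if len(recipients) > EMAIL_BATCH_LIMIT:
        raise LimitExceeded(
            f"{len(recipients)} recipients exceeds batch limit "
            f"of {EMAIL_BATCH_LIMIT}")

def check_delete_file(path: str) -> None:
    resolved = Path(path).resolve()
    if not resolved.is_relative_to(DELETABLE_ROOT):
        raise LimitExceeded(f"{resolved} is outside the designated directory")

def check_submit_form(fields: dict) -> None:
    if MONEY_FIELDS & fields.keys():
        raise LimitExceeded("form involves money; confirmation required")
```

The checks run before the tool call, so a violation stops the action entirely rather than partially applying it.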
These limits are not about distrusting the agent. They are about acknowledging that even good agents make mistakes, and the cost of a mistake should never exceed a recoverable threshold.
The Override Problem
The harder question is what happens when the agent hits a limit. If it stops and waits for approval, you lose the autonomy benefit. If it proceeds with a reduced action, the task might fail. If it skips the action entirely, work gets dropped.
The practical answer is tiered limits. Low-risk actions proceed freely. Medium-risk actions proceed with logging. High-risk actions require approval. The tiers are defined by reversibility: can you undo it? If yes, the agent can proceed. If no, a human confirms first.
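The tiering logic can be sketched as a small dispatcher. The tier names, the reversibility-based classifier, and the `approve` callback are illustrative assumptions; a real system would wire the approval step to an actual human channel.

```python
from enum import Enum
import logging

logging.basicConfig(level=logging.INFO)

class Risk(Enum):
    LOW = "low"        # reversible and routine: proceed freely
    MEDIUM = "medium"  # reversible but notable: proceed with logging
    HIGH = "high"      # irreversible: require human approval

def classify(reversible: bool, notable: bool = False) -> Risk:
    """Assumed rule from the text: irreversible means high risk."""
    if not reversible:
        return Risk.HIGH
    return Risk.MEDIUM if notable else Risk.LOW

def dispatch(action, risk: Risk, approve) -> bool:
    """Run `action` per its tier; return True if it actually ran.

    `approve` is a hypothetical callback that asks a human and
    returns True or False.
    """
    if risk is Risk.HIGH and not approve():
        return False  # approval denied: work is deferred, not dropped silently
    if risk is Risk.MEDIUM:
        logging.info("medium-risk action proceeding: %s",
                     getattr(action, "__name__", repr(action)))
    action()
    return True
```

Returning a boolean lets the caller distinguish "done" from "awaiting approval," which keeps blocked work visible instead of silently skipped.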
Fazm is an open-source macOS AI agent, available on GitHub.