Steal Prompt Structure Patterns, Not Content
When you find a prompt that works well, the instinct is to copy it. But copying the words misses the point. What makes a prompt effective is its structure - how it organizes instructions, what categories of constraints it includes, and how it decomposes complex tasks.
Structure Over Words
A good system prompt for an AI agent typically has a specific structural pattern: identity and scope definition, then behavioral constraints, then output format requirements, then error handling instructions. The exact words matter less than having each of these sections.
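That four-section pattern can be made concrete as a small template builder. This is a hypothetical sketch, not any framework's API; the section names and helper are illustrative.

```python
# Hypothetical sketch: assemble a system prompt from the four
# structural sections named above. The section names and example
# content are illustrative placeholders, not a real framework's API.

SECTION_ORDER = ["identity", "constraints", "output_format", "error_handling"]

def build_system_prompt(sections: dict[str, str]) -> str:
    """Join the sections in a fixed structural order, failing loudly
    if any required section is missing."""
    missing = [name for name in SECTION_ORDER if name not in sections]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    parts = [
        f"## {name.replace('_', ' ').title()}\n{sections[name].strip()}"
        for name in SECTION_ORDER
    ]
    return "\n\n".join(parts)

prompt = build_system_prompt({
    "identity": "You are a file-organizing agent scoped to the project directory.",
    "constraints": "Do not modify files outside the project directory.",
    "output_format": 'Reply with a JSON object: {"action": ..., "path": ...}.',
    "error_handling": "If an action fails, report the error instead of retrying.",
})
```

The point of encoding the structure rather than the wording: swap in different section content for a different agent, and the skeleton — and the guarantee that no section is forgotten — stays the same.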
When you see a prompt that produces reliable agent behavior, study its structure. Does it define what the agent should NOT do before what it should do? Does it provide examples of correct and incorrect behavior? Does it set explicit boundaries around tool usage?
Patterns Worth Stealing
The constraint-first pattern: start with limitations before capabilities. "You must not modify files outside the project directory. You must not make network requests without user approval. Within these constraints, you may..."
The decomposition pattern: break complex tasks into numbered steps. "Step 1: Identify the target element. Step 2: Verify it is interactable. Step 3: Perform the action. Step 4: Verify the result."
The fallback pattern: define what to do when the primary approach fails. "If the element is not found via accessibility API, try screenshot analysis. If screenshot analysis fails, ask the user for guidance."
Why Content Copying Fails
Prompt content is context-dependent. A prompt optimized for one model, one task, and one set of tools does not transfer to a different setup. But the structural patterns - how to organize constraints, how to decompose tasks, how to handle failures - transfer across every context.
Build a library of structural patterns, not a library of prompts. Your prompts will be better for it.
Fazm is an open-source macOS AI agent, available on GitHub.