Positive Prompting for CLAUDE.md: Tell the Model What to Do, Not What to Avoid

A thread on r/ClaudeCode nailed a pattern most people get wrong: they fill their CLAUDE.md with "do not" instructions. The model performs significantly better when you tell it what you want instead of listing everything you want to prevent. Here is how to write instructions that actually work.

1. Why Positive Instructions Outperform Negative Ones

When you write "do not use any third-party libraries," the model has to hold a constraint in working memory while generating every line of code. It knows what it cannot do, but it has no guidance on what it should do instead. This creates ambiguity at every decision point.

Compare that to "use only the standard library for HTTP operations." Now the model has a clear directive. It does not need to evaluate every library against a blacklist. It knows exactly which path to take.

This mirrors how effective human communication works. "Do not be late" is weaker than "arrive by 9am." The positive version is specific, actionable, and verifiable. The negative version is vague and leaves the definition of "late" open to interpretation.

The practical test: After writing an instruction, ask yourself: "If the model follows this perfectly, do I know exactly what it will do?" If the answer is no, you probably wrote a negative constraint instead of a positive directive.
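As an illustration (the rule and the file path here are invented for the example, not taken from a real project), the same requirement written both ways:

```markdown
<!-- Fails the test: "follows this perfectly" could mean almost anything -->
Do not hardcode configuration values.

<!-- Passes the test: you know exactly what the model will do -->
Read all configuration through the loader in src/config.ts. Add new
settings there with a typed default and an environment variable override.
```

The second version survives the test because a perfect execution of it is fully predictable.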

2. Rewriting Common Negative Instructions

Here are the most common negative patterns found in CLAUDE.md files and their positive equivalents:

| Negative (Weak) | Positive (Strong) |
|-----------------|-------------------|
| Do not create new files unless necessary | Always prefer editing existing files. Only create new files when explicitly required. |
| Do not use console.log for debugging | Use the project's logger (import from @/lib/logger) for all log output. |
| Do not write long functions | Keep functions under 30 lines. Extract helpers for distinct operations. |
| Do not commit secrets or .env files | Stage files by name (not git add -A). Check staged files for credentials before committing. |
| Do not skip tests | Run the full test suite after every change. Verify all tests pass before committing. |
| Do not make unnecessary changes | Only modify files directly related to the current task. Leave unrelated code unchanged. |

Notice the pattern: every positive version includes a specific action or a measurable threshold. "Under 30 lines" is verifiable. "Not long" is subjective.

3. Structuring Your CLAUDE.md for Clarity

The structure of your CLAUDE.md matters as much as the content. A wall of text gets partially ignored. A well-organized document with clear sections gets followed consistently. Here is a structure that works:

  • Project overview - One paragraph describing what the project is and what tech stack it uses. This grounds all subsequent instructions.
  • Critical rules - The 3-5 most important constraints, marked with CRITICAL or IMPORTANT. These are the ones that cause real damage if violated (force pushes, data deletion, credential exposure).
  • Development workflow - How to build, test, and deploy. Specific commands, not vague guidance.
  • Code style - Formatting, naming conventions, patterns to follow. Reference existing files as examples.
  • Testing requirements - What must be tested, how to run tests, what constitutes passing.
  • Context and conventions - Project-specific knowledge the model would not otherwise have (database schema patterns, API conventions, deployment environments).
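Sketched as a header outline (the project overview line is a made-up example; the section names follow the list above):

```markdown
# Acme Dashboard

Next.js dashboard for internal analytics. TypeScript, Postgres, deployed on Vercel.

## Critical Rules
## Development Workflow
## Code Style
## Testing Requirements
## Context and Conventions
```

Each section then holds only the instructions that belong to it, so the model can locate the relevant rule instead of scanning a wall of text.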

Use markdown headers, bold text for emphasis, and tables for structured information. The model picks up on these formatting cues, and bolded or UPPERCASED text tends to carry more weight in its decision-making.

4. When Negative Instructions Are Actually Correct

Positive framing is the default, but there are cases where a negative instruction is the right choice. Specifically, when the action is destructive and irreversible:

  • NEVER force push to main/master - There is no positive equivalent that is as clear. "Push carefully" does not carry the same weight.
  • NEVER run git reset --hard without user confirmation - Destructive operations need explicit prohibitions.
  • NEVER delete production databases - Some actions are so dangerous that you want the model to have an explicit stop signal.
  • NEVER skip hooks (--no-verify) - Bypassing safety checks should be explicitly forbidden.

The pattern: use NEVER for actions that cannot be undone. Use positive framing for everything else. This creates a natural hierarchy where the few negative rules stand out precisely because they are rare.

If your CLAUDE.md has more than 5-7 NEVER rules, that is a sign you are overusing the pattern and should convert most of them to positive instructions.
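The NEVER budget is easy to check mechanically. Here is a minimal sketch (the regex patterns and the 7-rule threshold are this article's heuristic, not an official tool):

```python
import re

# Matches the common negative phrasings discussed above.
NEGATIVE = re.compile(r"\b(?:NEVER|do not|don't)\b", re.IGNORECASE)

def count_negative_rules(text: str) -> int:
    """Count lines in a CLAUDE.md body that contain a negative instruction."""
    return sum(1 for line in text.splitlines() if NEGATIVE.search(line))

def audit(text: str, budget: int = 7) -> str:
    """Flag files that exceed the suggested NEVER budget."""
    n = count_negative_rules(text)
    if n > budget:
        return f"{n} negative rules: convert most of these to positive directives"
    return f"{n} negative rules: within budget"
```

Running `audit(open("CLAUDE.md").read())` during the monthly review (see section 6) makes the drift toward negative phrasing visible before it accumulates.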

5. Battle-Tested Instruction Patterns

Some instruction patterns work reliably across projects. These have been refined through real usage in production codebases, including projects like Fazm that rely heavily on CLAUDE.md for maintaining consistency across dozens of AI agent workflows:

  • Decision tables - When different situations need different responses, use a markdown table with "When X, do Y" rows. Models follow tabular instructions more consistently than prose.
  • Numbered step sequences - For multi-step workflows, use numbered lists. The model follows step order more reliably than it follows paragraph descriptions.
  • Concrete examples - "Format commit messages like: fix: resolve race condition in session refresh" beats "write good commit messages."
  • File path references - "Follow the pattern in src/components/cta-button.tsx" is more effective than describing the pattern abstractly.
  • Conditional overrides - "For Mediar projects, use matt@mediar.ai. For all other projects, use i@m13v.com." These handle project-specific variations cleanly.

# Example: Decision table for testing

| Change type | How to verify |
|-------------|---------------|
| UI/visual | Build, launch, screenshot with macOS automation |
| Logic/backend | Run programmatic test hooks, check logs |
| Mixed | Do both |

This table format eliminates ambiguity. The model does not have to interpret prose to figure out what kind of testing is appropriate - it matches the change type to the action.

6. Iterating on Instructions Over Time

Your CLAUDE.md should evolve. The first version is always incomplete. Here is a practical iteration process:

  • Week 1: Write the basics - project description, critical rules, build commands, code style preferences
  • Week 2-3: Every time the model makes a mistake, add an instruction that would have prevented it. Keep a running list of "the model did X when I wanted Y" incidents.
  • Week 4: Review all instructions. Remove any that are redundant. Combine similar rules. Convert negative instructions to positive ones where possible.
  • Monthly: Audit instruction effectiveness. If a rule keeps getting violated, the wording is not clear enough - rewrite it. If a rule is never relevant, remove it to reduce context noise.
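The running list can be as simple as a bullet file kept next to your CLAUDE.md (the file name and the entries here are invented examples of the "model did X when I wanted Y" format):

```markdown
<!-- incidents.md -->
- Model used console.log instead of the project logger
  → added "Use the logger from @/lib/logger for all log output"
- Model reformatted an unrelated file while fixing a bug
  → added "Only modify files directly related to the current task"
```

Each entry pairs the observed mistake with the instruction that now prevents it, which makes the Week 4 review mostly mechanical.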

Treat your CLAUDE.md like production code. It needs maintenance, refactoring, and testing. The "test" is whether the model follows the instructions consistently. If it does not, the instructions need work, not the model.

Version control your CLAUDE.md alongside your code. Review changes to it in pull requests. Team members should understand and agree on the rules, because these rules shape how AI contributes to the codebase.

7. Complete CLAUDE.md Examples

Here is a minimal but effective CLAUDE.md skeleton that uses positive framing throughout:

# Project Name

Next.js 14 app with TypeScript, Tailwind, Prisma, PostgreSQL.

## Critical Rules

- NEVER force push to main
- NEVER commit .env files

## Development

- Build: pnpm build
- Test: pnpm test
- Lint: pnpm lint

## Code Style

- Use named exports for components
- Keep functions under 30 lines
- Follow the pattern in src/components/example.tsx

## Workflow

1. Read relevant files before making changes
2. Make focused, minimal changes
3. Run tests after every change
4. Commit with conventional commit format

This is about 20 lines and covers the essentials. It will grow as you discover edge cases, but starting small and iterating is better than trying to write the perfect CLAUDE.md upfront.

For a more comprehensive example, browse open-source projects that publish their CLAUDE.md files. Look at how they balance specificity with brevity. The best ones read like a senior developer onboarding a new team member - clear expectations, specific examples, and a focus on what to do rather than what to avoid.

See a production CLAUDE.md in action

Fazm is an open-source macOS AI agent with a battle-tested CLAUDE.md that manages dozens of AI workflows. Explore it for real-world instruction patterns.
