The Real AI Coding Skill Is Problem Decomposition, Not Prompt Engineering

Matthew Diakonov

The people who get the most out of AI coding tools are not better at prompting. They are better at decomposing problems into pieces an agent can handle cleanly.

There is data to support this. A 2025 randomized controlled trial by METR found that when experienced developers used AI tools without changing how they approached problems, they actually took 19% longer on average. Google's own internal RCT, meanwhile, found developers completing tasks 21% faster. The difference is not the model. It is whether the developer knows how to scope work for the agent.

Prompt Engineering Is a Red Herring

The internet is full of prompt templates, magic words, and formatting tricks. Add "think step by step." Use XML tags. Start with a system prompt that says "you are an expert." These help at the margins, but they are not what separates developers who benefit from AI from those who feel like it slows them down.

The scarce skill - the one that compounds over time - is looking at a complex feature and breaking it into tasks that each fit within a single agent context window, have clear inputs and outputs, can be verified independently, and do not require understanding the entire codebase to implement correctly.

What Good Decomposition Looks Like

Here is a concrete comparison. The same developer, same feature, two approaches:

Approach A (poorly scoped):

Build me a user authentication system with OAuth, email verification,
rate limiting, and session management.

Approach B (well scoped):

Write a function that validates an OAuth callback from Google, extracts
the user profile fields [id, email, name, picture], and returns this type:

type OAuthUser = {
  provider_id: string
  email: string
  name: string
  avatar_url: string | null
}

The input will be a Google OAuth2 callback response. Return null if
required fields are missing. Here is an example response payload:
[paste example]

The second prompt produces working, testable code almost every time. The first produces something that looks complete but has subtle integration issues in every component - mismatched types, missing error cases, incorrect assumptions about session storage.
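
For illustration, here is roughly the shape of function a well-scoped prompt like Approach B tends to produce. Treat it as a sketch: the function name parseGoogleProfile and the specific runtime checks are illustrative choices, not part of the prompt.

// OAuthUser as specified in the prompt above.
type OAuthUser = {
  provider_id: string
  email: string
  name: string
  avatar_url: string | null
}

// Sketch: parse a Google OAuth2 profile payload into OAuthUser.
// Returns null when any required field is missing or the wrong type.
function parseGoogleProfile(payload: unknown): OAuthUser | null {
  if (typeof payload !== "object" || payload === null) return null
  const p = payload as Record<string, unknown>
  if (typeof p.id !== "string") return null
  if (typeof p.email !== "string") return null
  if (typeof p.name !== "string") return null
  return {
    provider_id: p.id,
    email: p.email,
    name: p.name,
    avatar_url: typeof p.picture === "string" ? p.picture : null,
  }
}

Because the prompt pinned down the type and the failure mode, checking this output against the spec takes minutes, not hours.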

Breaking Down a Real Feature

Here is how to decompose "add rate limiting to the API" into agent-sized pieces:

Step 1: Define the interface

Write a RateLimiter class with this interface:
- constructor(maxRequests: number, windowMs: number)
- isAllowed(key: string): boolean
- remaining(key: string): number
- reset(key: string): void

Use an in-memory Map. No external dependencies. Include unit tests.
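
For illustration, here is one shape the Step 1 output could take: a sliding-window limiter that stores request timestamps per key. This is a sketch, not a reference implementation; the unit tests the prompt asks for appear in the verification section below.

// Sliding-window rate limiter backed by an in-memory Map.
class RateLimiter {
  private hits = new Map<string, number[]>()

  constructor(private maxRequests: number, private windowMs: number) {}

  // Drop timestamps that have fallen outside the window.
  private prune(key: string): number[] {
    const cutoff = Date.now() - this.windowMs
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff)
    this.hits.set(key, recent)
    return recent
  }

  isAllowed(key: string): boolean {
    const recent = this.prune(key)
    if (recent.length >= this.maxRequests) return false
    recent.push(Date.now())
    return true
  }

  remaining(key: string): number {
    return Math.max(0, this.maxRequests - this.prune(key).length)
  }

  reset(key: string): void {
    this.hits.delete(key)
  }
}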

Step 2: Define the middleware

Write Express middleware that uses the RateLimiter class [paste the class].
- Per-IP limiting with key format: "ip:{remoteAddress}"
- Returns 429 with JSON body: { error: "rate_limit_exceeded", retry_after: N }
- Passes X-RateLimit-Remaining header on all responses
- Skips rate limiting for requests with valid API key in Authorization header
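
A sketch of what Step 2 might come back as, assuming the RateLimiter above. The windowMs parameter and the decision to treat any Authorization header as a valid API key are illustrative assumptions; real code would validate the key before skipping the limit.

import type { Request, Response, NextFunction } from "express"

// Per-IP rate limiting middleware built on the RateLimiter sketch above.
function rateLimit(limiter: RateLimiter, windowMs: number) {
  return (req: Request, res: Response, next: NextFunction) => {
    // Assumption: any Authorization header counts as a valid API key here.
    if (req.headers.authorization) return next()

    const key = `ip:${req.socket.remoteAddress}`
    const allowed = limiter.isAllowed(key)
    res.setHeader("X-RateLimit-Remaining", String(limiter.remaining(key)))
    if (!allowed) {
      res.status(429).json({
        error: "rate_limit_exceeded",
        retry_after: Math.ceil(windowMs / 1000),
      })
      return
    }
    next()
  }
}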

Step 3: Wire it up

Show me how to add this middleware [paste middleware] to the existing
router in src/routes/api.ts [paste the file]. It should apply to all
routes under /api/v1 except /api/v1/health.
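
The real src/routes/api.ts is not shown here, so this is a generic sketch of the wiring, with an assumed policy of 100 requests per minute. The design point is ordering: Express applies middleware only to routes registered after it, so the exempt route mounts first.

import express from "express"

const api = express.Router()
const limiter = new RateLimiter(100, 60_000) // assumed policy: 100 req/min

// Register the exempt route before the middleware so it is never throttled.
api.get("/health", (_req, res) => res.json({ status: "ok" }))
api.use(rateLimit(limiter, 60_000))
// ...the rest of the /api/v1 routes register below the middleware...

const app = express()
app.use("/api/v1", api)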

Three scoped tasks. Each one produces verifiable output. Each one can be tested before moving to the next. You never give the agent "add rate limiting to the API" because that requires it to understand your entire routing structure, your existing middleware pattern, your error response format, and your deployment constraints - context that does not reliably fit in a prompt.

The Verification Step That Most People Skip

Good decomposition includes defining how you will know the piece is correct before you write the prompt. This is not optional - it is the mechanism that makes AI coding reliable.

For each piece:

  1. What does a passing unit test look like?
  2. What edge case is most likely to fail?
  3. What is the exact interface with the next piece?

Writing down the answers to these questions before generating code forces you to understand the problem well enough to scope it properly. If you cannot answer them, you need to decompose further.
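
For the RateLimiter above, the answers might be: a passing test admits maxRequests calls and rejects the next one; the likeliest edge case is the request that lands exactly at the limit; and the interface with the middleware is isAllowed plus remaining. Here is that test as a sketch, using Node's built-in test runner:

import { test } from "node:test"
import assert from "node:assert/strict"
// Assumes the RateLimiter class from Step 1 is in scope or imported.

test("rejects the request that exceeds maxRequests within the window", () => {
  const limiter = new RateLimiter(2, 60_000)
  assert.equal(limiter.isAllowed("ip:203.0.113.7"), true)
  assert.equal(limiter.isAllowed("ip:203.0.113.7"), true)
  // Edge case: the third request inside the window must be rejected.
  assert.equal(limiter.isAllowed("ip:203.0.113.7"), false)
  assert.equal(limiter.remaining("ip:203.0.113.7"), 0)
})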

Why This Mirrors Good Engineering

This is not a new skill invented for AI. It is the same skill that makes developers effective at writing tickets, delegating to junior engineers, or designing APIs. You need to understand the problem well enough to define clean interfaces around each piece.

The developers who benefit most from AI are often the ones who could have written the code themselves. They understand the problem domain deeply enough to know when the AI's output is wrong, and to scope the task so it is usually right. Less experienced developers sometimes get less value because they cannot tell whether the generated code is correct - and without that verification capability, the speed gains evaporate on debugging.

The AI productivity paradox data confirms this: developers on teams with high AI adoption complete 21% more tasks and merge 98% more PRs, but PR review time increases 91%. The bottleneck shifted from code generation to human judgment. Decomposition is how you keep the judgment load manageable.

The Workflow in One Paragraph

Break the feature into functions with clear interfaces. Define the type signature and expected behavior for each function before generating it. Generate implementations one at a time. Test each piece before moving to the next. When a piece does not work, diagnose which part of the scope was unclear and tighten it. The AI handles the syntax. You handle the architecture and verification.

That is the whole thing. No prompt templates required.
