I Forgot How to Code After Using AI Agents

Matthew Diakonov

You sit down for a system design interview. The interviewer asks you to sketch out an API. Six months ago, this was automatic - endpoints, data models, error handling flowing from memory. Now your brain does something new: it waits. It expects to delegate. The code is not gone from your memory, but the retrieval pathway has changed.

This experience is common enough that researchers have started studying it formally.

The Research Is In

In early 2026, Anthropic published a study on how AI assistance impacts coding skill formation. Fifty-two mostly junior engineers learned Trio, an asynchronous programming library none of them had used before. Half had access to AI assistance, half did not. Both groups completed coding tasks followed by a quiz covering debugging, code reading, and conceptual understanding.

The results were clear: the AI-assisted group averaged 50% on the comprehension tests, compared to 67% for the manual coding group. The gap was largest on debugging questions - 17 percentage points lower.

The low-scoring AI-assisted engineers shared a pattern: complete delegation of code generation, progressive handing-over of debugging to the AI, and use of AI to solve problems rather than to understand them. The high scorers, even among AI users, showed the opposite: asking follow-up questions after generating code, combining generation with explanation requests, and maintaining cognitive engagement with the material.

A separate 2025 study by Microsoft and Carnegie Mellon found a similar pattern: the more participants leaned on AI tools, the less critical thinking they engaged in, making it harder to summon those skills when needed.

What the Cognitive Shift Feels Like

Heavy use of AI coding agents changes how your brain approaches problems at a fundamental level. Instead of thinking "how do I implement this," you start thinking "how do I describe this so the agent implements it." The skill shifts from writing to specifying, from doing to directing.

The change is not binary. You do not wake up one day unable to code. It is gradual. The first sign is usually syntax hesitation - you know you want a generator in Python but pause on the exact syntax. Then it is debugging slowdown - you paste errors into Claude before spending time reading the stack trace yourself. Then it is planning delay - features that you used to decompose in your head now seem to require external help to break down.
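The generator mentioned above makes a good self-test: if the paragraph describes you, try writing one from memory before reading on. A minimal sketch (the `chunked` helper is a hypothetical example, not drawn from the study):

```python
def chunked(items, size):
    """Yield successive chunks of `size` items from a list.

    The `yield` keyword is the syntax that tends to slip: it
    suspends the function's state between calls, so chunks are
    produced lazily rather than built up front.
    """
    for start in range(0, len(items), size):
        yield items[start:start + size]

# Generators are consumed lazily; list() forces full evaluation.
print(list(chunked([1, 2, 3, 4, 5], 2)))  # [[1, 2], [3, 4], [5]]
```

If the `yield` and the `range` step both came back without hesitation, the pathway is still warm; if not, that is exactly the retrieval practice the rest of this piece is about.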

What Atrophies

Specific skills that fade with heavy AI delegation:

Syntax recall. You know the concept but cannot remember the exact function name or parameter order. This is the most visible symptom and the least serious - syntax is Google-able and not where your value as a developer sits.

Debugging intuition. You used to scan code and feel where the bug was. Years of practice built a pattern library in your head. When you stop exercising that pattern library, it gets slower to access. This is more serious because debugging intuition is hard to rebuild and is what distinguishes senior developers from junior ones.

Implementation planning. Breaking a feature into concrete implementation steps used to happen automatically. Now the habit of externalizing that to an AI means the internal mechanism gets less practice.

Error message reading. When every error goes straight to an AI for interpretation, you stop developing the skill of reading error messages yourself. Stack traces that used to be navigable become walls of text.
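The skill is concrete enough to sketch. In Python, the last line of a traceback names the exception type and message, and the frames above it give the call path that led there; reading from the bottom up is the habit that atrophies. A minimal illustration (the `load_config` and `start_app` names are hypothetical):

```python
import traceback

def load_config(path):
    # Deliberately fragile: opening a missing file raises
    # FileNotFoundError, giving us a trace to read.
    with open(path) as f:
        return f.read()

def start_app():
    return load_config("missing.conf")

try:
    start_app()
except FileNotFoundError:
    tb_lines = traceback.format_exc().splitlines()
    # Bottom line: exception type and message - read this first.
    # The frames above it (start_app -> load_config) are the path
    # that got you there - read these second.
    print(tb_lines[-1])
```

Five minutes with that bottom-up read usually yields a hypothesis before any AI is involved.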

What Gets Stronger

The Anthropic study found a pattern in the high-performing AI users: they engaged with the AI as a thinking partner, not a replacement for thinking. That distinction matters.

Skills that tend to grow with thoughtful AI use:

Architecture and system design. When AI handles implementation, you naturally spend more time at the architecture level. Understanding how components connect, evaluating tradeoffs between approaches, and designing for maintainability become the primary intellectual activities. Ironically, these are exactly the skills that matter most in technical interviews.

Code review and pattern recognition. Reviewing AI-generated code at high throughput builds fast pattern matching. You develop an eye for subtle issues, inconsistencies, and architecture violations that would be easy to miss when writing one function at a time.
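One classic member of the "subtle issues" category is Python's mutable default argument, a bug that AI generators occasionally emit and that reviewers learn to spot at a glance (a hypothetical example, not taken from the Anthropic study):

```python
def append_tag(tag, tags=[]):
    # Bug: the default list is created once, at definition time,
    # so every call without `tags` shares the same list.
    tags.append(tag)
    return tags

def append_tag_fixed(tag, tags=None):
    # Fix: use None as the sentinel and build a fresh list per call.
    if tags is None:
        tags = []
    tags.append(tag)
    return tags

print(append_tag("a"), append_tag("b"))              # state leaks across calls
print(append_tag_fixed("a"), append_tag_fixed("b"))  # ['a'] ['b']
```

Reviewing generated code at volume builds exactly this kind of reflexive pattern match: the eye snags on `tags=[]` before conscious analysis catches up.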

Requirements precision. Writing specs that produce good AI outputs requires being extremely specific about edge cases, constraints, and acceptance criteria. This is a form of requirements engineering that most developers never formally practiced.

Tradeoff evaluation. Comparing multiple AI-generated approaches on correctness, performance, maintainability, and fit forces explicit reasoning about tradeoffs. Developers who do this regularly become better at technical decision-making.

Finding the Balance

The Anthropic researchers concluded with a recommendation: deploy AI tools with intentional design choices that support engineers' learning. The productivity benefits of AI assistance are real, but they can come at the cost of the debugging and validation skills needed to oversee AI-generated code well.

That is the core tension. You need those oversight skills to use AI tools effectively. But using AI tools heavily can erode the very skills that make you good at using them.

The practical response is deliberate practice without AI assistance, the same way a pilot practices manual landings even though the autopilot works fine. A few techniques that help:

Set aside no-AI sessions. Pick one problem per week and solve it fully by hand. It does not need to be complex. The goal is keeping the retrieval pathways active.

Read AI output before using it. Before accepting generated code, read it line by line and make sure you understand what it does. This keeps comprehension skills engaged even when you are not writing.

Debug the first five minutes manually. Before pasting an error to an AI, spend five minutes reading the stack trace and forming a hypothesis. Even if you then use AI to confirm or extend your diagnosis, the initial manual effort preserves the skill.

Explain the AI's output. When an agent produces code you did not fully understand, ask it to explain the approach. Then close that window and explain it to yourself. The retrieval practice is what builds retention.

The goal is not to avoid AI tools - the productivity leverage is too significant. The goal is to remain capable of functioning without them, because the moments when you cannot delegate are exactly the moments when your own skills matter most: outages, interviews, architecture decisions, and reviewing work from agents that might be subtly wrong.

Fazm is an open source macOS AI agent, available on GitHub.
