Tested Windsurf, Cursor, and Claude Code on the Same Project - Clarifying Questions Matter

Fazm Team · 3 min read


I gave the same task to all three tools: add a new feature to an existing codebase with about 50 files. Same prompt, same repo, same starting state. The results were surprisingly different - but not for the reasons I expected.

The Setup

The task was adding OAuth integration to an existing Express API. The codebase had its own patterns for middleware, error handling, and database access. A good implementation would follow those patterns. A bad one would ignore them and create something that works but looks nothing like the rest of the code.
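To make "its own patterns" concrete, here is a minimal sketch of the kind of conventions an Express codebase like this might have. The factory style and every name below are hypothetical, standing in for whatever the real repo does:

```typescript
// Hypothetical conventions for an Express-style codebase.
// Minimal local types so the sketch is self-contained (no real Express import).
type Req = { headers: Record<string, string | undefined> };
type Res = { status: (code: number) => Res; json: (body: unknown) => Res };
type Next = (err?: Error) => void;
type Middleware = (req: Req, res: Res, next: Next) => void;

// Convention 1: middleware is always produced by a named factory function.
function requireHeader(name: string): Middleware {
  return (req, _res, next) => {
    if (!req.headers[name]) {
      // Convention 2: failures always flow through next(err),
      // never by writing to res directly inside middleware.
      next(new Error(`missing required header: ${name}`));
      return;
    }
    next();
  };
}
```

A new OAuth feature that instead called `res.status(401).json(...)` inline would work in isolation, but it would violate both conventions at once, which is exactly the kind of mismatch described below.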

What Actually Differed

Cursor jumped in immediately. It generated code fast and produced a working implementation, but used patterns that did not match the existing codebase. The code worked in isolation but looked like it was written by a different team.

Windsurf also started generating quickly. The output was solid but similarly disconnected from the existing patterns. It did not ask about the middleware conventions or the error handling approach.

Claude Code did something different - it asked questions first. "I see you have a custom middleware pattern in auth.ts. Should I follow that pattern?" and "Your error handler wraps errors in a specific format. Should the OAuth errors use the same format?" After two rounds of clarifying questions, it generated code that matched the codebase style.
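To illustrate what that second question is really about, here is a sketch of reusing an existing error envelope for OAuth failures. The `{ error: { code, message } }` shape and the helper names are hypothetical, standing in for whatever format the real error handler uses:

```typescript
// Hypothetical project-wide error envelope.
type ErrorEnvelope = { error: { code: string; message: string } };

// Existing helper the codebase (hypothetically) already uses everywhere.
function wrapError(code: string, message: string): ErrorEnvelope {
  return { error: { code, message } };
}

// A consistent OAuth implementation reuses that helper...
const consistent = wrapError("oauth_token_expired", "Refresh token has expired");

// ...instead of inventing a second, parallel shape like this one:
const inconsistent = { err: "oauth_token_expired", detail: "Refresh token has expired" };
```

Both objects "work", but only the first one lets existing clients and error-handling middleware treat OAuth failures like every other failure.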

Why Clarifying Questions Matter

The biggest risk with AI coding tools is not wrong code - it is inconsistent code. A feature that works but follows different conventions than the rest of the project creates maintenance debt. Every developer who touches it later has to figure out why this file looks different from everything else.

Tools that ask questions before generating code produce output that fits the existing project. Tools that generate immediately produce output that fits the training data. These are very different things.

The Tradeoff

Asking questions is slower. If you need a quick prototype that you will throw away, the tool that generates fastest wins. But if you are adding to a production codebase that a team maintains, the five minutes spent on clarifying questions save hours of refactoring later.

The best AI coding tool is not the fastest one. It is the one that understands what it does not know and asks.

Fazm is an open source macOS AI agent, available on GitHub.
