AI Coding
56 articles about AI coding.
1.6M Git Events Show AI Code Needs More QA
When AI agents generate most of your code, your review process must scale to match. Analysis of 1.6 million git events reveals where QA breaks down - and how to fix it.
18M Tokens to Fix Vibecoding Debt - And How to Avoid It
Letting AI write code without specs creates a specific kind of technical debt that costs millions of tokens to unwind. Here is the system that prevents it.
Advising Junior Developers in the AI Age - Why Fundamentals Still Matter
When 80% of code is AI-generated, junior developers still need strong fundamentals. Here is how to mentor new engineers when the easy work is automated away.
When AI-Built Apps Need a Rewrite vs When They Are Good Enough
Not every AI-built app needs a professional rewrite. Here is how to evaluate whether your AI-generated code is production-ready or heading for trouble.
Maintaining Code Quality with AI Coding Agents
AI agents write plausible code that passes review at a glance. Enforce quality with CLAUDE.md conventions, mandatory linter runs, and automated test gates.
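The gate the teaser describes can be sketched in a few lines. A minimal sketch, assuming Python tooling; the linter and test commands below are placeholders for whatever your project actually runs (ruff, swiftlint, xcodebuild test, ...):

```python
import subprocess
import sys

# Placeholder commands - substitute your project's real linter and test runner.
GATE_COMMANDS = [
    [sys.executable, "-m", "ruff", "check", "src/"],
    [sys.executable, "-m", "pytest", "-q"],
]

def run_gate(commands):
    """Run each check in order; fail fast on the first non-zero exit."""
    for cmd in commands:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print("GATE FAILED:", " ".join(cmd))
            print(result.stdout + result.stderr)
            return False
    return True
```

Wired into a pre-commit hook or CI step, this makes the linter and test runs mandatory rather than advisory.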
When AI Code Review Flags Intentional Behavior as a Bug
The real gap in automated code review is not missed bugs - it is when AI catches something that looks wrong but is actually intentional. Pattern matching alone cannot tell the difference.
AI Made My Team Write 21% More Code - The Review Queue Doubled
AI does not remove bottlenecks, it moves them downstream. When code generation gets faster, code review becomes the new constraint.
Why AI Coding Agents Fail Without Enough Project Context
Agent mode errors in Cursor, ChatGPT, and other tools often come from insufficient context - not model limitations. Here is how to give your AI agent the context it needs.
We Don't Need Experts Anymore Thanks to Claude - 5 Agents, 3 Hours Debugging
The irony of AI coding - spending hours debugging AI-generated error handling code with multiple agents. AI makes you faster until it makes you slower.
AI Coding Productivity Data Is Not What You Expected
METR's research shows developers overestimate their AI coding productivity gains. The perceived time savings do not match the measured results - here is why.
5000 Lines of Code Per Day - Why the Metric Is Meaningless Even for AI
AI agents can write thousands of lines of code daily. But lines of code was always a bad metric - and AI makes it even more obvious. What actually matters instead.
Asked Claude to Fix Recipes, Built a macOS App Instead
How AI-assisted scope creep turns a simple fix into a full macOS app - the natural progression from one-liner to production software.
The Better Claude Code Becomes, the Less I Want to Use It
As Claude Code gets more opinionated and capable, it removes the flexibility that made it useful. When tools think for you, you stop thinking.
The Biggest AI Coding Productivity Gain Is Codebase Navigation
AI saves the most developer time on codebase navigation and understanding - finding the right code before fixing it. The same skill applies to navigating accessibility trees.
Breaking Down Complex Projects for AI Coding Agents
Handing an AI coding agent a full PRD never works. Learn how to decompose complex projects into agent-sized tasks that actually get completed correctly.
Cancelled My Cursor Subscription, All In on Codex - But Local Access Is Hard to Give Up
Switching from Cursor to Codex sounds great until you realize local file access and shell commands are features you cannot live without.
Claude Code Writes Your Code, but Do You Know What's in It?
AI coding agents restructure modules in unexpected ways. The code works but the architecture drifts from your mental model unless you actively review it.
How CLAUDE.md Prevents AI Agents from Writing Goop Code
The single biggest improvement for AI-generated code quality is describing your architecture in a CLAUDE.md file before the agent touches anything. Here is how.
Tell Your Coding Agent to Ship Small Chunks
Large AI-generated PRs are unreviewable. Ship features in small chunks with per-feature CLAUDE.md specs and separate agent sessions for each piece.
Cursor vs Codex vs Claude Code - Different Tools for Different Workflows
Cursor, OpenAI Codex, and Claude Code are not interchangeable. Each fits a different development style. Here is when to use which AI coding tool.
Git Was Built for Humans but AI Is Writing My Code Now
Why git's human-centric workflow breaks down with AI-generated commits and how intent-based rollback could fix the problem.
Keeping CLAUDE.md in Sync When 5 Agents Modify Your Codebase
How to prevent CLAUDE.md files from going stale when multiple AI agents rename modules and restructure code simultaneously.
Drowning in AI? Start with a CLAUDE.md File
The biggest thing that helped me learn AI coding tools was treating the AI like a junior dev I am managing. Start with a CLAUDE.md file and build from there.
Anyone Else Feeling Like They're Losing Their Craft to AI?
The grief of watching AI take over coding tasks you spent years mastering, and why low-level skills still matter as craft.
Multi-Agent Code Review Loops - The Simple Pattern That Works
Running parallel AI coding agents works best with a simple pattern: one agent writes code, another reviews it. Here is how to set it up.
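The write-then-review pattern reduces to a short loop. A minimal sketch, where the `writer` and `reviewer` callables stand in for two separate agent sessions (this snippet does not spawn real agents):

```python
def review_loop(task, writer, reviewer, max_rounds=3):
    """One agent drafts code, a second critiques it, until approval."""
    code = writer(task, feedback=None)
    for _ in range(max_rounds):
        # reviewer returns ("approve", "") or ("revise", notes)
        verdict, notes = reviewer(task, code)
        if verdict == "approve":
            return code
        code = writer(task, feedback=notes)
    raise RuntimeError("reviewer never approved; escalate to a human")
```

Capping the rounds matters: two agents can otherwise ping-pong revisions indefinitely.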
Managing Context Bloat in AI Coding Agent Workflows
Context bloat kills AI coding agent performance. Learn why narrow, specialized skills beat broad context windows for persistent memory in Cursor and similar tools.
How Developers Actually Use AI in Their Coding Workflow
What real AI-assisted development looks like vs the demo version. Five agents doing heavy lifting while you architect - the workflow nobody shows on Twitter.
The Most Important AI Coding Rule - Remove Verbosity and Blathering
When writing Swift and macOS code with AI, the 'remove verbosity and blathering' instruction does the most important work. Concise prompts produce better code.
Sandbox vs YOLO Mode for AI Coding Agents
Should you run AI coding agents in a sandbox or let them execute freely? YOLO mode with frequent git commits offers the best balance of speed and safety.
When Sonnet Outperforms Opus - Choosing the Right AI Model Tier
Sonnet vs Opus for coding tasks - when the cheaper, faster model produces better results. Benchmarks, cost comparison, and a practical routing guide for daily AI coding work.
When Cheaper AI Models Are Good Enough for Daily Development
Sonnet handles Python wrappers and routine coding just fine. Opus shines for architecture decisions. How to route AI model usage by task complexity and save money.
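The routing idea in these two pieces can be made concrete. A hedged sketch - the keyword list is an illustrative heuristic, not an API, and the tier names simply echo the articles:

```python
# Hints that suggest open-ended, big-picture work worth the expensive tier.
COMPLEX_HINTS = ("architecture", "design", "refactor across", "migration plan", "concurrency")

def pick_model(task_description):
    """Route big-picture work to the pricey tier, routine work to the cheap one."""
    text = task_description.lower()
    if any(hint in text for hint in COMPLEX_HINTS):
        return "opus"    # slower and pricier, better at open-ended reasoning
    return "sonnet"      # fast and cheap; fine for wrappers and routine edits
```

Even a crude router like this captures most of the savings, because routine tasks dominate daily volume.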
Spotify Devs Haven't Written Code Since December - Specification-Driven Development
Specification-driven development is replacing hands-on coding. Write specs, let AI agents generate the implementation. Here's why it works.
Vibe Coding Is Not an Excuse to Skip Code Review
Your CTO saying 'just vibe code it' is not a strategy. Using AI to ship faster works - but only if you still review what it produces.
Cursor Caught a Race Condition - Voice-Controlled Coding and Verbal Debugging
Voice-controlled AI coding agents don't just save keystrokes. Speaking your code logic out loud helps you think more clearly and catch bugs you'd miss typing.
Why Vibe Coded Projects Fail at Scale
Vibe coding with AI is great for prototypes but breaks down at scale. Here is why, and how to transition to structured AI-assisted development before it is too late.
Making AI Coding Enjoyable - Fix the Process, Not the AI
The 200-file changeset problem is a process failure, not an AI failure. Scope your agents tightly to make AI-assisted coding productive and enjoyable.
AI Coding Tools Made Me Mass-Produce Bad Code Faster
AI-generated code looks plausible even when it is wrong. Handwritten bugs are easier to spot. AI bugs have correct syntax but wrong logic.
The Real AI Coding Skill Is Problem Decomposition, Not Prompt Engineering
The developers who get the most from AI coding tools are not better at prompting. They are better at decomposing problems. Here is the concrete workflow with examples that separate 2x from 10x AI-assisted developers.
The Biggest AI Coding Skill Gap Is Context Management
Too much context is as bad as too little when working with AI agents. The same principle applies to GUI automation with accessibility trees. Learn to manage context deliberately.
AI Coding Technique: Change One File, Migrate the Entire Codebase
A practical AI coding technique - manually change one SwiftUI file, then have Claude Code migrate 1500+ hardcoded calls across the entire codebase to match.
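In the article the agent performs the migration; a scripted equivalent of the same idea is to derive a rewrite rule from the one hand-edited file and apply it everywhere. The SwiftUI pattern below is hypothetical, standing in for whatever hardcoded call you migrated by hand:

```python
import re

# One rule per hand-derived transformation, e.g. replacing hardcoded
# point sizes with a design-token call across every SwiftUI call site.
RULES = [
    (re.compile(r"\.font\(\.system\(size:\s*(\d+)\)\)"), r".font(.appStyle(\1))"),
]

def migrate(source):
    """Apply each rewrite rule to one file's source text."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source
```

Whether the agent or a script applies the rule, the key move is the same: establish the target pattern in one file first, then mechanize the remaining 1500+ call sites.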
The Real Metric AI Improved in Software - Release Cadence
AI coding tools did not make individual code better. They made release cadence faster. Going from monthly to weekly releases on a desktop app using Claude Code.
The AI Renaissance for Retirees: Writing Specs Instead of Code
Retirees are building software by writing detailed CLAUDE.md specs that direct AI agents. You do not need to write code anymore - you need to write clear specs.
Building a Desktop App 100% with Claude AI
What you learn the hard way building a native desktop email client entirely with Claude. Swift, Rust, and the real challenges no tutorial covers.
The Scope Shift in Code Copying - From Stack Overflow Snippets to Full AI Interaction Flows
AI changed how developers copy code. Instead of grabbing individual accessibility API snippets from Stack Overflow, we now generate entire interaction flows.
Context Management Is 90% of the Skill in AI-Assisted Coding
The real skill in AI-assisted coding is not prompting - it is context management. Persistent memory, CLAUDE.md files, and layered context separate productive developers from frustrated ones.
Why Cursor Skips Planning Mode and How a Strict Plan-Execute Loop Fixes It
Cursor and similar AI coding tools skip planning and jump straight to editing files. A strict plan-then-execute loop prevents runaway changes.
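The plan-execute loop amounts to one extra gate. A minimal sketch, where `agent_plan`, `apply_step`, and `approve` are stubs for the agent's planning call, its file edits, and your sign-off:

```python
def plan_execute(task, agent_plan, apply_step, approve):
    """Refuse to touch files until the proposed plan is explicitly approved."""
    plan = agent_plan(task)                 # ordered list of small steps
    if not approve(plan):
        return []                           # nothing executed, nothing changed
    return [apply_step(step) for step in plan]
```

The value is in the early return: an unapproved plan costs you nothing, whereas an agent that edits first can leave a half-finished change behind.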
Developers Are Becoming Their Own Business Analysts in the AI Era
The most productive developers now spend their day writing detailed requirements and acceptance criteria, then handing them to Claude. Writing specs is the new core skill.
The 1M Context Trap: Why More Context Makes Claude Lazier
Research on 18 frontier models confirms every one degrades with more context. The 'lost-in-the-middle' effect causes 30%+ accuracy drops. The counterintuitive fix: use less context, not more.
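The "use less context" fix can be sketched as a packing step before each prompt. A hedged sketch - the word-overlap scoring is a naive placeholder for real retrieval, and the edge placement follows the lost-in-the-middle finding that models attend best to the start and end of the window:

```python
def pack_context(task, chunks, budget=4):
    """Keep only the few most relevant chunks; strongest ones at the edges."""
    words = set(task.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(words & set(c.lower().split())),
                    reverse=True)
    kept = scored[:budget]
    # strongest chunk first, second-strongest last, weaker ones in the middle
    if len(kept) > 2:
        kept = [kept[0]] + kept[2:] + [kept[1]]
    return kept
```

A small, well-ordered context routinely beats dumping the whole repository into a 1M-token window.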
Opus Token Burn Rate - Watching It Write, Delete, and Rewrite 200-Line Functions
Opus does not just burn tokens - it vaporizes them. The write-delete-rewrite cycle where Opus creates 200 lines, decides it does not like them, and starts over.
Pair Programming with AI - Write the Spec First, Approve the Plan
The best workflow for AI pair programming: write a short spec, let the agent propose its plan before writing any code, then approve step by step. Control stays with you.
Why Mandating AI Coding Tools Fails - Organic Adoption Wins
Forcing developers to use AI coding tools backfires. The developers who get the most from AI got there organically because it genuinely made them faster.
Codex vs Claude Code - A Practical Comparison for Real Development
OpenAI Codex and Claude Code take different approaches to AI-assisted development. Here is how they compare for agent-mode workflows, MCP integration, and everyday use.
Building a macOS Desktop Agent with Claude - How AI Wrote Most of Its Own Code
How we used Claude to build Fazm, a native macOS AI agent. ScreenCaptureKit, accessibility APIs, and Whisper - with Claude writing most of the Swift code.
The AI Verification Paradox - We Code Faster But Ship Slower
AI makes individuals write code faster, but teams are moving slower. The bottleneck shifted from writing code to understanding what code just got written.
What SaaS Ideas AI Cannot Replace - Always-On, Hardware Access, and Persistent State
Claude Code can write you a script but it cannot run a 24/7 service, access your screen, or manage devices. Here is where SaaS still wins.
AI Lets Everyone Ship Code - But Who Holds the Pager?
AI coding tools mean non-engineers can ship code faster than ever. The problem is not the code quality - it is the ownership gap when things break at 3am.