Code Review
29 articles about code review.
1.6M Git Events Show AI Code Needs More QA
When AI agents generate most of your code, your review process must scale to match. Analysis of 1.6 million git events reveals where QA breaks down - and how to fix it.
129K Commits Later - Vibe Coding Is Just Coding
After 129,000 AI-assisted commits, the distinction between vibe coding and real coding has disappeared. Here is what changes when agents write most of the code and humans review - with real data, workflow patterns, and hard-earned lessons.
Using AI Agents as Code Reviewers with Custom Review Checklists
How to set up Claude Code as a code reviewer using custom slash commands and review checklists - catching bugs, enforcing standards, and scaling code review.
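As a concrete flavor of that setup: Claude Code reads custom slash commands from markdown files under .claude/commands/, so a reviewer command is just a checklist prompt in a file. A minimal sketch - the checklist items here are illustrative, not the article's:

```markdown
<!-- .claude/commands/review.md - run as /review inside a Claude Code session -->
Review the current diff against this checklist. Report only concrete findings,
each with a file and line reference:

1. Does every new endpoint go through the same auth middleware as its neighbors?
2. Are errors handled explicitly rather than silently swallowed?
3. Do new names and module boundaries match the surrounding conventions?
4. Did anything change that the task did not require?
```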
When AI Code Review Flags Intentional Behavior as a Bug
The real gap in automated code review is not missed bugs - it is when AI catches something that looks wrong but is actually intentional. Pattern matching cannot tell the difference.
AI Made My Team Write 21% More Code - The Review Queue Doubled
AI does not remove bottlenecks; it moves them downstream. When code generation gets faster, code review becomes the new constraint.
Auth Bypass Risks in AI-Generated Code
AI-generated code often has subtle authentication bypass vulnerabilities. Learn where auth middleware bugs hide and how to catch them before they ship.
Why Automated Code Review Catches Syntax but Misses Logic Errors
Automated code review tools are pattern matchers - they do not understand business logic. They catch formatting issues but miss the logic errors that actually matter.
Claude Code Writes Your Code, but Do You Know What's in It?
AI coding agents restructure modules in unexpected ways. The code works, but the architecture drifts from your mental model unless you actively review it.
Tell Your Coding Agent to Ship Small Chunks
Large AI-generated PRs are unreviewable. Ship features in small chunks with per-feature CLAUDE.md specs and separate agent sessions for each piece.
Cross-Review Between Parallel Agents Catches the Bugs Single Agents Miss
When parallel agents review each other's work instead of their own, they catch integration-level bugs that self-review misses. The data shows 87% fewer false positives and 3x more real bugs found.
I Measured Every Hour My Human Worked for Two Weeks
Tracking a developer's time for two weeks showed they had stopped writing code entirely. With AI agents, output increased 89x while the human's role shifted to review.
Multi-Agent Code Review Loops - The Simple Pattern That Works
Running parallel AI coding agents works best with a simple pattern: one agent writes code, another reviews it. Here is how to set it up.
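A minimal sketch of that pattern, assuming the claude CLI's non-interactive -p (print) mode and an agent permitted to edit and commit; the prompts and file names are illustrative:

```python
import subprocess

def claude(prompt: str) -> str:
    """One-shot Claude Code call: -p prints the response and exits."""
    result = subprocess.run(["claude", "-p", prompt],
                            capture_output=True, text=True, check=True)
    return result.stdout

# Agent 1 writes the code and commits it.
claude("Implement the change described in specs/feature.md, then commit.")

# Agent 2 reviews the resulting diff in a fresh context - it never sees
# the implementor's reasoning, only the code.
diff = subprocess.run(["git", "diff", "HEAD~1"],
                      capture_output=True, text=True, check=True).stdout
print(claude("Review this diff for bugs and deviations from specs/feature.md:\n\n" + diff))
```

Each call is a separate process, which is the point: the reviewer starts cold, the way a human reviewer would.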
Orchestrator for Implementor and Review Loop - AI Agent Code Review Patterns
How to implement code review loops with AI agent orchestration, using an implementor/reviewer pattern coordinated through a shared file.
Orchestrator Implementor Review Loop - Code Review with tmux Claude Code Sessions
How to implement a code review loop using tmux-based Claude Code orchestration with separate orchestrator, implementor, and reviewer sessions.
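The tmux plumbing behind that is small - one detached session per role, with the orchestrator typing into the others via send-keys. A sketch; the session names and prompt are illustrative:

```python
import subprocess

# One detached tmux session per role, each running its own Claude Code instance.
for role in ["orchestrator", "implementor", "reviewer"]:
    subprocess.run(["tmux", "new-session", "-d", "-s", role, "claude"], check=True)

# The orchestrator (or a script like this) drives a session by typing into it.
subprocess.run(["tmux", "send-keys", "-t", "implementor",
                "Implement the task in specs/task-01.md", "Enter"], check=True)
```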
Running Parallel AI Agents on Isolated Git Worktrees for Small, Reviewable PRs
The biggest problem with AI-generated PRs is scope creep - agents touch dozens of files across unrelated concerns. Isolated git worktrees with one agent per concern fixes this and produces PRs humans can actually review.
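Mechanically, the isolation is one branch and one worktree per concern, with an agent launched in each. A sketch, assuming the claude CLI's -p mode; the concern names are illustrative:

```python
import subprocess

concerns = ["auth-middleware", "billing-webhooks", "search-index"]
agents = []
for concern in concerns:
    path = f"../wt-{concern}"
    # Isolated checkout on its own branch: the agent's diff stays scoped.
    subprocess.run(["git", "worktree", "add", path, "-b", concern], check=True)
    # Launch the agents in parallel, one per worktree.
    agents.append(subprocess.Popen(
        ["claude", "-p", f"Implement only the {concern} task in specs/{concern}.md."],
        cwd=path))
for agent in agents:
    agent.wait()
```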
Special Token Injection Attacks on AI Coding Agents
Gaslighting LLMs with special token injection is a real threat to AI coding agents. Learn how these attacks work and how to defend your agent workflows.
Reviewing AI Agent Code Changes - What Was Not Modified Matters More
The diff shows what changed. The real bugs hide in what the agent decided not to change. A systematic approach to reading the negative space in AI-generated diffs.
Vibe Coding Is Not an Excuse to Skip Code Review
Your CTO saying 'just vibe code it' is not a strategy. Using AI to ship faster works - but only if you still review what it produces.
Why Software Engineers Are Divided on AI - The 5x Gain Is Not Where You Think
The real AI productivity gain for developers is in code review and navigation, not code generation. This explains why engineers disagree on AI's value.
The Danger of Plausible-Looking AI Code - How to Catch Subtle Bugs
AI-generated code compiles, passes linting, and looks correct. But the logic can be subtly wrong in ways human-written code never is. Code review habits have to change to catch these bugs.
Running AI Agents as Actual Employees in Real Workflows
How to run multiple Claude Code instances in parallel as actual team members - task assignment patterns, git worktree isolation, coordination rules, and real workflow examples from daily use.
Opus for Planning, Codex for Review: When 8 Phases Were Supposed to Be 5
How to use Opus for project planning and Codex for code review when running parallel Claude agents. Lessons from a project that grew from 5 planned phases to 8.
Pair Programming with AI - Write the Spec First, Approve the Plan
The best workflow for AI pair programming: write a short spec, let the agent propose its plan before writing any code, then approve step by step. Control stays with you the whole way.
$25 Per PR Review Is Wild - Run Claude Code on the Diff Yourself
Anthropic's PR review tool costs $15-25 per pull request. You can build the same thing yourself with Claude Code and a custom skill in an hour - for pennies per review instead of dollars.
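The homegrown version is little more than piping the PR diff into a one-shot Claude Code call. A sketch using the GitHub CLI; the checklist path is an illustrative assumption:

```python
import subprocess
import sys

pr = sys.argv[1]  # PR number, e.g. "123"

# Fetch the pull request's diff via the GitHub CLI.
diff = subprocess.run(["gh", "pr", "diff", pr],
                      capture_output=True, text=True, check=True).stdout

# One-shot review: pennies of tokens instead of dollars per PR.
review = subprocess.run(
    ["claude", "-p", "Review this diff against .claude/review-checklist.md. "
                     "Report concrete issues only:\n\n" + diff],
    capture_output=True, text=True, check=True).stdout
print(review)
```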
Reading Extended Thinking from 5 Parallel Claude Code Agents
What it feels like reading extended thinking from 5 parallel Claude Code agents. It is like having 5 coworkers all privately judging your code at the same time.
When Developers Stop Writing Code and Start Reviewing AI Agents
Going from writing code to mass-reviewing output from 5 parallel Claude agents. Haven't typed a function in weeks. The new developer workflow is review, not writing.
Write Specs Before PRs to Avoid Redesign Debates in Code Review
How writing a short spec before non-trivial PRs prevents architecture debates during code review and saves hours of rework.
From Writing Code to Reviewing Code - The AI Shift
The job changed from writing code to mass-reviewing AI-generated code from parallel agents and writing CLAUDE.md specs. Here is what that transition looks like.
The AI Verification Paradox - We Code Faster But Ship Slower
AI makes individuals write code faster, but teams are moving slower. The bottleneck shifted from writing code to understanding what code just got written.