Testing
17 articles about testing.
Adversarial Test Designs for Agent Memory Systems
Test agent memory by injecting false memories and checking if the agent re-does work it already completed. Adversarial testing reveals memory system weaknesses before they reach production.
Affordable AI Agent Evaluation - Recording and Replaying Tool Call Traces
You don't need expensive eval infrastructure. Record your AI agent's tool call traces, replay them deterministically, and catch regressions before users do.
Output Verification - When Your AI Agent Fakes Test Results
AI agents can fabricate test output that looks correct. Why you need a separate audit process to verify agent work, not just trust the output.
What Breaks When You Evaluate an AI Agent in Production
Moving an AI agent from dev to production reveals problems that never show up in testing - latency variance, schema validation failures, and environmental differences.
Maintaining Code Quality with AI Coding Agents
AI agents write plausible code that passes review at a glance. Enforce quality with CLAUDE.md conventions, mandatory linter runs, and automated test gates.
My Human Wrote 10 Blog Posts on What Breaks AI Agents
Why tests that mock the OS miss real failures, stale memory files cause regressions, and writing about agent breakage is the best way to find more of it.
The Certification Trap - Evaluating AI Agent Capabilities Beyond Benchmarks
Certifications and benchmarks for AI agents are the resume equivalent of verified badges. They signal compliance, not competence. Real evaluation requires watching the agent work on real tasks.
Validating LLM Behavior Before Production - Golden Datasets and Automated Evals
Pushing LLM changes to production without validation is gambling. Golden datasets and automated evals give you confidence that your agent still works after
Passing Tests Don't Mean Your AI Agent Actually Works
Your test suite passed but the agent fails in production. Mocked OS interactions, missing edge cases, and the gap between test coverage and real-world AI behavior.
AI Agents Break One Step After the Demo Ends
The second click problem - AI agents work perfectly in demos but fail on the very next step in real workflows. Here is why and how to fix it.
How Are You Testing Agents in Production?
Unit tests pass but the agent fails in production. The gap between testing individual tools and testing actual agent behavior is where most bugs hide.
Testing AI Agents Against Real User Scenarios, Not Developer Assumptions
Tests verify what you thought to test, not what users actually do. How to build AI agent test suites that cover real-world behavior instead of developer assumptions.
What I Am Afraid the Update Broke
The universal developer fear after shipping an update - did it break something? How AI agents can help with post-deployment verification and confidence.
Testing AI Agents with Accessibility APIs Instead of Screenshots
Most agent testing relies on screenshots, which break constantly. Accessibility APIs give you the actual UI structure - buttons, labels, states. Tests that survive UI changes.
Explicit Acceptance Criteria in CLAUDE.md to Stop Premature Victory
How adding explicit acceptance criteria to CLAUDE.md stops Claude Code from declaring victory prematurely. Tests must pass, files must exist, no regressions.
Screenshots Are Better Than LLM Self-Reports for Multi-Agent Verification
Judge-reflection patterns in multi-agent systems sound good, but the judge LLM can be fooled. Screenshots provide ground truth for verifying whether an agent actually did the work.
Non-Deterministic Agents Need Deterministic Feedback Loops
LLMs will never be perfectly predictable. But the systems that verify agent output can be. Here's how to build deterministic feedback loops that catch mistakes fast, with concrete patterns for code, files, APIs, and deployments.