Hallucination
6 articles about hallucination.
The 3-Tool-Call Problem and Why It Matters
Three tool calls means three round trips and three chances to hallucinate. Each step compounds error probability, making multi-step agent tasks markedly less reliable than single-step ones.
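To make the compounding concrete, here is a minimal sketch, assuming an illustrative 95% per-call success rate (not a measured figure):

```python
# Per-call success probabilities compound multiplicatively when every one of
# the three tool calls must succeed for the task to succeed.
# The 0.95 per-call figure is an assumed illustration, not a measured rate.
per_call_success = 0.95
chained_success = per_call_success ** 3
print(f"3 chained calls: {chained_success:.3f}")  # ~0.857, so ~14% task failure
```

At ten chained calls the same per-call rate drops below 60% overall, which is why step count matters more than per-step accuracy suggests.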
AI Agents Recommend Packages That Don't Exist
AI agents confidently invoke non-existent functions and recommend phantom npm packages. How to detect and prevent hallucinated tool calls in production.
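One detection approach this premise suggests is checking each recommended package against the public npm registry, which returns HTTP 404 for names that don't exist. A hedged sketch; the function name and flow are illustrative, not the article's code:

```python
# Before installing a package an agent recommends, confirm it actually exists
# on the public npm registry. Unknown package names return HTTP 404.
import urllib.request
import urllib.error

def npm_package_exists(name: str) -> bool:
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False  # phantom package: the agent hallucinated it
        raise

print(npm_package_exists("left-pad"))  # True: a real package
```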
AI Agent Hallucination Detection - Safeguards That Actually Work
AI agents fail confidently - they report success while quietly doing the wrong thing. Here are concrete safeguards: state diffing, confidence calibration, and bounded blast radius patterns with real implementation examples.
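A minimal sketch of the state-diffing safeguard named above, assuming a file-based agent; the watched paths, the agent call, and the helpers are hypothetical:

```python
# Snapshot observable state before and after an agent action, then verify the
# diff matches what the agent claimed it did.
def snapshot(paths):
    """Map each watched file path to its current contents (missing -> None)."""
    out = {}
    for p in paths:
        try:
            with open(p) as f:
                out[p] = f.read()
        except FileNotFoundError:
            out[p] = None
    return out

def diff_states(before, after):
    return {p for p in before if before[p] != after[p]}

watched = ["config.yaml", "deploy.sh"]   # hypothetical watched files
before = snapshot(watched)
# ... agent performs its action here ...
after = snapshot(watched)
changed = diff_states(before, after)
claimed = {"config.yaml"}                # what the agent reported touching
if changed != claimed:
    raise RuntimeError(f"state diff mismatch: claimed {claimed}, observed {changed}")
```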
Claude with n8n MCP Server - Reference Docs Prevent Hallucination
The best AI for n8n automation creation is Claude with the n8n MCP server. Feeding reference docs into context prevents hallucinated node names and wrong parameters.
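As a rough illustration of the doc-grounding idea (the file path, prompt wording, and helper function are assumptions, not the n8n MCP server's API):

```python
# Prepend real node reference docs to the prompt so the model copies node
# names from the docs instead of inventing them.
from pathlib import Path

def build_prompt(task: str, docs_path: str = "n8n_node_reference.md") -> str:
    reference = Path(docs_path).read_text()
    return (
        "Use ONLY node names and parameters that appear in the reference below.\n"
        "If a needed node is not documented, say so rather than guessing.\n\n"
        f"--- REFERENCE ---\n{reference}\n--- END REFERENCE ---\n\n"
        f"Task: {task}"
    )

prompt = build_prompt("Create a workflow that posts new RSS items to Slack")
```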
Solving the Hallucination vs Documentation Gap for Local AI Agents
How CLI introspection and skills that tell agents to check docs first can reduce hallucinations in local AI agents.
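A minimal sketch of CLI introspection, using a tool's own --help output as ground truth; git here is just a stand-in for whatever local tool the agent drives:

```python
# Run the tool's --help and hand the output to the agent before it composes a
# command, so flags come from the binary rather than the model's memory.
import subprocess

def introspect_cli(tool: str) -> str:
    result = subprocess.run(
        [tool, "--help"], capture_output=True, text=True, timeout=10
    )
    return result.stdout or result.stderr  # some tools print help to stderr

help_text = introspect_cli("git")
context = f"Authoritative usage for 'git':\n{help_text}\nOnly use flags listed above."
```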
Uncertainty Markers in AI Agent Outputs - Why Knowing What the Model Doesn't Know Matters
LLMs that mark what they are uncertain about are far more trustworthy in production. Uncertainty markers help AI agents fail gracefully instead of failing confidently.
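One way this might look in practice: ask the model to attach a confidence field to each proposed action and gate on it. The JSON shape and the 0.8 threshold below are illustrative assumptions:

```python
# Hold back low-confidence agent actions for human review instead of
# executing them automatically.
import json

raw = '''{"action": "delete_index", "confidence": 0.55,
          "uncertain_about": "whether the index is still referenced"}'''

claim = json.loads(raw)
CONFIDENCE_FLOOR = 0.8  # assumed gating threshold

if claim["confidence"] < CONFIDENCE_FLOOR:
    print(f"Escalating for review: {claim['uncertain_about']}")
else:
    print(f"Proceeding with {claim['action']}")
```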