AI Agents Recommend Packages That Don't Exist

Fazm Team · 2 min read


The agent confidently runs npm install react-native-smart-calendar. The package does not exist. Or worse, it exists but was published by someone who anticipated the hallucination and uploaded a malicious package with that plausible-sounding name.

This is not a rare edge case. AI agents hallucinate tool calls and package names regularly, and the confidence with which they do it makes the problem harder to catch.

Why This Happens

LLMs generate text based on patterns. A package name like react-native-smart-calendar follows the exact naming conventions of real packages. The model has seen thousands of similar names and generates one that fits the pattern - regardless of whether it actually exists.

The same applies to function calls. An agent might call fs.readFileSecure() or os.getSystemMemoryUsage() - functions that sound real, follow naming conventions, but do not exist in any standard library.

Detection Strategies

Pre-execution validation - Before running any install command, verify the package exists in the registry. A simple API call to registry.npmjs.org or pypi.org catches phantom packages before they hit your system.
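A minimal sketch of this guard in Python, using the public npm registry endpoint (a GET to registry.npmjs.org/&lt;name&gt; returns 404 for packages that do not exist). The `guarded_install` helper and its behavior are illustrative, not part of any particular agent framework:

```python
import urllib.error
import urllib.request


def npm_package_exists(name: str) -> bool:
    """Return True if `name` resolves in the public npm registry."""
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False  # phantom package: the agent hallucinated it
        raise


def guarded_install(name: str, exists_fn=npm_package_exists) -> list[str]:
    """Build the install command only after the package is verified to exist."""
    if not exists_fn(name):
        raise ValueError(f"refusing to install unverified package: {name}")
    return ["npm", "install", name]
```

The same pattern works for PyPI by querying pypi.org/pypi/&lt;name&gt;/json instead.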

Tool schema enforcement - Define the exact set of tools available to the agent. If the agent tries to call a function not in the schema, reject it. This is the simplest and most effective guard against hallucinated tool calls.
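A sketch of schema enforcement, assuming a simple mapping from tool names to their allowed parameters (the tool names here are hypothetical examples):

```python
# The exact set of tools the agent may call, with their allowed parameters.
TOOL_SCHEMA: dict[str, set[str]] = {
    "read_file": {"path"},
    "list_dir": {"path"},
    "run_tests": {"target"},
}


def validate_tool_call(name: str, args: dict) -> bool:
    """Reject any call whose name or arguments fall outside the schema."""
    if name not in TOOL_SCHEMA:
        raise ValueError(f"unknown tool: {name}")
    unexpected = set(args) - TOOL_SCHEMA[name]
    if unexpected:
        raise ValueError(f"unexpected arguments for {name}: {unexpected}")
    return True
```

A hallucinated call like fs.readFileSecure() never reaches execution because it fails the lookup before any code runs.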

Allowlists over blocklists - Maintain a list of approved packages for your project. Any package not on the list requires human approval before installation. This is more restrictive but prevents both hallucinated and malicious packages.
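The allowlist gate can be as small as a set lookup. The package names below are placeholders for whatever your project actually approves:

```python
# Packages the team has already reviewed and approved.
ALLOWLIST: set[str] = {"react", "lodash", "date-fns"}


def install_decision(name: str) -> str:
    """Allow approved packages; route everything else to a human."""
    if name in ALLOWLIST:
        return "install"
    return "needs_human_approval"
```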

The Supply Chain Risk

Attackers monitor common AI hallucinations and register packages with those names. This is called "slopsquatting" and it is a real supply chain attack vector. Your agent's confident recommendation becomes a pathway for malicious code.

The fix is to treat every agent-suggested package as untrusted input: validate that it exists, check its download count, verify the publisher, and review the code before adding it to your project.
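The download-count check can be automated against npm's public downloads API (api.npmjs.org/downloads/point/last-month/&lt;name&gt;). The threshold below is an arbitrary example, not a recommended value:

```python
import json
import urllib.request

# Illustrative threshold: tune to your project's risk tolerance.
MIN_MONTHLY_DOWNLOADS = 10_000


def monthly_downloads(name: str) -> int:
    """Fetch last-month download count from npm's downloads API."""
    url = f"https://api.npmjs.org/downloads/point/last-month/{name}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp).get("downloads", 0)


def vet_package(name: str, downloads_fn=monthly_downloads) -> str:
    """Flag low-traffic packages, a common sign of freshly squatted names."""
    count = downloads_fn(name)
    if count < MIN_MONTHLY_DOWNLOADS:
        return f"flag for review: {name} has {count} downloads/month"
    return "ok"
```

A freshly registered slopsquatted package will almost always show near-zero downloads, which is exactly what this check surfaces. Publisher verification and code review still need a human.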

Never trust an agent's confidence as evidence that something exists.

Fazm is an open source macOS AI agent, available on GitHub.
