Manus AI Vercel Key Authentication Leak in My Computer Mode

Matthew Diakonov · 8 min read


When Manus AI launched their "My Computer" mode, it gave desktop agents direct access to local files, environment variables, and developer toolchains. That access came with a predictable consequence: during execution, the agent surfaced Vercel API authentication keys stored in local configuration files and environment variables.

The issue highlighted a fundamental tension in desktop AI agents. The more access you give them, the more sensitive material they can read, transmit, or accidentally include in outputs. Vercel keys are a particularly visible example because they are present in nearly every Next.js developer's local environment, sitting in .env.local files or global Vercel CLI config.

What Happened

Manus AI's "My Computer" mode operates by scanning the local filesystem to understand project context. During this scan, the agent read Vercel authentication tokens from standard locations:

| Location | Token Type | Risk Level |
|----------|-----------|------------|
| ~/.vercel/auth.json | Global CLI auth token | Critical |
| .env.local | Project-scoped deploy tokens | High |
| .vercel/project.json | Project ID and org ID | Medium |
| ~/.zshrc / ~/.bashrc | Exported VERCEL_TOKEN | Critical |
| .env.production | Production deploy keys | Critical |

The agent then included fragments of these credentials in its context window, reasoning traces, and in some cases, output responses visible to the user. Because Manus routes heavy reasoning to cloud endpoints, any credential material in the context window travels off-device to Manus servers.

Why This Matters for All Desktop Agents

This is not exclusively a Manus problem. Any AI agent with filesystem access faces the same challenge. The Vercel key leak is a specific instance of a broader pattern: AI agents that operate at the OS level inevitably encounter credentials.

Developer machines are credential-rich environments. A typical workstation might have SSH keys in ~/.ssh/, AWS credentials in ~/.aws/credentials, Google Cloud service accounts in ~/.config/gcloud/, Docker registry tokens, npm auth tokens in ~/.npmrc, and dozens of environment variables containing API keys. Give an agent fs.readFile permissions and it can read all of them.

The question is not whether desktop agents will encounter secrets. They will. The question is what the agent does with them once found.

<svg viewBox="0 0 800 400" xmlns="http://www.w3.org/2000/svg">
  <rect width="800" height="400" fill="#0f172a" rx="8"/>
  <text x="400" y="35" text-anchor="middle" fill="#2dd4bf" font-size="16" font-weight="bold">Desktop Agent Credential Exposure Flow</text>

  <!-- Local Machine -->
  <rect x="30" y="60" width="200" height="300" rx="6" fill="#1e293b" stroke="#334155" stroke-width="1.5"/>
  <text x="130" y="85" text-anchor="middle" fill="#e2e8f0" font-size="13" font-weight="bold">Local Machine</text>
  <rect x="50" y="100" width="160" height="30" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="130" y="120" text-anchor="middle" fill="#fca5a5" font-size="11">.env.local (API keys)</text>
  <rect x="50" y="140" width="160" height="30" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="130" y="160" text-anchor="middle" fill="#fca5a5" font-size="11">~/.vercel/auth.json</text>
  <rect x="50" y="180" width="160" height="30" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="130" y="200" text-anchor="middle" fill="#fca5a5" font-size="11">~/.ssh/id_rsa</text>
  <rect x="50" y="220" width="160" height="30" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="130" y="240" text-anchor="middle" fill="#fca5a5" font-size="11">~/.aws/credentials</text>
  <rect x="50" y="260" width="160" height="30" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="130" y="280" text-anchor="middle" fill="#fca5a5" font-size="11">~/.npmrc (tokens)</text>
  <rect x="50" y="300" width="160" height="30" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="130" y="320" text-anchor="middle" fill="#fca5a5" font-size="11">shell env exports</text>

  <!-- Agent Process -->
  <rect x="290" y="100" width="200" height="220" rx="6" fill="#1e293b" stroke="#2dd4bf" stroke-width="2"/>
  <text x="390" y="125" text-anchor="middle" fill="#2dd4bf" font-size="13" font-weight="bold">AI Agent Process</text>
  <rect x="310" y="145" width="160" height="25" rx="4" fill="#134e4a"/>
  <text x="390" y="163" text-anchor="middle" fill="#99f6e4" font-size="11">File System Scan</text>
  <line x1="390" y1="170" x2="390" y2="185" stroke="#475569" stroke-width="1.5" marker-end="url(#arrow)"/>
  <rect x="310" y="185" width="160" height="25" rx="4" fill="#134e4a"/>
  <text x="390" y="203" text-anchor="middle" fill="#99f6e4" font-size="11">Context Window Build</text>
  <line x1="390" y1="210" x2="390" y2="225" stroke="#475569" stroke-width="1.5"/>
  <rect x="310" y="225" width="160" height="25" rx="4" fill="#7f1d1d" stroke="#dc2626" stroke-width="1"/>
  <text x="390" y="243" text-anchor="middle" fill="#fca5a5" font-size="11">Credentials in Context</text>
  <line x1="390" y1="250" x2="390" y2="265" stroke="#475569" stroke-width="1.5"/>
  <rect x="310" y="265" width="160" height="25" rx="4" fill="#134e4a"/>
  <text x="390" y="283" text-anchor="middle" fill="#99f6e4" font-size="11">Reasoning + Output</text>

  <!-- Cloud -->
  <rect x="560" y="100" width="200" height="140" rx="6" fill="#1e293b" stroke="#f59e0b" stroke-width="1.5"/>
  <text x="660" y="125" text-anchor="middle" fill="#f59e0b" font-size="13" font-weight="bold">Cloud Inference</text>
  <rect x="580" y="145" width="160" height="25" rx="4" fill="#78350f"/>
  <text x="660" y="163" text-anchor="middle" fill="#fde68a" font-size="11">Context + Credentials</text>
  <rect x="580" y="185" width="160" height="25" rx="4" fill="#78350f"/>
  <text x="660" y="203" text-anchor="middle" fill="#fde68a" font-size="11">Logged in Server Memory</text>

  <!-- Arrows -->
  <defs><marker id="arrow" viewBox="0 0 10 10" refX="5" refY="5" markerWidth="6" markerHeight="6" orient="auto"><path d="M 0 0 L 10 5 L 0 10 z" fill="#475569"/></marker></defs>
  <line x1="230" y1="180" x2="288" y2="180" stroke="#dc2626" stroke-width="2" marker-end="url(#arrow)" stroke-dasharray="6,3"/>
  <text x="259" y="172" text-anchor="middle" fill="#dc2626" font-size="9">reads</text>
  <line x1="492" y1="180" x2="558" y2="180" stroke="#f59e0b" stroke-width="2" marker-end="url(#arrow)" stroke-dasharray="6,3"/>
  <text x="525" y="172" text-anchor="middle" fill="#f59e0b" font-size="9">sends</text>

  <!-- Risk Label -->
  <rect x="560" y="270" width="200" height="70" rx="6" fill="#7f1d1d" stroke="#dc2626" stroke-width="1.5"/>
  <text x="660" y="295" text-anchor="middle" fill="#fca5a5" font-size="12" font-weight="bold">Exposure Risk</text>
  <text x="660" y="315" text-anchor="middle" fill="#fca5a5" font-size="10">Keys visible in logs,</text>
  <text x="660" y="330" text-anchor="middle" fill="#fca5a5" font-size="10">outputs, server memory</text>
</svg>

The Authentication Flow That Breaks

Vercel's authentication model uses long-lived tokens stored locally. When you run vercel login, the CLI writes a token to ~/.vercel/auth.json. This token has broad scope: it can deploy, manage domains, read environment variables, and access team resources. There is no granular permission model for CLI tokens.

When an AI agent reads this token, several things can go wrong:

  1. Context transmission: The token enters the agent's context window and gets sent to cloud inference servers
  2. Output leakage: The agent might include token fragments in responses, logs, or generated files
  3. Persistence: Tokens in the context window may persist in server-side conversation logs
  4. Replay risk: Anyone with access to conversation logs could extract and reuse the token

The Vercel CLI token does not expire automatically. Unless the developer explicitly revokes it, a leaked token remains valid indefinitely.

How Credential-Aware Agents Should Work

The solution is not to restrict filesystem access entirely. That would make desktop agents useless for development tasks. Instead, agents need credential awareness built into their file access layer.

| Approach | Effectiveness | Tradeoff |
|----------|--------------|----------|
| Blocklist known credential paths | Medium | Requires maintaining path lists |
| Pattern matching (regex for keys) | High | Can miss novel formats |
| Sandbox with virtual filesystem | Very High | Limits agent capability |
| Credential proxy with redaction | Very High | Adds complexity |
| Local-only inference (no cloud) | Complete | Reduces model capability |

The most robust approach combines pattern matching with a credential proxy. Before any file content enters the agent's context window, a middleware layer scans for known credential patterns and replaces them with placeholder tokens. The agent sees [REDACTED_VERCEL_TOKEN] instead of the actual key. If the agent needs to use the credential for a deployment, the proxy layer handles the substitution at execution time without exposing the raw value.

What Fazm Does Differently

Fazm runs entirely on your machine. The inference, the file access, the credential handling, all of it stays local. There is no cloud endpoint receiving your context window. Your Vercel tokens, AWS keys, and SSH credentials never leave your device.

This is not just a privacy feature. It is an architectural decision that eliminates an entire class of credential exposure risks. When the model runs locally, there is no server-side log containing your secrets. There is no network hop where tokens could be intercepted. The blast radius of a credential appearing in the context window is contained to your own machine.

For developers who deploy through Vercel, this means you can let Fazm read your project configuration, understand your deployment setup, and help with build issues without worrying about your authentication tokens traveling to a third-party server.

Protecting Your Credentials from Any AI Agent

Regardless of which AI agent you use, these practices reduce credential exposure:

  1. Use scoped tokens: Create deployment-specific tokens with minimal permissions rather than using your global Vercel token
  2. Rotate regularly: Set calendar reminders to rotate Vercel tokens monthly
  3. Use .agentignore files: Some agents respect ignore files similar to .gitignore for excluding sensitive paths
  4. Audit agent permissions: Review what filesystem paths your agent can access and restrict to project directories
  5. Monitor token usage: Vercel's dashboard shows when tokens were last used, so check for unexpected activity
  6. Environment isolation: Use tools like direnv to scope environment variables to specific project directories rather than exporting them globally

The Manus "My Computer" incident is a useful reminder that desktop AI agents operate at a privilege level that previous developer tools did not. A VS Code extension runs in a sandbox. A CLI tool reads what you pipe to it. An AI agent with filesystem access reads everything it can reach, and it might send that data somewhere you did not expect.
