How to Keep Your .env Files Safe from AI Coding Agents

Matthew Diakonov

If you use Claude Code or any AI coding agent, your .env file is one cat .env away from being read into the context window. Once it is in the context, your API keys, database passwords, and secrets are part of the conversation, stored on Anthropic's servers for the duration of the session.

This is not hypothetical. AI agents routinely read files to understand your project. A helpful agent debugging a database connection issue will naturally want to check the connection string. Most do not have the judgment to distinguish "this is sensitive" from "this is useful context."

In 2025, PromptArmor demonstrated something worse: a poisoned web page can manipulate an AI agent via indirect prompt injection, causing it to bypass .gitignore protections and exfiltrate .env credentials using terminal commands. The agent follows instructions embedded in content it reads, not just instructions from the user.

Layer 1: .claudeignore

The first and simplest protection is a .claudeignore file in your project root. It tells Claude Code never to read the listed files, even when a task makes them look relevant:

# Secrets and credentials
.env
.env.*
.env.local
.env.production
.env.staging

# Keys and certificates
*.pem
*.key
*.p12
*.pfx
id_rsa
id_ed25519

# Service credentials
credentials.json
service-account.json
gcloud-credentials.json
firebase-adminsdk-*.json
.aws/credentials

# Token files
token*.json
*_token.json
.netrc

Claude Code respects .claudeignore at the file read level - the agent cannot access the contents even if it constructs a command to read the file. The agent can still know the file exists (useful for debugging "why isn't the connection working?") but cannot read the values.

Create this file immediately in every project that has any form of credentials. It takes 30 seconds and eliminates the most common exposure vector.
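To make that step painless across many projects, a small helper can seed the file wherever it is missing. A minimal sketch - the ensure_claudeignore name and the trimmed starter list are illustrative, not part of any tool:

```python
from pathlib import Path

# Trimmed-down starter list; extend it with the full set of patterns above
STARTER = """\
# Secrets and credentials
.env
.env.*

# Keys and certificates
*.pem
*.key
"""

def ensure_claudeignore(project_root: str) -> bool:
    """Create a starter .claudeignore if the project lacks one."""
    path = Path(project_root) / ".claudeignore"
    if path.exists():
        return False  # never clobber an existing, possibly customized file
    path.write_text(STARTER)
    return True
```

Running this over a directory of projects turns the 30-second manual step into a one-liner, and the existence check means it is safe to run repeatedly.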

Layer 2: macOS Keychain for Desktop Agents

For desktop automation agents like Fazm that run with broad system access, environment variables in .env files are the wrong storage primitive entirely. The macOS Keychain provides a better architecture: secrets are stored in hardware-backed encrypted storage, access requires user authorization, and the secret never touches the filesystem.

# Store a secret in Keychain
# (-T "" denies all apps silent access; -U updates an existing item)
security add-generic-password \
    -s "openai-api-key" \
    -a "fazm" \
    -w "sk-proj-..." \
    -T "" \
    -U

# Retrieve it programmatically
security find-generic-password \
    -s "openai-api-key" \
    -a "fazm" \
    -w

The agent requests a specific secret by name. The secret goes from Keychain directly into memory, gets used for the API call, and is discarded. It never touches the filesystem. It never appears in the context window (unless you explicitly pass it there, which you should not do).

From your agent code:

import subprocess

def get_secret(service: str, account: str = "fazm") -> str:
    """Retrieve a secret from macOS Keychain."""
    result = subprocess.run(
        ["security", "find-generic-password",
         "-s", service, "-a", account, "-w"],
        capture_output=True,
        text=True
    )
    if result.returncode != 0:
        raise ValueError(f"Secret not found: {service}")
    return result.stdout.strip()

# Usage - the key never appears in code or config
from openai import OpenAI

api_key = get_secret("openai-api-key")
client = OpenAI(api_key=api_key)

The critical point: the api_key variable exists in memory only for the duration of the call and goes out of scope afterward. It is never logged, never written to disk, and never appears in any file the agent could read.

Layer 3: The Local Proxy Pattern

For more complex setups - especially when you want the agent to call external APIs without ever seeing the credentials - the local proxy pattern is cleanest:

# local-secrets-proxy.py
# Runs on localhost:8765, intercepts API calls and injects credentials

from http.server import HTTPServer, BaseHTTPRequestHandler
import subprocess
import httpx

def get_from_keychain(service):
    result = subprocess.run(
        ["security", "find-generic-password", "-s", service, "-w"],
        capture_output=True, text=True
    )
    return result.stdout.strip()

class SecretProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        # The agent posts to localhost:8765/v1/... and the proxy
        # forwards the same path to api.openai.com with the real key
        content_length = int(self.headers['Content-Length'])
        body = self.rfile.read(content_length)

        api_key = get_from_keychain("openai-api-key")
        response = httpx.post(
            "https://api.openai.com" + self.path,
            content=body,
            headers={"Authorization": f"Bearer {api_key}",
                    "Content-Type": "application/json"}
        )

        self.send_response(response.status_code)
        self.send_header("Content-Type",
                         response.headers.get("content-type", "application/json"))
        self.end_headers()
        self.wfile.write(response.content)

if __name__ == "__main__":
    # Bind to localhost only so nothing off the machine can reach the proxy
    HTTPServer(("127.0.0.1", 8765), SecretProxy).serve_forever()

The agent never sees the API key. It sends requests to the local proxy, which retrieves the real credential from Keychain and injects it into the outbound request. The API response comes back through the proxy to the agent.

This pattern completely eliminates the credential from the agent's context, even if the agent is specifically trying to find it.
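On the agent's side, requests simply target the proxy and omit the Authorization header. A sketch assuming the proxy above is listening on port 8765 and forwarding paths unchanged (the function names are illustrative):

```python
# Agent-side sketch: talk to the local proxy, never to api.openai.com.
PROXY = "http://localhost:8765"

def proxy_url(path: str) -> str:
    """Rewrite an OpenAI API path to target the local proxy."""
    return PROXY + path

def chat_via_proxy(payload: dict):
    import httpx  # same client library the proxy itself uses
    # No Authorization header here - the proxy injects the real key
    return httpx.post(proxy_url("/v1/chat/completions"), json=payload)
```

Because the agent's code contains no credential and no credential-bearing config, there is nothing for it to leak even under prompt injection.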

Layer 4: Automatic Secret Rotation

For production agent deployments, static credentials are a liability. A credential that gets exposed (via a log file, a context window leak, or an indirect prompt injection) needs to stop working immediately.

AWS Secrets Manager and HashiCorp Vault both support automatic rotation - the secret value changes every 1-2 hours, and any exposed credential becomes invalid within that window. The rotation is transparent to your application as long as you retrieve the credential fresh for each session rather than caching it at startup.

import boto3

def get_rotating_secret(secret_name: str) -> str:
    """Retrieve a rotating secret from AWS Secrets Manager."""
    client = boto3.client("secretsmanager", region_name="us-east-1")
    response = client.get_secret_value(SecretId=secret_name)
    # Note: SecretString is a JSON blob if the secret was stored as key/value pairs
    return response["SecretString"]

# Call this fresh each session - not at module load time
api_key = get_rotating_secret("prod/openai-api-key")

For macOS desktop agents, automatic rotation is usually overkill. The Keychain plus .claudeignore approach covers the realistic threat model. For server-side agents calling APIs on behalf of multiple users, rotation is worth the operational complexity.

The Practical Checklist

If you are building or running AI agents today, run through these steps:

  1. Create .claudeignore in every project with credentials. Add it to your project template.
  2. Move sensitive credentials from .env to macOS Keychain for desktop agents.
  3. Ensure .env files are in .gitignore (they should be, but check).
  4. For agents that make outbound API calls: use the local proxy pattern so credentials never enter the context window.
  5. For production deployments: evaluate a secrets manager with rotation.
  6. Audit your existing projects - check if any .env files are committed to your git history. If they are, rotate those credentials immediately.
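For step 6, git itself can produce the audit: list every path ever committed on any branch and filter for .env-style names. A minimal sketch (the env_files_in_history helper is illustrative):

```python
import subprocess

def env_files_in_history(repo_path: str = ".") -> set[str]:
    """List every .env-style path that was ever committed, on any branch."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--all", "--name-only",
         "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line for line in out.splitlines()
            if line.split("/")[-1].startswith(".env")}
```

Any hit means the credential is in history even if the file was later deleted - rotate it immediately, and rewrite history if the repository is shared.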

The principle is simple: secrets should never be in a file that an AI agent can read. The implementation requires a few layers, but each one is straightforward.


Fazm uses macOS Keychain for all secret management. Open source on GitHub. Discussed in r/ClaudeAI.
