Security
17 articles about security.
Why Your AI Agent Needs a Firewall - And Why It Should Be Open Source
AI coding agents access your file system, network, and APIs. An open-source firewall lets you audit exactly what the agent can do. Transparency beats trust.
Privacy Controls Are the Real Story in AI Agent Frameworks
Most agent frameworks let the model do whatever it wants. Privacy-first agents run everything locally, never send screen data to the cloud, and give users explicit control over what the agent can access.
AI Desktop Agent Security Best Practices for Teams and Enterprises
Giving AI agents access to your computer raises real security questions. Here are the best practices for deploying desktop agents safely - from permission models to data governance.
The Asymmetric Trust Problem - When Your AI Agent Has More Access Than You Intended
Accessibility APIs were designed for screen readers and expose everything on screen. When you grant an AI agent accessibility permissions, it gets far more access than you probably realized.
Blast Radius - What Happens When Your AI Agent Gets Compromised
MCP servers can limit blast radius by design: UI-only access, no shell, no filesystem. But in practice, restricted and unrestricted tools often run in the same session. Here is how to assess the real risk.
Bypass Permissions vs Allowlists - Finding the Middle Ground for AI Agents
Full permission bypass is reckless and full approval mode is unusable. The middle ground with allowlists is where AI agent permissions actually work.
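The middle ground described above can be sketched as a token-based prefix allowlist: allowlisted commands run automatically, everything else falls back to human approval rather than blanket bypass. The allowlist entries and function names here are illustrative, not from any specific agent framework.

```python
# Illustrative allowlist: read-only commands that are safe to auto-approve.
ALLOWLIST = ["git status", "git diff", "ls"]

def check_command(command: str) -> str:
    """Return 'allow' for allowlisted commands, 'ask' for everything else."""
    tokens = command.strip().split()
    for entry in ALLOWLIST:
        allowed = entry.split()
        # Token-wise prefix match: "git diff HEAD" matches "git diff",
        # but "lsof" does not match "ls".
        if tokens[:len(allowed)] == allowed:
            return "allow"
    return "ask"  # default to human approval, not bypass
```

A real implementation needs more than prefix matching - shell operators like `|`, `;`, and `&&` can smuggle an unapproved command behind an approved prefix - but the shape of the policy is the same.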
Why Community Skill Repos Need Platform-Level Sandboxing
Community skill repos are an open attack vector for AI agents. Platform-level sandboxing and verification are essential to prevent supply chain attacks.
Cron Jobs and Unsupervised Root Access - The Security Risk of Scheduled AI Agents
Why scheduled autonomous AI agent tasks need audit trails, rate limits, and human review. The security implications of launchd agents running unsupervised with system access.
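The controls named above - an audit trail plus a per-run action budget - can be sketched in a few lines. The `AgentRunGuard` name, the default budget of 20 actions, and the log format are all illustrative assumptions.

```python
import json
import time

class AgentRunGuard:
    """Append-only audit trail plus a per-run action budget for a scheduled agent."""

    def __init__(self, max_actions: int = 20, audit_path: str = "agent_audit.log"):
        self.max_actions = max_actions  # illustrative rate limit per scheduled run
        self.audit_path = audit_path
        self.count = 0

    def record(self, action: str) -> bool:
        """Log the action; return False once the run's budget is exhausted."""
        self.count += 1
        entry = {
            "ts": time.time(),
            "action": action,
            "n": self.count,
            "allowed": self.count <= self.max_actions,
        }
        with open(self.audit_path, "a") as f:
            f.write(json.dumps(entry) + "\n")  # one JSON line per action
        return entry["allowed"]
```

The point is that every action a scheduled agent takes gets written down before it happens, and the run stops itself instead of looping unsupervised.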
Using macOS Keychain for AI Agent Credential Access
Store passwords in macOS Keychain for your AI agent instead of .env files. It is more secure, centralized, and eliminates token pasting across sessions.
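A minimal sketch of reading a secret from macOS Keychain via the built-in `security` CLI instead of a .env file; the service and account names are placeholders, and the wrapper function names are my own.

```python
import subprocess

def keychain_cmd(service: str, account: str) -> list[str]:
    # -w prints only the password to stdout
    return ["security", "find-generic-password",
            "-s", service, "-a", account, "-w"]

def get_secret(service: str, account: str) -> str:
    """Fetch a generic password from the macOS Keychain (macOS only)."""
    result = subprocess.run(keychain_cmd(service, account),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip()
```

Secrets get into the Keychain the same way: `security add-generic-password -s <service> -a <account> -w <secret>`. The agent then fetches tokens at runtime rather than reading them from a plaintext file it could also leak.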
MEMORY.md as an Injection Vector - The Security Risk of Implicitly Trusted Config Files
CLAUDE.md and MEMORY.md files are loaded every session and trusted implicitly by AI agents. This makes them a potential prompt injection vector that most setups do not protect against.
Your AI Agent Shouldn't Send Screen Recordings to the Cloud
Some agents capture your screen and send it to cloud servers for processing. Local agents process everything on device - your data never leaves your machine.
Why Self-Hosting AI Matters: Your Agent Sees Your Emails, Documents, and Browsing History
AI agents interact with your most sensitive data - emails, documents, browsing history. Self-hosting with local LLMs keeps that data on your machine where it belongs.
The Auth Problem for AI Agents - OAuth, Rate Limiting, and Dry Run Modes
AI agents face unique authentication challenges: automating OAuth browser flows, managing rate limits across multiple instances, and testing with dry run modes.
Why Local-First AI Agents Are the Future (And Why It Matters for Your Privacy)
AI agents that control your computer need access to everything on your screen. Here is why the question of where that data gets processed - locally or in the cloud - matters more than any other.
How to Keep Your .env Files Safe from AI Coding Agents
AI agents that can read and edit files will eventually stumble into your secrets. Here is how to use .claudeignore and macOS Keychain to keep API keys out of the context window.
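The ignore-file approach can be sketched with gitignore-style pattern matching: files matching a secret pattern are excluded from the agent's readable set. Whether .claudeignore uses exactly these semantics is an assumption, and the pattern list is illustrative.

```python
import fnmatch
import os

# Illustrative patterns for files that should never enter the context window.
IGNORE_PATTERNS = [".env", ".env.*", "*.pem", "secrets/*"]

def is_ignored(path: str) -> bool:
    """Return True if a path matches any ignore pattern (path or basename)."""
    name = os.path.basename(path)
    return any(
        fnmatch.fnmatch(path, pattern) or fnmatch.fnmatch(name, pattern)
        for pattern in IGNORE_PATTERNS
    )
```

Matching on the basename as well as the full path is what catches a `.env` sitting in a subdirectory, which a naive full-path match would miss.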
AI Agent Permissions - Why Local Agents Do Not Have the Cloud Permission Problem
Cloud AI agents like Cowork need folder-level access grants that linger after tasks complete. Local agents that use accessibility APIs avoid this entirely.
Prompt Injection and AI Agents - Why Browser-Based Agents Have a Bigger Attack Surface
AI agents that run inside the browser inherit whatever the page feeds them, including injection payloads. Native agents that interact from outside have a smaller attack surface.