Security
54 articles about security.
Manus AI Vercel Key Authentication Leak in My Computer Mode
Manus AI's My Computer mode exposed Vercel API keys during local agent execution. Why credential isolation matters for desktop AI agents and how to prevent authentication leaks.
Third-Party Apps: What They Are, How Permissions Work, and Security Risks
A complete guide to third-party apps covering what they are, how they access your data through OAuth and APIs, common security risks, and how to audit and manage permissions across platforms.
AI Agent Blast Radius: What It Is and How to Measure It
AI agent blast radius defines the maximum damage an agent can cause in a single failure. Learn how to measure, categorize, and reduce blast radius across desktop, cloud, and code agents.
AI Agent Trust Management: A Practical Framework for Production Systems
How to manage trust in AI agents across their lifecycle, from initial deployment with minimal permissions to earning expanded access through verified behavior.
Data > Credentials in Power Automate: Managing Connections, Secrets, and Credential Storage
Learn how Data > Credentials works in Power Automate desktop flows. Covers credential types, secure storage, common errors, and how AI agents handle credentials differently.
How to Limit the Blast Radius of a Compromised AI Agent
Practical techniques to contain damage when an AI agent gets compromised. Covers process isolation, least-privilege tooling, network segmentation, and real
Verified Trust vs Assumed Trust in AI Agents
What is verified trust in the context of AI agents and how does it differ from assumed trust? A breakdown of both models, when each applies, and how to build agents you can actually trust.
12 CVEs Indexed - Dependency Security in AI Agent Toolchains
Transitive dependencies in AI agent toolchains go unaudited. When your agent relies on npm packages, Python libraries, and MCP servers, the attack surface explodes. Here is how to find and fix the vulnerabilities hiding in your dependency tree.
93% No Scope. 0% Revocation.
Most agent integrations request broad permissions with no mechanism for revocation. No scope and no revocation is a terrifying combination.
Adversarial Testing for AI Agent Memory Systems
What happens when you inject false information into an AI agent's memory? Adversarial testing reveals whether your agent can verify its own memories or
AI Agent Confidence Calibration: When Pride Becomes a Security Risk
Overconfident AI agents skip verification and make dangerous assumptions. Learn how to calibrate agent confidence levels to prevent costly mistakes.
Why AI Desktop Agents Need an Execution Authorization Layer
Every OS-level action an AI agent takes should pass through a policy layer first. Hard rules for dangerous operations, heuristics for edge cases.
AI Agent Security in 2026 - Lessons from OpenClaw and Why Architecture Matters
The OpenClaw security crisis showed what happens when AI agents have unchecked access to your system. Here is what went wrong, what the industry learned
Code That Cannot Phone Home - AI Agents for Air-Gapped Systems
Military systems, trading floors, and medical devices cannot use cloud AI APIs. Here is how local screen understanding via AXUIElement and on-device models like MLX enable AI agents in fully air-gapped environments.
Auth Bypass Risks in AI-Generated Code
AI-generated code often has subtle authentication bypass vulnerabilities. Learn where auth middleware bugs hide and how to catch them before they ship.
v2.1.78 Broke bypassPermissions: Skills Are User Content
When bypassPermissions broke, it revealed that .claude/skills/ files are user content, not system files. Agent permission models need to respect this boundary.
HTTP Requests as Unaudited Data Pipelines - When Error Reporting Leaks API Keys
Error reporting tools sending stack traces with API keys embedded. Every HTTP-capable dependency is a potential exfiltration path for sensitive data in AI
Why Local-First AI Agents Are the Future of Desktop Automation
Cloud-based AI agents send your screen data to remote servers. Local-first agents like Fazm keep everything on your Mac. Here is why that matters more than
Local Inference Virtue Signaling
Running inference locally is not just a privacy flex - screenshots should genuinely never leave the machine. The case for local processing of visual data.
Machine-Enforceable Policy
Most AI agent policies rely on the honor system. OS-level sandboxing has gaps. Until policy enforcement is machine-verifiable, agent safety depends on trust
How Do I Make AI Use My Computer Safely?
Use MCP servers with the macOS accessibility API to let AI control your computer safely, with proper permission boundaries and audit trails.
Nobody Asks Where MCP Servers Get Their Data
MCP servers give AI agents powerful desktop automation capabilities. But the security trust surface - who controls what your agent accesses - is something
Open Source Desktop Agents vs Closed Source - The Trust Problem
When an AI agent has full access to your desktop, open source is not just a preference - it is a trust requirement. You need to verify what the agent can
Why the OpenClaw AI Agent Is a Privacy Nightmare
Cloud-based desktop agents with open ports create massive privacy risks. Local agents with no exposed ports are private by design.
Prompt Injection Through Tool Results: The Hidden Attack Vector
How tool results become prompt injection vectors for AI agents, and why system prompts are your best defense against malicious content in API responses.
Safety Problems at the Execution Layer - Not in the Prompt
82% of MCP implementations have path traversal vulnerabilities. Real AI agent safety failures happen at execution, not planning. Here is what the CVE data shows and how to build execution-layer guardrails.
The Sandbox Paradox: AI Agents Need Access to Be Useful
AI agents need system access to be useful but restrictions to be safe. The sandbox paradox is the central tension in desktop agent design - here's how to
Small Business and Home Network Setup - Separate VLANs for Everything
How to architect a combined home and small business network with separate VLANs using UniFi or pfSense. Includes VLAN numbering, firewall rules, and where AI agents fit into network automation.
Special Token Injection Attacks on AI Coding Agents
Gaslighting LLMs with special token injection is a real threat to AI coding agents. Learn how these attacks work and how to defend your agent workflows.
Sybil Detection Through Timing Analysis - What Content Analysis Misses
Bot timestamp patterns reveal what content analysis cannot. Timing-based sybil detection catches coordinated inauthentic behavior more reliably than text
Text-to-SQL Safety for AI Agents - Sanitization, Read-Only Access, and Ambiguous Joins
Running text-to-SQL on production databases with AI agents requires input sanitization, read-only access, and careful handling of ambiguous joins across
Trust vs Verify - Why Local Open Source AI Agents Are Easier to Trust
The difference between trusting and verifying an AI agent. Local, open source agents make trust simpler because you can inspect everything.
VPS + Docker for a Personal Desktop Agent Is Over-Engineering - The Security Math
Running a personal AI desktop agent on a VPS with Docker, Nginx, and Cloudflare tunnels adds attack surface without adding capability. Why local-first eliminates the entire security surface area.
When AI Agents Choose Not to Know - Ignorance as a Security Boundary
Deliberate ignorance is an underrated security pattern for AI agents. An agent that never sees a credential cannot leak it. Choosing not to know is a design
Yolo Mode vs Safe Permissions - When to Let Your AI Agent Run Free
Should you skip permission checks in AI agents? It depends on the task. Code agents with git are low risk. Desktop agents touching production systems need
Zelle Fraud Patterns: Social Engineering Meets Instant Money
Zelle fraud exploits instant, irreversible transfers combined with social engineering. Understanding authorization tricks helps build better fraud detection
Zero-Trust Security for AI Agents: When Default Deny Goes Too Far
Zero-trust security models applied to AI agents can make them useless if too aggressive. Learn how to balance security with agent usefulness in production
Why Your AI Agent Needs a Firewall - And Why It Should Be Open Source
AI coding agents access your file system, network, and APIs. An open-source firewall lets you audit exactly what the agent can do. Transparency beats trust.
Privacy Controls Are the Real Story in AI Agent Frameworks
Most agent frameworks let the model do whatever it wants. Privacy-first agents run everything locally, never send screen data to the cloud, and give users
AI Desktop Agent Security Best Practices for Teams and Enterprises
Giving AI agents access to your computer raises real security questions. Here are the best practices for deploying desktop agents safely - from permission
The Asymmetric Trust Problem - When Your AI Agent Has More Access Than You Intended
Granting macOS accessibility permissions to an AI agent gives it access to every text field, password manager value, and bank balance visible on screen. The permission you think you granted is a small subset of what you actually granted.
Blast Radius - What Happens When Your AI Agent Gets Compromised
MCP servers limit blast radius by design with UI-only access, no shell, no filesystem. But in practice, both tools often run in the same session. Here is
Bypass Permissions vs Allowlists - Finding the Middle Ground for AI Agents
Full permission bypass is reckless and full approval mode is unusable. The middle ground with allowlists is where AI agent permissions actually work.
Why Community Skill Repos Need Platform-Level Sandboxing
Community skills repos are an open attack vector for AI agents. Platform-level sandboxing and verification are essential to prevent supply chain attacks.
Cron Jobs and Unsupervised Root Access - The Security Risk of Scheduled AI Agents
Why scheduled autonomous AI agent tasks need audit trails, rate limits, and human review. The security implications of launchd agents running unsupervised
Using macOS Keychain for AI Agent Credential Access
Store passwords in macOS Keychain for your AI agent instead of .env files. It is more secure, centralized, and eliminates token pasting across sessions.
MEMORY.md as an Injection Vector - The Security Risk of Implicitly Trusted Config Files
CLAUDE.md and MEMORY.md files are loaded every session and trusted implicitly by AI agents. This makes them a potential prompt injection vector that most
Your AI Agent Shouldn't Send Screen Recordings to the Cloud
Some agents capture your screen and send it to cloud servers for processing. Local agents process everything on device - your data never leaves your machine.
Why Self-Hosting AI Matters: Your Agent Sees Your Emails, Documents, and Browsing History
AI agents interact with your most sensitive data - emails, documents, browsing history. Self-hosting with local LLMs keeps that data on your machine where
The Auth Problem for AI Agents - OAuth, Rate Limiting, and Dry Run Modes
AI agents face unique authentication challenges: automating OAuth browser flows, managing rate limits across multiple instances, and testing with dry run modes.
Why Local-First AI Agents Are the Future (And Why It Matters for Your Privacy)
AI agents that control your computer need access to everything on your screen. Here is why where that data gets processed - locally or in the cloud - is the
How to Keep Your .env Files Safe from AI Coding Agents
In 2025, PromptArmor showed that poisoned web sources can manipulate AI agents to exfiltrate .env credentials via terminal commands. Here is the multi-layer defense: .claudeignore, keychain proxy, and vault patterns.
AI Agent Permissions - Why Local Agents Do Not Have the Cloud Permission Problem
Cloud AI agents like Cowork need folder-level access grants that linger after tasks complete. Local agents that use accessibility APIs avoid this entirely.
Prompt Injection and AI Agents - Why Browser-Based Agents Have a Bigger Attack Surface
AI agents that run inside the browser inherit whatever the page feeds them, including injection payloads. Native agents that interact from outside have a