Why Community Skill Repos Need Platform-Level Sandboxing
Community skill repositories are one of the most exciting parts of the AI agent ecosystem - and one of the most dangerous. When anyone can publish a skill that an AI agent executes with full system access, you have a supply chain attack waiting to happen.
The Attack Surface
A community skill is just code that runs on your machine. The typical attack looks like this:
- Someone publishes a useful-looking skill - "Auto-organize Downloads" or "Smart Email Sorter"
- The skill does what it claims, so it gets stars and installs
- Buried in the code is a payload that exfiltrates SSH keys, browser cookies, or API tokens
- Because the agent has system-level permissions, the skill inherits those permissions
This is not hypothetical. It is the same pattern that plagues npm, PyPI, and every other open package ecosystem. The difference is that AI agent skills often have broader system access than a typical library.
What Platform-Level Sandboxing Looks Like
The fix cannot be "just review the code." That does not scale. It needs to be built into the platform:
- Permission declarations - skills must declare what they need access to (filesystem, network, clipboard, specific apps)
- Runtime sandboxing - enforce those declarations at the OS level, not just on the honor system
- Audit trails - log every system call a skill makes so anomalies are detectable
- Verified publishers - tie skills to verified identities, not anonymous GitHub accounts
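To make the first two points concrete, here is a minimal sketch of what install-time enforcement of permission declarations could look like. The manifest schema and permission names (`filesystem.read`, `network.outbound`, etc.) are illustrative assumptions, not an existing standard:

```python
# Hypothetical skill permission manifest check, run at install time.
# The manifest schema and permission names below are illustrative only.
import json

# The closed set of permissions the platform knows how to enforce.
KNOWN_PERMISSIONS = {
    "filesystem.read",
    "filesystem.write",
    "network.outbound",
    "clipboard.read",
}

def validate_manifest(manifest_json: str) -> set:
    """Parse a skill manifest and return its declared permissions,
    rejecting anything the platform does not recognize."""
    manifest = json.loads(manifest_json)
    declared = set(manifest.get("permissions", []))
    unknown = declared - KNOWN_PERMISSIONS
    if unknown:
        # A skill asking for something undeclared or unrecognized
        # fails installation instead of silently inheriting access.
        raise ValueError(f"unknown permissions declared: {sorted(unknown)}")
    return declared

manifest = (
    '{"name": "auto-organize-downloads",'
    ' "permissions": ["filesystem.read", "filesystem.write"]}'
)
print(sorted(validate_manifest(manifest)))
```

The key design choice is that the permission set is closed: anything the platform cannot enforce at runtime cannot be declared, so declarations and the sandbox stay in sync.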
The macOS Advantage
macOS already has App Sandbox and TCC (Transparency, Consent, and Control). AI agent platforms should leverage these existing mechanisms rather than building custom sandboxing from scratch. The OS already knows how to restrict file access, network calls, and hardware permissions per process.
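As a sketch of what leaning on the OS looks like, an App Sandbox entitlements file for a skill-host process might declare only the capabilities the skill's manifest requested. The entitlement keys below are real App Sandbox keys; the idea of mapping skill permissions onto them is this post's proposal, not a shipping mechanism:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Opt the process into App Sandbox. -->
    <key>com.apple.security.app-sandbox</key>
    <true/>
    <!-- Granted only if the skill declared network.outbound. -->
    <key>com.apple.security.network.client</key>
    <true/>
    <!-- File access limited to what the user explicitly picks. -->
    <key>com.apple.security.files.user-selected.read-write</key>
    <true/>
</dict>
</plist>
```

Anything not listed is denied by the kernel, so a skill that never declared network access cannot exfiltrate data even if its code tries.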
Build Verification Into the Install Flow
Every skill install should show a clear permissions prompt - similar to what mobile apps do. Users should see exactly what a skill can access before it runs.
Fazm is an open source macOS AI agent, available on GitHub.