v2.1.78 Broke bypassPermissions: Skills Are User Content
The v2.1.78 update broke the bypassPermissions flag in Claude Code, and the fallout revealed something important about how agent permission systems should work. Skills defined in .claude/skills/ stopped bypassing permission prompts, which broke automated workflows that depended on that behavior.
The root issue: the update treated .claude/skills/ as system-managed files rather than user content.
Skills Are User Content
Users write their own skills. They are custom instructions, stored in a user-controlled directory, defining tools and workflows specific to that user's setup. They are not system files that should be locked down - they are personal configuration.
When a permission system treats user-authored content the same as system content, it creates friction without adding security. The user already decided these skills should run. Adding permission prompts on top of that decision is not protecting the user - it is second-guessing them.
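The distinction hinges on where a file lives. A minimal sketch of an origin check, assuming a hypothetical classifier (the function name and layout are illustrative, not Claude Code's internal API): anything under the project's .claude/skills/ directory counts as user-authored content.

```python
from pathlib import Path

def is_user_content(path: Path, project_root: Path) -> bool:
    """Classify a file as user content if it lives under .claude/skills/.

    Hypothetical sketch: real agents may also check ownership,
    provenance, or an explicit allowlist.
    """
    skills_dir = (project_root / ".claude" / "skills").resolve()
    try:
        # relative_to raises ValueError when path is outside skills_dir
        path.resolve().relative_to(skills_dir)
        return True
    except ValueError:
        return False
```

A skill file like .claude/skills/deploy/SKILL.md would classify as user content, while the agent's own bundled files would not.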
The Permission Model Problem
Agent permission systems struggle with the concept of trust delegation. When a user writes a skill and marks it as trusted, that trust should propagate to the skill's execution. Breaking this chain means the user has to re-approve actions they already approved at a higher level.
The fix is straightforward: respect the trust hierarchy. If the user created the skill file and explicitly configured it, its permissions should be honored. System-managed files get system-level restrictions. User-managed files get user-level permissions.
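That hierarchy can be sketched as a single decision function. This is a hypothetical model, not the actual Claude Code implementation; the names and the always-prompt system policy are assumptions for illustration.

```python
from enum import Enum

class Origin(Enum):
    SYSTEM = "system"  # shipped with or managed by the agent
    USER = "user"      # authored by the user, e.g. under .claude/skills/

def should_prompt(origin: Origin, user_approved: bool) -> bool:
    """Decide whether to show a permission prompt before running an action.

    System-managed content follows system policy (modeled here as
    always prompting). User-managed content inherits the user's own
    decision: if the user explicitly configured the skill to bypass
    prompts, that grant is honored rather than second-guessed.
    """
    if origin is Origin.SYSTEM:
        return True
    return not user_approved
```

The v2.1.78 regression amounts to routing Origin.USER through the Origin.SYSTEM branch: the user's prior approval is discarded and every action prompts again.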
Lessons for Agent Developers
If you are building an AI agent with a permission system, think carefully about the boundary between user content and system content. Get it wrong and you either create security holes (treating user content as system-trusted) or create usability problems (treating user content as untrusted).
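The two failure modes can be made concrete as a small mapping from how user content is treated to what breaks. This is purely illustrative shorthand for the argument above, not any real agent's code.

```python
def misclassification_outcome(treatment: str) -> str:
    """What goes wrong when user-authored content gets the wrong trust level.

    'system-trusted': user files run with implicit system trust, so
        anything dropped into the user's directory executes unchecked
        -> security hole.
    'untrusted': user files re-prompt for actions the user already
        approved -> usability problem (the v2.1.78 regression).
    'user-trusted': the user's explicit configuration is honored.
    """
    outcomes = {
        "system-trusted": "security hole",
        "untrusted": "usability problem",
        "user-trusted": "correct",
    }
    return outcomes[treatment]
```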
Fazm is an open source macOS AI agent, available on GitHub.