Cron Jobs and Unsupervised Root Access - The Security Risk of Scheduled AI Agents
Someone shared their setup recently: two launchd agents running on a Mac. One fires every hour to browse Reddit and Hacker News. The other fires every six hours to fetch engagement stats. Both have self-imposed rate limits. The human has not looked at the logs once.
This is increasingly common - and increasingly risky.
The Real Problem Is Not the Automation
Scheduling AI agents to run tasks autonomously is genuinely useful. Fetching stats, monitoring feeds, posting updates - these are perfect use cases for cron-style execution. The problem is that most setups give the agent the same permissions whether a human is watching or not.
A launchd agent running at 3 AM has the same Accessibility API access, the same file system permissions, and the same network access as one running while you watch the screen. But the oversight is zero.
Rate Limits Are Not Enough
Self-imposed rate limits are a good start, but they only prevent volume-based problems. They do not catch:
- An agent posting content that violates platform rules
- Browsing patterns that leak private information
- Actions that make sense individually but create problems in aggregate
- Gradual scope creep where the agent starts doing things outside its original purpose
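One way to catch the last failure mode is to check each action against an explicit allowlist in addition to the volume cap. The sketch below is a hypothetical illustration, not a real API; the action names and the cap are invented for the example.

```python
# Hypothetical sketch: an allowlist check catches scope creep that a
# pure rate limit would miss. Action names and the cap are illustrative.
ALLOWED_ACTIONS = {"fetch_stats", "read_feed"}
MAX_ACTIONS_PER_CYCLE = 20

def review_cycle(actions):
    """Split one scheduled run's actions into permitted and flagged."""
    permitted, flagged = [], []
    for action in actions:
        if action not in ALLOWED_ACTIONS:
            flagged.append(action)          # outside the agent's original purpose
        elif len(permitted) >= MAX_ACTIONS_PER_CYCLE:
            flagged.append(action)          # over the per-cycle volume cap
        else:
            permitted.append(action)
    return permitted, flagged
```

An agent that starts posting comments when it was only meant to fetch stats stays well under any rate limit, but fails the allowlist check immediately.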
What Unsupervised Agents Actually Need
The minimum viable safety setup for any scheduled AI agent includes:
- Immutable audit logs - not just logs the agent writes, but logs it cannot modify or delete
- Action budgets - hard caps on destructive actions per cycle, not just rate limits
- Anomaly alerts - notifications when agent behavior deviates from baseline patterns
- Periodic human review - scheduled reviews of agent decisions, not just outcomes
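Truly immutable logs require storage the agent cannot write to (a separate host, or an append-only service), but hash chaining at least makes tampering detectable. A minimal sketch, assuming nothing beyond the standard library; the field names are invented for the example:

```python
import hashlib
import json
import time

def append_entry(log, action, detail):
    """Append an entry whose hash chains to the previous entry, so any
    later edit or deletion breaks verification."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"ts": time.time(), "action": action, "detail": detail, "prev": prev}
    body = {k: entry[k] for k in ("ts", "action", "detail", "prev")}
    entry["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Return True only if every entry still chains to its predecessor."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("ts", "action", "detail", "prev")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if e["prev"] != prev or e["hash"] != expected:
            return False
        prev = e["hash"]
    return True
```

Shipping each entry's hash to a location the agent cannot reach (email, a second machine) is what turns "detectable" into "auditable."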
The audit trail is only useful if someone actually reads it. An unread log is security theater.
Building Responsible Cron Agents
If you are running scheduled agents on macOS, treat them the way you would treat any privileged daemon. Log to a separate location the agent cannot write to. Set up alerts for unusual activity. And most importantly - actually read the logs, at least weekly.
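An alert for unusual activity can be as simple as comparing each cycle's action count to a baseline of recent cycles. A minimal sketch; the threshold and the minimum-history requirement are illustrative assumptions, not tuned values:

```python
# Hypothetical sketch: flag a cycle whose action count deviates sharply
# from the recent baseline. Threshold and history size are illustrative.
from statistics import mean, stdev

def is_anomalous(history, current, threshold=3.0):
    """history: action counts from past cycles; current: this cycle's count.
    Returns True when current is more than `threshold` standard deviations
    from the historical mean."""
    if len(history) < 5:        # not enough data for a baseline yet
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu    # perfectly steady baseline: any change is unusual
    return abs(current - mu) > threshold * sigma
```

A z-score check like this will not catch an agent that misbehaves at a normal volume, which is exactly why it belongs alongside the allowlist and audit log rather than in place of them.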
The convenience of unsupervised automation is real. But so is the risk of an agent quietly doing the wrong thing for weeks before anyone notices.
Fazm is an open-source macOS AI agent, available on GitHub.