Running an AI Desktop Agent 24/7 on a Mac Mini
A Mac Mini M4 with 48GB of RAM makes a surprisingly capable always-on AI automation box. Ollama runs 32B-parameter models at interactive speeds. Apple Silicon is power-efficient enough to run 24/7 without a noticeable electricity bill. And macOS gives you everything you need for persistent agent execution.
launchd Over cron
The first lesson: use launchd instead of cron for scheduling agent tasks on macOS. launchd:
- Survives reboots automatically
- Restarts the agent if it crashes
- Can be configured to only fire when the previous job finishes (no overlapping agent sessions)
- Integrates with macOS power management
A basic launchd plist that runs your agent hourly (the label and script path are placeholders; XML prologue omitted for brevity):

<plist version="1.0">
<dict>
  <key>Label</key><string>com.example.agent</string>
  <key>ProgramArguments</key><array><string>/usr/local/bin/run-agent.sh</string></array>
  <key>StartInterval</key>
  <integer>3600</integer>
  <key>KeepAlive</key>
  <dict>
    <key>SuccessfulExit</key>
    <false/>
  </dict>
</dict>
</plist>

Save it to ~/Library/LaunchAgents/ and load it with launchctl load.
The KeepAlive configuration restarts the agent if it exits with an error. This is critical for overnight processing where you are not watching.
Context Amnesia
The biggest problem with 24/7 agent execution is context amnesia. Each session starts fresh. The agent does not remember what it processed in the previous run, what errors it encountered, or what state the system is in.
Solutions that work:
- HANDOFF.md files - each session writes what it did and what is pending
- Local state files - track processed items so the agent does not redo work
- Structured logs - append-only logs that the next session can read for context
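The state-file idea can be sketched in a few lines of Python. Everything here is illustrative: the state-file name, the item IDs, and the `mark_processed` helper are assumptions, not part of any particular agent framework.

```python
import json
from pathlib import Path

STATE_FILE = Path("agent_state.json")  # hypothetical state-file name

def load_state():
    """Return the set of item IDs processed by earlier sessions."""
    if STATE_FILE.exists():
        return set(json.loads(STATE_FILE.read_text()))
    return set()

def mark_processed(done, item_id):
    """Record an item so the next session skips it."""
    done.add(item_id)
    STATE_FILE.write_text(json.dumps(sorted(done)))

done = load_state()
for item in ["a.pdf", "b.pdf", "a.pdf"]:  # stand-in for the real work queue
    if item in done:
        continue  # already handled in a previous run
    # ... actual processing goes here ...
    mark_processed(done, item)
```

Writing the state file after every item, rather than once at the end, means a crash mid-batch loses at most the item in flight; the restarted session picks up exactly where the last one stopped.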
Overnight Batch Processing
The best use case for an always-on Mac Mini agent: overnight processing that would take too long during your workday.
- File organization across large directories
- Batch document processing and formatting
- Data extraction from accumulated emails
- Accessibility tree analysis of complex applications
- Local model inference on queued tasks
Start the batch before bed. Check the results in the morning.
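A minimal sketch of such an overnight queue runner, assuming a JSONL task file and an append-only results log (the file names and task schema are invented for illustration):

```python
import json
import time
from pathlib import Path

QUEUE = Path("queue.jsonl")      # hypothetical: one JSON task per line
RESULTS = Path("results.jsonl")  # append-only log the next session can read

def run_batch(infer):
    """Run every queued task through `infer`, appending one result line per task."""
    completed = 0
    if not QUEUE.exists():
        return completed
    with RESULTS.open("a") as log:
        for line in QUEUE.read_text().splitlines():
            task = json.loads(line)
            result = infer(task["prompt"])
            log.write(json.dumps({"id": task["id"], "result": result,
                                  "ts": time.time()}) + "\n")
            completed += 1
    return completed

# In a real overnight run, `infer` would call a local model (for example
# Ollama's HTTP API on localhost:11434); a stub keeps the sketch runnable.
QUEUE.write_text('{"id": 1, "prompt": "summarize inbox"}\n')
run_batch(lambda p: "processed: " + p)
```

Because the results log is append-only, the morning-after session (or you) can read it top to bottom to reconstruct exactly what happened overnight, which also doubles as the context handoff described earlier.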
Fazm runs on a Mac Mini for 24/7 desktop automation. It is open source on GitHub and discussed in r/ClaudeCode and r/LocalLLaMA.