Proactive AI Agents That Help Without Being Asked
Most AI agents wait for a command. You type a request, the agent executes it. That reactive model is useful but leaves a lot of value on the table.
The most powerful automation is the kind that does not require you to think about it. Your disk fills up - the agent cleans it. A CI build fails - the agent reads the error log and opens a fix PR. A meeting invite conflicts with a focus block - the agent drafts a reschedule. You never had to ask, and you might not even have noticed the problem before it was already handled.
This is the proactive agent pattern, and it is increasingly the direction the industry is moving. Salesforce's 2026 AI trends report highlighted event-driven architectures as the defining shift for the year - agents moving from user-initiated tools to systems that respond to real-world triggers automatically.
The Trigger Is the Hard Part
Reactive agents are architecturally simple. You send input, the model responds.
Proactive agents require monitoring infrastructure. The agent needs a continuous feed of signals to act on - file system events, log streams, API webhooks, calendar changes, process health metrics. Without a signal source, there is nothing to trigger proactive behavior.
The good news is that macOS and most modern systems provide exactly these signals. Here are concrete implementations for four common proactive triggers:
File system watcher (using Swift FileSystemWatcher):
let watcher = DirectoryWatcher(path: "/Users/matt/Desktop")
watcher.onChange = { event in
    if event.type == .created && event.path.hasSuffix(".png") {
        agent.organizeScreenshot(at: event.path)
    }
}
watcher.start()
Resource monitor (disk space check):
import shutil

def check_disk_space():
    total, used, free = shutil.disk_usage("/")
    free_gb = free / (1024 ** 3)
    if free_gb < 5.0:
        agent.trigger("disk_low", context={"free_gb": free_gb})
Calendar conflict detector:
def check_calendar_conflicts():
    # Assumes events are returned sorted by start time
    events = calendar_api.get_today_events()
    for i, event in enumerate(events[:-1]):
        next_event = events[i + 1]
        if event.end_time > next_event.start_time:
            agent.trigger("calendar_conflict", context={
                "event1": event, "event2": next_event
            })
Git hook trigger (post-CI fail):
#!/bin/bash
# .git/hooks/post-receive -- git passes "<old-rev> <new-rev> <ref>" per pushed ref on stdin
while read oldrev newrev refname; do
    STATUS=$(check_ci_status "$newrev")
    if [ "$STATUS" = "failed" ]; then
        fazm trigger ci_failure --commit "$newrev"
    fi
done
The key property: each trigger is specific and observable. "Disk space under 5GB" is actionable. "Something seems wrong" is not.
The Risk Tier System
Acting without explicit permission is where proactive agents go wrong. The solution is not to avoid acting proactively - it is to match the scope of each action to its risk level.
Three tiers work well in practice:
Tier 1 - Execute immediately, log it: Low-risk, easily reversible actions where the downside of acting is smaller than the downside of not acting.
- Organizing files into existing folders
- Clearing log files older than 30 days
- Moving screenshots from Desktop to a Screenshots folder
- Closing system notifications
Tier 2 - Execute and notify: Actions with moderate scope. The agent acts but tells you what it did in a way you can review later.
- Drafting (not sending) emails
- Creating calendar events
- Applying code formatting
- Committing staged changes with a descriptive message
Tier 3 - Notify and wait for approval: High-impact, hard-to-reverse actions. The agent surfaces the opportunity but takes no action until you confirm.
- Sending emails or messages
- Deploying code
- Deleting files without trash fallback
- Making purchases or financial transactions
- Pushing to a shared git branch
class ProactiveAgent:
    def act(self, action, risk_tier: int):
        if risk_tier == 1:
            # Tier 1: execute silently, keep a log entry for later review
            result = action.execute()
            self.log(action, result)
        elif risk_tier == 2:
            # Tier 2: execute, then tell the user what was done
            result = action.execute()
            self.notify(f"Done: {action.description}", result)
        elif risk_tier == 3:
            # Tier 3: surface the proposal and block until the user approves
            self.notify_pending(action.description)
            approval = self.wait_for_approval(action.id)
            if approval:
                result = action.execute()
Avoiding the Annoyance Trap
Proactive agents that notify too frequently become the problem they were meant to solve. If your agent sends you a notification every time it does something, you will start ignoring all of them - including the important ones.
The discipline is: Tier 1 actions should be invisible (you learn about them by looking at logs occasionally, not through notifications). Tier 2 actions should be visible but scannable in bulk. Tier 3 approvals should be rare - reserved for actions with real stakes.
A well-configured proactive agent should interrupt you fewer times per day than a poorly configured reactive agent. If your agent is generating more attention overhead than it saves, the trigger thresholds or notification policy needs recalibration.
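One way to keep Tier 2 output scannable in bulk is to batch notices into a periodic digest instead of pinging once per action. A minimal Python sketch - NotificationPolicy and the send_digest notifier are illustrative names, not part of any existing API:
from collections import deque
from datetime import datetime

class NotificationPolicy:
    """Batch Tier 2 notices into one digest instead of pinging per action."""

    def __init__(self, digest_size=10):
        self.pending = deque()
        self.digest_size = digest_size

    def record(self, description, result):
        # Queue the notice; nothing is shown to the user yet
        self.pending.append((datetime.now(), description, result))
        if len(self.pending) >= self.digest_size:
            self.flush()

    def flush(self):
        # Deliver one scannable summary instead of N separate notifications
        if not self.pending:
            return
        lines = [f"{ts:%H:%M}  {desc}" for ts, desc, _ in self.pending]
        send_digest("Agent activity", "\n".join(lines))  # hypothetical notifier
        self.pending.clear()
The same queue could just as easily be flushed on a timer rather than a count threshold, depending on how often you actually look at notifications.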
The Trust Gradient
Start narrow. Begin with a single Tier 1 trigger that has obvious value and minimal risk - organizing screenshots into a folder, or archiving temp files older than 7 days.
Watch it run for a week. Did it do what you expected? Did it do anything surprising? Correct the edge cases.
After a week of correct behavior on Tier 1, add a Tier 2 trigger. After Tier 2 works reliably, consider the first Tier 3.
This is not just a safety practice - it is the most efficient path to a useful proactive agent. Proactive actions that break trust set the whole project back. Proactive actions that consistently work as expected build confidence faster than any other approach.
Teams at IBM and Salesforce building enterprise agentic systems in 2026 describe the same pattern: start with the lowest-stakes monitoring and expand scope only after each tier has earned trust through consistent behavior. The human-in-the-loop is not a limitation of current AI - it is the mechanism by which trust accumulates and scope expands responsibly.
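If you want the promotion decision to be more than a gut call, a small tracker that records corrections per trigger makes the "week of correct behavior" rule explicit. A minimal sketch, assuming the surrounding agent calls record_run after each proactive action; TrustTracker and its methods are hypothetical, not an existing API:
from datetime import date, timedelta

class TrustTracker:
    """Track per-trigger behavior and gate promotion to the next risk tier."""

    def __init__(self, probation_days=7):
        self.probation = timedelta(days=probation_days)
        self.clean_since = {}  # trigger name -> start of the current clean streak

    def record_run(self, trigger, needed_correction):
        self.clean_since.setdefault(trigger, date.today())
        if needed_correction:
            # Any surprising action resets the probation clock for this trigger
            self.clean_since[trigger] = date.today()

    def ready_to_promote(self, trigger):
        # Promote only after a full probation window with zero corrections
        started = self.clean_since.get(trigger)
        return started is not None and date.today() - started >= self.probation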
macOS-Specific Implementation Notes
On macOS, launchd is the natural scheduling layer for proactive agents. It starts processes, restarts them on failure, and runs on schedules without any app needing to stay open:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<!-- ~/Library/LaunchAgents/ai.fazm.monitor.plist -->
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>ai.fazm.monitor</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/local/bin/fazm</string>
        <string>monitor</string>
    </array>
    <key>StartInterval</key>
    <integer>300</integer> <!-- every 5 minutes -->
</dict>
</plist>
FSEvents (via the FileSystemWatcher wrapper or native Swift APIs) handles file triggers without polling. Calendar events via EventKit give you real-time conflict detection. For log monitoring, NSFileHandle on a log file works for simple cases; for production use, a dedicated log aggregation pipeline is more reliable.
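For the simple log case, the core loop is just "seek to the end, read new lines, fire a trigger on matches." A language-agnostic polling sketch in Python (not the NSFileHandle approach itself); tail_log is a hypothetical helper and deliberately ignores log rotation:
import time

def tail_log(path, on_match, pattern="ERROR", poll_interval=2.0):
    """Poll a log file and fire a callback for new lines containing the pattern."""
    with open(path, "r") as f:
        f.seek(0, 2)  # start at the current end; react only to new lines
        while True:
            line = f.readline()
            if not line:
                time.sleep(poll_interval)  # nothing new yet; wait and retry
                continue
            if pattern in line:
                on_match(line.rstrip())

# Example: tail_log("/var/log/build.log",
#                   lambda l: agent.trigger("log_error", context={"line": l}))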
Proactive agents are not about removing human control. They are about handling the obvious, low-stakes, repetitive interventions so that human attention is available for the decisions that actually need it.
Fazm is an open source macOS AI agent, available on GitHub.