The Problem with Logs Written by the System They Audit

Fazm Team · 3 min read

There is a fundamental problem with letting any system write its own audit trail. The same entity that might fail, lie, or hallucinate is the one recording whether it failed, lied, or hallucinated.

This is not an AI-specific problem. It is an auditing principle as old as accounting: the person who writes the checks should not be the person who reconciles the bank statement. Separation of concerns is not just a software pattern - it is a trust pattern.

The Self-Reporting Trap

AI agents typically log their own actions. The agent decides to click a button, attempts the click, and then logs "clicked button successfully." But what if the click failed silently? What if the button was not there? The log still says success because the log is written by the same process that performed the action.

This creates a false sense of observability. You have logs. You have timestamps. You have detailed records of every action. And none of it can be fully trusted because it is all self-reported.
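The trap can be sketched in a few lines. All names here (`perform_click`, `agent_step`, the log list) are illustrative stand-ins, not Fazm's actual API:

```python
# Sketch of the self-reporting trap: the same function that performs the
# action also writes the log entry, so the entry records intent, not a
# verified outcome.

def perform_click(button_id: str) -> None:
    # Imagine this fails silently: the button is missing, or the click
    # event never reaches the app. No exception is raised.
    pass

def agent_step(button_id: str, log: list[str]) -> None:
    perform_click(button_id)
    # Nothing threw, so the agent reports success -- but nothing was
    # checked either. This line is the trap.
    log.append(f"clicked {button_id} successfully")

log: list[str] = []
agent_step("submit-button", log)
print(log)  # claims success even though nothing happened
```

The log entry is syntactically indistinguishable from a genuine success, which is exactly why it creates false observability.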

External Verification Beats Self-Reporting

Git is the gold standard for external verification of agent work because:

  • It records outcomes, not intentions - A commit exists or it does not. The diff shows actual changes, not planned changes.
  • It has integrity - Commit hashes are cryptographic, and each commit hashes its parent, so you cannot retroactively modify history without detection.
  • It is written by the system, not the model - Git records what the filesystem contains, independent of what the agent claims it did.
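These properties make git directly queryable as an audit source. A minimal sketch, assuming a local repository and the standard `git` CLI on PATH; `verify_commit` and `commit_changed_files` are hypothetical helper names:

```python
# Verify an agent's claimed work against git instead of its own log.
import subprocess

def git(*args: str, cwd: str) -> str:
    """Run a git command in the given repository and return its output."""
    return subprocess.run(
        ["git", *args], cwd=cwd, capture_output=True, text=True, check=True
    ).stdout.strip()

def verify_commit(repo: str, claimed_sha: str) -> bool:
    """True only if the claimed commit object actually exists."""
    try:
        # `rev-parse --verify` exits non-zero unless the object exists.
        git("rev-parse", "--verify", f"{claimed_sha}^{{commit}}", cwd=repo)
        return True
    except subprocess.CalledProcessError:
        return False

def commit_changed_files(repo: str, sha: str) -> list[str]:
    """Files the commit actually touched -- outcomes, not intentions."""
    out = git("show", "--name-only", "--format=", sha, cwd=repo)
    return [line for line in out.splitlines() if line]
```

Compare the agent's claim ("I edited `auth.py`") against `commit_changed_files`; any mismatch means the self-report was wrong.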

Building an External Audit Layer

Beyond git, build verification that reads from systems the agent does not control:

  • File system state - Does the file actually contain what the agent says it wrote?
  • Application state - Did the UI actually change after the agent's action?
  • Network state - Did the API call actually return what the agent reported?

Each of these is an independent check that does not rely on the agent's self-report.
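The file-system check is the simplest of the three to build: read the file back from disk and compare it against what the agent claims it wrote. A minimal sketch, assuming the claim includes the exact expected content; `verify_file_write` is a hypothetical helper:

```python
# Independent file-system check: trust the disk, not the agent's log.
import hashlib
from pathlib import Path

def verify_file_write(path: str, expected_content: str) -> bool:
    """Read the file back and compare it against the agent's claim."""
    p = Path(path)
    if not p.exists():
        return False  # the "success" log entry was wrong
    # Comparing hashes keeps the audit record compact and makes
    # mismatches cheap to detect and log.
    actual_digest = hashlib.sha256(p.read_bytes()).hexdigest()
    expected_digest = hashlib.sha256(expected_content.encode()).hexdigest()
    return actual_digest == expected_digest
```

The same read-back-and-compare pattern generalizes: query the UI's accessibility tree after a click, or replay a GET against the API the agent claims it called.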

The Principle

Never trust a log written by the system it audits. Always verify against an external source. Git is the easiest and most reliable external source for code-related work. For everything else, build independent observation.

Fazm is an open source macOS AI agent; the source is available on GitHub.

