Building a Publishing Platform for AI Agents - Why Curation Wins
A Publishing Platform for AI Agents
A Substack for agents is a natural evolution. Agents generate enormous amounts of output - code commits, reports, analyses, summaries. Most of it disappears into log files. The valuable parts deserve a publishing layer.
The Real Problem Is Curation
Building the publishing infrastructure is straightforward: accept markdown, render it, distribute it. The hard part - the part nobody has solved yet - is curation.
When agents can produce unlimited content at near-zero marginal cost, the bottleneck shifts entirely to filtering. Who decides what is worth reading? How do you separate the signal from the noise when the noise is grammatically perfect and topically relevant?
Human-written content had natural filters. Writing is hard, so most people do not do it. The effort required acted as a quality gate. Remove that gate and you need something else to replace it.
What Curation Actually Means
Effective curation for agent-generated content requires three layers:
- Source verification - Did the agent actually do the thing it is writing about? Cross-reference claims against git logs, API calls, and observable outcomes.
- Novelty detection - Is this a genuinely new insight, or a restatement of something that already exists? Run semantic similarity search against existing content.
- Human editorial judgment - Does this matter to anyone? This is the layer that cannot be fully automated.
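The novelty layer above can be sketched in a few lines. This is a minimal illustration, not a production design: the `similarity` function here uses bag-of-words cosine similarity as a crude lexical stand-in for the embedding-based semantic search a real pipeline would use, and the `is_novel` threshold is an arbitrary assumed value.

```python
from collections import Counter
from math import sqrt

def similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors.

    A crude lexical stand-in for embedding-based semantic similarity.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def is_novel(draft: str, corpus: list[str], threshold: float = 0.8) -> bool:
    """Flag a draft as novel if no existing post is too similar.

    The 0.8 threshold is a placeholder; a real system would tune it.
    """
    return all(similarity(draft, post) < threshold for post in corpus)
```

In practice the corpus comparison would run against a vector index rather than a linear scan, but the shape of the decision - reject anything too close to what already exists - is the same.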
The Platform That Wins
The winning platform will not be the one with the best editor or the prettiest themes. It will be the one that builds the best curation pipeline - a combination of automated verification, community signals, and lightweight human review.
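One way to picture that pipeline is as a triage function over the three signals. Everything here is assumed for illustration - the field names, the thresholds, and the three-way routing are hypothetical, not a description of any existing platform:

```python
from dataclasses import dataclass

@dataclass
class Submission:
    verified: bool          # claims cross-checked against execution logs
    novelty: float          # 0..1, from semantic similarity search
    community_score: float  # 0..1, aggregated reader signals

def triage(sub: Submission) -> str:
    """Route a submission: reject, publish, or escalate to a human editor.

    Thresholds are illustrative placeholders.
    """
    if not sub.verified:
        return "reject"        # failed automated fact-checking
    if sub.novelty < 0.3:
        return "reject"        # near-duplicate of existing content
    if sub.novelty > 0.7 and sub.community_score > 0.5:
        return "publish"       # strong automated and community signal
    return "human-review"      # ambiguous cases go to an editor
```

The point of the structure is the last branch: automation handles the clear rejects and clear accepts, so the scarce human editorial attention is spent only on the ambiguous middle.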
Think of it less like Substack and more like a peer-reviewed journal with automated fact-checking against execution logs.
The agents can write. The question is whether anyone should read it. Solving that question is the entire product.
Fazm is an open-source macOS AI agent, available on GitHub.