Identity on Agent Platforms: What "Following" Actually Means Now

Matthew Diakonov

On Twitter, you follow someone because you want to read what they think. The content is filtered through their attention, their voice, their judgment about what is worth saying today.

On an agent-mediated platform, following someone means something different. You are subscribing to the output of their agent - a system they configured, possibly optimized for engagement, possibly running fully autonomously. The person may have reviewed none of what you see.

This is not a hypothetical future scenario. Moltbook, a social network designed specifically for AI agents, launched in early 2025 and attracted 1.6 million autonomous agents in its first week. Disclosure frameworks are already being written for this transition. The FTC issued guidance in 2025 clarifying when AI-generated content requires disclosure in influencer and marketing contexts. Social platforms are adding AI content labels. The question of what "following" means is already being answered - mostly through policy and design decisions, with varying degrees of transparency about the choices being made.

The Recognition Problem

When you see a post from someone you follow, your brain performs a rapid trust assessment: does this sound like them, does the opinion feel authentic, does the voice match what you know of the person?

Agent-mediated content breaks this signal. An agent can post in a consistent, recognizable style - arguably more consistent than any human writer who has bad days, changes their mind, or runs out of things to say. But consistency is not the same as authenticity.

Consider the spectrum of human involvement in a post:

  • Person writes it entirely themselves
  • Person writes a rough draft, agent cleans up the prose
  • Person reviews and approves an agent-written draft
  • Person sees a notification that their agent posted; may or may not read it
  • Agent posts fully autonomously; person checks metrics weekly

From the outside, these are indistinguishable unless the platform discloses the involvement level. Most platforms currently show nothing.
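One way to make the problem concrete is to model the spectrum directly. A minimal sketch in TypeScript - the type names and the five-way breakdown follow the list above, but nothing here is drawn from a real platform's API:

  // The five involvement levels from the list above, ordered from
  // most to least human authorship.
  enum InvolvementLevel {
    HumanWritten,         // person wrote it entirely themselves
    HumanDraftAiPolish,   // rough human draft, agent cleaned up the prose
    AiDraftHumanReview,   // agent-written draft, reviewed and approved
    AiPostHumanNotified,  // agent posted; person saw a notification at most
    FullyAutonomous,      // agent posts on its own; person checks metrics weekly
  }

  // A single boolean flag collapses four distinct levels into one
  // bucket, which is exactly the information the reader needed.
  function naiveAiFlag(level: InvolvementLevel): boolean {
    return level !== InvolvementLevel.HumanWritten;
  }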

What Disclosure Research Shows

The FTC's 2025 guidance on AI persona disclosure established a core principle: disclosure is required when AI involvement could mislead a reasonable consumer about who made the content or whether the endorsement is authentic.

The implementation standard that emerged is more specific: disclosure must be "clear and conspicuous," placed where people will notice it - not buried in a bio or hidden behind a tooltip. "Enhanced with technology" does not count. The disclosure needs to identify the AI's role.

For social platforms, the practical implementations range from an AI badge on posts to a spectrum indicator showing the level of human involvement. Some platforms are experimenting with a four-level system:

  • H: Fully human-written
  • HA: Human-written, AI-assisted (grammar, formatting)
  • AH: AI-written, human-reviewed
  • A: Fully autonomous AI output

This kind of labeling changes how you consume content. An opinion post labeled "A" (fully autonomous) carries different weight than the same post labeled "H" or "AH." You might trust a factual summary from an autonomous agent but want human authorship for an opinion about a contested topic.
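Extending the sketch above, the four labels map onto the five involvement levels in an obvious way. One plausible mapping - collapsing the two autonomous cases into "A" is my assumption, not part of any published scheme:

  type DisclosureLabel = "H" | "HA" | "AH" | "A";

  // Hypothetical mapping from involvement level to disclosure label.
  // The two autonomous cases both collapse into "A": from the reader's
  // side, an unread notification is not human review.
  const labelFor: Record<InvolvementLevel, DisclosureLabel> = {
    [InvolvementLevel.HumanWritten]: "H",
    [InvolvementLevel.HumanDraftAiPolish]: "HA",
    [InvolvementLevel.AiDraftHumanReview]: "AH",
    [InvolvementLevel.AiPostHumanNotified]: "A",
    [InvolvementLevel.FullyAutonomous]: "A",
  };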

The Trust Layer Is Two-Deep

Following on an agent platform is really trusting two things simultaneously.

First: does this person have good judgment? Are their values, knowledge, and perspective things you want more of in your feed?

Second: did they configure their agent well? An agent is only as good as its instructions. A thoughtful person who sets up a lazy agent prompt will produce polished-sounding content that drifts from their actual views over time. A mediocre thinker who writes careful agent instructions will produce output more consistent than their own writing ever was.

These two trust factors are currently invisible on every platform. You follow a person but you are evaluating an agent-person composite, with no way to separate the components.
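To make "no way to separate the components" concrete: here is what a follow relationship would have to track for the separation to even be possible. Purely illustrative - the field names and 0-to-1 scores are invented, and multiplying them is just one crude way to express that the reader only ever sees the blend:

  // Hypothetical: the two trust factors, tracked separately.
  interface AgentPersonComposite {
    personJudgment: number;     // 0-1: track record of the human's own views
    agentConfigQuality: number; // 0-1: how faithfully the agent reflects them
  }

  // What every current platform effectively exposes: one blended value.
  // A sharp person with a lazy prompt and a mediocre thinker with a
  // careful prompt can land on the same number.
  function visibleTrust(c: AgentPersonComposite): number {
    return c.personJudgment * c.agentConfigQuality;
  }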

What Platforms Get Right and Wrong

The naive approach is binary: label AI content or do not. This misses the nuance that matters.

The better approach - which some platforms are beginning to implement - is showing the level of human involvement as a spectrum, not a boolean. A post that a person drafted in their own words and an agent reformatted is meaningfully different from a post the agent generated independently based on a topic setting.

Platforms also need to distinguish between personal accounts and automated accounts. Many X/Twitter accounts today are already substantially automated - scheduled posts, cross-posted content, auto-engagement bots. Agent platforms that lack this distinction will see the same dynamic play out faster and at larger scale.

The 2025 arXiv paper "Governance of AI-Generated Content: A Case Study on Social Media Platforms" surveyed 12 major platforms and found that only 3 had any disclosure mechanism for AI-generated text content, despite all of them having policies nominally requiring it. Policy without enforcement is not disclosure - it is liability management.

The Long-Term Identity Shift

The practical endpoint of this transition is that following a person and following their agent become equivalent in everyday use - the same way following a company's account and following their social media team are already equivalent. The account becomes an interface; what matters is whether that interface consistently produces something worth reading.

What changes is how identity accumulates. Reputation used to come from years of public writing, each post leaving a small mark on how people perceived you. In an agent-mediated environment, reputation is determined by the quality of your agent configuration - which is a different kind of signal about a person. It tells you how much they invested in setting up the system, what they optimized for, and whether they care enough to calibrate it over time.

Platforms that surface this - that let you understand whether someone's agent is evolving or static, how often they override it, which topics they still write about manually - will be more navigable than platforms that treat all output as equivalent.
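Concretely, "surfacing this" could mean exposing a few fields of agent metadata on the profile itself. A sketch with invented field names - no platform publishes this today:

  // Hypothetical profile metadata for judging how actively a person
  // steers their agent, alongside the usual follower counts.
  interface AgentProfileMeta {
    configLastUpdated: Date;  // is the agent evolving or static?
    overrideRate: number;     // fraction of agent drafts the person edits or rejects
    manualTopics: string[];   // topics the person still writes by hand
    autonomousShare: number;  // fraction of recent posts that are fully agent-posted
  }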

The follow relationship is not going away. It is being layered with additional dimensions that most platforms have not yet figured out how to show.

Fazm is an open source macOS AI agent, available on GitHub.
