Brain MCP - Persistent Memory That Remembers How You Think
Most AI agent memory systems store what you said: your name, your preferences, facts from previous conversations. This is useful, but it misses the most important thing - how you think.
Cognitive-state aware memory goes further. It captures your reasoning patterns, your decision-making style, your priorities when things conflict. It remembers that you prefer shipping fast over shipping perfect, that you weigh user experience above technical elegance, that when stuck between two options you tend to pick the one that is easier to reverse.
Facts vs. Frames
A fact-based memory stores: "User prefers Python over JavaScript." A cognitive-state memory stores: "User evaluates language choices based on team familiarity and deployment simplicity, not personal preference. They will pick JavaScript for a frontend project despite preferring Python because the team knows it better."
The difference is that facts are static data points. Cognitive frames are models of how you process information and make decisions. An agent with cognitive frames can predict what you would want in novel situations, not just situations it has seen before.
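To make the contrast concrete, here is a minimal sketch of the two memory shapes as data structures. The type names and fields are hypothetical illustrations, not the actual Brain MCP schema:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    # A static data point: answers one question it has already seen.
    statement: str

@dataclass
class CognitiveFrame:
    # A model of how a decision gets made, not the decision itself.
    decision_area: str   # e.g. "language choice"
    criteria: list[str]  # what the user actually weighs
    example: str         # an observed decision that fits the frame

fact = Fact(statement="User prefers Python over JavaScript")

frame = CognitiveFrame(
    decision_area="language choice",
    criteria=["team familiarity", "deployment simplicity"],
    example="Picked JavaScript for a frontend project because the team knows it",
)
```

The frame generalizes where the fact cannot: its criteria still apply to a language decision the agent has never seen, while the fact only answers the one question it recorded.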
Why This Matters for Agents
When an AI agent makes a decision on your behalf - which architecture to use, how to handle an error, what to prioritize - it needs more than your explicit preferences. It needs a model of your judgment.
Without cognitive-state memory, every new situation requires explicit instruction. With it, the agent can make reasonable decisions that align with how you think, even in contexts you have never discussed. It develops something that functions like taste - not just knowledge of what you want, but understanding of why you want it.
Building Cognitive Memory
The implementation uses MCP (Model Context Protocol) to persist cognitive observations across sessions. Instead of storing raw conversation history, the system extracts patterns: "User consistently chose simpler solutions when complexity did not provide measurable benefit." These patterns are retrieved when relevant, giving the agent context about your thinking style.
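The extract-then-retrieve loop described above can be sketched as a small pattern store. This is an illustrative assumption, not the real Brain MCP server's API: it persists extracted patterns as tagged JSON records and retrieves them by tag overlap with the current context:

```python
import json
from pathlib import Path

class CognitiveMemory:
    """Hypothetical store for extracted cognitive patterns (not raw transcripts)."""

    def __init__(self, path: Path):
        self.path = path
        # Reload observations persisted by earlier sessions, if any.
        self.observations = json.loads(path.read_text()) if path.exists() else []

    def record(self, pattern: str, tags: list[str]) -> None:
        # Store an extracted pattern, e.g. "chose simpler solutions when
        # complexity did not provide measurable benefit".
        self.observations.append({"pattern": pattern, "tags": tags})
        self.path.write_text(json.dumps(self.observations))

    def retrieve(self, context_tags: list[str]) -> list[str]:
        # Return patterns whose tags overlap the current context,
        # ranked by how many tags match.
        scored = [
            (len(set(obs["tags"]) & set(context_tags)), obs["pattern"])
            for obs in self.observations
        ]
        return [pattern for score, pattern in sorted(scored, reverse=True) if score > 0]
```

Because each record is a distilled pattern rather than conversation history, the retrieval step can surface "how this user decides" without replaying everything they ever said.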
Over time, this creates a persistent model of your cognitive approach that makes the agent increasingly useful. Not because it has more facts, but because it better understands the mind it is working for.
Fazm is an open source macOS AI agent, available on GitHub.