Personal context for AI agents on macOS, the way it actually ships today.

Most articles on this topic describe an Apple Intelligence promise, a chat-side memory feature, or a system prompt hack with a bio paragraph. Fazm ships a different thing: a Swift extractor that reads identity, addresses, payment metadata, saved accounts, and the tools you actually use directly out of your local Chromium browser data, and stores it in a SQLite database the agent queries at runtime. This is the page that walks through how that pipeline works, where it lives in the open source repo, and where it does not reach.

Matthew Diakonov · 11 min read · 4.9 from open source on GitHub
- Reads Arc, Chrome, Brave, and Edge data from disk
- Stored locally at ~/ai-browser-profile/memories.db
- Queryable from the agent via one tool call

Direct answer (verified 2026-04-30)

On macOS today, an AI agent gets personal context in one of three ways. (1) Cloud LLM memory layered onto chat sessions, like ChatGPT memory or Claude Projects, scoped to whatever you told the model. (2) Apple Intelligence's on-device personal context, scoped to Apple's own indices (Mail, Calendar, Messages, Notes), still in partial rollout. (3) A local extraction pipeline that reads identity, addresses, payment metadata, saved logins, and your most-visited tools out of your existing Chromium browser data and stores them on disk for the agent to query. Fazm is the third pattern, and the one that reaches the autofill and history data the first two cannot see.

What the other articles on this topic miss

I read every page that currently shows up for this question and they cluster around the same two stories. The first story is Apple Intelligence: a future where Siri uses your messages, calendar, and notes to answer questions like "when is mom's flight". The second is chat-side memory: ChatGPT and Claude quietly writing notes about you across sessions and prepending them to your prompts. Both are real. Both are useful. Neither answers the question a small business owner is actually asking when they search for personal context for an agent on their Mac, which is "how does this thing know who I am, what address I ship to, what card I use, and what tools I open every day, without me typing it in".

That data already exists on the Mac. It is sitting in your Chromium browser's autofill database, your saved logins file, and your bookmarks JSON, where you have been adding to it for years. The gap nobody fills is: how does an agent on the same machine read that data, on its own, the first time you launch it. The rest of this page is the answer in code paths.


Without browser-extracted personal context, an agent told 'send the invoice to my usual client at the usual address' has nothing to bind 'usual' to. That is the actual reason this pipeline exists.

The four sources, on disk, by file name

Personal context on macOS is not one file. It is four, spread across each Chromium browser's profile directory. The extractor opens them in this order, and each contributes a different slice of who you are.

From browser files to a queryable memories.db

1. Web Data: SQLite with addresses, autofill form fields, and credit card metadata. Decoded via Chromium's address_type_tokens map.

2. Login Data: SQLite with saved usernames per origin. Passwords stay in the keychain; only emails and account names are read.

3. History: SQLite with URLs and visit counts. Top hosts get mapped to friendly service names (GitHub, Stripe, Notion, etc.).

4. Bookmarks: JSON file of bookmarked domains. Useful for tools you saved before you visited them often.

5. memories.db: a single local SQLite database at ~/ai-browser-profile/memories.db. Each row: key, value, tags, source.
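The read pattern for each SQLite source is a copy-and-read pass: duplicate the file, open the copy read-only, query it, and never touch the browser's live database. Here is a minimal sketch in Python (the shipped extractor is Swift), assuming the classic Chromium `autofill` table with `name`, `value`, and `count` columns; exact table layouts vary across Chromium versions.

```python
import shutil
import sqlite3
import tempfile
from pathlib import Path

def read_autofill(web_data_path):
    """Copy the browser's Web Data SQLite aside, open the copy
    read-only, and pull autofill field names, values, and use counts.
    The live file is never opened, so a running browser is unaffected."""
    with tempfile.TemporaryDirectory() as tmp:
        copy = Path(tmp) / "WebData.sqlite"
        shutil.copy(web_data_path, copy)
        con = sqlite3.connect(f"file:{copy}?mode=ro", uri=True)
        try:
            return con.execute(
                "SELECT name, value, count FROM autofill "
                "ORDER BY count DESC"
            ).fetchall()
        finally:
            con.close()
```

The same pass works for Login Data and History; only the SELECT changes.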

How a single voice prompt reaches your local memories.db

The agent does not load the whole database into the prompt. That would be wasteful and would leak more than the turn needs. Instead it routes through a query tool, fetches a ranked slice, and includes only the matching memories. Here is the data flow on a typical question.

Query path: voice prompt to ranked rows

You → Agent → query_browser_profile → memories.db → tags filter → ranked rows → model prompt

The anchor fact, in files you can open

The list of supported browsers and the exact filesystem paths the extractor reads from are not buried. They are a public constant in the Swift package. The address-type decoding map, which translates Chromium's internal numeric type codes into named fields like first_name, email, phone, and street_address, is in the same file. Both live in ai-browser-profile-swift-light, in a file you can open in the open source repo:

ai-browser-profile-swift-light/Sources/AutofillExtractor/Constants.swift

That map is the entire reason the extractor can come back with "your phone is X, your shipping address is Y, your work email is Z" instead of a pile of unlabeled strings. Chromium tags each row with one of those numeric type codes when you autofill a form. The extractor decodes them, attaches semantic tags so the agent can ask for a slice, and writes the row to your local memories.db.
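In spirit, that decoding step looks like the following sketch. The numeric codes and tag pairings below are illustrative, written in Python for readability, not copied from the repo; the authoritative map is the address_type_tokens constant in Constants.swift.

```python
# Illustrative slice of the address-type decoding step. The numeric
# codes follow Chromium's field-type enum but are written from memory
# here; the authoritative map is address_type_tokens in Constants.swift.
ADDRESS_TYPE_TOKENS = {
    3: "first_name",
    5: "last_name",
    9: "email",
    14: "phone",
    77: "street_address",
}

# Which semantic tag each decoded field carries (assumed pairing).
TAG_FOR = {
    "email": "contact_info",
    "phone": "contact_info",
    "street_address": "address",
}

def decode_profile(rows):
    """Turn (type_code, value) pairs into named, tagged memories.
    Unknown codes are skipped rather than guessed at."""
    memories = []
    for code, value in rows:
        field = ADDRESS_TYPE_TOKENS.get(code)
        if field is None:
            continue
        memories.append({
            "key": field,
            "value": value,
            "tags": [TAG_FOR.get(field, "identity")],
        })
    return memories
```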

What a typical extracted profile looks like

These are the field counts I see on my own machine after a first extraction. Your numbers depend on how long you have been using your Chromium browsers and how much you have autofilled. The shape, not the exact count, is what matters.

- 4 Chromium browsers scanned
- 9 semantic tags in the vocabulary
- Chromium address type codes decoded via the map in Constants.swift
- About 10 to 20 seconds for a first extraction

The nine tags are identity, contact_info, account, tool, address, payment, contact, work, and knowledge. The agent uses them as filters at query time so a question like "what is my home address" only pulls address rows, not the hundreds of unrelated rows in the database.
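That tag-filtered lookup can be sketched in a few lines. This is Python for illustration only; the real tool lives behind the Swift agent, its ranking is its own, and the comma-separated tags column is an assumption about how the database stores the tag list.

```python
import sqlite3

def query_browser_profile(con, query, tags=None):
    """Text search over the memories table, optionally narrowed to rows
    carrying at least one of the requested semantic tags. The LIKE match
    is a stand-in for the real ranking; tags are assumed stored as
    comma-separated text."""
    rows = con.execute(
        "SELECT key, value, tags FROM memories "
        "WHERE key LIKE :q OR value LIKE :q",
        {"q": f"%{query}%"},
    ).fetchall()
    if tags:
        wanted = set(tags)
        rows = [r for r in rows if wanted & set(r[2].split(","))]
    return rows
```

Asked "what is my home address" with tags=["address"], a query like this returns the address rows and nothing else.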

The four steps the app actually walks you through

This is what happens the first time you launch Fazm with the browser profile feature enabled. Nothing is silent. Each step is a turn in the onboarding chat where you can say no.

1. Confirm and extract. The agent asks for permission to scan your browser data, then runs the Swift extractor against whichever of Arc, Chrome, Brave, and Edge it finds on disk. Takes about ten to twenty seconds.

2. Review the summary. The agent reads back what it found in coherent prose: full name, all emails, phone numbers, addresses, cards by last four, top tools, saved accounts. Not a bullet dump; a paragraph you can scan.

3. Correct or delete. If a value is wrong or you do not want it stored, you say so. The agent calls edit_browser_profile with action 'delete' or 'update' and runs SQL against memories.db. There is no audit trail you cannot remove.

4. Use it. From then on, every prompt that needs personal context routes through query_browser_profile with the right tags filter. The first time you say 'send the usual to my main client', it works.
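The correct-or-delete step boils down to two SQL statements. A Python sketch, where the action names mirror the edit_browser_profile tool described above and everything else is assumed:

```python
import sqlite3

def edit_browser_profile(con, action, query, new_value=None):
    """Correct-or-delete sketch: 'delete' removes matching rows,
    'update' rewrites their value. The real tool runs equivalent SQL
    against ~/ai-browser-profile/memories.db."""
    pattern = f"%{query}%"
    if action == "delete":
        con.execute("DELETE FROM memories WHERE key LIKE ? OR value LIKE ?",
                    (pattern, pattern))
    elif action == "update":
        if new_value is None:
            raise ValueError("'update' needs a new_value")
        con.execute("UPDATE memories SET value = ? WHERE key LIKE ?",
                    (new_value, pattern))
    else:
        raise ValueError(f"unknown action: {action!r}")
    con.commit()
```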

Browsers the on-disk extractor reads today

Four supported, two on the roadmap. The four supported ones share the Chromium Web Data SQLite schema, which is why one extractor can handle them all.

Arc

~/Library/Application Support/Arc/User Data. Reads autofill, logins, history, bookmarks.

Chrome

~/Library/Application Support/Google/Chrome. The largest profile most users have.

Brave

~/Library/Application Support/BraveSoftware/Brave-Browser. Same schema as Chrome.

Edge

~/Library/Application Support/Microsoft Edge. Same schema as Chrome.

Safari

Roadmap. Autofill is in ~/Library/Safari plus the keychain, so a different read path is needed.

Firefox

Roadmap. Profile data lives in ~/Library/Application Support/Firefox/Profiles.

What the agent does not store, and what it never does

Three things worth being clear on. None of them is a setting you have to find; they are how the extractor is written.

Passwords are not extracted. The Login Data SQLite contains usernames and the encrypted blobs for passwords. The extractor reads usernames and origins, never the encrypted password field. The macOS keychain stays untouched.

Full credit card numbers are not extracted. Chromium stores card numbers encrypted on disk. The extractor pulls only the metadata Chromium keeps in plain SQLite columns: card holder name, expiration, nickname, last four digits. Enough for the agent to disambiguate "the Mercury card" from "the Chase card", not enough to pay anyone.

Nothing is uploaded by the extractor. The Swift package has zero external dependencies. The output is a SQLite file in your home directory. If you point the agent at a cloud model for a chat turn, the rows the agent decided to include in that turn's prompt go to the model. If you point it at a local model, even that does not happen.
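The first two guarantees are visible in which columns the reads select. A hedged Python sketch, assuming Chromium's usual `logins` and `credit_cards` column names; the point is that the encrypted password_value and card_number_encrypted columns never appear in a SELECT list:

```python
import sqlite3

def read_safe_slices(login_db_path, web_data_path):
    """Select only non-secret columns. Column names here follow
    Chromium's usual 'logins' and 'credit_cards' schemas (an
    assumption); password_value and card_number_encrypted are
    deliberately never selected."""
    with sqlite3.connect(f"file:{login_db_path}?mode=ro", uri=True) as con:
        accounts = con.execute(
            "SELECT origin_url, username_value FROM logins").fetchall()
    with sqlite3.connect(f"file:{web_data_path}?mode=ro", uri=True) as con:
        cards = con.execute(
            "SELECT name_on_card, expiration_month, expiration_year "
            "FROM credit_cards").fetchall()
    return accounts, cards
```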

Where this layer fits with the agent's other context

Browser-extracted personal context is one of three layers the agent reads from on every turn that needs to know anything about you. The system prompt declares all three explicitly, in Desktop/Sources/Chat/ChatPrompts.swift around line 119.

1. Memory: long-term observations from chats
2. Browser profile: identity, accounts, tools
3. Conversation history: searched on demand

The browser-profile layer is the only one that exists before you have ever talked to the agent. The other two grow as you use it. Together they cover the gap between "what the model knows from training" and "who you actually are".

Honest limits

Worth saying out loud. The extractor reads what your browsers already saved, which means if you autofill nothing, you get nothing. People who type their address fresh into every form and never save logins will end up with a near-empty memories.db, and the agent will lean on the conversation-side memory layer instead.

macOS only, today. The Swift extractor uses paths under ~/Library/Application Support that only exist on macOS. The same idea is portable to Linux and Windows (Chromium uses the same schema), but the package as written is Mac-first.

Full Disk Access is required. Without it, macOS will not let any process read another browser's SQLite. Fazm asks for it during onboarding and explains why. If you skip it, the extractor returns an empty profile and the agent says so instead of pretending it knows things it does not.

Want to see your own profile after extraction?

Fifteen minutes on a call. Bring your Mac, install Fazm together, watch the first extraction run, then decide if the layer is worth keeping on.

Questions people ask before letting an agent read their browser data

What does 'personal context for AI agents on macOS' actually mean?

It is the set of facts an agent needs to know about you, on your machine, before any of your prompts make sense. Your name, your work email, the company name autofilled into checkout forms, the credit card metadata you use most, the addresses you ship to, the saved logins on the sites you actually use, the tools you visit every day. Without those, an agent that is told 'send the invoice to my usual client' has nothing to bind 'usual' to. Three patterns exist on macOS today. Cloud LLM memory layered onto chats (ChatGPT, Claude Projects). Apple Intelligence's on-device personal context (rolling out gradually). And a local extraction pipeline that scrapes the data out of your existing browser files and stores it on disk for the agent to query, which is what Fazm does.

Where does Fazm read personal context from on a Mac?

Four Chromium browsers are supported by the bundled extractor: Arc, Google Chrome, Brave, and Microsoft Edge. The exact paths are in ai-browser-profile-swift-light/Sources/AutofillExtractor/Constants.swift under the browserPaths constant. They are all under ~/Library/Application Support: Arc/User Data, Google/Chrome, BraveSoftware/Brave-Browser, Microsoft Edge. Inside each, the extractor opens four files: 'Web Data' (autofill addresses, form fields, credit card metadata), 'Login Data' (saved usernames per site), 'History' (top services by visit count), and 'Bookmarks' (the JSON file of bookmarked domains). Each is a copy-and-read pass against the original SQLite or JSON file. Nothing is uploaded.

What gets stored on disk after extraction?

A single SQLite database at ~/ai-browser-profile/memories.db. Every memory is a row with a normalized key, a value, a list of semantic tags, and a source string. Tags are drawn from a fixed vocabulary so the agent can ask for a slice instead of the whole thing. The vocabulary is identity, contact_info, account, tool, address, payment, contact, work, and knowledge. A typical extracted profile contains your full name, all emails the browsers ever autofilled into a form, your phone numbers, every address you have shipped to, the last four digits of payment cards you autofilled, the saved usernames on each site you have logged into, and the top services by visit count.
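The row shape described above can be written down as a schema sketch. This DDL is an assumption inferred from the key/value/tags/source description, written in Python for illustration, not copied from the repo:

```python
import sqlite3

# Assumed layout of ~/ai-browser-profile/memories.db, inferred from
# the key/value/tags/source row shape; not copied from the repo.
SCHEMA = """
CREATE TABLE IF NOT EXISTS memories (
    key    TEXT NOT NULL,  -- normalized, e.g. 'email'
    value  TEXT NOT NULL,  -- e.g. 'me@example.com'
    tags   TEXT NOT NULL,  -- comma-separated, from the nine-tag vocabulary
    source TEXT NOT NULL   -- e.g. 'chrome/Web Data'
);
"""

con = sqlite3.connect(":memory:")  # stands in for memories.db on disk
con.executescript(SCHEMA)
con.execute(
    "INSERT INTO memories VALUES (?, ?, ?, ?)",
    ("email", "me@example.com", "contact_info,identity", "chrome/Web Data"),
)
```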

How does the agent query that database when I am chatting with it?

Two tools. `query_browser_profile(query, tags?)` does a ranked text search over the memories table, optionally filtered by one of the nine tags. `extract_browser_profile()` re-runs the Swift extractor if the database is empty or stale. Both are wired into the agent's tool router in Desktop/Sources/Providers/ChatToolExecutor.swift. The skill file at Desktop/Sources/BundledSkills/ai-browser-profile.skill.md tells the model exactly when to call them: when you ask 'what is my email', tags ['contact_info']; when you ask 'what tools do I use', tags ['tool']; when you ask 'who am I', tags ['identity']. The system prompt that lists this as one of three context sources is in Desktop/Sources/Chat/ChatPrompts.swift around line 123.
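The wiring described for ChatToolExecutor.swift amounts to a name-to-handler dispatch. A Python sketch of that pattern, with the two tool names taken from the answer above and every implementation detail hypothetical:

```python
def make_tool_router(extract_fn, query_fn):
    """Name-to-handler dispatch mirroring the two cases described for
    ChatToolExecutor.swift. Handlers are injected; everything besides
    the two tool names is hypothetical."""
    handlers = {
        "extract_browser_profile": lambda args: extract_fn(),
        "query_browser_profile":
            lambda args: query_fn(args["query"], args.get("tags")),
    }

    def route(name, args):
        # Unknown tool names come back as an error payload, not a crash.
        if name not in handlers:
            return {"error": f"unknown tool: {name}"}
        return handlers[name](args)

    return route
```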

How is this different from Apple Intelligence's personal context?

Different surface, different scope, different timeline. Apple Intelligence's personal context is on-device and tied to Apple's own indices: messages, calendar, mail, notes, photos. It is the right answer for that surface. It does not read your Chromium browser data on a Mac, and as of April 2026 the rollout is still partial and bound to specific Apple silicon and OS combinations. The Fazm extractor reads the data Apple Intelligence does not touch: the four Chromium browsers' on-disk SQLite, the autofill profile a small business owner has been building over years of checkouts and signup forms, the saved logins on the sites that actually run their day. The two are complementary. Fazm reaches the data the OS-level features ignore.

How is this different from ChatGPT memory or Claude Projects?

Cloud memory is a chat-side feature. The model writes notes about you across sessions and prepends them to future prompts. It works fine for the things you have told it. It cannot see your address book, the phone number you autofill into Stripe checkout, the payment card nickname you saved in Chrome, or the fact that you log into QuickBooks every Monday. That data lives on your Mac, and it never reaches the cloud unless you copy-paste it into the chat. A local extraction pipeline reads it directly. The trade-off is privacy: the data is sensitive enough that you would not want it leaving the machine, which is exactly why the database stays on disk in your home directory and the agent has to be on the same machine to read it.

What if I do not use Chromium browsers, only Safari?

The Swift extractor today targets Arc, Chrome, Brave, and Edge (the four Chromium-family browsers that share the Web Data SQLite schema). Safari is on the roadmap; its autofill data lives in ~/Library/Safari and the encrypted keychain, so the read pattern is different. If Safari is your only browser, the extracted profile will be empty and the agent will rely on its conversational memory and any answers you give it directly. Most macOS users we see have at least one Chromium browser installed (often Chrome or Arc) even when Safari is their daily driver, because some sites still misbehave outside of Chromium.

What if I do not want some of the extracted data in the database?

The same skill exposes `edit_browser_profile(action: 'delete' | 'update', query, new_value?)`. After the first extraction the agent presents a complete summary of what it found ('here is your name, here are the emails I saw, here is the address I think is yours, here are the cards by last four digits, here are the top tools') and asks if anything should be removed or corrected. The deletes are immediate and SQL-level. There is no audit trail you cannot delete. The whole memories.db is a file, and removing it resets the personal context layer entirely.

Does the agent send any of this to a cloud model?

Only the slice you ask about, and only if you are using a cloud model for that turn. The default routing is: when you ask the agent something that needs personal context, it calls `query_browser_profile` first, gets the matched memories, and includes them in the prompt to the language model for that turn. So if you ask 'send the usual to my main client', the model sees a few rows from the database and uses them. If you ask the agent something that does not require personal context, the database is never opened. You can also point the agent at a fully-local model using the custom API endpoint setting, which keeps the slice on-device end to end.

Where is this wired up in the open source repo so I can verify it?

Three concrete paths to read. The Swift extractor lives at github.com/m13v/ai-browser-profile-swift-light, packaged as a Swift Package Manager dependency of the Fazm Desktop target (see Desktop/Package.swift in github.com/m13v/fazm). The tool router is at Desktop/Sources/Providers/ChatToolExecutor.swift around line 80, with cases for 'extract_browser_profile' and 'query_browser_profile'. The skill prompt that tells the model when to use them is at Desktop/Sources/BundledSkills/ai-browser-profile.skill.md. None of those files are obfuscated or compiled-only. You can clone the repo, open the files, and trace one round trip from a chat message to the SQLite read in about twenty minutes.