EU data residency · Computer use agent · Source-verified May 2026

A truly local computer use agent in the EU is local execution plus EU-region reasoning, not magic

Most write-ups about EU data sovereignty for desktop AI agents talk in generalities. Here is the specific picture for May 2026: the four data layers a Mac agent actually has, which ones stay on your machine, which one still goes to a US endpoint by default, and the exact Fazm setting (one text field, three lines of Swift) that fixes it.

Matthew Diakonov
8 min read
Direct answer (verified 2026-05-10)

Can I run a local computer use agent in the EU?

Yes for the execution layer. A Mac agent that drives the accessibility tree, like Fazm, runs clicks, keystrokes, and element reads on your machine; screen pixels never leave it.

The reasoning layer is the wrinkle. Anthropic's direct API currently exposes only us and global inference geos. Verified at platform.claude.com/docs/en/build-with-claude/data-residency: "Inference geo: Only ‘us’ and ‘global’ are available at launch. Additional regions will be added over time." If you need EU residency today, you route Claude through AWS Bedrock EU regions (Ireland, Stockholm, Frankfurt, Paris) or Google Vertex AI EU regions instead. Fazm's Settings > Advanced > AI Chat > Custom API Endpoint writes any URL you paste into the agent subprocess's ANTHROPIC_BASE_URL, so swapping to an EU endpoint is one text field.

What “local” actually means for a Mac desktop agent

A desktop agent is not one thing. It is a stack of four data layers, each of which makes its own residency decision. The word “local” only makes sense once you know which layer is being described.

The four data layers of a Mac computer use agent

1. Screen content: AX tree, on Mac
2. Voice (optional): Deepgram by default
3. Model reasoning: Claude, configurable
4. Working memory: local SQLite, on Mac

  • Screen content. What the agent reads off the screen to know what is there. In Fazm this is an accessibility-tree query against the focused app: roles, labels, values, hierarchy. Stays on the Mac.
  • Voice input (optional). If you talk to the agent. Default Fazm transcription routes through Deepgram (cloud, US-based). Disable voice or swap to an on-device model if voice is part of your compliance scope.
  • Model reasoning. Where the next-token decisions happen. Claude family by default, hitting api.anthropic.com unless you point the agent elsewhere. This is the layer the EU residency conversation is about.
  • Working memory. The agent's knowledge graph, file index, AI user profile, and chat history. Local SQLite on the Mac. The schema is in Desktop/Sources/FileIndexing/KnowledgeGraphRecord.swift if you want to read it.

The reasoning layer is the one with a US default

This part trips most evaluators. People hear “local agent” and assume the model is local too. In 2026 that is rarely true on consumer hardware: frontier model quality on long-horizon agentic workloads still lives in cloud-hosted endpoints. The honest pitch is local execution plus configurable EU-region reasoning.

"Inference geo: Only 'us' and 'global' are available at launch. Additional regions will be added over time."

platform.claude.com/docs/en/build-with-claude/data-residency, retrieved 2026-05-10

The implication for an EU shop running a Mac agent: the direct Anthropic API is not your EU path. AWS Bedrock has Claude models in eu-west-1 (Ireland), eu-north-1 (Stockholm), and via cross-region inference profiles in eu-central-1 (Frankfurt) and eu-west-3 (Paris). Vertex AI exposes Claude models across ten EU regions with regional endpoints. Either of those is a valid EU residency path. The trick is wiring your desktop agent to talk to them.

The wiring is one Swift file and one text field

Fazm exposes the swap as a setting. Open the app, go to Settings > Advanced > AI Chat, scroll to Custom API Endpoint, paste an Anthropic-compatible URL, hit return. The bridge restarts automatically and the next message goes to that URL.

The Swift wiring is small enough to read in one sitting. The setting is declared at Desktop/Sources/MainWindow/Pages/SettingsPage.swift:885 as a SwiftUI @AppStorage("customApiEndpoint"). The injection happens in Desktop/Sources/Chat/ACPBridge.swift:467-470:

Desktop/Sources/Chat/ACPBridge.swift (lines 467-470)
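The snippet under that caption did not survive publishing. A minimal reconstruction from the behavior this article describes (not verbatim repo code; variable names may differ):

```swift
// Reconstructed sketch of ACPBridge.swift:467-470, per this article's
// description: if the user has saved a custom endpoint, export it to the
// agent subprocess environment before launch; otherwise the SDK default
// (api.anthropic.com) applies.
if let endpoint = UserDefaults.standard.string(forKey: "customApiEndpoint"),
   !endpoint.isEmpty {
    env["ANTHROPIC_BASE_URL"] = endpoint
}
```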

Anything that speaks the Anthropic message protocol works on the other end: an Anthropic-compatible shim in front of Bedrock-EU, the same in front of Vertex-EU, a corporate proxy, or a local model server with the same surface. The agent on your Mac never has to know the difference. The full source is at github.com/m13v/fazm; verify the snippet, then verify the rest.

Where the model call goes when you change one URL

[Flow diagram: your message, AX-tree queries, and workflow triggers feed the Fazm agent on the Mac; the agent's single outbound model call goes to Bedrock EU, Vertex AI EU, or a local LLM, depending on the configured endpoint.]

Three EU-friendly destinations, one variable. The clicks, typing, file ops, and AX-tree reads on the left side never change shape regardless of which destination is live. That is the whole point: the layer that handles the user's screen and apps is local; only the reasoning hop is configurable.

EU endpoints, ranked by setup effort

The four practical paths for keeping the model layer EU-resident, in roughly increasing order of work to set up.

  • Direct Anthropic API (api.anthropic.com). us and global inference geos only (per platform.claude.com/docs/en/build-with-claude/data-residency, May 2026). Works out of the box; no EU residency yet.
  • AWS Bedrock EU. Ireland (eu-west-1) and Stockholm (eu-north-1) direct; Frankfurt (eu-central-1) and Paris (eu-west-3) via cross-region inference profiles. Point customApiEndpoint at an Anthropic-protocol proxy in front of Bedrock.
  • Google Vertex AI EU. Ten EU regions serve Claude models with regional endpoints. Same pattern: an Anthropic-protocol shim in front of Vertex EU; paste the URL.
  • Self-hosted local model. Open-weight Llama 70B-class or Qwen on Apple Silicon. Run a llama.cpp or vLLM server behind an Anthropic-compatible shim and point at localhost.

None of this requires changes to Fazm itself. The agent does not care which Anthropic-compatible endpoint serves it; it just sets ANTHROPIC_BASE_URL on the subprocess and continues.
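How an environment-variable override reaches a child process is standard Foundation, and worth seeing concretely. A hypothetical illustration (the executable path and URL are placeholders, not Fazm's real values):

```swift
import Foundation

// Launch a child process with ANTHROPIC_BASE_URL overridden. Any
// Anthropic-SDK-based tool in the child will send its model traffic
// to this URL instead of api.anthropic.com.
let agent = Process()
agent.executableURL = URL(fileURLWithPath: "/usr/local/bin/agent-cli") // placeholder path
var env = ProcessInfo.processInfo.environment
env["ANTHROPIC_BASE_URL"] = "https://claude-shim.internal.example" // your EU shim
agent.environment = env
try agent.run()
agent.waitUntilExit()
```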

Why an accessibility-tree agent fits the EU story better than a screenshot agent

Two architectures dominate the computer-use category. Screenshot agents read the screen by capturing pixels and sending them to a vision model on every step. Accessibility-tree agents query a structured representation of UI elements that macOS already exposes for screen readers. Under an EU residency lens, the two architectures differ a lot, even if both point their model calls at the same EU region.

  • Data minimization. A screenshot agent transmits the entire visible window on every step: chat threads, customer records, browser tabs, anything that happens to be on screen. An AX-tree agent transmits only the structured slice the model needs to decide the next action, and only at decision points.
  • Volume of cross-border calls. Screenshot agents make one model call per click, per scroll, per keystroke decision. AX-tree agents execute most action steps (find element by role, click, type, read response) with no model call at all. Even when both call out to an EU endpoint, the AX-tree path moves an order of magnitude less user data.
  • Audit surface. Pixel uploads are hard to reason about for a DPO. Structured element data is auditable: you can log exactly which fields were read, which elements were acted on, which strings the model saw. Open source widens the audit further; the AX-engine code is in Fazm's repo and reads like documentation.

Both architectures can be deployed against an EU model endpoint. The accessibility-tree path is the easier compliance story because it sends less data, less often, in a more legible format.
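For a sense of what "structured element data" means in practice, here is a minimal sketch of the same public macOS API an AX-tree agent builds on (requires the Accessibility permission; this is the generic system API, not Fazm's engine):

```swift
import ApplicationServices

// Ask macOS for the UI element that currently has keyboard focus,
// system-wide, then read its role. Everything returned is a short
// structured value like "AXTextArea": strings, not pixels.
let systemWide = AXUIElementCreateSystemWide()
var focusedRef: CFTypeRef?
let err = AXUIElementCopyAttributeValue(
    systemWide, kAXFocusedUIElementAttribute as CFString, &focusedRef)

if err == .success, let focused = focusedRef {
    let element = focused as! AXUIElement
    var role: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXRoleAttribute as CFString, &role)
    // role, title, value, children: this is the entire data surface
    // an AX-tree agent has to account for in a residency review.
}
```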

A practical EU setup for Fazm in 30 minutes

For a small EU shop that wants the model layer kept in-region. Order matters: the bridge picks up the new endpoint on its next restart, so flip the toggle last.

  1. Pick the EU model path. AWS Bedrock in eu-west-1 (Ireland) or eu-north-1 (Stockholm) for direct Claude access; Frankfurt and Paris work via cross-region inference profiles. Vertex AI works the same way across its EU regions. Pick whichever your organization already has a contract with.
  2. Stand up an Anthropic-protocol shim in front of that endpoint. Open-source proxies exist that translate between Bedrock's InvokeModel API and Anthropic's message format; one of them deployed inside your EU VPC is enough. The shim's URL is what you paste into Fazm.
  3. In Fazm, open Settings > Advanced > AI Chat. Toggle Custom API Endpoint on and paste the shim URL. Hit return. The ACP bridge restarts. Verify with lsof -a -i -p <agent-pid> (the -a ANDs the network and process filters) that traffic is going to your shim, not api.anthropic.com.
  4. Decide on voice. If you need it and EU residency on audio matters, disable Deepgram-based transcription in Settings or swap to an on-device model. If voice is not part of your workflow, leave it off.
  5. Document the configuration. The architecture decision your DPO cares about is: screen pixels do not leave the Mac, model traffic terminates in the EU region, working memory is local SQLite. All three are inspectable in the Fazm source on GitHub.
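Before flipping the toggle in step 3, it is worth smoke-testing the shim directly. A hedged sketch (the URL and model id are placeholders; adjust to whatever your shim maps to Bedrock or Vertex):

```swift
import Foundation

// Send one Anthropic-format message to the shim and print the HTTP
// status. The /v1/messages path and anthropic-version header are the
// standard Anthropic message protocol the shim must speak.
func pingShim() async throws {
    var req = URLRequest(url: URL(string: "https://claude-shim.internal.example/v1/messages")!)
    req.httpMethod = "POST"
    req.setValue("application/json", forHTTPHeaderField: "content-type")
    req.setValue("2023-06-01", forHTTPHeaderField: "anthropic-version")
    req.httpBody = try JSONSerialization.data(withJSONObject: [
        "model": "claude-sonnet-4-20250514", // placeholder: your shim's model mapping
        "max_tokens": 64,
        "messages": [["role": "user", "content": "ping"]],
    ])
    let (_, resp) = try await URLSession.shared.data(for: req)
    print((resp as? HTTPURLResponse)?.statusCode ?? -1) // a healthy shim returns 200
}
```

If this returns 200 from inside your network, Fazm will work against the same URL; the agent sends the same protocol.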

What separates an honest EU-friendly agent from a marketing claim

Three checks I would run on any desktop agent that markets itself to EU users. None of these require an NDA or a procurement call.

  1. What does the agent read from the screen? If the answer is “screenshots,” pixel data leaves the machine on every step. If the answer is “the accessibility tree,” structured element data leaves only at decision points. Read the source, do not trust the architecture diagram.
  2. Can the model endpoint be changed without a rebuild? A user-facing setting that injects a base URL into the agent subprocess counts. A hard-coded SDK call does not. Fazm passes this with one text field; many agents do not.
  3. Is the source readable? You cannot audit either of the above on a closed agent. Open source is the only way to verify the EU swap path is a real piece of code and not a slide.

Fazm passes all three by construction: AX engine, customApiEndpoint setting, MIT-licensed source on github.com/m13v/fazm. That does not eliminate every cross-border data question; it shrinks the surface area to one configurable hop and makes the rest verifiable from the repo.

Want to walk through your own EU residency setup?

Twenty minutes with the team. Bring your model contract, your DPO checklist, and the workflows you want the agent to run. We will trace where the data actually goes and what the swap path looks like.

Frequently asked questions

Is there a fully local computer use agent for the EU in 2026?

Honest answer: no commercial Mac agent is fully local end-to-end if it uses a frontier model. The execution layer (clicks, typing, accessibility-tree reads, file ops) runs on your machine; that part is genuinely local. The reasoning layer is whatever your agent calls for the next token, and frontier model quality currently lives in cloud-hosted Claude, GPT-4-class, and Gemini-class endpoints. The practical EU-friendly setup is local execution plus EU-region inference: an accessibility-API based desktop agent (Fazm) plus a Claude endpoint hosted in an EU region (AWS Bedrock eu-west-1 / eu-north-1 / eu-central-1 / eu-west-3, or Vertex AI EU regions).

Does Anthropic's direct API offer EU data residency?

Not as of May 2026. Anthropic's data-residency docs at platform.claude.com/docs/en/build-with-claude/data-residency state that the inference_geo parameter supports only 'us' and 'global'. Workspace geo also only supports 'us' currently. The page says additional regions will be added over time, but if you need EU data residency today, the path is AWS Bedrock or Google Vertex AI in an EU region, not the direct Anthropic API.

What does Fazm actually do to support an EU setup?

Two things. First, the automation engine reads the macOS accessibility tree on-device and acts on elements directly; pixel data never leaves your Mac. That removes the biggest data-residency problem screenshot-based agents have. Second, Settings > Advanced > AI Chat > Custom API Endpoint is a text field that writes whatever URL you paste into the ANTHROPIC_BASE_URL environment variable for the agent subprocess. Three lines of Swift in Desktop/Sources/Chat/ACPBridge.swift at lines 467 to 470. Point that field at an Anthropic-compatible Bedrock-EU or Vertex-EU proxy and your model traffic stays in the EU.

Why does an accessibility-tree agent matter more than a screenshot agent for EU data residency?

A screenshot agent uploads pixel data on every step. Every click, every scroll, every keystroke decision is pixels-plus-instruction sent to a vision model. Even if you point that to an EU-region endpoint, the screen content of every focused window is leaving the user's machine continuously. An accessibility-tree agent reads structured element data on-device, hands the model the relevant slice (a list of buttons, a form's fields, the visible text of an article), and acts on coordinates that are computed locally. The model sees text it would have seen anyway and never sees the visual layout. The data-minimization story is dramatically simpler.

Where is the customApiEndpoint setting in Fazm and how do I use it?

Open the app, go to Settings, then Advanced, then AI Chat. Scroll to the custom endpoint toggle. Turn it on, paste your Anthropic-compatible URL into the field, hit return. The agent subprocess restarts and the next message goes to that URL. Source: Desktop/Sources/MainWindow/Pages/SettingsPage.swift line 885 (@AppStorage("customApiEndpoint")) and Desktop/Sources/Chat/ACPBridge.swift line 469 (env["ANTHROPIC_BASE_URL"] = customEndpoint). No Xcode, no rebuild, no command line.

What about voice transcription? Is that local too?

Honest disclosure: the default real-time transcription path uses Deepgram (TranscriptionService.swift), which is cloud-hosted in the US. If voice is part of your EU compliance scope, treat it as a separate decision: turn off voice input or run an on-device model for that layer. Text input through chat does not hit any transcription service.

Can I run Claude fully local on my Mac and skip the cloud entirely?

Yes, if you accept the capability tradeoff. Anthropic does not publish open-weight Claude models, but you can serve any open-weight model behind an Anthropic-compatible shim (e.g. an Anthropic-message-protocol proxy in front of a llama.cpp or vLLM server running a Llama or Qwen 70B-class model on Apple Silicon). Point Fazm's customApiEndpoint at the local URL and the agent's model traffic never leaves your Mac. The trade is that frontier-Claude quality on agentic workloads is better than 70B-class local models in 2026; for routine workflows local is fine, but for harder reasoning you will feel the gap.

What does Fazm store on its servers about my workflow?

The desktop app's working memory (knowledge graph, file index, AI user profile, chat history) is in a local SQLite database on your Mac. Source: Desktop/Sources/FileIndexing/AIUserProfileService.swift and KnowledgeGraphStorage.swift, both backed by AppDatabase.shared.getDatabaseQueue() pointing at a local DB. Anonymous error telemetry exists like in any consumer app, but the data your agent works with sits in your home folder, not a Fazm-managed cloud.

Is Fazm GDPR compliant out of the box?

Compliance is a property of how you deploy a tool, not a sticker on a download page. Fazm's architecture removes most of the surface area: screen content stays on the Mac, working memory is local, source is open and auditable. The remaining cross-border-transfer question concerns the model call. If you configure customApiEndpoint to an EU-region Claude endpoint (Bedrock or Vertex) and disable cloud transcription, the agent's data flow stays inside the EU. That is a configuration decision your DPO can verify by reading the source. We are not a substitute for legal advice.

Where can I read the source for these claims?

github.com/m13v/fazm. The customApiEndpoint UI lives in Desktop/Sources/MainWindow/Pages/SettingsPage.swift around line 885. The env-var injection that wires it through to the agent runtime is in Desktop/Sources/Chat/ACPBridge.swift at lines 467 to 470. The transcription service is in Desktop/Sources/TranscriptionService.swift. The local databases are under Desktop/Sources/FileIndexing/. Read the Swift, do not trust the marketing.

fazm.AI Computer Agent for macOS
© 2026 fazm. All rights reserved.
