EU ACCESS / LOCAL DESKTOP AGENT

The agent layer has no geographic gate. The inference layer is a setting.

A local Mac agent like Fazm runs in Berlin, Paris, Dublin, and Warsaw on day one because the runtime lives on your laptop. The cloud LLM call is a separate question, and Fazm exposes it as a single text field in Settings. Below: what each EU restriction actually is on the cloud side, where the local layer fits, and the four file paths in Fazm that wire it together.

Matthew Diakonov · 8 min read
DIRECT ANSWER, VERIFIED 2026-05-05

Yes, EU users can run a local computer-use AI agent on macOS today. The agent itself runs on your Mac, so there is no geographic gate at the application layer. If you also want EU data residency for the cloud LLM call, point Fazm's Custom API Endpoint (Settings, AI Chat) at an AWS Bedrock Frankfurt or Google Cloud Vertex AI EU gateway. Anthropic's direct API supports inference geos of us and global only, so EU-only inference requires a hyperscaler endpoint, per the data residency docs at platform.claude.com.

What "EU access" actually breaks down into

The phrase is doing two jobs at once. People typing it into Google are asking either "does the product work for me, a person physically in the EU" or "is the data my employer trusts me with allowed to leave the EU." They are very different questions and most articles conflate them.

Question A: can I use the product

Geographic availability. Does the signup page accept my address. Does the app open in my country. Does payment go through. For a downloadable Mac app this question has one answer: yes, on day one, because there is nothing for the vendor to gate.

Question B: where does my data live

GDPR and the EU AI Act. When the agent calls a cloud LLM, where does that call physically run, what does the DPA say, and which Standard Contractual Clauses apply. This is the question hosted Cowork-style products keep failing in 2026.

  • Operator: Pro tier only. 200 USD per month. Came to the EU in March 2025 after a multi-month gap.
  • ChatGPT Atlas: Globally available on macOS. Agent mode is Plus or Pro tier. Web browser only.
  • Claude Cowork: Routes inference through US AWS. EU residency is tracked as an open issue (claude-code 40526).
  • Fazm: Local Mac app. No geographic gate. Custom API Endpoint setting routes inference wherever you want.

The architectural difference, in one diagram

A hosted computer-use product owns both the agent runtime and the inference. If the vendor has not stood up an EU agent runtime, the whole stack waits. A local agent splits the two so each can be located independently.

Fazm: agent local, inference configurable

Voice / macOS Accessibility / Browser → Fazm + ACP bridge → api.anthropic.com | Bedrock Frankfurt | Vertex AI EU

The exact wiring inside Fazm

The Custom API Endpoint feature is small, intentionally. It is one UserDefault, one env var, and one subprocess restart. Four files in the open-source repo are involved.

From the Settings text field to the LLM call

  1. Settings UI. Desktop/Sources/MainWindow/Pages/SettingsPage.swift line 984 renders the TextField; the placeholder string hints at a proxy URL on a local port. The bound state is the customApiEndpoint AppStorage value declared on line 885.
  2. Persist the URL. Hitting Return calls chatProvider.restartBridgeForEndpointChange() on line 988. The new URL is now in NSUserDefaults under the customApiEndpoint key, persisted across launches.
  3. Bridge subprocess starts. Desktop/Sources/Chat/ACPBridge.swift lines 467-470 read the same UserDefault. If the value is non-empty, the bridge writes ANTHROPIC_BASE_URL into the env dictionary that is passed to the Node ACP bridge subprocess via Process.environment on line 484.
  4. Claude Agent SDK uses the override. The ACP bridge spawns the Claude Agent SDK as a child process. The SDK honors ANTHROPIC_BASE_URL out of the box and routes every Messages API call to that URL. Inference now happens wherever your gateway points it.
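The env propagation in steps 3 and 4 can be sketched from the Node side. This is an illustrative reconstruction, not the actual acp-bridge source: the helper name `buildChildEnv` is invented here, and only the non-empty check mirrors the quoted Swift logic.

```typescript
// Illustrative sketch (not the shipped acp-bridge code): how a custom
// endpoint propagates into the environment of a spawned SDK child process.
function buildChildEnv(
  base: Record<string, string | undefined>,
  customEndpoint: string | null,
): Record<string, string | undefined> {
  const env = { ...base };
  // Mirrors the Swift check: only set the override when non-empty.
  if (customEndpoint && customEndpoint.length > 0) {
    env["ANTHROPIC_BASE_URL"] = customEndpoint;
  }
  return env;
}

// The Claude Agent SDK child inherits the override, e.g.:
// spawn("node", ["bridge.js"], { env: buildChildEnv(process.env, url) });
```

Because the override travels as a plain environment variable, the SDK needs no Fazm-specific code: anything downstream that honors ANTHROPIC_BASE_URL picks it up for free.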

The exact lines, copied verbatim from the repo

// Desktop/Sources/Chat/ACPBridge.swift, around line 467
// Custom API endpoint (allows proxying through Copilot, corporate gateways, etc.)
if let customEndpoint = defaults.string(forKey: "customApiEndpoint"),
   !customEndpoint.isEmpty {
  env["ANTHROPIC_BASE_URL"] = customEndpoint
}

// Desktop/Sources/MainWindow/Pages/SettingsPage.swift, line 984
// (placeholder string elided here, bound to customApiEndpoint state)
TextField(placeholderHint, text: $customApiEndpoint)
  .textFieldStyle(.roundedBorder)
  .scaledFont(size: 13)
  .onSubmit {
    Task { await chatProvider?.restartBridgeForEndpointChange() }
  }

The direct Anthropic API supports inference geos of 'us' and 'global' only. Workspace geo is currently 'us' only. EU-only inference requires AWS Bedrock or Vertex AI EU regions.

Anthropic data residency docs, May 2026

Cloud-only versus local-plus-EU-endpoint

The two models, viewed from the seat of an EU operator who wants inference to stay in Germany.

What changes when you put the agent layer on your Mac

Hosted computer-use service. The agent runtime is in a US datacenter. Your screen content travels to the US for the agent to interpret it before any LLM call happens. The vendor's EU rollout (or lack of one) decides whether you can use the product at all this quarter.

  • Agent + inference both vendor-controlled, both in US
  • Whole product gated by vendor's EU rollout schedule
  • Pro-tier subscription typically required
  • Screen pixels traverse the wire even on simple text reads

What stays on the Mac, by data category

The cloud call is unavoidable when you are using a frontier model. But the question of what is in that call has more granularity than people assume.

Voice audio

Stays on the Mac. WhisperKit runs Apple's Core ML port of Whisper locally. The model file ships with the app. Only the transcribed text becomes part of the prompt.

Native macOS app state (Mail, Notes, Calendar, Numbers, etc.)

Read as accessibility-tree text via the bundled mcp-server-macos-use binary. Roles, labels, frame coordinates, and visible values are returned as UTF-8. No screenshots are taken for these tools.

Browser DOM

Read by Playwright with --image-responses omit. Snapshots are written to /tmp/playwright-mcp. The accessibility-snapshot text goes to the model; the PNG stays on disk unless the model explicitly fetches it by path.

Screenshots, when they do go to the model

Resized to a 1920px max edge by sips before they leave the machine, capped at 20 image-bearing turns per session, then encoded as base64 in the Messages API call to your configured endpoint.

The prompt and tool results

They go to the configured endpoint. The default is api.anthropic.com (US/global routing). Set the Custom API Endpoint to a Bedrock Frankfurt gateway and this becomes an EU-resident call.
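The two screenshot guards described above can be expressed as a short sketch. The constant names echo the ones cited from acp-bridge/src/index.ts, but the function bodies here are a simplified reconstruction, not the shipped code.

```typescript
// Simplified reconstruction of the two screenshot guards (illustrative).
const MAX_SCREENSHOT_DIM = 1920; // max edge in pixels before resize
const MAX_IMAGE_TURNS = 20;      // image-bearing turns allowed per session

// Compute the post-resize dimensions for a capture, preserving aspect ratio.
function resizeTarget(w: number, h: number): { w: number; h: number } {
  const maxEdge = Math.max(w, h);
  if (maxEdge <= MAX_SCREENSHOT_DIM) return { w, h }; // already small enough
  const scale = MAX_SCREENSHOT_DIM / maxEdge;
  return { w: Math.round(w * scale), h: Math.round(h * scale) };
}

// Decide whether the current turn may still carry an image.
function mayAttachImage(imageTurnsSoFar: number): boolean {
  return imageTurnsSoFar < MAX_IMAGE_TURNS;
}
```

A Retina capture of 3840x2160 would come out at 1920x1080 under this logic, and the twenty-first image-bearing turn in a session would be refused.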

Practical setup for an EU-resident inference path

The translation step is the only piece that is not pure UI. Bedrock and Vertex use their own protocols, so something has to translate Anthropic Messages API calls into Bedrock InvokeModel or Vertex predict calls. The simplest open-source option is LiteLLM in proxy mode, which speaks Anthropic on the inbound side and Bedrock on the outbound side.

  1. Stand up an Anthropic-compatible proxy in an EU region (Frankfurt, Paris, Stockholm). LiteLLM, an Express wrapper around the official anthropic-bedrock SDK, or any of the open-source Anthropic-to-Bedrock adapters all work.
  2. Configure the proxy with AWS credentials scoped to the Bedrock region of your choice and the Claude model IDs Bedrock exposes there.
  3. Verify the proxy by curling /v1/messages with an Anthropic-style payload. The response should match the Anthropic SDK schema.
  4. In Fazm, open Settings, AI Chat, Custom API Endpoint. Toggle the switch on, paste the proxy URL (the public hostname plus /v1 path you exposed in step 1), press Return.
  5. Fazm restarts the bridge. The next Messages API call goes to your proxy, the proxy translates and signs the AWS request, and the actual inference runs in Bedrock Frankfurt.
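Step 3's smoke test can be scripted. The payload below is the minimal Anthropic Messages shape; the proxy hostname and model ID are placeholders you substitute with your own values.

```typescript
// Minimal Anthropic-style Messages payload for smoke-testing the proxy.
// The fetch URL and model ID below are placeholders, not real endpoints.
type MessagesPayload = {
  model: string;
  max_tokens: number;
  messages: { role: "user" | "assistant"; content: string }[];
};

function smokeTestPayload(model: string): MessagesPayload {
  return {
    model,
    max_tokens: 64,
    messages: [{ role: "user", content: "Reply with the word ok." }],
  };
}

// Against your EU proxy (placeholder hostname and key):
// await fetch("https://your-proxy.example/v1/messages", {
//   method: "POST",
//   headers: {
//     "content-type": "application/json",
//     "x-api-key": process.env.PROXY_KEY ?? "",
//     "anthropic-version": "2023-06-01",
//   },
//   body: JSON.stringify(smokeTestPayload("your-bedrock-model-id")),
// });
```

If the proxy translates correctly, the response body matches the Anthropic SDK schema (a content array with a text block), which is exactly the check step 3 calls for.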

For Mode A (Fazm built-in credits) the same path works because the ACP bridge does not care whether the upstream key is Anthropic-direct or proxy-issued. For Mode B (your own Claude OAuth) the OAuth tokens are scoped to api.anthropic.com, so a proxy in front would need to impersonate the OAuth flow, which is not a practical setup. EU residency therefore pairs naturally with API-key mode and a personal AWS account.

Three things this approach does not solve

Worth saying clearly so the page is not a sales pitch.

  1. It does not exempt you from EU AI Act obligations if your downstream use case is high-risk. Local execution is not a regulatory shortcut for screening job applicants or scoring credit. The Act applies to deployers regardless of whether inference happens in Frankfurt or San Francisco.
  2. It does not change Anthropic's data retention policy on whatever endpoint you do call. Bedrock has its own retention model (no training on customer data, region-scoped logs), the direct Anthropic API has its own. Your DPA needs to match whichever you pick.
  3. It does not magic away the latency. A round-trip from a Mac in Berlin to Bedrock Frankfurt and back is genuinely faster than the same trip to us-east-1, but a frontier-model response is still a model-bound second or two. Voice-first interaction feels slower over a transatlantic link, faster over an EU one, but neither is instantaneous.

Want to talk through an EU-resident setup before you wire it up?

Walk through your specific compliance constraint and what the Bedrock proxy needs to look like. 30 minutes, no sales pitch, just a screen-share.

Common questions

Can EU users run a local computer-use agent on macOS today?

Yes. A native Mac app like Fazm has no geographic gate at the agent layer. The download works in every EU member state, and the runtime that drives your browser, Google Apps, and native macOS apps lives on your machine. The only place geography becomes a question is the cloud LLM call, and that is configurable.

Is Operator (OpenAI) available in the EU now?

Yes, since March 2025, but on the Pro tier only. OpenAI announced Operator availability for Pro users in the EU, Switzerland, Norway, Liechtenstein, and Iceland in March 2025. Pro is 200 USD per month at the time of writing. ChatGPT Atlas is generally available on macOS in the EU, but its agent mode is a paid-tier feature too.

What about Anthropic's hosted computer-use products?

The direct Anthropic API is available across all EU member states. The newer Claude Cowork product currently routes all inference through US infrastructure with no EU residency option, which is tracked publicly on the claude-code GitHub issues. Anthropic has indicated Microsoft Foundry will host Claude on Azure EU later in 2026, but no firm date has been published. For now, the EU residency path on hosted Anthropic models is AWS Bedrock (Frankfurt, Paris, Stockholm) or Google Cloud Vertex AI EU regions.

What does Fazm's Custom API Endpoint setting actually do?

It is a string field in Settings, AI Chat, Custom API Endpoint. The value is persisted in the customApiEndpoint UserDefault (declared at SettingsPage.swift:885 and rendered as a TextField at line 984). When the ACP bridge subprocess starts, ACPBridge.swift lines 467-470 read that value and export it as ANTHROPIC_BASE_URL in the bridge environment. The Claude Agent SDK running inside the bridge honors that variable and sends requests to your URL instead of api.anthropic.com. Point it at a translation proxy that fronts Bedrock Frankfurt or Vertex AI EU and the inference happens in the EU.

What about the screen content - does that leave my Mac?

Less of it than most people assume. For native macOS apps, Fazm uses mcp-server-macos-use, a Swift MCP server that reads the accessibility tree via AXUIElement. Roles, labels, positions, and values come back as UTF-8 text, not pixels. For browser tasks, Playwright is launched with --image-responses omit (acp-bridge/src/index.ts line 1491), which writes screenshots to /tmp/playwright-mcp on disk and skips the inline base64 blob that would otherwise land in the model context. When images do go to the model, they are resized to a 1920px max edge by sips (MAX_SCREENSHOT_DIM at line 1037) and capped at 20 image-bearing turns per session (MAX_IMAGE_TURNS at line 1242).

Does GDPR allow sending screen text to a US-based LLM at all?

GDPR allows international transfers under several mechanisms (Standard Contractual Clauses, adequacy decisions, the EU-US Data Privacy Framework). Anthropic and OpenAI both publish DPAs with SCCs. The compliance question is whether your specific data category and processor agreement combination clear your internal review. The architectural lever Fazm gives you is choice: leave the endpoint default and inference goes to api.anthropic.com (currently US/global routing), or point it at a Bedrock Frankfurt gateway and the inference itself happens in Germany. The agent layer choice is independent of the inference layer choice.

Do I need a developer setup to use Bedrock Frankfurt with Fazm?

You need an Anthropic-compatible HTTP gateway in front of Bedrock, because Bedrock speaks its own AWS-signed protocol and Fazm sends standard Anthropic Messages API calls. LiteLLM, Anthropic's own anthropic-bedrock SDK wrapped in a small Express server, or any of the open-source proxies in this space all work. Once that gateway is reachable at the URL where you stood it up, paste that URL into Custom API Endpoint and Fazm uses it from that point on. The translation step is the developer-flavored part, the Fazm side is one paste.

Is voice-first interaction available in the EU?

Yes. Fazm's voice transcription runs on-device with WhisperKit (Apple's Core ML port of Whisper). The audio never leaves the Mac for the speech-to-text step, so the EU AI Act's biometric-data sensitivities around voice processing do not apply to the Whisper layer. The transcribed text then becomes part of the prompt that goes to whichever LLM endpoint you configured.

What about EU AI Act high-risk classification - does a computer-use agent qualify?

Reading the regulation literally, a generic productivity agent that automates a user's own desktop tasks is not in any of the high-risk Annex III use cases (employment screening, credit scoring, biometric ID, etc.). It is the deployer's responsibility to classify their specific use case. If you build a workflow where the agent screens job applications or makes credit decisions, that is high-risk regardless of whether the agent runs locally or in the cloud. Local execution does not exempt you from the Act, but it does make documentation easier because there is no cross-border transfer to disclose at that layer.

Where can I read the source code that this page cites?

Fazm is MIT-licensed at github.com/m13v/fazm. The Swift desktop app source is under Desktop/Sources, the Node.js ACP bridge is under acp-bridge/src. Specific paths used here: Desktop/Sources/MainWindow/Pages/SettingsPage.swift for the Custom API Endpoint UI, Desktop/Sources/Chat/ACPBridge.swift for the env-var export, acp-bridge/src/index.ts for the MCP server registration and screenshot guards. Line numbers track HEAD at the time of writing and may drift as the codebase evolves.