April 13-14, 2026 open-source window · ANTHROPIC_BASE_URL · Fazm v2.2.0, Apr 11

Open source AI on April 13-14, 2026, and the one env var that lets any of those weights operate a real Mac

Every roundup for the week names the weights. OLMo 2 32B. Codestral 2. Llama 4 Maverick. NVIDIA Ising. None of them covers the piece a consumer Mac user needs to actually run these models against real work: a single environment variable on the client side and a proxy that speaks the Anthropic shape. Fazm shipped the client side of that seam on April 11, 2026, in v2.2.0, two days before this SERP window opened. This guide walks the seam, with line-numbered Swift, a LiteLLM config, and a verifiable path in the checked-in source.

Fazm
11 min read
Anchor line: env["ANTHROPIC_BASE_URL"] = customEndpoint in ACPBridge.swift:381
Shipped in Fazm v2.2.0 on 2026-04-11, two days before this April 13-14 window
LiteLLM config maps Anthropic model IDs to OLMo 2 32B, Codestral 2, Llama 4 Maverick

Open source AI projects and tools in the April 13-14, 2026 window

OLMo 2 32B · Codestral 2 · Llama 4 Maverick · NVIDIA Ising · Qwen 3.6 Plus · GLM-5.1 · Gemma 4 · PrismML Bonsai 8B · vLLM 0.8.x · llama.cpp · Ollama · LiteLLM · SGLang · Goose · Google ADK

The anchor fact

One line of Swift, one env var, two days before the window

The seam that connects the April 13-14, 2026 open-source release wave to a working Mac agent is one assignment inside the ACP bridge launcher. It reads a UserDefaults key, writes an environment variable into the child process, and walks away. That is the whole thing.

ACPBridge.swift lines 379-382 and 396
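With minimal context around it, the assignment reads like this. The three-line guard is the block quoted in the FAQ further down this page; the `Process` scaffolding around it is a sketch, not the checked-in launcher:

```swift
import Foundation

// Sketch of the spawn path around ACPBridge.swift:379-382. The if-let guard
// is the quoted source; the surrounding Process setup is illustrative.
let defaults = UserDefaults.standard
let bridge = Process()
var env = ProcessInfo.processInfo.environment

if let customEndpoint = defaults.string(forKey: "customApiEndpoint"),
   !customEndpoint.isEmpty {
    env["ANTHROPIC_BASE_URL"] = customEndpoint  // line 381: the whole seam
}

bridge.environment = env
// bridge.run() then hands the child whichever backend answers at that URL.
```

Nothing downstream of this line knows or cares what serves the endpoint; the child process just reads `ANTHROPIC_BASE_URL` from its environment.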

How the open-weights fan-in works

Many weights, one endpoint, one accessibility-tree tool surface

On the left: the open weights and projects that landed in the April 13-14, 2026 window. In the middle: Fazm's Custom API Endpoint, which reads as ANTHROPIC_BASE_URL inside the ACP bridge. On the right: the tool surface that any of those models inherits once they are wired in.

Open weights to desktop control, via one endpoint

OLMo 2 32B
Codestral 2
Llama 4 Maverick
NVIDIA Ising
Qwen 3.6 Plus
Fazm Custom Endpoint
Desktop control via AX APIs
Same skills, same memory
ACP v0.25.0 error frames
Local-first routing

What each layer does on your Mac

The four moving parts between an open-weights release and a click

1. The open weights

OLMo 2 32B (Allen AI, fully open), Codestral 2 (Mistral, Apache 2.0), Llama 4 Maverick (Meta open weights), NVIDIA Ising (open quantum-AI). These land on Hugging Face, GitHub, or Ollama registries and drive the April 13-14 roundup cycle.

2. The inference server

vLLM 0.8.x, llama.cpp server, Ollama, or SGLang hosts the weights and exposes an OpenAI-compatible /v1/chat/completions endpoint on a local port.

3. The Anthropic shim

LiteLLM (or an equivalent Anthropic-compatible proxy) translates /v1/messages-shaped requests into whatever /v1/chat/completions shape the backend speaks, remapping Anthropic model IDs to open-weights models in a YAML config.

4. The Mac consumer app

Fazm. One env var (ANTHROPIC_BASE_URL) changes who answers. Skills, accessibility-tree tools, memory graph, pop-out chat, and floating bar all stay identical.

The line of code that ties it together

env["ANTHROPIC_BASE_URL"] = customEndpoint at ACPBridge.swift line 381. Shipped in v2.2.0 on April 11, 2026, two days before the April 13-14 open-source window.

The Settings toggle, in checked-in source

The TextField that writes to the @AppStorage key

The placeholder text in the UI is literally https://your-proxy:8766. The settings card copy reads "Route API calls through a custom endpoint (e.g. corporate proxy, GitHub Copilot bridge)." The phrase "GitHub Copilot bridge" is the hint that this seam was designed for external backends, not just network proxies. A LiteLLM instance in front of OLMo 2 32B qualifies as one such bridge.

SettingsPage.swift lines 837, 932-947

The LiteLLM config that finishes the circuit

Map Anthropic model IDs to open weights

Fazm's chat layer asks for claude-3-opus-20240229 or claude-3-5-sonnet-20241022 on every turn. LiteLLM intercepts that request at ANTHROPIC_BASE_URL and remaps it to whichever open-weights model is listed under that name in litellm.yaml. Here is a working mapping for the April 13-14, 2026 releases.

~/.litellm/config.yaml
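A sketch of such a mapping, in the `model_list` shape LiteLLM's proxy config uses. The backend model identifiers and ports below are assumptions (a vLLM-served OLMo checkpoint, an Ollama tag for Llama 4 Maverick); swap in whatever your local servers actually expose:

```yaml
model_list:
  # Fazm's opus-class requests -> OLMo 2 32B on a local vLLM server
  - model_name: claude-3-opus-20240229
    litellm_params:
      model: hosted_vllm/allenai/OLMo-2-32B-Instruct   # checkpoint name assumed
      api_base: http://localhost:8000/v1
  # Fazm's sonnet-class requests -> Llama 4 Maverick via Ollama
  - model_name: claude-3-5-sonnet-20241022
    litellm_params:
      model: ollama/llama4-maverick                    # Ollama tag assumed
      api_base: http://localhost:11434
```

Run `litellm --config ~/.litellm/config.yaml --port 8080` and both Anthropic model IDs resolve to open weights.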

The April 13-14, 2026 open-source wave lands on a Mac as a two-line config change, not a new app build.

Fazm CHANGELOG.json, v2.2.0, 2026-04-11

The numbers that make this uncopyable

Everything on this page maps to a line in the Fazm repo

381
Line in ACPBridge.swift where ANTHROPIC_BASE_URL is written
837
Line in SettingsPage.swift with the @AppStorage backing key
v2.2.0
Fazm release that introduced the Custom API Endpoint setting
2 days
Gap between v2.2.0 ship date (Apr 11) and the Apr 13 window
1 line of Swift to wire it
1 env var written
3 source files involved
2 days before April 13 window

End-to-end setup

From open-weights release to driving a Mac, in five steps

April 13-14, 2026 open-weights release to desktop control

1

1. Pick an open model from the April 13-14 window

OLMo 2 32B via Hugging Face, Codestral 2 via vLLM, Llama 4 Maverick via Ollama. Any of these will do. Fazm's tool surface is text-first, so vision is not required.

2

2. Serve it on a local port

ollama serve on 11434, vllm serve codestral-2 --port 8000, or llama-server --port 8081. Whatever you already run for local inference; just keep the port clear of the 8080 that the LiteLLM proxy takes in the next step.

3

3. Put LiteLLM in front of it

litellm --config ~/.litellm/config.yaml --port 8080. The config remaps Anthropic model IDs (claude-3-opus-20240229, claude-3-5-sonnet-20241022) to your local open-weights endpoints.

4

4. Flip the Custom API Endpoint toggle in Fazm

Settings > Chat > Custom API Endpoint. Paste http://localhost:8080. Save. Fazm restarts the ACP bridge with ANTHROPIC_BASE_URL set to your LiteLLM instance. This is the v2.2.0 feature, shipped 2026-04-11.

5

5. Ask Fazm to do something on your Mac

Start a chat. The model on the other end is now whichever open-weights model your LiteLLM config points at. The skill library, the memory graph, the accessibility-tree tools all behave identically. Only the weights changed.
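Collapsed into a terminal transcript, the five steps look roughly like this (the Ollama path from step 2; ports are the page's examples, and the `command -v` guards just make the sketch a no-op where a tool is not installed):

```shell
# 1-2. Serve an open model locally (Ollama listens on 11434 by default).
command -v ollama >/dev/null && ollama serve &

# 3. Put LiteLLM in front of it, speaking the Anthropic shape on 8080.
command -v litellm >/dev/null && litellm --config ~/.litellm/config.yaml --port 8080 &

# 4. In Fazm: Settings > Chat > Custom API Endpoint -> http://localhost:8080.
#    That toggle is what ends up writing this variable into the ACP bridge child:
echo "bridge env will contain ANTHROPIC_BASE_URL=http://localhost:8080"

# 5. Start a chat in Fazm; turns now route through LiteLLM to the mapped model.
```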

What the swap actually looks like

Same chat window, new backend

ACP bridge spawn with custom endpoint

Verify every claim on this page

Three greps, one CHANGELOG lookup

Nothing on this page is argued from authority. Every claim maps to a line in the Fazm checked-in source that a reader with the repo can run grep against. Here are the four commands.

Verification transcript
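A sketch of that transcript. The paths and expected line numbers are the ones quoted in the FAQ below; the guard makes the script a harmless no-op on a machine without the repo checked out:

```shell
# Run from a machine with the Fazm repo at the path the FAQ quotes.
cd /Users/matthewdi/fazm 2>/dev/null || { echo "repo not present"; exit 0; }

# 1. The env var injection (expect a hit near line 381 of ACPBridge.swift).
grep -n 'ANTHROPIC_BASE_URL' Desktop/Sources/Chat/ACPBridge.swift

# 2. The @AppStorage backing key (expect a hit near line 837).
grep -n 'customApiEndpoint' Desktop/Sources/MainWindow/Pages/SettingsPage.swift

# 3. The provider re-reading the same key (expect a hit near line 2103).
grep -n 'customApiEndpoint' Desktop/Sources/Providers/ChatProvider.swift

# 4. The CHANGELOG entry that dates the feature to 2026-04-11.
grep -n '2.2.0' CHANGELOG.json
```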

Want to try the April 13-14, 2026 open weights on your own Mac today?

Download Fazm, then flip the Custom API Endpoint toggle in Settings once LiteLLM is running. Two screens, no app reinstall.

Download Fazm free

Frequently asked questions

What were the headline open source AI projects and tools on April 13-14, 2026?

The window covered Allen AI's OLMo 2 32B fully-open weights continuing to roll through fine-tune and evaluation forks, Mistral's Codestral 2 (Apache 2.0) picking up momentum for local coding via llama.cpp and vLLM, Meta's Llama 4 Maverick open weights driving a wave of task-specific distillations, NVIDIA's Ising family of open AI models for quantum error correction, and new inference-side updates landing in vLLM 0.8.x, llama.cpp, SGLang, and LiteLLM (the Anthropic-compatible proxy layer this page uses as its anchor). On the agent-framework side, Google ADK and Goose both pushed new open-source agent scaffolding the same week.

What is the one environment variable this page says lets any open source model drive a Mac?

ANTHROPIC_BASE_URL. Fazm's Custom API Endpoint setting writes a URL into that variable on the agent bridge child process. The exact line is in Desktop/Sources/Chat/ACPBridge.swift at line 381, which reads env["ANTHROPIC_BASE_URL"] = customEndpoint. Anything that speaks the Anthropic request shape at that URL (LiteLLM with its Anthropic adapter, an Anthropic-compatible proxy in front of vLLM, llama.cpp's OpenAI-compatible server wrapped by LiteLLM's anthropic_to_openai translator, a corporate gateway) gets to handle the turn. The model on the other end can be OLMo 2 32B, Codestral 2, Llama 4 Maverick, Qwen, anything.

Where in the Fazm source does this actually live?

Three files. /Users/matthewdi/fazm/Desktop/Sources/Chat/ACPBridge.swift lines 379-382 is where the env var is injected into the ACP bridge child process. /Users/matthewdi/fazm/Desktop/Sources/MainWindow/Pages/SettingsPage.swift line 837 is where the @AppStorage("customApiEndpoint") backing property lives, and lines 904-948 render the toggle and TextField with placeholder text "https://your-proxy:8766". /Users/matthewdi/fazm/Desktop/Sources/Providers/ChatProvider.swift line 2103 is where the provider reads the same UserDefaults key when rebuilding the HTTP client on endpoint changes.

When exactly did this ship relative to the April 13-14, 2026 window?

Two days before. The Custom API Endpoint setting landed in Fazm v2.2.0 on April 11, 2026. The CHANGELOG.json line reads: "Added custom API endpoint setting for proxies and corporate gateways." That is 2026-04-11, indexed by version 2.2.0. So by the time open source roundups published April 13 and 14 posts naming which weights had dropped, the consumer-side seam for routing any of those weights into a Mac agent had been shipping for 48-72 hours.

Does this mean Fazm is an open source project itself?

No. Fazm is a closed-source consumer Mac app. The interesting claim is narrower: Fazm composes with the open source ecosystem through an endpoint seam. The ACP bridge it starts is just a Node child process reading ANTHROPIC_BASE_URL from its environment, so a user who runs LiteLLM, a vLLM server with an Anthropic shim, or any other Anthropic-compatible proxy pointed at open weights can route a full Fazm session (chat, desktop control, memory graph, skills) through their open-model backend without Fazm ever touching the weights.

Why do most April 13-14 open source AI roundups miss this angle?

Because the roundups cover the upstream side of the stack (weights, repos, papers, benchmarks) and stop before the client. The downstream question (what does it take for a non-developer Mac user to actually use these releases for real desktop work) lives in a consumer app's Settings screen and a sixty-line LiteLLM config, not on Hugging Face or GitHub Trending. It is also invisible to anyone who has not installed the consumer app, so there is no path by which a typical roundup author would find it.

What does LiteLLM have to do with this?

LiteLLM is the open source proxy that speaks the Anthropic request shape on one side and translates it to any provider (OpenAI, HuggingFace, vLLM, Ollama, llama.cpp server) on the other. It is the standard way to put an Anthropic-compatible endpoint in front of an open-weights model. Once you run litellm --config litellm.yaml --port 8080 locally, you set Fazm's Custom API Endpoint to http://localhost:8080, and Fazm will route every turn to whichever open model you configured in litellm.yaml. The CHANGELOG hint that this was the intended use is the phrase "proxies and corporate gateways" in the v2.2.0 entry.

Can I verify the env var injection myself?

Yes. Open /Users/matthewdi/fazm/Desktop/Sources/Chat/ACPBridge.swift in any editor and jump to line 379. The three-line block reads: if let customEndpoint = defaults.string(forKey: "customApiEndpoint"), !customEndpoint.isEmpty { env["ANTHROPIC_BASE_URL"] = customEndpoint }. Then open Desktop/Sources/MainWindow/Pages/SettingsPage.swift at line 837 to see the @AppStorage backing key and at line 933 to see the TextField with placeholder "https://your-proxy:8766", and Desktop/Sources/Providers/ChatProvider.swift at line 2103 to see the provider read the same key. Three files, three confirmations. Nothing on this page is invented.

Does the model see screenshots, or accessibility-tree text?

Accessibility-tree text. Fazm's desktop control goes through native macOS accessibility APIs (AXUIElementCreateApplication, AXUIElementCopyAttributeValue) via a bundled mcp-server-macos-use binary, not screenshots piped into a vision model. The significance for open source release week is that the tool surface presented to the model is text, so open weights like Codestral 2 and OLMo 2 32B, which have no vision capability, can still drive Mac apps as well as any frontier model. The open-weights story is not gated on whether your model has vision.
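The two API calls named above are real macOS accessibility entry points. A sketch of the kind of read they enable (the PID is illustrative, and the call requires the Accessibility permission in System Settings):

```swift
import ApplicationServices

// Sketch: read one text attribute from a running app's AX tree.
// Everything the model sees arrives as text like this, never pixels.
let pid: pid_t = 12345                       // some running app's PID (assumed)
let app = AXUIElementCreateApplication(pid)  // root element of that app's tree

var value: CFTypeRef?
let err = AXUIElementCopyAttributeValue(app, kAXTitleAttribute as CFString, &value)
if err == .success, let title = value as? String {
    print("app title: \(title)")             // usable by any text-only model
}
```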

What is the difference between "local-first" and "fully local" in Fazm's copy?

Fazm's v2.3.2 CHANGELOG entry on 2026-04-16 reads: "Tightened privacy language in onboarding and system prompts to accurately say 'local-first' instead of 'nothing leaves your device.'" Relevant here because as soon as a user points the Custom API Endpoint at a hosted open-weights proxy (or even a localhost proxy that itself reaches out for inference), the router can leave the machine. Fully-local only holds when the ANTHROPIC_BASE_URL points at a localhost proxy backed by a localhost vLLM or llama.cpp process. Local-first is the honest default frame.

The April 13-14, 2026 open source AI release window, on a Mac

The weights ship upstream. The proxy ships as open source. The consumer-app seam that lets those weights operate a real Mac was already shipping two days before this SERP window opened, and it lives at line 381 of one Swift file.

Download Fazm free
fazm · AI Computer Agent for macOS
© 2026 fazm. All rights reserved.
