The April 2026 LLM release that actually reached a Mac user was one substring match, not a benchmark chart.
Every roundup this month lists the same nine releases ranked by SWE-bench and MMLU. None of them describe the consumer-app layer where those model names have to become clickable buttons. Fazm's answer is three const strings and a three-row substring table: DEFAULT_MODEL at acp-bridge line 1245, modelFamilyMap at ShortcutSettings line 160, and an emitModelsIfChanged broadcast at line 1278 that turns a new modelId into a new button with zero app update.
April 2026 release chips + the shipping-code names that route them
What every April 2026 LLM roundup covers, and the one thing they all miss
If you search 'llm release april 2026', the top results agree on the facts: at least nine major releases shipped in the first two weeks, weighted toward context-efficiency improvements and open-source parity. Opus 4.7 GA. GPT-5 Turbo. Claude Mythos Preview. Gemma 4 under Apache 2.0. GLM-5.1 754B MoE. Qwen 3.6-Plus with 1M context. DeepSeek V3.2. Grok 4.1. Every source ranks them by SWE-bench, long-context, MMLU. None describe the question a consumer Mac user has the next day: how does any of this become a button I can click?
The SERP gap, item by item
What the top results for 'llm release april 2026' do not cover
- llm-stats.com: ranks April 2026 models by benchmarks; no client-side integration angle
- unite.ai best LLMs April 2026: feature comparison table; no consumer app surface described
- buildfastwithai.com best AI models April 2026: benchmark rankings; no Mac integration path
- llmgateway.io timeline: tracks release dates; stops at the API boundary
- whatllm.org new AI models April 2026: editorial take on open source vs closed; no UI layer
- aiflashreport.com model releases: comprehensive table; no chat-client layer described
The anchor fact: three const strings and one substring match
The entire 'add support for the new April 2026 LLM' path inside Fazm collapses to a handful of named constants. DEFAULT_MODEL is the only hardcoded model ID in the 2,700-line bridge; every other line reads from it. The substring family map is three rows. emitModelsIfChanged is a one-line broadcast. incomingSessionKey is a single null-coalesce that keys a separate ACP session per model. Together they mean a new release gets a new button with zero app update required.
The six lines a new April 2026 LLM release touches
- Line 1245: const DEFAULT_MODEL = 'claude-sonnet-4-6' (the only hardcoded model ID)
- Lines 160-163: modelFamilyMap substring table ('haiku','sonnet','opus' → 'Scary','Fast','Smart')
- Line 1278: send({ type: 'models_available', models: filtered }) (the broadcast)
- Line 1390: incomingSessionKey = msg.sessionKey ?? (msg.model || DEFAULT_MODEL)
- Line 1475: acpRequest('session/set_model', { sessionId, modelId: requestedModel })
- Lines 188-190: fallback branch for unknown model families (order=99, raw API name)
The one broadcast line (bridge side)
acp-bridge/src/index.ts lines 1245 through 1290. DEFAULT_MODEL is the fallback. emitModelsIfChanged filters the ACP SDK's raw model list (removing the 'default' pseudo-model), hashes it, and sends one JSON line over stdout when it changes. This is the single outbound message that ties a new Anthropic release to a new Fazm button.
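The filter-dedupe-broadcast sequence compresses well into code. A minimal TypeScript sketch of emitModelsIfChanged, assuming simplified shapes — the AcpModel interface and the send helper here are illustrative, not the exact declarations in acp-bridge:

```typescript
// Sketch of the bridge-side broadcast described above. Names follow the
// article's description; the real shapes in acp-bridge may differ.
interface AcpModel {
  modelId: string;
  name: string;
}

const DEFAULT_MODEL = 'claude-sonnet-4-6'; // the one hardcoded model ID

let lastEmittedModelsJson = '';

// send() stands in for the bridge's one-JSON-line-per-message stdout writer.
function send(msg: unknown): void {
  process.stdout.write(JSON.stringify(msg) + '\n');
}

function emitModelsIfChanged(raw: AcpModel[]): boolean {
  // Drop the SDK's 'default' pseudo-model before broadcasting.
  const filtered = raw.filter((m) => m.modelId !== 'default');
  const json = JSON.stringify(filtered);
  if (json === lastEmittedModelsJson) return false; // unchanged: stay silent
  lastEmittedModelsJson = json;
  send({ type: 'models_available', models: filtered });
  return true;
}
```

The dedupe hash is just the stringified list, which is all the comparison needs: a new modelId anywhere in the array changes the JSON and triggers exactly one broadcast.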
The three-row substring match (Swift side)
Desktop/Sources/FloatingControlBar/ShortcutSettings.swift lines 151 through 216. modelFamilyMap has three rows. Any inbound modelId that contains one of the three substrings gets a Scary/Fast/Smart label with a stable sort order. Anything else falls through to the raw API name at order=99. This is why Gemma 4, GLM-5.1, and GPT-5 Turbo render with their literal names when you point Fazm at a custom endpoint, while Opus 4.7 inherits the same 'Smart' shelf Opus 4.6 had.
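The Swift table is small enough to sketch in the bridge's language. A TypeScript rendering of the match-or-fall-through rule — binModel and ModelOption are illustrative names; the three rows and the order=99 fallback are from the description above:

```typescript
// TypeScript sketch of ShortcutSettings.swift's three-row substring map.
// Row contents follow the article; helper names are illustrative.
interface ModelOption {
  modelId: string;
  label: string;
  order: number;
}

const modelFamilyMap: Array<{ substring: string; label: string; order: number }> = [
  { substring: 'haiku',  label: 'Scary', order: 0 },
  { substring: 'sonnet', label: 'Fast',  order: 1 },
  { substring: 'opus',   label: 'Smart', order: 2 },
];

function binModel(modelId: string, displayName?: string): ModelOption {
  const row = modelFamilyMap.find((r) => modelId.includes(r.substring));
  if (row) return { modelId, label: row.label, order: row.order };
  // Unknown family: fall through to the raw API name at order=99.
  return { modelId, label: displayName ?? modelId, order: 99 };
}
```

Run 'claude-opus-4-7' through it and the 'opus' row fires; run 'gpt-5-turbo' through it and nothing matches, so the raw name sorts to the end of the picker.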
Per-model session isolation, one line
acp-bridge/src/index.ts line 1390. Each inbound query carries an optional model field. The session key is either explicit or falls back to the model name or finally DEFAULT_MODEL. This is how you can run Opus 4.7 in the main chat and Claude Mythos in the floating bar inside one process, with independent abort controllers and separate server-side sessionIds.
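The keying rule itself is one expression. A minimal sketch of how it isolates sessions — sessionKeyFor mirrors the null-coalesce quoted above, while the sessions registry shape here is illustrative:

```typescript
// Sketch of per-model session keying (the line-1390 null-coalesce).
// Only the key rule is from the source; the registry shape is illustrative.
const DEFAULT_MODEL = 'claude-sonnet-4-6';

interface InboundQuery {
  sessionKey?: string;
  model?: string;
}

function sessionKeyFor(msg: InboundQuery): string {
  // Explicit key wins, then the model name, then the hardcoded default.
  return msg.sessionKey ?? (msg.model || DEFAULT_MODEL);
}

// One ACP session (and one abort controller) per key.
const sessions = new Map<string, { abort: AbortController }>();

function sessionFor(msg: InboundQuery) {
  const key = sessionKeyFor(msg);
  let s = sessions.get(key);
  if (!s) {
    s = { abort: new AbortController() };
    sessions.set(key, s);
  }
  return s;
}
```

Two surfaces naming two models therefore land in two map entries, which is the whole isolation mechanism: aborting one never touches the other.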
The five-hop flow from provider catalog to Mac button
Anthropic flipping Opus 4.7 to GA on April 2 does not require a Fazm release. Here is what actually happens between their server-side catalog change and your app showing a new button.
How a new April 2026 modelId becomes a clickable button on a Mac
The six timeline steps from click to token
1. Provider adds the new model to its server-side catalog
Anthropic publishes opus-4-7 in the model catalog (April 2). OpenAI flips GPT-5 Turbo in the API (April 7). A self-hosted team restarts their Gemma 4 proxy with a new model ID in the manifest. At this moment, no client-side change has happened.
2. Claude Agent SDK returns it inside session/new's availableModels
The next time Fazm pre-warms a session (Desktop/Sources/Providers/ChatProvider boot path), the ACP SDK's session/new RPC includes result.models.availableModels with the new modelId. This is what the bridge hooks into: acp-bridge/src/index.ts line 1340 reads result.models?.availableModels and passes it to emitModelsIfChanged.
3. Bridge hashes, dedups, and broadcasts models_available
emitModelsIfChanged at index.ts lines 1271 to 1280 stringifies the filtered list and compares it to lastEmittedModelsJson. If unchanged, nothing happens. If changed, send() writes one JSON line to stdout: { type: 'models_available', models: [...] }. This is the single most important line in the whole 'LLM release support' story: line 1278.
4. Swift reads stdin, routes to ShortcutSettings.updateModels
ACPBridge.swift decodes the stdin JSON and invokes ShortcutSettings.updateModels(acpModels) on the main actor. The compactMap at lines 184 to 195 runs each inbound modelId through the three-row substring family map, choosing Scary/Fast/Smart when a substring matches and falling through to the raw API name when none do.
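The wire format matters more than the language here. A TypeScript sketch of the validation the Swift read loop conceptually performs on each stdin line — parseModelsLine is an illustrative name; the message shape is the one broadcast in step 3:

```typescript
// Illustrative parse/validate of the one-line models_available message.
// ACPBridge.swift does the equivalent in its stdin read loop.
interface ModelsAvailableMsg {
  type: 'models_available';
  models: Array<{ modelId: string; name: string }>;
}

function parseModelsLine(line: string): ModelsAvailableMsg | null {
  let raw: unknown;
  try {
    raw = JSON.parse(line);
  } catch {
    return null; // not JSON: ignore the line
  }
  const msg = raw as Partial<ModelsAvailableMsg>;
  if (msg?.type !== 'models_available' || !Array.isArray(msg.models)) return null;
  return msg as ModelsAvailableMsg;
}
```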
5. SettingsPage.swift ForEach re-renders the button row
Because availableModels is a @Published var, SwiftUI schedules a view update on the next tick. The ForEach(shortcutSettings.availableModels) at SettingsPage.swift lines 2013 to 2031 produces a new button per model. From the user's perspective, they open the app, go to Settings, and see the April 2026 release as a new clickable option. No update banner, no recompile, no TestFlight push.
6. User clicks the new button, session/set_model fires
The click updates selectedModel in UserDefaults. On the next handleQuery, line 1390 computes incomingSessionKey = msg.sessionKey ?? (msg.model || DEFAULT_MODEL) and routes to the correct ACP session. Line 1475 calls acpRequest('session/set_model', { sessionId, modelId: requestedModel }). From click to first token of response: one RPC round trip.
A live log: Opus 4.7 GA arriving on a Mac
What the Fazm log at /tmp/fazm.log shows when you launch the app for the first time after Anthropic flips opus-4-7 to GA in the model catalog on April 2, 2026.
The handshake, as a sequence diagram
models_available broadcast and session/set_model on click
The April 2026 release slate, mapped to the shipping-code path
Every release below travels the same five-hop path. The only thing that varies is which branch of the substring family map it matches.
Claude Opus 4.7 GA — April 2
Anthropic promoted Opus 4.7 to general availability on April 2. Inside Fazm, zero code changes were required: the modelId 'opus-4-7' arrived in session/new's availableModels, matched the 'opus' substring, and was labeled Smart (Opus, latest) on first launch. Existing conversations kept their Opus 4.6 session; a fresh session on the same key called session/set_model to upgrade.
Gemma 4 Apache 2.0 — April 2
Google's open-source release. Via a custom endpoint fronting a self-hosted Gemma 4 server, Fazm sees it as a non-haiku/sonnet/opus model. Falls through the substring map to the 'use the API name directly' branch at ShortcutSettings.swift line 188.
GPT-5 Turbo — April 7
OpenAI's multimodal flagship. Fazm reaches it through any Anthropic-compatible proxy (LiteLLM, Cloudflare AI Gateway). Same path as Gemma 4: arrives in models_available, skips the three-family map, renders with its raw name.
Claude Mythos Preview — April 7
Anthropic's partner-only preview. For the ~50 Project Glasswing organizations, Mythos IDs surface through the same emitModelsIfChanged broadcast. No UI gating in Fazm itself: if the SDK hands back the modelId, the bridge forwards it and the button appears.
GLM-5.1 (754B MoE) & GLM-5V-Turbo
Zhipu's MoE flagship + multimodal sibling, under MIT license. Identical Fazm path: custom endpoint + models_available forwarding. Button renders with raw 'glm-5.1-754b' name at order=99 because no substring matches Haiku/Sonnet/Opus.
Qwen 3.6-Plus — 1M-token context
Alibaba's agentic release. The 1M-context claim stresses the pre-warming path in acp-bridge at lines 1335 to 1345 (session pre-warm + session/set_model), which keeps one Qwen session parked so the first query does not pay the latency of both session/new and set_model.
DeepSeek V3.2 & Grok 4.1
Incremental point releases. Because the bridge has zero model allowlist, both show up in the picker the moment their proxy returns them in availableModels. No Fazm release needed for either.
Roundup page vs. shipping-code path: eight head-to-head rows
| Feature | Generic April 2026 LLM roundup | Fazm shipping code |
|---|---|---|
| Where the April 2026 release appears to a Mac user | As a section heading in a benchmark roundup article ranked by SWE-bench | As a new button in the picker, alongside Scary/Fast/Smart, with an optional family label |
| What happens when Anthropic flips 'opus' alias to opus-4-7 | The roundup adds a new row to its 'latest models' table | DEFAULT_MODEL const changes on next Fazm release; every other line stays identical |
| Support path for an open-source model (Gemma 4, GLM-5.1) | Describe the model's architecture; link to the provider's playground | Point customApiEndpoint at an Anthropic-compatible proxy; button appears automatically |
| How two April releases run side by side | Two browser tabs on two different provider consoles | Per-model session key: main chat on Opus 4.7 + floating bar on Mythos, one process |
| What a 'does Fazm support X' question resolves to | Maybe, check the provider's status page and your endpoint's model allowlist | Yes, if the SDK or proxy returns the modelId in availableModels (~30 lines of wiring) |
| Shipping cycle to pick up a new release | A new article gets published | Hours (config update on the provider side) to days (one-line DEFAULT_MODEL bump on ours) |
| Who decides the consumer-facing label | The model's press release or the benchmark leaderboard | A three-row substring table hardcoded in ShortcutSettings.swift: Scary/Fast/Smart |
| Where the actual model-switch call lives | Not applicable; roundups do not switch anything | acp-bridge/src/index.ts: acpRequest('session/set_model', { sessionId, modelId }) |
“Two hardcoded model IDs. Three substring-match rows. One broadcast line. That is the entire 'LLM release April 2026' story for a consumer Mac chat client.”
/Users/matthewdi/fazm/acp-bridge/src/index.ts + ShortcutSettings.swift
Need a Mac chat client that absorbs every April 2026 LLM release without waiting for us?
Book 20 minutes to see the three-const anchor live: new modelId arriving on stdout, substring match, button appearing, session/set_model firing.
Book a call →

Frequently asked questions
What LLMs were released in April 2026?
The month was unusually dense. Anthropic shipped Claude Opus 4.7 GA on April 2 along with a Claude Mythos Preview on April 7 (limited to ~50 partner organizations via Project Glasswing). OpenAI released GPT-5 Turbo on April 7 with native image and audio generation inside a single model. Google released the Gemma 4 family under Apache 2.0 on April 2. Zhipu released GLM-5.1 (754B MoE, MIT license) and GLM-5V-Turbo. Alibaba released Qwen 3.6-Plus with 1M-token context. DeepSeek released V3.2. xAI released Grok 4.1. At least nine significant models shipped in the first two weeks alone, and that count is conservative: it excludes point-release updates like Opus 4.7.1 and Haiku minor bumps.
What does a consumer Mac chat app have to do to support a new April 2026 model?
Inside Fazm, close to nothing. The ACP bridge reads the model list from the Claude Agent SDK's session response (result.models.availableModels), hashes it against the last broadcast, and if it changed calls send({ type: 'models_available', models: filtered }) at acp-bridge/src/index.ts line 1278. The Swift UI subscribes to that message, runs it through a three-row substring table at ShortcutSettings.swift lines 160 to 163, and re-renders the button row. When the Anthropic SDK starts returning 'claude-opus-4-7' or 'claude-mythos-preview' in its availableModels, a Fazm user with zero app updates sees new buttons appear the next time they open the app. The one place an app update is required is when the SDK itself rev-locks to a new major protocol shape.
Where does the three-string anchor live in the code?
The entire pipeline is anchored on three const strings and one substring-match table. First const: DEFAULT_MODEL = 'claude-sonnet-4-6' at acp-bridge/src/index.ts line 1245. This is the only place in the whole bridge where a model name is hardcoded, and it exists solely as the fallback when no model is specified on an incoming query. Second const: SONNET_MODEL = 'claude-sonnet-4-6' one line below, used by pre-warmed sessions. Third const: the event-type literal 'models_available' used at index.ts line 1278 inside send({ type: 'models_available', models: filtered }). The substring-match table is the three-row modelFamilyMap at ShortcutSettings.swift lines 160 to 163 that bins any new model ID into one of three consumer-facing families: Scary (Haiku), Fast (Sonnet), Smart (Opus). When OpenAI's GPT-5 Turbo or Zhipu's GLM-5.1 show up via a custom endpoint with names that do not contain any of those three substrings, ShortcutSettings.swift lines 188 to 190 fall through to 'use the API name directly' and the model renders as its raw ID.
How does a new release flow from provider to Fazm without an app update?
Five hops. First, the provider adds the new model to its server-side catalog (Anthropic's console, the OpenAI API, a self-hosted Gemma 4 server exposing an Anthropic-compatible interface). Second, the Claude Agent SDK's session/new response returns it inside the models.availableModels array. Third, the ACP bridge receives that response inside handleQuery's pre-warm path and calls emitModelsIfChanged (index.ts lines 1271 to 1280). Fourth, the bridge sends a single stdout JSON line: { type: 'models_available', models: [{ modelId: 'claude-opus-4-7', name: 'Opus 4.7, latest' }, ...] }. Fifth, ACPBridge.swift's read loop decodes it and calls ShortcutSettings.updateModels, which re-bins every new ID through the substring family table at lines 180 to 216 and republishes availableModels on the @Published var. The SwiftUI ForEach at SettingsPage.swift lines 2013 to 2031 re-renders the button row on the next tick.
Can I A/B two April 2026 models side by side inside Fazm?
Yes. That is what the session key is for. Every incoming query carries an optional model field, and acp-bridge/src/index.ts line 1390 computes incomingSessionKey = msg.sessionKey ?? (msg.model || DEFAULT_MODEL). If two surfaces (say main and floating) each specify a different model, the bridge routes them to two separate ACP sessions with independent sessionIds, and acpRequest('session/set_model', { sessionId, modelId }) at lines 1341 and 1366 sets the per-session model server-side. The practical consequence: you can run Opus 4.7 in the main chat and Claude Mythos in the floating bar simultaneously in one Node.js process. The per-session isolation also means one session's compaction, tool calls, and rate-limit hits do not leak into the other session's state.
How does a brand-new model name that does not match 'haiku', 'sonnet', or 'opus' render?
ShortcutSettings.swift lines 184 to 190 handle that case explicitly. The family map compactMap returns nil for unknown model IDs, then the fallback branch constructs a ModelOption with label = model.name (the SDK-provided display name, e.g., 'Gemma 4 12B Instruct' or 'GLM-5.1-754B') or falls back to the raw modelId, and gives it order=99 so it sorts after the three named families. The button still renders, just with its literal SDK name instead of the Scary/Fast/Smart alias. This is why a user running Fazm against a Gemma 4 or GLM-5.1 custom endpoint sees those models in the picker with their real names rather than being blocked.
What role does DEFAULT_MODEL play when a new release lands?
DEFAULT_MODEL = 'claude-sonnet-4-6' at acp-bridge/src/index.ts line 1245 is the only piece of the pipeline that has to change when a new month's minor upgrade ships. When Anthropic rotates the default alias from sonnet-4-6 to sonnet-4-7, Fazm updates that one constant in the next release. Everything else, including the user's picker, rate limit routing, and session pre-warming, reads from that constant. The rest of the 2,700-line bridge never hardcodes another model ID. This is why a keyword like 'llm release april 2026' ends up being answered by 'change one string' rather than 'ship a feature'.
Does Fazm support models from providers other than Anthropic?
Through the customApiEndpoint setting, yes, with one constraint: the endpoint must speak an Anthropic-compatible wire protocol. LiteLLM, Cloudflare AI Gateway, and any self-hosted proxy that translates Anthropic's /messages shape into OpenAI, Gemini, or vLLM work. When you point Fazm at a proxy fronting GPT-5 Turbo or Gemma 4, the ACP SDK still negotiates available models over its own protocol, and whatever IDs come back are fed through emitModelsIfChanged. The models_available stdout line then surfaces them in the picker with their real names. The exact user-visible consequence: switching Fazm to a GPT-5 Turbo proxy and opening the picker shows a new 'gpt-5-turbo' button next to Scary/Fast/Smart, with no recompile.
Why do the three consumer labels 'Scary', 'Fast', 'Smart' exist at all?
Because the labels are the product, not the model IDs. A non-technical Mac user reading 'claude-sonnet-4-6' vs 'claude-opus-4-7' has no way to tell what they should pick. ShortcutSettings.swift line 160 hardcodes the answer: Haiku is Scary (means 'fast and cheap, so it runs silently in the background'), Sonnet is Fast (general-purpose default), Opus is Smart (for hard problems). The labels are deliberately evaluative, not descriptive. When a new model family ships in April 2026, the substring table preserves that evaluative shelf: opus-4-7 is still Smart, sonnet-4-7 is still Fast. The shelf is stable even as the underlying model IDs rotate monthly.
What happens to an in-flight chat when I switch models mid-session?
Nothing. The per-model session isolation at acp-bridge/src/index.ts line 1390 means each model has its own ACP sessionId, its own CWD, its own ongoing query state, and its own abort controller. When you click from Smart to Fast in the picker, your previous session keeps running (or stays paused) under its own key, and the bridge routes the next query to whatever session key the new model maps to. The explicit code at lines 1449 to 1504 handles three cases: session key already exists (reuse it, possibly calling session/set_model if the model changed), session key new (create a new ACP session), and model swap inside a pre-existing key (call session/set_model at line 1475). The result is that model switching is essentially free: no connection drop, no context loss for either session.
Where can I verify all of this in the Fazm source tree?
The three-const anchor is at /Users/matthewdi/fazm/acp-bridge/src/index.ts lines 1245 to 1290 (DEFAULT_MODEL, emitModelsIfChanged, and the broadcast). The substring-match table is at /Users/matthewdi/fazm/Desktop/Sources/FloatingControlBar/ShortcutSettings.swift lines 151 to 216 (defaultModels, modelFamilyMap, normalizeModelId, updateModels). The per-model session key is at index.ts line 1390. The session/set_model calls are at index.ts lines 1341, 1366, 1475, 1495, 1503, and 1810. The UI button row that re-renders on every models_available broadcast is at /Users/matthewdi/fazm/Desktop/Sources/MainWindow/Pages/SettingsPage.swift lines 2013 to 2031. Everything is visible to anyone who opens the repo; there is no hidden configuration server or LaunchDarkly flag gating any of it.
Related April 2026 guides
Large Language Model Research Updates April 2026
The same month, viewed through the compaction indicator instead of the model picker: TurboQuant, PaTH Attention, Muse Spark, and the orange 'compacting context' label.
vLLM Release April 2026 Changelog
The open-source serving side. What April's vLLM releases never touch: the consumer Mac app that reads their localhost:8000 and draws a button.
Local LLM News April 2026
The month's local-LLM shipping notes, mapped against the same Mac-agent surface that resolves a custom endpoint into a button in the picker.