AI model updates, April 2026: the four-hop event chain Fazm uses to pick up a new Claude without a release
Every SERP result for "ai model updates april 2026" is a benchmark and date dump: GPT-5.4 Thinking 75.0% on OSWorld-Verified, Mythos 93.9% SWE-bench, Gemini 3.1 Pro, Gemma 4 in four variants, Llama 4 Scout and Maverick. None of them show the part that matters to a user who already has an AI app installed: what happens inside the product on the day Anthropic flips GA on the next Claude. This page walks that runtime, line by line, from the ACP SDK's JSON-RPC response to the SwiftUI floating-bar rebind.
What shipped upstream in April 2026
To make the downstream context clear, here is the list of upstream events. This is the part where every other page stops.
GPT-5.4 Thinking
OpenAI. Test-time compute, 75.0% on OSWorld-Verified desktop tasks. Not wired into Fazm; the app is Anthropic-only today, via ACP.
Claude Opus 4.7 GA
Anthropic, 2026-04-14. Surfaces in Fazm's picker as 'Smart (Opus, latest)' because ShortcutSettings.swift:161 matches substring 'opus'. Zero-line app change required.
Claude Mythos
Anthropic, confirmed 2026-04-07. 93.9% SWE-bench, 94.6% GPQA Diamond. Not publicly released; not in availableModels; not in Fazm.
Gemini 3.1 Pro
Google. Current public flagship as of April 2026. Not wired into Fazm; same ACP-only reason as GPT-5.4.
Gemma 4 family
Google, 2026-04-02. 2.3B to 31B variants, natively multimodal. Open weights. 31B Dense ranks #3 on Arena AI among open models.
Llama 4 Scout + Maverick
Meta, 2026-04-05. First Llama with MoE architecture and native multimodal pretraining. Scout: 17B active, 16 experts, 10M context. Maverick: 17B active, 128 experts, 1M context.
Sources: llm-stats.com, blog.mean.ceo, releasebot.io, buildfastwithai, and MIT Technology Review's 2026-04-13 state-of-AI charts.
The gap the roundups miss
Every "April 2026 model updates" article lists the events and stops. GPT-5.4 Thinking scored 75.0%, Mythos hit 93.9%, Opus 4.7 went GA on 2026-04-14 with a staggered per-account rollout. That is the upstream view: what Anthropic, OpenAI, Google, and Meta released.
There is a second view, almost entirely uncovered on the SERP: what did an installed, signed, notarized consumer Mac AI app actually do in the product layer to turn those upstream events into something a user sees? What code path got traversed? What JSON-RPC call was added? Which three lines of Swift decide the label on the picker row?
This page is that second view, for Fazm, for April 2026. The answer is a four-hop event chain and a three-tuple family map.
The four-hop event chain
ACP SDK speaks JSON-RPC to the bridge. The bridge speaks a small typed protocol (protocol.ts:OutboundMessage) to Swift over stdio. Swift publishes to SwiftUI.
From Anthropic GA to your floating bar
session/new response carries the new model list
Three tuples survived every April 2026 Claude update
These three tuples at ShortcutSettings.swift lines 159-163 are the only client-side mapping from Anthropic's model namespace to the Fazm picker. They did not change when Haiku 4.5 shipped. They did not change when Sonnet 4.6 shipped. They did not change when Opus 4.6 went GA. They did not change when Opus 4.7 went GA on 2026-04-14. The substring matcher at line 183 reads the current model ID from the ACP SDK's session/new response and picks the right tuple by modelId.contains($0.substring).
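The tuple table itself lives in Swift, but the matching logic is small enough to sketch. Here is an illustrative TypeScript re-creation of the three-tuple family map and the substring matcher; the names `familyTuples` and `labelFor` are assumptions for this sketch, not identifiers from the Fazm source:

```typescript
// Illustrative re-creation of the three-tuple family map described above.
// The real table is in Swift (ShortcutSettings.swift); names here are
// stand-ins, not the actual source.
interface FamilyTuple {
  substring: string;
  shortLabel: string;
  family: string;
  order: number;
}

const familyTuples: FamilyTuple[] = [
  { substring: "haiku", shortLabel: "Scary", family: "Haiku", order: 0 },
  { substring: "sonnet", shortLabel: "Fast", family: "Sonnet", order: 1 },
  { substring: "opus", shortLabel: "Smart", family: "Opus", order: 2 },
];

// First tuple whose substring appears in the model ID wins. Any new
// revision within a family (a future "opus" GA, say) maps with no code change.
function labelFor(modelId: string): string | null {
  const match = familyTuples.find((t) => modelId.includes(t.substring));
  return match ? `${match.shortLabel} (${match.family}, latest)` : null;
}
```

Matching by family substring rather than exact model ID is the whole trick: the label survives any revision bump within a family, and an unrecognized ID simply produces no label rather than a crash.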
The absorb cost, in numbers
Hop 2, in full: emitModelsIfChanged
The bridge filter is ten lines. It does three things: logs the raw ACP SDK output so the current "default" sentinel shape is inspectable, drops entries where modelId equals "default", and guards emission behind a JSON-string equality check. The JSON-diff matters: without it, every session/new (including warmup and resume paths) would re-broadcast the same list and churn the SwiftUI binding.
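The three steps above can be sketched in TypeScript. This is a minimal illustration of the pattern, not the bridge source; `emitModelsIfChanged`'s exact signature and the `send` callback shape are assumptions:

```typescript
interface ModelInfo {
  modelId: string;
  name: string;
  description?: string;
}

let lastEmittedModelsJson: string | null = null;

// Sketch of the bridge-side filter: drop the "default" sentinel entry,
// then emit only when the serialized list actually changed.
function emitModelsIfChanged(
  availableModels: ModelInfo[],
  send: (msg: { type: string; models: ModelInfo[] }) => void,
): boolean {
  const filtered = availableModels.filter((m) => m.modelId !== "default");
  const json = JSON.stringify(filtered);
  if (json === lastEmittedModelsJson) return false; // no re-broadcast
  lastEmittedModelsJson = json;
  send({ type: "models_available", models: filtered });
  return true;
}
```

The JSON-string comparison is a deliberately cheap deep-equality check: the arrays are tiny and ordered, so stringify-and-compare avoids a structural diff while still suppressing redundant emissions from warmup and resume sessions.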
What /tmp/fazm-dev.log shows on the day of a release
The dev build writes to /tmp/fazm-dev.log. On the first ACP session start after an Anthropic GA, the raw availableModels array lands, gets filtered, gets emitted, gets decoded in Swift, and rebinds the picker. From the user's perspective, Cmd+Space opens the floating bar and the new model is already there.
The chain, hop by hop
session/new returns models.availableModels
ACP SDK's JSON-RPC response includes an availableModels array. Each item is {modelId, name, description?}.
{
"sessionId": "acp_01JRXXXXXX",
"models": {
"availableModels": [
{ "modelId": "default", "name": "Default" },
{ "modelId": "haiku", "name": "Claude Haiku 4.5" },
{ "modelId": "sonnet", "name": "Claude Sonnet 4.6" },
{ "modelId": "opus", "name": "Claude Opus 4.7" }
]
}
}
emitModelsIfChanged filters and JSON-diffs
acp-bridge/src/index.ts:1130-1144 drops the 'default' pseudo-model, stringifies the filtered array, and only emits when it differs from lastEmittedModelsJson.
ACPBridge.swift decodes models_available
Swift reads the stdout line, matches 'models_available', parses the models array into [(modelId, name, description)] tuples, and calls onModelsAvailable (lines 1131-1211).
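The decode step is in Swift, but its logic mirrors a simple line-oriented JSON parse. Here is a TypeScript sketch of the consumer side, assuming the message shape shown in the JSON example above; the function name and guard structure are illustrative:

```typescript
interface ModelTuple {
  modelId: string;
  name: string;
  description?: string;
}

// Sketch of the consumer side of the stdio protocol: each stdout line
// from the bridge is one JSON object, and only lines typed
// "models_available" carry a model list.
function decodeModelsLine(line: string): ModelTuple[] | null {
  let msg: unknown;
  try {
    msg = JSON.parse(line);
  } catch {
    return null; // not JSON: ignore stray log output on the same stream
  }
  const m = msg as { type?: string; models?: ModelTuple[] };
  if (m.type !== "models_available" || !Array.isArray(m.models)) return null;
  return m.models.map(({ modelId, name, description }) => ({
    modelId,
    name,
    description,
  }));
}
```

Returning null for anything that is not a well-formed models_available line keeps the stdio channel tolerant of interleaved diagnostics, which matters when the same pipe carries both protocol messages and logs.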
ShortcutSettings.updateModels relabels by substring
The callback matches each modelId against three substrings: 'haiku' -> 'Scary (Haiku, latest)', 'sonnet' -> 'Fast (Sonnet, latest)', 'opus' -> 'Smart (Opus, latest)'. @Published availableModels is reassigned; SwiftUI rebinds the floating-bar picker.
The April 18 patch that kept history on a model switch
Absorbing a new model at session start is not enough. Users flip between Scary, Fast, and Smart mid-conversation constantly. Before 2026-04-18, that flip tore down the ACP session and re-seeded the history. Commit f0d49f0f at 14:01:10 PDT added a 10-line block inside handleQuery that calls the ACP protocol's session/set_model JSON-RPC on an existing session when requestedModel differs from the cached model.
The effect: flipping Smart to Fast mid-turn switches the ACP session's active model without discarding the prior messages. Opus 4.7 replies, then Sonnet 4.6 replies, then Haiku 4.5 replies, on the same thread. The sessions.get(sessionKey).model map on the bridge is the source of truth for which model "this session" is currently on.
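The shape of that 10-line block can be sketched as follows. This is a hedged illustration, not the commit's code: `ensureModel`, the `SessionState` type, and the injected `acpRequest` callback are stand-ins for the bridge internals:

```typescript
interface SessionState {
  sessionId: string;
  model: string;
}

// Sketch of the mid-conversation model switch: reuse the existing
// session, and only issue session/set_model when the requested model
// differs from the cached one.
async function ensureModel(
  session: SessionState,
  requestedModel: string,
  acpRequest: (method: string, params: object) => Promise<void>,
): Promise<boolean> {
  if (session.model === requestedModel) return false; // nothing to do
  await acpRequest("session/set_model", {
    sessionId: session.sessionId,
    modelId: requestedModel,
  });
  session.model = requestedModel; // cache stays the source of truth
  return true;
}
```

The equality guard is what makes the switch free in the common case: most queries arrive on the model the session already has, so no extra JSON-RPC round trip happens unless the user actually flipped the picker.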
Hops, mapped to the actual files
If you want to verify this yourself on the Fazm source, the four hops live in three files. All line numbers are as of 2026-04-19.
Hop 1: ACP SDK response
session/new JSON-RPC returns models.availableModels on the same reply that returns sessionId. No separate fetch, no polling, no WebSocket subscription.
Hop 2: bridge filter + JSON diff
acp-bridge/src/index.ts:1130-1144 drops the 'default' pseudo-model, JSON-stringifies the array, and emits only when it differs from lastEmittedModelsJson.
Hop 3: Swift decode
ACPBridge.swift:1131-1133 maps 'models_available' to a typed case; lines 1202-1211 call onModelsAvailable with [(modelId, name, description)] tuples.
Hop 4: substring relabel
ShortcutSettings.swift:178 updateModels matches each modelId against three substrings and writes the @Published availableModels array.
Why this is possible on Fazm specifically
Fazm's automation layer does not depend on the chat model. The app reads macOS accessibility trees (AXUIElement, AXObserver, kAX* attribute keys) and acts on any app on the Mac through that tree, rather than taking screenshots and asking a vision model what is on the screen. The accessibility layer is independent of which Claude model is behind the chat.
That separation is why an April 2026 Claude update absorbs as three substring tuples plus a 10-line JSON-RPC hop, not a retraining cycle, not a new vision model integration, and not a new prompt-format migration. Screenshot-based computer-use agents have to pin to a specific vision model because the screenshot encoder, tokenizer, and output schema all change between versions. Fazm does not.
Treat the chat model as a swappable resource. Keep the layer you own (perception of the user's screen, execution of local actions, persistence of memory and files) stable, well-tested, and opinionated. Rebuild when the layer you own changes, not when Anthropic's does. That is the architectural lesson of April 2026 for any consumer AI app on the Mac.
Want the four-hop chain wired into your own Mac app?
I will walk through Fazm's ACP bridge, the models_available event emission, and the substring-family matcher on a 20-minute call.
Book a call →
April 2026 model updates FAQ
What AI models shipped in April 2026, and which ones does Fazm actually use?
April 2026 shipped GPT-5.4 Thinking (OpenAI, 75.0% on OSWorld-Verified), Claude Mythos (Anthropic, 93.9% SWE-bench, not publicly released), Gemini 3.1 Pro (Google), the Gemma 4 family (2.3B to 31B, 2026-04-02), and Llama 4 Scout and Maverick (Meta, 2026-04-05, Scout with a 10M-token context window). Fazm ships three Anthropic models inside the app: haiku, sonnet, opus. Those three substrings are the only model identifiers the product hard-codes. The floating bar populates actual model IDs at runtime from the ACP SDK's session/new response, so Opus 4.6 GA in March and Opus 4.7 GA on 2026-04-14 both appeared in the picker without a Fazm release.
How does Fazm absorb a new Claude model without shipping an app update?
Four hops. Hop 1: ACP SDK's session/new JSON-RPC returns models.availableModels on the response. Hop 2: acp-bridge/src/index.ts lines 1130-1144 filters out the 'default' pseudo-model, JSON-stringifies the array, compares to lastEmittedModelsJson, and calls send({type:'models_available', models:filtered}) only when the list changed. Hop 3: The Swift ACPBridge at lines 1131-1133 decodes 'models_available' into a ModelsAvailableMessage and at lines 1202-1211 calls onModelsAvailable with the parsed tuples. Hop 4: ShortcutSettings.swift line 178 updateModels iterates the tuples, matches each modelId against the three substrings 'haiku'/'sonnet'/'opus' at lines 159-163, assigns 'Scary'/'Fast'/'Smart' labels plus a family suffix, sorts by order, and writes the @Published availableModels. The floating-bar SwiftUI view rebinds on publish. No release required.
Why does the Fazm picker say 'Scary (Haiku, latest)', 'Fast (Sonnet, latest)', 'Smart (Opus, latest)'?
Because the three tuples at ShortcutSettings.swift:159-163 are the only client-side mapping the app ships. Each tuple is (substring, shortLabel, family, order). The updateModels() function at line 178 does modelId.contains(substring); the first match sets the label as '\(shortLabel) (\(family), latest)'. This produces 'Scary (Haiku, latest)' for any Haiku revision, 'Fast (Sonnet, latest)' for any Sonnet, 'Smart (Opus, latest)' for any Opus. The word 'latest' is not dynamic; it is a fixed suffix that is truthful because the ACP SDK always returns the current GA revision for each family in availableModels.
What does commit f0d49f0f change on 2026-04-18, and why does it matter for model updates?
Commit f0d49f0f, 2026-04-18 14:01:10 PDT, added a 10-line block at acp-bridge/src/index.ts lines 1497-1509. When handleQuery reuses an existing ACP session and the requested model differs from the session's current model, the bridge now calls acpRequest('session/set_model', { sessionId, modelId: requestedModel }) and updates sessions.get(sessionKey).model before re-emitting. Before this commit, switching from Smart to Fast mid-conversation either ignored the switch (old path) or tore down the session and lost the history. After it, the user can flip Smart to Fast mid-turn, keep every prior message, and see the new model's first reply on the same thread.
What is the 'default' pseudo-model and why is acp-bridge filtering it out?
The Claude Agent SDK (@anthropic-ai/claude-agent-sdk) includes a sentinel entry {modelId:'default', name:'...'} in models.availableModels as a hint for 'whatever the caller's default is'. Fazm's picker would render it as a selectable row with no meaningful label, so acp-bridge/src/index.ts:1133 does availableModels.filter(m => m.modelId !== 'default') before emission. Commit e15b06e3 on 2026-04-17 11:59:29 PDT added that line along with a raw-log of the unfiltered array so the list is inspectable when an agent SDK update changes the sentinel name.
How is this different from other 'ai model updates april 2026' pages?
Other pages list releases: GPT-5.4 Thinking scored this, Mythos hit that, Gemma 4 ships four variants. Those are the upstream events. This page covers the downstream absorb: the JSON-RPC plumbing (acp-bridge/src/index.ts:1130-1144 and 1497-1509), the Swift decode path (ACPBridge.swift:1131-1211), and the substring-family matcher (ShortcutSettings.swift:159-212) that together let a signed, notarized, installed Mac app pick up Opus 4.7 the first time a user opens it after Anthropic flips GA, without Fazm shipping a new build.
Does Fazm call Anthropic directly, or does it go through the Agent Client Protocol?
Through the Agent Client Protocol. acp-bridge/src/index.ts embeds @anthropic-ai/claude-agent-sdk wrapped in @agentclientprotocol/claude-agent-acp. Swift talks JSON-RPC over stdio to the bridge via ACPBridge.swift. That is how Fazm gets session/new, session/prompt, session/set_model, and session/cancel without wiring raw Anthropic Messages API by hand. When Anthropic ships a new model, the Agent SDK picks it up via its own release (commit 95287a32 on 2026-04-18 absorbed 21 patch releases in one lockfile write) and the ACP layer surfaces it in session/new's models.availableModels.
Do screenshot-based agents have this property too?
No. Screenshot-based computer-use agents are usually pinned to a specific vision model ID because the screenshot encoder, tokenizer, and output format all change between versions. Fazm does not do screenshots; the automation layer reads macOS accessibility trees (AXUIElement, AXObserver, kAX* attribute keys) and executes local actions through that tree. The chat model is an independent, swappable resource. That separation is why the April 2026 model churn was absorbed as three substring tuples plus a 10-line JSON-RPC hop, not a retraining or re-integration cycle.