Every April 2026 LLM launch article lists specs. This one is about what a shipped Mac app has to do to surface a new model on release day.
If you already read the spec roundups for GPT-5.4, Claude Opus 4.7, Gemini 3.1, Gemma 4, and Llama 4, you know what each one scores on MMLU and what each one costs per million tokens. What you do not know from any of those pages is what the app you actually use has to do to add a new model to its picker within minutes, without a rebuild, without an App Store review, and without asking you to install anything. In Fazm the answer is exactly 33 lines of code, split across one Node file and one Swift file. Here is the path, by line number.
The SERP gap for this keyword
Search "ai model release or llm launch april 2026" and the top results are all the same article with a different logo on it. Each one has a table of context windows, a bar chart of benchmark deltas, a paragraph about pricing, and a line that says "available today in the API". That line is load-bearing. It hides the real question a user of a shipped consumer app actually has: on the day Opus 4.7 GAs, how long before it shows up in the app I already have installed, and what actually has to happen inside that app for it to appear there?
The answer is not "the team ships a new binary." Apple notarization alone takes 15 to 45 minutes. A Sparkle roll on top of that means the median user would see the new model hours or days late. That's unacceptable for a product whose point is to be the floating bar for whatever your Mac can currently do. So Fazm pushes the model list through the ACP bridge as runtime data, and the Mac app treats it as untrusted input. This page is the walkthrough of exactly how.
Four hops, from Anthropic backend to your floating bar
The path is not magic. It is a chain of four deterministic forwarders, each with exactly one job. Nothing here caches at rest, nothing here is configured per-user, nothing here needs a release.
Where a new model id actually travels
The anchor: 11 lines of TypeScript, one cached string
The Node side of the bridge is the throttle and the translator. Its entire job on this path is to prevent redundant emits and to strip the "default" pseudo-model the SDK always returns. The diff is a straight string equality check against a module-level variable. No hashing, no timestamp comparison, no content-addressing. The function is called on every session boundary and returns in constant time when nothing has changed.
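The shape of that diff can be sketched in a few lines. This is a minimal TypeScript sketch under assumed names, not the shipped file: the real function is emitModelsIfChanged in acp-bridge/src/index.ts, and the `send` stand-in here collects envelopes into an array instead of writing the JSON line to stdout.

```typescript
// Sketch of the bridge-side diff; ModelInfo field names follow this article.
interface ModelInfo { modelId: string; name: string; description?: string }

let lastEmittedModelsJson = ""; // process-scoped; resets to "" on bridge restart

// Stand-in for the bridge's stdio writer: the real send() prints one JSON line.
const emitted: string[] = [];
const send = (msg: unknown) => emitted.push(JSON.stringify(msg));

// Called on every session boundary. Returns true only when an emit happened.
function emitModelsIfChanged(availableModels: ModelInfo[]): boolean {
  // Strip the "default" pseudo-model the SDK always returns.
  const filtered = availableModels.filter((m) => m.modelId !== "default");
  const snapshot = JSON.stringify(filtered);
  if (snapshot === lastEmittedModelsJson) return false; // silent pass
  lastEmittedModelsJson = snapshot;
  send({ type: "models_available", models: filtered });
  return true;
}
```

Because the snapshot variable starts empty, the first session after any process start always emits, which is exactly the crash-recovery property the Node-side placement buys.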
The relabeler: a 3-row substring map
The Swift side is the taxonomy. It knows that "latest Opus" always means the smartest tier, regardless of whether the id is claude-opus-4-5-20251015 or claude-opus-4-7-20260414. The floating bar's compact mode has three slots (Scary, Fast, Smart). The map gives each slot a substring it owns. A new model id gets its UI label derived from whichever row claims it.
The four family cards
Three rows, plus one fallback branch for a family word that doesn't yet exist. Every Claude model launch in the last 12 months resolved cleanly against one of the three substring rows. The fourth card is what happens when Anthropic introduces a tier that doesn't use any of those words.
haiku -> Scary
Row 1 of modelFamilyMap. Matches any model id containing the substring 'haiku'. Sort order 0. Labeled 'Scary (Haiku, latest)'. When a new Haiku generation ships (claude-haiku-5-0-...), the label is regenerated from the match, no code touched.
sonnet -> Fast
Row 2 of modelFamilyMap. Matches any model id containing 'sonnet'. Sort order 1. Labeled 'Fast (Sonnet, latest)'. Covers the mid-tier default picked for everyday floating-bar queries where latency matters more than deep reasoning.
opus -> Smart
Row 3 of modelFamilyMap. Matches any model id containing 'opus'. Sort order 2. Labeled 'Smart (Opus, latest)'. On April 14, 2026 this row silently picked up claude-opus-4-7-20260414 the first time the Claude Agent SDK returned it in availableModels.
unknown family -> API name
Lines 187 to 189 of ShortcutSettings.swift. If the new model id does not contain haiku, sonnet, or opus (for example a future Claude tier with a new family word), updateModels falls back to the API-returned name field and parks the option at sort order 99, still selectable, still functional, just at the end of the list.
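The four cards above compress to one lookup plus one fallback. Here is a TypeScript port of the Swift-side logic as a sketch; the real code is the modelFamilyMap tuple array and updateModels in ShortcutSettings.swift, and the names and shapes below are illustrative rather than the shipped Swift.

```typescript
// (substring, shortLabel, familyWord, sortOrder), mirroring the Swift tuple rows.
const modelFamilyMap: [string, string, string, number][] = [
  ["haiku", "Scary", "Haiku", 0],
  ["sonnet", "Fast", "Sonnet", 1],
  ["opus", "Smart", "Opus", 2],
];

interface ModelOption { id: string; label: string; shortLabel: string; sortOrder: number }

function toOption(modelId: string, apiName: string): ModelOption {
  const row = modelFamilyMap.find(([needle]) => modelId.includes(needle));
  if (row) {
    const [, shortLabel, family, sortOrder] = row;
    return { id: modelId, label: `${shortLabel} (${family}, latest)`, shortLabel, sortOrder };
  }
  // Unknown family word: fall back to the API-returned name, park at the end.
  return { id: modelId, label: apiName, shortLabel: apiName, sortOrder: 99 };
}
```

The design choice worth copying is that the map owns substrings, not exact ids, so a new generation inside an existing family relabels itself with zero edits.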
The six-step walkthrough, release-day edition
Reconstructed from the actual code paths for the morning of April 14, 2026. Every step references a real file and real line numbers. If you have Fazm installed and want to verify, open the Console app and filter for "ACPBridge" or "ShortcutSettings" while you start a new chat.
Anthropic flips the new model on, backend-side
On an April 2026 launch day (Opus 4.7 on April 14, Claude Mythos on April 7) the new model id enters the Claude Agent SDK's server-side availableModels response. No Fazm code runs yet. The SDK inside every installed Fazm's bundled Node process will see the new id on its next round trip.
Fazm calls session/new, gets the updated availableModels array
A user opens a fresh chat. ACPBridge.swift calls session/new on the Node bridge. The bridge forwards to the Claude Agent SDK. The SDK returns { sessionId, models: { availableModels: [...] } } with the new id in the list, tagged with a name and description (acp-bridge/src/index.ts lines 1345 and 1488).
emitModelsIfChanged diffs the serialized snapshot
At line 1271 the bridge filters the default pseudo-model, JSON.stringify's the rest, and compares to lastEmittedModelsJson. The first session after the new model appears is the only one that triggers an emit. Every subsequent session that day is a silent pass.
models_available rides the stdio channel to Swift
send({ type: 'models_available', models: filtered }) at line 1279 writes the new list as a JSON line to stdout. ACPBridge.swift parses it at line 1131 and routes it through onModelsAvailable at line 1211.
updateModels substring-matches the new id
ShortcutSettings.swift line 180 iterates the new array. For the Opus 4.7 case the id contains 'opus', matching row 2 of modelFamilyMap, so the option becomes ModelOption(id: 'claude-opus-4-7-20260414', label: 'Smart (Opus, latest)', shortLabel: 'Smart') without any code change.
The floating bar re-renders
availableModels is @Published, so SwiftUI tears down the picker and rebuilds it with the new row. The user sees Smart (Opus, latest) immediately. If their selectedModel UserDefault still points at an older id, line 201 logs a warning but keeps the selection working.
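Steps 4 and 5 hinge on the framing: one JSON object per stdout line, treated as untrusted input on the consumer side. The real parser is Swift in ACPBridge.swift; this TypeScript sketch mirrors the validation shape, with field names taken from this article and everything else assumed.

```typescript
// One envelope per stdout line, newline-delimited JSON.
interface ModelsAvailableEnvelope {
  type: "models_available";
  models: { modelId: string; name: string; description?: string }[];
}

// Untrusted-input parse: malformed JSON and unknown envelope types both
// resolve to null instead of throwing into the caller.
function parseEnvelope(line: string): ModelsAvailableEnvelope | null {
  let msg: unknown;
  try { msg = JSON.parse(line); } catch { return null; }
  const env = msg as ModelsAvailableEnvelope;
  return env?.type === "models_available" && Array.isArray(env.models) ? env : null;
}
```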
What the log looks like when it fires
Tail /tmp/fazm-dev.log on a dev build or /tmp/fazm.log on production. Two lines per emit, plus silence on every subsequent session until the model list actually changes again.
The handoff as a sequence diagram
Five actors, eight messages. The SDK answers session/new, the bridge decides whether the model list changed, and the Swift app lights up the picker. This is what runs the first time a user opens a chat on launch day.
session/new on LLM launch day, cold boot
Why the diff sits in Node, not Swift
The Swift side has its own equality check at line 195 of ShortcutSettings.swift that guards the @Published write. So why also diff in Node?
- Wire cost. A models_available envelope is maybe 400 bytes of JSON, but writing it to stdio on every session/new in a pre-warm loop still means thousands of no-op writes per day for a chatty user. Killing them at the producer is cheaper than killing them at the consumer.
- Clear logs. The bridge log only shows "Emitted models_available" when something actually changed. Reading grep 'Emitted models_available' /tmp/fazm-dev.log gives you a clean timeline of when each new model was first seen on this install.
- Safer restarts. lastEmittedModelsJson is process-scoped and resets to empty on bridge restart. That way after a crash-and-relaunch the first session always re-emits the current list to Swift, so a stale Swift-side availableModels never wins over a fresh bridge.
April 2026 LLM launches, with Fazm's actual ingestion path
The dynamic-registration path in this page is specifically for models that flow through the Claude Agent SDK, which as of this writing means the Claude family. Non-Anthropic model launches reach Fazm through two sibling mechanisms that shipped in April 2026: custom API endpoint (v2.2.0 on April 11) and custom MCP server support via ~/.fazm/mcp-servers.json (v2.4.0 on April 20).
The changelog entry, verbatim
Shipped in Fazm v2.4.0 on 2026-04-20, the changelog line for this feature is a single sentence that summarizes the 33 lines above.
"Available AI models now populate dynamically from the agent, so newly released Claude models appear without an app update."
CHANGELOG.json, v2.4.0, 2026-04-20
Why this matters outside Fazm
Every consumer-facing AI app that wraps a third-party LLM has this problem. The temptation is to hardcode the model list, because hardcoded arrays feel safer than trusting runtime data. The trade-off is invisible on a slow news week and humiliating on a launch day. The real fix is small: one string in the producer, one @Published array in the consumer, one substring map to keep the UX consistent across generations. The product claim "works with any app on your Mac, not just the browser" is about the accessibility-API perception layer. The product claim "newly released Claude models appear without an app update" is about this path. They are separate halves of the same idea: push the integration point as close to the user's session as you can, and stop shipping a new binary for things that are not binary-shaped.
Want to watch Fazm pick up a new Claude model live?
Book a 20-minute call. We'll tail /tmp/fazm-dev.log together, trigger a session/new, and show emitModelsIfChanged fire against your own account.
Book a call →
Frequently asked questions
What is the shipping-app angle on April 2026 LLM launches that this page covers?
Every April 2026 model release article you can find on Google is a spec roundup: context length, MMLU, pricing per million tokens, benchmark deltas versus the last generation. None of them answer the practical question for a user of a consumer Mac AI app: when Anthropic, OpenAI, or Google flips a new model on, how fast does it actually appear in the app I already have installed, and what does the app have to do to make that happen without a binary rebuild? This page is specifically about how Fazm handles that for Claude models routed through the Claude Agent SDK. The answer is 33 lines of code split across acp-bridge/src/index.ts and Desktop/Sources/FloatingControlBar/ShortcutSettings.swift. Nobody else writes this up because nobody else ships a consumer Mac app that bridges the Claude Agent SDK into a floating bar.
How does a brand new model id actually reach the Fazm floating bar?
Four hops. Hop one: the Claude Agent SDK returns the new model inside the models.availableModels array on its next session/new or session/resume response. The Fazm app calls session/new on every fresh chat and periodically during pre-warm. Hop two: the Node ACP bridge (acp-bridge/src/index.ts) runs emitModelsIfChanged on that array at every session boundary. That function filters out the default pseudo-model, then JSON.stringify's the rest and compares against a process-global string lastEmittedModelsJson declared at line 1249. If the string is identical, the function returns immediately. If it's different, it writes the new snapshot and sends a models_available envelope over the bridge stdio channel. Hop three: the Swift side (ACPBridge.swift line 1202) receives the models_available case, parses each dict into (modelId, name, description), and fires onModelsAvailable. Hop four: ShortcutSettings.updateModels (line 178) walks the new list, substring-matches each modelId against a 3-entry modelFamilyMap ('haiku'/'sonnet'/'opus'), and publishes availableModels. The floating bar is bound to that @Published property, so the new model appears in the picker the next render tick.
What is lastEmittedModelsJson and why does it matter?
It's a single module-level string in the ACP bridge Node process, declared at acp-bridge/src/index.ts line 1249 as let lastEmittedModelsJson = "". Its entire job is to prevent the bridge from spamming the Swift app with an identical models_available envelope every time a new chat session is created. emitModelsIfChanged at line 1271 takes the availableModels array, filters out the 'default' pseudo-model (line 1274), JSON.stringify's the rest, and at line 1277 bails out if the serialized form matches the stored string. Only on a true change does it overwrite the snapshot at line 1278 and emit. The practical effect on LLM launch day: the first session created after Opus 4.7 appears in the SDK's availableModels response triggers exactly one models_available message to Swift. Every subsequent chat that day is a no-op at the bridge layer. Cheap check, zero downstream churn.
What is the modelFamilyMap and why is it only 3 rows?
ShortcutSettings.swift lines 159 to 163 define a private static tuple array with exactly three entries: ('haiku', 'Scary', 'Haiku', 0), ('sonnet', 'Fast', 'Sonnet', 1), ('opus', 'Smart', 'Opus', 2). The first field is the substring matched against the incoming modelId. The second is the short label shown inside the floating bar's compact mode. The third is the family display word. The fourth is the sort order. It's three rows because Claude has three user-facing tiers at any given moment, and the tier is baked into the model id as a substring. When Opus 4.7 shipped as something like claude-opus-4-7-20260414 on April 14 the substring 'opus' still matched, so Fazm labeled it 'Smart (Opus, latest)' without anyone editing a file. If Anthropic introduces a genuinely new family, updateModels at line 188 to 189 falls back to using the API-returned model name directly, still inserts it in the floating bar, and just places it at sort order 99 at the end of the list.
How is this different from the dual-lane fallback Fazm uses on rollout-gap days?
These are two distinct features on different code paths and they solve different problems. Dynamic model registration (this page) is about the new model id becoming selectable in the floating bar picker. Dual-lane fallback (covered on /t/ai-model-release-april-2026) is about what happens when the user selects the new model but their personal Claude plan hasn't been flipped to it yet on the Anthropic backend. Registration lives in acp-bridge/src/index.ts emitModelsIfChanged plus ShortcutSettings.swift updateModels. Fallback lives in ChatProvider.swift isModelAccessError plus the retryAfterModelFallback flag. A user can hit registration without ever hitting fallback (if their plan got access quickly) and can hit fallback without ever hitting registration (if they were already on an older model that just had its access revoked). The two subsystems are decoupled by design so that either can evolve without touching the other.
Can I see the models_available message in my own Fazm log?
Yes. Open Fazm, start a new chat, and tail /tmp/fazm-dev.log on a dev build or /tmp/fazm.log on a production build. Look for two lines. The bridge side logs 'Emitted models_available: <modelId>=<name>, ...' from acp-bridge/src/index.ts line 1280. The Swift side logs 'ACPBridge: received models_available with N models' from ACPBridge.swift line 1203 and then 'ShortcutSettings: updated availableModels to [...]' from ShortcutSettings.swift line 198. The pair of lines appears once per genuinely new model set. If you open five chats in a row after the first one and see no models_available lines, that's because lastEmittedModelsJson is doing its job: the snapshot hasn't changed so the bridge stays quiet.
Why does the Fazm team not ship a new binary every time a new Claude model drops?
Because the Claude Agent SDK is the one authoritative source for which model ids are currently selectable on the user's account. Hardcoding a model id array inside a notarized Mac binary would mean every Claude release requires a full app rebuild, re-sign, re-notarize, and Sparkle update roll. Notarization alone takes 15 to 45 minutes on a good day. A user who installed Fazm in February would be stuck on February's model list until they update. Shipping the list dynamically over the ACP bridge means the app version you installed six weeks ago picks up Opus 4.7 as soon as the SDK inside the bundled Node process sees it on the backend. The v2.4.0 changelog entry on 2026-04-20 captured this verbatim: 'Available AI models now populate dynamically from the agent, so newly released Claude models appear without an app update.'
Does this work for non-Anthropic model releases like GPT-5.4 or Gemini 3.1?
Not through this specific path. The Claude Agent SDK only surfaces Claude family models in its availableModels field, so GPT-5.4 and Gemini 3.1 do not flow through emitModelsIfChanged. For those Fazm takes a different approach: custom API endpoints and MCP servers. v2.2.0 on 2026-04-11 added a custom API endpoint setting for proxies, which is how corporate users route to OpenAI-compatible gateways. v2.4.0 on 2026-04-20 added custom MCP server support via ~/.fazm/mcp-servers.json, which is how users hook in any LLM that exposes an MCP interface. The dynamic-registration story in this page is about the built-in Claude path through the Claude Agent SDK. Non-Claude providers are served by a sibling mechanism, not the same one.