LLM release, April 2026 · Desktop agent · Release cadence pattern

Large language model release, April 2026 news: how a shipping Mac app absorbs the whole wave without releasing itself

Every recap of the April 2026 LLM wave lists the same benchmark scores and per-token prices. This one does not. This is the story of a three-entry substring map, one protocol message named models_available, and why a desktop agent can absorb six frontier releases inside a thirty-day window without pushing a new binary to its users.

Fazm · 10 min read · 4.9 from 200+ ratings
  • Every claim anchored to a real line in the Fazm desktop source
  • Covers the substring-match pattern and the ACP models_available protocol
  • Explains why accessibility-tree input makes the vision benchmark race irrelevant to the default loop

The wave, by the numbers

The April 2026 release cadence is the loudest the LLM space has seen since GPT-4 landed. Six organizations shipped production-grade models inside a single thirty-day window. Here is the shape.

6 — Frontier-tier LLMs released in April 2026
30 — Days in the window
0 — Fazm app releases needed to pick them up
13 — Lines of substring map that absorb the wave

The third number is the one nobody else is reporting on. It is also the one that matters if you are building on top of this.

Absorbed, not reacted to

6 releases / 0 updates

Anthropic's Haiku 4.5, Sonnet 4.6, and Opus 4.6. OpenAI's GPT-5 Turbo. Google's Gemini 3.1 Pro. Meta's Muse Spark. DeepSeek's R2. Microsoft's three foundational models. Fazm did not ship a release to absorb any of them. The substring map did the work.

The anchor fact: a thirteen-line constant that absorbs the churn

The whole pattern is sitting in a single file inside the Fazm desktop source tree. The model picker does not hardcode IDs. It runs substring matches against a three-entry tuple. The file is Desktop/Sources/FloatingControlBar/ShortcutSettings.swift and the relevant block starts at line 157. This is the pull quote.

3 tuples

Mapping from model family substring to user-friendly labels and ordering. Order determines display order in the UI (lowest first).

ShortcutSettings.swift line 157 comment, April 2026

Desktop/Sources/FloatingControlBar/ShortcutSettings.swift

Note the fallback at the bottom. If a brand-new family ships (a hypothetical 'sage' line, say), it still enters the picker with order 99 and an SDK-provided label. Known families get personas. Unknown families still work. That is the whole contract.
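A minimal sketch of that contract — three (substring, persona, order) tuples plus the order-99 fall-through. The names and shapes here are illustrative, not the actual ShortcutSettings.swift source, which may differ in detail:

```swift
import Foundation

// Illustrative version of the three-entry map the article quotes.
let modelFamilyMap: [(substring: String, persona: String, order: Int)] = [
    ("haiku",  "Scary", 0),
    ("sonnet", "Fast",  1),
    ("opus",   "Smart", 2),
]

// Route any incoming model ID: known families get a persona, unknown
// families fall through with the SDK-provided label and order 99.
func route(_ fullId: String, sdkLabel: String) -> (label: String, order: Int) {
    for entry in modelFamilyMap where fullId.contains(entry.substring) {
        return (entry.persona, entry.order)
    }
    return (sdkLabel, 99)
}
```

Any future `claude-sonnet-4-x` ID passes `contains("sonnet")` and lands in the 'Fast' slot; a hypothetical 'sage' model still enters the picker through the final return.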

The second moving part: a protocol frame named models_available

The substring map is useless without something feeding it. That something is an inbound frame from the Agent Client Protocol bridge. When the Claude Agent SDK spawns its Node.js subprocess, the subprocess announces the list of models it is currently willing to serve. Fazm reads that frame, hands it to updateModels, and the picker rerenders. The bridge code is small.

Desktop/Sources/Chat/ACPBridge.swift

models_available is emitted every session. Every time you open Fazm, the current picker list is the current SDK's idea of what models exist. The binary is not the source of truth. The running SDK is.
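A hedged sketch of the receiving side, assuming a JSON payload shaped the way described here (a type tag plus an array of model dicts). The field names are illustrative, not the actual ACP wire format:

```swift
import Foundation

// Illustrative frame shape; the real ACP message may name fields differently.
struct ModelEntry: Decodable {
    let modelId: String
    let name: String
    let description: String?
}

struct ModelsAvailableFrame: Decodable {
    let type: String
    let models: [ModelEntry]
}

let raw = """
{"type": "models_available",
 "models": [{"modelId": "claude-haiku-4-5", "name": "Haiku 4.5"},
            {"modelId": "claude-sonnet-4-6", "name": "Sonnet 4.6"}]}
"""

let frame = try! JSONDecoder().decode(ModelsAvailableFrame.self,
                                      from: Data(raw.utf8))
// Hand (modelId, name, description?) tuples to the picker layer.
let tuples = frame.models.map { ($0.modelId, $0.name, $0.description) }
```

The key property is in the decode step, not the field names: the binary ships no model list of its own, so whatever the subprocess advertises is what the picker gets.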

How the release adoption actually runs

Walk through the exact sequence that fires when Anthropic adds a new model alias to the Claude Agent SDK. Nothing here requires a new Fazm build.

1. The SDK adds a new alias

Anthropic ships a new Claude variant. The Claude Agent SDK (bundled Node.js subprocess) begins advertising it in the models list. No Fazm release required at this point.

2. The subprocess announces models_available

On the next launch, the ACP bridge starts the subprocess. The subprocess emits a frame with type 'models_available' and an array of model dicts. The Swift side parses it in ACPBridge.swift:1131-1133.

3. updateModels does the work

ShortcutSettings.updateModels (lines 183-224) iterates the array. For each model ID, it runs fullId.contains(substring) against each entry in modelFamilyMap. The matching tuple provides the persona label ('Scary', 'Fast', 'Smart') and the display order.

4. availableModels is replaced atomically

If the new list differs from the current list, availableModels is reassigned. Because it is @Published, every SwiftUI view observing the picker rerenders immediately. No user interaction, no reload, no restart.

5. The user sees the new model next session

The picker now shows the new variant with its persona label. The user's selectedModel is preserved if still valid, or normalized to the new full ID if only the alias matched. Zero Swift code changed.
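The five steps above can be condensed into a runnable sketch. This is an illustration of the described behavior, not the real ShortcutSettings implementation; in the shipping app the list property is @Published, so the reassignment is what triggers the SwiftUI rerender:

```swift
import Foundation

// Illustrative picker state; names mirror the article, not the real source.
final class PickerState {
    private(set) var availableModels: [(id: String, label: String, order: Int)] = []
    var selectedModel: String = "claude-sonnet-4-6"

    let familyMap: [(substring: String, persona: String, order: Int)] = [
        ("haiku", "Scary", 0), ("sonnet", "Fast", 1), ("opus", "Smart", 2),
    ]

    func updateModels(_ incoming: [(id: String, name: String)]) {
        // Step 3: substring-match each incoming ID against the family map.
        let mapped = incoming.map { model -> (id: String, label: String, order: Int) in
            if let hit = familyMap.first(where: { model.id.contains($0.substring) }) {
                return (model.id, hit.persona, hit.order)
            }
            return (model.id, model.name, 99)  // unknown-family fallback
        }.sorted { $0.order < $1.order }

        // Step 4: replace the list in one assignment only if it changed.
        if mapped.map({ $0.id }) != availableModels.map({ $0.id }) {
            availableModels = mapped
        }
        // Step 5: keep the selection if still valid, else normalize to the
        // first full ID that contains the old value, else the first model.
        if !mapped.contains(where: { $0.id == selectedModel }) {
            selectedModel = mapped.first(where: { $0.id.contains(selectedModel) })?.id
                ?? mapped.first?.id ?? selectedModel
        }
    }
}
```

No step in this loop consults anything baked into the binary, which is the whole reason a release wave needs zero app updates.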

What the ACP frame looks like end to end

A simplified view of what data flows into the substring map. The SDK is on the left. The Swift app is on the right. The bridge is the hub in the middle.

SDK to picker, one launch

Claude Agent SDK → models_available frame (with description field) → ACPBridge.swift → updateModels() → modelFamilyMap → @Published availableModels

What the launch log actually prints

The updateModels function logs the new list every time it runs. A normal Fazm launch in April 2026 prints something like this when the SDK advertises the Haiku 4.5 / Sonnet 4.6 / Opus 4.6 trio.

fazm-dev.log

All six April 2026 frontier releases, in one view

What landed, at which tier, and whether Fazm wires it into the default loop. The substring map handles the Anthropic column automatically. The others are one setting change away.

Anthropic Claude (Haiku 4.5, Sonnet 4.6, Opus 4.6)

Three-tier refresh across the frontier. Default agent surface for Fazm. All three arrive in the picker through the substring match at ShortcutSettings.swift:159-163 and render as 'Scary', 'Fast', 'Smart' respectively.

OpenAI GPT-5 Turbo

Native multimodal refresh. Not in Fazm's default loop. Reachable via Custom API Endpoint with an Anthropic-shape shim. ChatGPT also received GPT-5.3 Instant Mini for faster consumer replies.

Google Gemini 3.1 Pro

2M token context, Vertex GA. Bigger than Fazm's accessibility-tree capture usually needs. Fazm's passive screen analysis loop stays on Gemini 2.5 Pro for video input.

Meta Muse Spark

First major model out of Meta Superintelligence Labs under Alexandr Wang. Smaller footprint, competitive on multimodal perception and agentic tasks at an order of magnitude less compute than mid-size Llama 4.

DeepSeek R2

AIME 92.7%, ~70% price cut relative to frontier cloud offerings. Open-weights pressure on the pricing tier. Not in Fazm's default loop but trivially routable via the Custom API Endpoint.

Microsoft foundational models

Three foundational models covering text, voice, and image, announced in early April 2026 under the Microsoft AI umbrella and aimed at pushing back against OpenAI and Google in the same quarter.

Try a desktop agent that survives the release cadence

Fazm ships with Claude Haiku 4.5, Sonnet 4.6, and Opus 4.6 already routed through the persona picker. Future Claude variants drop in automatically.

Download Fazm

Why this is not how most apps handle it

The typical pattern is: a hardcoded lookup table of supported model IDs, checked in to source, released alongside a UI update whenever a new model drops. That works until the SDK moves faster than the app. It also loses the chance to reuse stable user-facing labels across releases. Here is the side-by-side.

Feature | Typical desktop agent | Fazm
Adds a new LLM to the in-app picker | Requires app update + TestFlight + review | Reads the SDK's models_available frame at launch, zero Swift changes
Routes the model into a user-facing tier | Hardcoded lookup table, breaks on new IDs | Substring match (haiku / sonnet / opus) against modelFamilyMap
Display order across picker renders | SDK order (changes between releases) | Stable order field in each tuple (0, 1, 2)
Input surface to the model | Screenshots (binds to vision benchmark) | macOS accessibility tree (structured text, model-neutral)
User-facing labels across release waves | Change every release (Claude 4, Claude 4.5, Claude 4.6) | Stable personas: Scary, Fast, Smart
Non-Anthropic frontier releases | Per-provider in-app integration code | Custom API Endpoint + Anthropic-shape shim

The quieter story under the April 2026 headlines

Benchmarks are loud. Release cadence patterns are durable.

Every recap of the April 2026 wave leads with SWE-bench Verified numbers, AIME scores, 2M token context windows, and per-token prices. Those are the loud facts. They are also the facts that will be out of date by the end of Q2 2026. What does not go out of date: the pattern a product uses to absorb the next wave without forcing its users to react.

Fazm's version of that pattern is a three-entry substring map, an ACP protocol frame, and a text-based input surface that does not care about the vision benchmark race. That is the story the leaderboards cannot tell, and it is the reason Fazm shipped zero updates to pick up six frontier models in a thirty-day window.

What this pattern gets you, if you are building on top of the April 2026 wave

You do not need to be building a desktop agent to use this. The shape is portable to any product that picks models at runtime from an SDK-provided list.

Release-cadence-proofing checklist

  • Do not hardcode model IDs in the UI
  • Use substring matching on family names, not exact equality
  • Read the SDK's own model list at session start
  • Keep user-facing labels stable across versions (personas, not model IDs)
  • Preserve the selected model across list updates when still valid, normalize it otherwise
  • Have a fall-through path for brand-new families (they still work, just without a persona)
  • Log every list change with enough detail to audit later

April 2026 release chips, at a glance

The loud half of the story, for completeness. Everything below landed inside the April 2026 window.

Claude Opus 4.6 · Claude Sonnet 4.6 · Claude Haiku 4.5 · GPT-5 Turbo · GPT-5.3 Instant Mini · Gemini 3.1 Pro (2M ctx) · Meta Muse Spark · DeepSeek R2 · Mistral Large 3 · Llama 4 Maverick · Microsoft foundational x3

Frequently asked questions

What large language models were released in April 2026?

Six production-grade releases inside a thirty-day window: Meta's Muse Spark (the first major model out of Meta Superintelligence Labs under Alexandr Wang); Anthropic's Claude Sonnet 4.6 and Opus 4.6 frontier refresh plus Haiku 4.5 at the tier below; Google's Gemini 3.1 Pro with a 2M token context window on Vertex GA; OpenAI's GPT-5 Turbo with native multimodal input and GPT-5.3 Instant Mini for consumer ChatGPT; Microsoft's three foundational models covering text, voice, and image; and DeepSeek R2 with AIME 92.7% at roughly a 70% price cut relative to frontier cloud pricing. Around the same window, Mistral announced Large 3 with EU data-residency positioning, and Meta pushed Llama 4 Maverick and Scout further into mainstream open weights.

Why does a shipping Mac app not need to release an update when a new LLM drops?

Because the model list is not hardcoded in the binary. In Fazm's case, two layers keep the release cycle out of user-facing surface area. First, the picker uses substring matching in a three-entry modelFamilyMap at Desktop/Sources/FloatingControlBar/ShortcutSettings.swift lines 159-163. A new model ID like claude-sonnet-4-7-20260901 passes fullId.contains("sonnet") at line 193 and is routed into the 'Fast' persona slot with display order 1. Second, the ACP bridge emits a protocol message called models_available (ACPBridge.swift:1131-1133) that rewrites the picker at runtime. When the Claude Agent SDK updates to a new model alias, Fazm picks it up on the next launch with zero Swift changes.

What is the 'modelFamilyMap' and why does it matter for the April 2026 release wave?

It is a three-tuple static constant at ShortcutSettings.swift:159-163: ("haiku", "Scary", 0), ("sonnet", "Fast", 1), ("opus", "Smart", 2). Each entry is a substring that Fazm searches for in any incoming model ID, plus the user-facing persona label, plus the display order. The substring match means any new Claude variant in that family gets the correct persona without code changes. The display order means the three tiers always render in the same sequence in the picker, even if the SDK returns them in a different order. That tiny constant is why the April 2026 wave did not force a Fazm release.

What is models_available and how does it work in production?

models_available is an inbound protocol message the Agent Client Protocol bridge parses at Desktop/Sources/Chat/ACPBridge.swift lines 1131-1133. The underlying Node.js ACP subprocess (the Claude Agent SDK) announces the list of currently available models as an array of dicts. The bridge converts those to tuples of (modelId, name, description?) and hands them to ShortcutSettings.updateModels (lines 183-224). updateModels runs the substring match, preserves the persona labels, extracts a version tag from the description field (e.g. 'Sonnet 4.6 · Best for everyday tasks' becomes 'Fast (Sonnet 4.6)'), and replaces availableModels in one atomic swap. The picker view is @Published and rerenders. No release needed.
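The description-to-label fold in that answer can be sketched as follows, assuming the ' · '-separated description format from the example; the separator and the helper name are assumptions, not the actual updateModels internals:

```swift
import Foundation

// Illustrative helper: fold the version tag from an SDK description
// into a stable persona label, e.g. "Fast" + "Sonnet 4.6 · Best for
// everyday tasks" -> "Fast (Sonnet 4.6)".
func personaLabel(persona: String, description: String?) -> String {
    guard let desc = description,
          let tag = desc.split(separator: "·").first?
              .trimmingCharacters(in: .whitespaces),
          !tag.isEmpty
    else { return persona }
    return "\(persona) (\(tag))"
}

print(personaLabel(persona: "Fast",
                   description: "Sonnet 4.6 · Best for everyday tasks"))
// prints "Fast (Sonnet 4.6)"
```

The persona survives every release wave; only the parenthesized tag, sourced from the SDK itself, moves.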

Why does this matter when every other April 2026 news recap leads with benchmarks?

Because the benchmark leaderboard is not how a shipping product decides what its users run. Benchmarks tell you which model is smartest in a contained lab setting. Production picks a model based on rate limit behavior, tool-call format stability, latency under real workloads, and compatibility with the control surface your agent actually operates against. Fazm operates against the macOS accessibility tree (structured text, not screenshots), so the vision benchmark race is largely irrelevant to its default loop. What matters is tool-call reliability and sustained rate limit headroom. That is a different lens than any April 2026 recap uses.

What happens inside Fazm the instant Anthropic adds 'claude-opus-4-7' to the Claude Agent SDK?

Three concrete things. (1) The next time a user opens Fazm, the Node.js ACP subprocess spawns, authenticates, and sends its models_available frame with the new ID in the array. (2) ACPBridge.swift routes the frame through setModelsAvailableHandler (line 233), which calls ShortcutSettings.updateModels. (3) updateModels runs fullId.contains("opus") at line 193, matches the third tuple in modelFamilyMap, keeps the 'Smart' persona label, uses the description field to produce 'Smart (Opus 4.7)' if that version string is included, and assigns display order 2. The picker rerenders. No Swift code changed. No app release. No user action.

Does Fazm run all six April 2026 frontier models?

No. The default agent loop runs Claude Sonnet 4.6 (persona 'Fast'), reachable through the Anthropic-shape messages endpoint that the Claude Agent SDK uses. Claude Haiku 4.5 and Opus 4.6 are in the picker for fast reflexive queries and hard multi-step plans respectively. A separate passive screen analysis loop runs Gemini 2.5 Pro inside GeminiAnalysisService for multi-minute session video chunks with function calls to the local database. The other April 2026 releases (GPT-5 Turbo, Gemini 3.1 Pro, DeepSeek R2, Mistral Large 3, Meta Muse Spark, Llama 4 Maverick) can be routed through the Custom API Endpoint setting with an Anthropic-shape shim, but none of them are wired into the default config out of the box.

Why pick substring matching over a lookup table?

Because substring matching absorbs future releases that a lookup table would miss. A dictionary keyed on 'claude-opus-4-6' fails the moment Anthropic ships 'claude-opus-4-7'. Substring matching on 'opus' passes both, and keeps passing for the next several version bumps in the same family. The tradeoff is that a brand-new family name (say Anthropic shipping a 'claude-sage-1.0') would not match any entry and would fall through to the unknown-family branch at ShortcutSettings.swift:199-201, which derives a label from the SDK's own name/description fields. That branch exists so new families still work, they just do not get a persona until you add a fourth tuple.
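The tradeoff reads cleanly in a few lines. A toy contrast under this answer's example IDs — the 4.7 bump is hypothetical, not an announced model:

```swift
import Foundation

// Exact-match dictionary vs. substring routing against a future version bump.
let lookupTable = ["claude-opus-4-6": "Smart"]   // exact-match dictionary
let familySubstrings = [("opus", "Smart")]       // substring routing

let nextRelease = "claude-opus-4-7"              // hypothetical future ID

let fromTable = lookupTable[nextRelease]         // misses the new ID: nil
let fromSubstring = familySubstrings
    .first { nextRelease.contains($0.0) }?.1     // still matches: "Smart"
```

The dictionary needs a code change per release; the substring entry keeps passing until the family name itself changes.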

What is the practical upshot for someone picking a desktop agent in April 2026?

When you evaluate a Mac desktop agent around an LLM release wave, ignore which specific models it claims to support today and ask what happens when a new model drops tomorrow. A product that hardcodes model IDs in its UI will flash stale names in the picker until the next app release. A product that reads the SDK's own model list and routes by substring keeps working through the churn. Fazm is the latter: the picker updates when the SDK updates, and the user-facing persona labels ('Scary', 'Fast', 'Smart') do not change across releases. That is a durable way to build against a release cadence that nobody controls.

Does the pattern work for non-Anthropic models?

Partially. The substring-match + models_available pattern applies cleanly inside the Anthropic family because the SDK announces its own list. For a non-Anthropic frontier release (GPT-5 Turbo, Gemini 3.1 Pro, DeepSeek R2), the user points Fazm's Custom API Endpoint setting at a shim that speaks the Anthropic messages shape, and the bridge then treats that endpoint as the Claude surface. ACPBridge.swift injects the endpoint as ANTHROPIC_BASE_URL before the subprocess spawns. The models_available frame will still come from the shim, so the picker still updates dynamically, it just lists whatever models the shim chooses to expose.
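The env-injection step can be sketched like this; the node path, script name, and function are placeholders for illustration, not Fazm's actual launch code:

```swift
import Foundation

// Illustrative subprocess setup: point the SDK at a custom endpoint by
// injecting ANTHROPIC_BASE_URL into the child environment before spawn.
func makeAgentProcess(customEndpoint: String?) -> Process {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/local/bin/node")
    process.arguments = ["acp-bridge.js"]  // placeholder script name

    var env = ProcessInfo.processInfo.environment
    if let endpoint = customEndpoint {
        // An Anthropic-shape shim at this URL becomes the model surface
        // for the whole session, models_available frame included.
        env["ANTHROPIC_BASE_URL"] = endpoint
    }
    process.environment = env
    return process
}
```

Because the injection happens before the subprocess spawns, nothing downstream — the bridge, the substring map, the picker — needs to know it is talking to a shim.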

How often does the model picker actually change in practice?

Not every day. It changes whenever the Claude Agent SDK bundled inside Fazm updates its list of supported aliases, or whenever Anthropic pushes a version bump that the SDK picks up without a Fazm release. During April 2026, the wave included Claude Haiku 4.5, Sonnet 4.6, and Opus 4.6, all of which show up in Fazm's picker with the correct persona labels without any Swift change. Users do not see a release note for this. They open Fazm and the 'Fast (Sonnet)' option quietly became 'Fast (Sonnet 4.6)'. That is by design.

A Mac agent that already absorbed the April 2026 wave

Fazm runs on the accessibility tree, routes through three Claude 4.x tiers via a substring map, and picks up new model variants at launch via the ACP models_available frame. No release required to follow the release cadence.

Download Fazm
fazm · AI Computer Agent for macOS
© 2026 fazm. All rights reserved.
