April 2026, the downstream receipts: 9 releases in 14 days

LLM developer tools updates April 2026, read from the downstream side. Nine Fazm releases in fourteen days.

Every LLM dev tools roundup this month lists the same vendor releases. Claude 4.6 Opus and Sonnet. GPT-5 Turbo on April 7. Llama 4 Behemoth still training. Qwen 3 32B on 24GB. The Responses API shell tool, hosted container workspace, and reusable agent skills. All real. None of them show what a shipping consumer AI agent had to release, version by version, in the same fourteen days to actually use any of it. Fazm's CHANGELOG.json is the receipts.

Fazm
Sourced from /CHANGELOG.json in the Fazm repo (9 April releases)
Line numbers from Desktop/Sources/MainWindow/Pages/SettingsPage.swift
Every release dated and quoted verbatim

What every April 2026 LLM dev tools recap already covers

Claude 4.6 Opus
Claude 4.6 Sonnet
Claude 4.6 Haiku (signaled)
GPT-5 Turbo
Responses API: shell tool
Responses API: hosted workspace
Responses API: reusable skills
Llama 4 Behemoth (in training)
Qwen 3 32B 4-bit (24GB GPU)
Mistral Medium 3
Gemini 2.5 updates
ACP v0.25.0

The vendor side is a press release. The downstream side is a release schedule.

Search "LLM developer tools updates April 2026" and the front page is a stack of vendor-timeline articles. Anthropic shipped Claude 4.6 Opus and Sonnet, with a 4.6 Haiku signaled for late April. OpenAI shipped GPT-5 Turbo on April 7 with native image and audio inside the text model. Meta's Llama 4 Behemoth (reportedly over two trillion parameters) stayed in training. Qwen 3 32B at 4-bit quantization fit onto a 24GB GPU. Mistral Medium 3 landed. OpenAI extended the Responses API with a shell tool, a built-in agent execution loop, a hosted container workspace, context compaction, and reusable agent skills.

Every top-5 result tells this story well. None of them tells you what actually shipping a consumer AI agent in the same window looked like. That is this page.

The product we are reporting from is Fazm, a Mac AI agent that drives any app through native accessibility APIs (AXUIElement), not screenshots. Its source is public and its changelog lives at /CHANGELOG.json. Nine entries are dated between April 3 and April 16.

9 Fazm releases shipped in 14 days (April 3 to April 16)
1.56 days per release (average cadence)
6 LLM-dev-tool concerns surfaced into user-facing UI
0 forks of @agentclientprotocol/claude-agent-acp required

Every release in order, with the vendor change it answers.

Dates and quotes are verbatim from /CHANGELOG.json. The anchor fact of this page: nine releases in fourteen days, each traceable to an April 2026 LLM dev tools change.

1

v2.0.1 - April 3

Redesigned paywall to show upgrade options as equally visible cards with inline pricing. Fixed duplicate AI responses being sent to phone users. Fixed detached chat window creating parallel sessions instead of independent ones. This is the release where the pop-out chat window story starts, one day before the Smart/Fast toggle.

The LLM dev tools angle: once Opus-sized contexts cross 100k on a single turn, you need a real Mac window that streams independently, not just a floating bar, and one chat session per window. v2.0.1 paid that debt.
2

v2.0.6 - April 4

Added Smart/Fast model toggle in the chat header bar for quick switching between Opus and Sonnet. Fixed AI agent accidentally typing its reasoning into user documents when controlling desktop apps. Chat observer cards are now auto-accepted (deny to undo) instead of requiring manual approval.

The release that ships the cheap/fast switch is also the release that fixes the agent pasting chain-of-thought into a user's Notes app. Cheaper per-turn inference and safer per-turn behavior land together because they have to.
3

v2.0.7 - April 5

Fixed chat failing silently when personal Claude account lacks access to the selected model, now auto-falls back to built-in account. Fixed browser agent ignoring already-open tabs when navigating to websites. Fixed voice input silently failing when transcription times out.

Frontier models ship to Vertex and the Anthropic API first, and a given user's personal Claude plan may not have access yet. Fazm's built-in Claude account (Vertex-backed, introduced in v1.5.0) becomes the fallback.
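The fallback decision v2.0.7 describes can be sketched in a few lines. This is an illustrative reconstruction, not Fazm's actual code; the type names and the `routeTurn` function are invented for this sketch, and only the behavior (personal account if it has model access, otherwise the built-in account, never a silent failure) comes from the changelog.

```typescript
// Hypothetical sketch of v2.0.7's fallback routing. Names are illustrative.
type Route = "personal" | "builtin";

interface TurnRequest {
  model: string;               // e.g. "claude-4.6-opus"
  personalModels: Set<string>; // models the signed-in Claude plan can access
  signedIn: boolean;
}

// Serve the turn from the user's personal Claude account when it can
// access the selected model; otherwise fall back to Fazm's built-in
// (Vertex-backed) account instead of failing silently.
function routeTurn(req: TurnRequest): Route {
  if (req.signedIn && req.personalModels.has(req.model)) {
    return "personal";
  }
  return "builtin";
}
```

The important design choice is that the fallback is per turn, not per session: the user keeps their model selection and the next turn retries the personal account.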
4

v2.0.9 - April 5

Fixed error messages not showing when Claude credits are exhausted. Added scroll-to-bottom button in chat when scrolled up.

A second release the same day, purely to make credit-exhaustion legible to the user. This matters more in April 2026 than it did in March because Anthropic's rate-limit and usage windows changed shape.
5

v2.1.2 - April 7

Upgraded ACP protocol to v0.25.0 with improved error handling for credit exhaustion and rate limits. Fixed onboarding silently failing when credits run out, now shows an error with options to connect Claude or skip. Fixed mobile web app failing to connect when desktop tunnel restarts.

Agent Client Protocol v0.25.0 lands in Fazm the same day GPT-5 Turbo ships. The protocol bump is what keeps the consumer UI honest about why a request failed: billing_error, rate_limit, overage_disabled, and so on become structured sessionUpdate types.
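The value of a structured error envelope is that the UI can branch on a typed field instead of parsing a message string. A minimal sketch, assuming a simplified update shape: the errorType names (billing_error, rate_limit, overage_disabled) are the ones the article lists, but the object layout and the message copy here are illustrative, not the real ACP v0.25.0 schema.

```typescript
// Hedged sketch of mapping structured ACP-style errors to honest UI copy.
// The update shape and messages are assumptions for illustration.
type AcpErrorType = "billing_error" | "rate_limit" | "overage_disabled";

interface SessionUpdateError {
  kind: "error";
  errorType: AcpErrorType;
}

// Branch on the typed field so the user sees why the turn failed,
// not a generic "something went wrong".
function userMessage(update: SessionUpdateError): string {
  switch (update.errorType) {
    case "billing_error":
      return "Claude credits are exhausted. Connect your Claude account or skip.";
    case "rate_limit":
      return "Rate limited by the model provider. Try again shortly.";
    case "overage_disabled":
      return "Usage cap reached and overage is disabled for this plan.";
  }
}
```

This is also why the onboarding flow in the same release could stop failing silently: a billing_error now has a shape the onboarding UI can catch and render.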
6

v2.1.3 - April 9

Added Manage Subscription button in Settings for Fazm Pro subscribers. Fixed error messages not showing in pop-out chat window. Fixed follow-up messages replacing previous user messages when sent during a stuck loading state.

A bridge release between the ACP bump and the proxy/custom-endpoint work. The pop-out window continues to get hardened because long-context Opus turns live inside it.
7

v2.2.0 - April 11

Added custom API endpoint setting for proxies and corporate gateways. Added global shortcut (Cmd+Shift+N) to open a new pop-out chat window from anywhere. Fixed pop-out windows not restoring after app update or restart. Fixed suggested replies persisting after starting a new chat. Updated paywall to highlight full desktop control capabilities.

The enterprise-facing release. @AppStorage("customApiEndpoint") at SettingsPage.swift line 840 is the UI for corporate proxies, GitHub Copilot bridges, and self-hosted Anthropic-compatible endpoints. Placeholder text: https://your-proxy:8766.
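The routing rule behind the setting is small enough to sketch. This is an assumption-laden reconstruction, not Fazm's Swift code: the empty-string-means-default convention matches the quoted help copy ("Leave empty to use the default Anthropic API"), but the function name and the default URL constant are invented here.

```typescript
// Illustrative sketch of customApiEndpoint resolution.
// The default Anthropic API URL is an assumption for this example.
const DEFAULT_ANTHROPIC_API = "https://api.anthropic.com";

// An empty (or whitespace-only) setting means "use the default API";
// anything else is treated as the base URL for a proxy or gateway.
function resolveBaseUrl(customApiEndpoint: string): string {
  const trimmed = customApiEndpoint.trim();
  return trimmed === "" ? DEFAULT_ANTHROPIC_API : trimmed;
}
```

For example, a user who enters https://your-proxy:8766 routes every turn through the proxy, while clearing the field restores the default without any other state change.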
8

v2.2.1 - April 12

Fixed duplicate AI response appearing in pop-out and floating bar when sending follow-up messages.

A one-line follow-up to v2.2.0, turning around the multi-window streaming bugs that only surface once users can fire new windows from a global shortcut.
9

v2.3.2 - April 16

Tightened privacy language in onboarding and system prompts to accurately say 'local-first' instead of 'nothing leaves your device'. Fixed AI responses leaking between pop-out chat windows when streaming simultaneously. Fixed onboarding chat splitting AI messages into multiple bubbles.

The precision fix nobody talks about. Calling a frontier LLM is by definition a network call. 'Local-first' is the honest label. This is the April 2026 LLM tool story the vendor pages cannot tell.

Vendor change, mapped to the Fazm release that absorbed it

Each of these is a real April 2026 upstream shift mapped to the specific Fazm release it forced.

Upstream LLM change -> Fazm release -> user-facing surface

Opus vs Sonnet price-performance splits -> v2.0.6 -> Smart/Fast toggle (header)
Anthropic plan entitlements diverge from Vertex -> v2.0.7 -> Auto-fallback to built-in
ACP error envelope updated -> v2.1.2 -> Structured sessionUpdate
Enterprise users want custom routing -> v2.2.0 -> customApiEndpoint setting
Industry copy-tightening on 'on-device' vs 'local' -> v2.3.2 -> 'Local-first' onboarding

Six downstream bets shipped this month

User-visible surfaces that changed inside Fazm because something upstream did. Each one has a real changelog line.

Smart/Fast toggle

v2.0.6, April 4. A model switch in the chat header so users can pick Opus or Sonnet per turn without opening Settings. The feature only makes sense when the cost gap between frontier models is as wide as it was in April 2026.

Auto-fallback to built-in Claude

v2.0.7, April 5. If the user picks a model their personal Claude account cannot access, the turn routes through Fazm's built-in Vertex-backed Claude instead of silently failing.

ACP v0.25.0 bump

v2.1.2, April 7. Structured errorType in sessionUpdate so billing_error and rate_limit reach the UI with a real shape. Protocol upgrades replace app updates as the fast path for back-end LLM policy changes.

customApiEndpoint for corporate proxies

v2.2.0, April 11. @AppStorage("customApiEndpoint") at SettingsPage.swift line 840, placeholder https://your-proxy:8766. Route API calls through a corporate proxy, self-hosted Anthropic-compatible endpoint, or a GitHub Copilot bridge.

Pop-out chat windows as first-class

v2.0.1 through v2.2.1. Detached chat windows that behave like normal Mac windows so long Opus turns stream without the floating bar getting in the way. Cmd+Shift+N opens a new one from anywhere.

Privacy language: local-first

v2.3.2, April 16. Replaces 'nothing leaves your device' with 'local-first' because LLM turns still call the model. The precise phrasing is the honest one.

The anchor code for v2.2.0

The v2.2.0 changelog line "Added custom API endpoint setting for proxies and corporate gateways" maps to Desktop/Sources/MainWindow/Pages/SettingsPage.swift. Line 840 is the @AppStorage key. The placeholder at line 936 and the help copy at line 943 are verbatim.


The downstream side of an LLM dev tools update is a release schedule, not a press release.

Fazm CHANGELOG.json, April 3 to April 16, 2026

What the April log actually looks like

A compacted view of the shipping history, one version per line. Every entry maps to a real upstream change the same week.

v2.0.1 - April 3
v2.0.6 - April 4
v2.0.7 - April 5
v2.0.9 - April 5
v2.1.2 - April 7
v2.1.3 - April 9
v2.2.0 - April 11
v2.2.1 - April 12
v2.3.2 - April 16

How this page differs from every other April 2026 LLM-tools roundup

All of these roundups cover the vendor side. This one covers the downstream side too.

Feature | Typical April 2026 LLM dev tools roundup | This page
Mentions vendor releases (Claude 4.6, GPT-5 Turbo, Responses API) | Yes, exhaustively | Yes, with dates
Shows downstream consumer-app releases in the same window | No | Yes, 9 Fazm versions with dates and changelog lines
Points to real source paths a reader can open | No | SettingsPage.swift line 840, CHANGELOG.json, acp-bridge/package.json
Explains why an update forced a UI change | No | Per release, with the reason linked to the vendor change
Names the specific @AppStorage key for custom endpoints | No | customApiEndpoint, placeholder https://your-proxy:8766
Clarifies 'local-first' vs 'nothing leaves your device' | No | Cites the v2.3.2 line on April 16

A quick aside on the accessibility-API part

Fazm drives Mac apps through real accessibility APIs

Most desktop AI agents you read about in April 2026 roundups use screenshot sampling and vision models to find buttons and read UI. Fazm walks the macOS accessibility tree of any app (AXUIElement) and interacts with real elements. That is why the customApiEndpoint setting and the Smart/Fast toggle had to ship the same month they did. An agent that clicks real UI on your machine has to be legible to you about which model it is using, where the API call is routing, and when it is about to fall back. The April 2026 release train is that legibility being paid into the product one version at a time.

"Local-first" is the right copy for a frontier-model app

v2.3.2 on April 16 tightened onboarding language from "nothing leaves your device" to "local-first". The Hindsight memory system runs embedded Postgres on-device. Accessibility-driven automation runs on-device. Screen observation runs on-device. But the LLM turn itself is a call to Claude. "Local-first" captures that correctly. The fact that this is a release all by itself is the point: the April 2026 LLM dev tools story is as much about honest wording as it is about any new model.

See the April release train running in the product

Fazm is a free Mac download. The Smart/Fast toggle is in the chat header. The customApiEndpoint setting is in Settings, AI Chat. The whole April 2026 changelog is bundled.

Download Fazm

Frequently asked questions

What are the big LLM developer tools updates in April 2026 the SERP already covers?

The standard recap is: Anthropic shipping Claude 4.6 (with a 4.6 Haiku signaled for late April), OpenAI's GPT-5 Turbo release on April 7 with native image and audio inside the text model, Meta's Llama 4 Behemoth training run (still not released), Qwen 3 32B 4-bit squeezing onto a 24GB GPU, Mistral Medium 3, Gemini 2.5 updates, and OpenAI extending the Responses API to add a shell tool, a built-in agent execution loop, a hosted container workspace, context compaction, and reusable agent skills. Every top-5 SERP page tells this story. None of them show what shipping consumer AI apps had to release in the same window to actually use any of it.

What does a shipping consumer LLM agent actually release in a month like April 2026?

Receipts from Fazm, a consumer macOS AI agent whose changelog is public at /CHANGELOG.json: nine releases between April 3 and April 16. v2.0.1 (April 3) redesigned the paywall and fixed detached chat windows creating parallel sessions. v2.0.6 (April 4) added a Smart/Fast toggle in the chat header for quick switching between Opus and Sonnet, and fixed the AI agent accidentally typing its reasoning into user documents. v2.0.7 (April 5) auto-falls back to Fazm's built-in Claude when a personal Claude account lacks access to the selected model. v2.1.2 (April 7) upgraded ACP to v0.25.0 with improved error handling for credit exhaustion and rate limits. v2.2.0 (April 11) added a custom API endpoint setting for proxies and corporate gateways. v2.3.2 (April 16) tightened privacy language from 'nothing leaves your device' to 'local-first' to reflect that model calls do leave the machine.

Where does Fazm's customApiEndpoint setting actually live in the code?

Desktop/Sources/MainWindow/Pages/SettingsPage.swift line 840 declares @AppStorage("customApiEndpoint") private var customApiEndpoint: String = "". The UI binding is at line 921. The TextField placeholder is https://your-proxy:8766 at line 936. The help copy at line 943 reads verbatim: 'Route API calls through a custom endpoint (e.g. corporate proxy, GitHub Copilot bridge). Leave empty to use the default Anthropic API.' That single setting is the April 11 v2.2.0 changelog line: 'Added custom API endpoint setting for proxies and corporate gateways.' You can verify it against any Fazm checkout.

Why does a consumer AI app need a Smart/Fast toggle in April 2026?

Because the cost and speed gap between frontier models widened in April 2026 and the UX has to catch up. Opus stayed the top model for extended tool-use chains. Sonnet stayed an order of magnitude cheaper per token. Fazm's v2.0.6 release on April 4 added a Smart/Fast toggle in the chat header so a user can flip between Opus and Sonnet per turn without opening Settings. That same release shipped: 'Fixed AI agent accidentally typing its reasoning into user documents when controlling desktop apps.' Both lines are in the same version block in CHANGELOG.json. Cheaper per-turn inference and safer per-turn behavior landed together because they had to.

What is the ACP v0.25.0 upgrade in Fazm 2.1.2 and why does it matter?

Agent Client Protocol is the JSON-RPC dialect the Claude Code ACP adapter speaks over stdio. Fazm's 2.1.2 changelog entry on April 7 reads: 'Upgraded ACP protocol to v0.25.0 with improved error handling for credit exhaustion and rate limits.' Before the bump, onboarding could silently fail when built-in credits ran out. After, the sessionUpdate payload carries structured errorType values like billing_error and rate_limit, and the onboarding flow shows an error with options to connect Claude or skip. In the broader April 2026 story, this matters because protocol upgrades are now the main way consumer apps absorb back-end LLM policy changes (rate limits, overage states, five-hour vs seven-day windows) without waiting on a new app release.

What exactly does auto-fallback to the built-in account do?

Fazm v2.0.7 on April 5 shipped: 'Fixed chat failing silently when personal Claude account lacks access to the selected model, now auto-falls back to built-in account.' Concretely, the user signs in with their personal Claude account, picks Opus in the chat header, and sends a message. If that user's plan does not have Opus access yet (plan tier, pre-release gate, org policy), the prior behavior was a silent failure. The new behavior routes through Fazm's built-in Vertex-backed Claude for that turn. The relevant settings copy in SettingsPage.swift line 873 reads: 'Using Fazm's built-in Claude account. No sign-in required.'

Why does v2.3.2 tighten the privacy language from 'nothing leaves your device' to 'local-first'?

Because it is true. The April 16 v2.3.2 changelog reads: 'Tightened privacy language in onboarding and system prompts to accurately say "local-first" instead of "nothing leaves your device".' Fazm uses native macOS accessibility APIs to drive Mac apps and the Hindsight memory system is embedded Postgres running locally. But the LLM turn itself still calls Anthropic. Saying 'local-first' keeps the contract legible: everything the app can do locally, it does locally, and the parts that require a frontier model are routed to Claude. In an April 2026 LLM dev tools context this is the right wording to copy. 'Nothing leaves your device' breaks the minute your app calls an API.

Is this just a product changelog dressed up as SEO?

It is a product changelog read as a timeline of downstream LLM dev tool integration. Every release in the timeline maps to a real external change: Opus vs Sonnet price-performance splits (Smart/Fast toggle), Anthropic plan entitlements diverging from Vertex (auto-fallback), ACP updating its error type envelope (protocol bump), corporate proxies wanting an AI client that routes through them (customApiEndpoint), and the industry-wide sharpening of 'on-device' vs 'local' vs 'local-first' (privacy language). Those are the April 2026 LLM tool updates viewed from the consumer side. Which is the angle no other article takes.

If I only remember one thing from this page, what should it be?

When you read an 'LLM developer tools updates April 2026' article, ask what it would take to actually use the thing described. The vendor side is a press release. The downstream side is a release schedule. Fazm's April 2026 schedule ran at roughly one release every 1.56 days and every release was a direct response to an upstream LLM change. If your tool of choice did not ship at least four or five releases in April 2026, it is probably not using the updates it told you it supports.

Where can I verify any of this myself?

Fazm is open source at github.com/mediar-ai/fazm. The changelog is /CHANGELOG.json in the repo root. The Swift settings page is at Desktop/Sources/MainWindow/Pages/SettingsPage.swift: line 840 for customApiEndpoint, line 873 for the built-in Claude account label, lines 921 to 943 for the proxy UI. The ACP bridge package is at acp-bridge/package.json. Every file and line number on this page can be opened and read.

Last updated 2026-04-19. Verified against /CHANGELOG.json (9 April releases) and Desktop/Sources/MainWindow/Pages/SettingsPage.swift lines 840, 873, 921, 936, and 943.

fazm.AI Computer Agent for macOS
© 2026 fazm. All rights reserved.
