Latest AI LLM news, April 2026 · Runtime-layer view · Mac desktop agent

Latest AI LLM news, April 2026: the runtime that actually delivers new models to a user's Mac

Every April 2026 LLM news recap is a list of model cards and benchmark scores. None of them describe the plumbing that carries that news onto a user's machine. This page does. Fazm, a shipping Mac automation app, spawns Anthropic's Agent Client Protocol SDK as a Node subprocess inside a 256 MB V8 heap, auto-installs 17 bundled Claude Skills, and picks up new Claude aliases via a models-available callback without a Fazm release. Line numbers below, grep-able for yourself.

Fazm · 12 min read · 4.9 from 200+ ratings
Every plumbing claim pinned to a file and line in the Fazm desktop source tree
ACP SDK version, tool timeouts, screenshot dimension cap, MCP server list, all grep-able
Covers the runtime path the SERP's top 5 LLM-news articles all skip

How April 2026 LLM news actually reaches a user's Mac


1. Anthropic ships a new Claude alias

A new claude-haiku-4-5 or claude-sonnet-4-6 lands on the API. The model card hits Twitter. The press release ships.

256 MB heap

Spawn Node with --max-old-space-size=256 --max-semi-space-size=16 and point it at the ACP bridge. This is the entire April 2026 LLM release cycle, compressed into one subprocess.

Desktop/Sources/Chat/ACPBridge.swift, line 343

The anchor fact: the whole April 2026 release cycle arrives through one subprocess spawn

Every story in the April 2026 LLM news cycle, from Claude Opus 4.6 topping Arena to Gemini 3.1 Pro's 2M-context GA to GLM-5.1's open-source MoE release, ultimately has to reach a user somewhere. On a Fazm user's Mac the path goes through one Node subprocess with a deliberate memory budget. Here is the spawn.

Desktop/Sources/Chat/ACPBridge.swift, line 343

And the SDK it loads is pinned here. Bumping this one version number is how Fazm absorbs Anthropic's runtime-side April 2026 news without a Fazm binary release.

acp-bridge/package.json
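The spawn itself lives in Swift, but its shape is easy to sketch. Below is a hedged TypeScript sketch of the same subprocess launch using Node's child_process; `bridgeArgs`, `spawnBridge`, and the stdio choices are illustrative names, not code from the Fazm tree.

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Illustrative sketch, not Fazm source: the flags ACPBridge.swift passes
// when it spawns the bridge, expressed as a reusable argv builder.
function bridgeArgs(bridgeScript: string): string[] {
  return [
    "--max-old-space-size=256", // old-generation V8 heap capped at 256 MB
    "--max-semi-space-size=16", // each young-gen semi-space capped at 16 MB
    bridgeScript,               // path to the compiled acp-bridge entry point
  ];
}

function spawnBridge(nodeBin: string, bridgeScript: string): ChildProcess {
  // JSON lines travel over stdin/stdout; stderr passes through for logs.
  return spawn(nodeBin, bridgeArgs(bridgeScript), {
    stdio: ["pipe", "pipe", "inherit"],
  });
}
```

The point of the argv builder is that the heap budget is a compile-time constant, not a setting: every install of the app gets the same 256 MB ceiling.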

Where the April 2026 news actually flows through

Press releases go on the left, a user on the right. In between is a subprocess on the user's own machine running the ACP SDK with a pinned version number. That subprocess is the only place "latest LLM news" becomes usable.

Providers on the left, user on the right, ACP bridge subprocess in the middle

Anthropic Claude 4.x aliases
Anthropic Claude Skills
@playwright/mcp
MCP ecosystem (macos-use, whatsapp, google-workspace)
User-defined MCP from ~/.fazm/mcp-servers.json
ACP bridge subprocess (256 MB V8 heap)
Fazm chat (Ask / Act)
Floating bar
Screen observer loop
Shortcut model picker

The Claude Skills part of April 2026, pre-installed

Anthropic's Claude Skills system gave Claude-compatible runtimes a file-system format for capabilities. Fazm ships 17 of them inside the app bundle. The user never runs an install command, and a checksum compare on every launch means the bundled versions win when they change. The April 2026 implication: when a skill gets a better prompt, the next Fazm app update ships it to every user automatically.

17 bundled Claude Skills, auto-installed

Fazm ships .skill.md files for pdf, docx, xlsx, pptx, video-edit, frontend-design, canvas-design, doc-coauthoring, deep-research, travel-planner, web-scraping, social-autoposter, social-autoposter-setup, find-skills, ai-browser-profile, google-workspace-setup, telegram. SkillInstaller.swift copies them into ~/.claude/skills/ on first launch and checksums each one on subsequent launches.

Auto-categorized for onboarding

SkillInstaller.categoryMap buckets them into Personal, Productivity, Documents, Creation, Research & Planning, Social Media, Discovery. The category is a UI hint, not a skill-file field, which means you can re-shuffle the onboarding display without touching the skill contents.

Checksummed updates

SHA-256 of each bundled file is compared to the installed copy on launch. If the bundle changed, the installed copy gets overwritten, even if the user had hand-edited it. This is the price of 'the bundle always wins so April 2026's latest skill revisions land automatically.'

Obsolete-skill pruning

SkillInstaller.obsoleteSkills currently lists 'hindsight-memory' (a removed skill). Every app launch deletes any match from ~/.claude/skills/. This is how a skill gets retired cleanly when Anthropic changes how a capability should be exposed.

Skill discoverability on-device

The bundled find-skills.skill.md lets the agent search installed skills by keyword so it can surface the right tool for a user request without hardcoding prompts. This is the Claude Skills equivalent of an app store index living on your Mac.

No manual install step

The user does not run 'claude skill install' for any of these. The skills arrive inside the Fazm app bundle, get copied to ~/.claude/skills/ on first launch, and are visible to any Claude-compatible runtime on the same machine, including the ACP subprocess Fazm itself spawns.
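The install-and-checksum behavior above is simple to model. Here is a minimal TypeScript sketch, assuming the SHA-256-compare-then-overwrite semantics this section describes; the function and variable names are illustrative, not from SkillInstaller.swift.

```typescript
import { createHash } from "node:crypto";
import { readFileSync, writeFileSync, existsSync } from "node:fs";

// Illustrative sketch of the SkillInstaller logic: install when the
// skill is missing, overwrite when digests differ, otherwise no-op.
function sha256(buf: Buffer): string {
  return createHash("sha256").update(buf).digest("hex");
}

// Returns true if the bundled copy was (re)installed.
function syncSkill(bundledPath: string, installedPath: string): boolean {
  const bundled = readFileSync(bundledPath);
  if (
    existsSync(installedPath) &&
    sha256(readFileSync(installedPath)) === sha256(bundled)
  ) {
    return false; // digests match: leave the installed copy alone
  }
  writeFileSync(installedPath, bundled); // the bundle always wins
  return true;
}
```

Note the asymmetry: a hand-edited installed copy produces a digest mismatch and gets overwritten, which is exactly the "bundle always wins" trade-off described above.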

Five MCP servers, orbiting the ACP subprocess

The April 2026 news on MCP is mostly about the ecosystem expanding. For a shipping app, MCP is a list of processes to spawn on boot. buildMcpServers in acp-bridge/src/index.ts (lines 992-1100) registers five by default, plus whatever the user has declared in ~/.fazm/mcp-servers.json. Each one is a separate subprocess with its own stdin/stdout pipe.

ACP bridge
fazm_tools
playwright
macos-use
whatsapp
google-workspace
~/.fazm/mcp-servers.json (user)

fazm_tools is the one that loops back to Swift via a Unix socket, which is how Claude can call functions implemented natively in the desktop app (accessibility queries, notifications, audio).
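A hedged sketch of that shape in TypeScript: defaults survive only if their binary exists on disk, and user entries from the JSON file are appended. The types and merge details here are assumptions, not the actual buildMcpServers code.

```typescript
import { existsSync, readFileSync } from "node:fs";

interface McpServerSpec {
  name: string;    // e.g. "playwright", "macos-use"
  command: string; // path to the binary or interpreter to spawn
  args: string[];
}

// Illustrative sketch: filter defaults by binary existence, then append
// user-defined servers from a JSON config (shape assumed, not verified).
function buildServers(
  defaults: McpServerSpec[],
  userConfigPath: string,
): McpServerSpec[] {
  const servers = defaults.filter((s) => existsSync(s.command));
  if (existsSync(userConfigPath)) {
    const user = JSON.parse(
      readFileSync(userConfigPath, "utf8"),
    ) as McpServerSpec[];
    servers.push(...user); // user servers are appended, not merged
  }
  return servers;
}
```

Filtering by binary existence matters on a consumer Mac: a default server whose helper is not installed silently drops out instead of crashing the session.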

The April 2026 runtime, in numbers you can verify

256 MB V8 heap cap on the Node ACP subprocess
17 Claude Skills bundled and auto-installed on first launch
5 default MCP servers wired up in buildMcpServers
1920 px max screenshot dimension (under Claude's 2000 px cap)

Each of these is a line in the source tree. The 256 MB lives at ACPBridge.swift:343. The 17 is the count of .skill.md files in Desktop/Sources/BundledSkills/. The 5 is the default return of buildMcpServers in acp-bridge/src/index.ts. The 1920 is MAX_SCREENSHOT_DIM on line 713 of that same file.

Tool-timeout tiers: 10 s internal, 120 s MCP, 300 s default

What happens when Anthropic ships a new model in late April 2026

The "news" is the SDK version bump and the alias appearing in a session/new response. The user experience is that the picker updates the next time they open the app. Here is the step sequence.


1. User launches Fazm for the first time

ACPBridge.start() finds a bundled node binary, resolves the bridge script path, and spawns Node with '--max-old-space-size=256 --max-semi-space-size=16'. SkillInstaller.install() in parallel copies BundledSkills/*.skill.md into ~/.claude/skills/. The 17 skills are now available to any Claude runtime on this Mac, including Fazm's own subprocess.


2. Bridge boots and asks the SDK what's available

buildMcpServers registers fazm_tools, playwright (with --image-responses omit), macos-use, whatsapp, and google-workspace (if their binaries exist). session/new returns an availableModels list with short aliases. The bridge emits a models_available JSON line back to Swift.


3. Swift normalizes the aliases to full IDs

ShortcutSettings.onModelsAvailable receives [(haiku, ...), (sonnet, ...), (opus, ...)], runs them through normalizeModelId to expand to claude-haiku-4-5-20251001, claude-sonnet-4-6, claude-opus-4-6, and rebuilds the picker in the order Scary -> Fast -> Smart (modelFamilyMap: haiku=0, sonnet=1, opus=2).
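That normalize-and-sort step can be sketched in a few lines of TypeScript. The alias-to-ID table below is reconstructed from the IDs quoted in this step; treat it as illustrative, not as the actual normalizeModelId implementation.

```typescript
// Illustrative sketch of the alias handling in ShortcutSettings:
// expand short aliases to full IDs, sort by a family map.
const fullIds: Record<string, string> = {
  haiku: "claude-haiku-4-5-20251001",
  sonnet: "claude-sonnet-4-6",
  opus: "claude-opus-4-6",
};

// Mirrors the modelFamilyMap described above: haiku=0, sonnet=1, opus=2.
const familyOrder: Record<string, number> = { haiku: 0, sonnet: 1, opus: 2 };

function buildPicker(aliases: string[]): string[] {
  return aliases
    .slice() // avoid mutating the caller's array
    .sort((a, b) => (familyOrder[a] ?? 99) - (familyOrder[b] ?? 99))
    .map((a) => fullIds[a] ?? a); // unknown aliases pass through untouched
}
```

The pass-through for unknown aliases is the load-bearing detail: a brand-new alias the table has never seen still lands in the picker instead of being dropped.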


4. User asks Fazm to do something

Query arrives in Swift -> query() serializes to JSON lines -> bridge forwards to ACP SDK -> SDK drives the conversation, manages compaction boundaries, handles tool calls, tracks usage tokens (input, output, cache read, cache write). Tool timeouts kick in at 10s/120s/300s depending on tool kind.


5. Anthropic ships a new model next week

The SDK's availableModels response starts including the new alias. The next time a Fazm user opens a session, the picker updates. The binary on their Mac did not change. The April 2026 release cycle has flowed through the runtime, not through an App Store push.

A small April 2026 detail: Playwright screenshots and the 2000px limit

Claude's image-input endpoint rejects images over 2000 pixels on either dimension. Retina Macs with @playwright/mcp can produce screenshots well above that. If your agent sends them through untouched, the call fails. Fazm watches the Playwright output directory and resizes in-place before the ACP SDK ever sees the image.

acp-bridge/src/index.ts, lines 712-716

This is a two-line April 2026 detail that never makes it into a release notes roundup, but it is the difference between "the browser agent works on my laptop" and "why does my screen-capture flow 413 after 40 minutes of use."
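The dimension math is the standard fit-scale. Here is a minimal sketch, assuming the cap applies to the longest side; the actual resize library Fazm uses is not shown in this page.

```typescript
// Illustrative sketch: Claude's image input rejects either dimension
// above 2000 px, so anything over 1920 is scaled so its longest side
// lands exactly on 1920, preserving aspect ratio.
const MAX_SCREENSHOT_DIM = 1920;

function fitWithinCap(w: number, h: number): { w: number; h: number } {
  const longest = Math.max(w, h);
  if (longest <= MAX_SCREENSHOT_DIM) return { w, h }; // already compliant
  const scale = MAX_SCREENSHOT_DIM / longest;
  return { w: Math.round(w * scale), h: Math.round(h * scale) };
}
```

A Retina MacBook's 2880x1800 capture becomes 1920x1200 under this rule, which is why the browser path survives on high-DPI displays.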

Verify the runtime plumbing with grep

If you do not trust any of the line-number claims above, here are the greps. Run them against a local Fazm desktop source tree.

Four greps against the April 2026 Fazm runtime:

grep -n 'max-old-space-size' Desktop/Sources/Chat/ACPBridge.swift
ls Desktop/Sources/BundledSkills/*.skill.md | wc -l
grep -n 'buildMcpServers' acp-bridge/src/index.ts
grep -n 'MAX_SCREENSHOT_DIM' acp-bridge/src/index.ts

Press-release view vs runtime view of April 2026

The press-release view is what the top SERP results for this keyword are built on. The runtime view is what actually determines whether a Fazm user gets any of it.

Feature | Press-release view (top SERP articles) | Runtime view (Fazm source tree)
How new models reach a user | Model card, benchmark table, Arena ranking | ACP session/new availableModels -> onModelsAvailable -> picker
Claude Skills | Mentioned once in the 'developer updates' section | 17 bundled .skill.md files, SHA-256 checked on launch
MCP | Listed as 'ecosystem growing' | buildMcpServers spawns 5 subprocesses per session
Memory constraints on-device | Not mentioned | --max-old-space-size=256 for the ACP subprocess
Tool timeouts | Not mentioned | 10s internal, 120s MCP, 300s default (env-overridable)
Screenshot dimension limit | Not mentioned | MAX_SCREENSHOT_DIM = 1920 with in-place resize
Rate-limit telemetry | 429 errors, occasionally | StatusEvent.rateLimit(status, resetsAt, type, utilization, overage)
What changes on a new Anthropic release | A new bullet in next month's recap | Version bump in acp-bridge/package.json, no Fazm binary change

April 2026 LLM news, as the SERP lists it

Fazm's default loop wires up the Claude tier plus the ACP SDK. The rest are reachable through Custom API Endpoint plus a shim, or via the bundled skills and MCP servers.

Claude Opus 4.6 (Arena #1), Claude Sonnet 4.6, Claude Haiku 4.5, GPT-5 Turbo (native multimodal), Gemini 3.1 Pro (2M ctx, GA on Vertex), Gemini 2.5 Pro, GLM-5.1 (MIT, 744B MoE), Llama 4 Scout (edge, 17B), Llama 4 Maverick, Mistral Large 3, DeepSeek R2, Qwen 3.5, Claude Skills (bundled per app), ACP SDK 0.29.x, MCP ecosystem expansion

Three things the runtime view adds to the April 2026 news cycle

Not instead of benchmark scores and model cards. Beside them. The press-release view gives you the upper bound of what a model could do. The runtime view gives you the actual shape of what reaches a user.

Version pins are the real release notes

The line that matters for how April 2026 reaches you is @agentclientprotocol/claude-agent-acp ^0.29.2 in acp-bridge/package.json. Everything else flows through that.

On-device memory is a constant

A 256 MB V8 heap cap is cheap on a laptop but small on the hype cycle. 2M-context news is real in the cloud and abstract on the client, which is fine because context lives in the provider's memory, not yours.

Accessibility-tree input changes the calculus

Fazm reads Mac apps through accessibility APIs, not screenshots. That is why the runtime cares about MAX_SCREENSHOT_DIM only for the browser path, not the main control loop. The top April 2026 SERP articles collapse both paths into one.

See the April 2026 runtime on your own Mac

Fazm ships with the Agent Client Protocol SDK running as a 256 MB Node subprocess, 17 bundled Claude Skills, five MCP servers wired up on boot, and three tiers of Claude in the picker. Sonnet 4.6 is the default for the main agent loop. The entire April 2026 Anthropic release cycle flows through one package.json pin.

Download Fazm

Frequently asked questions

What does 'latest AI LLM news April 2026' actually look like inside a shipping Mac app?

It looks like a package.json line and a subprocess spawn. Fazm's acp-bridge/package.json pins @agentclientprotocol/claude-agent-acp at ^0.29.2. When the app launches, ACPBridge.swift:343 spawns Node with exactly '--max-old-space-size=256 --max-semi-space-size=16' and hands it the ACP SDK. The SDK owns the conversation, tool calls, context compaction, and the Claude auth handshake. The 'news' (new Claude aliases, new behavior, new rate-limit telemetry shapes) arrives through that subprocess at runtime, not through a Fazm app release.

Why a 256 MB V8 heap for the ACP subprocess?

Because it runs on a user's Mac alongside everything else they have open. Desktop agents that sit resident on a laptop cannot treat RAM the way a cloud runner does. The cap is set at /Users/.../fazm/Desktop/Sources/Chat/ACPBridge.swift line 343: '--max-old-space-size=256 --max-semi-space-size=16 bridgePath'. The April 2026 news cycle is full of models that advertise 2 million token contexts. That context does not live in the Node bridge's heap, it lives on the provider's side. The bridge's job is to shuttle JSON-lines, not to hold context, which is why 256 MB is enough.

How does Fazm pick up a new Anthropic model without shipping a new app release?

Through the ACP session/new response. When the bridge starts a session, the SDK returns an availableModels array with short aliases like 'haiku', 'sonnet', 'opus'. That array flows back through the bridge as a 'models_available' JSON line. In Swift, ACPBridge's onModelsAvailable callback hands the list to ShortcutSettings, which normalizes short aliases to full IDs (claude-haiku-4-5-20251001, claude-sonnet-4-6, claude-opus-4-6) and re-sorts via a family map. When Anthropic publishes a new Sonnet or Opus, the picker picks it up the next time the user opens the app. No Fazm binary needs to change.

What about Claude Skills, the April 2026 developer news?

Fazm ships 17 of them bundled. Desktop/Sources/BundledSkills/ contains ai-browser-profile.skill.md, canvas-design.skill.md, deep-research.skill.md, doc-coauthoring.skill.md, docx.skill.md, find-skills.skill.md, frontend-design.skill.md, google-workspace-setup.skill.md, pdf.skill.md, pptx.skill.md, social-autoposter.skill.md, social-autoposter-setup.skill.md, telegram.skill.md, travel-planner.skill.md, video-edit.skill.md, web-scraping.skill.md, xlsx.skill.md. SkillInstaller.swift copies them into ~/.claude/skills/ on first launch, compares SHA-256 digests on subsequent launches to detect updates, and also maintains an obsoleteSkills list ('hindsight-memory' currently) that gets removed on every launch. The user never has to run a skill-install command.

What are the MCP servers Fazm bundles on boot?

Five by default. acp-bridge/src/index.ts buildMcpServers (lines ~992-1100) registers fazm_tools (stdio to a Unix socket, lets Claude call back into Swift), playwright (via @playwright/mcp, with --output-mode file and --image-responses omit), macos-use (native binary for Mac accessibility automation), whatsapp (native Catalyst control via accessibility APIs), and google-workspace (bundled Python MCP for Gmail/Calendar/Drive/Docs/Sheets). User-defined servers from ~/.fazm/mcp-servers.json are appended. The set is rebuilt per session, which is why the April 2026 ecosystem news (Claude Agent SDK, MCP proliferation) actually shows up in the app's runtime rather than on a blog post.

How does Fazm handle Playwright screenshots given Claude's image dimension limit?

It resizes them in-place. acp-bridge/src/index.ts line 713 sets MAX_SCREENSHOT_DIM = 1920 and watches /tmp/playwright-mcp for new .png or .jpeg files. Any screenshot that comes in larger than 1920 on either dimension gets resized before the ACP SDK ships it to Anthropic's API. The comment on the line calls out 'stay under 2000px API limit'. Retina Macs otherwise produce screenshots over 2000 that the API will reject.

What are the tool timeout budgets inside the ACP bridge?

Three tiers, set at acp-bridge/src/index.ts lines 77-79. TOOL_TIMEOUT_INTERNAL_MS is 10 seconds (for ToolSearch and similar internal calls). TOOL_TIMEOUT_MCP_MS is 120 seconds (for anything whose title starts with mcp__). TOOL_TIMEOUT_DEFAULT_MS is 300 seconds (for Bash, Write, Edit, Read, etc.). Users can override globally via FAZM_TOOL_TIMEOUT_SECONDS, surfaced in the app as Settings > Advanced > Tool Timeout. When a timer fires, the bridge synthesizes a 'completed' tool_activity event so the Swift UI unblocks and the model can recover, then emits a visible tool_result_display with the timeout and a fazm://settings/tool-timeouts deep link.
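The tiering rule reads naturally as one function. Here is a hedged TypeScript sketch of that selection logic with the env override applied first; the exact matching in acp-bridge/src/index.ts may differ.

```typescript
// Constants mirror the values quoted above.
const TOOL_TIMEOUT_INTERNAL_MS = 10_000;  // ToolSearch and similar
const TOOL_TIMEOUT_MCP_MS = 120_000;      // titles starting with mcp__
const TOOL_TIMEOUT_DEFAULT_MS = 300_000;  // Bash, Write, Edit, Read, ...

// Illustrative sketch: pick a timeout budget for a tool call by title.
// The global override models FAZM_TOOL_TIMEOUT_SECONDS as a parameter.
function toolTimeoutMs(title: string, overrideSeconds?: number): number {
  if (overrideSeconds !== undefined) return overrideSeconds * 1000;
  if (title === "ToolSearch") return TOOL_TIMEOUT_INTERNAL_MS;
  if (title.startsWith("mcp__")) return TOOL_TIMEOUT_MCP_MS;
  return TOOL_TIMEOUT_DEFAULT_MS;
}
```

The `mcp__` prefix test is what makes the middle tier ecosystem-wide: any server registered by buildMcpServers inherits the 120-second budget without per-server configuration.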

Does the bridge auto-approve tool permissions?

Yes, during a session. acp-bridge/src/index.ts handles session/request_permission by always selecting allow_always (or allow_once when only allow_once is offered). The comment on that branch notes 'matches agent-bridge's bypassPermissions behavior'. The trust model is that the user has already opted in by running Fazm. Gate logic that would otherwise interrupt the user to approve every Bash call or file write lives elsewhere in the product (Ask mode vs Act mode), not at the ACP session/request_permission layer.
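The selection branch is small enough to sketch. This assumes the option IDs are the strings named above; the reject fallback is defensive and not described in the source.

```typescript
type PermissionOption = "allow_always" | "allow_once" | "reject";

// Illustrative sketch of the auto-approve branch: prefer allow_always,
// fall back to allow_once when that is the only allow option offered.
function pickPermission(offered: PermissionOption[]): PermissionOption {
  if (offered.includes("allow_always")) return "allow_always";
  if (offered.includes("allow_once")) return "allow_once";
  return "reject"; // assumption: not described in the source text
}
```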

Which 'latest AI LLM news April 2026' entries actually change Fazm's default behavior?

Three categories. First, new Claude aliases, because they land in the picker through the models-available callback. Second, ACP SDK version bumps, because they change what the subprocess can do (new sessionUpdate kinds, new rate-limit telemetry shapes). Third, MCP ecosystem changes, because the buildMcpServers function is the ground truth for what the app wires up. Most other April 2026 news (GPT-5 Turbo multimodal, GLM-5.1 open-source MoE, Llama 4 Scout edge model, Gemini 3.1 Pro 2M context GA) does not change the default loop until someone routes a custom endpoint through the Anthropic-shape shim, because the default is Claude.

Where is the Gemini part of Fazm in this picture?

On a second loop, not through ACP. GeminiAnalysisService.swift uses model 'gemini-pro-latest' (line 67) and records analyzer runs tagged 'gemini-2.5-pro' (line 264). That loop buffers session-recording chunks, sends them for multimodal analysis, and uses function-calling into the local database. It runs independently of whichever Claude tier the chat is using. So when the April 2026 news is about a Gemini release (3.1 Pro GA, Google/Broadcom TPU deal), the part of Fazm that cares is that separate service, not the ACP bridge.

Why ship the ACP bridge as a subprocess and not a native SDK?

Because the ACP SDK is TypeScript and the fastest way to keep it up with Anthropic's release cadence is to not reimplement it. The bridge is ~2600 lines of TypeScript (acp-bridge/src/index.ts) that translates between ACP's JSON-RPC over stdio and Fazm's own JSON-lines protocol over stdin/stdout. Writing a native Swift port would mean chasing Anthropic's changes by hand. Spawning Node is a pragmatic call: 256 MB heap cap, a Unix socket back to Swift for fazm_tools, and the full ACP SDK riding the wave of Anthropic's April 2026 updates with a version bump instead of a rewrite.
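The stdio translation hinges on newline-delimited JSON framing. Here is a minimal sketch of the receiving side, buffering partial chunks until a full line arrives; this is the generic JSON-lines pattern, not the bridge's actual parser.

```typescript
// Illustrative sketch: stdout chunks can split a JSON object anywhere,
// so buffer until a newline, then parse one complete object per line.
function makeLineParser(onMessage: (msg: unknown) => void) {
  let buffer = "";
  return (chunk: string) => {
    buffer += chunk;
    let idx: number;
    while ((idx = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 1);
      if (line) onMessage(JSON.parse(line));
    }
  };
}
```

The buffering is the part that matters on a real pipe: a 64 KB stdout chunk boundary can land mid-object, and a parser that calls JSON.parse per chunk instead of per line fails intermittently.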

What does the April 2026 rate-limit telemetry look like inside Fazm?

Richer than a single 429. ACPBridge.StatusEvent includes a .rateLimit case with status, resetsAt, rateLimitType, utilization, overageStatus, and overageDisabledReason fields. This shape comes from the ACP SDK and mirrors Claude's expanded rate-limit telemetry, which arrived in the April 2026 cycle. The bridge surfaces utilization warnings before they become rejections, which is why the UI can show a 'your Claude account is at 85% of hourly limit' notice rather than only showing an error after a request fails.
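The shape of that event can be sketched as a TypeScript type plus the kind of threshold check the "85% of hourly limit" notice implies. The field types, and the assumption that utilization is a 0-to-1 fraction, are inferred from the field names, not confirmed by the source.

```typescript
// Illustrative sketch of the .rateLimit payload named above.
interface RateLimitStatus {
  status: string;
  resetsAt?: string;           // assumed ISO timestamp
  rateLimitType?: string;
  utilization?: number;        // assumed 0..1 fraction of the window used
  overageStatus?: string;
  overageDisabledReason?: string;
}

// Surface a warning before the limit becomes a rejection.
function shouldWarn(s: RateLimitStatus, threshold = 0.85): boolean {
  return (s.utilization ?? 0) >= threshold;
}
```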

What the SERP cannot show about April 2026

Claude Opus 4.6 leads Arena. Gemini 3.1 Pro is GA on Vertex with 2M context. GLM-5.1 is MIT-licensed with 744B parameters. These are facts. They will still be the facts tomorrow. The press releases will still exist.

The thing that changes quietly is the runtime layer. ACP SDK version 0.29.2 was a minor bump that added rate-limit telemetry fields. The Claude Skills system turned capabilities into file-system artifacts a desktop app can bundle. The MCP ecosystem expanded enough that a consumer app can ship five servers without asking the user to configure any of them. The 1920px screenshot cap is a workaround for a quiet API limit that would otherwise break Retina laptops. None of this makes the press-release headlines. All of it determines whether April 2026's "latest LLM news" ends up doing anything on your Mac.

fazm.AI Computer Agent for macOS
© 2026 fazm. All rights reserved.
