Every Open Source LLM Released in April 2026
Twelve open source models shipped in a single month. Parameter counts broke records. Benchmarks got rewritten. But one question matters more than any leaderboard: can these models actually control your computer?
The Complete April 2026 Release List
Sorted by release date. Every model listed here is open-weight at minimum, with license terms noted. Parameter counts show total and active (for MoE architectures).
Gemma 4 (Google)
Four variants: E2B, E4B, 26B MoE, 31B Dense. Apache 2.0. Multimodal with vision. Codeforces ELO 2150, AIME 89.2%, GPQA Diamond 84.3%.
Llama 4 Scout (Meta)
109B total / 17B active. 16 MoE experts. 10M token context window. Meta Community License (700M MAU cap).
Llama 4 Maverick (Meta)
400B total / 17B active. 128 MoE experts. Same license as Scout. AIME 88.3%.
GLM-5.1 (Zhipu AI)
744B total / 40B active. MIT license. 200K context. 94.6% parity with Claude Opus 4.6 on the Claude Code eval benchmark.
Qwen 3 72B (Alibaba)
72B dense. Apache 2.0. MMLU-Pro 79.8. Strong multilingual support across 29 languages.
Qwen 3 MoE 235B (Alibaba)
235B total / 22B active. Apache 2.0. MMLU-Pro 81.5. Competitive with GPT-4 class models at a fraction of the inference cost.
Codestral 2 (Mistral)
22B dense. Apache 2.0. Code-focused with fill-in-the-middle support. Designed for IDE integration and code completion.
OLMo 2 32B (Allen AI)
32B dense. Apache 2.0. Fully open: training data, code, checkpoints, and training logs all published.
Bonsai 8B (PrismML)
8B params with 1-bit quantization. 14x model size reduction. Pushes the boundary on efficient inference for edge devices.
Gemma 3n (Google)
4B effective / 2B memory footprint. On-device multimodal. Designed for phones and laptops without cloud inference.
What Benchmarks Don't Measure
GLM-5.1 hits 94.6% of Opus on coding benchmarks. Gemma 4 scores 89.2% on AIME math. These numbers are real. But desktop automation agents need something benchmarks do not test: reliable, structured tool calling through operating system accessibility APIs.
When an AI agent controls your Mac, it reads a live accessibility tree (a structured representation of every button, text field, menu, and label in every open app). It issues tool calls to click, type, scroll, and navigate. If a single tool call is malformed, or the model hallucinates an element that does not exist, the entire automation breaks. This is a different failure mode than getting a coding benchmark wrong.
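To make that failure mode concrete, here is a minimal Python sketch of a pre-flight check an agent might run before acting. The tree snapshot, element IDs, and tool-call shape are all hypothetical, not Fazm's actual schema:

```python
# Hypothetical pre-flight check: reject a tool call before acting if it
# is malformed or references an element missing from the live tree.
# Element IDs and the call shape are illustrative, not Fazm's schema.

ACCESSIBILITY_TREE = {
    "AXButton:save":    {"role": "AXButton", "title": "Save"},
    "AXTextField:name": {"role": "AXTextField", "title": "File name"},
}

VALID_ACTIONS = {"click", "type", "scroll"}

def validate_tool_call(call: dict) -> None:
    """Raise before any action is taken if the call cannot succeed."""
    if call.get("action") not in VALID_ACTIONS:
        raise ValueError(f"unknown action: {call.get('action')!r}")
    element_id = call.get("element_id")
    if element_id not in ACCESSIBILITY_TREE:
        # The hallucination case: the model named an element that does
        # not exist, so the whole workflow must stop here.
        raise KeyError(f"element not in accessibility tree: {element_id!r}")

validate_tool_call({"action": "click", "element_id": "AXButton:save"})  # passes
```

A real agent would re-read the tree immediately before each action, since the UI can change between steps.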
How Desktop Automation Actually Works
Why Fazm's Model Selector Has Three Options, Not Thirty
Open the Fazm desktop app, click the model selector in the floating control bar, and you see exactly three choices:
Fazm available models (ShortcutSettings.swift, line 144)
- claude-haiku-4-5-20251001 (labeled "Scary" in the UI)
- claude-sonnet-4-6 (labeled "Fast", the default)
- claude-opus-4-6 (labeled "Smart")
No Llama. No Gemma. No GLM. Not because those models are bad at generating text, but because Fazm routes every action through macOS accessibility APIs. The model must produce structured tool calls that map to real UI elements: clicking a specific button in Finder, typing into a specific text field in Safari, selecting a specific menu item in any native app. One hallucinated element reference and the workflow fails.
The default was originally Opus but was changed to Sonnet after users hit Anthropic's rate limits too quickly. That tradeoff (intelligence vs. throughput) is invisible in any benchmark table, but it shapes which model a real product ships with.
See it in action
Fazm automates any app on your Mac using real accessibility APIs, not screenshots. Free to start.
Download Fazm →

Benchmark Performance vs. Desktop Automation Readiness
What changes when you move from benchmarks to real automation
Standard LLM benchmarks test isolated tasks: generate code, solve math, answer questions. The model works in a single turn with no tool use.
- Single-turn text generation
- No tool calling required
- No error recovery needed
- No real-time state tracking
What Open Source Models Need to Unlock Desktop Agents
These are the capabilities that matter for real automation agents, roughly in order of how close open source models are to achieving them.
Reliable structured tool calling
The model must produce valid JSON tool calls with the correct schema every time. A 98% per-step success rate sounds good until you realize a 20-step workflow completes cleanly only about 67% of the time, a one-in-three chance of at least one failure. GLM-5.1 and Qwen 3.6-Plus are closest here.
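The compounding arithmetic is worth checking for yourself: with per-step success probability p, n independent steps all succeed with probability p to the power n.

```python
# Per-step reliability compounds: a workflow of n independent steps
# completes only with probability p ** n.

def workflow_success(p: float, n: int) -> float:
    return p ** n

p_all = workflow_success(0.98, 20)
print(f"20 steps at 98% each: {p_all:.0%} complete, "
      f"{1 - p_all:.0%} hit at least one failure")
# → 20 steps at 98% each: 67% complete, 33% hit at least one failure
```

At 40 steps the completion rate drops below 45%, which is why agents need verification and retry, not just a better base rate.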
Long context with grounding
A macOS accessibility tree for a complex app can exceed 50,000 tokens. The model needs to find the right element in that tree without hallucinating. Llama 4 Scout's 10M context helps with capacity, but grounding accuracy is the harder problem.
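Grounding is easy to state and hard to do: resolve one intended target out of thousands of similar nodes. A toy Python sketch of the strict version, with illustrative roles and titles rather than real macOS output:

```python
# Toy grounding: resolve a target description to exactly one element of
# a flattened accessibility tree, failing loudly on 0 or 2+ matches
# rather than guessing. Roles and titles are illustrative.

TREE = [
    {"id": 1, "role": "AXButton",   "title": "Save"},
    {"id": 2, "role": "AXButton",   "title": "Save As…"},
    {"id": 3, "role": "AXMenuItem", "title": "Save"},
]

def ground(title: str, role: str) -> dict:
    matches = [e for e in TREE if e["title"] == title and e["role"] == role]
    if len(matches) != 1:
        # Ambiguity and absence both mean the agent must not act.
        raise LookupError(f"{len(matches)} candidates for {title!r}/{role}")
    return matches[0]

print(ground("Save", "AXButton"))  # exactly one match → safe to act
```

The hard part a model adds on top of this is choosing the right title and role from a vague instruction like "save the file" without inventing an element that is not in the tree.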
Multi-step planning and recovery
When a click lands on the wrong element (the app changed state between reading and acting), the model must recognize the unexpected state, backtrack, and try a different approach. This requires planning over sequences, not just single turns.
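That recognize-and-retry loop can be sketched in a few lines, with hypothetical `act` and `read_state` callables standing in for real accessibility-API calls:

```python
# Recognize-backtrack-retry: act, verify the resulting UI state, and
# retry when the app moved underneath us. act() and read_state() are
# hypothetical stand-ins for real accessibility-API calls.

def run_step(act, read_state, expected, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        act()
        if read_state() == expected:
            return attempt            # state matches: the step landed
    raise RuntimeError("never reached expected state; replan needed")

# Simulate a click that only takes effect the second time.
state = {"clicks": 0}

def flaky_click():
    state["clicks"] += 1

def dialog_state():
    return "open" if state["clicks"] >= 2 else "closed"

attempts = run_step(flaky_click, dialog_state, expected="open")
print("succeeded on attempt", attempts)  # → succeeded on attempt 2
```

Exhausting the retries is the interesting case: that is where the model has to replan the sequence rather than repeat the same click a fourth time.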
Real-time latency constraints
Desktop automation feels wrong if there is a 5-second pause between each action. The model needs to be fast enough to feel responsive. MoE architectures help (only 17B to 40B active), but serving infrastructure for open source models is still catching up.
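A back-of-envelope latency budget shows why active-parameter count alone does not solve this. All four numbers below are illustrative assumptions, not measurements of any model:

```python
# Rough per-action latency: re-process the accessibility tree (prefill)
# plus generate a short tool call (decode). All numbers illustrative.

tree_tokens = 50_000   # accessibility tree re-read each step
prefill_tps = 20_000   # prompt-processing throughput, tokens/s
call_tokens = 80       # tool-call length, generated tokens
decode_tps  = 60       # generation throughput, tokens/s

latency = tree_tokens / prefill_tps + call_tokens / decode_tps
print(f"~{latency:.1f}s per action")  # → ~3.8s per action
```

Under these assumptions the prefill term dominates, so caching the tree prefix or diffing the tree between steps buys more than a faster decoder.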
License Guide: What "Open Source" Actually Means
Not all of these models carry the same terms. The differences matter if you plan to build products on top of them.
| License | Models | Key restriction |
|---|---|---|
| Apache 2.0 | Gemma 4, Qwen 3, Codestral 2, OLMo 2 | None. Full commercial use. |
| MIT | GLM-5.1 | None. Full commercial use. |
| Meta Community License | Llama 4 Scout, Llama 4 Maverick | Products with 700M+ monthly active users must request a separate license from Meta. |
| Modified MIT | Kimi K2.5 | Attribution required. Products above 100M MAU need a separate agreement. |
| Gemma License | Gemma 3n | Open weights but not OSI-approved. Commercial use allowed with Google's terms. |
Frequently asked questions
Which open source LLM released in April 2026 has the best coding performance?
GLM-5.1 from Zhipu AI achieved 94.6% parity with Claude Opus 4.6 on the Claude Code evaluation benchmark. It is a 744B MoE model (40B active parameters) released under the MIT license with a 200K context window. Codestral 2 from Mistral (22B, Apache 2.0) is a strong alternative for pure code generation with fill-in-the-middle support.
What is the difference between Llama 4 Scout and Llama 4 Maverick?
Llama 4 Scout is a 109B total parameter model with 17B active parameters and 16 MoE experts. It supports a 10 million token context window. Llama 4 Maverick is larger at 400B total parameters (still 17B active) with 128 MoE experts. Both use Meta's Community License, which restricts commercial use above 700 million monthly active users.
Can open source LLMs from April 2026 run desktop automation agents?
Not reliably for full desktop automation. While models like GLM-5.1 and Qwen 3.6-Plus score well on coding and reasoning benchmarks, desktop automation requires structured tool calling through accessibility APIs, multi-step error recovery, and real-time screen state interpretation. These capabilities are not measured by standard benchmarks. Fazm, for example, still uses Claude exclusively because tool-calling reliability on native macOS apps has not yet reached parity in open source models.
Which April 2026 open source LLMs are truly open (not just open-weight)?
OLMo 2 32B from Allen AI is the most open. It publishes the full training data, training code, evaluation code, and training logs under Apache 2.0. Most other releases are open-weight only, meaning you get the model parameters but not the training data or full reproduction recipe. Gemma 4 uses Apache 2.0 for weights but does not release training data.
What does MoE (Mixture of Experts) mean for the April 2026 models?
MoE means only a fraction of the model's total parameters activate for each token. Llama 4 Maverick has 400B total parameters but only 17B are active per inference, making it faster and cheaper to run than a 400B dense model. GLM-5.1 (744B total, 40B active) and Qwen 3 MoE 235B (235B total, 22B active) use the same approach. This lets open source models match frontier performance while staying runnable on smaller hardware.
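The ratios are easy to check directly from the parameter counts quoted in this article:

```python
# Active-parameter ratios for the MoE releases above, using the counts
# quoted in this article: (total params in B, active params in B).
models = {
    "Llama 4 Maverick": (400, 17),
    "GLM-5.1":          (744, 40),
    "Qwen 3 MoE 235B":  (235, 22),
}
ratios = {name: active / total for name, (total, active) in models.items()}
for name, r in ratios.items():
    print(f"{name}: {r:.1%} of parameters active per token")
```

Each token touches well under 10% of the weights, which is the whole economic argument for MoE at this scale.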
Which open source models from April 2026 can run on consumer hardware?
Gemma 4 E2B (2B parameters) and Bonsai 8B (1-bit quantized, effectively 14x smaller than its full-precision size) are designed for consumer devices. Gemma 3n targets on-device deployment with a 4B effective / 2B footprint architecture. For larger models, you will need multiple GPUs or cloud instances, though Llama 4 Scout's 17B active parameters make it more practical than its 109B total size suggests.
The models are getting better. The automation is here now.
Open source LLMs will eventually power desktop agents. Today, Fazm uses frontier models with real accessibility APIs to automate any app on your Mac. No screenshots, no pixel matching.
Try Fazm free