Hugging Face or GitHub for new AI projects in April 2026. They answer two different questions, and a Mac agent pulls from both.
Hugging Face publishes weights. GitHub publishes the agents, harnesses, and bridges that drive those weights. They are complements, not substitutes. A useful new AI project from April 2026 usually has an entry on both at once: the model on Hugging Face, the code that runs the model on GitHub. This guide walks through what each platform actually shipped during the month, and shows the two lines of Swift in Fazm (file path included) that route any Hugging Face model through the same field a consumer Mac app already uses to pick its Claude endpoint.
Direct answer, verified 2026-05-06
Both. The two platforms are not in competition. They publish different artifacts:
- Hugging Face hosts the model weights, tokenizer configs, chat templates, and quantized variants (GGUF, MLX, AWQ).
- GitHub hosts the agent harnesses, MCP servers, inference engines, bridges, and consumer apps that drive the weights.
- A Mac AI agent (Fazm) installs four GitHub-published npm packages on every clean build of its bridge sub-process, and routes any Hugging Face model through one Swift-side env var.
Sources: huggingface.co/models, github.com/trending, github.com/mediar-ai/fazm.
Thesis: weights and code are two sides of one project
Most articles on this topic flatten Hugging Face and GitHub into a single bullet list of trending things. That misses what a person searching for new AI projects in April 2026 actually needs: a way to know which side of the platform split a release lives on, so they can find the matching half.
Take Qwen3.6-27B. The Hugging Face page (a model card, weights, four quant variants, a tokenizer config, a chat template) is one side. The other side is whatever code you use to actually run those weights: llama.cpp on GitHub, MLX on GitHub, vLLM on GitHub, Ollama on GitHub. And one layer up, the agent harness that calls the model: Claude Code on GitHub, Codex on GitHub, the ACP protocol on GitHub, the consumer-facing Mac app (Fazm) on GitHub. None of those agent layers exist on Hugging Face. None of the Hugging Face weights exist on GitHub. The project is the pair, not the half.
Below: what each platform actually shipped during April 2026 for a desktop AI agent on macOS, and the wire path that turns the pair into something a non-developer can run.
Side one: Hugging Face in April 2026
Four April 2026 releases pass the agent filter. The filter is four properties: structured tool calling, an Anthropic-shaped serve path, a quantized variant that fits in unified memory on Apple Silicon, and a shippable license. Reasoning-only releases miss the first. SaaS-only weights miss the second. Anything above 70B at full precision misses the third. Non-commercial weights miss the fourth. These four land all four:
Hugging Face, April 2026: releases that pass the agent filter
- Google Gemma 4: Apache 2.0 release, E2B, E4B, 26B A4B, and 31B variants, MLX 4-bit quants land on day one. Tool calling is built in.
- Alibaba Qwen3.6-27B: April 22 2026, Apache 2.0, native mlx-lm and mlx-vlm support, the strongest open-weight tool caller of the month.
- Qwen3-Coder-Next: mixture-of-experts variant, roughly 3B parameters activated of an 80B total. Targeted at code-heavy tool-calling workflows.
- LittleLamb 0.3B Tool-Calling: from Multiverse Computing, built from Qwen3-0.6B with CompactifAI compression. Sub-250MB quantized, useful as a tool router in front of a bigger model.
None of these run as agents on their own. Each is a set of weights that a separate piece of GitHub code has to wrap into a process the operating system can talk to. The second side of the platform split is where that wrapping lives.
Side two: GitHub in April 2026
The list below is not a roundup of trending repos. It is the literal contents of acp-bridge/package.json in the Fazm repo, plus the consumer Mac app itself. Each of these GitHub-hosted projects is installed at its published version on every clean build of the bridge sub-process, and visible to any user who reads the lockfile.
GitHub, April 2026: packages a real consumer Mac app pins
- @playwright/mcp: pinned at 0.0.73 in acp-bridge/package.json. The Microsoft Playwright MCP server that lets a Claude agent drive a real Chromium tab over stdio.
- @agentclientprotocol/claude-agent-acp: pinned at ^0.29.2. The Claude agent client protocol implementation that Fazm 2.4.0 (April 20 2026) upgraded to during the month.
- @zed-industries/codex-acp: pinned at ^0.12.0. The ACP wrapper for OpenAI's Codex CLI, used by Fazm's experimental Codex backend that routes GPT-5 family models through your ChatGPT subscription.
- mediar-ai/fazm: the consumer Mac app source itself. Swift, SwiftUI, the floating control bar, the Node bridge, the MCP server manager, all open under github.com/mediar-ai/fazm.
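For reference, a reconstruction of the dependency block those pins come from, using the versions quoted in this guide plus the ws entry listed in the FAQ below; the published acp-bridge/package.json may carry additional fields:

```json
{
  "dependencies": {
    "@playwright/mcp": "0.0.73",
    "@agentclientprotocol/claude-agent-acp": "^0.29.2",
    "@zed-industries/codex-acp": "^0.12.0",
    "ws": "^8.20.0"
  }
}
```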
The point of pinning is reproducibility. The Anthropic agent client protocol shipped a backwards-incompatible model-name change between the version Fazm 2.3.x used and the v0.29.2 that 2.4.0 adopted. Pinning a tested version, exact for @playwright/mcp and caret ranges resolved through the lockfile for the rest, means the consumer app picks up what it tested against, users see one CHANGELOG line, and the upstream package keeps moving on its own cadence. That is the workflow GitHub is good at; Hugging Face has no equivalent.
The bridge between them: two lines of Swift
The reason any Hugging Face model can replace Anthropic inside a consumer Mac app comes down to two lines of code in the Fazm repo. In Desktop/Sources/Chat/ACPBridge.swift, the bridge process reads UserDefaults for the key customApiEndpoint and, if it is non-empty, exports it as ANTHROPIC_BASE_URL in the agent process env. The Claude Code SDK obeys that env var when it sends tool-calling messages. Set the field to a translation proxy (LiteLLM, claude-code-router, or any Anthropic-compatible adapter), point that proxy at a Hugging Face model running locally on llama.cpp, MLX, vLLM, or Ollama, and the harness on top stays the same.
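A minimal sketch of that routing, assuming only what this section describes (the UserDefaults key, the env var, and a Node bridge whose launcher and entry-point path here are invented); it is not the verbatim pair of lines from ACPBridge.swift:

```swift
import Foundation

// Read the user's custom endpoint; when set, hand it to the agent
// sub-process as ANTHROPIC_BASE_URL, which the Claude Code SDK obeys.
var env = ProcessInfo.processInfo.environment
if let endpoint = UserDefaults.standard.string(forKey: "customApiEndpoint"),
   !endpoint.isEmpty {
    env["ANTHROPIC_BASE_URL"] = endpoint
}

// Launch the bridge with the patched environment. The launcher and
// entry-point path below are illustrative, not Fazm's actual values.
let agent = Process()
agent.executableURL = URL(fileURLWithPath: "/usr/bin/env")
agent.arguments = ["node", "acp-bridge/dist/index.js"]
agent.environment = env
try? agent.run()
```

With the key empty, the env var is never set and the SDK falls back to Anthropic's hosted endpoint; that fallback is the entire switch.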
Wire path: Hugging Face model -> Fazm chat
From the agent's point of view there is no difference between Anthropic's hosted endpoint and the local proxy. From a privacy point of view there is a large one: when the proxy is local, the bytes never leave the machine.
What separates a usable release from a curiosity
The agent filter mentioned above is worth spelling out. It is what turns a Hugging Face leaderboard score into something a Mac AI agent can actually run. The first four items are the qualifying properties; the last two are the most common reasons a release fails to qualify even when it tops a benchmark. A minimal code sketch of the filter follows the list.
Agent filter, applied
- Structured tool calling the harness can parse on the wire (no markdown-only outputs).
- A serve path in the Anthropic /v1/messages shape, native or via a translation proxy.
- A quantized variant that fits in unified memory on Apple Silicon with usable token rates.
- A license you can ship with a paid consumer app (Apache 2.0, MIT, or similar).
- Reasoning-only models without tool-call output. Skip them; the harness cannot use them.
- SaaS-only weights with no local serve. The local-first wire path closes immediately.
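Written as a predicate, the filter is four booleans ANDed together; the type and property names below are invented for illustration, not a Fazm API:

```swift
// Illustrative only: this section's agent filter as code.
struct Release {
    let hasStructuredToolCalling: Bool    // parseable tool calls on the wire
    let hasAnthropicShapedServePath: Bool // native, or via a translation proxy
    let fitsUnifiedMemoryQuantized: Bool  // GGUF/MLX variant with usable token rates
    let licenseIsShippable: Bool          // Apache 2.0, MIT, or similar
}

func passesAgentFilter(_ release: Release) -> Bool {
    release.hasStructuredToolCalling           // reasoning-only releases fail here
        && release.hasAnthropicShapedServePath // SaaS-only weights fail here
        && release.fitsUnifiedMemoryQuantized  // >70B at full precision fails here
        && release.licenseIsShippable          // non-commercial weights fail here
}
```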
How the two platforms orbit one consumer agent
The loop a Mac agent runs depends on artifacts from both Hugging Face and GitHub at every step. Weights live on one. Inference engines, bridges, MCP servers, chat harnesses, and the consumer app live on the other. Pull any one ring out and the loop stops.
Diagram: the Mac agent sits at the center, ringed by the GitHub-published npm packages pinned in acp-bridge/package.json, each installed at its published version on every clean build of the bridge. Source: acp-bridge/package.json, github.com/mediar-ai/fazm.
When neither platform is the answer
For most users on a Mac in April 2026, neither Hugging Face nor GitHub is the platform they should think about first. The right answer for a multi-step, reasoning-heavy, planning-heavy workflow is still a hosted frontier model: Claude Sonnet 4.6 or Opus, sitting behind the same agent harness, with no proxy in the middle. The local-first wire path is for cases where the proxy actually helps: tool-call-heavy bulk work, sensitive data that must not leave the machine, an offline session on a laptop, or repeated queries cheap enough that round-tripping to a cloud provider is the slow part.
The reason to know how the two platforms split is not so you switch immediately. It is so you know what you are switching between when the time comes. Fazm's Settings keeps the toggle at one field, so the switch is one paste and one restart of a session, not a new app.
Resolution
"Hugging Face or GitHub" is the wrong frame. Both, almost always. The April 2026 release cycle made this more obvious, not less, because the agent infrastructure packaged on GitHub got faster at adopting whatever weights Hugging Face publishes. The Anthropic agent client protocol on GitHub matured to v0.29.2 during the month. The Playwright MCP server moved to 0.0.73. The Codex ACP wrapper landed at 0.12.0. Hugging Face shipped four agent-shaped weight releases in the same window. Pin one half from each, and the consumer app on top picks them both up on the next clean build, with no user action.
The single piece of code that joins the two halves is two lines of Swift in a public repo. Read the file at github.com/mediar-ai/fazm under Desktop/Sources/Chat/ACPBridge.swift, around lines 468 to 469.
Wire a Hugging Face model into your Mac agent without writing the harness
Twenty-five minutes, working session. We pick a model from April 2026, stand up a local proxy, and flip Fazm's customApiEndpoint to it before the call ends.
Frequently asked questions
Hugging Face or GitHub, which one actually has the new AI projects from April 2026?
Both, but they publish different things. Hugging Face publishes the model weights, the tokenizer config, the chat template, and the quantized variants (GGUF for llama.cpp, MLX for Apple Silicon, AWQ and GPTQ for cloud GPUs). GitHub publishes the agent harnesses, the MCP servers, the inference engines, the bridges, and the apps that wrap all of that into something a human runs. A weight file on Hugging Face does nothing on its own. An agent on GitHub still needs weights to talk to. The two platforms are complements, not substitutes, and a useful AI project usually shows up in both places at once: the model on Hugging Face, the code that drives it on GitHub. April 2026 produced new entries in both columns, and a consumer Mac agent like Fazm pulls directly from both.
What did Hugging Face actually ship in April 2026 that matters for a desktop AI agent?
Four releases passed the agent filter: working tool calling, an Anthropic-compatible serve path, MLX or GGUF support on Apple Silicon, and a license you can ship. Google Gemma 4 (E2B, E4B, 26B A4B, 31B variants) landed early in the month under Apache 2.0. Alibaba Qwen3.6-27B shipped on April 22 with native mlx-lm and mlx-vlm support. Qwen3-Coder-Next added a 3B-activated mixture-of-experts variant tuned for tool calling. Multiverse Computing's LittleLamb 0.3B Tool-Calling published a tiny tool-router model built from Qwen3-0.6B with their CompactifAI compression, sized to fit on the Apple Neural Engine. None of these four ship as runnable agents. They ship as weights, and a separate piece of code on GitHub turns them into agents.
What did GitHub ship in April 2026 that a real consumer macOS app already pins?
Look at acp-bridge/package.json in the Fazm repo on GitHub at github.com/mediar-ai/fazm. The bridge sub-process is a small Node project, and its dependencies are: @playwright/mcp at 0.0.73, @agentclientprotocol/claude-agent-acp at ^0.29.2, @zed-industries/codex-acp at ^0.12.0, and ws at ^8.20.0. Three of those four are open-source agent infrastructure that landed or moved during April 2026. Fazm 2.4.0 on April 20 upgraded the Claude agent protocol to v0.29.2 and Fazm 2.7.1 on May 1 added the Codex backend that uses @zed-industries/codex-acp. None of these are vendored or forked. They install at their published version on every clean build, which is what makes them visible to a user reading the lockfile.
How does a Hugging Face model end up running inside a consumer macOS app like Fazm?
Through one Swift-side env var. In Desktop/Sources/Chat/ACPBridge.swift, lines 468 and 469 read UserDefaults for the key customApiEndpoint, and if it is non-empty, set ANTHROPIC_BASE_URL on the agent process env. The agent process is the Claude Code SDK, which obeys ANTHROPIC_BASE_URL when it sends tool-calling messages over /v1/messages. Set the field to the URL of a translation proxy (LiteLLM, claude-code-router, or any Anthropic-compatible adapter), point that proxy at a Hugging Face model running locally on llama.cpp, MLX, vLLM, or Ollama, and the agent talks to the local model instead of Anthropic. The harness on top does not change. The wire shape on the front is still Anthropic. The weights on the back are now whatever you pulled from Hugging Face that morning.
Why pin GitHub-published packages instead of vendoring them, and why does that matter to the user?
Because the surface area of a desktop AI agent moves fast. The Anthropic agent client protocol shipped a backwards-incompatible model-name change between the version Fazm 2.3.x used and v0.29.2 in 2.4.0. The Playwright MCP server fixes a Chrome profile bug almost every minor release. Vendoring those would mean the consumer Mac app re-bundles every fix as a code change of its own. Pinning a published version means the agent infrastructure on GitHub upgrades on its own cadence, the consumer app picks up the version it tested against, and the user sees a single CHANGELOG entry that says, in plain English, what changed. The April 26 fix in Fazm 2.4.2 is a good example: a stored opus model preference had to be migrated forward to the new ACP alias, and the migration is one Swift function rather than a re-vendored npm package.
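The article does not reproduce that function, but a forward migration of this kind is usually a few lines of UserDefaults plumbing. A hypothetical sketch, with the key name and model IDs invented for illustration:

```swift
import Foundation

// Hypothetical sketch only; the key and model names are not Fazm's
// actual identifiers.
func migrateStoredOpusPreference() {
    let defaults = UserDefaults.standard
    // Suppose an older build stored a model name the new ACP no longer accepts.
    if defaults.string(forKey: "preferredModel") == "claude-opus-legacy" {
        defaults.set("opus", forKey: "preferredModel") // forward to the new alias
    }
}
```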
Does the agent read screenshots from Hugging Face vision models, or does it use macOS accessibility instead?
The default path is accessibility, not screenshots. The frontmost macOS app exposes its UI as an accessibility tree of buttons, text fields, table cells, and labelled values. Fazm's agent reads that tree as structured data and operates on it directly. A vision model from Hugging Face would be doing the same job in pixels, which is slower, burns more tokens, and breaks across themes and color schemes. The accessibility path is also why Fazm works with native apps that have no DOM at all (Mail, Messages, Notes, third-party Mac apps), not just with the browser. Vision models still have a place when the accessibility tree is empty or dishonest, but for the everyday case the answer is structured first, screenshots only when forced.
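To make the structured-first claim concrete, here is a minimal standalone sketch of reading that tree with the public macOS accessibility API; it is an illustration, not Fazm's code, and it needs the Accessibility permission granted in System Settings:

```swift
import AppKit
import ApplicationServices

// Walk the frontmost app's accessibility tree instead of screenshotting it.
guard let front = NSWorkspace.shared.frontmostApplication else { exit(1) }
let appElement = AXUIElementCreateApplication(front.processIdentifier)

var windowsRef: CFTypeRef?
if AXUIElementCopyAttributeValue(appElement, kAXWindowsAttribute as CFString,
                                 &windowsRef) == .success,
   let windows = windowsRef as? [AXUIElement] {
    for window in windows {
        var titleRef: CFTypeRef?
        AXUIElementCopyAttributeValue(window, kAXTitleAttribute as CFString, &titleRef)
        print("window:", titleRef as? String ?? "<untitled>")
    }
}
```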
If I want to point Fazm at a brand-new MCP server that landed on GitHub this week, how does that work?
Open Settings, scroll to the MCP servers section, click Add, and paste the same fields you would put under mcpServers in a Claude Code config: name, command, optional args, optional env. Save. The bridge sub-process picks the new server up on the next session start. Or edit ~/.fazm/mcp-servers.json directly. The format is documented at the top of MCPServerManager.swift in the Fazm repo and is identical to the mcpServers shape Claude Code uses, so an entry from a public README on GitHub pastes in with no rewriting. Fazm 2.4.0 on April 20 was the release that made this user-facing; before that, MCP servers were hardcoded in the bridge.
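A minimal example entry in that shape; the server name, command, and env key below are placeholders, not a specific April 2026 release:

```json
{
  "mcpServers": {
    "my-new-server": {
      "command": "npx",
      "args": ["-y", "example-mcp-server@latest"],
      "env": { "EXAMPLE_API_KEY": "..." }
    }
  }
}
```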
Is the wire between Fazm and a Hugging Face model the same wire used for Claude itself?
Yes. The agent harness is the same in both cases: it serializes Anthropic /v1/messages with tool_use and tool_result blocks, expects a streaming response in the same shape, and routes whatever comes back to the same parser. The only thing that changes is the URL the agent sends those bytes to. With customApiEndpoint empty, the SDK uses Anthropic's hosted API. With it set, the SDK uses your translation proxy, and the proxy answers in the Anthropic shape regardless of what the actual model on the back end speaks. From the agent's perspective there is no difference. From a privacy perspective there is a big one: when the proxy is local, the bytes never leave the machine.
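An abbreviated example of that wire shape, an Anthropic-style /v1/messages assistant turn carrying a tool_use block followed by the user turn returning the tool_result; IDs and values are placeholders:

```json
[
  {
    "role": "assistant",
    "content": [
      { "type": "tool_use", "id": "toolu_01...", "name": "read_file",
        "input": { "path": "/tmp/example.txt" } }
    ]
  },
  {
    "role": "user",
    "content": [
      { "type": "tool_result", "tool_use_id": "toolu_01...", "content": "..." }
    ]
  }
]
```

Whether Anthropic's hosted API or a local proxy answers, the parser on the agent side consumes exactly this structure.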
What is the case for not bothering with Hugging Face at all and just using a hosted model?
If your task is short, latency-sensitive, and reasoning-heavy, the hosted frontier models still win. Multi-step planning across noisy accessibility input is where local Hugging Face models from April 2026 still trail Claude Sonnet 4.6 and Opus by a meaningful margin. The reason to wire them in anyway is that local models excel at the boring tool-call-heavy parts of a workflow: looking up a CRM contact, filling a recurring form, batching ten near-identical document edits. A common stack on a Mac with 32GB or more is a local 27B-class Hugging Face model for the bulk parts of a routine, and the hosted frontier model for the planning. Fazm's customApiEndpoint can be flipped between the two without restarting the app, so you can A/B in real time.
Where on the Fazm repo do these claims actually live, so I can verify them?
Open github.com/mediar-ai/fazm in a browser. The Node bridge dependencies are in acp-bridge/package.json. The Swift code that maps customApiEndpoint to ANTHROPIC_BASE_URL is Desktop/Sources/Chat/ACPBridge.swift, around lines 468 to 469. The Settings UI that exposes the field is Desktop/Sources/MainWindow/Pages/SettingsPage.swift, around line 984 (the placeholder text suggests a local proxy URL on a non-standard port). The MCP server config file is created by Desktop/Sources/MCPServerManager.swift, which writes ~/.fazm/mcp-servers.json with the same shape Claude Code uses. The April 2026 changelog entries are in CHANGELOG.json at the repo root, with the 2.4.0 entry on April 20 being the release that made user-defined MCP servers visible.
Related guides
Best new Hugging Face models for Mac agents, April 2026
The four April 2026 Hugging Face releases that pass the agent filter, and the two lines of Swift that wire any of them in.
Open-source AI projects, tools, and updates on GitHub in April 2026
Which GitHub-published npm packages a real consumer macOS app actually pinned during April 2026, and what each upgrade between 2.4.0 and 2.5.0 changed for the user.
New open-source AI projects from April 2026, the month MCP jumped to a signed Mac app
Custom MCP server support landed in a notarized consumer macOS app on April 16 to 20 2026. Five servers are hardcoded; the rest live in ~/.fazm/mcp-servers.json.