The Saturday a 13-line commit revealed that 96% of one app's feedback funnel was silent noise
April 25, 2026 had no big-name model release. Every recap of that day lists the same earlier-April drops (Gemma 4, GLM-5.1, Llama 4 Scout and Maverick, Codestral 2, Qwen 3, the OpenAI Codex Labs and Agents SDK story from April 21-22). On the day itself, the headline event inside MIT-licensed consumer Mac AI was one commit in the Fazm repository at 20:26:32 PT. The diff is 13 lines. The number in the commit body is the interesting part.
Direct answer, verified 2026-05-13
The big April 2026 open source AI releases (Gemma 4 on April 2, GLM-5.1 on April 3, Llama 4 Scout and Maverick on April 5, Qwen 3 on April 5-9, Codestral 2 on April 8, OpenAI Codex Labs and Agents SDK on April 21-22) had all shipped before April 25. April 25, 2026 was a Saturday with no headline foundation model release. The day's shipping event in MIT-licensed consumer Mac AI was Fazm commit 2fbc891c at 20:26:32 PT: 13 lines across three Swift files, adding a source: "silent" vs "modal" property to the Feedback Opened and Feedback Submitted events so a polluted analytics funnel could be cleaned up.
What every other writeup says about April 25
If you pull up three different recaps of open source AI for April 25, 2026, you get a recycled month-level list. Google Gemma 4 (the 31B Dense flagship plus a 26B MoE variant) under Apache 2.0 from April 2. Zhipu's GLM-5.1 744B MoE under MIT from April 3. Meta's two Llama 4 drops, Scout (109B MoE) and Maverick (400B MoE), under the Llama 4 Community license from April 5. Alibaba Cloud's Qwen 3 family ranging from 0.6B to 235B under Apache 2.0 in the April 5-9 window. Mistral's Codestral 2 on April 8. Block / Linux Foundation moving Goose to an Agent Framework on April 5. Google's Agent Development Kit and OpenAI's Agents SDK both on April 9. OpenAI Codex Labs and ChatGPT Images 2.0 around April 21-22.
All of that is true, and none of it shipped on April 25. The day itself was mostly bug-fix Saturday across the consumer AI stack: small PRs to llama.cpp, Ollama, vLLM, the usual. If the question is "what should I install today," the answer is whatever the earlier-April release notes already told you to install. If the question is "what shipped today," you have to scope smaller.
The one thing that did ship
One commit in the fazm repository. Author date Sat Apr 25 20:26:32 2026 -0700. Subject line, seven words: Distinguish silent from modal feedback in analytics. Diff stat: 3 files changed, 13 insertions(+), 10 deletions(-). The interesting part is not the diff. It is the commit body.
commit 2fbc891c, body
The exclamationmark.triangle "Report an issue" icon in the floating bar calls FeedbackWindow.sendSilently(), which fired both Feedback Opened and Feedback Submitted (length 0) instantly. This made silent log uploads indistinguishable from real form submissions and polluted the funnel: 133 of 138 Feedback Submitted events in the last 30 days had length 0.
Verbatim from the second paragraph of git show 2fbc891c.
“133 of 138 Feedback Submitted events in the last 30 days had length 0. The funnel was 96.4 percent silent log uploads pretending to be form submissions.”
commit 2fbc891c body, the ratio sentence verbatim
How the funnel got polluted
One name (Feedback Submitted) was fired from two different code paths. Both paths are valid product surfaces. Neither was wrong to fire an event. The bug is that the funnel reading them treated them as one stream.
The two paths to one event name, pre-commit

Modal path:
- User opens the form, types, clicks Submit
- FeedbackView.show opens an NSWindow and fires Feedback Opened
- Feedback Submitted fires with the real text length

Silent path, polluting the same event name:
- User taps the exclamation-triangle icon
- sendReport() (AIResponseView.swift:1606) calls FeedbackWindow.sendSilently
- Uploads /tmp/fazm.log to Sentry
- Feedback Opened and Feedback Submitted both fire instantly, length 0
The silent path is doing useful work. It uploads logs to Sentry with the user's email, the active session, and a copy of /tmp/fazm.log (or /tmp/fazm-dev.log for the dev build). That is valuable telemetry. The bug is that it was named with the same event identifiers as the modal form path, so any funnel built on those names averaged the two together. Because the silent path fires both Opened and Submitted in the same call, with no time between them, it always converts. The funnel showed near-perfect conversion because most events were the same call counted twice.
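The pollution mechanics fit in a few lines of Swift. This is a simplified sketch with invented types (Event, fireModalSubmit, fireSilentReport are illustration names, not the real Fazm code): one pair of event names, two firing paths, and the mix from the commit body's 30-day window reproduces the 133-of-138 ratio.

```swift
// Simplified model of the pre-commit bug: both surfaces fire the
// same event names, so a funnel counting Opened -> Submitted
// cannot tell them apart.
struct Event {
    let name: String        // "Feedback Opened" or "Feedback Submitted"
    let feedbackLength: Int
}

// Modal path: user friction sits between the two events.
func fireModalSubmit(text: String, into log: inout [Event]) {
    log.append(Event(name: "Feedback Opened", feedbackLength: 0))
    // ...user types, time passes, user clicks Submit...
    log.append(Event(name: "Feedback Submitted", feedbackLength: text.count))
}

// Silent path: one call, zero friction, both funnel steps back to back.
func fireSilentReport(into log: inout [Event]) {
    log.append(Event(name: "Feedback Opened", feedbackLength: 0))
    log.append(Event(name: "Feedback Submitted", feedbackLength: 0))
}

var events: [Event] = []
for _ in 0..<5 { fireModalSubmit(text: "window flickers on wake", into: &events) }
for _ in 0..<133 { fireSilentReport(into: &events) }

let submitted = events.filter { $0.name == "Feedback Submitted" }
let silent = submitted.filter { $0.feedbackLength == 0 }
print("\(silent.count) of \(submitted.count) had length 0")
// prints "133 of 138 had length 0"
```

Because the silent path always "converts" instantly, any mix dominated by it pushes the apparent conversion rate toward 100 percent.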
The 13 lines of fix
The diff touches three files. AnalyticsManager.swift gains a source parameter that defaults to "modal". PostHogManager.swift threads it into the event properties. FeedbackView.swift passes "silent" from the sendSilently call path. The modal path keeps the default value.
Desktop/Sources/PostHogManager.swift, post-commit
func feedbackOpened(source: String = "modal") {
track("Feedback Opened", properties: [
"source": source
])
}
func feedbackSubmitted(feedbackLength: Int, source: String = "modal") {
track("Feedback Submitted", properties: [
"feedback_length": feedbackLength,
"source": source
])
}

Desktop/Sources/FeedbackView.swift, the two call sites

AnalyticsManager.shared.feedbackOpened(source: "silent")
// ... Sentry capture, log attachment ...
AnalyticsManager.shared.feedbackSubmitted(feedbackLength: 0, source: "silent")
That is the entire shape. One property key ("source") on the event payload. Two call sites that pass "silent". Every other call site in the codebase gets the default "modal" for free, because that is how Swift default parameter values work. The funnel can now filter on properties.source == "modal" and read the real conversion rate of the form path, or filter on properties.feedback_length > 0 and read the real submission count.
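Both filter reads can be sketched against a toy event store. The "source" key and feedback-length property come from the diff; the Tracked struct and the in-memory query are illustrative, not PostHog's actual API.

```swift
// Sketch of reading the cleaned funnel after commit 2fbc891c
// (hypothetical event store; only the property names are real).
struct Tracked {
    let name: String
    let source: String      // "modal" (default) or "silent"
    let feedbackLength: Int
}

let window: [Tracked] = [
    Tracked(name: "Feedback Submitted", source: "modal",  feedbackLength: 42),
    Tracked(name: "Feedback Submitted", source: "silent", feedbackLength: 0),
    Tracked(name: "Feedback Submitted", source: "silent", feedbackLength: 0),
]

// The two filters from the text isolate the same real submissions:
let bySource = window.filter { $0.name == "Feedback Submitted" && $0.source == "modal" }
let byLength = window.filter { $0.name == "Feedback Submitted" && $0.feedbackLength > 0 }
print(bySource.count, byLength.count)  // prints "1 1"
```

The source filter is the durable one: a future surface could legitimately submit short feedback, but it should never claim the wrong source.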
The numbers, before the fix
These are the numbers from the commit body and the prior 30-day analytics window it refers to.

| Metric (30-day window) | Value |
|---|---|
| Feedback Submitted events | 138 |
| ...with feedback_length == 0 (silent path) | 133 |
| Real modal form submissions | 5 |
| Share of the funnel that was silent noise | 96.4% |
Five real form submissions in a month. The rest was the exclamation-triangle icon doing its job: uploading logs to Sentry so someone could debug a session. The dashboard had no way to know. Before the fix, anyone looking at the conversion rate of Feedback Opened -> Feedback Submitted was reading a near-100% number that meant nothing.
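The commit body's two raw counts are enough to derive everything else quoted in this post. A worked calculation, using only the 138 and 133 from git show 2fbc891c:

```swift
// Derive the quoted figures from the two counts in the commit body.
let totalSubmitted = 138.0
let zeroLength = 133.0

let realSubmissions = totalSubmitted - zeroLength      // modal form submissions
let noiseShare = zeroLength / totalSubmitted * 100.0   // percent silent
let inflation = totalSubmitted / realSubmissions       // funnel overcount factor

print(realSubmissions,
      (noiseShare * 10).rounded() / 10,
      (inflation * 10).rounded() / 10)
// prints "5.0 96.4 27.6"
```

The 27.6x factor is why the dashboard number was not merely noisy but useless: the real submission count was buried under more than an order of magnitude of silent uploads.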
Why this is a generic bug class, not a Fazm-specific story
Any product with a "send me your logs" button has the ingredients for this bug. The pattern: one analytics event name, two UI surfaces, one of them silently fires both the start and the end of the funnel because there is no user friction between them. The fix is a property on the event payload. The hard part is noticing.
| Feature | Named-by-code-path (the default) | Named-by-intent (source split) |
|---|---|---|
| Event shape | Feedback Submitted, no source, two surfaces fire the same event | Feedback Submitted with source: 'modal' or 'silent' |
| Funnel reads as | One average funnel, dominated by whichever surface fires more | Two clean funnels, filterable, comparable |
| Conversion rate accuracy | Inflated by zero-friction silent paths, often by 10-100x | Matches what a user did |
| Detection cost | Manual: walk every UI surface, ask which events it fires, compare to the funnel definition | Funnel dashboards are honest by default |
| Fix cost | One property key plus call-site updates | Already fixed |
| What you should do today | Filter funnels by feedback_length > 0 (or your payload-size equivalent) and see what changes | Audit your event names, name them by user intent |
Rows describe the analytics-event design choice exposed by commit 2fbc891c. Funnel pollution shapes vary; the principle (name by intent, not by code path) generalizes.
Reproducing the whole thing yourself
Every claim on this page is in the public git log of the fazm repository. If you want to read the commit, the diff, and the call sites, this is the shortest path.
git clone https://github.com/mediar-ai/fazm
cd fazm
git show 2fbc891c
# Author: m13v <i@m13v.com>
# Date: Sat Apr 25 20:26:32 2026 -0700
#
# Distinguish silent from modal feedback in analytics
#
# Desktop/Sources/AnalyticsManager.swift | 8 ++++----
# Desktop/Sources/FeedbackView.swift | 4 ++--
# Desktop/Sources/PostHogManager.swift | 11 +++++++----
grep -n "sendSilently" Desktop/Sources/FloatingControlBar/AIResponseView.swift
# 1606: FeedbackWindow.sendSilently()
grep -n "feedbackOpened\|feedbackSubmitted" Desktop/Sources/PostHogManager.swift
# 523: func feedbackOpened(source: String = "modal") {
# 529: func feedbackSubmitted(feedbackLength: Int, source: String = "modal") {

What the rest of the week looked like in the same repo
For shape, here is the commit cadence around April 25, 2026, in the same repository. The v2.4.2 release tag landed on Sunday, April 26 at 14:12 PT, so April 25 sits inside the post-v2.4.1, pre-v2.4.2 window.
- April 24, 07:07 PT - 9856d794, Stripe checkout cancellation no longer shows an error page after the user returns to the app.
- April 24, 22:52 PT - f5d0a1e2, workspace directory picker now reads "Select" instead of "Open"; Custom API Endpoint help text gains an example for local LLM bridges.
- April 25, 20:26 PT - 2fbc891c, the silent-vs-modal analytics fix. The only commit on this date.
- April 26, 14:12 PT - bdf57afd, Release 2.4.2, four items move from "unreleased" into the 2.4.2 changelog block.
- April 26, 16:02 PT - be29a3c7, SQL tool description gains observer_activity schema notes for the AI agent.
The 2.4.2 release notes do not mention the analytics fix as a line item, because the fix is an internal observability change, not a user-facing behavior change. It is still arguably the most consequential commit of that release window: every subsequent decision read off the feedback funnel is now reading a real number.
What to take away if you ship anything with telemetry
Name events by user intent. Not by the code path that fires them. A Feedback Submitted event should mean "a human submitted feedback," not "the FeedbackWindow class fired its submit handler." If you have two surfaces that share an intent name but feel different to users, pass a source property. One property on the payload buys you a clean funnel forever.
Funnels do not have unit tests. No code change in the world would have caught this bug. The code did what it was written to do. The test would have to be a human sitting in front of a dashboard and asking "wait, why is this conversion rate so high?" That is the only signal. Schedule that human time the way you schedule code review.
Silent telemetry pollutes more than loud bugs. A crash is loud, gets reported, gets fixed. A silently inflated metric is quiet and decides the product roadmap. The April 25 commit body is one of the cleanest examples of this trade-off in any recent consumer-AI git log: a 13-line fix to a number that, before the fix, was wrong by a factor of roughly 27.
Want to see what the Fazm event-source split looks like in PostHog?
Book a 15-min walkthrough. I will pull up the live funnel filtered on source=modal vs source=silent and show what a 96% noise reduction actually looks like on a dashboard.
Frequently asked questions
What open source AI shipped on April 25, 2026?
April 25, 2026 was a Saturday. None of the big-name foundation model drops everyone associates with April 2026 landed on that date. Gemma 4 (Apr 2), GLM-5.1 (Apr 3), Llama 4 Scout and Maverick (Apr 5), Codestral 2 (Apr 8), Qwen 3 family (Apr 5-9), and the OpenAI Codex Labs / Agents SDK news (Apr 21-22) had all already shipped. The one shipping commit in MIT-licensed consumer Mac AI that day was Fazm commit 2fbc891c at 20:26:32 PT, a 13-line analytics fix touching three Swift files: AnalyticsManager.swift, FeedbackView.swift, and PostHogManager.swift.
Why does a 13-line analytics fix matter when bigger projects are getting all the coverage?
Because the commit message contains a number that should make anyone running AI product analytics uncomfortable: 133 of 138 Feedback Submitted events in the prior 30 days had length 0. That is 96.4 percent of one funnel reading as zero-character feedback. The cause was not bad users. It was a single UI element (an exclamation-triangle 'Report an issue' icon in the floating control bar) calling FeedbackWindow.sendSilently(), which fired both Feedback Opened and Feedback Submitted with length 0 instantly, on the same code path that uploads logs to Sentry. The event names were shared with the real form submission, so the funnel looked active. It was not active. It was 96 percent silent log uploads pretending to be feedback.
How can I verify the commit, the file paths, and the 133-of-138 number myself?
Clone github.com/mediar-ai/fazm and run: git show 2fbc891c. The author date is Sat Apr 25 20:26:32 2026 -0700. The diff touches Desktop/Sources/AnalyticsManager.swift (4 lines changed), Desktop/Sources/FeedbackView.swift (2 lines changed), and Desktop/Sources/PostHogManager.swift (7 lines changed). The 133 of 138 number lives in the commit body, the second paragraph. The phrase 'polluted the funnel' is in there verbatim. The fix is a source: String parameter (defaulting to 'modal') added to feedbackOpened and feedbackSubmitted, passed as 'silent' from FeedbackWindow.sendSilently().
Where in the app is the silent path triggered from?
Desktop/Sources/FloatingControlBar/AIResponseView.swift, line 1606, inside a private sendReport() function. The function is wired to the exclamation-triangle 'Report an issue' button that appears in the AI response floating bar. Tapping that button uploads /tmp/fazm.log to Sentry under the message 'User Report (logs only)' and previously fired the same analytics events as if the user had opened the feedback form and submitted a zero-length form. After commit 2fbc891c, both events still fire, but with source: 'silent'.
Why is this a meaningful bug class, not a one-off bookkeeping fix?
Because it is the default failure mode of any product with a 'send me your logs' button. The shape is: one UI affordance does telemetry on the same event name as a different UI affordance. Funnel dashboards average the two together. The funnel looks fine. The conversion number on the dashboard is wrong, often by an order of magnitude, in a direction that is sometimes flattering. The fix is one property on the event, but you only know to apply it if you sit down and re-read every event your app fires and ask whether it can be triggered by a UI surface other than the one the funnel assumes. That re-reading is what produced this commit.
Was anything else released on April 25, 2026 that the headline writers missed?
Not in the fazm repo. April 25 has a single commit. April 24 had two (a workspace picker label tweak and a Stripe checkout cancellation redirect fix). April 26 had two (the v2.4.2 release tag and a SQL tool description update). The week was paced around the v2.4.2 release on Sunday, April 26. In the broader open source AI space, llama.cpp, Ollama, vLLM, and the other infra repos all had their usual cadence of small PRs that day, but no headline release. If you are looking for 'what shipped on April 25, 2026' as a specific calendar question, the answer is mostly: bug-fix Saturday across the consumer AI stack.
Does Fazm use accessibility APIs or screenshots to read the screen?
Accessibility APIs. The macOS AXUIElement family, including kAXFocusedWindowAttribute, kAXChildrenAttribute, kAXValueAttribute, and the role and title attributes. The agent reads structured element data: button labels, window titles, text-field values, roles. That is a different stack from the screenshot-and-vision pipelines most consumer 'computer use' agents use today. The April 25 commit is unrelated to that stack; it is in the analytics layer, not the screen-reading layer. It is included here because it is what literally shipped on April 25.
Is Fazm a framework or a consumer app?
Consumer macOS app. Signed and notarized DMG, MIT-licensed source on GitHub. The April 25 commit is interesting partly because it is the kind of thing a framework's authors would document in a changelog and a consumer app's authors would catch by sitting down to read their own dashboards. The pattern matches: the bug was found by a human reading a funnel chart, not by an error monitor, not by a user complaint, not by a unit test. Funnels do not have unit tests.
How do I avoid the same bug class in my own AI product?
Three concrete moves. First, name every analytics event after the user intent, not the code path. If the same intent ('submit feedback') has two different surfaces ('silent log upload' and 'modal form'), give them different event names or pass a source property so the funnel can split them. Second, walk every UI affordance once per quarter and ask: when this is tapped, what events fire? Compare against your event reference. The mismatch is the bug. Third, set up a funnel that filters on feedback_length > 0 (or whatever your equivalent payload size is). If the conversion rate of that filter is very different from the unfiltered rate, you have silent noise.
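The second move, the quarterly walk, can be mechanized once you maintain two small tables: what each surface actually fires, and what the funnel assumes. A sketch (surface and event names here are invented for illustration):

```swift
// Hypothetical quarterly audit: compare what each UI surface fires
// against what the funnel definition assumes. Any extra surface is
// a pollution source.
let firesEvent: [String: [String]] = [
    "Feedback Submitted": ["modal feedback form", "report-an-issue icon"],
]
let funnelAssumes: [String: [String]] = [
    "Feedback Submitted": ["modal feedback form"],
]

var pollutionSources: [String] = []
for (event, surfaces) in firesEvent {
    let assumed = Set(funnelAssumes[event] ?? [])
    for surface in surfaces where !assumed.contains(surface) {
        pollutionSources.append("\(event) also fired by \(surface)")
    }
}
print(pollutionSources)
// prints ["Feedback Submitted also fired by report-an-issue icon"]
```

The audit is only as good as the firesEvent table, which is why it has to be rebuilt by walking the UI, not copied from last quarter.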
Why are there separate roundups for individual days in April 2026?
Because the cadence of consumer AI shipping is daily, not monthly. A month-level roundup is a snapshot; it does not show you what changed on what day or why. A day-level roundup forces a specific question: what did this single Saturday produce? The answer for April 25, 2026 is uninteresting if you scope to model releases and interesting if you scope to one commit in one Swift file. That gap is the entire reason to write per-day.
Where can I read about the days around April 25, 2026?
There is a deeper writeup of the April 22 Fazm v2.4.1 release at /t/open-source-ai-projects-tools-updates-april-21-22-2026 (the commit that taught the agent to recommend its own Remote Control feature instead of suggesting Telegram-bot workarounds). The April 20-21 update window is at /t/open-source-ai-projects-tools-updates-april-20-21-2026. The April 13-14 announcement window is at /t/open-source-ai-projects-announcements-april-13-14-2026. Each is built around one specific commit or one specific shipping event in that window, not a generic recap.
Related days in the April 2026 log
April 21-22, 2026: the v2.4.1 release that taught a Mac AI agent to recommend its own features
10 lines added to ChatPrompts.swift; the AI stops suggesting Telegram-bot workarounds for a native feature.
April 20-21, 2026 open source AI updates
What shipped in the immediate run-up to the v2.4.1 release.
April 13-14, 2026 open source AI projects announcements
Mid-month checkpoint on the April model-release wave.