APRIL 2026 / RECURRING DIGEST / LOCAL DB

Open source AI on GitHub, the last 24 hours, as a recurring digest your Mac reads for you

Awesome-lists go stale. GitHub topic pages are star-sorted, not time-sorted. Neither answers the question the query actually asks. The answer is a one-liner GitHub search URL with pushed:>YYYY-MM-DD, read every morning by a Mac agent that speaks the macOS accessibility tree and writes results into a table on your own disk. This guide shows both the URL and the agent.

Matthew Diakonov
9 min read
Written from the Fazm source tree
Live GitHub search URL you can try right now
AX tree, not screenshots
Local fazm.db digest, no SaaS
Bundled web-scraping and deep-research skills
MIT-licensed Mac agent

THE QUERY

The GitHub URL almost nobody posts

Every article that ranks for this keyword is either a curated list or a static roundup. None of them give you the native GitHub URL that scopes activity to a 24-hour window. Here it is. Paste it into a browser. Change the date to yesterday. You will see the answer to the search query.

github-search-last-24h.txt

https://github.com/search?q=topic%3Aartificial-intelligence+pushed%3A%3E2026-04-19&s=updated&type=repositories

The date is GitHub-UTC. If you are in US Pacific and it is 8am local, "yesterday" in UTC is already roughly 16 hours into the past, so you will miss the overnight commits from Asia if you use your local date. Always pass yesterday's UTC date. A Mac agent like Fazm handles this automatically with new Date(Date.now() - 86400000).toISOString().slice(0,10).
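If you are scripting the window yourself, the same UTC-date arithmetic plus the URL assembly looks like this in Python (a sketch; the topic slug is swappable):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import quote_plus

# Yesterday in UTC, matching how GitHub interprets the pushed:> qualifier
yesterday = (datetime.now(timezone.utc) - timedelta(days=1)).strftime("%Y-%m-%d")

# quote_plus encodes ':' as %3A, '>' as %3E, and spaces as '+'
query = quote_plus(f"topic:artificial-intelligence pushed:>{yesterday}")
url = f"https://github.com/search?q={query}&s=updated&type=repositories"
print(url)
```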

THE GAP

What every top search result gets wrong

If you run the same query we are ranking for and read the top five hits, the pattern is obvious. Each one answers a different, easier question.

Feature                               | Typical top SERP result      | Fazm + live GitHub query
Scopes to a true 24-hour window       | No, frozen at publish time   | Yes, pushed:>YYYY-MM-DD
Updates itself                        | No, rewritten by hand        | Yes, re-run on a schedule
Works across topics (llm, agents, ml) | Usually one topic, flat list | Any topic, combinable
Local DB, no SaaS quota               | N/A, static blog post        | execute_sql on fazm.db
Uses structured page data (no OCR)    | N/A                          | macOS accessibility tree
Consumer app, no dev setup            | N/A                          | Install DMG, grant permission

THE PIPELINE

What happens when you say "get me the last 24 hours of AI releases"

Fazm is not a dashboard. It is a Mac-resident agent that drives real apps. When you ask for a GitHub digest, the request flows through four real pieces of its source tree.

How the digest gets built

You → Fazm agent → Chrome window (GitHub) → AX tree → fazm.db → Morning brief

THE ANCHOR FACT

Two lines in AppState.swift that change what the agent sees

The part of Fazm that makes GitHub release tracking reliable is not the LLM. It is two calls in the desktop process. Both live in Desktop/Sources/AppState.swift. You can grep for them yourself.

Desktop/Sources/AppState.swift (lines 437-442)

AXUIElementCreateApplication(frontApp.processIdentifier)
AXUIElementCopyAttributeValue(appElement, kAXFocusedWindowAttribute as CFString, ...)

When you open the GitHub search URL in Chrome, every repository card in the results list is an AXGroup with labeled children: repo name, description, language, star count, last-push timestamp. Fazm walks the tree and pulls each field directly. There is no screenshot step, no OCR, no fuzzy matching on pixel rows. That is why the digest is stable even when GitHub tweaks its layout.

THE TOOLS

Three ACP-bridge tools that do the work

Fazm exposes a small set of tools to the LLM via the Anthropic Agent Client Protocol. Three of them are all you need for this workflow. They live in acp-bridge/src/fazm-tools-stdio.ts.

capture_screenshot

Declared at line 296. Used only when the AX tree does not have what the model needs. For GitHub release pages, AX is almost always enough, so this stays idle.

execute_sql

Declared at line 281. SELECT auto-limits to 200 rows; UPDATE/DELETE require WHERE; DDL blocked. Perfect for append-only digest tables like ai_releases_daily.

web-scraping skill

Defined at Desktop/Sources/BundledSkills/web-scraping.skill.md. Bundled Python at Desktop/Sources/Resources. Used when the result page is JS-rendered enough that AX is incomplete.

deep-research skill

Defined at Desktop/Sources/BundledSkills/deep-research.skill.md. Optional: triggered when you want the daily digest to include a one-paragraph summary per top repo rather than just names.
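The execute_sql guardrails described above (auto-limited SELECTs, WHERE required on mutation, DDL blocked) are easy to picture. A sketch of that policy layer, with assumed rules rather than Fazm's actual code:

```python
import re

DDL = {"CREATE", "DROP", "ALTER", "TRUNCATE"}

def guard_sql(sql: str) -> str:
    """Apply execute_sql-style guardrails to one statement (illustrative only)."""
    stmt = sql.strip().rstrip(";")
    verb = stmt.split(None, 1)[0].upper()
    if verb in DDL:
        raise ValueError("DDL is blocked")
    if verb in {"UPDATE", "DELETE"} and not re.search(r"\bWHERE\b", stmt, re.I):
        raise ValueError(f"{verb} without WHERE is blocked")
    if verb == "SELECT" and not re.search(r"\bLIMIT\b", stmt, re.I):
        stmt += " LIMIT 200"  # cap unbounded reads at 200 rows
    return stmt
```

A SELECT with no LIMIT comes back capped at 200 rows; a bare DELETE never reaches the database.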

THE RUN

What a real session looks like in the chat log

This is abbreviated. The real turn log is longer because the agent narrates as it works. The shape, though, is what you would see the first time you ran the workflow.

fazm chat session

Repo names and deltas above are illustrative of the pattern, not a specific date.

17 bundled skills
200-row SQL auto-limit
24-hour window filter
0 SaaS round-trips

THE SETUP

From zero to a recurring 8am digest, in five steps

1

Install Fazm

Download the signed DMG from fazm.ai. Drag to Applications. Open. During onboarding, grant Accessibility and Screen Recording permission. These are the two that matter for this workflow. AX is used to read the GitHub page; Screen Recording covers the rare AX gap.

2

Pin the GitHub query

In chat: 'remember this GitHub search URL as my daily AI release query.' The agent writes it to the local kv store. From then on, references to 'my AI release query' resolve to that URL.

3

Dry-run the read

'Open my AI release query in Chrome and read the top 20 results.' The agent opens the URL, waits for the list to render, reads the AX tree, and prints the rows back in chat. If any repo is missing a field, it will tell you and optionally fall back to web-scraping via Playwright.

4

Pipe it into fazm.db

'Create a table ai_releases_daily with columns date, repo, description, stars, pushed_at, language. Insert today's run.' The agent calls execute_sql. The table now exists on your disk at ~/Library/Application Support/Fazm/fazm.db.

5

Schedule the recurrence

'Do this every weekday at 8am and send me the top 10 in chat.' Fazm writes a recurring task. From now on the digest runs while you are still making coffee, and the results are queryable with one execute_sql call.
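Under steps 4 and 5, the execute_sql calls are ordinary SQLite statements against fazm.db. The same shape, sketched against an in-memory database with placeholder values (not a real day's data):

```python
import sqlite3

db = sqlite3.connect(":memory:")  # stand-in for ~/Library/Application Support/Fazm/fazm.db
db.execute("""CREATE TABLE IF NOT EXISTS ai_releases_daily (
    date TEXT, repo TEXT, description TEXT,
    stars INTEGER, pushed_at TEXT, language TEXT)""")

# One row per repo per run (illustrative values)
db.execute(
    "INSERT INTO ai_releases_daily VALUES (?, ?, ?, ?, ?, ?)",
    ("2026-04-20", "example/llm-runtime", "Fast local inference", 12345,
     "2026-04-20T07:12:00Z", "C++"),
)

# The morning brief is one query away
top = db.execute(
    "SELECT repo, stars FROM ai_releases_daily "
    "WHERE date = ? ORDER BY stars DESC LIMIT 10",
    ("2026-04-20",),
).fetchall()
print(top)
```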

THE ONE-LINER

If you insist on a shell command

For the subset of readers who would rather see this without the agent, the gh CLI gets you 80 percent of the way there. This is the shape Fazm internally builds when you ask for the digest, just expressed in bash.

ai-releases-last-24h.sh
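What might go in that file, assuming the gh CLI is installed and authenticated. The flags and --json field names below are my reading of gh's repository-search interface, so verify against gh search repos --help before relying on it:

```shell
#!/usr/bin/env bash
set -u
# Yesterday's UTC date: BSD date (macOS) first, GNU date as a fallback
YDAY=$(date -u -v-1d +%F 2>/dev/null || date -u -d '1 day ago' +%F)
QUERY="topic:artificial-intelligence pushed:>$YDAY"
echo "search window: $QUERY"

# Repos pushed in the last ~24h, newest first: name, stars, last push
if command -v gh >/dev/null 2>&1; then
  gh search repos "$QUERY" --sort updated --limit 20 \
    --json fullName,stargazersCount,pushedAt \
    --jq '.[] | "\(.fullName)\t\(.stargazersCount)\t\(.pushedAt)"' || true
fi
```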

The shell script is fine for a one-off. The reason to use Fazm instead is that it persists every run into a local table, dedupes across days, optionally fetches the /releases tab of the top N repos for tagged versions, and does not require you to have gh installed or a token in your shell profile.

WHY THIS WORKS

A daily release feed is an AX problem, not an LLM problem

Most "AI news in the last 24 hours" products solve the wrong half. They stack an LLM on top of a noisy feed and pay it to summarize. The better move is to fix the feed. GitHub already knows which repos were pushed in the last 24 hours on a given topic. The native search URL returns that list. The only thing missing from your daily workflow is a reliable reader that runs on a schedule and writes results you can query later. For consumer Mac users, an accessibility-tree reader is a better fit than a scraper, and a local SQLite is a better fit than a SaaS dashboard.

TOPICS WORTH WATCHING

The topic filters most worth pinning

Swap any of these into the topic: slot in the URL. These are the topics that generate meaningful same-day activity in 2026.

topic:artificial-intelligence · topic:llm · topic:agents · topic:open-source-ai · topic:ml · topic:inference · topic:diffusion · topic:rag · topic:embeddings · topic:vector-database · topic:fine-tuning · topic:model-context-protocol · topic:anthropic · topic:llama · topic:deepseek · topic:qwen · topic:multimodal · topic:speech-to-text · topic:text-to-speech · topic:voice-agent

The 'last 24 hours' window is the one GitHub view that never ages. Any other digest is stale the moment it publishes.


By the numbers

Approximate shape of what shipped in a typical recent 24-hour window on the topic:artificial-intelligence filter. The exact numbers move daily; the ratios are stable enough to plan around.

repos pushed: varies daily
tagged releases: varies daily
rows Fazm reads per run (top slice): 20
external APIs needed: 0

Want your own daily open source AI digest?

In 15 minutes on a call, we will wire up a Fazm recurring task that reads your favorite GitHub filter every morning and drops the results into a local table.

Book a call

Frequently asked questions

What is the exact GitHub search query for 'open source AI projects with releases or updates in the past 24 hours'?

GitHub has two useful queries, and you want both. For repository pushes in the last day: https://github.com/search?q=topic%3Aartificial-intelligence+pushed%3A%3E2026-04-19&s=updated&type=repositories (replace the date with yesterday's UTC date). For tagged releases only: https://github.com/search?q=topic%3Aartificial-intelligence+created%3A%3E2026-04-19&type=releases. You can swap the topic for topic:llm, topic:agents, topic:diffusion, topic:open-source-ai, or a multi-topic AND like topic:ml+topic:inference. GitHub's default sort on a search URL is best-match, so append &s=updated to sort by latest push. The date filter is a strict greater-than (use >= to include the boundary date itself); GitHub treats it as UTC.

Why not just use RSS feeds or an 'awesome-ai' list?

Three reasons. Feeds exist per repo, not per topic, so you would need to subscribe to thousands of them to match topic:artificial-intelligence. Curated awesome-lists are maintained by humans and lag release activity by days or weeks; alvinreal/awesome-opensource-ai and thebigbone/opensourceAI are useful but neither is a real-time feed. GitHub's own topic pages (github.com/topics/open-source-ai, /topics/artificial-intelligence) show stars-sorted lists with no date scoping at all, so a repo that was last updated in 2023 still sits on top. The search URL with pushed:>YYYY-MM-DD is the only GitHub-native way to answer the 'past 24 hours' question without writing a script.

What does Fazm actually do that a browser bookmark does not?

Two concrete things. First, when Fazm looks at the rendered GitHub page it reads the macOS accessibility tree of the window, not a screenshot. In the Fazm source that means AXUIElementCreateApplication(frontApp.processIdentifier) followed by AXUIElementCopyAttributeValue(appElement, kAXFocusedWindowAttribute as CFString, ...) in Desktop/Sources/AppState.swift lines 439 through 441. The repository cards come back as typed roles and values, not pixels, so there is no OCR step and no risk of misread star counts. Second, Fazm's ACP bridge exposes an execute_sql tool against the local fazm.db (acp-bridge/src/fazm-tools-stdio.ts line 281) so the agent writes each day's digest to a table on your machine, no external API, no SaaS quota.

Do I have to be a developer to use this?

No. Fazm is a consumer Mac app. You install the DMG, grant Accessibility and Screen Recording permission during onboarding, open the chat, and say something like 'every morning at 8am, open this GitHub search URL in Chrome, read the top 20 repos that were pushed in the last day, and write a one-line summary of each into a new table in fazm.db called ai_releases_daily.' The agent calls capture_screenshot, web_scraping (a bundled skill at Desktop/Sources/BundledSkills/web-scraping.skill.md), and execute_sql on your behalf. None of these require a terminal.

Where is the 'web-scraping' skill referenced in the source?

Desktop/Sources/BundledSkills/web-scraping.skill.md. It is one of 17 bundled skill definitions that ship with the app (others include deep-research.skill.md, pdf.skill.md, docx.skill.md, video-edit.skill.md). The skill tells the agent which Python stack to prefer: requests + BeautifulSoup for static pages, Playwright for JS-rendered pages, Scrapy for larger crawls. Because Fazm ships its own bundled Python at Desktop/Sources/Resources (see bundled_python_path in Desktop/Sources/Chat/ChatPrompts.swift line 22), the skill works even on a Mac where the user has never installed Python.

What is actually shipping on GitHub in the last 24 hours in AI, as of April 2026?

The mix is predictable if you run the query daily. Big inference engines (llama.cpp, ollama, vllm, mlc-llm) ship small commits almost hourly. Agent frameworks (autogen, langchain, crewAI, smolagents, ACP implementations) ship a feature or two a day. Model-card and recipe repos for new open weights (DeepSeek-R1, GLM-5.1, Qwen3, Kimi-Dev-72B, Llama 4 Maverick) spike when a model drops and then settle. Tool-adjacent projects like MCP servers and Claude Code extensions churn fastest. The value of a 24-hour filter is not 'I will read everything,' it is 'I will see the spikes.' A local digest makes the spikes obvious at a glance.

How does this compare to services like Hacker News, DailyDev, or GitHub Trending?

They each solve a different slice. Hacker News surfaces posts humans vote on, which means you see the projects with marketing behind them. GitHub Trending is sorted by star velocity so it skews toward virality, not technical change. DailyDev aggregates blog posts, not commits. None of them give you 'every repo on topic X that was pushed in the last 24 hours,' which is exactly what the native GitHub search URL does and exactly what a Mac agent can read into a local DB. Use the aggregators for stories; use this setup for raw release signal.

What does an end-to-end run look like from a user's point of view?

Open Fazm's floating bar. Say: 'get me the last 24 hours of AI releases on GitHub.' The agent opens Chrome, navigates to the pushed:> search URL, waits for the results list to render, reads the list through the accessibility tree, extracts repo name, description, stars, and last-push timestamp for each row, optionally opens the /releases tab on the top 5 to check for tagged versions, and returns a short markdown digest in chat. Tell it 'save that to fazm.db as ai_releases_daily with today's date' and the execute_sql tool appends a row per repo. Tell it 'do that every day at 8am' and it writes a recurring task.
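The last step of that run, turning extracted rows into the chat digest, is plain formatting. A sketch (field names follow the run described above; the rows are placeholders, not real repos):

```python
def digest(rows: list[dict]) -> str:
    """Render extracted repo rows as a short text digest, highest stars first."""
    lines = ["AI releases, last 24h", ""]
    for r in sorted(rows, key=lambda r: r["stars"], reverse=True):
        lines.append(
            f"- {r['repo']} ({r['stars']} stars, pushed {r['pushed_at']}): {r['description']}"
        )
    return "\n".join(lines)

rows = [
    {"repo": "example/agent-kit", "stars": 900, "pushed_at": "07:41Z", "description": "Agent toolkit"},
    {"repo": "example/llm-runtime", "stars": 12345, "pushed_at": "06:02Z", "description": "Fast local inference"},
]
print(digest(rows))
```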

Why is an accessibility tree better than a screenshot for reading GitHub?

Because GitHub's release list is structured. The DOM carries role, name, description, star count, language, and timestamp as discrete attributes. When you pass a screenshot to an LLM, the model has to OCR the page and guess which numbers belong to which repo. When Fazm hands it the AX tree, each repository card is already a typed group with labeled children. That difference matters most on the long tail of repos whose names use uncommon characters (Chinese, Japanese, math symbols) where OCR error rates spike. For a daily release digest that you intend to trust, the AX path is the right substrate.

Is Fazm itself open source?

Yes. Repository is github.com/mediar-ai/fazm. License is MIT. The README says 'Fully open source. Fully local.' The Swift desktop app lives under Desktop/, the Node ACP bridge under acp-bridge/. You can clone it, build with run.sh, read the exact files this page cites (AppState.swift, fazm-tools-stdio.ts, web-scraping.skill.md), and inspect or fork the tool that runs this workflow.