New AI projects on Hugging Face and GitHub around May 13, 2026. Finding them is easy. Telling which ones survive is the actual skill.
There is no official list of what launched on a given day. There is a flood of names, and most guides just reprint the flood. This one does something more useful: it shows you how to look at any project you found this week and tell, before you install anything, whether it will still exist next month. The test is release cadence, and the worked example is a real changelog you can open right now.
Direct answer, verified 2026-05-15
No platform publishes a dated "released on May 13" list. You find that day's projects on two live surfaces:
- GitHub: open github.com/trending, set the language and the date range, and read repos ranked by stars gained in that window.
- Hugging Face: the Trending tab on huggingface.co/models and the Papers section, both reordered by recent attention.
Both are leaderboards of attention, not changelogs of the day. The harder and more useful question is which of the projects you find are worth your time, and the answer to that is release cadence. The rest of this page is how to read it.
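One practical aside before the method: GitHub Trending has no public API, but the search API can approximate the same cut, repos created in a window ranked by stars. A minimal sketch, where the endpoint and `created:` query syntax are real and the topic:ai filter is just one narrowing choice among many, not a standard:

```python
# A sketch, not an official feed: approximate "what appeared on May 13"
# via the GitHub search API. The topic:ai filter is one choice among many.
import requests

resp = requests.get(
    "https://api.github.com/search/repositories",
    params={
        "q": "topic:ai created:2026-05-13..2026-05-13",
        "sort": "stars",
        "order": "desc",
        "per_page": 10,
    },
    headers={"Accept": "application/vnd.github+json"},
    timeout=30,
)
resp.raise_for_status()
for repo in resp.json()["items"]:
    print(f'{repo["stargazers_count"]:>6}  {repo["full_name"]}')
```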
The thesis: novelty is the weakest signal on either platform
Search for new AI projects from any specific date and you get a wall of repos. By mid-2026 there are millions of AI-related repositories on GitHub and over a million model repos on Hugging Face. A date filter does not reduce that to a shortlist. It reduces it to a slightly smaller wall.
The instinct most people fall back on is the star count. It feels like a quality score. It is not. A star is a single click that costs nothing and never expires. It does not get taken back when a project goes quiet. A repo can carry tens of thousands of stars earned during one strong launch week and have had no real commit since. The star count measures how good the launch post was. It tells you nothing about whether anyone is still maintaining the code today, which is the only thing that matters once you actually depend on it.
The signal that does predict survival is cadence: how often the maintainers ship, and whether what they ship is real work. Cadence lives in the commit history and the changelog. Unlike stars, it cannot be manufactured by a good headline. A project either pushed fixes in the last two weeks or it did not. So the question to carry into any roundup is not "what is new on May 13." It is "of the things that are new, which ones are alive."
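"Alive" is checkable by machine before you ever open the repo page. A minimal sketch of the two-week test against the GitHub commits API; the endpoint is real, while the 14-day window is this guide's threshold, not GitHub's:

```python
# "Alive" by this guide's threshold: at least one commit in 14 days.
import datetime as dt
import requests

def commits_last_two_weeks(owner: str, repo: str) -> int:
    since = dt.datetime.now(dt.timezone.utc) - dt.timedelta(days=14)
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"since": since.isoformat(), "per_page": 100},
        timeout=30,
    )
    resp.raise_for_status()
    return len(resp.json())  # capped at 100 by per_page; enough for a yes/no

print(commits_last_two_weeks("mediar-ai", "fazm"))
```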
Exhibit A: what an alive project looks like up close
Abstract advice is cheap. Here is a concrete, checkable example. Fazm is an open-source macOS AI agent, and its release history is committed to the public repo as a single file: CHANGELOG.json. You can open it without cloning anything. As of May 15, 2026 it holds 42 tagged releases going back to March 27, 2026. The interesting part is the first half of May.
- 22 tagged releases, May 1 to 15
- 2 releases dated May 13 itself
- 14 of 15 days with a release

One bar per calendar day. Source: CHANGELOG.json in the public repo. May 13 is shaded darker.
15 days, 22 tagged releases, one quiet day (May 6). This is what a maintained project looks like from the outside.
That shape is the thing to recognize. It is not a launch spike followed by silence. It is a steady drip: one release most days, two or three on the busy ones, one genuinely quiet day. When you find a project on May 13 and want to know whether it is worth adopting, this is the picture you are trying to reconstruct from its commit graph. If you cannot, that is your answer.
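When a project publishes a machine-readable changelog, reconstructing that picture takes a dozen lines. A sketch against Fazm's CHANGELOG.json, where the branch name and the "date" field name are assumptions about the file's schema; open the file in a browser first and adjust:

```python
# Rebuild the per-day release histogram from a public CHANGELOG.json.
# Assumptions: the file is a JSON array, each entry has an ISO "date"
# field, and the default branch is "main". Adjust after a first look.
from collections import Counter
import requests

url = "https://raw.githubusercontent.com/mediar-ai/fazm/main/CHANGELOG.json"
entries = requests.get(url, timeout=30).json()

per_day = Counter(e["date"][:10] for e in entries)
for day in sorted(d for d in per_day if "2026-05-01" <= d <= "2026-05-15"):
    print(day, "#" * per_day[day])
```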
The reframe: stop scoring by recency, start scoring by cadence
The change this guide is asking for is small, but it inverts how you read a roundup. Compare the two habits below.
How most people read a list of new AI projects
You scan the roundup, sort by stars, and pick the repo at the top with the most recent commit date. The pick feels safe because the numbers are big and the date is fresh. You install it. Three weeks later the project has not moved, an issue you filed is unanswered, and you are maintaining a fork you never wanted.
- A star is one free click that never expires
- A single fresh commit hides a stalled month
- Launch-week hype and long-term health look identical
A two-minute vetting routine
Run this on any project you found on Hugging Face or GitHub this week, before you install it. It works the same whether the project is an agent, a framework, a CLI, or a desktop app.
Vet a project in four steps
1. Open the history. Find the changelog file or the commits tab. No public history is itself a red flag.
2. Check the span. Read the last 20 entries. Do they reach the last two weeks, or stop a month ago?
3. Check the substance. Are they real bug and crash fixes, or 'bump version' and 'update README' noise?
4. Check the honesty. Does the project admit and fix its own regressions? That means people still use it.
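The first three steps are scriptable; the fourth still needs a human reading the entries. A minimal sketch against the GitHub commits API, where the NOISE keyword list is a crude heuristic of this guide's, not any standard:

```python
# Steps 1-3: pull the last 20 commits, report how stale the newest one
# is, and flag cosmetic-looking messages. Step 4 needs a human reader.
import datetime as dt
import requests

NOISE = ("bump version", "update readme", "typo", "formatting")  # crude heuristic

def quick_vet(owner: str, repo: str) -> None:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/commits",
        params={"per_page": 20},
        timeout=30,
    )
    resp.raise_for_status()
    commits = resp.json()
    newest = dt.datetime.fromisoformat(
        commits[0]["commit"]["committer"]["date"].replace("Z", "+00:00")
    )
    age_days = (dt.datetime.now(dt.timezone.utc) - newest).days
    noisy = sum(
        1 for c in commits
        if any(k in c["commit"]["message"].lower() for k in NOISE)
    )
    print(f"newest commit: {age_days} days old; {noisy}/{len(commits)} look cosmetic")

quick_vet("mediar-ai", "fazm")
```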
Step four is the one most people skip and the one that matters most. A maintainer who writes "this fixes something the last release broke" is a maintainer who is running their own software in anger. Fazm's 2.9.17 entry, dated May 14, 2026, says exactly that: it fixes a streaming bug introduced one day earlier in 2.9.16. A changelog willing to name its own mistakes is worth more than one that only ever announces triumphs.
What the May 13 entries actually say
To make the substance test concrete, here are the two releases Fazm tagged on May 13, 2026, the date in the question. Neither is a version bump. Both describe a specific way the software failed for a real person.
- 2.9.10: Fixed pop-out chat windows getting permanently stuck after a different window hit a Claude usage limit. The agent subprocess now restarts cleanly, and each conversation resumes from the same session ID once the limit refreshes instead of losing its context.
- 2.9.12: Added a timeout and a Cancel button to Google sign-in so the app stops hanging when the OAuth tab is closed early. Cleared stale subscription state on sign-out so a new account does not inherit the previous user's entitlements. Added detection for silently-dead background subagents so a stuck turn auto-cancels.
These are not glamorous. That is the point. Glamour is what launch posts are made of, and launch posts are what star counts reward. The unglamorous, dated, specific fix is the artifact of a project that is actually being used. When you evaluate something you found this week, you are looking for entries that read like these.
When new genuinely does matter
The honest counterargument: there is a class of release where novelty is the whole story, and that is model weights. When a lab publishes a model on Hugging Face that jumps a capability the previous generation could not reach, the release date is real news, and you do not wait two weeks of commit history to take it seriously. A model is a finished artifact. It does not need a daily drip of fixes to be good.
So the cadence test is not universal. It is sharpest for software that has to keep moving: agents, frameworks, CLIs, bridges, MCP servers, desktop apps. For a model repo you grade differently. You check whether quantized variants exist, whether the model card documents tool-calling behavior honestly, whether the license lets you ship it, and whether anyone has built running code around it yet. For the deeper split between what each platform publishes and why, our April 2026 guide on Hugging Face versus GitHub walks through it.
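Those model-repo checks are scriptable too. A sketch using the huggingface_hub library, where the repo id is a placeholder and the quantization suffixes are a heuristic; quantized variants often live in sibling repos, so a Hub search by model name is worth doing as well:

```python
# Model-repo checks: declared license and quantized variants in-repo.
# The repo id is a placeholder; "gguf"/"awq"/"gptq" is a heuristic list.
from huggingface_hub import HfApi

info = HfApi().model_info("some-org/some-model")  # placeholder id
files = [s.rfilename for s in info.siblings]
quantized = [f for f in files if any(q in f.lower() for q in ("gguf", "awq", "gptq"))]
license_tag = getattr(info.cardData, "license", None) if info.cardData else None

print("license:", license_tag or "not declared")
print("quantized files:", quantized or "none in this repo")
```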
But for a tool, a stalled commit graph is a much louder warning than it is for a model, because a tool that stops moving stops working as the rest of the stack moves around it. The agent protocols and browser automation packages that a desktop AI agent depends on shift almost every week. A project pinned against last quarter's versions, with no recent commits to catch up, is already broken. It just has not told you yet.
Resolution
"What new AI projects appeared around May 13, 2026" is a question you can answer in thirty seconds with a date filter on GitHub Trending and the Hugging Face Trending tab. It is also the wrong question to stop on. The list it returns is a list of launches, and a launch tells you that something started, not that it will last.
The question worth your time is which of those projects are alive, and the answer is sitting in plain sight in every repo: the changelog, the commit graph, the dated record of whether anyone is still shipping fixes. Open it. Count the entries from the last two weeks. Read what they say. A project like Fazm, with 22 tagged releases across the first 15 days of May and a public CHANGELOG.json you can read without cloning, is making that easy on purpose. The ones that make it hard are telling you something too.
Want a Mac AI agent whose changelog you can actually read?
Twenty-five minutes. We open Fazm's public release history together, walk the cadence, and set it up on your machine before the call ends.
Frequently asked questions
How do I find AI projects that specifically appeared around May 13, 2026?
There is no official 'released on May 13' list anywhere. The closest you get is two live surfaces. On GitHub, open github.com/trending, set the language filter (Python, TypeScript, Rust, Swift) and the date range to 'today' or 'this week', and you see repos sorted by stars gained in that window, which is a proxy for what appeared or broke out recently. On Hugging Face, the Trending tab on huggingface.co/models and the Papers section both reorder by recent activity. Neither is a changelog of the day. They are leaderboards of attention, which is a different thing, and that distinction is the whole point of this guide.
Is a high star count a reliable sign a new AI project is worth using?
No. A star is one click. It costs nothing, it never expires, and it does not get retracted when a project goes quiet. A repo can hold 40,000 stars from a launch week six months ago and have had no commit since. Stars measure how good the launch post was, not whether the project is alive today. The signal that actually predicts whether a project still exists next month is cadence: how often the maintainers ship, and whether the things they ship are real fixes or cosmetic noise. Cadence is visible in the commit history and the changelog, and unlike stars it cannot be faked by a good Show HN.
What does Fazm's changelog actually show about release cadence?
Fazm is an open-source macOS AI agent, and its CHANGELOG.json is committed to the public repo at github.com/mediar-ai/fazm. As of May 15, 2026 it lists 42 tagged releases going back to March 27, 2026. In the first 15 days of May alone there are 22 of them: at least one release on 14 of those 15 days, and on May 11 and May 14 there were three each. May 13, 2026 carried two: versions 2.9.10 and 2.9.12. That is the texture you are looking for when you vet any project you found this week. Open the changelog, count the dated entries in the last two weeks, and read what they say.
Hugging Face or GitHub: which one should I check first for new AI projects?
It depends what kind of project you mean, because the two platforms hold different artifacts. Hugging Face hosts model weights, datasets, and the Spaces that demo them. GitHub hosts the agent code, the inference engines, the bridges, and the apps that run those weights. A new model is a Hugging Face event. A new agent, tool, or framework is a GitHub event. Most useful projects show up on both at once. For the longer treatment of how the two platforms split, see our April 2026 guide on the same question. For vetting, the method in this guide is identical regardless of which platform you found the project on.
How can I tell if a project I found today will still be maintained next month?
Read the last 20 commits or the last 20 changelog entries, and ask three questions. First, span: do they cover the last two weeks, or did the activity stop a month ago? Second, substance: are the entries describing bugs that real users hit, edge cases, crash fixes, or are they 'update README' and version bumps? Third, honesty: does the project admit its own regressions? Fazm's 2.9.17 entry on May 14 literally says it is fixing something that 2.9.16 broke the day before. A changelog that names its own mistakes is written by people still using the thing. That is the strongest single signal you can get without installing anything.
What did Fazm ship on May 13, 2026 specifically?
Two releases. Version 2.9.10 fixed pop-out chat windows getting permanently stuck after a different window hit a Claude usage limit; the ACP subprocess now restarts cleanly and each conversation resumes from the same session ID once the limit refreshes. Version 2.9.12 added a timeout and a Cancel button to Google sign-in so the app no longer hangs when the OAuth tab is closed early, cleared stale subscription state on sign-out, and added detection for silently-dead background subagents so a stuck turn auto-cancels instead of hanging forever. Both entries describe specific failure modes real users hit. That is what an active project's daily output looks like up close.
Does this vetting method apply to model weights on Hugging Face too?
Partly. For a model repo the cadence signal is weaker, because a good set of weights is a finished artifact and does not need daily commits. There you look at different things: whether quantized variants exist, whether the model card documents tool-calling behavior, whether the license lets you ship it, and whether anyone has built a Space or a GitHub project around it. The cadence method is sharpest for code projects: agents, frameworks, CLIs, MCP servers, desktop apps. Those are software that has to keep moving, and a stalled commit graph on a tool is a much louder warning than a stalled commit graph on a model.
Related guides
Hugging Face or GitHub for new AI projects in April 2026
Why the two platforms answer different questions: weights live on one, the agent code that runs them lives on the other.
AI tech developments, May 11 to 12, 2026
A mid-May 2026 snapshot of model releases and project activity, the kind of roundup this guide teaches you to read critically.
Open-source Mac AI agents, April 2026
The open-source desktop agent landscape on macOS, and where a real shipping project fits in it.