Notion 3.4 part 2, and everything it still cannot do
Notion AI updates April 2026, and where the desktop layer starts
Workers for Agents, voice input, AI Meeting Notes, Mail and Calendar in the agent. The April 14 and April 17 releases made Notion AI a stronger planner. They did not make it leave Notion. This page is about the line Notion AI stops at, and the real macOS accessibility code that keeps going.
What Notion actually shipped in April 2026
Two dated releases. On April 14 Notion 3.4 part 2 introduced Workers for Agents (Enterprise-only sandboxed code execution), voice input for AI prompts on the desktop app, AI Meeting Notes triggered from Cmd+K with custom instruction templates, shareable AI chat links, a Views API with eight endpoints, and smart filters. On April 17 Notion added Mail and Calendar to agent settings, alongside Slack, so Custom Agents can now take cloud-API actions against those three services. The Custom Agents free trial was extended through May 3, 2026.
Most guides for this keyword stop right there. They list eight features, they link to the release notes, and they end. This page is interested in a different question. Where is the line that these features do not cross, and what happens to a real workflow the moment it needs to cross it?
The eight updates, in one frame
Workers for Agents
Enterprise-only sandboxed code execution with a 30 second timeout, a 128 MB memory cap, and a domain allowlist per workspace. Fine for data munging, useless for desktop automation.
Voice input
You can speak a prompt to the agent on desktop. It transcribes and runs. Convenient, but it does not change what the agent can reach.
AI Meeting Notes
Cmd+K triggers meeting-note capture with custom instruction templates. Still scoped to Notion pages.
Shareable AI chats
Read-only links to an agent conversation. Good for handoff, no new execution capability.
Mail and Calendar
Custom Agents gained Gmail and Google Calendar actions via Google's cloud APIs. Not native Mail.app, not native Calendar.app.
Views API
Eight endpoints for programmatically reading and mutating Notion views. API surface, not a new agent surface.
Smart filters
Filter views with AI prompts. Better search, still in Notion.
The anchor fact
Notion AI’s agent sandbox runs at most 30 seconds on 128 MB of RAM
That is the published shape of Workers for Agents. It is a server-side Python sandbox with an Enterprise gate, a wall-clock timeout, a memory cap, and an outbound-domain allowlist that the workspace admin controls. There is no way to open Xcode from inside it. There is no way to paste into Final Cut Pro. There is no way to type into a native text field. This is not a bug, it is the boundary. Fazm is what runs outside the boundary.
The wall, drawn as a diagram
Notion AI routes everything through Notion’s servers and its handful of supported cloud-API integrations. The agent can read a Notion page, call the Views API, speak to Gmail and Google Calendar and Slack. That is the left side. The moment a workflow hits an app that does not expose a cloud API Notion supports, the flow dies. Fazm is the execution layer that picks up the request and actually drives the Mac.
From the agent's mouth to the Mac's hands
“Notion AI is the brain on the cloud. A macOS accessibility layer is the hands on the machine. Skipping the second half is what makes most agents look clever in a demo and useless in a real workflow.”
Fazm design notes, April 2026
Accessibility APIs, not screenshots
There is a common shape of Mac agent that takes screenshots of the screen, runs them through a vision model, infers what the buttons are, and clicks based on pixel coordinates. That approach works until fonts change, or dark mode flips, or the window is half offscreen, or a modal covers the control. Fazm does not do that.
Fazm reads the macOS accessibility tree directly, through the same AXUIElement API that VoiceOver uses. Elements have roles, titles, values, and children as structured data. The app is not guessing what a button says. It is asking the OS for the button by role and title.
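To make the "asking the OS for the button" idea concrete, here is a minimal sketch of that kind of lookup. It is not Fazm's actual source; `findButton` is a hypothetical helper, but the `AXUIElement` calls are the standard macOS accessibility API.

```swift
import ApplicationServices

// Depth-first search of an app's accessibility tree for a button with a
// given title. Structured role + title matching, no pixels involved.
func findButton(titled title: String, in element: AXUIElement, depth: Int = 0) -> AXUIElement? {
    guard depth < 25 else { return nil }  // guard against huge or cyclic trees

    var roleRef: CFTypeRef?
    var titleRef: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXRoleAttribute as CFString, &roleRef)
    AXUIElementCopyAttributeValue(element, kAXTitleAttribute as CFString, &titleRef)

    if roleRef as? String == kAXButtonRole, titleRef as? String == title {
        return element
    }

    var childrenRef: CFTypeRef?
    AXUIElementCopyAttributeValue(element, kAXChildrenAttribute as CFString, &childrenRef)
    for child in (childrenRef as? [AXUIElement]) ?? [] {
        if let hit = findButton(titled: title, in: child, depth: depth + 1) {
            return hit
        }
    }
    return nil
}

// Usage: ask an app (by pid) for its "Build" button and press it.
// let app = AXUIElementCreateApplication(pid)
// if let button = findButton(titled: "Build", in: app) {
//     AXUIElementPerformAction(button, kAXPressAction as CFString)
// }
```

Because the match is by role and title, a font change, a dark-mode flip, or a half-offscreen window does not break it.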
The tell is in the permission-check code. A screenshot bot only needs Screen Recording permission. Fazm’s permission flow has to be paranoid about accessibility trust specifically, because accessibility is how it reads everything.
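A hedged reconstruction of that flow, under our own naming (`AXTrust`, `axTrustState` are illustrative, not the actual `Desktop/Sources/AppState.swift`):

```swift
import AppKit
import ApplicationServices
import CoreGraphics

enum AXTrust { case granted, denied, ambiguous }

func axTrustState() -> AXTrust {
    // Layer 1: probe the frontmost app and switch on the AXError.
    guard let front = NSWorkspace.shared.frontmostApplication else { return .ambiguous }
    let app = AXUIElementCreateApplication(front.processIdentifier)
    var focused: CFTypeRef?
    let err = AXUIElementCopyAttributeValue(app, kAXFocusedUIElementAttribute as CFString, &focused)
    switch err {
    case .success:        return .granted
    case .apiDisabled:    return .denied   // real denial: AX is off for this process
    case .cannotComplete: break            // ambiguous: Qt/OpenGL apps return this even when AX is fine
    default:              break
    }

    // Layer 2: cross-check against Finder, a known AX-compliant app.
    if let finder = NSRunningApplication
        .runningApplications(withBundleIdentifier: "com.apple.finder").first {
        let fx = AXUIElementCreateApplication(finder.processIdentifier)
        var windows: CFTypeRef?
        if AXUIElementCopyAttributeValue(fx, kAXWindowsAttribute as CFString, &windows) == .success {
            return .granted
        }
    }

    // Layer 3: a listen-only event tap only succeeds with accessibility trust,
    // which sidesteps a stale per-process TCC cache.
    let tap = CGEvent.tapCreate(tap: .cgSessionEventTap,
                                place: .tailAppendEventTap,
                                options: .listenOnly,
                                eventsOfInterest: CGEventMask(1 << CGEventType.keyDown.rawValue),
                                callback: { _, _, event, _ in Unmanaged.passUnretained(event) },
                                userInfo: nil)
    if let tap { CFMachPortInvalidate(tap); return .granted }
    return .denied
}
```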
Three things worth noticing. First, the switch on AXError distinguishes real denial (.apiDisabled) from the ambiguous .cannotComplete that Qt and OpenGL apps return even when AX is fine. Second, the Finder cross-check disambiguates those cases by asking a known AX-compliant app the same question. Third, the event-tap probe exists because macOS 26 Tahoe caches TCC decisions per process and the cache can lie. None of that code is necessary for a screenshot-based tool. All of it is necessary when accessibility APIs are the foundation.
The boundary, in numbers
A single desktop request, traced
This is what a Fazm run looks like under the hood when a Notion-resident agent hands off a build-and-report task. Notice the log lines. They are AX event names (AXPress, AXValueChangedNotification), not pixel coordinates.
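The observation half of such a trace can be sketched with the public `AXObserver` API. This is our illustrative sketch, not Fazm's code; the sample log lines in the comment show the shape described above.

```swift
import ApplicationServices

// Subscribe to AXValueChangedNotification on a build-log element so the run
// log records AX event names instead of pixel coordinates.
func watchBuildLog(pid: pid_t, logElement: AXUIElement) {
    var observer: AXObserver?
    let err = AXObserverCreate(pid, { _, _, notification, _ in
        // A real run would append to a structured log, e.g.:
        //   12:04:18 AXPress                     Product > Build
        //   12:04:19 AXValueChangedNotification  build log updated
        print("AX event: \(notification)")
    }, &observer)
    guard err == .success, let observer else { return }

    AXObserverAddNotification(observer, logElement,
                              kAXValueChangedNotification as CFString, nil)
    CFRunLoopAddSource(CFRunLoopGetCurrent(),
                       AXObserverGetRunLoopSource(observer),
                       .defaultMode)
}
```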
Notion AI versus a real macOS layer
| Feature | Notion AI, April 2026 | Fazm |
|---|---|---|
| Sandbox timeout | 30s in Workers for Agents | No wall clock cap, runs locally |
| Memory per task | 128 MB in Workers for Agents | Limited by the user's Mac |
| App reach | Gmail, Google Calendar, Slack (cloud APIs) | Any AppKit / Catalyst / SwiftUI app on the Mac |
| Reads app state via | JSON from cloud APIs | macOS Accessibility API (AXUIElement) |
| Acts on the app via | Cloud API HTTP calls | Synthesized CGEvent and AX actions |
| Plan required | Enterprise for Workers for Agents | Consumer plan is enough |
| Data leaving the Mac | All of it, by design | Only what the LLM call needs |
| Fails on | Apps without a Notion-supported API | Apps that opt out of the AX tree |
Mac apps Fazm drives via AX today
The handoff pattern, step by step
The cleanest pattern is to let Notion AI own the cloud-resident half of a workflow, then hand the desktop-resident half to Fazm through a simple prompt. You do not need to rebuild your Notion setup. You are adding a pair of hands that live on macOS.
Agent plans in Notion
Your Custom Agent reads the brief, calls the Views API, pulls in the linked Gmail thread through the April 17 Mail integration, drafts the next action.
Agent describes the desktop step in English
The output is a sentence a human could follow: 'open Xcode, switch to the Fazm scheme, run the build, paste any error into Slack #eng.' No code, no URL, no API.
Fazm reads it and maps to AX calls
Fazm uses the macOS Accessibility API to locate Xcode's menu bar, press Product then Build, observe AXValueChangedNotification on the build log, and read back the result without touching pixels.
Result flows back to the agent
Fazm returns a short structured report (success, duration, errors). The Notion agent pastes it into the Notion page and updates the status. Loop closed, no Enterprise gate, no sandbox timeout.
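The "short structured report" in the last step could be as small as a `Codable` struct. The field names here are illustrative assumptions, not a published Fazm schema:

```swift
import Foundation

struct HandoffReport: Codable {
    let success: Bool
    let durationSeconds: Double
    let errors: [String]
}

let report = HandoffReport(success: false,
                           durationSeconds: 41.2,
                           errors: ["Build failed: use of unresolved identifier 'foo'"])

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
let json = String(data: try! encoder.encode(report), encoding: .utf8)!
// The agent pastes this JSON into the Notion page and flips the status.
```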
Get started
1. Install Fazm. Download the signed, notarized Mac app from fazm.ai.
2. Grant Accessibility. Fazm requests AX trust during onboarding and verifies it with the three-layer check.
3. Point it at an app. Bring any app to the front and ask Fazm to do something in it.
4. Hand off from Notion. Have your Notion agent describe the desktop step in one sentence, then paste it into Fazm.
Why the architecture matters, not just the feature list
A lot of SEO pages for this keyword are feature lists. They exist because Notion ships a lot of features. They are useful for remembering the name of the thing, and not useful for deciding what to do on Monday. The decision you actually have to make is not “which Notion AI update is best.” It is “what does my workflow need that Notion AI cannot reach, and what should cover that gap.”
The answer, on a Mac, is almost always a tool with real accessibility access. Workers for Agents is genuinely useful for data cleaning on a Notion page. It is not going to open your CAD tool. Smart filters are genuinely useful for finding things inside Notion. They are not going to file your expense report in a desktop accounting app. The April 14 and April 17 releases move Notion AI further into the “planner” role. Something else has to be the doer.
Fazm is trying to be the doer on macOS, and the shape of the implementation makes that explicit. Three layers of AX trust checking. Zero image-classification code in the read path. Event-tap fallback that only makes sense if AX is load bearing. That is the choice, made in Swift, visible in the file.
Bottom line for April 2026
Notion AI got better at planning, better at speaking, and better at sharing. It did not get better at leaving Notion. If your workflow ends at a Notion page, the new features are enough. If it ends at an app living on your Mac, you still need a desktop execution layer, and that is what Fazm is for.
Run a Notion handoff on your own Mac
Ten-minute call. We map a desktop step from your Notion agent to a Fazm run, and you watch it drive your app end to end.
Book a call →

Frequently asked questions
What were the Notion AI updates in April 2026?
Two main releases. On April 14, 2026 Notion 3.4 part 2 introduced Workers for Agents (Enterprise-only sandboxed code execution with a 30 second timeout and a 128 MB memory cap), voice input for AI prompts on the desktop app, AI Meeting Notes triggered from Cmd+K with custom instruction templates, shareable AI chat links as read-only artifacts, and a Views API with eight endpoints plus smart filters. On April 17, 2026 Notion added Mail and Calendar to agent settings, letting Custom Agents take action against those two services. The Custom Agents free trial was extended through May 3, 2026.
What are the actual limits on Workers for Agents?
Workers for Agents is a sandbox, not a general purpose runtime. The published caps are 30 seconds of wall clock execution and 128 MB of memory per invocation. Outbound network calls are restricted by a domain allowlist configured by each workspace's admin. It is Enterprise plan only. For the agent patterns it is meant to cover (data cleaning on a Notion page, formatting a response, calling a preapproved API) the limits are fine. For anything that needs to open an app, drive a desktop UI, or run a long task, Workers for Agents does not reach.
Can the Notion agent click buttons in apps on my Mac?
No. The April 2026 agent integrations with Mail, Calendar, and Slack are cloud API integrations, not desktop automation. The agent talks to Google's Gmail API or Microsoft's Graph API or Slack's Web API over HTTPS. It never opens Mail.app, never moves your cursor, never types into a native text field. If the app you care about does not expose a public API Notion supports, the agent cannot touch it.
What is the gap between Notion AI and a desktop agent?
Notion AI lives inside Notion's web surface and its partner cloud APIs. A desktop agent like Fazm lives inside macOS, on top of real operating system APIs. That difference shows up the moment your workflow touches an app without a usable web API, which on most Macs is most apps. Xcode, Figma, Final Cut, Logic Pro, QuickBooks, FileMaker, legacy in-house software, native Mail, native Messages, Terminal. Notion AI can read what a user pasted into a Notion page about those apps. It cannot drive them.
How is Fazm different from a screenshot-based Mac agent?
Fazm reads the active app through the macOS Accessibility API (the same API VoiceOver uses) and issues synthesized events through CGEvent. It is not looking at pixels. The difference shows up in Fazm's permission-check code, which does three things no screenshot bot would ever need. First it calls AXUIElementCreateApplication on the frontmost app and inspects AXError codes. Second it falls back to a Finder cross-check when macOS returns the ambiguous cannotComplete error. Third it creates a CGEvent tap with CGEvent.tapCreate(.cgSessionEventTap, .tailAppendEventTap, .listenOnly, ...) as a tie-breaker to bypass macOS 26 Tahoe's per-process TCC cache. That entire code path in Desktop/Sources/AppState.swift is only necessary because AX is the primary mechanism. A pixel-reading tool has no reason to probe the AX trust state that carefully.
Does Fazm replace Notion AI or work alongside it?
Alongside. The natural split is research and drafting in Notion AI where its Views API, shareable chats, and knowledge graph are genuinely strong, then execution in Fazm for anything that leaves the browser. An agent reads a spec in Notion, asks Fazm to open Xcode, run a build, check a log, message a teammate in iMessage, save a screenshot to a specific folder, and report back. Neither product is trying to be the other. Most of the April 2026 updates make Notion AI a better planner. Fazm is a better pair of hands on macOS.
Which macOS apps does Fazm work with?
Any AppKit, Catalyst, or SwiftUI app that opts into the standard accessibility tree, which is most Mac apps. That covers Xcode, Pages, Numbers, Keynote, Figma desktop, Notion desktop, Slack desktop, Mail, Messages, Calendar, Finder, Safari, Chrome, Arc, VS Code, Cursor, QuickBooks, and the long tail of native business apps. Apps that render with custom OpenGL or Qt surfaces (some CAD tools, older Python GUIs) advertise as accessible but the tree is sparse, so Fazm falls back to keyboard-driven workflows for those.
Do I need an Enterprise plan for the desktop execution layer?
No. Workers for Agents is Enterprise-only on Notion's side because it runs code in Notion's infrastructure with liability implications. Fazm runs on the user's own Mac under the user's own login, so there is no multi-tenant sandbox to gate. The consumer plan covers the same accessibility-driven automation that the team plan does, and the execution path runs entirely on the local machine.