From the Mac side of a Pi 5 desk

The Pi 5 news cycle, generated on your Mac by an 856-line deep-research skill bundled inside the app

Most pages on the Pi 5 cycle are static rundowns that decay the moment they render. Fazm bundles an 856-line skill at Desktop/Sources/BundledSkills/deep-research.skill.md plus a deliberately tiny web-scraping.skill.md (58 lines, third-smallest in the bundle), composes them at runtime, and runs the whole thing from Cmd+Shift+Space. The 8-phase pipeline, 5 to 10 parallel WebSearches, 3 to 5 parallel Task agents, and the DOI-resolving citation gate apply to Pi 5 board SKUs, Pi OS Bookworm point releases, AI Kit firmware, and Compute Module 5 deliveries the same way they apply to anything else. Three files land in a dated folder under your Documents.

deep-research.skill.md (856) · web-scraping.skill.md (58) · SkillInstaller.swift · ~/Documents/Raspberry_Pi_5_News_Research_[YYYYMMDD]/
Matthew Diakonov
10 min read
4.8 from early-access users
17 .skill.md files in the bundle, auto-installed to ~/.claude/skills/ on first launch
8-phase pipeline with 5-10 parallel WebSearches and 3-5 parallel Task agents
Reports written to ~/Documents/[Topic]_Research_[YYYYMMDD]/ as MD + HTML + PDF

The angle of this page

Every other guide on this topic is a curated list of board SKUs, OS point releases, AI accelerator firmware, and forum highlights from the week the writer happened to publish. The list goes stale the moment it is rendered, the linked vendor changelogs start to 404, and there is no runtime that re-executes when you want this week's rundown.

Fazm ships a different shape. It is a consumer Mac app that bundles a 17-file skill library. The largest is a deep-research pipeline written as an 856-line markdown spec. The third-smallest is a 58-line web-scraping reference card. The two compose at runtime: the deep-research skill provides the 8-phase pipeline, the parallel search budget, and the citation verifier; the web-scraping skill is pulled in only when a vendor changelog or a forum post needs structured extraction past the WebSearch summary.

When you press Cmd+Shift+Space and ask “what is new with Raspberry Pi 5 this week”, the floating bar agent loads both skills, decomposes the query into 5 to 10 angles, fires every search in parallel, runs three to five parallel research agents, validates every citation against a DOI resolver, and writes an MD, HTML, and PDF report into a dated folder in your Documents. The paths are real, the line counts are real, and the pipeline is what you get on a default install.

Cmd+Shift+Space to a folder in Documents

Inputs, hub, outputs

Cmd+Shift+Space
find-skills.skill.md
ChatProvider.swift
8-phase pipeline + thin scraper
5-10 parallel WebSearches
3-5 parallel Task agents
verify_citations.py
~/Documents/Pi5_News_Research_[YYYYMMDD]/

The hub is the composition of two files: ~/.claude/skills/deep-research/SKILL.md (856 lines) and ~/.claude/skills/web-scraping/SKILL.md (58 lines). Both are copied out of the app bundle on first launch by SkillInstaller.swift and re-copied whenever the bundled file's SHA-256 differs from the on-disk copy.
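That check-and-copy loop is simple enough to sketch. Below is a hedged Python sketch of the idea; the function names and layout are illustrative, since the real logic is Swift inside SkillInstaller.swift:

```python
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hex digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def seed_skill(bundled: Path, installed: Path) -> bool:
    """Copy the bundled skill forward when the on-disk copy is
    missing or its checksum differs. Returns True if a copy happened."""
    if installed.exists() and sha256(installed) == sha256(bundled):
        return False  # on-disk copy already matches the bundle
    installed.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(bundled, installed)
    return True
```

The consequence the article points at falls out of the checksum compare: rebuilding the binary with an edited skill file changes the digest, so the .app reseeds ~/.claude/skills/ on next launch without any separate update path.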

The bundle, by line count

Seventeen .skill.md files ship inside Fazm.app at Desktop/Sources/BundledSkills/. The deep-research skill is the largest, 60% longer than the next entry. Total bundled skill content is 4,523 lines. web-scraping is intentionally near the bottom because the orchestration belongs upstairs in deep-research. Every file in the marquee below ships with the binary.

deep-research (856)
travel-planner (534)
docx (481)
doc-coauthoring (375)
pdf (314)
social-autoposter (302)
xlsx (291)
social-autoposter-setup (274)
pptx (232)
telegram (203)
find-skills (148)
google-workspace-setup (132)
canvas-design (129)
video-edit (105)
web-scraping (58)
ai-browser-profile (47)
frontend-design (42)
17 skill files in the bundle
856 lines in deep-research.skill.md
58 lines in web-scraping.skill.md
4,523 lines of total skill content shipped

The web-scraping skill is intentionally tiny because the heavy lifting (mode selection, parallel search budget, citation verification, packaging) lives in deep-research.skill.md right next to it. Composition is the design.

Counted at /Users/matthewdi/fazm/Desktop/Sources/BundledSkills/ on April 27, 2026

The Pi 5 news beats, as bento cards

Each card below is one of the search angles the agent decomposes the query into during Phase 3 RETRIEVE. The decomposition is generated at runtime, not hardcoded; deep-research.skill.md tells the agent how to decompose any topic into 5 to 10 angles, and for a Pi 5 cycle the angles converge on the six beats below.

Board variants

The Pi 5 family keeps adding SKUs. Memory tiers, package revisions, retail allocations. Search angle decomposes to 'Pi 5 16GB availability', 'Pi 5 SKU revision changelog', 'Pi 5 retailer stock reports'. Standard mode pulls 4 to 6 sources for this beat alone.

Pi OS releases (Bookworm point releases)

Image hashes, kernel rev, key apt diffs. Search angle decomposes to 'Raspberry Pi OS Bookworm changelog', 'Pi OS image SHA256', 'apt diff between point releases'. The skill enforces 3+ independent sources per claim before TRIANGULATE passes.

AI accelerators (AI HAT, AI Kit)

Hailo-8 / Hailo-8L firmware drops, model zoo updates, libcamera integration patches. The web-scraping.skill.md gets pulled in here when a vendor changelog needs structured extraction past the WebSearch summary.

Compute Module 5

CM5 IO Board availability, eMMC SKUs, compatibility matrix with carrier boards. Distinct from consumer Pi 5; deserves its own search angle in Phase 3 to avoid bleed.

Kernel mainline + firmware

rpi-update, bootloader EEPROM bumps, kernel.org submissions for Pi-specific drivers. Phase 4 TRIANGULATE often downgrades single-thread forum claims to inference here.

Community + partner vendors

Pimoroni, Adafruit, The Pi Hut, OKdo product launches; community projects on the Pi forum and r/raspberry_pi. Useful but lower credibility weight; Phase 4 routinely flags one-source claims from this beat.
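The triangulation rule these cards keep citing (3+ independent sources per claim, single-thread claims downgraded) reduces to a small function. A hedged sketch, using source domains as a stand-in for "independent sources"; the names and data shapes are assumptions, not the skill's internals:

```python
def triangulate(claims: dict[str, set[str]], floor: int = 3) -> dict[str, str]:
    """Phase 4 sketch: a claim passes only when it is backed by at
    least `floor` independent source domains; anything thinner is
    downgraded to inference rather than stated as fact."""
    verdicts = {}
    for claim, domains in claims.items():
        verdicts[claim] = "verified" if len(domains) >= floor else "inference"
    return verdicts
```

Run against a Pi 5 beat, a vendor announcement mirrored by two outlets passes, while a lone forum thread about a CM5 delay stays labeled inference in the report.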

Why the web-scraping skill is 58 lines, verbatim

A small file that does one job is the whole point. The web-scraping skill is a reference card: which library to reach for in which situation, plus the always-on rules (rate limit, robots.txt, retry, dedup). It does not select modes. It does not budget parallelism. It does not verify citations. Those belong in the 856-line skill that loads alongside it. The opening of the file, verbatim, is below.

BundledSkills/web-scraping.skill.md (lines 1-30 of 58)

What Phase 3 looks like for this exact topic

Below is what the agent does in a single message when the query is a Pi 5 cycle question. Eight WebSearches plus four Task agents fire at once. The decomposition is generated by the skill at runtime; the angles below are not hardcoded. The skill names the wrong pattern (sequential execution) and marks it not allowed. For a news topic, parallelism is the difference between a five-minute report and a thirty-minute report.

Phase 3 RETRIEVE, decomposed for the Pi 5 news cycle
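The allowed shape versus the forbidden one is easiest to see as code. A minimal asyncio sketch of the fan-out, assuming a hypothetical injected run_query tool; the angle strings are illustrative, since the real decomposition is generated at runtime:

```python
import asyncio

ANGLES = [
    "Pi 5 16GB availability",
    "Raspberry Pi OS Bookworm changelog",
    "AI HAT Hailo-8 firmware release",
    "Compute Module 5 IO Board stock",
    "Pi 5 bootloader EEPROM update",
    "r/raspberry_pi weekly highlights",
    "Pimoroni and Adafruit Pi 5 accessories",
    "Pi 5 retailer stock reports",
]

async def search(angle: str, run_query) -> tuple[str, list[str]]:
    """One WebSearch-shaped call; run_query is the injected tool."""
    return angle, await run_query(angle)

async def retrieve(run_query) -> dict[str, list[str]]:
    """The allowed shape: every angle in flight at once.
    The forbidden shape would await each search in turn."""
    results = await asyncio.gather(*(search(a, run_query) for a in ANGLES))
    return dict(results)
```

With eight angles at, say, 30 seconds each, the gather finishes in roughly the time of the slowest one; the sequential version pays for all eight back to back, which is the five-minute versus thirty-minute gap the skill is guarding against.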

The five steps from keystroke to dated folder

The path is short on purpose. The user types one query. The agent announces one mode line. Phase 3 fires once. The verify gate runs once. Three files land. No browser tab to keep open, no subscription to renew, no aggregator that decides what counts.

From Cmd+Shift+Space to ~/Documents

  1. Cmd+Shift+Space

     Floating bar opens. You type 'what is new with Raspberry Pi 5 in the last week, with board SKUs, OS releases, AI Kit firmware, and Compute Module 5'.

  2. Skill match

     find-skills.skill.md matches 'analyze trends' / 'comprehensive analysis' against deep-research.skill.md and pulls it into context. web-scraping.skill.md is on standby.

  3. Mode announce

     Agent prints one line: 'Starting standard mode research, 5-10 min, 15-30 sources'. No approval needed.

  4. Phase 3 parallel

     8 WebSearches + 4 Task agents fire in a single message. Triangulation runs as results stream in. Outline refinement adapts the report shape if a beat dominates.

  5. Verify + package

     verify_citations.py resolves DOIs, validate_report.py runs 8 checks. Three files land in ~/Documents/Raspberry_Pi_5_News_Research_[YYYYMMDD]/ and the HTML+PDF open automatically.

What a real run looks like in the terminal

The skill writes its own progress trace as it executes. Below is the shape of a Standard-mode run for a Pi 5 cycle topic. The agent announces the mode, fires the parallel batch, runs the verify gate, and lands three files in a dated folder under your Documents.

Fazm floating bar - Pi 5 news run

Static rundown vs runtime on your Mac

The clearest way to see why the runtime matters is to put a normal static rundown next to a re-run on your Mac. Same question, two shapes of answer.

Same question, two shapes of answer

You open a Pi 5 news page someone wrote three days ago. Some of the linked vendor changelogs already 404. The community forum thread it cites has moved. The board SKU section was right last week and is wrong now. There is no way to ask 'what changed in the last 24 hours'. You either reload the same stale page or open a chatbot and watch it summarise nothing in two paragraphs.

  • Stale at render time
  • 1-3 unverified URLs
  • No re-execution
  • No DOI resolution
  • No way to ask for fresher

Comparison: hosted rundown vs bundled skill on your Mac

Most existing playbooks for the Pi 5 cycle live on a hosted page that does not re-execute when you reload. The table below contrasts that pattern with what you get when the runtime ships inside the app.

| Feature | Static rundown / chatbot paragraph | Fazm + bundled deep-research + web-scraping |
| --- | --- | --- |
| Where the Pi 5 rundown is generated | On a hosted page picked for everybody and frozen at render time | On your Mac via Cmd+Shift+Space, in a folder timestamped to today, with markdown + HTML + PDF you can re-open or hand off |
| Source count per report | 1-3 cited URLs, often broken, no DOI resolution | Standard mode floor: 15 to 30 sources, hard minimum of 10 enforced by validate_report.py |
| Topic decomposition | Whatever the writer felt like covering | 5 to 10 search angles generated from your query at runtime, all fired in parallel in one message |
| Vendor changelog scraping | Either skipped or pasted as a quote | web-scraping.skill.md (58 lines) is pulled in by the agent when a vendor changelog needs structured extraction past the WebSearch summary |
| Citation verification | None; hallucinated DOIs ship | verify_citations.py resolves DOIs and flags suspicious entries (2024+ no DOI, no URL, failed resolution) before the report is written |
| Re-execution | Read once, decay forever | Re-run tomorrow with the same query to get a fresh report against the next 24 hours of the cycle |
| Output format | Inline web copy, ad-supported, locked to the publisher's domain | Three files (MD, HTML, PDF) on your disk under ~/Documents/, openable forever |

The verify gate is what makes this honest

The skill is explicit about a two-strike policy at the validate stage. Auto-fix once. Manual correction once. After two failures, stop and surface the issues rather than ship a flawed report. That line in the skill body is what keeps a Pi 5 news run from fabricating SKU revisions or AI Kit firmware versions that never existed.

The two scripts referenced are scripts/verify_citations.py (DOI resolution, title/year matching, suspicious-entry flagging) and scripts/validate_report.py (eight automated checks: summary length, required sections, citation formatting, bibliography match, no placeholder text, word count range, source minimum, no broken internal links). Both run before the markdown is moved out of staging. The reader sees only what passed both gates.
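A few of those checks, plus the two-strike wrapper, can be sketched in Python. This is a hedged illustration of the gate's logic, not the contents of validate_report.py; the regexes and issue strings are assumptions:

```python
import re

def check_report(text: str) -> list[str]:
    """A handful of the eight gate checks, sketched."""
    issues = []
    words = len(text.split())
    if not 500 <= words <= 10_000:
        issues.append(f"word count {words} outside 500-10000")
    cited = set(re.findall(r"\[(\d+)\](?!:)", text))        # in-text [n]
    listed = set(re.findall(r"^\[(\d+)\]:", text, re.M))    # bibliography [n]:
    if cited - listed:
        issues.append(f"citations with no bibliography entry: {sorted(cited - listed)}")
    if len(listed) < 10:
        issues.append(f"only {len(listed)} sources, minimum is 10")
    if re.search(r"\bTODO\b|\bTBD\b|lorem ipsum", text, re.I):
        issues.append("placeholder text present")
    return issues

def two_strike_gate(text: str, auto_fix, manual_fix) -> tuple[bool, list[str]]:
    """Auto-fix once, manual correction once, then stop and surface."""
    for fixer in (auto_fix, manual_fix):
        issues = check_report(text)
        if not issues:
            return True, []
        text = fixer(text, issues)
    issues = check_report(text)
    return (not issues), issues
```

The structural point survives the simplification: the gate returns its remaining issues instead of silently shipping, which is the behavior the two-strike clause in the skill body demands.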

What this guide does not tell you to do

It does not tell you to subscribe to a newsletter, pin a tab to a curated list, or trust a chatbot paragraph that decays the moment you click away. The premise is that the runtime should sit on your Mac, the citations should resolve, and the artifact should be something you can re-open in a year and still trust.

The fact that the runtime is two bundled markdown files plus two Python verify scripts, not a server you do not control, is the whole point. You can read deep-research.skill.md and web-scraping.skill.md the way you would read any other file. You can grep them. You can fork them. You can hand the output to someone who is allowed to be skeptical, because every Pi 5 SKU, every Bookworm point release, every AI Kit firmware version is grounded against a citation that was actually resolved before the report was written.

Want this Pi 5 news pipeline running on your Mac by tomorrow morning?

A 15-minute call walks you through the install, the floating-bar trigger, and the dated folder in your Documents.

Frequently asked questions

Why is web-scraping.skill.md only 58 lines when scraping a fast-moving news cycle like Pi 5 sounds like a heavy job?

Because the heavy lifting is not in the scraping skill. It is in deep-research.skill.md, which is 856 lines in the same directory at /Users/matthewdi/fazm/Desktop/Sources/BundledSkills/. The web-scraping skill is a thin reference card listing the libraries it expects (requests, BeautifulSoup, lxml for static; Selenium, Playwright, Puppeteer/pyppeteer for dynamic; Scrapy, jina, firecrawl for scale; agentQL, multion for complex flows) plus rate-limit, robots.txt, retry, and dedup notes. Composition is the design: when you ask Fazm for a Pi 5 news rundown, the agent loads deep-research first for the 8-phase pipeline (Scope, Plan, Retrieve, Triangulate, Synthesize, Critique, Refine, Package), and pulls web-scraping in only when a forum post or vendor changelog needs structured extraction. Two skills, one runtime, fixed line counts, no opaque server.

Where exactly do these skills live in the Fazm source tree, and how do they reach my disk?

All 17 .skill.md files ship inside Fazm.app at Desktop/Sources/BundledSkills/. SkillInstaller.swift in the same Sources tree iterates the bundle's Resources/BundledSkills directory at launch, computes a SHA-256 over each file, compares it against the on-disk copy at ~/.claude/skills/<name>/SKILL.md, and copies the bundled version forward when the checksum has changed. So deep-research.skill.md becomes ~/.claude/skills/deep-research/SKILL.md the first time you run the app, and web-scraping.skill.md becomes ~/.claude/skills/web-scraping/SKILL.md beside it. Skill content can change without an App Store push: rebuild the binary, ship, the .app reseeds on next launch.

What does a real Pi 5 news run look like when you press Cmd+Shift+Space and ask for it?

The floating bar agent matches the phrase against the description front-matter of every skill it finds in ~/.claude/skills/. 'comprehensive analysis', 'analyze trends', and 'research report' all hit deep-research. The agent announces the mode in one line ('Starting standard mode research, 5-10 min, 15-30 sources'), then in a single message decomposes the query into 5 to 10 search angles (Pi 5 board variants, Pi OS Bookworm changelogs, AI HAT and AI Kit firmware, Compute Module 5 availability, kernel mainline patches, community/forum signals, vendor partner announcements, retail/stock reports) and fires every WebSearch and 3-5 parallel Task agents at once. Phase 4 TRIANGULATE requires 3+ independent sources per claim. Phase 8 PACKAGE writes three files under ~/Documents/[Topic]_Research_[YYYYMMDD]/. Verify gate runs python scripts/verify_citations.py and python scripts/validate_report.py before the markdown is moved out of staging.

What is the actual install footprint on first launch?

Fazm checks ~/.claude/skills/ exists, creates it if not, then iterates the bundled .skill.md files. Each becomes a directory. The 17 files in the bundle total 4,523 lines of skill content: deep-research at 856 lines, travel-planner at 534, docx at 481, doc-coauthoring at 375, pdf at 314, social-autoposter at 302, xlsx at 291, social-autoposter-setup at 274, pptx at 232, telegram at 203, find-skills at 148, google-workspace-setup at 132, canvas-design at 129, video-edit at 105, web-scraping at 58, ai-browser-profile at 47, frontend-design at 42. Reference subfolders (reference/methodology.md, templates/report_template.md, templates/mckinsey_report_template.html, scripts/verify_citations.py, scripts/validate_report.py) are written alongside SKILL.md when present. The category headings on the onboarding screen are Personal, Productivity, Documents, Creation, Research and Planning, Social Media, Discovery; deep-research and web-scraping land under Research and Planning along with travel-planner.

Why use a deep-research skill on a Mac to track Pi 5 news instead of a regular news site or a chatbot?

Verifiability and re-execution. A static news site captures one snapshot of the Pi 5 cycle and decays the moment you click away. A chatbot collapses the cycle into a paragraph, optionally cites two URLs, and never resolves them. The bundled skill writes a 4,000+ word report (Standard mode floor) backed by 15 to 30 sources (hard minimum 10 enforced by validate_report.py), every factual claim grounded against a numbered citation that has been DOI-resolved by verify_citations.py. The output is three files (markdown, HTML, PDF) in a dated folder you own forever. You can re-run the same query tomorrow and get a fresh report against the next 24 hours of the cycle, with the same anti-hallucination protocol applied.

What does the 8-phase pipeline actually skip in Standard mode for a topic like Pi 5 news?

Phases 6 (CRITIQUE) and 7 (REFINE) are reserved for Deep and UltraDeep modes. Standard runs Phases 1, 2, 3, 4, 4.5, 5, 8 plus the verify gate. For a hardware/community news cycle that is the right cut: Phase 4 TRIANGULATE catches one-source claims that look like leaks, Phase 4.5 OUTLINE REFINEMENT (the WebWeaver 2025 pattern) lets the report restructure when, say, an AI Kit firmware drop turns out to be the lead story instead of board variants. Phase 5 SYNTHESIZE writes prose, not bullets ('Bullets fragment thinking' is in the body of the file). If you need red-team scrutiny on a specific claim ('did this CM5 SKU actually ship'), you switch to Deep with one phrase ('compare in depth') and Phases 6 and 7 turn on.

How does the parallel-search rule speed up a Pi 5 news run in practice?

The skill is explicit: 'Phase 3 RETRIEVE - Mandatory Parallel Search'. It tells the agent to decompose into 5 to 10 angles before any tool call, then launch all of them in a single message. It even shows the wrong shape ('WebSearch #1 -> wait -> WebSearch #2 -> wait -> WebSearch #3') marked as not allowed. For Pi 5 news, that decomposition usually splits into core boards, OS releases, AI accelerators, Compute Module deliveries, kernel patches, community forums, partner vendor (Pimoroni, Adafruit, The Pi Hut, OKdo) announcements, and retail availability. Eight WebSearches plus three to five Task agents fire simultaneously. On Standard mode the whole pipeline lands in 5 to 10 minutes; sequential execution would balloon that to 30+.

What is the verify gate doing that a chatbot answer is not?

Two scripts in sequence. python scripts/verify_citations.py --report [path] resolves DOIs, matches title and year metadata, and flags suspicious entries (any 2024+ citation without a DOI, no URL, or a failed resolution). Fabricated citations are surfaced before they reach the report. Then python scripts/validate_report.py --report [path] runs eight automated checks: executive summary length 50 to 250 words, required sections present, citations formatted [1] [2] [3], bibliography matches in-text citations, no placeholder text, word count between 500 and 10,000, minimum 10 sources, no broken internal links. Two-strike policy on failure: auto-fix once, manual correction once, then stop and surface the issues rather than ship a flawed report. A chatbot will pretend; the verify gate is told, in writing, not to.

Where do the report files actually land for a Pi 5 news run?

Under ~/Documents/[TopicName]_Research_[YYYYMMDD]/. The topic name is extracted from the question and slugified with underscores or CamelCase. For 'raspberry pi 5 news this week' you get something like ~/Documents/Raspberry_Pi_5_News_Research_20260427/. Inside that folder, three files share the same base name: research_report_20260427_pi5_news.md (primary source), research_report_20260427_pi5_news.html (rendered with the McKinsey-style template at ./templates/mckinsey_report_template.html), and research_report_20260427_pi5_news.pdf (printable). HTML and PDF open automatically in their default applications. A copy of the markdown also lands at ~/.claude/research_output/ for internal tracking. Every artifact is on your disk; every citation is resolvable; every claim is checkable.
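The slugify-and-date step is mechanical enough to sketch. A hedged Python version of the folder naming; the stopword list and capitalization scheme are assumptions about how the agent extracts the topic, not the skill's actual rules:

```python
import re
from datetime import date
from pathlib import Path

def report_folder(question: str, home: Path, today: date) -> Path:
    """Build the ~/Documents/[Topic]_Research_[YYYYMMDD] path from a
    free-text question: drop filler words, underscore-join the rest,
    and date-stamp the folder."""
    words = re.findall(r"[A-Za-z0-9]+", question)
    stop = {"what", "is", "new", "with", "this", "the", "in", "a", "week"}
    topic = "_".join(w.capitalize() for w in words if w.lower() not in stop)
    stamp = today.strftime("%Y%m%d")
    return home / "Documents" / f"{topic}_Research_{stamp}"
```

Because the date is baked into the folder name, tomorrow's re-run of the same query lands beside today's instead of overwriting it, which is what makes the re-execution story in the comparison table work.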

Does Fazm actually need to be running on macOS specifically, or could I run the same pipeline elsewhere?

Fazm is a consumer Mac app. The floating bar lives in Cmd+Shift+Space, the AppKit accessibility hooks are macOS-only, and the bundle is a .app. The bundled skills themselves are plain markdown plus Python scripts; in principle they work anywhere Claude Code or a similar runner can read SKILL.md. The reason this page is about Mac users is the integration: SkillInstaller.swift seeds ~/.claude/skills/ from the .app bundle, the agent loads the same skills the floating bar loads, and the verify scripts run in your local shell. The whole loop is Mac-native, which is why a Mac user with a Pi 5 on the desk is the natural reader of this guide.