Claude Code on a Rust + Swift desktop app: the layout we ship at Fazm
Most Claude Code writeups assume one language. Fazm is a macOS app that ships a Rust backend on Cloud Run, a Swift Package executable that runs on the user's Mac, and a Node service that wraps the agent client protocol. One repo. Three toolchains. Two release pipelines. Multiple agents editing it at the same time. This is the working layout that lets the agent move across the language boundary without breaking either side.
Direct answer · verified 2026-05-12
How do you use Claude Code on a Rust + Swift Mac app?
Split the repo into top-level subtrees per language so each native build tool stays at its own root (Backend/ for the Rust crate, Desktop/ for the Swift Package, any glue like acp-bridge/ alongside). Give the agent one wrapper command that builds the local-app subtrees in dependency order behind a directory-based file lock, plus a status file the agent polls before it decides whether to rebuild or just send a test trigger to the running app. Keep two separate release pipelines: GitHub Actions plus Workload Identity Federation for the Linux container, Codemagic for the signed and notarized Mac binary. Verified against github.com/mediar-ai/fazm on 2026-05-12.
The three subtrees and what each one is for
The repo opens to seven or eight folders. Three of them carry weight. The others are scripts, build artifacts, and a docs tree. The three that matter, with the build command each one accepts and the deploy target each one feeds, look like this.
- Backend/ — Rust crate. axum 0.7, tokio, reqwest. Eight route modules. Built with cargo build. Deploys to Cloud Run in us-east5 via GitHub Actions on push to main with a path filter on Backend/**.
- Desktop/ — Swift Package. Package.swift with an executableTarget Fazm depending on PostHog, Sentry, GRDB, Sparkle 2.9, swift-markdown-ui, Firebase, and macos-session-replay. Built with xcrun swift build. Signed and notarized into a DMG plus Sparkle ZIP by Codemagic on a v*-macos tag.
- acp-bridge/ — Node TS module. Wraps @agentclientprotocol/claude-agent-acp 0.29.2 and @zed-industries/codex-acp 0.12.0. Built with npm run build. Embedded into the .app bundle by run.sh and launched as a child process by the Swift app over a local websocket.
The discipline is that each subtree's root holds the one file its build tool insists on (Cargo.toml, Package.swift, package.json). Nothing else lives at those roots that the build tool will misinterpret. When Claude Code opens the repo, it never has to ask which build system is in charge of which directory.
One wrapper, three real builds
The Rust crate has nothing to do with the local desktop app, so it is left out of the wrapper entirely. The wrapper builds the two pieces that ship inside the .app bundle: the ACP bridge first, then the Swift target, because the bridge's compiled JavaScript has to be in place before the Swift app is bundled. Codemagic runs the same shape in CI, just with signing and notarization grafted on. The contract is: an agent that knows it touched Desktop/ or acp-bridge/ runs ./run.sh; an agent that only touched Backend/ never does, because the local app is unaffected.
What the agent reads next is not pgrep. It is cat /tmp/fazm-dev-status. That file is the authoritative answer to "is there a build going, is the app running, did the last build fail." A second agent that wakes up mid-build sees building 47189 1715549170, checks kill -0 47189, finds it alive, waits, and skips its own redundant rebuild.
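To make the decision concrete, here is a rough sketch of the logic that check encodes, written in Rust for illustration. The one-line status format is the one described on this page; the function names and messages are invented.

```rust
use std::{fs, process::Command};

// Sketch of the rebuild-or-trigger decision an agent makes after reading
// /tmp/fazm-dev-status. Format per the article: <state> <pid> <unix_timestamp>.
fn pid_alive(pid: u32) -> bool {
    // Same check as `kill -0 <pid>`: signal 0 tests existence without killing.
    Command::new("kill")
        .arg("-0")
        .arg(pid.to_string())
        .status()
        .map(|s| s.success())
        .unwrap_or(false)
}

fn main() {
    let status = fs::read_to_string("/tmp/fazm-dev-status").unwrap_or_default();
    let mut parts = status.split_whitespace();
    let state = parts.next().unwrap_or("exited");
    let pid = parts.next().and_then(|p| p.parse::<u32>().ok()).unwrap_or(0);

    match (state, pid_alive(pid)) {
        ("running", true) => println!("app is up: send a test trigger, skip the rebuild"),
        ("building", true) => println!("another build holds the lock: wait"),
        _ => println!("no live build or app: run ./run.sh"),
    }
}
```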
Two pipelines because Codemagic and GitHub Actions are good at different jobs
The Rust backend deploys to Cloud Run from a GitHub Actions workflow that authenticates with Workload Identity Federation, so there are no stored service account keys to rotate. The trigger is a push to main with a path filter on Backend/**, which means a Swift-only commit does not consume a deploy slot. Cloud Run runs with --min-instances=2 so Sparkle's daily appcast check and the post-launch /v1/keys burst never hit a cold start.
The Swift desktop app is a different animal. A universal binary needs a real Mac runner, a Developer ID certificate, an Apple notarization round-trip, a DMG, and a Sparkle ZIP. The trigger is pushing a tag of the form v*-macos. Codemagic runs the workflow on an M2 Mac mini, signs with the team's certificate, ships the build through notarytool, cuts a GitHub Release, and Sparkle picks it up from the appcast that lives, of course, on the Rust backend.
One commit can fan out to none, one, or both pipelines
The interesting case is the Swift-only commit: it fires no deploy. That is by design. Tag-based releases are the only way the user's Fazm app changes; everything between tags is dev work that does not ship. Two pipelines, two triggers, no overlap. An agent that just wrote a SwiftUI view does not have to think about Cloud Run.
How the agent actually edits across the boundary
Most cross-language tasks are one of two shapes. Either an existing route grows a field that the Swift caller needs to read, or a new feature wants a new route plus a new caller. Both shapes have the same two failure modes for an agent left to its own devices. It will edit one side, "test" by running the other side, get a confusing error, then go on a tangent inspecting infrastructure. The fix is not a smarter agent. The fix is a short paragraph in CLAUDE.md that names both files for any feature that crosses the seam.
Cross-seam contract (excerpt from CLAUDE.md)
When a feature requires both Rust and Swift changes:
1. Edit the Rust route at Backend/src/routes/<name>.rs
2. Update the Swift caller at Desktop/Sources/<name>Service.swift
3. Run cargo run --release in one terminal
4. Flip APP_BACKEND_URL to http://127.0.0.1:8080 in .env.app
5. ./run.sh, then send a com.fazm.testQuery trigger that exercises the new field
6. Confirm the round-trip in /private/tmp/fazm-dev.log
Do not deploy the Rust change to Cloud Run until the Swift caller ships in a tagged build. Production users never see a half-rolled schema.
This is not exotic. It is the same discipline a senior engineer would write in a wiki. The point is that CLAUDE.md is the wiki the agent reads on every conversation start. Putting the seam contract there moves cross-language reliability from a vibe to a routine.
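To make the first shape concrete, here is a hedged sketch of the Rust half of "a route grows a field". The route path, struct, and values are invented for illustration; they are not Fazm's actual code.

```rust
use axum::{routing::get, Json, Router};
use serde::Serialize;

// Illustrative response type for a hypothetical keys route.
#[derive(Serialize)]
struct KeyResponse {
    key: String,
    expires_at: i64,
    plan: String, // the new field the Swift caller will start reading
}

async fn get_key() -> Json<KeyResponse> {
    Json(KeyResponse {
        key: "k_test".into(),
        expires_at: 0,
        plan: "pro".into(),
    })
}

// Wiring it into a router. Per the CLAUDE.md contract above, this change does
// not deploy to Cloud Run until the matching Swift caller ships in a tagged build.
fn app() -> Router {
    Router::new().route("/v1/keys", get(get_key))
}
```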
Three test seams, one for each subtree
Local cargo test covers the Rust crate. Distributed notifications drive the Swift app without a UI loop. The Node ACP bridge speaks a documented websocket protocol you can poke from the command line with node scripts/probe.js. Three independent seams, no headless browser, no flaky end-to-end suite, and the agent can verify each one in seconds.
None of these require a browser. None of them require a screenshot. Each one matches the language and runtime of the subtree it tests. The agent does not have to context-switch into a different test framework when it crosses into a different folder.
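For the first of those seams, a minimal sketch of what a cargo test against a route can look like, assuming a hypothetical /healthz route: axum routers can be driven in-process with tower's oneshot, no server socket or browser involved.

```rust
#[cfg(test)]
mod tests {
    use axum::{body::Body, http::{Request, StatusCode}, routing::get, Router};
    use tower::ServiceExt; // brings `oneshot` into scope

    #[tokio::test]
    async fn healthz_answers_ok() {
        // Hypothetical route; the real crate would build its actual Router here.
        let app = Router::new().route("/healthz", get(|| async { "ok" }));

        let res = app
            .oneshot(Request::builder().uri("/healthz").body(Body::empty()).unwrap())
            .await
            .unwrap();

        assert_eq!(res.status(), StatusCode::OK);
    }
}
```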
The thing that broke once: Rust async error types
Worth naming because it is the failure mode that catches every multi-agent run. Claude Code is fluent at writing axum handlers but defaults to anyhow::Error as the failure type. axum 0.7 wants the error to implement IntoResponse, and anyhow::Error does not. The compile error is long, the suggested fix is a wrapper struct, and an agent that does not know your project will invent a second AppError type instead of using the existing one.
One line in CLAUDE.md fixes it: "All axum handler errors return AppError defined at Backend/src/routes/mod.rs." It is a tiny piece of context that turns a 90-second compile-error tangent into a zero-second non-event.
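For reference, the shape that line points at is the standard axum wrapper pattern. A minimal version looks like this; the real AppError in Backend/src/routes/mod.rs may differ in what it logs and which status codes it maps.

```rust
use axum::{http::StatusCode, response::{IntoResponse, Response}};

// Wrapper so handlers can keep using `?` with anyhow while still satisfying
// axum's requirement that the error type implements IntoResponse.
pub struct AppError(anyhow::Error);

impl IntoResponse for AppError {
    fn into_response(self) -> Response {
        (StatusCode::INTERNAL_SERVER_ERROR, format!("internal error: {}", self.0))
            .into_response()
    }
}

// Any error convertible into anyhow::Error converts into AppError, so `?` works
// unchanged inside handlers.
impl<E: Into<anyhow::Error>> From<E> for AppError {
    fn from(err: E) -> Self {
        Self(err.into())
    }
}

// A handler then returns Result<_, AppError> instead of Result<_, anyhow::Error>:
// async fn route() -> Result<axum::Json<Payload>, AppError> { ... }
```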
What this layout is not
- It is not a monorepo in the Bazel sense. There is no shared build graph, no cross-crate caching, no remote execution. Each subtree builds with its own native tool. The cost of unifying the build was higher than the cost of two pipelines.
- It is not Tauri or Electron. The Swift app is a real native macOS binary built with SwiftPM, not a web view wrapping a Rust backend. The Rust code lives on a server because that is where it does work the Mac app cannot do (Stripe webhooks, Sparkle appcast hosting, Vertex proxying), not because the UI is built in Rust.
- It is not an FFI bridge. There is no swift-bridge, no UniFFI, no .h header generated from Rust. The Swift app talks to the Rust backend over HTTPS like any other client. If we ever needed a real native FFI layer we would add a fourth subtree for it, not bend the existing two.
- It is not generic enough to drop into any project. It is the layout that fits this product. The shape of your hybrid app may want a different split. The principle (one tool per subtree, one wrapper per workflow, one status file per running process) generalizes; the directory names do not.
Want to see this layout in a real running repo?
Twenty minutes, screen share, walk through the actual files and the multi-agent loop. No pitch, just the code.
Frequently asked questions
Why split the repo by language at all instead of letting Claude Code work across one mixed tree?
Because cargo and swift refuse to share a working directory cleanly. cargo wants Cargo.toml at the crate root and writes target/ next to it. Swift Package Manager wants Package.swift at the package root and writes .build/ next to it. The Codemagic build script and the GitHub Actions Cloud Run deploy each have a single source path they need to point at. If you bury the Rust crate three levels under a Swift target, every tool fights you, and Claude Code spends tool calls navigating instead of editing. Top-level Backend/, Desktop/, and acp-bridge/ keep each tool happy, keep the deploy paths trivial, and keep the agent's reasoning local to one language at a time.
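Roughly, that is the tree this page keeps referring to. The file names are the ones cited in the answers here; the real repo has more alongside them.

```
fazm/
├── Backend/            # Cargo.toml at this root; cargo writes target/ here
│   └── src/routes/
├── Desktop/            # Package.swift at this root; SwiftPM writes .build/ here
│   └── Sources/
├── acp-bridge/         # package.json; npm run build output embedded by run.sh
├── scripts/
│   └── fazm-lock.sh
├── .github/workflows/deploy-backend.yml
├── codemagic.yaml
├── run.sh
└── CLAUDE.md
```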
Why not unify on a single build system like Bazel?
We tried it on a previous project. Bazel works once you accept that you now have a third language (Starlark) and a third toolchain to maintain, and that every dependency upgrade has to be re-pinned in WORKSPACE. With Claude Code in the loop, Bazel adds a layer of indirection the agent constantly has to translate through. The cost of running cargo and swift natively, then gluing them in a 50-line shell script, was lower than the cost of a meta build system. Run the native tool that ships with the language, then orchestrate.
What does ./run.sh actually do that Claude Code cannot just do itself?
It acquires /tmp/fazm-build.lock with mkdir, kills any prior 'Fazm Dev' process, builds the ACP bridge with npm, builds the Swift target with xcrun swift build, copies bundled resources into the .app bundle, launches the app, and writes the lifecycle state to /tmp/fazm-dev-status. Claude Code can run any of those individually but the order matters and the lock matters. With multiple agents on the same repo (which is normal), running the steps directly produces partial bundles, dueling builds, and stale processes. The wrapper enforces that there is exactly one build at a time and exactly one running app. The Rust backend is not part of run.sh because it deploys separately to Cloud Run, not into the local app bundle.
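The load-bearing pieces are the lock and the status file. Here is that discipline sketched in Rust purely for illustration; the real run.sh is a shell script, and the actual build steps are elided.

```rust
use std::{fs, path::Path, process, time::{SystemTime, UNIX_EPOCH}};

// Illustrative rendering of run.sh's lock-and-status discipline.
// Paths and the status format come from the article; everything else is a stand-in.
fn acquire_lock(lock_dir: &Path) -> bool {
    // mkdir is atomic, so exactly one concurrent caller gets to build.
    fs::create_dir(lock_dir).is_ok()
}

fn write_status(state: &str, pid: u32) -> std::io::Result<()> {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    fs::write("/tmp/fazm-dev-status", format!("{state} {pid} {now}\n"))
}

fn main() -> std::io::Result<()> {
    let lock = Path::new("/tmp/fazm-build.lock");
    if !acquire_lock(lock) {
        eprintln!("another build holds the lock; not starting a second one");
        return Ok(());
    }
    write_status("building", process::id())?;

    // ... kill any prior 'Fazm Dev', npm run build, xcrun swift build,
    // copy resources into the .app bundle, launch the app ...
    let app_pid = process::id(); // stand-in: the real script records the launched app's PID

    write_status("running", app_pid)?;
    fs::remove_dir(lock)
}
```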
How does Claude Code know whether to rebuild or just send a test trigger to the running app?
It reads /tmp/fazm-dev-status before doing anything. The file is one line: <state> <pid> <unix_timestamp>, with state in {building, running, exited, failed}. If state is running and kill -0 on the PID succeeds, the agent skips the build and goes straight to a distributed notification (com.fazm.testQuery, com.fazm.control). If state is building and the holder is alive, the agent waits. If state is exited or the holder is dead, the agent runs run.sh. The status file is the difference between agents that cooperate and agents that thrash.
What goes in the Rust backend and what goes in the Swift app?
Rust handles things that need to live behind a stable HTTPS endpoint with predictable cold-start behavior: appcast hosting for Sparkle, Stripe webhooks, Vertex AI proxying, signed key issuance, license validation, session-recording uploads. Swift handles everything that touches the user's screen, microphone, accessibility tree, or local SQLite. The split is not philosophical, it is operational: anything where 'one bug ships to every user immediately' has to live server-side so we can fix it without a Sparkle update, anything where 'we need the user's logged-in browser cookies' has to live in the Swift app because that data never leaves the device.
Why deploy Rust through GitHub Actions but Swift through Codemagic?
Different constraints. The Rust backend is a Linux container deploying to Cloud Run, which is what GitHub Actions plus Workload Identity Federation is built for: no stored keys, push to main with a path filter on Backend/**, gcloud reads OIDC and deploys. The Swift app is a universal macOS binary that has to be signed with a Developer ID certificate, notarized by Apple, packaged as a DMG, and have a Sparkle ZIP cut for auto-update. That requires a real Mac runner, an Apple developer account, certificate management, and a ten-minute notarization round-trip. Codemagic is built for that and GitHub Actions Mac runners are not. Different pipelines because different jobs.
Where does Claude Code actually trip up on the language boundary?
Three places. First, Rust async error types: it will write a route that returns Result<Json<T>, anyhow::Error> when axum wants Result<Json<T>, AppError> with an IntoResponse impl. Fix is to keep an AppError type at the top of the routes module and reference it in CLAUDE.md. Second, Swift concurrency: it will mark a SwiftUI view's @State mutation @MainActor when the surrounding view body already runs on the main actor, producing a redundant-isolation warning. Tell it once. Third, when a Swift change requires a Rust API change, it will edit the Swift call site and wait for the Rust route to magically update. The fix is to give it the seam contract explicitly: 'the Rust route is at Backend/src/routes/<x>.rs, the Swift caller is at Desktop/Sources/<y>.swift, edit both before testing.'
Is there a way to test changes that span Rust and Swift without deploying the backend?
Yes, point the Swift app at a local cargo run --release. Every Swift call site that talks to the backend reads its base URL from .env.app, so flipping it to http://127.0.0.1:8080 makes the running Fazm Dev hit your local Rust process. Then you can iterate on a Rust route and a Swift caller in the same minute. The acp-bridge sits over a local websocket on 127.0.0.1, so it is already local and never round-trips to a remote.
Is the Fazm repo public so I can read the actual files?
Yes. github.com/mediar-ai/fazm is the public repository. Backend/Cargo.toml, Desktop/Package.swift, acp-bridge/package.json, run.sh, scripts/fazm-lock.sh, .github/workflows/deploy-backend.yml, codemagic.yaml, and CLAUDE.md are all there. Every claim on this page is checkable against a specific file.
Related guides on the same workflow
Build a macOS app with Claude Code
The Swift-only side of this story: SPM instead of .xcodeproj, the one-command build, the status file, distributed notification test triggers, and accessibility-API verification.
Parallel Claude Code agents and file ownership
Folder ownership is not enough. Here is the 144-line lock script Fazm ships in scripts/fazm-lock.sh, with the idle-window check and the stale-PID detection that make it work.
Custom Claude Code skills workflow
How a single bundled-skill folder convention lets the agent learn project-specific verbs once and use them on every future task.