Deploying 9 Cloudflare Workers in Parallel with Git Worktrees and AI Agents
Managing nine Cloudflare Workers used to be a serial process. Update one, test it, deploy it, move to the next. A full deployment cycle took the better part of a day.
Now each Worker gets its own git worktree and its own agent, and all nine deploy in parallel. The same deployment cycle takes under 15 minutes.
The Problem with Serial Worker Deployment
Cloudflare Workers in a monorepo share the same git working directory. When you want to update Worker 3, you are working in a directory that also contains Workers 1, 2, 4-9. Any changes you make - even on a separate branch - are visible to other processes using the same checkout.
When a single developer does serial deployments, this is fine. When multiple agents are updating different Workers simultaneously, the shared working directory creates chaos:
- Agent A checks out branch deploy/worker-3 while Agent B simultaneously checks out deploy/worker-7 - the checkout destroys Agent A's changes
- wrangler.toml configurations bleed between Workers when the working directory is not isolated
- Test runs for Worker 3 pick up uncommitted changes from Agent B's work on Worker 7
The fundamental problem: git checkout changes the files on disk. Multiple agents sharing a checkout, even on different branches, cannot work simultaneously without stomping on each other.
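The rewrite-on-checkout behavior is easy to see in a throwaway repo. A minimal sketch (branch names and file contents are illustrative):

```shell
# Throwaway demo: `git checkout` rewrites the files on disk, so two
# agents sharing one checkout cannot sit on different branches at once.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name demo
echo "worker-3 state" > wrangler.toml
git add . && git commit -qm "worker-3 config"
git branch deploy/worker-3                   # Agent A's branch
git checkout -qb deploy/worker-7             # Agent B's branch
echo "worker-7 state" > wrangler.toml
git add . && git commit -qm "worker-7 config"
git checkout -q deploy/worker-3              # Agent A switches back...
cat wrangler.toml                            # ...and Agent B's version is gone from disk
```

Both branches exist, but only one can occupy the working directory at a time - that single slot is what worktrees multiply.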
The Worktree Solution
Git worktrees solve this by giving each agent a completely separate directory on disk, each at its own branch, while sharing the same underlying git object store.
# Create nine worktrees for nine Workers
for i in $(seq 1 9); do
  git worktree add ../worker-$i -b deploy/worker-$i
done
# Verify the setup
git worktree list
# /path/to/repo abc1234 [main]
# /path/to/worker-1 abc1234 [deploy/worker-1]
# /path/to/worker-2 abc1234 [deploy/worker-2]
# ...
# /path/to/worker-9 abc1234 [deploy/worker-9]
Nine directories, nine branches, nine independent working copies. An agent assigned to ../worker-3 cannot accidentally affect ../worker-7 because they are in completely separate directory trees.
The Deployment Workflow
Each agent runs the same sequence in its assigned worktree:
# Inside ../worker-3/
git pull origin main # Get latest changes
# Apply Worker-3-specific changes
npm run test --workspace=worker-3 # Run Worker 3 tests only
npx wrangler deploy --config=workers/worker-3/wrangler.toml
Because each agent operates in an isolated directory, all nine sequences run simultaneously. There are no file conflicts. wrangler deploy for Worker 3 reads only the wrangler.toml in ../worker-3, not the configurations for any other Worker.
The deployment output from all nine agents streams in parallel. The bottleneck is Cloudflare's API rate limits, not the deployment logic.
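The fan-out/fan-in shape of that run can be sketched in plain shell. Here run_deploy is a hypothetical stand-in for the real per-worktree sequence (pull, test, wrangler deploy), so the sketch is self-contained:

```shell
# Fan-out/fan-in sketch. `run_deploy` is a placeholder for the actual
# per-worktree deploy commands shown above.
workdir=$(mktemp -d) && cd "$workdir"
run_deploy() {
  echo "deployed worker-$1"                  # stand-in for the real deploy step
}
for i in $(seq 1 9); do
  run_deploy "$i" > "deploy-$i.log" 2>&1 &   # one background job per worktree
done
wait                                         # fan-in: block until all nine finish
ls deploy-*.log | wc -l                      # nine logs, one per Worker
```

Redirecting each job to its own log keeps the parallel output readable; the `wait` at the end is the single synchronization point.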
Monorepo Structure That Makes This Work
This pattern works cleanly when the monorepo is structured to give each Worker a self-contained directory:
workers/
  worker-1/
    src/
    wrangler.toml
    package.json
  worker-2/
    src/
    wrangler.toml
    package.json
  ...
shared/        # Shared utilities imported by workers
  utils.ts
Each Worker's wrangler.toml references only its own entry point. The shared utilities are read-only during deployment - agents never need to modify them. Cross-Worker dependencies are handled through the package system, not through shared mutable files.
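A self-contained per-Worker config along these lines keeps the entry point local to the Worker's directory (names and the compatibility date are illustrative, not from the original repo):

```toml
# workers/worker-3/wrangler.toml (illustrative values)
name = "worker-3"
main = "src/index.ts"
compatibility_date = "2024-01-01"
```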
If Workers share a root package.json or a top-level wrangler.toml, the worktree isolation breaks down because agents will contend on those shared files. The fix is to move each Worker to its own package.json and wrangler.toml.
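With npm workspaces, the root manifest can stay a thin shell that only declares workspace globs, leaving dependencies and scripts in each Worker's own package.json. A minimal sketch (names are illustrative):

```json
{
  "name": "workers-monorepo",
  "private": true,
  "workspaces": ["workers/*", "shared"]
}
```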
Cleanup
After deployment, the worktrees can be removed:
for i in $(seq 1 9); do
  git worktree remove ../worker-$i
done
Or you can keep them around and reuse them for the next deployment cycle. The worktrees are cheap to keep - they share the object store with the main repo, so storage overhead is minimal.
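The sharing is visible on disk: a linked worktree's .git entry is a small pointer file, not a second object store. A throwaway sketch (paths and names are illustrative):

```shell
# Throwaway demo: a linked worktree carries no copy of the object store.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
echo hello > file.txt && git add . && git commit -qm init
git worktree add -q "$repo-wt" -b demo-branch
[ -f "$repo-wt/.git" ] && echo "worktree .git is a pointer file"
git worktree remove "$repo-wt"
git worktree list | wc -l                    # back to just the main checkout
```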
Why This Beats Other Approaches
Parallel npm scripts with a single checkout: Running npm run deploy:all with parallel script execution still uses a single git checkout. Any script that reads from the git index will see conflicts. Worktrees give true filesystem isolation, not just process parallelism.
Separate full clones: Cloning the repo nine times gives isolation but uses nine times the disk space and does not share the object store. Worktrees are the same isolation with none of the duplication cost.
CI/CD matrix jobs: A CI matrix running nine parallel deployment jobs is equivalent and works well for CD pipelines. Worktrees give you the same pattern locally - useful when you want to iterate on deployments without pushing to a CI trigger each time.
Scaling the Pattern
This approach works for any set of independent deployable units. Microservices, Lambda functions, static sites - anything that can be deployed independently and lives in a monorepo benefits from this pattern.
The constraint is that units must be genuinely independent. If deploying Worker 3 requires Worker 5 to deploy first, parallelism breaks the ordering. The pattern works best when Workers are horizontally isolated - each one can deploy successfully regardless of the others' state.
For the Cloudflare Workers case, this is usually true by design. Workers are independent serverless functions with their own KV namespaces, Durable Objects, and configurations. The deployment ordering only matters when you have cross-Worker API dependencies, which is an architecture smell anyway.
Fazm is an open-source macOS AI agent, available on GitHub.