Developer Workflow Guide

Parallel Agent Visibility: Tracking Multiple AI Agents on One Codebase

A screenshot on r/ClaudeCode showed six terminal panes running Claude Code agents simultaneously, each working on a different part of the same project. The caption: "My agents plugging away :)". The comments immediately split into two camps - people excited about the productivity and people asking how you actually keep track of what each agent is doing. The visibility problem is real. When you go from one agent to three, you can track it in your head. When you go from three to six, you cannot. And when agents start making changes that affect each other's work, the coordination problem compounds the visibility problem.

1. The Visibility Problem with Parallel Agents

Running a single AI agent is like managing a single employee. You give them a task, check in occasionally, review the output. Running five agents simultaneously is like managing five employees who all share the same desk, the same computer, and the same files - without any of them knowing the others exist.

The visibility problem has multiple layers:

  • Status visibility - which agents are actively working, which are waiting for confirmation, which have errored out? When you have six terminal panes, it is easy to miss that one agent has been stuck on a confirmation prompt for 10 minutes.
  • Progress visibility - how far along is each agent in its task? Is the API refactor 20% done or 80% done? Without progress tracking, you cannot make informed decisions about priority or reallocation.
  • Change visibility - which files has each agent modified? Are any two agents touching the same files? If Agent A refactored a function that Agent B is adding tests for, both will produce work that does not merge cleanly.
  • Error visibility - when an agent hits an error, you need to notice quickly. An agent spinning on a build failure wastes compute and your subscription quota. An agent that made a bad architectural decision early will compound the mistake as it continues.

Without deliberate visibility systems, parallel agents become parallel black boxes. You launch them, hope for the best, and discover the problems when you try to merge everything together.

2. tmux Layouts for Agent Monitoring

tmux is the most common tool for running parallel terminal agents, and how you structure your tmux layout directly affects your ability to monitor agents effectively. There are several patterns that work:

| Layout | Best For | Agent Count | Monitoring Quality |
| --- | --- | --- | --- |
| Even split panes | Simultaneous monitoring of all agents | 2 to 4 | Good - can read output in each pane |
| One large pane + small status panes | Focus on one agent while monitoring others | 3 to 6 | Good for focused work |
| Named windows, one per agent | Full-screen per agent, switch via status bar | Any number | One-at-a-time only |
| Dashboard window + agent windows | Summary view plus full access to each agent | 5+ | Best overall |

The dashboard-plus-windows pattern works best for serious parallel work. You dedicate one tmux window to a status dashboard (built with watch commands, git status checks, and process monitors) and give each agent its own full-screen window. The status bar shows window names, so you can see at a glance which agents are running.

Pro tip: Name each tmux window with the task, not the agent number. "api-refactor" and "test-coverage" are more useful than "agent-1" and "agent-2" when you are switching quickly between contexts.
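A minimal bootstrap for the dashboard-plus-windows pattern might look like the sketch below. The session name, the task list, and the agent-status.md path are all illustrative assumptions - adapt them to your project:

```shell
#!/bin/sh
# Sketch of the dashboard-plus-windows tmux layout. Session name, task
# names, and the status-file path are assumptions for illustration.

slug() {
  # Turn a task name into a tmux window name: "API Refactor" -> "api-refactor"
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}

start_session() {
  session=agents
  # Window 0 is the dashboard: re-render the shared status file every 5 seconds
  tmux new-session -d -s "$session" -n dashboard
  tmux send-keys -t "$session:dashboard" 'watch -n 5 cat agent-status.md' C-m
  # One full-screen window per task, named after the task (not the agent)
  for task in 'API Refactor' 'Test Coverage'; do
    tmux new-window -t "$session" -n "$(slug "$task")"
  done
  tmux attach -t "$session"
}

# Functions only when sourced; pass "up" to actually create the session
if [ "${1:-}" = "up" ]; then
  start_session
fi
```

Launching each agent inside its named window is then one `tmux send-keys` per window, and the status bar gives you the at-a-glance view described above.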

3. Status Tracking and Progress Indicators

The simplest and most effective status tracking approach is a shared file that each agent updates. Create a markdown file in the project root - something like agent-status.md - and instruct each agent to update its section when it starts a subtask, completes it, or encounters a problem.

This is low-tech but works surprisingly well. You can watch the file in your dashboard window, and a quick glance tells you the state of every agent. Some developers extend this with:

  • Git branch status - a watch command showing the latest commit on each agent's branch, updated every 10 seconds
  • File change monitors - fswatch or inotifywait on the working directories, printing a line whenever an agent creates or modifies a file
  • Build status - a continuously running build check that catches compilation errors as agents introduce changes
  • Token usage tracking - monitoring the Claude Code log files to see how much quota each agent is consuming
  • Time-since-last-output - a simple timer per agent that resets whenever new output appears. If an agent has not produced output in 5 minutes, it might be stuck.
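The time-since-last-output check from the list above can be as simple as comparing file modification times, assuming each agent's output is already being captured to a log file (the logs/ paths below are illustrative):

```shell
# Time-since-last-output check: flag any agent log that has been silent
# for longer than STALL_SECS. Log paths are illustrative; point it at
# wherever each agent's output is captured.
STALL_SECS=300
mkdir -p logs

check_stalled() {
  now=$(date +%s)
  for log in logs/*.log; do
    [ -e "$log" ] || continue
    # GNU stat first, BSD stat as the fallback
    mtime=$(stat -c %Y "$log" 2>/dev/null || stat -f %m "$log")
    age=$(( now - mtime ))
    if [ "$age" -gt "$STALL_SECS" ]; then
      echo "STALLED: $log (no output for ${age}s)"
    fi
  done
}

check_stalled
```

Wrapped in `watch -n 30 ./check-stalled.sh`, this catches the agent that has been sitting on a confirmation prompt while you were focused elsewhere.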

The goal is not to build a sophisticated monitoring system. It is to answer the three questions you care about most: Is each agent making progress? Has anything gone wrong? Are any agents touching the same files?

4. Conflict Detection and Resolution

The most insidious problem with parallel agents is not merge conflicts - those are visible and mechanical to resolve. It is semantic conflicts: two agents making changes that merge cleanly but produce broken code because they made incompatible assumptions.

Example: Agent A renames a function parameter from "userId" to "accountId" across the API layer. Agent B adds a new API endpoint that uses "userId" because it was reading the codebase as it existed when it started. The changes merge without conflicts, but the new endpoint uses a parameter name that no longer exists in the shared validation layer. The bug does not surface until runtime.

Strategies that reduce semantic conflicts:

  • Non-overlapping file sets - the most reliable strategy. Each agent owns a set of files and does not touch anything outside that set.
  • Frequent merges - merge each agent's branch into a shared integration branch every 30 to 60 minutes. Run tests after each merge. Catch conflicts early while they are small.
  • Shared interface contracts - define the interfaces between components before agents start. Each agent implements its side of the contract. As long as the contracts are respected, the implementations merge safely.
  • Build and test on every merge - automated CI that runs the full test suite when branches merge catches most semantic conflicts, even if the merge itself is clean.

Practical rule: if two agents need to modify the same file, sequence them instead of parallelizing them. The time lost from sequential execution is less than the time lost from debugging a semantic merge conflict.
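The frequent-merge strategy can be scripted in a few lines. This is a sketch: the "integration" branch name and the ./run-tests.sh test command are assumptions you would replace with your own:

```shell
# Frequent-merge loop: fold each agent branch into a shared integration
# branch and run the test suite after every merge, stopping at the first
# failure so the offending branch is obvious. "integration" and
# ./run-tests.sh are assumptions for your project.
integrate() {
  git checkout -q integration || return 1
  for branch in "$@"; do
    if ! git merge -q --no-edit "$branch"; then
      git merge --abort
      echo "CONFLICT: $branch does not merge cleanly" >&2
      return 1
    fi
    if ! sh ./run-tests.sh; then
      echo "SEMANTIC CONFLICT: tests fail after merging $branch" >&2
      return 1
    fi
    echo "merged $branch cleanly"
  done
}

# Usage: integrate agent-api-refactor agent-test-coverage
```

Running tests after every merge, not just at the end, is what turns a clean-looking semantic conflict into a failure you can attribute to one specific branch.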

5. Log Aggregation Across Agents

Each Claude Code agent produces its own log stream - a mix of model responses, tool calls, file edits, and command execution output. When you need to understand what happened across multiple agents over the last hour, reading each log separately is tedious and makes it hard to see the timeline of events across agents.

A simple aggregation approach: pipe each agent's output to a timestamped log file, then use a combined view that interleaves them chronologically. This gives you a single timeline of everything every agent did, which is invaluable for post-hoc debugging.
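A minimal version of that approach, assuming each agent's output can be piped through a small filter (the agent command and log paths are illustrative):

```shell
# Minimal log interleaving: prefix each line of an agent's output with a
# UTC timestamp and the agent name. Because ISO-8601 timestamps sort
# lexically, plain sort(1) over all log files yields one chronological
# timeline across agents.
stamp() {
  agent=$1
  while IFS= read -r line; do
    printf '%s %s %s\n' "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$agent" "$line"
  done
}

# Per-agent capture (the agent command is illustrative):
#   some-agent-command 2>&1 | stamp api-refactor >> logs/api-refactor.log
# Combined chronological view across every agent:
#   sort logs/*.log | less
```

Note the one-second timestamp resolution: events within the same second may interleave arbitrarily, which is usually fine for post-hoc debugging.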

| Approach | Setup Effort | Real-Time? | Searchable? |
| --- | --- | --- | --- |
| tmux logging plugin | Low | Yes | Grep on log files |
| script command per pane | Low | Yes (tail -f) | Grep on the typescript output files |
| Custom dashboard with multitail | Medium | Yes - color-coded per agent | Built-in search |
| Claude Code JSONL logs | None (built-in) | Post-hoc | Structured - filter by type |

Claude Code's own JSONL logs (in ~/.claude/projects/) are the most structured option. They record every tool call, every file edit, and every model response in a machine-readable format. For serious post-hoc analysis of what went wrong, these are the best source of truth.
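A rough first pass over those logs does not even need a JSON parser. The snippet below assumes one JSON object per line with a top-level "type" field - the exact schema varies by Claude Code version, so inspect a real file under ~/.claude/projects/ and adjust the pattern before relying on it:

```shell
# Rough, shell-only tally of event types in a JSONL session log. Assumes
# one JSON object per line with a top-level "type" field; the schema is
# an assumption - check a real log file and adjust the pattern.
count_event_types() {
  grep -o '"type":"[^"]*"' "$1" | sort | uniq -c | sort -rn
}

# Usage (path illustrative):
#   count_event_types ~/.claude/projects/my-project/session.jsonl
```

For anything deeper than a tally - reconstructing which files an agent edited and in what order - a real JSON tool like jq is the better fit.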

6. Orchestration Approaches

Manual orchestration - switching between terminals and dispatching tasks yourself - works for 3 to 5 agents but does not scale beyond that. Several approaches to more structured orchestration have emerged:

  • Claude Code native sub-agents - the officially supported path. A primary agent decomposes work and spawns sub-agents for parallel execution. The primary agent handles coordination and result synthesis. This keeps everything within one session and Anthropic's terms of service.
  • Task queues in shared files - a low-tech approach where you maintain a task list in a markdown file. Each agent picks up the next unassigned task, marks it as in-progress, and marks it complete when done. You are still the dispatcher, but the task state is visible and persistent.
  • Git-based coordination - each agent works on a named branch. A CI pipeline attempts merges and runs tests. Merge failures alert you to conflicts. This uses existing infrastructure and scales well.
  • Desktop orchestration tools - Fazm approaches agent orchestration from the desktop level rather than the terminal level. As a macOS agent built on accessibility APIs, it can monitor and coordinate across multiple applications - not just terminal sessions. This is useful when your multi-agent workflow spans coding, testing, deployment dashboards, and communication tools.
  • Custom orchestration scripts - for teams with specific needs, a lightweight Python or Node script that manages task assignment, tracks progress, and handles result collection. This requires more upfront investment but gives full control over the coordination logic.
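The shared-file task queue from the list above can be sketched in a few lines of shell. The tasks.md checkbox convention here is hypothetical, and a lock directory stands in for a proper locking primitive (mkdir is atomic on POSIX filesystems):

```shell
# Hypothetical shared task queue in tasks.md, one task per line:
#   [ ] open    [~] in progress    [x] done
# claim_next claims the first open task; the lock directory keeps two
# agents from grabbing the same line (mkdir is atomic).
claim_next() {
  while ! mkdir tasks.lock 2>/dev/null; do sleep 1; done
  task=$(grep -m1 '^\[ \]' tasks.md | sed 's/^\[ \] //')
  if [ -n "$task" ]; then
    # Flip exactly the first "[ ]" line to "[~]"
    awk '!done && /^\[ \]/ { sub(/^\[ \]/, "[~]"); done=1 } { print }' \
      tasks.md > tasks.md.tmp && mv tasks.md.tmp tasks.md
    echo "$task"
  fi
  rmdir tasks.lock
}
```

Because the queue is a plain file in the repo, the task state survives crashes and is visible in the same dashboard window as everything else.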

The right approach depends on how often you run parallel agents and how complex your coordination needs are. For daily use with 3 to 5 agents, tmux windows plus a shared status file are sufficient. For teams running 10+ agents on large codebases, more structured orchestration pays for itself quickly.

7. Scaling Beyond Manual Coordination

The developers getting the most value from parallel agents are the ones who have invested in their coordination infrastructure. Not complex tooling - just deliberate practices that make the multi-agent workflow predictable and debuggable.

The maturity progression typically looks like:

| Stage | Practices | Typical Agent Count |
| --- | --- | --- |
| Ad hoc | Open terminals, run agents, switch between them | 2 to 3 |
| Structured | tmux layout, named sessions, task planning document | 3 to 5 |
| Monitored | Dashboard window, file change alerts, automated build checks | 5 to 8 |
| Orchestrated | Sub-agent dispatch, automated conflict detection, log aggregation | 8+ |

Most individual developers settle at the Structured or Monitored stage. Teams working on large codebases with dedicated infrastructure move to the Orchestrated stage. There is no need to over-engineer your parallel workflow if you are running 3 agents - but if you find yourself regularly running 5 or more, the monitoring and coordination investment pays off within a few sessions.

The key insight from practitioners who have been doing this for months: the bottleneck is never the agents. It is always the human's ability to maintain awareness of what the agents are doing. Every investment in visibility - better tmux layouts, status files, log aggregation, build monitors - directly increases the number of agents you can run effectively.

The ceiling is not technical. It is cognitive. The tools and practices described here are all about extending your cognitive capacity to manage parallel work. Get those right, and parallel agents become a genuine productivity multiplier instead of a source of merge conflicts and wasted compute.

Orchestrate multi-app workflows from one place

Fazm is a macOS AI agent that coordinates across applications using accessibility APIs. When your workflow spans terminals, browsers, and desktop apps - let an agent handle the coordination.

Get Fazm for macOS

fazm.ai - macOS AI agent for multi-app orchestration