Building UI for Agentic Workflows Using MCP Apps

Fazm Team · 2 min read


The hardest part of building a UI for AI agent workflows is not the interface design; it is making the agent's behavior predictable enough that a UI can represent it.

The Schema Problem

MCP tools communicate through JSON. The agent sends a tool call with parameters, the tool returns a result. For this to work in a UI, both the input and output need to follow strict schemas. Otherwise the UI has no idea what to display.

The common mistake is making tool schemas too flexible. A tool that accepts "any JSON object" as input cannot have a form generated for it. A tool that returns "a string with some data" cannot be rendered in a structured view. Strict schemas are not a limitation: they are the foundation of usable UI.
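To make this concrete, here is a minimal sketch of why strict schemas enable UI generation. The schema and the `form_fields` helper are hypothetical illustrations, not part of any MCP SDK: given a typed JSON Schema, a UI can mechanically derive form widgets from it.

```python
# A hypothetical input schema for a "search_contacts" tool, written as
# JSON Schema. Every field is typed, so a form can be generated directly.
SEARCH_CONTACTS_INPUT = {
    "type": "object",
    "properties": {
        "query": {"type": "string", "description": "Free-text search"},
        "limit": {"type": "integer", "description": "Max results", "default": 10},
    },
    "required": ["query"],
}

def form_fields(schema: dict) -> list[dict]:
    """Derive form-field descriptors (widget, label, required) from a schema."""
    required = set(schema.get("required", []))
    widgets = {"string": "text_input", "integer": "number_input", "boolean": "checkbox"}
    return [
        {
            "name": name,
            "widget": widgets.get(prop.get("type"), "json_editor"),
            "label": prop.get("description", name),
            "required": name in required,
        }
        for name, prop in schema.get("properties", {}).items()
    ]
```

A schema of `{"type": "object"}` with no properties would give this helper nothing to work with, which is exactly the "too flexible" failure mode.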

Design Tools for Display

Every MCP tool should be designed with its UI representation in mind. What will the user see when this tool is called? What will they see when it returns? If the answer is "a blob of JSON," the tool needs a better schema.

Good MCP tool design means typed parameters with clear descriptions, enum values for constrained choices, and structured return types that map to UI components. A tool that returns {status: "success", items: [...], count: number} can be rendered as a table with a status badge. A tool that returns a plain string cannot.
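As a sketch of that last point, here is a hypothetical renderer for the structured return shape above. The function name and output format are illustrative assumptions; the point is that a typed result maps mechanically to UI elements, while a plain string does not.

```python
def render_result(result: dict) -> str:
    """Render a {status, items, count} tool result as a text table
    with a status badge. Assumes items share the same keys."""
    badge = {"success": "[OK]", "error": "[FAIL]"}.get(result["status"], "[?]")
    lines = [f"{badge} {result['count']} item(s)"]
    if result["items"]:
        headers = list(result["items"][0].keys())
        lines.append(" | ".join(headers))
        for item in result["items"]:
            lines.append(" | ".join(str(item[h]) for h in headers))
    return "\n".join(lines)
```

The same logic applies whether the target is a terminal, an HTML table, or a native view: the structured shape is what makes the mapping possible.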

One Tool, One Thing

The tools that work best in UIs do one specific thing. "Search contacts" is a good tool: it has clear inputs (query string, filters) and clear outputs (list of contacts). "Manage CRM" is a bad tool: it could do anything, so the UI cannot predict what to show.

When you split broad tools into specific ones, the UI practically designs itself. Each tool maps to a screen, a form, or a widget. The agent's workflow becomes a sequence of UI states that the user can follow and understand.
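One way to picture the tool-to-screen mapping is a simple registry. The tool and view names below are hypothetical; the sketch shows how each specific tool resolves to exactly one UI state, with a raw-JSON fallback for anything unmapped.

```python
# Hypothetical mapping from specific tools to the UI surface each one drives.
TOOL_UI = {
    "search_contacts": "contact_list_view",
    "create_contact": "contact_form",
    "get_contact": "contact_detail_card",
}

def ui_for_call(tool_name: str) -> str:
    """Resolve which UI state to show for an agent's tool call.
    Unknown tools fall back to a raw JSON view."""
    return TOOL_UI.get(tool_name, "raw_json_view")
```

A "manage_crm" tool could not live in such a table: with no fixed behavior, it would always land in the fallback.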

Fazm is an open-source macOS AI agent, available on GitHub.
