Why AI Agents Need Feedback Loops, Not Just Instructions

Matthew Diakonov

Open Loop vs Closed Loop

Most AI agent failures come from the same root cause: the agent follows instructions without checking if they worked.

Tell an agent "click the submit button." It clicks where it thinks the button is. But the page had not finished loading. The click hit empty space. The agent moves to the next step, assuming success. Everything after that point is wrong.

This is open-loop control - execute and hope. Closed-loop control is different: execute, observe the result, and adjust.
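The difference can be sketched in a few lines. This is a minimal simulation, not a real browser driver: `Page`, `click_submit`, and the timing values are all illustrative assumptions standing in for whatever automation API an agent actually uses.

```python
import time

class Page:
    """Simulated page that takes a moment to finish loading."""
    def __init__(self, load_delay):
        self.ready_at = time.time() + load_delay
        self.submitted = False

    def click_submit(self):
        # A click on an unloaded page hits empty space and does nothing.
        if time.time() >= self.ready_at:
            self.submitted = True

# Open loop: execute and hope.
def submit_open_loop(page):
    page.click_submit()       # may silently do nothing
    return True               # assumes success regardless

# Closed loop: execute, observe the result, adjust.
def submit_closed_loop(page, timeout=1.0, interval=0.05):
    deadline = time.time() + timeout
    while time.time() < deadline:
        page.click_submit()
        if page.submitted:    # observe: did the form actually submit?
            return True
        time.sleep(interval)  # adjust: wait and try again
    return False              # surface failure instead of assuming success
```

Run both against a page with a 0.2-second load delay and the open-loop version reports success while the form was never submitted; the closed-loop version keeps observing until the click actually lands.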

Why Instructions Alone Fail

Instructions assume a predictable environment. Software is not predictable. Pages load at different speeds. Dialogs appear unexpectedly. Buttons move when the window resizes. Network requests fail. Applications crash.

An instruction-following agent has no way to handle any of this. It has a script, and it runs the script. When reality diverges from the script, the agent either fails visibly or - worse - continues silently doing the wrong thing.

What Closed-Loop Agents Do Differently

A closed-loop agent treats every action as a hypothesis. "I clicked the submit button" becomes "I clicked the submit button - did the form actually submit?"

After every action, the agent:

  1. Observes the new state. Takes a screenshot or reads the accessibility tree.
  2. Compares against expectations. Did the expected change actually happen?
  3. Decides the next action based on reality. Not based on what the script says should happen next.

This is slower than open-loop execution. It is also dramatically more reliable.
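The three steps above can be expressed as one generic loop. This is a sketch under stated assumptions: `action`, `observe`, and `expected` are caller-supplied placeholders for a real click, a real screenshot or accessibility-tree read, and a real expectation check.

```python
def run_step(action, expected, observe, max_attempts=3):
    """Execute an action as a hypothesis: observe, compare, decide."""
    for attempt in range(max_attempts):
        action()                  # execute
        state = observe()         # 1. observe the new state
        if expected(state):       # 2. compare against expectations
            return state          # 3. decide next action based on reality
    # Reality never matched the expectation: fail loudly, not silently.
    raise RuntimeError(f"expected change never happened after {max_attempts} attempts")
```

The key design choice is the `raise` at the end: when reality refuses to match expectations, the loop surfaces the failure instead of letting the agent continue silently doing the wrong thing.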

Practical Feedback Patterns

  • State assertions. After clicking "Save," verify the success toast appeared before continuing.
  • Retry with variation. If a click did not work, try clicking at slightly different coordinates or waiting for the element to become interactive.
  • Error detection. Watch for error dialogs, red borders on form fields, or unexpected page navigations.
  • Progress tracking. For multi-step tasks, verify each step completed before starting the next.
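Two of these patterns, state assertions and retry with variation, compose naturally. In this sketch, `click_at` and `succeeded` are hypothetical stand-ins for a driver's click call and a post-action check such as "did the success toast appear?":

```python
import random

def click_with_variation(click_at, succeeded, x, y, attempts=3, jitter=4):
    """Click, assert the expected state change, and retry at slightly
    offset coordinates if it did not take effect."""
    for i in range(attempts):
        # First try the exact spot; on retries, nudge the coordinates.
        dx = random.randint(-jitter, jitter) if i else 0
        dy = random.randint(-jitter, jitter) if i else 0
        click_at(x + dx, y + dy)
        if succeeded():           # state assertion: did the click land?
            return True
    return False                  # all variations failed; report it
```

Returning `False` rather than raising is a judgment call: for a multi-step task, the caller tracking progress usually wants to decide whether a failed click is fatal.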

The Cost Tradeoff

Feedback loops cost tokens and time. Every observation means another API call or screenshot analysis. But the cost of recovering from a failed open-loop run - re-running the task, undoing partial changes, or debugging silent corruption - is almost always higher than the cost of checking along the way.

Build agents that verify. Not agents that assume.

Fazm is an open source macOS AI agent, available on GitHub.
