Love Research - 47 Couples and Calibrated Prediction Models
A study tracked 47 couples with calibrated prediction models - systems that do not just predict outcomes but know how confident they should be in those predictions. The results were surprising not because of what they predicted, but because of how the calibration itself changed the research.
Calibration Changes Everything
Most prediction models are either overconfident or underconfident. A calibrated model says "I am 70% sure this couple will report higher satisfaction next month" and, across all such 70% predictions, is right about 70% of the time. This sounds simple, but it is extraordinarily rare in social science.
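One way to see what calibration means in practice is to bucket predictions by stated confidence and compare each bucket's average confidence to its observed accuracy. A minimal sketch (the function name and data are illustrative, not from the study):

```python
def calibration_table(predictions, bins=10):
    """predictions: list of (confidence, outcome) pairs,
    where confidence is in [0, 1] and outcome is True/False.
    Returns one (mean_confidence, accuracy, count) row per
    non-empty confidence bucket."""
    buckets = [[] for _ in range(bins)]
    for conf, outcome in predictions:
        idx = min(int(conf * bins), bins - 1)
        buckets[idx].append((conf, outcome))
    table = []
    for b in buckets:
        if not b:
            continue
        mean_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(o for _, o in b) / len(b)
        table.append((mean_conf, accuracy, len(b)))
    return table

# Synthetic check: ten predictions at 70% confidence, seven correct.
# A calibrated model's rows have mean_conf close to accuracy;
# an overconfident model's rows have mean_conf well above accuracy.
rows = calibration_table([(0.7, True)] * 7 + [(0.7, False)] * 3)
```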
With 47 couples, the calibrated model was able to identify which relationship dynamics it could predict reliably and which ones it could not. Conflict resolution patterns were highly predictable. Spontaneous gestures of affection were not. The model knew what it did not know.
The Agent Design Lesson
This maps directly to how AI agents should work. An agent that knows its own uncertainty is dramatically more useful than one that is always confident. When an agent says "I am not sure this is the right action" rather than plowing ahead, it saves you from cascading errors.
Most current AI agents are uncalibrated - they present every output with the same level of confidence. A code suggestion and a guess about user intent get delivered identically. This makes it impossible for the user to know when to trust the agent and when to verify.
Building Calibrated Agents
Practical calibration for desktop agents means:
- Signal confidence explicitly - communicate uncertainty alongside every output
- Ask before acting on low-confidence decisions
- Track accuracy over time and adjust confidence thresholds
- Separate observation from interpretation - "I see the error message" vs "I think the fix is..."
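The checklist above can be sketched as a simple action gate. This is an illustrative design, not a real agent framework: the names (`Proposal`, `ActionGate`, `ask_threshold`) and the threshold-adjustment rule are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    action: str
    confidence: float    # explicit confidence signal
    observation: str     # what the agent saw ("I see the error message")
    interpretation: str  # what the agent thinks it means ("I think the fix is...")

@dataclass
class ActionGate:
    ask_threshold: float = 0.8
    history: list = field(default_factory=list)  # (confidence, was_correct)

    def decide(self, p: Proposal) -> str:
        # Ask before acting on low-confidence decisions.
        if p.confidence < self.ask_threshold:
            return f"ASK: observed {p.observation!r}; unsure whether {p.interpretation!r}"
        return f"ACT: {p.action}"

    def record(self, confidence: float, was_correct: bool) -> None:
        # Track accuracy over time and adjust the confidence threshold:
        # if recent autonomous actions keep failing, demand more
        # confidence before acting without asking.
        self.history.append((confidence, was_correct))
        recent = self.history[-20:]
        accuracy = sum(ok for _, ok in recent) / len(recent)
        if accuracy < 0.7:
            self.ask_threshold = min(0.95, self.ask_threshold + 0.05)
```

Keeping `observation` and `interpretation` as separate fields forces the agent to report what it saw even when its reading of it is a guess, which is the last bullet made concrete.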
The 47 couples study showed that the most useful predictions were not the most accurate ones. They were the ones that came with honest uncertainty estimates.
Fazm is an open-source macOS AI agent, available on GitHub.