1 min read

The Nine Percent

Only 9% of organizations can intervene before an AI agent completes a harmful action.

57% have agents in production. 89% have observability tooling. Fewer than one in ten can actually stop one mid-task.

Observability tells you what happened. Evals tell you whether it should have happened. Neither tells you how to stop it mid-flight.

If you've assumed your observability stack gives you control over your agents, this is your correction.

The friction that keeps showing up: teams treat "kill the process" as an intervention. It isn't.

Process-killing stops the agent but doesn't roll back the damage.

Deleted records stay deleted. Escalated permissions stay escalated.
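One way to close that gap (a hypothetical sketch, not a pattern from any specific team in the survey) is to make the agent's destructive operations soft by default, so "deleted" stays recoverable. Here `Record` and `Store` are illustrative names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Record:
    id: int
    payload: str
    deleted_at: Optional[datetime] = None  # soft-delete marker instead of a hard DELETE

class Store:
    """Toy store where agents can only soft-delete, so damage is reversible."""
    def __init__(self) -> None:
        self._rows: dict[int, Record] = {}

    def add(self, rec: Record) -> None:
        self._rows[rec.id] = rec

    def delete(self, rec_id: int) -> None:
        # The agent never hard-deletes: mark the row, keep the data.
        self._rows[rec_id].deleted_at = datetime.now(timezone.utc)

    def restore(self, rec_id: int) -> None:
        # The rollback that killing the process can't give you.
        self._rows[rec_id].deleted_at = None

    def live(self) -> list[Record]:
        return [r for r in self._rows.values() if r.deleted_at is None]
```

The same idea extends to permissions: grant them with an expiry and a revoke path, so "escalated" doesn't have to mean permanent.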

The 9% who can intervene built the interrupt mechanism into the architecture before deployment, not after something went wrong.
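What "built into the architecture" can look like, as a minimal sketch (names like `InterruptGate` are my own, not from the survey): every tool call passes through a checkpoint that an operator can trip before the action runs, and destructive calls need explicit approval.

```python
import threading

class InterruptGate:
    """Pre-execution checkpoint: every agent tool call goes through run(),
    so an operator can halt or veto *before* the action executes."""

    def __init__(self) -> None:
        self._halted = threading.Event()
        self.audit: list[tuple[str, str]] = []

    def halt(self) -> None:
        # Operator-side switch, checked before each action, not after.
        self._halted.set()

    def run(self, tool_name, tool_fn, *args, destructive=False, approve=None):
        if self._halted.is_set():
            self.audit.append((tool_name, "blocked: halted"))
            raise RuntimeError(f"agent halted before {tool_name}")
        if destructive and not (approve and approve(tool_name, args)):
            self.audit.append((tool_name, "blocked: unapproved"))
            raise PermissionError(f"{tool_name} requires approval")
        self.audit.append((tool_name, "executed"))
        return tool_fn(*args)
```

The point is where the check sits: before the tool executes, not in a dashboard after it did.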

Vendor demos never show this part. The demo shows the agent completing the task. The 8% who had outages saw what happens when an agent completes the wrong one.

Ask your team one question.

If an agent starts writing to production right now, what is the actual intervention?

If the answer is "kill the process," that is a gap, not a mechanism.