One Word, Three Things

Your CEO says "AI strategy." Your engineer says "AI can't do that." They're both right. They're talking about completely different things.

Drew Breunig noticed this and declared: there are only three use cases.

Gods. AGI. Superintelligence. The thing that might replace all of us or destroy civilization. Important to think about, but not what you're building products with today.

Interns. Human-in-the-loop tools with error rates too high to run unattended. Coding agents. Research assistants. Useful, but they require supervision. You wouldn't ship their output without checking it.

Cogs. Small models that do one thing with six-nines reliability. Spam filters. Sentiment classifiers. They work in production because they're narrow, tested, and predictable.

The framework "blew up."

Executives at companies from startups to large enterprises told Breunig it finally gave them a way to think about AI product strategy. Not because it was technically sophisticated. Because it forced clarity.

Most AI confusion isn't about capability. It's about people using the same word for three different things.

When your CEO says "we need an AI strategy," are they talking about gods, interns, or cogs? When your engineer says "AI can't do that reliably," which category are they referencing?

The framework works because it forces you to decide which category you're actually building for.

The value wasn't a new technology. It was a taxonomy that made visible what was already there.

Sometimes the most valuable thing you can build is vocabulary.


Go deeper: The Context Flow