The Context Flow
Whether AI makes you better, makes you worse, or makes you capable of the impossible comes down to a single variable.
The Radiologist's Warning
In 2016, Geoffrey Hinton, the godfather of deep learning, made a prediction that sent shockwaves through medical education:
"We should stop training radiologists now. It's just completely obvious that within five years, deep learning is going to do better than radiologists."
Medical schools panicked. Students questioned their career choices. The prevailing wisdom became: any job that involves pattern matching is about to disappear.
Eight years later, we have more radiologists than ever.
Not because the AI failed. The AI got spectacularly good at detecting tumors, fractures, and anomalies. In many cases, better than humans.
But something unexpected happened.
The practices that learned to work WITH the AI discovered a third option nobody predicted. Not replacement. Not merely assistance. Expansion. Radiologists using AI could now detect patterns across thousands of scans that would take a human lifetime to review. They could catch rare presentations that no single doctor would ever see enough of to recognize.
The radiologists who let AI generate first and reviewed second saw their judgment atrophy. The radiologists who framed questions first and used AI to test hypotheses got sharper. But the radiologists who asked "what was impossible before?" found capabilities that didn't exist for either human or machine alone.
Same technology. Three completely different outcomes.
The difference wasn't the AI. It was the direction of context flow.
The Master Variable
Look across hundreds of human-AI implementations in different industries, and one variable explains the outcomes better than any other:

Who provides context to whom?
This single question determines which of three paths you're on:
Path A: Augmentation
Human frames problem → AI executes → Human refines → Learning loop
Context flows Human → AI. Human leads, AI accelerates.

Result: Expertise preserved. Quality sustained. Judgment sharpens with use.
Path B: Substitution
Human asks (no work) → AI generates → Human accepts (no feedback) → Expertise atrophies
Context flows AI → Human. Human prompts, but provides no real context. Human accepts, but provides no real evaluation.

Result: Expertise erodes. Quality spirals. System eventually collapses.
Path C: Expansion
Human provides context → AI enables impossible → New capability emerges
Context flows Human → AI → Previously Unreachable.

Result: Work that couldn't exist before. Neither human nor AI could do it alone.
Critical distinction: Path C isn't just doing existing work faster. It's creating work that was impossible.
AlphaFold predicts protein structures in hours. A problem that once took PhD students years per structure. Not because it replaced biochemists. Because biochemists asking the right questions could now reach answers that were computationally out of reach before.
Personalized medicine at scale. One-to-one tutoring for millions. Exploring ten thousand design variants. These require human context (what problems matter, what "success" means) combined with AI capability (scale and speed no human can match).
Path C still requires Path A's discipline. The moment you let AI generate without human framing, Path C degrades to Path B.
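The three paths can be read as a decision rule: which path you're on depends only on whether a human provides real context up front, whether human judgment gates the output, and whether the work was previously impossible. A toy sketch (the field and function names are illustrative, not from the source):

```python
from dataclasses import dataclass

@dataclass
class Task:
    human_frames_problem: bool     # does a human provide real context up front?
    human_evaluates_output: bool   # does human judgment gate the output?
    previously_impossible: bool    # new capability, not just faster existing work?

def classify_path(task: Task) -> str:
    """Toy classifier for the three paths described above."""
    if not (task.human_frames_problem and task.human_evaluates_output):
        return "B: Substitution"   # context flows AI -> Human; expertise erodes
    if task.previously_impossible:
        return "C: Expansion"      # Human -> AI -> previously unreachable
    return "A: Augmentation"       # Human -> AI; human leads, AI accelerates

print(classify_path(Task(True, True, False)))   # A: Augmentation
print(classify_path(Task(False, False, False))) # B: Substitution
print(classify_path(Task(True, True, True)))    # C: Expansion
```

Note the ordering: Path C is only reachable through the same gate as Path A. Drop either framing or evaluation and the classifier falls through to Path B, no matter how ambitious the task.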
The Button Problem
Here's what makes this tricky:
Path B feels easier.
Why formulate the question when the AI will generate something anyway? Why provide context when you can just edit the output? Why exercise judgment when you can outsource it?
This is The Button Problem.
When generating something becomes as easy as pressing a button, the path of least resistance is to press the button first and think later.
But here's the trap: every time you press the button without framing the problem, you weaken the muscle that knows whether the output is good.
Writers who draft with AI from the first paragraph lose the instinct for what makes prose work.
Lawyers who generate contracts from prompts without understanding the logic behind each clause become unable to spot the subtle errors that create liability.
Consultants who let AI structure their recommendations lose the strategic judgment that makes recommendations worth paying for.
The button is always easier. The button always costs you something.
The Three Dynamics
1. Capability Asymmetry
Humans and AI have opposite strengths.
AI excels at: Speed. Scale. Pattern matching. Consistency. Never getting tired.
Humans excel at: Context. Judgment. Meaning-making. Handling novelty. Knowing when the rules don't apply.
Neither is better. They're different. The question is how you combine them.
2. The Jagged Frontier
AI capability isn't linear. It's jagged.
It can write a serviceable sonnet but can't count its syllables. It can pass the bar exam but fail at basic logic puzzles. It can diagnose rare diseases but hallucinate treatments that don't exist.
You cannot predict from first principles what AI will be good or bad at. You have to test.
This means you need humans who understand the jagged edges. Who know where the AI fails. Who have the judgment to deploy it where it helps and override it where it doesn't.
Path A builds this knowledge. Path B erodes it. Path C requires it.
3. The Context Reservoir
Your organization has a reservoir of expertise. Domain knowledge. Historical precedent. Understanding of this specific situation.
It replenishes when: Humans exercise judgment. Encounter novelty. Teach others.
It depletes when: AI substitutes for thinking. Entry-level roles get eliminated. Lazy cognition sets in.
Critical property: The reservoir takes decades to fill and depletes invisibly. You won't notice it draining until you need expertise nobody has anymore.
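The reservoir's dynamics can be illustrated with a minimal simulation. The rates below are illustrative assumptions, not measurements; the point is only that small per-year differences in how often judgment is exercised versus substituted compound into very different reservoirs:

```python
def simulate_reservoir(years: int, practice_rate: float,
                       substitution_rate: float, level: float = 100.0) -> list[float]:
    """Toy model: expertise replenishes each year humans exercise judgment
    and depletes each year AI substitutes for thinking."""
    trajectory = []
    for _ in range(years):
        level += practice_rate       # judgment exercised, novelty met, others taught
        level -= substitution_rate   # AI substitutes, entry roles cut, lazy cognition
        level = max(level, 0.0)      # a drained reservoir can't go negative
        trajectory.append(level)
    return trajectory

path_a = simulate_reservoir(20, practice_rate=3.0, substitution_rate=1.0)
path_b = simulate_reservoir(20, practice_rate=0.5, substitution_rate=4.0)
print(f"After 20 years -- Path A: {path_a[-1]:.0f}, Path B: {path_b[-1]:.0f}")
```

Each intermediate year on Path B still looks survivable, which is the "depletes invisibly" property: no single year's drop triggers an alarm, but the trajectory only points one way.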
The Expertise Erosion Spiral
Here's the dark pattern that Path B creates:
- AI automates entry-level work
- Junior people never learn the fundamentals
- No pipeline of future experts develops
- Senior people retire with knowledge that was never transferred
- Organization loses the ability to supervise AI effectively
- AI errors go undetected
- Quality degrades
- But everyone still trusts the AI
- System fails in ways nobody understands
This takes years to play out. Which is why organizations keep choosing Path B. The costs are invisible until they're catastrophic.
Warning Signs You're on Path B
Individual level:
- [ ] Defaulting to AI first drafts that you "review"
- [ ] Declining to engage with edge cases
- [ ] Noticeable skill atrophy when AI is unavailable
Organizational level:
- [ ] Entry-level roles being eliminated, not redesigned
- [ ] Quality metrics declining without clear cause
- [ ] Expert departure rate increasing
- [ ] "Just use ChatGPT" becoming standard advice
Industry level:
- [ ] Race-to-bottom pricing
- [ ] Talent pipeline drying up
- [ ] Regulations lagging behind damage
If you're checking boxes, you're on Path B.
The Recovery Protocol
Path B → Path A isn't instant. Expertise takes time to rebuild.
DETECT: Monitor error rates, expertise surveys, quality trends. Watch for leading indicators: time-to-competence increasing, review load on senior staff increasing.
DIAGNOSE: Map which processes are in substitution mode. Identify where context is flowing AI→Human.
DESIGN: Redesign interfaces to restore human context provision first. Create learning roles that can't be automated.
DEPLOY: Phased transition. Parallel running until quality matches. Training investment in the interface layer.
DEFEND: Structural protection through quality gates. Economic alignment for expertise, not just efficiency.
What Path A Actually Looks Like
- Human frames the problem (context flows Human → AI)
- AI generates options at scale
- Human evaluates against criteria only humans can assess
- Human interrogates surprising results
- Human corrects systematic errors
- Human synthesizes into judgment call
- Loop repeats. Human gets sharper. AI gets better directed.
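The loop above is a control structure: AI generation sits inside human framing and human evaluation, never outside them. A minimal sketch under that assumption (the names and the toy demo are hypothetical, not from the source):

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    acceptable: bool
    feedback: str

def augmentation_loop(context: str, ai_generate, human_evaluate, max_rounds: int = 5):
    """Path A as a control structure: human framing and human evaluation
    wrap AI generation; the AI never runs outside that frame."""
    for _ in range(max_rounds):
        draft = ai_generate(context)       # AI generates at scale
        verdict = human_evaluate(draft)    # judgment against human criteria
        if verdict.acceptable:
            return draft                   # the human makes the final call
        context += f"\nCorrection: {verdict.feedback}"  # feedback sharpens next pass
    return None                            # loop exhausted: human takes over fully

# Toy demo with stand-in functions (no real AI involved):
drafts = iter(["rough draft", "draft with correct framing"])
result = augmentation_loop(
    context="Frame: summarize Q3 risks for this client",
    ai_generate=lambda ctx: next(drafts),
    human_evaluate=lambda d: Verdict("framing" in d, "address the client's framing"),
)
print(result)  # draft with correct framing
```

The design choice worth noticing: the correction feeds back into the context, not into the output. That is what makes the next generation better directed rather than merely edited.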
Examples:
- Radiologist reviews patient history, forms hypothesis, uses AI to test it, catches AI miss because they know the patient context
- Writer outlines argument, uses AI to draft sections, rewrites voice and logic
- Lawyer structures deal, uses AI to flag precedents, applies judgment to this specific situation
- Consultant frames strategic question, uses AI to analyze data, interprets meaning for this client
What Path C Unlocks
Once Path A is stable, Path C becomes accessible:
| Domain | Previously Impossible | Now Possible |
|---|---|---|
| Biology | Protein structure prediction | Solved in hours instead of years |
| Medicine | Personalized treatment at scale | AI + genomics + individual context |
| Education | 1:1 tutoring for millions | AI tutor with human curriculum direction |
| Design | Exploring 10,000 variants | Generative design with human judgment |
| Science | Cross-domain hypothesis generation | AI pattern matching + human validation |
Path C isn't about efficiency. It's about reaching places that were unreachable.
But it only works if humans provide the context. Without that, Path C degrades to Path B. The machine generates more, but nothing means anything.
The Great Market Split
Markets are starting to split.
Path A organizations: Humans who can supervise AI. Find the Missing Middle. Create new value. Charge premium rates. Attract the best talent. Build moats.
Path B organizations: Cut costs. Automate everything possible. Produce commodity outputs at commodity prices. Race to the bottom.
Path C organizations: Highest investment. Longest horizon. Winner-takes-all returns. Creating categories that didn't exist.
The middle is vanishing. There's less room for "we're okay" positions.
The Direction You Choose
Every task is a choice.
Do you frame first, or do you generate first?
Do you evaluate rigorously, or do you approve quickly?
Do you ask "what was impossible before?" or do you ask "how do I do this faster?"
The technology doesn't determine the outcome. The direction of flow does.
Path A compounds expertise. Every loop makes you better at using the next loop.
Path B depletes expertise. Every shortcut makes the next shortcut feel more necessary.
Path C expands capability. But only if Path A's discipline holds.
The Standard
The organizations and individuals who thrive in the AI era won't be the ones who use AI most. They'll be the ones who direct context flow correctly.
Human context → AI capability → Human judgment → Better outcome
Or:
Human context → AI capability → Previously impossible → New category
That's the loop. That's the path. Everything else is extraction dressed up as innovation.
The question isn't whether to use AI. The question is: which direction is context flowing? And are you building something that couldn't exist before?
Those two questions will determine whether AI makes you obsolete, irreplaceable, or capable of the impossible.
Choose the direction carefully.
"Same technology. Three completely different outcomes. The difference wasn't the AI. It was the direction of context flow."
Explore Further
The Jagged Frontier — Why AI's capability boundaries are unpredictable, and why you need experts to navigate them.
The Missing Middle — The new work AI creates that couldn't exist before. Not faster. New.
Use It or Lose It — Skills atrophy without practice. AI makes the atrophy invisible.
The Apprenticeship Paradox — Why eliminating junior roles destroys senior expertise.
The Button Problem — When creation becomes too easy, what signals value?
Browse all notes: Hybrid Intelligence →