
Dashboard Blindness

By mid-2021, Zillow's algorithm was performing beautifully.

The models were processing hundreds of data points per property. The dashboard showed expanding margins. The buy signals were green across every market. In Q2 alone, Zillow purchased a record 3,805 homes. So they kept buying.

In November, they announced the shutdown. By the time the full accounting was done, the Homes segment had lost $881 million for the year. Zillow Offers was dead. Approximately 2,000 people lost their jobs.

The data wasn't wrong. Every number on every dashboard was accurate. Zillow's models measured what algorithms could process: comparable sales, price trends, market velocity. All of it precise. All of it real.

All of it yesterday's truth.

The market had already moved. By the time the models caught up, Zillow owned thousands of homes at prices the market had already abandoned.
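
If you want to see the mechanism in miniature, here's a toy sketch in Python. The prices and the four-month window are invented, and this is not Zillow's model; it only shows how a valuation built on trailing comparable sales stays accurate about the past while drifting away from a market that has turned:

```python
# A toy sketch, not Zillow's actual model: the prices and window size are invented.
# The "model" estimates today's value from a trailing average of past comparable sales.

market = [100, 103, 106, 109, 112, 110, 106, 101, 96]  # true market value by month

def comp_based_estimate(history, window=4):
    """Value a home as the average of the last `window` observed sales."""
    recent = history[-window:]
    return sum(recent) / len(recent)

for month in range(4, len(market)):
    estimate = comp_based_estimate(market[:month])  # the model only sees past sales
    actual = market[month]                          # where the market actually is now
    print(f"month {month}: model {estimate:.1f}, market {actual}, gap {estimate - actual:+.1f}")
```

Every sale the toy model sees is accurate. The estimate still overpays once the market turns, because it describes where comparables were, not where the market is.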

This wasn't a data quality problem. It was something more specific, more structural, and far more common than anyone wants to admit.


What Everyone Gets Wrong

When dashboards fail, we reach for familiar explanations.

Bad data. Insufficient metrics. Outdated tools. Need more real-time visibility. Need a better BI platform. Need AI-powered analytics.

These explanations share an assumption: the problem is technical. Fix the plumbing and the answers flow.

But Zillow didn't have a plumbing problem. They had more real estate data than any company in history. Their algorithms were sophisticated. Their dashboards were real-time. Every technical box was checked.

The problem wasn't what the dashboard showed. It was what the dashboard couldn't show, and what happened next because of that gap.

Here's what I've found after studying measurement failure across domains: from real estate to policing, from healthcare to retail, from individual teams to entire enterprises.

Measurement doesn't fail randomly. It fails in two spirals.

Two cascading patterns that explain why smart organizations with good data make catastrophic decisions. Each spiral has an entry point, an amplifier, and a lock. And your position in the spiral determines everything about what you should do next.


The Lock-In Spiral

The Entry: Availability Blindness

Every measurement system starts with a choice: what do we measure?

In theory, you measure what matters. In practice, you measure what's available.

Zillow measured comparable sales and price trends because that data existed. What actually mattered was market momentum, buyer sentiment shifts, and local supply pipeline timing.

That data didn't exist in a form algorithms could process. So the dashboard was built on what was available, not what was important.

This is availability blindness. Organizations don't choose metrics deliberately. They inherit them from whatever data infrastructure already exists. The tools define the dashboard. The dashboard defines reality.

It happens everywhere. Marketing teams measure pageviews because Google Analytics is already installed. Nonprofits measure services delivered because outcomes take years to track. Hospitals measure wait times because the timestamp already exists in the medical record.

In each case, the metric exists not because someone decided it mattered most, but because it was easy to capture.

Goodhart noticed this in 1975 while studying monetary policy. The Bank of England measured money supply indicators not because they captured economic health, but because they were available weekly.

Donald Campbell documented the same pattern in social programs. Jerry Muller found it in education, healthcare, and policing.

The mechanism is the same everywhere: data has friction. Easy data wins.

But availability blindness isn't the end of the story. It's the beginning of a cascade.

The Amplifier: Target Corruption

Once you've measured the wrong thing, the next step is predictable.

Someone sets a target on it.

New York City's CompStat system launched in 1994 with a simple premise: track crime statistics by precinct, hold commanders accountable for the numbers. Data-driven policing.

It worked. Crime dropped. The dashboard showed green.

Then targets were set. Precinct commanders whose numbers moved in the wrong direction faced intense scrutiny in weekly CompStat meetings. The incentive was clear: keep your numbers down.

A series of investigations between 2010 and 2013 revealed what the dashboard couldn't show. The Village Voice's "NYPD Tapes" exposed secret recordings of supervisors pressuring officers to manipulate reports. A subsequent audit found over 2,000 grand larcenies that had been misclassified as lesser crimes in a single year. Officers routinely downgraded felonies to misdemeanors. They refused to take reports. They reclassified crimes to keep the numbers clean.

The data was accurate. Every downgraded report was correctly filed. The dashboard faithfully reflected what officers recorded. It just no longer reflected what was happening on the streets.

This is Goodhart's Law in action: when a measure becomes a target, it ceases to be a good measure. Not because the data degrades, but because the behavior around the data changes. Targets don't corrupt numbers. They corrupt the relationship between numbers and reality.

The measure worked perfectly as description. It became destructive as target.
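
You can watch that decoupling in a toy simulation. This is a sketch with made-up numbers, not a model of any real precinct: the same reporting process runs twice, once as a description and once against a target, and only the second run pulls the dashboard away from reality.

```python
import random

# A toy illustration of Goodhart's Law, not a model of any real police department.
# "incidents" is what actually happens; "recorded" is what reaches the dashboard.

random.seed(1)

def run_year(target=None):
    total_actual = total_recorded = 0
    for _ in range(12):
        incidents = random.randint(45, 60)   # underlying reality: roughly flat
        if target is None:
            recorded = incidents             # measure as description: faithful
        else:
            # measure as target: downgrade or refuse whatever exceeds the target
            recorded = min(incidents, target)
        total_actual += incidents
        total_recorded += recorded
    return total_actual, total_recorded

actual, shown = run_year()
print(f"no target:   actual {actual}, dashboard {shown}")  # the two numbers match

actual, shown = run_year(target=48)
print(f"target = 48: actual {actual}, dashboard {shown}")  # the dashboard detaches
```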

And now the cascade accelerates. Because once gaming develops around a wrong metric, something else happens.

The Lock: Commitment Blindness

You can't change it.

By 1993, Sears had built a century of direct-mail analytics. They could predict catalog response rates to three decimal places. Their measurement infrastructure represented decades of investment: the models, the benchmarks, the historical comparisons, the team expertise built around catalog metrics.

There was just one problem. The catalog business model was dying. Walmart had overtaken Sears as America's largest retailer three years earlier. The future was supply chain speed, not catalog precision.

Everyone at Sears could see the shift. But the dashboards were built. The bonuses were tied to catalog metrics. The board expected catalog numbers. Careers had been built on improving those numbers by fractions of a percent.

To stop measuring catalog performance would mean admitting the metrics were wrong. That the bonuses were misaligned. That the strategy built on those numbers was outdated.

So they kept measuring. Kept reporting. Kept optimizing a dying model with increasing precision.

This is commitment blindness. Cialdini's consistency principle applied to metrics. We've invested so much in the current measurement system that changing it feels like admitting failure. The sunk cost isn't just financial. It's identity.

The complete cascade: Available → Corrupted → Locked.

You measure what's easy (not what matters). Targets get set on those wrong metrics. Gaming develops. And then you can't change because the infrastructure, incentives, and identities are all invested in the current system.

Each step makes the next one worse. Each step is harder to reverse.

But there's a second cascade. And this one doesn't break your metrics.

It breaks your people.


The Blame Spiral

The Entry: Systems Blindness

Manufacturing hits efficiency targets. Sales hits quota. Customer service hits response time. Every department earns its bonus.

The company loses market share for the third straight year.

How is this possible? Every dashboard is green.

Ackoff spent decades explaining how. When you optimize parts independently, the whole suffers.

Manufacturing optimizes for unit cost, which means longer production runs, which means less flexibility. Sales optimizes for quota, which means pushing whatever's easiest to sell, not what customers need. Customer service optimizes for response time, which means faster calls, not resolved problems.

Each department's green light is someone else's hidden cost.

This is systems blindness. The most dangerous type because it's invisible from inside any single dashboard. You can only see it from above, looking at the whole. And most organizations don't have a dashboard for the whole.
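
Here's the arithmetic of that problem as a toy sketch with invented scores. Each department picks whichever option its own dashboard rewards; the company-level outcome is computed from the combination:

```python
# A toy sketch with invented numbers. For each option, the first value is the
# department's own dashboard score; the second is that option's effect on the whole.

manufacturing = {"long_runs": (95, 0.3), "short_runs": (80, 0.9)}
sales         = {"push_easy_skus": (98, 0.4), "sell_to_need": (85, 0.9)}
support       = {"fast_calls": (97, 0.5), "resolve_fully": (82, 0.95)}

def local_choice(options):
    """Each department maximizes its own dashboard score (the first number)."""
    return max(options.values(), key=lambda scores: scores[0])

choices = [local_choice(dept) for dept in (manufacturing, sales, support)]

retention = 1.0
for _, whole_factor in choices:
    retention *= whole_factor  # the whole depends on the second number

print("department dashboards:", [score for score, _ in choices])  # all green
print(f"customer retention: {retention:.2f}")                     # the whole suffers

# For contrast: the options each local dashboard penalizes are the ones the whole needs.
alt = [manufacturing["short_runs"], sales["sell_to_need"], support["resolve_fully"]]
print(f"retention if optimized for the whole: {alt[0][1] * alt[1][1] * alt[2][1]:.2f}")
```

Every departmental number in the sketch is green. The only number that collapses is the one that doesn't appear on any department's dashboard.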

But here's where the cascade turns dark.

The Amplifier: Attribution Blindness

When the whole fails but the parts look fine, someone has to be blamed.

Between 2005 and 2011, Hewlett-Packard fired three CEOs in six years. Carly Fiorina tried to make HP a consumer-media company. Mark Hurd tried operational discipline. Leo Apotheker tried to abandon hardware entirely and pivot to software. Each brought a different strategy. Each failed.

The board replaced each one without ever answering the question none of them could solve alone: what is HP? A hardware company, a software company, or a services company? The board never decided. They just kept firing the person closest to the failure.

"It's hard to think of another board that has failed as consistently as this one," said corporate governance expert Nell Minow. Harvard professor Rakesh Khurana diagnosed it precisely: "The board's caught in this infinite loop. They're searching for an identity with new CEOs, and it's not clear that the board has yet solved its own issues."

The dashboard showed each CEO's performance. It didn't show the board dysfunction that produced those numbers. And humans have a well-documented default: when something goes wrong, we blame the person closest to the failure.

Psychologists call it the Fundamental Attribution Error. We overweight personal factors and underweight situational ones. Deming quantified it for organizations: 85% of problems are system problems. But the dashboard puts a name next to every red number.

Three CEOs. Same dysfunctional board. Same unresolved identity crisis. Different scapegoat. The resolution only came when Meg Whitman stopped trying to unify the company and split HP in two, effectively admitting the problem was structural, not personal.

The worst part? Each firing felt like a solution. Each new CEO arrived with energy and a fresh strategy. For a few quarters, things seemed different. Then the system reasserted itself. The numbers went red. The cycle restarted.

The Lock: Interpretation Blindness

Now the cascade completes. Because every time you blame an individual and the problem persists, you don't question the blame. You confirm it.

The executive believes the new strategy is working. They find three metrics that show growth. They overlook two that show churn increasing. "The data supports our position."

A year later, the product fails. "Nobody could have predicted this."

Actually, the dashboard did predict it. The contradicting data was there. Nobody looked because confirmation bias turns dashboards into mirrors. You don't read the data. You read your beliefs reflected in the data.

This is interpretation blindness. Kahneman's research on System 1 thinking explains the mechanism. Our fast, automatic cognition dominates how we process information, including data. We anchor on the first number we see. We seek confirming evidence. We dismiss contradictions.

The complete cascade: System fails → Blame people → Confirm bias.

The system produces the failure. The attribution falls on individuals. And then selective interpretation confirms that individuals were indeed the problem. The system is never examined. The cycle never breaks.


The One Rule

Two cascades. Six types of blindness. One intervention principle.

The earlier in the cascade you intervene, the more failures you prevent.

Fix availability blindness (Type 3) before targets corrupt your wrong metrics. Fix systems blindness (Type 1) before you start blaming people for structural problems.

Every downstream failure in the cascade becomes unnecessary if you catch the upstream cause.

This means the first question isn't "which type of dashboard blindness do we have?" It's "which cascade are we in?"

If you're measuring wrong things, gaming has developed, and you can't change: you're in the Lock-in Spiral. The intervention point is metric selection. Not better targets. Not anti-gaming policies. Go back to the beginning and ask: are we measuring what matters, or what's available?

If teams succeed while the organization fails, and people keep getting blamed: you're in the Blame Spiral. The intervention point is system design. Not better hiring. Not more accountability. Go back to the beginning and ask: does our measurement system see the whole, or only the parts?
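
If it helps to see the decision structure laid bare, here's that diagnostic written as a small function. The symptom names are mine, not the framework's terms, and a real diagnosis takes judgment rather than booleans:

```python
# A hypothetical encoding of the two diagnostic questions above.
# The argument names are illustrative, not official framework vocabulary.

def diagnose(measuring_whats_available: bool,
             gaming_has_developed: bool,
             parts_green_whole_red: bool,
             people_keep_getting_blamed: bool) -> str:
    if measuring_whats_available or gaming_has_developed:
        # Lock-in Spiral: intervene at metric selection, not with anti-gaming rules
        return "Lock-in Spiral -> revisit metric selection: what matters vs. what's available"
    if parts_green_whole_red or people_keep_getting_blamed:
        # Blame Spiral: intervene at system design, not with more accountability
        return "Blame Spiral -> revisit system design: measure the whole, not only the parts"
    return "No cascade detected from these symptoms"

print(diagnose(measuring_whats_available=True, gaming_has_developed=True,
               parts_green_whole_red=False, people_keep_getting_blamed=False))
```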

The cascade model doesn't just diagnose the problem. It diagnoses the position. And position determines intervention.


State Change

Go back to Zillow.

With the cascade model, the $881 million loss isn't a "data failure" or an "algorithm problem" or a failure of "real-time analytics."

It's a Type 3 entry into the Lock-in Spiral.

Zillow measured what was algorithmically available (comparable sales, price trends) instead of what actually mattered (market momentum shifts, buyer sentiment changes). That's availability blindness. Targets were set on those algorithmically derived signals. The buying machine accelerated. And by the time the gap between dashboard and reality became undeniable, the commitment was locked in: thousands of homes purchased, a business unit built, an identity invested.

Available. Corrupted. Locked.

The intervention point was never better algorithms. It was better questions at the beginning: are we measuring what we can process, or what actually matters?

Every organization has dashboards. Most of those dashboards are accurate. And accuracy is exactly the problem. An accurate measurement of the wrong thing feels like truth.

It isn't.

Your dashboard isn't lying to you. It's showing you everything it can see. The question is what it was built to look at.

And whether you're already two steps into a cascade you haven't named yet.


Dashboard Blindness is a framework for diagnosing measurement failure. It identifies six types of blindness organized into two cascading patterns. For the complete diagnostic tool, intervention map, and scale matrix, see the Dashboard Blindness Framework.