Why “Average Impact” Is the Most Dangerous Metric in EdTech

Dr. Melissa Hogan
February 25, 2026

If there is one number EdTech loves more than any other, it’s the average.

The average score gain.
The average growth percentile.
The average improvement across classrooms.

It’s neat. It’s simple. It fits cleanly into a slide.

And it is one of the most dangerous metrics in education.

Because averages don’t tell districts what actually happened. They tell a story that is easy to market and dangerously incomplete.

The Comfort of the Average

Averages are appealing because they reduce complexity. They take hundreds or thousands of students, dozens of classrooms, and countless instructional variables and compress them into a single, reassuring number.

On paper, that feels efficient.
In practice, it hides the very information districts need to make responsible decisions.

When a vendor reports an “average gain,” it immediately raises questions that are rarely answered:

  • Who actually benefited?
  • Who didn’t?
  • Under what conditions did gains occur?
  • What happened to students outside the middle?

Without answers to those questions, an average isn’t insight, it’s camouflage.
And in district decision-making, camouflage creates risk.

How Averages Mask Reality

In real classrooms, learning is not evenly distributed. Students enter with different prior knowledge. Teachers implement with varying fidelity. Instructional routines mature over time.

An average flattens all of that variation into a single point, masking three critical truths.

1. Averages hide distribution

Two programs can report the same average gain and represent completely different realities.

In one case, most students experience modest, consistent growth.
In another, a small subset shows large gains while many see little improvement, or even regression.

The average can’t tell the difference.

For districts committed to equity, this isn’t a technical oversight, it’s a structural blind spot. A tool can look successful on average while leaving the students who most need support behind.
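This "same average, different realities" problem is easy to see with numbers. Below is a minimal sketch using made-up score gains (hypothetical data, purely for illustration): two programs report an identical average gain, yet one leaves several students flat or regressing.

```python
# Hypothetical per-student score gains for two programs (illustrative, not real data).
program_a = [4, 5, 5, 6, 5, 4, 6, 5]        # modest, consistent growth for everyone
program_b = [20, 18, 1, 0, -2, 1, 0, 2]     # a few big winners, many flat or regressing

def mean(xs):
    return sum(xs) / len(xs)

print(mean(program_a))  # 5.0
print(mean(program_b))  # 5.0 -- identical average, very different story

# The distributional view the average hides: how many students saw no growth?
print(sum(1 for x in program_a if x <= 0))  # 0
print(sum(1 for x in program_b if x <= 0))  # 3
```

The headline number is the same in both cases; only looking past the mean reveals which program left students behind.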

2. Averages erase implementation conditions

Averages rarely explain how impact was achieved.

  • Were gains driven by classrooms with consistent weekly usage?
  • Did outcomes depend on professional learning, curriculum alignment, or dosage thresholds?

Without this context, districts can’t replicate success or diagnose failure. The result is a familiar pattern: promising early results followed by stagnation at scale.
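Surfacing implementation conditions can be as simple as splitting outcomes by usage before averaging. The sketch below uses hypothetical classroom data and an assumed dosage threshold of three sessions per week (both invented for illustration) to show how an overall average can blend two very different usage regimes.

```python
import statistics

# Hypothetical (weekly_sessions, score_gain) pairs per classroom -- illustrative only.
classrooms = [
    (1, 0.5), (1, 1.0), (2, 1.5), (2, 2.0),   # low-usage classrooms
    (4, 6.0), (4, 7.0), (5, 8.0), (5, 9.0),   # classrooms above the assumed threshold
]

THRESHOLD = 3  # assumed dosage threshold, purely for illustration

low  = [gain for sessions, gain in classrooms if sessions < THRESHOLD]
high = [gain for sessions, gain in classrooms if sessions >= THRESHOLD]

print(f"overall average gain:     {statistics.mean(g for _, g in classrooms):.2f}")
print(f"below usage threshold:    {statistics.mean(low):.2f}")
print(f"at/above usage threshold: {statistics.mean(high):.2f}")
```

A district seeing only the overall number cannot tell that nearly all of the gain came from high-usage classrooms, which is exactly the context needed to replicate success.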

3. Averages create false confidence

Perhaps most dangerously, averages project certainty where little exists.

A single number feels definitive, even when it rests on unstable foundations: small samples, uncontrolled comparisons, or wide variance across classrooms.

In that sense, averages don’t just simplify reality. They overstate confidence in it.
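One way to see the false-confidence problem is to compare uncertainty around two identical means. The sketch below uses invented samples with the same average gain: the high-variance sample produces a far wider interval around its mean, meaning the single number deserves far less confidence.

```python
import statistics

def summarize(xs):
    """Return (mean, standard error of the mean) for a sample."""
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / len(xs) ** 0.5
    return m, se

# Two hypothetical samples of classroom gains (illustrative numbers).
tight  = [5, 4, 6, 5, 5, 6, 4, 5]             # consistent gains
spread = [25, -15, 30, -20, 18, -8, 12, -2]   # wildly varying gains, same mean

for name, xs in [("tight", tight), ("spread", spread)]:
    m, se = summarize(xs)
    # A rough 95% interval: mean plus or minus two standard errors.
    print(f"{name}: mean={m:.1f}, approx 95% CI = ({m - 2*se:.1f}, {m + 2*se:.1f})")
```

Both samples report a mean gain of 5, but the second sample's interval comfortably includes zero (and worse). Reporting only the mean erases that distinction entirely.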

The Equity Problem No One Talks About

In EdTech, equity is often discussed rhetorically and measured statistically, but rarely examined distributionally.

Average impact metrics can show “overall improvement” while masking:

  • Uneven gains across subgroups
  • Floor effects where the lowest-performing students see little change
  • Differential outcomes tied to access, usage, or instructional context

If districts cannot see who benefits and under what conditions, they cannot ensure impact is equitable, no matter how positive the headline number looks.

Equity cannot be averaged. It must be examined.

Why This Matters for District Decisions

Districts aren’t buying averages. They’re making long-term instructional commitments, often at scale, often under constraint, and often with real consequences for students and teachers.

When decisions rely on average impact alone:

  • Risk is underestimated
  • Variability is ignored
  • Instructional failure is discovered too late

The goal of impact evidence isn’t to produce the cleanest number.
It’s to reduce uncertainty, surface risk, and inform better decisions.

Averages do none of those things on their own.

What Real Impact Analysis Looks Like

Moving beyond averages doesn’t require proprietary models or inaccessible analytics. It requires a different mindset, one that treats impact as something to be understood, not summarized away.

Real impact analysis asks:

  • What does the full distribution of outcomes look like?
  • Where do gains accelerate and where do they stall?
  • What thresholds of use, fidelity, or alignment matter?
  • Which students benefit most and which require additional support?

Asking these questions is what turns impact from a headline claim into an honest understanding of how learning actually changes.
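The subgroup question in the list above can be answered with a very small amount of analysis. Here is a minimal sketch with hypothetical records (invented group labels and gains) showing how a healthy-looking overall average can decompose into sharply unequal subgroup outcomes.

```python
import statistics

# Hypothetical (student_group, score_gain) records -- illustrative only.
records = [
    ("below_grade_level", -1), ("below_grade_level", 0), ("below_grade_level", 2),
    ("on_grade_level", 4), ("on_grade_level", 5), ("on_grade_level", 6),
    ("above_grade_level", 9), ("above_grade_level", 10), ("above_grade_level", 11),
]

overall = statistics.mean(g for _, g in records)
print(f"overall average gain: {overall:.1f}")  # looks healthy on its own

# The distributional view: outcomes per subgroup.
by_group = {}
for group, gain in records:
    by_group.setdefault(group, []).append(gain)

for group, gains in by_group.items():
    print(f"{group}: mean={statistics.mean(gains):.1f}, min={min(gains)}, max={max(gains)}")
```

The overall mean suggests broad success, while the subgroup view shows the students furthest behind gained almost nothing, which is precisely the equity blind spot the average conceals.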

Raising the Bar

The EdTech market has grown comfortable with averages because they are easy to communicate and difficult to challenge. But comfort is not a standard, and simplicity is not rigor.

If we are serious about impact, we must demand more than a mean score and a positive trend line. We must demand evidence that reflects the real complexity of classrooms and the real responsibility of district leadership.

Because when everything is averaged, the students who need the most support are often the ones who disappear.

One thing you can do right now, with lasting impact:

Ask every vendor to show you how outcomes varied, not just what the average was.

If they can’t clearly explain distributions, subgroup differences, or the conditions under which gains occurred, treat the claim as marketing, not evidence.

Next up: The Instructional Signal vs. Noise Problem in EdTech Impact. Moving beyond averages reveals nuance, but nuance alone isn’t enough. Post 3 explores how to distinguish meaningful learning signals from statistical noise, and why rigor, not complexity, is what separates real impact from coincidence.

For definitions and key terms used throughout this series, see the Impact Reality Series Glossary of Terms.