What We Learned About Impact and What the Market Must Do Next

[Image: three gears labeled Equity, Discipline, Rigor]
Dr. Melissa Hogan
March 25, 2026

When we launched The Impact Reality Series, the goal was not to critique for critique’s sake. It was to surface a harder truth the EdTech market has been avoiding:

Impact isn’t broken. Our standards are.

Across five posts, we challenged assumptions that have quietly become accepted practice, assumptions that shape how districts buy, how vendors sell, and how “success” is declared. Taken together, these posts form a single argument about what real impact requires, and why the market must mature beyond comfort metrics.

Here’s what this series made clear.

1. Impact is too often mistaken for marketing

(Impact or Illusion? Why EdTech Must Stop Marketing “Impact”)

We began by naming the core problem: much of what is labeled “impact” is not evidence at all.

Positive trends, testimonials, and descriptive charts may be encouraging, but without statistical testing, comparison groups, and instructional attribution, they function as marketing assets, not decision-grade proof.

Before debating results, districts and vendors must agree on what counts as impact in the first place.

2. Averages don’t reveal the truth; they conceal it

(Why “Average Impact” Is the Most Dangerous Metric in EdTech)

Once evidence is defined, how it’s summarized matters.

Averages flatten variation, mask inequities, and hide implementation failure. Two programs can report the same average gain while producing fundamentally different outcomes for students and classrooms.

If districts care about equity, scalability, and reliability, impact cannot be reduced to a single number.
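To make the point concrete, here is a minimal sketch using hypothetical gain scores for two imaginary programs. Both report the same average gain of 5 points, but one delivers steady gains to every student while the other concentrates gains in a few students and leaves several worse off. (All numbers are invented for illustration.)

```python
# Hypothetical test-score gains for ten students in each of two programs.
# Both programs have the same mean gain, but very different distributions.
from statistics import mean, stdev

program_a = [5, 6, 5, 4, 6, 5, 5, 6, 4, 4]          # consistent gains for all
program_b = [15, 14, 13, 0, -1, 0, 12, -2, 1, -2]   # big gains for a few, losses for others

for name, gains in [("Program A", program_a), ("Program B", program_b)]:
    flat_or_negative = sum(g <= 0 for g in gains)
    print(f"{name}: mean={mean(gains):.1f}, spread={stdev(gains):.1f}, "
          f"students with no gain={flat_or_negative}")
```

Both lines of output show a mean of 5.0, yet Program B's spread is roughly nine times larger and half its students saw no gain at all. The average alone cannot distinguish them.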

3. Positive trends aren’t signal without rigor

(The Instructional Signal vs. Noise Problem in EdTech Impact)

Moving beyond averages exposes a deeper challenge: separating real instructional impact from noise.

Without baseline controls, sufficient sample sizes, duration, and attribution, correlations are easily mistaken for causation. When rigor is absent, noise fills the gap.

Impact must be detectable, attributable, and statistically meaningful, or it isn’t impact at all.
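One way to see why small-sample trends mislead is a toy permutation test. The sketch below uses invented gain scores for a tiny pilot: the treatment group outperforms the control by 1.5 points, which looks like signal, but shuffling the labels shows that a gap at least that large arises by chance roughly one time in five.

```python
# Toy permutation test on hypothetical pilot data: a positive trend in a
# small sample is not, by itself, statistically meaningful.
import random
from statistics import mean

random.seed(0)
treatment = [4, 7, 2, 6]   # four students using the tool
control = [3, 5, 1, 4]     # four students without it

observed = mean(treatment) - mean(control)  # the apparent "impact"

# Shuffle group labels many times and ask: how often does chance alone
# produce a gap at least as large as the one we observed?
pooled = treatment + control
trials = 10_000
at_least_as_large = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean(pooled[:4]) - mean(pooled[4:]) >= observed:
        at_least_as_large += 1
p_value = at_least_as_large / trials

print(f"observed gain: {observed:+.1f} points, p ≈ {p_value:.2f}")
```

With a p-value near 0.2, this "positive trend" is indistinguishable from noise, exactly the kind of result that gets declared a win when rigor is absent.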

4. Weak evidence creates long-term risk

(From ESSA Tiers to Evidence Debt)

At this point, the series shifted from analysis to consequence.

ESSA tiers are important but insufficient on their own. When evidence is treated as static or symbolic, districts unknowingly accumulate Evidence Debt: the long-term cost of decisions made on fragile or outdated proof.

Evidence is not a checkbox. It is a living obligation, one that must be revisited as implementation scales and conditions change.

5. Impact that doesn’t last was never impact

(The Impact Half-Life)

Finally, we addressed the question that determines everything else: does impact endure?

Early gains are common. Durable gains are rare.

Without systems designed to sustain fidelity, alignment, and instructional routines, results decay over time. When impact expires before renewal, districts are left chasing momentum instead of building coherence.

Durability is not an extra metric. It is the ultimate test.

The throughline: Impact is a system, not a moment

Each post examined a different failure point, but the conclusion is singular:

  • Impact cannot be proven quickly
  • It cannot be summarized by averages
  • It cannot be inferred from trends
  • It cannot be outsourced to minimum standards
  • And it cannot be declared without durability

Impact is a system. And systems demand rigor, discipline, and accountability over time.

What comes next

The future of EdTech will not be shaped by who tells the most compelling story. It will be shaped by who can consistently demonstrate:

  • Instructionally attributable outcomes
  • Equitable distributions of benefit
  • Transparent conditions for success
  • Learning gains that hold up year after year

Districts are already asking harder questions. Vendors will either rise to meet them or be exposed by them.

This series is an invitation to raise the bar together. Because when standards rise, students benefit most.

Impact earns its name only when it lasts.

One thing you can do right now, with lasting impact:

Shift your buying lens from:

“Does this work?”

to

“Under what conditions does this work, and will it still work next year?”

That single question separates durable, evidence-backed tools from those built on momentum and narrative.

For definitions and key terms used throughout this series, see the Impact Reality Series Glossary of Terms.