The Impact Half-Life: Why Most EdTech “Results” Expire Before Renewal

Dr. Melissa Hogan
March 19, 2026

Most EdTech impact looks strongest right before it fades.

Early pilots show promise. First-year dashboards trend upward. Case studies highlight encouraging gains. And then, inevitably, results flatten, variability increases, and the story becomes harder to tell.

This isn’t failure. It’s impact decay.

And until the industry is willing to name it, districts will continue mistaking temporary effects for lasting change, often right as renewal decisions are being made.

Introducing the Impact Half-Life

In science, a half-life measures how long it takes a quantity to decay to half of its original value. Impact in education behaves the same way.

Impact Half-Life refers to the length of time learning gains persist once novelty wears off, implementation scales, and instructional routines are tested by real-world constraints.

If gains disappear by year two, the impact didn’t endure. It expired.
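To make the metaphor concrete, here is a minimal sketch of how an Impact Half-Life could be estimated from two measurements, assuming gains decay exponentially. The function name and the effect sizes are hypothetical, chosen only for illustration:

```python
import math

def impact_half_life(initial_gain, later_gain, years_elapsed):
    """Estimate the half-life of a learning gain, assuming exponential decay.

    initial_gain and later_gain are effect sizes (e.g., in standard
    deviations) measured years_elapsed apart.
    """
    if later_gain >= initial_gain:
        return math.inf  # no observed decay
    decay_rate = math.log(initial_gain / later_gain) / years_elapsed
    return math.log(2) / decay_rate

# Hypothetical example: a 0.40 SD pilot effect shrinks to 0.25 SD two years later.
print(round(impact_half_life(0.40, 0.25, 2), 2))  # ≈ 2.95 years
```

Under these made-up numbers, roughly half the measured gain would be gone within three years, well inside a typical contract cycle.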

Why Early Results So Often Mislead

EdTech results frequently peak during:

  • Initial adoption
  • High-support implementation phases
  • Concentrated professional learning efforts

These conditions matter, but they are rarely permanent.

As tools scale across more classrooms:

  • Fidelity varies
  • Usage patterns fragment
  • Professional learning becomes less intensive
  • Instructional alignment weakens

Without systems explicitly designed to sustain impact, early gains erode, not because the idea was flawed, but because durability was never measured or engineered.

The Renewal Illusion

Renewals often rely on the same evidence used to justify adoption, sometimes years later.

But evidence has a shelf life.

When districts renew based on:

  • Initial pilot outcomes
  • First-year averages
  • Outdated case studies

they assume impact is still present, even though implementation conditions have changed.

This creates a dangerous lag between reality and decision-making. By the time decline becomes visible, districts are already invested and students have already paid the cost.

Why Durability Is the Real Test of Impact

True instructional impact should:

  • Persist beyond early adopters
  • Hold under varying levels of support
  • Benefit students consistently across contexts
  • Improve, not erode, as systems mature

Durability isn’t an add-on metric. It’s the difference between a promising intervention and a dependable one.

If impact can’t survive scale, it can’t justify renewal.

Measuring What Endures

Assessing Impact Half-Life requires a different orientation toward evidence.

It means asking:

  • Do gains stabilize or decay over time?
  • Which behaviors and conditions sustain outcomes?
  • Where does impact break down and why?
  • Are results replicable across cohorts and years?

These questions are harder than reporting a single outcome. But they are the questions districts must answer to protect instructional coherence and long-term student growth.

The Cost of Ignoring Impact Decay

When impact decay goes unnoticed:

  • Systems chase new tools instead of strengthening routines
  • Teachers absorb the burden of inconsistency
  • Students encounter fragmented instruction

The problem isn’t that EdTech doesn’t work. It’s that working once is not the same as working sustainably.

A New Definition of Success

The future of EdTech impact will not be defined by who shows the biggest early gains.

It will be defined by who can demonstrate:

  • Stable outcomes over time
  • Clear conditions for success
  • Learning gains that endure long after adoption excitement fades

Because impact that expires before renewal was never impact. It was momentum.

One thing you can do right now, with lasting impact:

Evaluate whether impact persists, not just whether it appeared.

Ask vendors to show multi-year or cohort-over-cohort outcomes and to explain what systems support sustained gains once early implementation support fades.

Next up: What We Learned About Impact and What the Market Must Do Next

At this point, the pattern is clear. The problem isn't a lack of data; it's how impact is defined, measured, and communicated. The final post brings our series together, outlining what a higher standard for impact looks like and what districts should demand next.

For definitions and key terms used throughout this series, see the Impact Reality Series Glossary of Terms.