Dr. Melissa Hogan
March 4, 2026

Once you stop relying on averages, a harder problem emerges.
If learning outcomes vary across classrooms, and they always do, how do we know whether an EdTech tool actually caused the gains we’re seeing? How do we distinguish real instructional impact from randomness, coincidence, or context?
This is the signal vs. noise problem.
And it sits at the center of nearly every overstated impact claim in EdTech.
In any complex system, a signal is the meaningful pattern you are trying to detect. Noise is everything else.
In education, the signal is genuine instructional impact: durable learning gains a tool actually caused. The noise is everything else: baseline differences, uneven implementation, small samples, and novelty effects.
The challenge isn’t that noise exists.
The challenge is that many EdTech impact claims fail to account for it at all.
One of the most common impact statements in EdTech sounds like this:
“Students who used the product more performed better.”
At face value, that seems intuitive. But without controls, it tells us almost nothing.
High-performing classrooms often differ in ways that have nothing to do with any single tool: stronger baselines, more consistent implementation, more professional learning support.
When those factors aren’t accounted for, apparent “impact” may simply reflect who chose to use the tool, not what the tool actually did.
Correlation is easy to find in education. Attribution is not.
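The gap between correlation and attribution is easy to see in a simulation. The sketch below is hypothetical: the numbers are illustrative assumptions, not real data. It builds a world where the tool has zero true effect, but higher-baseline classrooms are more likely to adopt it heavily, and a naive users-versus-nonusers comparison still shows a positive "impact."

```python
import random

random.seed(0)

# Hypothetical scenario: the tool has ZERO true effect, but
# higher-baseline classrooms are more likely to adopt it.
classrooms = []
for _ in range(1000):
    baseline = random.gauss(50, 10)          # prior achievement
    # Adoption is driven by the baseline, not the other way around.
    usage = 1 if random.random() < (baseline / 100) else 0
    outcome = baseline + random.gauss(0, 5)  # the tool adds nothing
    classrooms.append((usage, outcome))

users = [o for u, o in classrooms if u == 1]
nonusers = [o for u, o in classrooms if u == 0]
gap = sum(users) / len(users) - sum(nonusers) / len(nonusers)
print(f"Apparent 'impact' of the tool: {gap:+.1f} points")
```

The printed gap is several points in the tool's favor even though the tool did nothing: it reflects who used the tool, not what the tool did.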
Most EdTech studies underestimate how much noise they are working with. Common sources include:
Baseline differences. If students start at different levels, growth comparisons are meaningless without adjustment. Without baseline equivalence, you cannot tell whether outcomes reflect learning or starting position.
Implementation variability. Inconsistent usage, uneven professional learning, and partial rollout introduce variance that can overwhelm true signal. When implementation is treated as a footnote rather than a variable, results become unstable.
Sample size and statistical power. Small samples produce volatile results. A large effect in a small group may disappear entirely at scale. Without sufficient power, many “findings” are indistinguishable from statistical fluctuation.
Time effects. Learning is nonlinear. Early spikes may reflect novelty or practice effects rather than durable understanding. Without sufficient duration, short-term gains are easily mistaken for long-term impact.
Each of these introduces noise. Together, they can completely distort interpretation.
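The sample-size point in particular is easy to demonstrate. The following minimal simulation uses made-up numbers: a tool with a small true effect (+1 point) is evaluated in ten small pilots of 20 students per arm, then in one large study. The small pilots swing wildly, sometimes showing big gains and sometimes losses, while the large study lands near the truth.

```python
import random

random.seed(1)

TRUE_EFFECT = 1.0  # assumed true impact, in points

def observed_effect(n):
    """Mean treatment-control gap in a study with n students per arm."""
    treated = [random.gauss(50 + TRUE_EFFECT, 10) for _ in range(n)]
    control = [random.gauss(50, 10) for _ in range(n)]
    return sum(treated) / n - sum(control) / n

small = [observed_effect(20) for _ in range(10)]   # ten small pilots
large = observed_effect(5000)                      # one large study

print("Ten small pilots:", [f"{e:+.1f}" for e in small])
print(f"Large study: {large:+.1f} (true effect is +{TRUE_EFFECT:.1f})")
```

Any one of those small pilots, reported in isolation, could become a marketing claim in either direction; that is statistical fluctuation, not signal.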
Separating signal from noise doesn’t require perfection, but it does require discipline.
At minimum, credible impact analysis should demonstrate baseline equivalence, consistent implementation, adequate statistical power, and sufficient duration.
This is the minimum bar for separating real instructional impact from coincidence.
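Baseline equivalence is the most mechanical of these requirements, and a toy example shows why it matters. In this hypothetical sketch (illustrative numbers only), treated classrooms start 5 points ahead and the tool's true effect is zero: a raw posttest comparison mistakes the head start for impact, while comparing pre-to-post gains does not.

```python
import random

random.seed(2)

def classroom(pre_mean):
    """One classroom's (pretest, posttest) pair; everyone grows ~3 points."""
    pre = random.gauss(pre_mean, 5)
    post = pre + random.gauss(3, 2)   # growth is unrelated to the tool
    return pre, post

treated = [classroom(55) for _ in range(500)]   # higher starting point
control = [classroom(50) for _ in range(500)]

raw_gap = (sum(p for _, p in treated) / 500
           - sum(p for _, p in control) / 500)
gain_gap = (sum(post - pre for pre, post in treated) / 500
            - sum(post - pre for pre, post in control) / 500)

print(f"Raw posttest gap:      {raw_gap:+.1f}  (looks like impact)")
print(f"Baseline-adjusted gap: {gain_gap:+.1f}  (the tool's actual effect)")
```

Gain scores are the simplest adjustment; real studies typically use regression or matching, but the principle is the same: compare growth, not starting position.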
Districts are increasingly asked to evaluate competing impact claims that all sound convincing. Trend lines go up. Testimonials are strong. Dashboards are polished.
But without signal detection, districts are left guessing.
The cost of guessing is high. Clear signal detection helps districts invest with confidence, protect instructional coherence, and focus resources where they matter most.
The EdTech market has normalized weak signal detection because it’s uncomfortable to admit how much noise exists. But ignoring noise doesn’t make impact stronger. It makes it fragile.
If we want impact districts can trust, renew, and scale, we must be willing to ask harder questions of the evidence behind every claim.
Because in education, signal matters and noise is expensive.
Prioritize evidence that isolates instructional impact from mere correlation.
Favor studies and analyses that control for baseline differences, implementation variability, and time, so you’re investing in signal, not coincidence.
Next up: From ESSA Tiers to Evidence Debt: The Hidden Risk Districts Don’t See
When evidence lacks rigor, the risk isn’t just misinterpretation; it’s accumulation. Post 4 explores how weak or superficial impact claims create evidence debt, and why the long-term consequences often fall on districts, not vendors.
For definitions and key terms used throughout this series, see the Impact Reality Series Glossary of Terms.