We Still Don’t Have Enough Evidence in EdTech

The digitalization of education is accelerating at breakneck speed, prompting robust debate about the role of technology in the classroom. Whatever your views on this issue, one thing is clear: if we are to harness the immense potential of EdTech, evidence must be the guiding principle that ensures innovation actually improves learning outcomes. This year’s Global Education Monitoring (GEM) Report echoed this call, highlighting the urgent need for more robust, impartial evidence on the impact of EdTech.

The Jacobs Foundation was founded on the premise that evidence should be at the heart of education policy and practice. In 2022, our Co-CEOs, Fabio Segura and Simon Sommer, wrote a piece in EdSurge calling on EdTech to become more evidence-driven, noting that "robust scientific evidence does not presently play an integral part in how most EdTech products are designed, deployed and evaluated."[1] So what progress, if any, has been made since then?

It appears not much. According to the GEM Report, the vast majority of evidence is generated in the richest countries, meaning that we are not getting a complete picture of EdTech’s potential impact for those it could benefit the most. Even in wealthy countries, evidence is in remarkably short supply. The report finds that “A survey of teachers and administrators in 17 US states showed that only 11% requested peer-reviewed evidence prior to adoption.” In the United Kingdom, just 7% of education technology companies had conducted randomized controlled trials, and only 12% had used third-party certification.

To play our part in turning words into action, the Jacobs Foundation has been developing a framework to determine evidence levels in EdTech and to measure progress over time. In the process, we struggled to find a simple, reliable methodology for ascertaining the level of evidence behind a given product or company. Many rating systems exist, but most include an element of subjectivity or demand substantial time and capacity from resource-constrained management teams.

To get around this challenge, we sought to develop an assessment that:

  1. Used an accepted standard, ideally one already put in place by a government or regulatory body.

  2. Required no input from the company being assessed.

  3. Allowed us to understand the rigor of the evidence available on a product.

  4. Included a potential commercial benefit for companies that advanced in the assessment.

We started with the ESSA Evidence Framework, introduced in 2015 as part of the Every Student Succeeds Act (ESSA), the federal law in the United States that governs K-12 education policy. One significant aspect of ESSA is its emphasis on evidence-based practices and interventions to improve student outcomes. The framework classifies education interventions and strategies into four tiers based on the strength of the evidence supporting their effectiveness: Tier 1 (strong evidence, from a well-designed and well-implemented experimental study), Tier 2 (moderate evidence, from a quasi-experimental study), Tier 3 (promising evidence, from a correlational study with statistical controls for selection bias), and Tier 4 (demonstrates a rationale, based on a research-informed logic model).

Source: U.S. Department of Education’s Office of Educational Technology and the Department’s Non-Regulatory Guidance on ESSA evidence.
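
As a rough illustration of how a published study maps onto one of these tiers, here is a minimal sketch that walks down the hierarchy from strongest to weakest design. The tier names follow the framework, but the study attributes and decision logic are simplified assumptions, not the official criteria.

```python
from dataclasses import dataclass


@dataclass
class Study:
    """Simplified description of a published efficacy study (illustrative only)."""
    randomized: bool               # participants randomly assigned to conditions
    has_comparison_group: bool     # quasi-experimental comparison group
    controls_for_selection: bool   # correlational design with statistical controls
    has_logic_model: bool          # rationale grounded in prior research


def essa_tier(study: Study) -> int:
    """Map a study design to an ESSA evidence tier (simplified sketch).

    Tier 1: strong evidence (well-designed experimental study)
    Tier 2: moderate evidence (quasi-experimental study)
    Tier 3: promising evidence (correlational study with controls)
    Tier 4: demonstrates a rationale (research-based logic model)
    """
    if study.randomized:
        return 1
    if study.has_comparison_group:
        return 2
    if study.controls_for_selection:
        return 3
    if study.has_logic_model:
        return 4
    raise ValueError("Study does not meet any ESSA tier")


# Example: a quasi-experimental matched-comparison study lands in Tier 2.
print(essa_tier(Study(False, True, True, True)))  # -> 2
```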

Federal funding and district purchasing decisions increasingly factor in ESSA evidence levels, so companies have a vested interest in advancing their tier. We wanted to use the ESSA framework to answer two simple questions for our EdTech portfolio, which at the time comprised eight venture capital funds and 152 companies: How many of our portfolio companies have a published study for one of their products? And what is the level of rigor of that study?

We commissioned Learn Platform, a company providing evidence services to states, school districts, and EdTech companies, to review our portfolio, identify publicly available studies, and assess the rigor of those studies against the ESSA framework.

The preliminary results were not surprising. We found that only 21% of the K-12 companies[2] assessed had a published study and, of those studies, less than half (45%) were rigorous enough to qualify for an ESSA tier. Results varied across the eight VC funds, with some funds having a study for one out of every three companies and others having none at all.
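
For concreteness, here is the back-of-the-envelope arithmetic behind that baseline, using the figures quoted above. The rounded company counts are inferred from the published percentages, not exact counts from the underlying review.

```python
# Back-of-the-envelope arithmetic behind the baseline figures quoted above.
portfolio_companies = 152
k12_share = 0.35                                           # ~35% of the portfolio is active in K-12
k12_companies = round(portfolio_companies * k12_share)     # ~53 companies (the denominator in [2])

with_published_study = round(k12_companies * 0.21)         # ~11 companies with a published study
qualifying_for_tier = round(with_published_study * 0.45)   # ~5 companies with a study rigorous
                                                           # enough for an ESSA tier

print(k12_companies, with_published_study, qualifying_for_tier)
# ~53 -> ~11 -> ~5, i.e. roughly one in ten K-12 portfolio companies
# had a study that qualified for an ESSA tier.
```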

In parallel with the assessment, we offered all the companies in our portfolio match funding to procure Learn Platform’s Evidence as a Service product, which can help them design and conduct a study that would qualify them for an ESSA tier. Uptake of this match funding was significant and swift, and we see potential in expanding the offer to other providers of evidence services in the future.

We plan to repeat the ESSA-aligned assessment of our portfolio annually to track progress against our 21% baseline. In doing so, we also plan to address some methodological shortcomings:

  1. Capture the impact claims made by companies. For companies with no published study, this will allow us to differentiate between those that claim impact without evidence and those that make no claim to improve student outcomes at all (e.g., school administration software), as shown in the sketch after this list.

  2. Establish more granular ratings for early-stage companies. Given that four out of five K-12 companies in our portfolio had no published study, we will explore ways to show where these companies are on their evidence journey.

  3. Expand the scope, once major methodological issues are resolved, to companies outside our portfolio and to the sector at large.
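
One way the distinction in point 1 might be recorded in the annual assessment is sketched below, assuming a simple per-company record. The category names and fields are hypothetical illustrations, not part of any published methodology.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class EvidenceStatus(Enum):
    """Hypothetical categories for the annual portfolio assessment."""
    NO_IMPACT_CLAIM = auto()       # e.g. school administration software
    CLAIM_WITHOUT_STUDY = auto()   # claims impact but has no published study
    STUDY_BELOW_TIER = auto()      # published study, not rigorous enough for a tier
    ESSA_TIER_QUALIFIED = auto()   # published study meeting an ESSA tier


@dataclass
class CompanyRecord:
    name: str
    claims_learning_impact: bool
    has_published_study: bool
    essa_tier: Optional[int] = None   # 1-4 if a study qualifies, else None


def classify(record: CompanyRecord) -> EvidenceStatus:
    """Assign an evidence-status category from strongest signal to weakest."""
    if record.essa_tier is not None:
        return EvidenceStatus.ESSA_TIER_QUALIFIED
    if record.has_published_study:
        return EvidenceStatus.STUDY_BELOW_TIER
    if record.claims_learning_impact:
        return EvidenceStatus.CLAIM_WITHOUT_STUDY
    return EvidenceStatus.NO_IMPACT_CLAIM
```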

Investment in EdTech is soaring, with global spending expected to reach a staggering $404 billion by 2025. The digital revolution is well and truly here to stay. Ultimately, however, if EdTech is to live up to its vast potential to improve learning outcomes, it must be driven by robust evidence. Our assessment has the potential to provide transparency to both investors and purchasers of EdTech, empowering them to put their capital and resources into the interventions that demonstrate the greatest efficacy. At a broader systems level, we hope it will provide a model for evidence-driven decision-making that can help education systems adapt to a rapidly shifting landscape.


[1] https://www.edsurge.com/news/2022-06-03-edtech-should-be-more-evidence-driven

[2] About 35% of our portfolio companies are active in K-12, so we used those 53 companies as the denominator, since the ESSA methodology is less directly relevant for non-K-12 companies.