
There’s a study that should be everywhere in education conversations. It isn’t.
Published in 2025 in Scientific Reports, a peer-reviewed journal in the Nature Portfolio, a randomised controlled trial compared students using AI tutoring with students in in-class active learning. The result: AI tutoring produced an effect size of 0.73 to 1.3 standard deviations over the in-class condition.
To translate: that’s roughly the difference between a student performing at the 50th percentile and the 75th to 90th percentile. That’s not a marginal gain. That’s a revolution.
But here’s what the study also shows — and what gets lost in the rush to quote headline numbers: the AI worked because it was designed with pedagogical guidance. It wasn’t AI replacing teachers. It was AI augmenting a carefully designed teaching approach.
What the Research Actually Shows
The RCT wasn’t “add AI and see what happens.” The intervention was deliberately structured:
- AI was used to augment existing curriculum, not replace it
- Teachers were involved in designing the AI’s approach
- Students received AI tutoring alongside human instruction, not instead of it
The effect came from AI + pedagogy, not AI alone.
This is the critical insight that gets lost in the hype.
The “Throw AI At It” Problem
Every technology has this phase: someone discovers it works, so they apply it everywhere without understanding why it works.
Education is seeing this now. “Add ChatGPT to the classroom!” “AI will teach maths!” “The textbook is obsolete!”
But AI without pedagogy is just faster mediocrity.
If the underlying teaching approach is weak, AI makes it fast and weak. If the pedagogy is strong, AI makes it scalable and strong.
The Human Is Still the Variable
Here’s what I learned at Wall Street English: the teacher remains the critical variable.
We can have the best AI in the world, but if the teacher isn’t engaged, the student isn’t engaged. And if the teacher doesn’t know how to use the AI as a tool, the AI becomes a gimmick.
AI amplifies good pedagogy. It doesn’t fix bad pedagogy.
The significant gains in the RCT came because the human was still in the loop — designing, guiding, and mentoring. The AI was executing a well-designed teaching approach at scale, not replacing the thinking behind it.
What This Means
For educators considering AI:
- Start with pedagogy, add AI — Don’t ask “how do we add AI?” Ask “what’s broken in our teaching?” Then see if AI helps.
- Teacher training matters — The AI is a tool. The teacher needs to know how to use it.
- Measure outcomes, not activity — Did test scores improve? Retention? Engagement? Not “did we use AI?”
For edtech companies:
- Hire educators — Not as consultants. As product designers.
- Design with teachers, not for them — The best edtech comes from people who’ve been in classrooms.
- The teacher is the customer, not the student — Get the teacher to love your tool, and they’ll get students to use it.
The Real Opportunity
The RCT result isn’t about AI. It’s about human-AI collaboration.
When you design for that collaboration — teacher + AI, not teacher vs. AI — the results are extraordinary. The evidence is there. The question is whether education leaders are willing to do the design work first, rather than reaching for the technology as a shortcut.
That’s what actually works.
Actionable Takeaways
- Don’t throw AI at it — Start with the teaching problem, then evaluate AI as a solution
- Invest in teacher training — The tool is only as good as the hand that holds it
- Measure what matters — Outcomes over activity, always