A study published this week analysed 4.9 million interactions between 144,544 students and an AI tutor. The findings should change how every school thinks about AI literacy.
The good news: students are not cheating. 92.9% of interactions showed responsible learning intent. The moral panic about students using AI to copy and paste their way through school is not supported by this data. At least not when the AI tool is designed for learning rather than essay generation.
The bad news: almost none of them are using AI well. Fewer than 1% of students consistently produced prompts rated high-quality on the study's rubric. Less than one in a hundred.
Students know how to access AI. They do not know how to think with it.
The 10x gap
The study, published by the learning platform StudyFetch, found a performance gap that should alarm anyone responsible for student outcomes. On open-ended questions, students who engaged well with AI scored 31.9% compared to 3.3% for those who did not. Roughly a 10x difference.
"Engaged" does not mean "used AI more." It means used AI better. Asked sharper questions. Gave clearer instructions. Evaluated the output critically rather than accepting whatever came back.
The students who treated AI as a thinking partner performed ten times better than the students who treated it as an answer machine.
More years of school did not close the gap
Here is the finding that matters most. In this sample, education level made no difference. Being further along in your studies did not make you better at using AI. The gap persisted regardless of how many years a student had spent in formal education.
That tells us something important. Students are not picking up AI literacy by being in school. It has to be taught.
But it can be taught
The study suggests that AI literacy is teachable, and quickly. Among students who kept using the platform, most reached mid-level prompt quality after 10 to 15 interactions.
10 to 15 interactions. Not a semester-long course. Not a week-long unit. A handful of structured experiences where someone shows a student how to engage with AI critically rather than passively.
The question is who provides those structured experiences. Right now, in most schools, nobody does. There is no AI literacy built into the Maths curriculum. None in History. None in Science or English or PE or Art.
But the skills are there. Asking better questions is what English teachers teach every day. Evaluating evidence is what History teachers do. Testing assumptions is the foundation of Science teaching. Estimating before calculating is a core Maths skill.
Every one of those is an AI literacy skill. Teachers are already building the foundations. They just have not been shown how to connect them to AI.
What 1% means for your school
Sit with that number for a moment. Fewer than 1% of students use AI well.
Statistically, your students are in that 99%. Not because they are lazy or dishonest. Because nobody has taught them the difference between using AI and using AI well. Between accepting an output and questioning it. Between typing a question and crafting a prompt that gets a useful answer.
That is a teaching problem. And it has a teaching solution.
The solution is not a standalone AI module. It is not a one-off assembly. It is the teachers who are already in the building, already teaching the skills, making the connection between what they do every day and what AI changes about the world their students are growing up in.
A History teacher who shows students how source evaluation applies to AI-generated content. A Science teacher who gets students to test an AI hypothesis against experimental method. A Maths teacher who asks students to estimate before the AI calculates, then compare the two. An English teacher who gets students to critique AI-generated persuasive writing the same way they would critique a human text.
10 to 15 interactions. A few activities spread across a few lessons. That is all the study says it takes.
The gap is real. It is also closable. The question is whether your school starts closing it now or discovers the problem when it shows up in an inspection report or a set of exam results.
Matthew Wemyss is the founder of AILitKit and IN&ED, and author of AI in Education: An Educator's Handbook.