South Korea tracked what happened when students used generative AI tools over an extended period. The results were not what the EdTech evangelists predicted.
University students got measurably worse at basic maths and science. Not because the tools were bad. Because the tools were too good. Students stopped doing the cognitive work themselves. They offloaded the thinking to the machine and never built the mental muscle to do it without one.
South Korea's Ministry of Education calls it the "training wheels" effect. Use the support long enough and you never learn to balance. Motivation drops. Adaptability declines. The foundational skills that every higher-order skill depends on start to erode.
This is not an anti-technology argument. It is an observation. The decline happened, it was measured, and it forced one of the most digitally advanced countries on earth to restructure how it teaches.
The third-order problem
Most conversations about AI in schools stop at the second order. First order: AI exists. Second order: students should learn to use it. Almost nobody talks about the third order: students need enough foundational knowledge to judge when AI is helping and when it is doing the thinking for them.
That judgement is impossible without strong subject knowledge. A student who does not understand how to structure an argument cannot tell whether ChatGPT structured one well. A student who has not learned to evaluate a historical source cannot spot when an AI summary strips out the context that makes the source meaningful. A student who cannot do long division has no way of knowing whether an AI calculator gave a reasonable answer.
AI literacy without subject knowledge is not literacy. It is button-pressing.
What South Korea actually did
They did not ban AI. They did not restrict access. They restructured.
They mandated smaller class sizes for foundational maths and science courses. They invested $69 million in digital classroom environments and $43 million in AI tutoring and monitoring systems across 6,000 schools. They created micro-degree programmes in applied AI specifically for non-engineering students, because the gap was worst among students who had no technical framework for understanding what the tools were doing.
And they targeted the funding at regional universities, not the elite ones. The students at flagship institutions were mostly fine. It was everyone else who was falling behind. The intervention went where the problem was.
The lesson for every school
The South Korean data confirms something that experienced teachers already suspect. Giving students AI tools before they have the foundational skills to evaluate AI output does not accelerate learning. It replaces it.
Real AI literacy starts with the subject. A Science teacher who builds strong experimental method skills gives students the ability to question AI-generated hypotheses. A Maths teacher who ensures students can estimate gives them a way to catch AI calculation errors. An English teacher who teaches close reading gives students the tools to notice when AI-generated prose is hollow.
The AI literacy layer only works if the subject layer is solid underneath it.
That is why AILitKit starts with the teacher's existing lesson. Not with AI concepts. Not with prompt engineering. With the subject knowledge the teacher already delivers. The tool finds where AI literacy connects to that knowledge and gives the teacher four activities that build the connection. Coaching notes included. Support, challenge and differentiation included.
Because the training wheels have to come off eventually. And when they do, the student needs to be able to balance.
Matthew Wemyss is the founder of AILitKit and IN&ED, and author of AI in Education: An Educator's Handbook.