South Korea has some of the most tech-savvy students on the planet. High internet penetration. Government-backed digital infrastructure. AI tools everywhere.
And their university students are getting worse at basic maths and science.
Not marginally worse. Measurably, longitudinally worse. The country's own education ministry identified the cause: prolonged reliance on generative AI tools was eroding students' ability to solve problems independently. Students were offloading the thinking to the machine and never building the cognitive muscle themselves.
The Koreans have a name for what happened. They call it the "training wheels" effect. Students used AI as a support tool for so long that they never learned to balance without it. Motivation dropped. Adaptability declined. The foundational skills that every other skill is built on started to rot.
The response was not what you might expect
South Korea did not ban AI. They did not restrict access. They did not go backwards.
They restructured introductory university courses to rebuild the foundations. They mandated smaller class sizes for foundational maths and science. They invested $69 million in digital classroom infrastructure and $43 million in digital tutoring systems across 6,000 schools. And they targeted the funding specifically at regional universities, not the elite ones. The elite students were mostly fine. It was everyone else who was falling behind.
The most important part: they created "AI-applied micro-degree programmes" specifically for non-engineering students. Because the problem was not that STEM students could not use AI. The problem was that everyone else had no framework for understanding what AI was doing to their thinking.
What this means for schools
South Korea's experience tells us something that most AI-in-education conversations miss entirely. AI literacy is not just about learning to use AI. It is about learning when not to.
That is a third-order insight, and it matters enormously. First order: AI exists. Second order: you can use AI. Third order: you need enough foundational knowledge to judge when AI is helping and when it is replacing the thinking you should be doing yourself.
A student who uses ChatGPT to draft an essay has not learned to write. A student who uses an AI calculator for every equation has not learned maths. A student who asks AI to summarise a source rather than reading it has not learned to evaluate evidence.
None of this means AI is bad. It means the teaching has to come first. Students need to understand their subject before they can critically engage with what AI does to it.
This is what AI literacy actually looks like
Real AI literacy is not "here is how to prompt ChatGPT." It is a History student who can look at an AI-generated summary of a primary source and say: "This misses the context. The original was written during a famine and the tone matters. The AI flattened it."
That student needs two things to make that judgement. First, enough historical knowledge to spot what the AI got wrong. Second, enough AI literacy to understand why it got it wrong.
You cannot teach the second without the first. South Korea proved it.
This is exactly why AILitKit starts with the teacher's existing lesson, not with AI concepts. The subject knowledge is the foundation. AI literacy is the layer that sits on top. If you build the layer without the foundation, you get what South Korea got: students who can use the tools but cannot think without them.
Every guide AILitKit generates connects AI literacy to subject-specific knowledge. Not in the abstract. In the actual lesson the teacher is already planning to deliver.
Because the goal was never "teach students about AI." The goal is to teach students to think, and then show them how AI changes what thinking means.
Matthew Wemyss is the founder of AILitKit and IN&ED, and author of AI in Education: An Educator's Handbook.