Two-thirds of 13- to 18-year-olds in the UK have used generative AI. That figure comes from a 2024 National Literacy Trust survey of over 60,000 young people, and by every indication it has grown since. Nearly half use it weekly or more.
They are using it to write. To research. To answer questions. To generate images. To have conversations with chatbots. Some are using it to complete homework. Some are using it to cheat. Most are using it without any structured understanding of what it is, how it works, or what can go wrong.
The gap is not access. Students have access. The gap is understanding.
What students are doing with AI
The research paints a consistent picture. Students are using AI primarily as a shortcut. They type a question, get an answer, and move on. This is the equivalent of copying from the first Google result without reading the article. It works often enough to feel reliable. It fails in ways that are invisible to someone who does not know what to look for.
A student who asks an AI to summarise the causes of World War I gets a fluent, confident paragraph. The paragraph might be accurate. It might contain a fabrication presented with the same confidence as a verified fact. The student cannot tell the difference unless they already know the content well enough to check. And if they already know the content well enough to check, they did not need the AI.
This is the core problem. AI is most dangerous to the people who know the least about the topic they are asking about. That includes most students, most of the time.
What students are not being taught
Teacher Tapp data shows that only one in five secondary school staff say that anyone at their school teaches students how AI works. Not how to use AI tools. How AI works. There is a difference.
A student can learn to use ChatGPT in five minutes. Learning to use it well takes longer. Learning to evaluate what it produces takes longer still. And learning to decide when not to use it at all is the hardest skill of the lot.
Schools are not teaching any of these layers systematically. The Computing department may cover algorithms and machine learning concepts, but those lessons stay in Computing. The English teacher whose students are using AI to draft essays has not been given the language or the tools to address it in a subject-specific way. The History teacher whose students are using AI to generate source analysis does not have a structured activity for discussing what happens when AI fabricates a historical quotation.
AI literacy is not one skill. It is a set of thinking skills that look different in every subject. Evaluating AI-generated text in English is a different task from evaluating AI-generated data in Geography. Both require critical thinking, but the critical thinking is applied to different content with different standards.
The safeguarding dimension
There is a safeguarding angle that most schools have not addressed. Students are having extended conversations with AI chatbots. Some of these conversations are personal. Students ask chatbots for advice about relationships, mental health, family problems, and identity. The chatbot responds with confidence. It does not know the student. It cannot assess risk. It cannot refer to a safeguarding lead.
The DfE's Generative AI Safety Standards, published in January 2026, name five specific risks: cognitive offloading, anthropomorphism, manipulation, emotional dependence, and distress detection. The draft KCSIE 2026 consultation now classifies generative AI applications that simulate interaction as a contact safeguarding risk, the same category as grooming.
Students need to understand why a chatbot is not a counsellor. Not because chatbots are useless, but because a student who treats an AI as a trusted adviser is making a category error that could have real consequences. This is not a Computing lesson. It is a PSHE lesson, a tutor-time conversation, a safeguarding briefing. AI literacy is the vehicle.
What teachers already know how to do
The good news is that every teacher already has the foundational skills for this work. A teacher who asks students to evaluate the reliability of a source is already teaching critical thinking. A teacher who asks students to consider who wrote something and why is already teaching the reasoning that transfers to AI literacy.
The step from "Who wrote this source?" to "What data was this AI trained on?" is not a leap. It is a shift of context. The thinking is the same. The subject matter is different.
What teachers lack is not capability. It is structure. A ten-minute activity they can slot into a lesson they are already teaching. A script for introducing the concept. A question they can ask that opens the discussion. And a plan for what to do if the discussion goes somewhere unexpected.
This is what AILitKit generates. You give it a lesson from your scheme of work. It finds where AI literacy connects to what you are already teaching and builds the activities, the script, the questions, and the contingency plan. Not a new topic to add. A new lens on the topic you already have.
If you are interested in the frameworks behind all of this, our guide to AI literacy frameworks explains how UNESCO, OECD, PISA 2029, and others define what students should understand about AI.
The question is not whether to start
Students are using AI now. They are forming habits, assumptions, and mental models about what AI can do. Those models are being shaped by unguided experience. Every month that passes without structured AI literacy is a month in which students are learning the wrong lessons.
Starting does not require a new curriculum, a technology upgrade, or a department-wide initiative. It requires one teacher, one lesson, and one good question.
AILitKit takes a lesson from your scheme of work and builds AI literacy activities that fit your subject, your key stage, and your students. No tech required. Try a free guide at ailitkit.com.