In March 2026, the Council of Europe's Steering Committee for Education approved a draft recommendation that redefines what AI literacy means. Not as a technical skill. Not as a practical competency. As a three-dimensional framework where the human dimension comes first.
The three dimensions are technological, practical, and human. Most AI literacy frameworks start with the technology. How does machine learning work? What is a neural network? How do algorithms process data? The Council of Europe deliberately inverted that emphasis. The human dimension sits at the top.
That is not a cosmetic choice. It is a philosophical position with serious implications for every school.
What the human dimension actually means
The Council of Europe's policy brief is blunt about the problem. Most AI literacy initiatives focus on technological aspects or practical skills while "inadequately addressing human impacts." The human dimension gap means that critical considerations about AI's effects on human rights, democracy, and the rule of law are "frequently omitted or minimised" in current educational frameworks.
The human dimension covers ethics, democratic participation, critical thinking, cultural awareness, social agency, and environmental impact. It asks questions that technical frameworks skip. What does this technology do to power? Who benefits and who does not? What happens to democratic debate when AI systems shape the information people see? What are the environmental costs of running these models? How does AI affect a person's ability to think independently?
The framework moves beyond what the Council calls "qualification" (knowledge and skills) to include "subjectification" (individual development) and "socialisation" (participation in society). In plain language: knowing how AI works is not enough. Students also need to develop as individuals who can make independent judgements about AI, and as citizens who can participate in democratic decisions about how AI is governed.
Why they rejected the alternatives
The Council of Europe considered three options before landing on the three-dimensional framework. The policy brief lays them out.
Option one was to maintain the status quo and fold AI into existing digital literacy programmes. The upside: less disruption, fewer resources needed. The downside: it "risks inadequate attention to the unique challenges posed by AI systems" and "may perpetuate the technical/practical focus while continuing to neglect human dimensions."
Option two was a competence-based framework with measurable, age-specific AI skills. The upside: clear targets and standardised assessment. The downside: it "overemphasises technical skills at the expense of broader awareness" and "inadequately addresses the critical human dimension of AI literacy."
They went with option three: a comprehensive three-dimensional model that balances technological and practical skills with human rights, democracy, and the rule of law. The recommendation is expected to be formally adopted by the Committee of Ministers in June 2026, and it will guide all 46 member states in developing their own AI literacy education.
The "anthropomorphic problem"
One detail in the Council of Europe's reasoning is particularly relevant for schools. The policy brief identifies the "anthropomorphic qualities" of AI systems as a unique educational challenge. Users "frequently misperceive these systems as operating with human-like intentions or understanding."
This is the problem every teacher sees when a student says "the AI thinks..." or "ChatGPT told me..." Students attribute human qualities to machine outputs. They trust AI-generated text the way they trust a knowledgeable person. They do not naturally question the provenance, accuracy, or motivation behind what appears on screen because the output reads like something a human would write.
That misperception is exactly what the human dimension of AI literacy is designed to address. Not by teaching students how AI generates text (that is the technological dimension). Not by teaching them to use AI tools effectively (that is the practical dimension). But by building the critical awareness to recognise that AI output is not human thought, no matter how convincingly it reads.
What this means for UK schools
The Council of Europe's 46 member states include the United Kingdom. When the recommendation is formally adopted, it will serve as guidance for national curriculum development across Europe.
The three-dimensional model has a direct implication for how schools structure AI literacy. If the human dimension comes first, then AI literacy is not primarily a Computing subject. It is a whole-school concern.
The ethical questions belong in RE and PSHE. The democratic participation questions belong in Citizenship and History. The environmental questions belong in Geography and Science. The critical thinking about AI-generated content belongs in English, Media Studies, and every subject that involves evaluating sources.
The technological and practical dimensions still matter. Students do need to understand how AI works and how to use it. But those sit alongside the human dimension, not above it. The Council of Europe is explicit: the framework "centres human rights" and "places core European values of human rights, democracy and rule of law at the heart of AI education."
The Compass is coming too
Alongside the recommendation, the Council of Europe released its "Compass for AI and Education" in March 2026. The Compass is a practical implementation tool designed to "turn Council of Europe standards into tangible action." It covers AI literacy, regulation and governance, teaching and assessment with AI, and evaluation of educational technologies.
The Council also established a new Committee of Experts on Artificial Intelligence and Education for 2026-2027, tasked with preparing "toolboxes and draft legal instruments on the use of artificial intelligence in education."
This is not a single report that will sit on a shelf. It is an institutional programme with legal instruments, practical tools, expert committees, and a recommendation heading for formal adoption. The machinery is moving.
Where AILitKit fits
AILitKit was built on the same principle the Council of Europe arrived at: the human dimension comes first.
Every guide starts with the teacher's existing lesson, in their subject, with their students. The AI literacy connections are drawn from what the teacher already knows and already teaches. A History teacher connecting source evaluation to algorithmic bias. A Maths teacher connecting probability to AI decision-making. A PE teacher connecting fitness tracking to data ethics.
Those are human dimension activities. They build critical awareness, ethical reasoning, and the ability to question AI systems. They do not require the teacher to understand neural networks. They require the teacher to understand their own subject and to see where AI changes the questions students need to ask.
Four activities per guide. Coaching notes so any teacher can deliver them. Support, challenge and differentiation built in. Alignment to 11 frameworks, including the Council of Europe's own.
Because the Council of Europe is right. The human part comes first. And in a school, the human part starts with the teacher.
Matthew Wemyss is the founder of AILitKit and IN&ED, and author of AI in Education: An Educator's Handbook.