Here is a number that should worry anyone investing in AI training. Job postings requiring AI skills have risen 247% since 2023. The supply of workers with verified AI competencies has grown only 63%. Demand is growing nearly four times faster than supply.
And here is the problem underneath that problem. There is no widely accepted way to compare AI qualifications. Nobody can tell you which certifications, courses, or training programmes actually prepare someone to use AI well at work. Not the employers paying for the training. Not the workers spending their time and money on it. Not the governments writing the policies.
A new initiative called CRAFT is trying to change that.
What CRAFT is
CRAFT stands for Credentialing Responsible AI for Future-Ready Talent. It was launched by the AI 2030 Institute, a global non-profit with over 6,000 members across more than 60 countries.
The idea is simple. There are hundreds of AI qualifications available right now, from universities, tech companies, online platforms and professional bodies. They vary enormously in length, cost, depth and quality. CRAFT is the first national benchmark designed to identify which of those qualifications actually prepare people to use AI responsibly in the workplace.
It works by surveying both the organisations that create AI qualifications and the employers that hire people with them. It then cross-references that data with real-time labour market information to see whether what providers teach matches what employers actually need.
The first benchmark report is due later this year.
Why this matters for schools
You might be reading this thinking: this is a workforce issue, not a schools issue. And you would be wrong.
The CRAFT benchmark exists because the AI skills market grew faster than anyone could quality-check it. Certifications appeared. Workers enrolled. Employers hired based on credentials that nobody had validated. Workers risk investing in programmes that do not translate into hiring, advancement, or responsible AI decision-making on the job.
Schools are at the start of that same pipeline. The students in your classroom today will be choosing AI qualifications in five or ten years. The question is whether they arrive at that decision with enough foundational AI literacy to tell a good qualification from a bad one.
A student who has spent years learning to critically evaluate AI outputs, to understand how AI systems make decisions, to question bias and reliability: that student will choose better. They will spot the course that teaches real skills versus the one that teaches you to click buttons on a platform. They will know the difference between understanding AI and being certified in a specific product.
That critical evaluation starts in school. In every subject. Not in a standalone AI module that gets squeezed into one term of Year 9.
The 56% wage premium
Research from PwC found that workers with advanced AI skills now earn 56% more than people in the same roles without those skills. That is up from 25% the year before. The premium is growing.
At the same time, employers expect 39% of workers' core skills to change by 2030. Not 39% of tech workers. All workers. AI and data skills top the list of the fastest-growing skill areas. But human skills like creative thinking, resilience and leadership remain critical.
That last point matters. The most valuable workers in 2030 will not be the ones who can use AI. Everyone will be able to use AI. The most valuable workers will be the ones who can think critically about what AI gives them, challenge it, improve on it and make decisions that AI cannot make.
Those are the skills schools build. Or should be building.
78% of leaders want to hire for AI. Only 25% of employees have been trained.
Microsoft and LinkedIn's 2025 Work Trend Index found that 78% of leaders are looking to hire for AI roles, while only 25% of employees have received formal AI training.
That gap is the market AILitKit operates in. Not the corporate training gap, but the one upstream of it. The school-level gap. If 75% of today's workforce has not been trained in AI, those people went through 12 or 13 years of school and left without the foundations.
The CRAFT benchmark is trying to sort the qualification market after the fact. That is necessary and important work. But the longer-term fix is to build AI literacy into school education so that every student leaves with the critical thinking skills, the subject knowledge, and the AI literacy foundations that make them capable of choosing the right qualifications, asking the right questions, and using AI responsibly in whatever career they enter.
The responsible AI connection
CRAFT focuses on "responsible AI" specifically. The AI 2030 Institute advances a six-pillar framework covering security and safety, privacy, accountability, transparency, fairness and sustainability.
Every one of those pillars connects to subjects schools already teach. Privacy connects to Citizenship, PSHE and Computing. Accountability connects to History and Ethics. Fairness connects to Maths (how do you measure fairness?), English (who gets to speak?), and Geography (who benefits and who does not?). Sustainability connects to Science and Geography.
Teachers do not need to learn a six-pillar responsible AI framework. They need someone to show them that the concepts they already teach are the building blocks of responsible AI thinking. That is what AILitKit does. Upload your lesson. See the connections. Get four activities with coaching notes, plus support, challenge and adaptations for every learner. It takes about a minute.
The CRAFT benchmark will tell employers which AI qualifications are worth trusting. AILitKit helps schools build the foundations that make any future AI qualification more useful.
One works at the end of the pipeline. The other works at the start.
ailitkit.com
Matthew Wemyss is the founder of AILitKit and IN&ED, and author of AI in Education: An Educator's Handbook.