Most schools do not have an AI policy. Of those that do, many have a single paragraph in the acceptable use policy that says something like "students should not use AI to complete assessed work" and leaves it at that.
This is not a policy. It is a rule. A policy explains what you are trying to achieve, how you plan to achieve it, what the boundaries are, and who is responsible. A rule just says no.
The good news is that you do not need to write one from scratch. Your school already has policies that address most of what an AI policy needs to cover: safeguarding, data protection, acceptable use, academic integrity, SEND. An AI policy brings these together under one coherent framework and adds the AI-specific elements that are currently missing.
What an AI policy needs to cover
An AI policy for a school addresses two distinct things, and conflating them is the most common mistake schools make.
The first is AI use by staff and students. This is the operational side. Which AI tools are approved? Who can use them? What data can be entered? What requires line manager approval? What requires parental consent? This is the part most schools think of when they hear "AI policy."
The second is AI literacy as curriculum content. This is the educational side. What are you teaching students about AI? Where in the curriculum does it sit? How are you tracking coverage? What frameworks are you aligning to? This part is less common in school policies because it is newer territory.
A complete AI policy covers both. The operational side protects the school. The educational side prepares the students. Neither is optional.
Building on what you already have
Your acceptable use policy already covers technology use by staff and students. It probably addresses internet safety, device management, social media, and data sharing. AI tools are a category of technology. Many of the same principles apply. The AI-specific additions are about the nature of AI outputs (they can be wrong, biased, or fabricated), the data processing implications (AI tools may process personal data externally), and the assessment integrity dimension (AI-generated work and academic honesty).
Your safeguarding policy already covers online safety. In the UK, KCSIE 2025 addresses online risks across the four categories of content, contact, conduct, and commerce. The 2026 consultation draft adds AI deepfakes as a form of child-on-child abuse and classifies generative AI as a contact risk. Your safeguarding policy needs an AI-specific section that references these developments and names the specific risks the DfE GenAI Safety Standards identify: cognitive offloading, anthropomorphism, manipulation, emotional dependence, and distress detection. For UAE schools, the same principle applies, drawing on Wadeema's Law, the National Child Protection Policy in Educational Institutions, and the new Child Digital Safety Law.
Your academic integrity policy already covers plagiarism. AI-generated work is a form of plagiarism when it is submitted without acknowledgment. JCQ, Cambridge International, the IB, and the College Board have all published guidance on AI in assessed work. Your academic integrity policy should reference your exam board's guidance and make the rules clear: AI can be used as a research tool with proper acknowledgment, but work submitted for assessment must be the student's own.
Your data protection policy already covers personal data processing. If your school uses AI tools that process student data, those tools need to be assessed for data protection compliance. In the UK, this means UK GDPR. In the EU, the GDPR and the EU AI Act. In the UAE, the PDPL (Federal Decree-Law No. 45 of 2021).
An AI policy does not replace any of these. It cross-references them and adds the connective tissue.
A framework for structure
Here is a structure that works. Not a template to copy, but a framework to adapt.
Section 1: Purpose and scope. One paragraph. What this policy is for, who it applies to (staff, students, governors, visitors, contractors), and how it relates to the school's other policies. Name the policies it connects to.
Section 2: Principles. Three to five statements that guide the school's approach to AI. These should be specific enough to be useful. "We will use AI responsibly" is too vague. "Staff and students will evaluate AI outputs before acting on them" is specific. "AI will be used to support learning, not to replace the thinking that learning requires" draws a clear line.
Section 3: Approved tools and access. Which AI tools are approved for use? Who approved them? Are they accessed through school-managed accounts? What data protection checks have been completed? This section should be a living list that is updated as tools change. Name the current tools. State who has access (all staff, specific roles, students with teacher supervision, students independently).
Section 4: Staff use of AI. What staff can use AI for (lesson planning, resource creation, administrative tasks, report writing). What they cannot use AI for (final assessment decisions without human review, generating safeguarding reports, anything that bypasses professional judgement on student welfare). What they must disclose (if school policy requires disclosure of AI use in reports or communications).
Section 5: Student use of AI. What students can use AI for in the classroom (research with acknowledgment, idea generation, structured activities under teacher supervision). What students cannot do (submit AI-generated work as their own, share personal data with AI tools, use AI during supervised assessments). Reference your exam board's specific guidance.
Section 6: AI literacy in the curriculum. What the school is doing to teach students about AI. Which frameworks you are aligning to. Where AI literacy sits in the curriculum (embedded across subjects, dedicated sessions, both). How you are tracking coverage. This is where AILitKit's output feeds in directly: the framework alignment table, the coverage heatmap, and the governor report provide the evidence that this section of the policy is being implemented.
Section 7: Safeguarding. The AI-specific safeguarding risks. Reference KCSIE (UK), Wadeema's Law (UAE), or your national equivalent. Reference the DfE GenAI Safety Standards or their equivalent. State the reporting procedure for AI-related safeguarding concerns. Name the designated safeguarding lead (DSL) or equivalent child protection specialist.
Section 8: Data protection. How AI tools process student data. What due diligence has been completed. Where data is stored and processed. Parental consent requirements for student-facing AI tools.
Section 9: Review. How often the policy is reviewed (annually at minimum, given how fast AI evolves). Who reviews it. How changes are communicated to staff, students, and parents.
What Ofsted and inspectors look for
Inspectors are not yet asking for AI policies by name. But they are asking about safeguarding, curriculum breadth, and how schools prepare students for the modern world. A school that can show a coherent AI policy alongside evidence of curriculum implementation is ahead of the curve.
For Dubai schools, KHDA inspections are increasingly examining how schools address AI in their provision. The KHDA/MIT RAISE programme creates an expectation that schools will be able to demonstrate AI literacy integration. For Abu Dhabi schools, ADEK's Curriculum Policy v1.2 requires AI literacy provision by 2026-27, and inspections will look for evidence.
The governor report that AILitKit generates provides exactly the kind of evidence an inspector would find useful: how many guides have been generated, which subjects and key stages they span, which framework domains they cover, what students actually experienced, and what the school plans to do next.
The single most important thing your policy should say
Every AI policy needs one sentence that draws the line between using AI and understanding AI. Something like:
"This school teaches AI literacy so that students can evaluate, question, and make informed decisions about AI. Teaching students about AI is not the same as encouraging students to use AI for their work. Activities designed to build AI literacy are distinct from assessed coursework and are governed by different expectations."
Without this sentence, teachers will hesitate. They will worry that running an AI literacy activity might be seen as endorsing AI use in assessed work. The policy must give them permission to teach about AI without the fear that they are opening a door they cannot close.
Starting now
You do not need to write the perfect policy on the first attempt. Write a draft. Share it with your SLT. Get feedback. Revise. The policy will evolve as your school's understanding of AI evolves. The mistake is waiting for perfection. The right time to have an AI policy was last year. The second-best time is this term.
If you want the evidence base to support Section 6 (AI literacy in the curriculum), start with one AILitKit guide. It gives you activities, framework alignment, and safeguarding notes. Generate a few more across different subjects, and the Whole Curriculum guide gives you the school-wide picture, including evidence statements for your self-evaluation form and a governor briefing.
The policy gives you the framework. The guides give you the evidence. Between them, you have something an inspector, a governor, or a parent can look at and understand.
AILitKit generates the evidence your AI policy needs: framework-aligned activities, safeguarding notes, and governor reports showing curriculum coverage. Start with one lesson. Try it free at ailitkit.com