Safeguarding & Responsible Use

Last updated: March 2026

Teacher-led delivery

All AILitKit suggestions are designed for teacher-led delivery. Students should not interact directly with the AILitKit platform. Teachers are responsible for adapting suggestions to their classroom context.

Age-appropriate content

Guides are tailored to the selected age group and stage. Activities are designed to be age-appropriate. Teachers should review all suggestions before use, particularly for sensitive topics such as AI bias, privacy, and online safety.

Acceptable use policies

AI literacy activities should be delivered in line with your school's acceptable use policy (AUP) and online safety procedures. Where activities involve internet use, normal safeguarding protocols apply.

Inclusion and differentiation

Guides include optional differentiation notes. These are suggestions, not prescriptions: every learner is different. Adaptations should be reviewed by the class teacher and, where appropriate, the inclusion coordinator.

Data protection

Guide generation uses Google Gemini; the layered pre-generation safeguarding classifier uses Meta Llama Guard 4 (12B) for the first pass and Google Gemini for the curriculum-aware reviewer. All three model calls are routed through OpenRouter under its “Always enforce zero data retention” policy: requests go only to providers that contractually do not retain, log, or train on the input. Keyword embeddings are processed by Scaleway in the EU. Uploaded documents are deleted automatically once a guide has been generated; only the filename and upload date are retained, in line with data minimisation. Do not upload documents containing student names, personal data, or safeguarding records. See our Privacy Policy for full details.
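For teams auditing this setup, the zero-data-retention routing can be sketched as a request payload. This is a minimal illustration, assuming OpenRouter's publicly documented chat-completions endpoint and `provider.data_collection` routing preference; the model slugs and helper names are placeholders, not AILitKit's actual configuration:

```python
# Illustrative sketch only: the endpoint, model slugs, and the
# "data_collection" routing field reflect OpenRouter's documented
# provider-routing options, not AILitKit's internal code.

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_zdr_request(model: str, prompt: str) -> dict:
    """Build a chat-completions payload restricted to providers that
    do not retain, log, or train on the input."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Routing preference: exclude any provider that collects prompt data.
        "provider": {"data_collection": "deny"},
    }

# Two-pass safeguarding check before guide generation (slugs are examples):
first_pass = build_zdr_request("meta-llama/llama-guard-4-12b", "<teacher request>")
reviewer = build_zdr_request("google/gemini-2.5-flash", "<first-pass output>")
```

The same preference is attached to every call, so if no zero-retention provider is available for a model, the request fails rather than falling back to a provider that logs inputs.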

Regional alignment

AILitKit is used by teachers across multiple regions. Below is how the platform aligns with the safeguarding and responsible AI frameworks relevant to each.

UK: KCSIE 2026 alignment

AILitKit is aligned with Keeping Children Safe in Education 2026, which introduces AI-specific safeguarding requirements for schools:

  • AI-generated imagery: KCSIE 2026 explicitly includes deepfakes and AI-generated imagery in the definition of child-on-child abuse (Part 1, para 34). AILitKit activities address deepfakes as a risk in age-appropriate ways.
  • Preventive education: Schools must address online harms including deepfakes as part of preventive education. AILitKit supports this through discussion and debate activities that build critical evaluation skills.
  • Cybersecurity as safeguarding: Compromised child data is now a safeguarding concern, not just an IT issue. AILitKit never requires student personal data and all approved tools are vetted for data protection.
  • Annual filtering and monitoring review: Governing bodies must review filtering and monitoring effectiveness at least annually (Part 2, para 166). AILitKit's Whole Curriculum guides can serve as supporting evidence for these reviews.

For full statutory guidance, see the DfE's Keeping Children Safe in Education 2026 and Generative Artificial Intelligence in Education (2025).

EU: AI Act and responsible AI

AILitKit supports EU schools in meeting the transparency and literacy requirements of the EU AI Act:

  • AI literacy (Article 4): The EU AI Act requires that providers and deployers of AI systems ensure sufficient AI literacy among staff. AILitKit directly supports this by helping teachers build AI literacy into their practice.
  • Transparency (Article 50): All AILitKit guides are clearly labelled as AI-generated. Teachers are advised to review all content before classroom use.
  • Risk classification: AILitKit is a limited-risk AI system. It generates educational planning suggestions, not high-stakes decisions about individuals.

AILitKit also aligns with DigComp 2.2, the EU's digital competence framework, which now includes AI-specific competencies.

US: Responsible AI in schools

For US schools, AILitKit supports responsible AI use alongside existing safeguarding standards:

  • ISTE Standards: AILitKit activities align with the International Society for Technology in Education (ISTE) standards, including the Digital Citizen and Computational Thinker competencies.
  • AI4K12 initiative: Guides can be mapped to the AI4K12 “Five Big Ideas in AI” framework (Perception, Representation, Learning, Natural Interaction, Societal Impact).
  • District policies: Activities should be delivered in accordance with your district's acceptable use policy and any state-specific AI guidance. Many states are introducing AI literacy requirements, and AILitKit helps teachers meet these proactively.
  • No student data: AILitKit collects no student data, supporting compliance with FERPA and COPPA. See our US Privacy & Compliance page.

UAE: MoE AI Framework alignment

AILitKit supports UAE schools in meeting the Ministry of Education AI literacy mandates effective from the 2025-26 academic year:

  • MoE AI Curriculum: The UAE mandates AI literacy integration across all K-12 subjects. AILitKit generates guides mapped to the MoE's seven AI domains, helping teachers embed AI literacy into their existing practice.
  • KHDA AI Literacy (Dubai): Dubai private schools under KHDA jurisdiction can use AILitKit guides aligned to the KHDA AI Literacy Framework, launched February 2026.
  • Teacher-led approach: Activities are designed for teacher-led delivery, in line with MoE guidance that AI tools in schools should be supervised by qualified educators.
  • Data protection: AILitKit's data practices align with UAE PDPL requirements. See our UAE Data Protection page.

Contact

For safeguarding or responsible use questions, contact hello@ailitkit.com.