Keeping Students Safe: The New Rules of AI and Safeguarding
AI brings powerful new tools... and new responsibilities.
This post is part of a series exploring how schools can integrate AI meaningfully, ethically and strategically. It offers insights and strategies for educators across all curricula and contexts, from Dubai to Dublin, Delhi to Durban and everywhere in between.
Subscribers get exclusive access to safeguarding audit templates, staff training slides, and example AI policies to support whole-school implementation.
Why Safeguarding Must Evolve with AI
AI is reshaping how students interact with information, each other, and the digital world. But with new power comes new risk, and our safeguarding strategies must evolve quickly to keep up. Schools must now think beyond device safety and content filtering. We are safeguarding in an era where machines can mimic student voices, hallucinate fake content, and predict behaviour. This requires rethinking what it means to keep children safe, supported and secure.
Traditional approaches focused on internet access, stranger danger, and harmful content. AI adds new complexity. Students may unknowingly interact with bots that impersonate peers, accept false AI-generated facts, or even use AI tools to bypass assessments. Behind every AI tool is a dataset and often an opaque algorithm we cannot fully explain.
Safeguarding now means protecting students from:
Hallucinations - AI confidently delivering false information that can mislead or confuse.
Predictive profiling - Tools suggesting content or activities based on assumptions that reinforce bias.
Consent and data misuse - AI systems that capture student interactions, voices, and behaviour data, often without full transparency.
Deepfakes and impersonation - The use of AI-generated media to manipulate, mislead or harm.
Algorithmic bias - Systems trained on unrepresentative data, leading to unfair outcomes or exclusions.
Transparency and explainability must be prioritised. Students, families and staff have a right to understand how AI works, especially when it influences learning, feedback, or behaviour management. Global frameworks such as the EU AI Act, UNESCO principles, and WCAG accessibility standards offer guidance that can be adapted for schools.
It is equally important to reinforce that AI should never be used as a diagnostic or classification tool for SEND or mental health needs: it lacks the nuance, context, and ethical oversight to make such decisions.
Safeguarding must also acknowledge that AI use doesn’t end at the school gate. Students access AI chatbots, image generators and video tools at home, often with little adult oversight. Schools must equip learners with the skills to navigate this safely, building critical awareness and ethical digital habits.
The Most Pressing Safeguarding Questions
As you review your school’s current safeguarding policies and training, consider:
Are staff trained to spot AI misuse, including generated content, fake student submissions, or deepfake images?
Do we have protocols if a student uses an AI chatbot inappropriately, e.g. for harmful queries?
Are third-party AI tools used in class or homework compliant with data protection laws?
Is AI discussed in our student digital citizenship programme?
How do we support students who might rely on AI tools to mask learning needs or avoid support?
Have we made it clear that AI cannot replace human safeguarding judgement?
Are staff trained on how to escalate concerns around AI use to governors or regulatory bodies?
Have we integrated discussion of AI transparency and explainability into staff and student training?
In Practice: Classroom and Whole-School Approaches
Here are ways schools are already adapting safeguarding for the AI age:
Primary: Teachers run PSHE sessions exploring AI trust, bias and safety using roleplay scenarios.
Secondary: Students analyse AI-generated news stories for bias and accuracy as part of media literacy.
Safeguarding teams: DSLs include AI risk scenarios in safeguarding logs and training materials.
CPD: Whole-staff briefings now include a segment on “AI and Child Protection” with real-world examples.
Home-school partnership: Parent workshops include discussion on the risks of AI misuse outside school, especially anonymous tools and deepfake technology.
Whole-school culture: Student digital leaders help co-design school guidance on ethical AI use, with regular review points.
Staff training: AI safeguarding CPD includes case studies, interactive risk assessments and guidance on maintaining professional judgement.
Next Steps for Leaders
Urgent Actions
Review Policies – Update your safeguarding and acceptable use policies to reflect AI-specific risks.
Run CPD – Deliver staff training that covers spotting AI misuse, assessing tool risk, and supporting student understanding.
Audit Tools – Ensure all AI tools in use are GDPR/FERPA-compliant and meet transparency standards (a simple audit register sketch follows below).
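
For schools that keep a digital register of approved tools, the audit step can start as a structured checklist. Below is a minimal sketch in Python, assuming hypothetical field names such as data_processing_agreement and transparency_statement; the criteria are illustrative only and are not a legal compliance check, so any real audit should be led by your DPO and safeguarding team.

```python
# Minimal sketch of an AI tool audit register (illustrative criteria only).
# Field names are hypothetical; adapt them to your school's own audit questions.
from dataclasses import dataclass


@dataclass
class AIToolRecord:
    name: str
    provider: str
    stores_student_data: bool          # Does the tool retain student inputs or voice data?
    data_processing_agreement: bool    # Is a DPA / privacy notice in place with the provider?
    age_rating_met: bool               # Do your students meet the provider's minimum age terms?
    transparency_statement: bool       # Does the provider explain how outputs are generated?


def flag_for_review(tools: list[AIToolRecord]) -> list[AIToolRecord]:
    """Return tools that need DSL/DPO review before classroom or homework use."""
    return [
        t for t in tools
        if (t.stores_student_data and not t.data_processing_agreement)
        or not t.age_rating_met
        or not t.transparency_statement
    ]


if __name__ == "__main__":
    register = [
        AIToolRecord("ExampleChatbot", "ExampleCo", True, False, True, False),
        AIToolRecord("ExampleMathsTutor", "ExampleEd", False, True, True, True),
    ]
    for tool in flag_for_review(register):
        print(f"Review needed: {tool.name} ({tool.provider})")
```

Even if no one in school writes code, the same fields work just as well as columns in a spreadsheet; the point is that the audit criteria are agreed, recorded and reviewed rather than held informally.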
Longer-Term Strategies
Escalation Protocols – Agree when concerns about AI misuse should be escalated to governors or regulatory authorities.
Involve Students – Engage student councils in shaping safe AI practices and raising peer awareness.
Ongoing Review – Schedule regular reviews of how AI is impacting safeguarding and data practices.
Family Engagement – Provide clear, accessible guidance to parents about the risks and responsibilities of AI at home.
Policy Development – Include safeguarding updates in wider AI strategy documents and implementation plans.
Resourcing – Allocate funding for secure, ethical and accessible tools and ensure appropriate time is given to staff CPD.
Use these prompts to update your safeguarding policy this term.
Useful Links
1. SWGfL – AI and Online Safety
🔗 https://swgfl.org.uk/topics/artificial-intelligence/
A leading digital safety hub for UK schools. This page covers AI risks, digital citizenship advice and guidance for safeguarding professionals.
2. Childnet – School AI Programmes and Digital Safety
🔗 https://www.childnet.com/what-we-do/our-work-in-schools/
Offers practical tools, workshops and policy advice to support schools with AI, data protection, and student voice in digital safety programmes.
Reflective Questions
Is your safeguarding team prepared for the ethical and technical risks of AI tools?
How will you ensure student data is protected under local regulations?
What messages are students receiving about AI safety from school and home?
Who in your school community is responsible for ongoing review of AI safeguarding?
AI in Education Blog Series – Full List
This 4-week series explores how schools can embed AI meaningfully, ethically and strategically across curriculum, CPD, leadership and inclusion. New posts are published four times a week throughout June and July 2025.
Week 1: Orientation – Understanding the Shift
1. Why AI in Schools Is a Pedagogical Shift, Not a Tech Trend
2. How to Talk to Students About AI (Even When You’re Not an Expert)
3. Bridging the Gap: What Parents and Teachers Need to Understand About AI
4. How Ready Is Your School for AI? A Leadership Reflection
Week 2: Teaching, Equity & Ethics
5. Planning with AI Without Losing Professional Judgement
6. Can We Really Teach Ethics in AI? Yes, Here’s How
7. What Inclusive AI Use Looks Like in EAL and SEND Contexts
8. Keeping Students Safe: The New Rules of AI and Safeguarding (you are here)
Week 3: Teaching Across Subjects
9. Reimagining Reading and Writing: AI in English and Arabic
10. AI in Math and Science: From Calculation to Simulation
11. What Happens to Critical Thinking When AI Can Summarise?
12. Creativity and Authenticity in the Age of AI
Week 4: Strategy, Assessment and Future Readiness
13. What Every School Needs Before Saying “We Use AI”
14. Why CPD on AI Should Start with Questions, Not Tools
15. What Does “AI Literacy” Really Mean, and How Do We Know Students Are Gaining It?
16. From Pilot to Policy: Embedding AI in the School Development Plan