Keeping Safe Online
As children enter their early teens, they transition from supervised browsing to independent digital exploration. This period is a critical developmental window where children gain technical skills but often lack the emotional maturity to navigate complex social situations or identify sophisticated scams.
Recent data shows that nearly 77% of pre-teens have experienced some form of online harm, ranging from exposure to violent content to unwanted contact. At this age, the drive for social connection makes them particularly vulnerable to peer pressure and the desire to bypass age restrictions on platforms like TikTok or Snapchat.
The risks are generally categorized into four “Cs”:
- Content: Exposure to age-inappropriate material, including violence, pornography, or content promoting self-harm and unrealistic body standards.
- Contact: The threat of “grooming” or communication from strangers posing as peers, often within online gaming communities or messaging apps.
- Conduct: Engaging in or falling victim to cyberbullying, or damaging their own “digital footprint” by sharing permanent, private information.
- Contract: Falling for in-game “dark patterns,” such as hidden subscriptions, unauthorised purchases, or data-harvesting scams.
AI Tools
Artificial Intelligence (AI) tools, such as chatbots and content generators, are rapidly becoming part of our children’s online world. While AI offers exciting benefits, such as quick access to information and new ways to be creative, the NSPCC has highlighted crucial safety risks we must address. These include exposure to age-inappropriate or misleading information, and the serious danger of AI being misused to create realistic, harmful content, such as non-consensual fake images intended to bully or abuse. Here are six essential tips from the NSPCC to help your child use AI safely.
- Talk About AI: Have open conversations with your child about the AI tools they use, discussing the risks and benefits they experience.
- Verify Reality: Remind them that not everything online is real. Teach them to look for signs that content may be AI-generated, such as an overly “perfect” appearance or unnatural movements.
- Address Misuse: Emphasise that creating content to harm others is wrong. Ensure they know that concerns about harmful content can be reported to a trusted adult, CEOP (law enforcement), or Report Remove (for sexual images/videos).
- Check the Sources: Encourage them to always verify the sources listed in AI summaries or chatbot responses, especially if the answer feels important or sensitive. Use trusted websites if sources are missing or unreliable.
- Direct to Safe Advice: For health and wellbeing questions, steer them towards reliable, child-friendly sources like Childline, rather than relying solely on AI responses.
- Know Where to Find Help: Ensure your child knows they can talk to you, a teacher, or another safe adult if anything worries them online. Childline also provides free, confidential support 24/7 on 0800 11 11.
The Prevent Duty
In our digital age, the “Prevent duty” is a core part of how schools keep children safe. Just as we protect them from physical harm, we must also guard them against disinformation and extremist narratives that can circulate in online spaces—even in games and apps popular with our students.
How Disinformation Spreads
At this age, children are naturally curious and often look for “simple” answers to complex world events. Extremist groups (of all ideologies) often use conspiracy theories or “fake news” as a gateway. They may use memes, gaming lobbies, or short-form videos to spread an “Us vs. Them” mentality, often blaming specific groups for societal problems.
Early Warning Signs
Radicalisation is a form of grooming. You know your child best; look for subtle shifts in their outlook, such as:
- “In-Group” Language: Using new, derogatory terms for certain groups of people or repeating “scripts” that sound like they come from an adult influencer rather than a child.
- Black-and-White Thinking: Becoming suddenly intolerant of other viewpoints or refusing to engage with friends who are “different.”
- Secretive Browsing: Becoming highly defensive about their screen or quickly switching tabs when you approach.
- Sudden Fixations: A new, obsessive interest in conspiracy theories or “alternative” versions of history.
What You Can Do
The best defence is Digital Resilience.
- Ask the “Why”: If they share a shocking “fact,” ask: “Who made this video?” or “What do they want you to feel?”
- Encourage Empathy: Broaden their horizons with books and media that show diverse perspectives.
- The “Open Door” Policy: Remind them that they can talk to you about anything they see online, even if it’s confusing or scary.
Supporting Healthy Body Image
As our children navigate the transition from childhood to their early teens, their bodies undergo significant changes. This is also the age when many begin exploring social media. While these platforms offer connection, they can also expose young people to unrealistic “ideals” that trigger anxiety about their appearance.
The Impact of the “Digital Filter”
At this developmental stage, children are naturally seeking identity and peer acceptance. Social media algorithms can unfortunately push content that promotes restrictive dieting or obsessive fitness trends. This constant comparison can lead to Body Dysmorphic Disorder (BDD)—where a child becomes fixated on perceived physical flaws—or the early stages of disordered eating.
What to Look For
Early signs in “tweens” can be subtle. Keep an eye out for:
- Sudden Dietary Changes: Categorising foods as “good” vs. “bad” or suddenly refusing previously enjoyed meals.
- Body Checking: Frequently asking for reassurance about their weight or spending long periods checking their reflection.
- Clothing Shifts: A sudden preference for very baggy clothes to hide their frame, even in warm weather.
- Social Comparison: Frequent comments like, “I wish I looked like [Influencer].”
Starting the Conversation
The goal is to build “Body Neutrality.” Help your child understand that their body is an instrument for life (playing sports, hugging, creating), not just something to be looked at. Encourage them to “curate” their feed by unfollowing accounts that make them feel inadequate.
Tip: If you notice a change in your child’s relationship with food or their reflection, trust your gut. Early support is the most effective way to protect their long-term mental health.
PEGI Ratings
We are often concerned to hear about the games that our students are playing and would like to remind you about the importance of the PEGI ratings system. With gaming now a central part of many children’s social lives, understanding PEGI (Pan-European Game Information) ratings is a vital part of home safeguarding. Unlike school subjects, a PEGI rating isn’t about how difficult a game is to play; it is a measure of the suitability of the content for a specific age group.
What do the numbers mean?
PEGI uses five age categories: 3, 7, 12, 16, and 18.
- PEGI 3 & 7: PEGI 3 is suitable for all ages, while PEGI 7 may contain mildly “frightening” scenes or sounds and very mild cartoon violence.
- PEGI 12 & 16: These ratings are legally enforceable in the UK for retailers. They indicate more graphic violence, mild to strong swearing, or sexual innuendo.
- PEGI 18: Intended for adults only, often featuring gross violence, motiveless killing, or explicit content.
Look for the Descriptors
Beyond the number, look at the back of the box or the app store description for Content Descriptors. These icons explain why a game received its rating, such as:
- Violence (Realistic or fantasy)
- Bad Language
- Fear (Horror or scary content)
- In-game Purchases (Includes “Loot Boxes”)
The “Hidden” Risks
It is important to remember that PEGI rates the gameplay itself, not the behaviour of other people online. A game rated PEGI 7 (like Minecraft) or PEGI 12 (like Fortnite) may still expose your child to inappropriate language or contact from strangers through live chat functions.
Top Tip: Always check the “Communication” or “Multiplayer” settings on any new game and ensure your child knows to speak to you if a stranger asks them for personal information.
Social Media Apps
As parents and carers, we are all aware of the risks that can come with children using apps such as WhatsApp and Snapchat, which are widely known for their instant messaging features. However, many families are less aware that other apps — often thought of as “safe” or used for fun — also contain built-in messaging services.
Platforms such as TikTok, Pinterest, and Roblox may seem harmless at first glance, but each allows users to send and receive direct messages. In games like Roblox, this can include chat functions with strangers, while TikTok and Pinterest allow private conversations alongside public posts. This means that children could be contacted by people outside their friendship group without parents realising.
We encourage families to regularly check privacy settings, friend lists, and messaging options across all apps your child uses — not just the obvious ones. Many of these features can be switched off or limited, and children are often reassured when parents set clear boundaries together.
It’s also helpful to have open conversations with your child about who they are talking to online and why. Encourage them to come to you if they ever receive a message that makes them feel uncomfortable. Together we can help ensure that social media and gaming remain safe, positive spaces for our children to explore, create, and connect.
Fake News, Misinformation and Disinformation
In line with Keeping Children Safe in Education (KCSIE) 2025, our students take part in lessons on fake news, misinformation and disinformation. The purpose of these lessons is to help them recognise the difference between reliable and unreliable information online, and to understand why this matters.
The lessons explain that fake news refers to stories or reports that are deliberately untrue, often created to influence opinions or gain attention. Misinformation is when false information is shared by someone who believes it to be correct, while disinformation is when false information is shared on purpose to mislead or manipulate others.
Students explore how such information spreads quickly online, the impact it can have on individuals and communities, and how to question what they see. They are also shown practical tools for checking facts, cross-referencing sources, and identifying bias. By developing these skills, we are helping students to think critically, protect themselves from harmful or misleading content, and grow into responsible digital citizens.
If you would like further guidance on how to support your child in this area, please visit the UK Safer Internet Centre website.


