Best Practice

AI, deepfakes and safeguarding: Ten ways to keep children safe

As feared, we are now seeing the horrifying ways that AI is being used to create images and deepfake videos of children and child abuse. Elizabeth Rose looks at the implications for safeguarding work in schools

The use of artificial intelligence is a rapidly expanding and developing field, and the potential risks that AI poses to children are an area of increasing concern.

In October 2023, the Internet Watch Foundation (IWF) published a report on the use of AI to generate child sexual abuse material and the wide-ranging harms associated with this, which was then reviewed and updated in mid-2024, tracking some of the rapid advancements in the technology and level of use.

The reports demonstrate the horrifying ways that AI is being used to create images – and now deepfake videos – of child sexual abuse, as well as recommending ways that government and tech companies can respond to this issue.

However, there is currently little in the way of guidance for schools to support understanding of the issue from a safeguarding perspective, to provide strategies or ideas for the prevention of harm to children, or indeed ways to approach the issue in the curriculum.


The scale of the problem

It is illegal to possess, take or distribute “indecent pseudo-photographs” of children and illegal to possess a “prohibited image of a child”, both of which cover AI images of child sexual abuse.

However, the IWF has found growing evidence that AI is being used to generate thousands of images of abuse and that perpetrators are sharing them on both dark and clear web forums.

The nature of the images is also becoming more severe and extreme over time.

The IWF report details findings from monitoring one dark web forum in which abuse images are shared. Analysts found that in a single month (October 2023), 20,254 AI-generated images were posted and, of these, 2,978 were found to be criminal. Many of these images depicted “Category A” abuse – the most severe category – of children, including very young children.

It is also currently not illegal to create and distribute guides on how to generate abuse material using AI.


The risks to children

It is important that schools and those working in safeguarding understand the emerging threats and risks to children as a result of advancements in AI. Using AI to generate child sexual abuse material has wide-ranging, devastating impacts because:

  • Images of children who have appeared in child sexual abuse material (known victims) are being used to create new abuse material, revictimising them over and over again.
  • Analysts are spending time identifying whether images are AI-generated or feature “real” children, meaning that the rescue of victims is delayed.
  • Photos from websites, including photographs of famous pre-teen children, are being used to create images and videos of abuse.
  • It provides opportunities for perpetrators to groom, coerce and blackmail children using AI-generated images.
  • Adult offenders may share indecent AI images of children with a child in order to coerce or elicit real images from that child.

Understanding how technology is being used to harm children is crucial in considering how to protect children and equip them with the knowledge that they need to stay safe.


How can schools respond to this issue?

Currently, the guidance for schools on the use of AI focuses mainly on the use of AI in managing workload and generating resources. There is scant mention of online safety and little in the way of concrete advice.

The non-statutory guidance document Sharing nudes and semi-nudes: Advice for education settings working with children and young people was updated in March 2024 to include greater reference to this issue and to “deepfakes”, with some overarching advice about responding in cases where these kinds of images have been shared.

Despite the limited guidance, there are things that schools can do to begin to respond to this issue and protect children. Here are 10 ideas:

  1. Safeguarding teams should familiarise themselves with the issue of AI-generated abuse material by reading the IWF reports and understanding the emerging picture of risk.
  2. Staff should be trained to understand the risks of AI and deepfakes and refer any concerns to the designated safeguarding lead (DSL) as they usually would.
  3. Consideration should be given to the online safety curriculum and how to teach children to stay safe when using artificial intelligence themselves.
  4. The curriculum should equip children with the wider knowledge and understanding that they need to stay safe online – including key messages stressing that they should only interact with people they know and use safe and suitable websites – and consideration should be given to how to support them in developing digital resilience.
  5. Parents should be informed regularly of risks to children online, supported to ensure that suitable safeguards are applied to home broadband networks, and encouraged to check children’s devices regularly. Parents should also be reminded not to share images of children publicly online.
  6. Staff should also be reminded not to share images of themselves publicly online (for example, by keeping social media profiles private).
  7. Filtering and monitoring systems in school should be reviewed to ensure that they prevent children from accessing harmful content online.
  8. A robust response involving the necessary safeguarding partners should be put in place in the event that children generate or distribute deepfake images or videos of peers.
  9. DSLs should be familiar with tools that support the removal of abusive or indecent images, such as the Report Remove tool (see further information). Specialist support should be sought where children have experienced online sexual abuse, and schools should follow local procedures for referring any incidents of harm or abuse to social care and the police.
  10. Children should be reminded regularly of the reporting mechanisms and support available in school to respond to any safeguarding issue, including issues online.


Final thoughts

As the risks become clearer and the use of AI in all its forms becomes more mainstream, it is likely that schools will require specific guidance to support them in responding to issues that emerge.

However, as detailed above, there are already things that schools can do using existing mechanisms to begin to respond to this threat and to develop knowledge and expertise around the risks.

Getting ahead of emerging issues and thinking about prevention is essential in keeping children safe from all forms of harm, including this new and concerning area.


Further information & resources