UK Technology Firms and Child Protection Agencies to Test AI's Ability to Create Exploitation Images

Tech firms and child safety agencies will be granted permission to evaluate whether AI systems can generate child abuse material under new UK laws.

Significant Increase in AI-Generated Harmful Content

The declaration coincided with revelations from a protection watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

New Legal Structure

Under the changes, the authorities will allow designated AI developers and child safety organizations to inspect AI systems – the foundational technology for conversational AI and visual AI tools – and ensure they have sufficient safeguards to prevent them from producing depictions of child exploitation.

The changes are "ultimately about preventing abuse before it occurs," declared the minister for AI and online safety, adding: "Experts, under strict conditions, can now identify the danger in AI systems early."

Tackling Legal Challenges

The amendments have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties could not generate such content even as part of an evaluation process. Previously, authorities could not act until AI-generated CSAM had already been uploaded online.

This legislation is designed to avert that issue by helping to stop the production of those images at source.

Legal Structure

The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also establishing a ban on possessing, producing or distributing AI systems developed to generate exploitative content.

Real-World Impact

This week, the minister visited the London headquarters of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I hear about children facing blackmail online, it causes extreme anger in me and rightful anger amongst families," he stated.

Alarming Data

A prominent internet monitoring foundation reported that instances of AI-generated exploitation material – recorded as online pages, each of which may contain multiple images – had increased significantly so far this year.

Instances of the most severe content – the gravest form of abuse – increased from 2,621 visual files to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a crucial step to guarantee AI products are safe before they are launched," commented the head of the internet monitoring foundation.

"AI tools have made it possible for victims to be victimised repeatedly with just a few clicks, giving offenders the capability to make potentially endless amounts of sophisticated, photorealistic child sexual abuse material," she added. "Content which additionally commodifies survivors' suffering, and renders young people, especially female children, less safe both online and offline."

Counseling Session Data

The children's helpline also published information of support interactions where AI has been referenced. AI-related harms mentioned in the sessions include:

  • Employing AI to rate weight, physique and appearance
  • Chatbots dissuading children from consulting safe adults about abuse
  • Facing harassment online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, Childline conducted 367 counselling sessions where AI, chatbots and associated terms were mentioned, four times as many as in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions were connected with mental health and wellness, including utilizing AI assistants for assistance and AI therapy apps.

Jonathan Newton