UK Tech Firms and Child Safety Officials to Examine AI's Capability to Create Exploitation Content

Technology companies and child safety agencies will be granted authority to assess whether artificial intelligence systems can generate child abuse material under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with findings from a protection watchdog showing that reports of AI-generated CSAM have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, authorities will allow designated AI companies and child protection organizations to inspect AI systems – the foundational models behind conversational and image-generation tools – and verify that they have sufficient safeguards to prevent them from creating depictions of child exploitation.

"This is fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Under strict conditions, experts can now identify dangers in AI systems promptly."

Tackling Legal Challenges

The changes have been implemented because it is illegal to produce and possess CSAM, meaning that AI creators and other parties cannot generate such content as part of a testing process. Until now, authorities had to wait until AI-generated CSAM was published online before addressing it.

This law is designed to avert that problem by helping to halt the production of such material at source.

Legal Framework

The amendments are being introduced to the criminal justice legislation, which also implements a prohibition on possessing, producing or distributing AI models developed to create exploitative content.

Practical Impact

This week, the official toured the London base of a children's helpline and listened to a simulated call in which advisors heard an account of AI-based abuse. The interaction portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of themselves.

"When I hear about children experiencing blackmail online, it causes intense anger in me, and justified anger among families," he stated.

Alarming Data

A leading online safety organization stated that cases of AI-generated abuse material – such as online pages that may contain multiple files – had more than doubled so far this year.

Cases of category A material – the most serious form of abuse – rose from 2,621 visual files to 3,086.

  • Female children were overwhelmingly victimized, accounting for 94% of prohibited AI depictions in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The law change could "constitute a vital step to ensure AI tools are safe before they are released," stated the chief executive of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be targeted repeatedly with just a few clicks, giving offenders the ability to make possibly endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Material which further commodifies survivors' suffering, and makes children, particularly girls, more vulnerable both online and offline."

Support Session Data

Childline also published data from counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:

  • Using AI to evaluate weight, body and appearance
  • AI assistants dissuading children from talking to trusted adults about harm
  • Facing harassment online with AI-generated material
  • Digital extortion using AI-faked images

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and associated topics were discussed, four times as many as in the same period last year.

Fifty percent of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using chatbots for support and AI therapeutic applications.

Sarah Taylor