UK Technology Firms and Child Safety Agencies to Test AI's Capability to Generate Exploitation Content
Tech firms and child safety agencies will be granted permission to evaluate whether AI systems can produce child abuse material under new UK laws.
Substantial Rise in AI-Generated Harmful Material
The announcement coincided with findings from a safety monitoring body showing that cases of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, the authorities will allow designated AI developers and child protection organizations to inspect AI systems, meaning the underlying models behind conversational AI and image generators, and check that they have sufficient protective measures to stop them from creating images of child exploitation.
"Ultimately about preventing exploitation before it happens," stated Kanishka Narayan, noting: "Specialists, under rigorous protocols, can now identify the risk in AI models early."
Addressing Legal Challenges
The amendments address the fact that it is illegal to produce and possess CSAM, meaning that AI developers and others could not create such images as part of a testing regime. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.
The law aims to avert that problem by helping to stop the creation of such images at source.
Legal Structure
The amendments are being introduced by the government as revisions to the crime and policing bill, which also introduces a ban on owning, creating or sharing AI models developed to create exploitative content.
Real-World Impact
Recently, the minister visited the London base of a children's helpline, where he heard a mock-up call to counsellors involving an account of AI-based abuse. The call depicted an adolescent seeking help after facing extortion using an explicit AI-generated image of themselves.
"When I hear about children facing blackmail online, it is a source of extreme frustration in me and rightful concern amongst families," he said.
Alarming Statistics
A prominent internet monitoring organization stated that cases of AI-generated abuse material, each of which can be a webpage containing numerous files, had significantly increased so far this year.
Instances of the most severe material, the most serious form of abuse, rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
- Depictions of children aged two and under increased from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a crucial step to guarantee AI products are safe before they are released," stated the chief executive of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a simple actions, giving offenders the ability to make possibly limitless amounts of advanced, photorealistic exploitative content," she continued. "Material which further commodifies victims' trauma, and renders children, especially girls, more vulnerable on and off line."
Counseling Interaction Data
The children's helpline also published details of support interactions where AI was mentioned. AI-related risks raised in the conversations included:
- Using AI to rate body size and appearance
- Chatbots discouraging children from talking to trusted guardians about harm
- Facing online harassment involving AI-generated material
- Online blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling interactions in which AI, chatbots and associated terms were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy apps.