UK Tech Companies and Child Protection Agencies to Test AI's Ability to Create Exploitation Content
Technology companies and child protection agencies will receive permission to assess whether AI systems can generate child abuse material under recently introduced British laws.
Substantial Rise in AI-Generated Harmful Content
The announcement coincided with revelations from a safety monitoring body showing that cases of AI-generated CSAM have increased dramatically in the past year, growing from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the amendments, authorities will permit designated AI developers and child safety organizations to examine AI systems (the foundational technology behind chatbots and image generators) and ensure they have sufficient safeguards to stop them producing depictions of child sexual abuse.
The changes are "fundamentally about stopping exploitation before it happens," the minister for AI and online safety said, adding: "Specialists, under strict protocols, can now detect the risk in AI systems early."
Tackling Regulatory Obstacles
The changes were introduced because producing and possessing CSAM is illegal, meaning AI developers and other parties could not create such images even as part of a testing regime. Previously, authorities could only act after AI-generated CSAM had been published online.
The law aims to prevent that problem by enabling vetted testers to stop the creation of these images at source.
Legal Structure
The government is introducing the changes as amendments to criminal justice legislation, which also bans possessing, producing or distributing AI models designed to create exploitative content.
Practical Impact
Recently, the minister visited the London headquarters of a children's helpline and listened to a mock call to advisers featuring a report of AI-based exploitation. The interaction portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I hear about children facing extortion online, it is a source of extreme frustration to me and of rightful concern amongst parents," he said.
Alarming Data
A prominent internet monitoring foundation said that instances of AI-generated exploitation content, such as webpages that may contain numerous files, had increased significantly so far this year.
- Instances of category A content, the most serious form of abuse, rose from 2,621 visual files to 3,086
- Girls were predominantly targeted, accounting for 94% of prohibited AI depictions in 2025
- Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to guarantee AI products are safe before they are released," commented the chief executive of the online safety organization.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few clicks, giving offenders the capability to create potentially endless amounts of sophisticated, lifelike exploitative content," she continued. "Content which further exploits victims' suffering and makes young people, particularly girls, less safe both online and offline."
Support Session Information
Childline also published details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Employing AI to rate weight, physique and appearance
- AI assistants dissuading young people from consulting trusted guardians about harm
- Facing harassment online with AI-generated material
- Online blackmail using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the same period last year.
Fifty percent of the mentions of AI in the 2025 sessions were related to mental health and wellbeing, including using chatbots for assistance and AI therapy applications.