British Tech Firms and Child Safety Officials to Test AI's Capability to Generate Exploitation Images
Tech firms and child safety organizations will be granted permission to evaluate whether artificial intelligence systems can produce child abuse images under new UK laws.
Substantial Rise in AI-Generated Illegal Content
The announcement came as findings from a protection monitoring body showed that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will permit designated AI companies and child safety organizations to examine AI systems – the foundational technology for chatbots and visual AI tools – to ensure they have adequate protective measures to prevent them from creating images of child exploitation.
The move is "fundamentally about preventing exploitation before it occurs," declared the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the risk in AI systems promptly."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to create and own CSAM, meaning that AI developers and other parties cannot create such content as part of a testing regime. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to prevent that problem by helping to stop the production of those images at source.
Legal Structure
The changes are being added by the government as revisions to the Crime and Policing Bill, which is also establishing a prohibition on owning, creating or sharing AI systems designed to generate child sexual abuse material.
Real-World Impact
This week, the official visited the London base of Childline and listened to a mock-up conversation with advisors featuring an account of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I hear about children experiencing extortion online, it is a cause of intense anger in me and justified anger amongst families," he stated.
Alarming Statistics
A leading online safety organization reported that cases of AI-generated exploitation material – each case can refer to an online page containing multiple files – had more than doubled so far this year.
Instances of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, accounting for 94% of prohibited AI images in 2025
- Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to ensure AI products are safe before they are released," commented the head of the internet monitoring foundation.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, giving criminals the ability to make potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Material which additionally exploits victims' suffering, and renders children, especially female children, less safe online and offline."
Counseling Session Information
Childline also released details of support interactions where AI has been referenced. AI-related risks mentioned in the sessions include:
- Using AI to evaluate weight, body shape and appearance
- AI assistants discouraging children from talking to trusted guardians about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-manipulated pictures
Between April and September this year, Childline conducted 367 support interactions where AI, conversational AI and related terms were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to psychological wellbeing, including using chatbots for support and AI therapeutic applications.