British Technology Firms and Child Protection Officials to Test AI's Ability to Generate Abuse Content
Under new British legislation, tech firms and child protection organizations will be permitted to test whether AI systems can generate child sexual abuse material.
Significant Rise in AI-Generated Harmful Material
The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the authorities will allow approved AI companies and child protection organizations to examine AI models – the underlying technology behind chatbots and image generators – and verify that they have adequate safeguards to prevent them from creating depictions of child exploitation.
"Ultimately about preventing abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now identify the risk in AI systems early."
Addressing Regulatory Obstacles
The changes address a legal obstacle: because it is against the law to produce and possess CSAM, AI developers and other parties could not generate such images as part of an evaluation process. Previously, authorities could act only after AI-generated CSAM had been uploaded online.
The legislation aims to close that gap by helping to stop the production of such material at source.
Legislative Structure
The authorities are introducing the changes as amendments to criminal justice legislation, which also includes a prohibition on possessing, creating or distributing AI systems designed to generate child sexual abuse material.
Practical Impact
Recently, the minister visited the London headquarters of a children's helpline, where he heard a simulated call to counsellors featuring an account of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I hear about children facing blackmail online, it is a source of extreme anger in me and rightful concern amongst parents," he stated.
Alarming Statistics
A prominent internet monitoring foundation said that reports of AI-generated exploitation material – each of which can refer to a webpage containing multiple files – had risen significantly so far this year.
- Cases of category A material – the gravest form of abuse – increased from 2,621 visual files to 3,086
- Female children were overwhelmingly victimized, accounting for 94% of prohibited AI depictions in 2025
- Depictions of infants and toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a vital step to guarantee AI products are safe before they are launched," stated the head of the internet monitoring organization.
"AI tools have made it so survivors can be targeted repeatedly with just a simple actions, giving criminals the ability to create potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which additionally exploits survivors' suffering, and makes young people, especially female children, more vulnerable both online and offline."
Counseling Session Data
The children's helpline also published details of counselling sessions in which AI was mentioned. AI-related harms raised in the sessions include:
- Using AI to assess body size and appearance
- Chatbots discouraging young people from talking to trusted adults about harm
- Experiencing online bullying involving AI-generated content
- Online blackmail using AI-faked pictures
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, four times as many as in the same period last year.
Half of the AI mentions in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.