Under recently introduced UK legislation, technology companies and child protection organizations will be permitted to test whether artificial intelligence tools can generate child abuse material.
The announcement coincided with figures from a child protection watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Under the changes, the government will permit designated AI companies and child protection groups to scrutinise AI models – the underlying technology behind chatbots and image generators – and verify that they have adequate safeguards against producing depictions of child exploitation.
"Fundamentally about stopping abuse before it happens," declared the minister for AI and online safety, adding: "Experts, under rigorous protocols, can now detect the danger in AI systems early."
The amendments are needed because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such content even as part of a testing process. Until now, officials have had to wait until AI-generated CSAM was uploaded online before addressing it.
The legislation is designed to avert that problem by making it possible to halt the creation of such images at source.
The government is introducing the amendments as changes to the crime and policing bill, which also brings in a ban on possessing, producing or distributing AI systems designed to generate exploitative content.
This week, the official visited the London base of a children's helpline and listened in on a mock-up call to advisers involving a report of AI-based abuse. The call portrayed an adolescent seeking help after being blackmailed with a sexually explicit AI-generated image of themselves.
"When I learn about children experiencing blackmail online, it is a source of extreme anger in me and justified concern amongst families," he said.
A leading online safety foundation said that reports of AI-generated abuse content – each of which may refer to a webpage containing multiple files – had risen significantly so far this year.
Instances of the most severe category of content rose from 2,621 image and video files to 3,086.
The legislative amendment could "constitute a vital step to guarantee AI tools are secure before they are released", said the head of the online safety foundation.
"Artificial intelligence systems have enabled so survivors can be victimised all over again with just a simple actions, providing offenders the ability to make possibly limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Material which additionally commodifies survivors' trauma, and renders young people, particularly female children, less safe both online and offline."
Childline also released details of counselling sessions in which AI was mentioned.
Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related topics were discussed, four times as many as in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy apps.