“The harm from deepfake abuse is real and urgent,” the UN agency said in a statement. “Children cannot wait for the law to catch up.”
At least 1.2 million youngsters have disclosed that their images were manipulated into sexually explicit deepfakes in the past year, according to a study across 11 countries conducted by the UN agency, the international police agency INTERPOL, and ECPAT, the global network working to end the sexual exploitation of children worldwide.
In some countries, this represents one in 25 children, or the equivalent of one child in a typical classroom, the study found.
‘Nudification’ tools
Deepfakes – images, videos, or audio generated or manipulated with AI and designed to look real – are increasingly being used to produce sexualised content involving children, including through so-called “nudification”, where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualised images.
“When a child’s image or identity is used, that child is directly victimised. Even without an identifiable victim, AI-generated child sexual abuse material normalises the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help,” UNICEF said.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
Demand for robust safeguards
The UN agency said it strongly welcomed the efforts of those AI developers who are implementing “safety-by-design” approaches and robust guardrails to prevent misuse of their systems.
However, the response so far is patchy, and too many AI models are not being developed with adequate safeguards.
The risks can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly.
“Children themselves are deeply aware of this risk,” UNICEF said, adding that in some of the countries studied, up to two thirds of youngsters reported worrying that AI could be used to create fake sexual images or videos.
“Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention and protection measures.”
A fast-growing threat
To address this fast-growing threat, the UN agency issued Guidance on AI and Children 3.0 in December, with recommendations for policies and systems that uphold child rights.
UNICEF is calling for immediate action to confront the escalating threat:
- Governments need to expand definitions of child sexual abuse material to include AI-generated content and criminalise its creation, procurement, possession and distribution
- AI developers should implement safety-by-design approaches and robust guardrails to prevent misuse of AI models
- Digital companies should prevent the circulation of AI-generated child sexual abuse material, not merely remove it, and strengthen content moderation with investment in detection technologies