A joint investigation by WIRED and Indicator has identified reported deepfake sexual abuse incidents at approximately 90 schools worldwide, affecting more than 600 students. Since 2023, students in at least 28 countries have been caught using generative AI to create sexualised images of their classmates.
Ordinary images shared on Instagram and Snapchat, the report found, are increasingly being used as raw material for "nudify" apps: AI-powered tools fuelling a global surge in non-consensual deepfake abuse.
Many incidents never reach the press, and schools often handle them quietly; research suggests the problem may already be far larger. A UNICEF survey estimates that around 1.2 million children had sexual deepfakes created of them last year.
What Are Nudify Apps?
Nudify apps use generative AI to fabricate nude images and videos convincing enough to pass as real, and the results spread rapidly through group chats and messaging apps. Within hours, manipulated images can circulate across entire schools.
For the girls targeted, the impact runs far beyond embarrassment. Many of those quoted in the report describe feeling violated and constantly anxious about who might have seen the images or where they could surface next.
The report highlights the visceral fear of those targeted, with one student in Iowa sharing a haunting realisation: “I’m worried that every time they see me, they see those photos.” The trauma extends to the home, with families reporting that victims stop eating and suffer prolonged bouts of crying. “She’s been crying. She hasn’t been eating,” one family member said.
Legal Consequences: Is AI-Generated Content Considered CSAM?
The images carry serious legal implications as well. Because they depict minors in explicit situations, they are widely treated as child sexual abuse material. Shane Vogt, a lawyer representing a victim in New Jersey alongside Yale Law School students Catharine Strong, Tony Sjodin, and Suzanne Castillo, noted in the report that his client “feels hopeless because she knows that these images will likely make it onto the internet and reach pedophiles.”
Vogt emphasised that “she is severely distressed by the knowledge that these images are out there, and she will have to monitor the internet for the rest of her life to keep them from spreading.” For these victims, "nudification" is not a digital prank; it is a permanent violation of their privacy and dignity.
Who Is Creating These Images?
Patterns emerge across nearly every case: the images are usually created by teenage boys and shared among classmates. The motivations vary; some involve curiosity or dares, others revenge or humiliation.
“The goal is not always sexual gratification,” Siddharth Pillai of the RATI Foundation told WIRED. “Increasingly, the intent is humiliation, denigration, and social control.”
From Manual Editing to Instant AI
Technology has changed the scale of the problem. Creating manipulated imagery once required technical skill and time. Today, dozens of apps and bots can generate convincing results in seconds. “What AI changes is scale, speed, and accessibility,” Pillai explained.
Schools are still trying to catch up. In some places, officials have been criticised for responding slowly or failing to treat incidents as serious abuse. Victims and their families often end up pushing for accountability themselves.
Meanwhile, some schools have begun adjusting how they share student images. Institutions in South Korea and Australia have reduced the use of student photos on public platforms or changed how they appear in yearbooks to prevent misuse. Support for victims, however, remains uneven.
Lloyd Richardson of the Canadian Centre for Child Protection described the urgency clearly. “I think you’d be hard-pressed to find a school that has not been affected by this,” he said. “The most important thing is how we’re able to help the victims when this happens, because the effects of this can be massive.”