Schools are increasingly struggling with a new and disturbing problem as students use artificial intelligence to turn ordinary photos of their classmates into sexually explicit deepfake images and videos. The spread of these manipulated visuals has caused serious harm to victims, often leading to trauma, fear and social isolation.
The issue drew national attention this fall after AI-generated nude images circulated widely at a middle school in Louisiana. Two boys were eventually charged in the case, but the situation escalated before authorities stepped in: one of the girls whose likeness was allegedly used was expelled after she got into a fight with a boy she accused of making the fake pictures of her and her friends.
Lafourche Parish Sheriff Craig Webre said the incident showed how dangerous and accessible this technology has become. “While the ability to alter images has been available for decades, the rise of AI has made it easier for anyone to alter or create such images with little to no training or experience,” he said, adding that parents must urgently talk to their children about the issue.
Across the United States, more states are now passing laws to deal with deepfakes. The Louisiana case is believed to be the first prosecution under the state’s new law, according to Republican state senator Patrick Connick, who authored the legislation. In 2025, at least half the states enacted laws targeting the misuse of generative AI to create realistic but fake images, including those linked to child sexual abuse material.
There have already been prosecutions of students in states like Florida and Pennsylvania, while schools in California have expelled students for similar acts. In one shocking case in Texas, a fifth-grade teacher was charged with using AI to create child sexual abuse images of his own students.
Experts say the technology behind deepfakes has become dangerously simple. Sergio Alexander, a research associate at Texas Christian University, said that what once required technical skill can now be done easily through mobile apps or social media tools. “Now, you can do it on an app, you can download it on social media, and you don’t have to have any technical expertise whatsoever,” he said.
The scale of the problem is growing fast. The National Center for Missing and Exploited Children said reports of AI-generated child sexual abuse images jumped from about 4,700 in 2023 to 440,000 in just the first six months of 2025.
Despite this rise, experts believe schools are not doing enough. Sameer Hinduja, co-director of the Cyberbullying Research Center, said schools need to update their policies and clearly explain the consequences of creating or sharing deepfakes. He warned that students may feel emboldened if they believe adults are not paying attention, and said clear policies help ensure that “students don’t think that the staff, the educators are completely oblivious.”
Hinduja also said many parents wrongly assume schools are already handling the issue. “So many of them are just so unaware and so ignorant,” he said, comparing the response to an “ostrich syndrome” where people avoid facing an uncomfortable reality.
The emotional impact on victims can be severe. Alexander said AI deepfakes are different from traditional bullying because fake images and videos can spread quickly and resurface again and again. Many victims suffer from anxiety and depression. “They literally shut down because it makes it feel like there’s no way they can even prove that this is not real because it does look 100 percent real,” he said.
Experts are urging parents to start open conversations with their children. Alexander suggested beginning casually by talking about harmless fake videos online and then asking children how they would feel if they were placed in such a video. He said most children will admit they know someone who has been affected.
Laura Tierney, founder and chief executive of The Social Institute, said children must feel safe telling their parents about deepfakes without fear of punishment. She explained that many children stay silent because they worry their phones will be taken away. She promotes the SHIELD approach: stopping the spread, involving a trusted adult, informing the platforms, collecting evidence without downloading the content, limiting social media use and directing victims to help.
Tierney said the number of steps involved shows how complex the issue has become, and why parents, schools and authorities need to respond with urgency and care.