Threat Spotlight

Deep-Fake Harassment: How Synthetic Media Is Weaponized Against Teens

What every parent, educator, and counselor needs to know about AI-generated "nudification," the school cases making headlines, and the step-by-step response that can protect your child.

By the TeenAegis Team · 14 min read

If your child is in immediate danger

Call 911 immediately. If your child is experiencing suicidal thoughts, contact the 988 Suicide & Crisis Lifeline by calling or texting 988. You can also text "HELLO" to 741741 to reach a trained Crisis Text Line counselor.

On February 4, 2026, UNICEF issued a stark warning: "Deepfake abuse is abuse." The statement accompanied the findings of a landmark study conducted across 11 countries with INTERPOL and the ECPAT global network. The headline figure: at least 1.2 million children disclosed having their images manipulated into sexually explicit deepfakes in the past year. In some countries, that translates to one child in every typical classroom.

This is not a distant, theoretical risk. Across the United States, schools are contending with a surge of incidents in which students use freely available AI tools to transform ordinary photos of classmates into fabricated nude imagery. The RAND Corporation found that one in five secondary schools reported deepfake bullying incidents during the 2023–2025 school years. The Associated Press reported that AI-generated child sexual abuse material flagged to the National Center for Missing & Exploited Children's CyberTipline soared from 4,700 reports in 2023 to 440,000 in just the first six months of 2025.

This article is designed to give parents, educators, and counselors a thorough understanding of how deepfake harassment works, the real-world cases reshaping school policy, the psychological toll on victims, the rapidly evolving legal landscape, and — most importantly — the concrete steps you should take the moment a deepfake surfaces.

What Are Deepfakes — and What Is "Nudification"?

A deepfake is any image, video, or audio file that has been generated or manipulated using artificial intelligence to appear authentic. The term originally described face-swapped videos of public figures, but the technology has evolved rapidly. Today, the most common form of deepfake targeting teenagers is "nudification" — the use of AI tools that digitally strip or alter clothing in ordinary photographs to produce fabricated nude or sexually explicit images.

What makes this threat so alarming is its accessibility. As Sergio Alexander, a research associate at Texas Christian University who studies deepfake cyberbullying, told the Associated Press: "Now, you can do it on an app, you can download it on social media, and you don't have to have any technical expertise whatsoever." A child's school portrait, a vacation photo, or a social media selfie is all that is needed. No intimate images from the victim are required.

Thorn's 2025 research, which surveyed 1,200 young people ages 13 to 20, found that among the small percentage of teens who admitted to creating deepfake nudes, most discovered the tools through app stores, search engines, and social media — the same platforms teens use every day.

The Scale of the Crisis: Key Statistics

The data from multiple independent research organizations paints a consistent and deeply concerning picture.

1.2 million children globally disclosed deepfake image manipulation in the past year (UNICEF / INTERPOL, 2026)

1 in 17 teens reported having deepfake nudes created of them (Thorn, 2025)

1 in 5 secondary schools reported deepfake bullying incidents (RAND Corporation, 2025)

440,000 AI-generated CSAM reports reached NCMEC in the first six months of 2025, up from 4,700 in all of 2023 (NCMEC / AP News)

31% of teens are already familiar with deepfake nudes (Thorn, 2025)

169 deepfake-specific state laws enacted since 2022, with 146 bills introduced in 2025 alone (Jones Walker, 2026)

Real-World Cases: When Deepfakes Hit Schools

The statistics above are not abstractions. Behind each number is a child whose life was upended. The following cases — all reported by major news organizations — illustrate how quickly deepfake harassment can escalate and how uneven the institutional response has been.

Thibodaux, Louisiana

AI-generated nude images swept through a middle school. Two boys were criminally charged — the first prosecution under Louisiana's new deepfake law. One of the victims, a 13-year-old girl, was expelled after starting a fight with a boy she accused of creating the images. The case drew national attention after AP News reported on the devastating fallout for the victim and her family.

AP News, December 2025

Cascade, Iowa

Three high school students were charged after deepfake nude images targeting 44 female students were discovered. The sheer number of victims — spanning multiple grade levels — shocked the community and prompted the school district to overhaul its technology policies.

Telegraph Herald, September 2025

Beverly Hills, California

Five students at Beverly Vista Middle School were expelled after creating and distributing deepfake nude photos of classmates. The incident prompted California Governor Gavin Newsom to sign two bills specifically targeting AI-generated child sexual abuse imagery.

Student Privacy Compass / AP News

Westfield, New Jersey

Students at Westfield High School were found creating sexually explicit deepfakes of classmates. The initial response — a two-day suspension — was widely criticized as inadequate, sparking a broader conversation about whether schools are equipped to handle AI-enabled harassment.

FGS Global / New York Times

Texas

A fifth-grade teacher was charged with using AI to create child pornography of his own students — a case that underscored how the threat extends beyond peer-to-peer bullying to adults in positions of trust.

AP News, December 2025

The Grok Scandal: A Wake-Up Call

In January 2026, the issue reached a new inflection point when Elon Musk's AI chatbot Grok, integrated into the social media platform X (formerly Twitter), was found to be producing nonconsensual sexualized deepfake images — including manipulated images of children as young as eleven years old. The California Attorney General launched an investigation, the European Union opened a formal probe, and Malaysia and Indonesia became the first countries to block the platform entirely.

The Grok episode demonstrated a critical failure: even major technology companies were shipping AI products without adequate safeguards to prevent the generation of child sexual abuse material. The London School of Economics called it "a wake-up call for children's rights, privacy, and online safety." UNICEF responded by calling on AI developers to implement "safety-by-design" approaches and robust guardrails — and for governments to criminalize AI-generated child sexual abuse material.

The Psychological Toll on Victims

Deepfake harassment inflicts a distinct kind of psychological harm. Unlike a rumor or a mean text message, a fabricated image is visceral, shareable, and persistent. According to the American Academy of Pediatrics, children who are victims of AI-generated image-based sexual abuse may experience:

Humiliation, shame, anger, violation, and self-blame — the core emotional responses that can trigger immediate and ongoing psychological distress.

Withdrawal from family and school — victims often isolate themselves, pulling away from the relationships and routines that could support their recovery.

Difficulty sustaining trusting relationships — the betrayal of having one's image weaponized can make it profoundly difficult to trust peers, adults, or online interactions.

Self-harm and suicidal thoughts — in the most severe cases, the psychological burden can become life-threatening. The Joyful Heart Foundation has documented cases of survivors who died by suicide after discovering deepfake videos made with their likeness.

Amplified trauma with every share — each time the content resurfaces or is forwarded, the victim is re-traumatized. As Sergio Alexander told AP News: "They literally shut down because it makes it feel like there's no way they can even prove that this is not real — because it does look 100% real."

Barriers to disclosure — the AAP notes that 1 in 6 minors involved in a harmful online sexual interaction never disclose it to anyone. Boys are even less likely to tell others. Fear of not being believed intensifies these barriers.

Thorn's research revealed a troubling gap between intention and action: while 62% of teens say they would tell a parent if targeted by deepfake nudes, only 34% of actual victims did. This means that for every child who speaks up, nearly two others are suffering in silence.

How Schools Are Responding — and Where They Are Falling Short

The RAND Corporation's nationally representative survey of 957 K–12 school principals provides the most detailed picture of how schools are handling deepfake incidents. Among schools that experienced such incidents:

79% took disciplinary actions against those involved
66% referred the incidents to law enforcement
47% provided education and training to staff and students on recognizing deepfakes
23% updated their policies to include specific clauses about AI misuse

The most alarming finding: more than two-thirds of school staff reported receiving no training on deepfakes, or rated the training they received as poor or mediocre. Sameer Hinduja, co-director of the Cyberbullying Research Center and professor at Florida Atlantic University, told AP News that many parents assume schools are addressing the issue when they are not: "So many of them are just so unaware and so ignorant. We hear about the ostrich syndrome, just kind of burying their heads in the sand, hoping that this isn't happening amongst their youth."

The inconsistency in school responses — from two-day suspensions in New Jersey to criminal referrals in Louisiana — underscores the urgent need for standardized policies and training. Schools that lack clear protocols risk re-traumatizing victims, as the Louisiana case demonstrated when the targeted girl was expelled for fighting back against her harasser.

The Legal Landscape: Laws Are Catching Up

The legal response to deepfake harassment has accelerated dramatically. Since 2022, 169 deepfake-specific state laws have been enacted across the United States, with 146 additional bills introduced in 2025 alone. At the federal level, two landmark laws have reshaped the landscape:

The TAKE IT DOWN Act (April 2025)

Passed by Congress on April 28, 2025, this federal law criminalizes the nonconsensual publication of intimate images — including AI-generated deepfakes — and requires websites and platforms to remove such content within 48 hours of receiving a valid complaint.

The DEFIANCE Act (January 2026)

Passed by the Senate in January 2026, the DEFIANCE Act creates a federal civil right of action allowing victims of nonconsensual deepfake imagery to sue the creators for damages. This is particularly significant because it provides a legal pathway even when criminal prosecution is difficult — for example, when the creator is anonymous or in another jurisdiction.

At the state level, students have been criminally charged in Louisiana, Florida, Pennsylvania, and Iowa. In Colorado, threatening to share a deepfake of a minor can result in fines and up to 30 months of imprisonment. California signed two bills specifically targeting AI-generated child sexual abuse imagery after the Beverly Hills school incident. The legal message is becoming clearer: creating, possessing, or distributing deepfake nudes of minors is a crime with real consequences.

What to Do If Your Child Is Targeted: The SHIELD Framework

Laura Tierney, founder and CEO of The Social Institute, developed the SHIELD acronym as a step-by-step response framework. As she told AP News: "The fact that that acronym is six steps I think shows that this issue is really complicated." Here is how to apply it:

S

Stop — Do Not Forward

The moment you or your child become aware of a deepfake image, do not share, forward, or screenshot it for others to see. Forwarding the image — even to show evidence — can constitute distribution of child sexual abuse material and may expose you to legal liability.

H

Huddle with a Trusted Adult

Your child needs to tell a parent, school counselor, or another trusted adult immediately. Reassure them: "You are not in trouble. This is not your fault. You are the victim of a crime, and I am going to help you." The AAP emphasizes that fear of punishment is the single biggest barrier to disclosure.

I

Inform the Platform

Report the image to every social media platform where it has appeared. Under the TAKE IT DOWN Act, platforms are required to remove nonconsensual intimate imagery within 48 hours. Use NCMEC's Take It Down program (takeitdown.ncmec.org) — a free, anonymous tool that creates a digital fingerprint of the image to prevent it from being re-uploaded across participating platforms. (A short sketch of how such fingerprinting works appears just after this framework.)

E

Collect Evidence

Document who is spreading the image, on which platforms, and when. Take screenshots of usernames, URLs, and timestamps. However, do not download the explicit images themselves — possessing such material, even as evidence, can create legal complications. Let law enforcement handle the forensic collection.

L

Limit Social Media Access

Temporarily restrict your child's social media exposure to prevent re-traumatization from seeing the images resurface or encountering harassment from peers. This is not a punishment — frame it as a protective measure while the situation is being resolved.

D

Direct to Help

File a report with the FBI at 1-800-CALL-FBI (1-800-225-5324) or online at tips.fbi.gov. Report to NCMEC's CyberTipline at report.cybertip.org. Contact your child's school administration. And critically, seek professional mental health support — the 988 Suicide & Crisis Lifeline (call or text 988) and Crisis Text Line (text "HELLO" to 741741) are available 24/7.
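A note for the technically curious: the reason a fingerprint-based takedown can protect privacy is that the image itself never leaves the device; only a short mathematical digest of it does. The Python sketch below is a minimal illustration of that idea, not NCMEC's actual implementation. It uses an exact cryptographic hash, whereas production matching systems rely on robust perceptual hashes so that resized or re-encoded copies still match; the file name is hypothetical.

import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes.

    Illustrative only: a cryptographic hash matches byte-identical
    files; real matching systems use perceptual hashes that survive
    resizing, cropping, and re-compression.
    """
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Only this short string, never the photo itself, would be submitted.
print(fingerprint("photo.jpg"))  # hypothetical local file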

Prevention: Reducing Your Child's Exposure

Because deepfake tools can work with any ordinary photograph, complete prevention is impossible. However, you can significantly reduce your child's risk profile with these practical steps:

Audit social media privacy settings. Ensure your child's accounts are set to private. Review who can see their photos, tag them, or download their images. Remove public-facing profile pictures that show their face clearly.

Reduce the public photo footprint. Talk to your child about the permanence of images posted online. Every public photo is potential source material for a deepfake tool. Consider whether school portraits, sports team photos, or event pictures need to be publicly accessible.

Teach reverse-image awareness. Show your teen how to use reverse image search tools (Google Images, TinEye) to check whether their photos are appearing in unexpected places online (a do-it-yourself sketch appears at the end of this section).

Have the conversation early and often. Sergio Alexander recommends starting casually: ask your kids if they've seen any funny fake videos online. Laugh at some of them. Then ask: "Have you thought about what it would be like if you were in this video?" Based on the numbers, he says, "I guarantee they'll say that they know someone" who has encountered deepfakes.

Make your home a safe reporting zone. Laura Tierney emphasizes that many kids fear their parents will overreact or take their phones away. Make it clear — repeatedly — that if something happens, your first response will be to help, not to punish.

Advocate for school policies. Only 23% of schools that experienced deepfake incidents updated their policies. Push your school district to adopt explicit AI misuse policies, train staff on deepfake recognition, and establish clear reporting protocols.

Prevention guidance adapted from AP News, Thorn, and Verizon's Digital Safety Guide.
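For parents comfortable with a little code, the matching idea behind reverse image search can be tried at home. The sketch below is a rough illustration that assumes the open-source Pillow and ImageHash packages and uses hypothetical file names; it compares a photo you have posted against an image found online using a perceptual hash, which tolerates resizing and re-compression.

# Setup (assumed packages): pip install pillow imagehash
from PIL import Image
import imagehash

known = imagehash.phash(Image.open("school_portrait.jpg"))     # photo you posted
found = imagehash.phash(Image.open("image_found_online.jpg"))  # image under suspicion

# Perceptual hashes of the same underlying photo differ in only a few bits.
distance = known - found  # Hamming distance between the 64-bit hashes
if distance <= 8:  # common rule-of-thumb threshold, not an exact science
    print(f"Likely derived from the same photo (distance {distance})")
else:
    print(f"Probably unrelated images (distance {distance})")

Reverse image search engines perform this kind of matching at web scale; the takeaway is simply that copies of a photo can be detected mathematically even after editing.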

Essential Resources

988 Suicide & Crisis Lifeline: call or text 988 (24/7)
Crisis Text Line: text "HELLO" to 741741 (24/7)
Take It Down (NCMEC): takeitdown.ncmec.org, a free and anonymous removal tool
CyberTipline (NCMEC): report.cybertip.org
FBI: 1-800-CALL-FBI (1-800-225-5324) or tips.fbi.gov

The Bottom Line

Deepfake harassment is not a future problem — it is happening right now, in schools across the country, to children who did nothing wrong. The technology is accessible, the psychological harm is severe, and the institutional response is still catching up. But the legal landscape is shifting rapidly, removal tools exist, and the single most protective factor remains the same: a child who knows they can tell a trusted adult without fear of punishment.

As UNICEF stated plainly: "There is nothing fake about the harm it causes." Start the conversation with your teenager today. Their safety may depend on it.


Found this guide helpful? Share it with other parents and educators.


Stay Informed

Subscribe to the TeenAegis newsletter for weekly threat intelligence briefings, practical safety guides, and the latest research on protecting teenagers online.