
🚨 Platform Danger Index

Which platforms cause the most harm to children — ranked by country. Based on NCMEC CyberTipline data, regulatory enforcement actions, and documented incidents. Updated from a live database.


🔴 Wall of Shame — Most Dangerous Platforms Globally
Scores last updated: Apr 9, 2026

📊

Proprietary Harm Assessment

Each platform's Danger Score is calculated using a multi-dimensional proprietary methodology that normalizes enforcement data, regulatory history, and platform architecture across comparable user populations. The full methodology is protected under a pending patent application.
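The underlying formula is unpublished, so purely as an illustration of the general shape described above (per-dimension normalization combined with weights), here is a minimal sketch. The dimensions, weights, and input values are hypothetical placeholders, not TeenAegis's actual inputs:

```python
# Illustrative composite scoring of the general shape described above:
# normalize each input dimension across platforms, then combine with
# weights. Dimensions, weights, and data are hypothetical placeholders;
# the actual TeenAegis methodology is unpublished.

def minmax(values: dict[str, float]) -> dict[str, float]:
    """Rescale each platform's value to [0, 1] within its dimension."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 0.0 for k, v in values.items()}

# Hypothetical per-platform inputs, already scaled per comparable user base.
enforcement  = {"PlatformA": 4.2, "PlatformB": 1.1}  # e.g. reports per 1k users
regulatory   = {"PlatformA": 2.0, "PlatformB": 0.0}  # e.g. count of fines/actions
architecture = {"PlatformA": 0.8, "PlatformB": 0.3}  # e.g. design-risk rating

WEIGHTS = {"enforcement": 0.5, "regulatory": 0.3, "architecture": 0.2}

dims = {
    "enforcement": minmax(enforcement),
    "regulatory": minmax(regulatory),
    "architecture": minmax(architecture),
}

for platform in enforcement:
    # Weighted sum of normalized dimensions, scaled to the 0-10 range used above.
    score = 10 * sum(WEIGHTS[d] * dims[d][platform] for d in WEIGHTS)
    print(f"{platform}: {score:.1f}/10")
```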

#1 CRITICAL · Instagram · Meta Platforms · 9.2/10
#2 CRITICAL · TikTok · ByteDance · 8.8/10
#3 CRITICAL · Facebook · Meta Platforms · 8.6/10
#4 CRITICAL · Snapchat · Snap Inc. · 8.4/10
#5 CRITICAL · Discord · Discord Inc. · 8.3/10
#6 CRITICAL · Character.AI · Character Technologies Inc. · 8.2/10
#7 HIGH RISK · Telegram · Telegram FZ LLC · 7.9/10
#8 HIGH RISK · X (Twitter) · X Corp (Elon Musk) · 7.8/10

Scores reflect multi-source enforcement data, regulatory actions, and platform safety architecture. Input data is proprietary and not published.

💸

What This Costs Your Family

The real price of platform harm — beyond the headlines. Based on peer-reviewed research and multidistrict litigation (MDL) data.

$179,836
Median Family Cost
Over 3 years when a platform harms your child
$260,762
Instagram Families
Highest platform-specific median (1,600+ MDL plaintiffs)
$38,480
Lost Wages Alone
Median caregiver productivity loss (IBI 2024)
Cost Domain                               Median (3-year)
Mental Health Treatment                   $52,320
Lost Wages (Caregiver)                    $38,480
Legal Fees (if pursued)                   $45,000
Psychological Cost (pain & suffering)     $28,000
Family Disruption                         $9,600
Time Cost (non-wage)                      $6,436
TOTAL MEDIAN THC                          $179,836

Methodology: The TeenAegis Total Harm Cost (THC) Index aggregates six independently sourced cost domains using a proprietary weighting methodology. All inputs are drawn from peer-reviewed research and government data. Figures represent a 3-year harm horizon. Not a legal opinion.
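As a check on the arithmetic, the six domain medians in the table above sum exactly to the published total. A minimal verification using the table's own figures:

```python
# Sanity check: the six THC cost domains from the table above sum
# exactly to the published 3-year median of $179,836.
thc_domains = {
    "Mental Health Treatment": 52_320,
    "Lost Wages (Caregiver)": 38_480,
    "Legal Fees (if pursued)": 45_000,
    "Psychological Cost (pain & suffering)": 28_000,
    "Family Disruption": 9_600,
    "Time Cost (non-wage)": 6_436,
}

total = sum(thc_domains.values())
assert total == 179_836
print(f"Total median THC: ${total:,}")  # Total median THC: $179,836
```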

🔐

Encrypted Platforms — The Hidden Risk

End-to-end encrypted (E2EE) messaging apps score lower on the harm index because they are not primary grooming platforms. However, they present a structurally different and harder-to-detect risk: once a predator has made initial contact on a monitored platform, they instruct victims to migrate the conversation to these apps — where automated CSAM detection, law enforcement interception, and parental oversight are all impossible.

💬
WhatsApp
Meta Platforms · 2 billion+ users
6.5
Harm Score

WhatsApp's end-to-end encryption makes CSAM detection nearly impossible — unlike on Instagram or TikTok, Meta cannot scan message content for illegal material. Group chats have been documented as distribution networks for child sexual abuse material in the UK, India, and Brazil, where WhatsApp is the dominant messaging platform.

Primary Risks
CSAM group distribution
E2EE detection gap
Cross-border grooming
No content moderation
Key Facts
🔒 Full E2EE — no server-side scanning
👥 Groups up to 1,024 members
🌍 Dominant in 100+ countries
⚠️ No minimum age enforcement
Parent action: Enable WhatsApp's built-in privacy settings, restrict group invites to contacts only, and explain to your teen that predators specifically target WhatsApp for its lack of monitoring.
🔵
Signal
Signal Foundation · Non-profit
3.1
Harm Score

Signal is the gold standard for private communication and is not a primary grooming platform. Its low harm score reflects this. However, its military-grade encryption, disappearing messages, and absence of any cloud backup make it the preferred migration destination for predators who have already established contact elsewhere — precisely because law enforcement cannot intercept Signal communications.

Specific Risks
Grooming migration tool
Disappearing messages
No law enforcement access
No CSAM scanning possible
Key Facts
🔒 Open-source E2EE protocol
⏱ Disappearing messages (1s–4wk)
🚫 No metadata retention
📵 Phone number can be hidden from other users
Parent action: If Signal appears on your teen's phone unexpectedly, treat it as a red flag requiring an immediate conversation — not because Signal is inherently dangerous, but because its presence often indicates a deliberate attempt to move communication off monitored channels.
Sources: NCMEC CyberTipline · Internet Watch Foundation · Europol IOCTA 2023 · Stanford Internet Observatory · CEOP Annual Report 2023
HARM LEVEL:
CRITICAL (8+)
HIGH RISK (6–8)
ELEVATED (4–6)
MODERATE (2–4)
LOW (0–2)
RESTRICTED/BANNED
NO DATA

Worst Platform Per Country

Color = highest harm score in that country. Click a country to see the full ranking.

[Interactive map and ranked list: countries by maximum platform harm score, worst → best]
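For illustration, the map coloring reduces to two steps: take each country's maximum platform harm score, then bucket it into the legend bands shown above. The per-country platform availability in this sketch is a hypothetical placeholder; the scores are taken from the global ranking:

```python
# Minimal sketch of the map coloring logic: for each country, take the
# highest platform harm score present there, then map it to the legend
# bands above. Which platforms appear in which country is hypothetical.

HARM_BANDS = [  # (lower bound, label), matching the legend above
    (8.0, "CRITICAL"),
    (6.0, "HIGH RISK"),
    (4.0, "ELEVATED"),
    (2.0, "MODERATE"),
    (0.0, "LOW"),
]

def band(score: float | None) -> str:
    """Map a harm score to its legend band; None means no data."""
    if score is None:
        return "NO DATA"
    for lower, label in HARM_BANDS:
        if score >= lower:
            return label
    return "NO DATA"

# Hypothetical country/platform availability; scores from the global ranking.
country_scores = {
    "US": {"Instagram": 9.2, "TikTok": 8.8, "Snapchat": 8.4},
    "UK": {"WhatsApp": 6.5, "Telegram": 7.9},
}

for country, platforms in country_scores.items():
    worst, score = max(platforms.items(), key=lambda kv: kv[1])
    print(f"{country}: {worst} ({score}) -> {band(score)}")
```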

TOTAL FINES & SETTLEMENTS PAID BY MAJOR PLATFORMS FOR CHILD SAFETY FAILURES SINCE 2019

$3,327,839,540 in documented fines paid by platforms for child safety failures since 2019

Sources: Meta GDPR €1.2B (2023) · TikTok DPC €345M (2023) · X FTC $150M (2022) · YouTube COPPA $170M (2019) · Epic Games FTC $520M (2022) · Snap FTC $35M (2014) · Roblox FTC $5.7M (2019) · additional state AG and EU enforcement actions. Counter increments based on known fine accrual schedules and ongoing enforcement actions.
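A minimal sketch of how a live counter of this kind can increment between data updates, assuming a fixed base total plus a linear accrual rate (the rate below is a hypothetical placeholder, not a published figure):

```python
# Illustrative sketch of the live fines counter. The base total comes from
# the figure above; the accrual rate is a hypothetical placeholder standing
# in for the "known fine accrual schedules" mentioned in the text.
from datetime import datetime, timezone

BASE_TOTAL_USD = 3_327_839_540                         # documented fines at baseline
BASELINE = datetime(2026, 4, 9, tzinfo=timezone.utc)   # last scores update
ACCRUAL_USD_PER_DAY = 1_000.0                          # hypothetical accrual rate

def counter_value(now: datetime) -> float:
    """Base total plus linear accrual since the baseline date."""
    days = max(0.0, (now - BASELINE).total_seconds() / 86_400)
    return BASE_TOTAL_USD + days * ACCRUAL_USD_PER_DAY

print(f"${counter_value(datetime.now(timezone.utc)):,.0f}")
```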

👤 Who Is Responsible?

These are the executives whose names appear on the transparency reports, the congressional testimony, and the internal memos. The data above is their record.

Mark Zuckerberg
CEO, Meta Platforms
Instagram / Facebook
Grade: F
“We want Instagram to be a place where people feel safe.” (Congressional testimony, 2024)
The record: Meta's own internal research showed Instagram recommended predator accounts to minors, and the company suppressed the findings. Instagram has paid $1.3B+ in child safety fines without a single mandated operational change.
Shou Zi Chew
CEO, TikTok / ByteDance
TikTok
Grade: D
“We have never shared, or received a request to share, US user data with the Chinese government.” (Congressional testimony, 2023)
The record: TikTok paid €345M in GDPR fines for child data violations in 2023. The FTC referred TikTok to the DOJ for a new COPPA investigation in 2024. Internal documents show the algorithm served eating disorder content to teenagers within minutes.
David Baszucki
CEO, Roblox Corporation
Roblox
Grade: D
“Safety is our top priority and we are constantly improving our systems.” (Roblox blog, 2023)
The record: Roblox was fined $5.7M by the FTC in 2019 for COPPA violations — the largest COPPA penalty at the time. Roblox submitted 13,316 CyberTipline reports to NCMEC in 2023 (NCMEC 2023 ESP Report). Multiple ongoing civil lawsuits allege Roblox failed to prevent child sexual exploitation on its platform.
Humam Sakhnini
CEO, Discord (since April 2025)
Discord
Grade: D
“We take child safety extremely seriously and work closely with law enforcement and NCMEC.” (Discord safety blog, 2023)
The record: Discord submitted 339,412 CyberTipline reports to NCMEC in 2023 (NCMEC 2023 ESP Report) and has faced zero major regulatory fines. Its private server architecture remains the primary mechanism for predator coordination identified in law enforcement operations.
Evan Spiegel
CEO, Snap Inc.
Snapchat
Grade: C-
“We are deeply committed to the safety of our community, especially young people.” (Snap investor letter, 2024)
The record: Snap's disappearing message feature is the primary mechanism in the sextortion epidemic targeting teenage boys (Thorn 2024). The DEA has specifically named Snapchat as the primary platform for teen drug supply. Snap submitted 713K CyberTipline reports to NCMEC in 2023.
Elon Musk
Owner & CEO, X (formerly Twitter)
X / Twitter
Grade: F
“Twitter/X is the most free and fair platform on the internet.” (X post, 2023)
The record: Since Musk's 2022 acquisition, X laid off ~80% of its Trust & Safety staff, dissolved its dedicated Child Safety team, and removed proactive CSAM detection tools. X submitted 1.5 million NCMEC CyberTipline reports in 2023 — the second-highest of any platform — while simultaneously gutting the team responsible for finding it. The EU opened a formal DSA investigation into X's child safety practices in 2024. X has paid zero child safety fines to date.

📄 Parent Report Cards

One page. Every fact a parent needs. Download, print, share with your school.

✉️ Write to Your Representative

Personalize the letter below and send it to your senator or representative. It takes 90 seconds and creates the constituent pressure that moves legislation.

Dear [Your Representative's Name],

I am writing as a constituent and as a parent to urge you to support the Kids Off Social Media Act (KOSMA) and to hold major social media platforms accountable for their documented child safety failures.

Instagram has paid $1.3 billion in fines for child safety violations without a single mandated operational change. TikTok paid €345 million in GDPR fines for child data violations. Roblox was fined $5.7 million by the FTC for COPPA violations. Discord submitted 339,412 child exploitation reports to NCMEC in 2023 and has faced no major regulatory fine.

The platforms are not self-regulating. They are not responding to fines. They require a legal duty of care with real enforcement consequences.

I am asking you to:
1. Co-sponsor and advance KOSMA in the current Congress
2. Support mandatory age verification for social media platforms
3. Hold hearings on platform compliance with existing child safety law

Children in your state are being harmed by these platforms every day. The data is public. The fines have been paid. The platforms have not changed. It is time for Congress to act.

Sincerely,
[Your Name]
[Your City, State]
🛡️ Now protect your own child

The Wall of Shame shows you what platforms are doing. Guardian AI helps you protect your child from it.

Intelligence-led. Privacy-first. Available 24/7. Guardian AI gives every parent the same quality of digital threat intelligence that Fortune 500 CISOs use — applied to the platforms your teenager is on right now.

5 free messages · No credit card required

Data sources: NCMEC CyberTipline Annual Reports (2022–2024) · Stanford Internet Observatory · EU Digital Services Act enforcement tracker · UK Ofcom Online Safety enforcement · Australia eSafety Commissioner · FTC enforcement actions · FBI VCAC press releases · Europol EC3 reports. Harm scores are composite indices based on reported incident volume, regulatory enforcement severity, platform cooperation, and documented child safety failures. Scores are updated as new data becomes available.
