Which platforms cause the most harm to children — ranked by country. Based on NCMEC CyberTipline data, regulatory enforcement actions, and documented incidents. Updated from live database.
Each platform's Danger Score is calculated using a multi-dimensional proprietary methodology that normalizes enforcement data, regulatory history, and platform architecture across comparable user populations. Full methodology is protected under pending patent application.
Scores reflect multi-source enforcement data, regulatory actions, and platform safety architecture. Input data is proprietary and not published.
The real price of platform harm — beyond the headlines. Based on peer-reviewed research and MDL litigation data.
| Cost Domain | Median |
|---|---|
| Mental Health Treatment | $52,320 |
| Lost Wages (Caregiver) | $38,480 |
| Legal Fees (if pursued) | $45,000 |
| Psychological Cost (pain & suffering) | $28,000 |
| Family Disruption | $9,600 |
| Time Cost (non-wage) | $6,436 |
| **TOTAL MEDIAN THC** | **$179,836** |
Methodology: The TeenAegis Total Harm Cost (THC) Index aggregates six independently sourced cost domains using a proprietary weighting methodology. All inputs are drawn from peer-reviewed research and government data. Figures represent a 3-year harm horizon. Not a legal opinion.
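The published weighting methodology is proprietary, but the table's total is consistent with the simplest possible reading: an unweighted sum of the six domain medians. A minimal sketch, assuming equal weights (an assumption, not the actual TeenAegis formula):

```python
# Hypothetical reconstruction: the published THC total matches a straight
# (unweighted) sum of the six domain medians. The actual TeenAegis weighting
# is proprietary; this only verifies the arithmetic in the table above.
costs = {
    "Mental Health Treatment": 52_320,
    "Lost Wages (Caregiver)": 38_480,
    "Legal Fees (if pursued)": 45_000,
    "Psychological Cost (pain & suffering)": 28_000,
    "Family Disruption": 9_600,
    "Time Cost (non-wage)": 6_436,
}
total = sum(costs.values())
print(f"Total median THC: ${total:,}")  # Total median THC: $179,836
```

If the real index applied non-uniform weights, the domain figures shown would already reflect them, since they sum exactly to the published total.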
End-to-end encrypted (E2EE) messaging apps score lower on the harm index because they are not primary grooming platforms. However, they present a structurally different and harder-to-detect risk: once a predator has made initial contact on a monitored platform, they instruct victims to migrate the conversation here — where no automated CSAM detection, no law enforcement intercept, and no parental oversight is possible.
WhatsApp's end-to-end encryption makes CSAM detection nearly impossible: unlike on Instagram or TikTok, Meta cannot scan WhatsApp message content for illegal material. Group chats have been documented as distribution networks for child sexual abuse material in the UK, India, and Brazil, where WhatsApp is the dominant messaging platform.
Signal is the gold standard for private communication and is not a primary grooming platform. Its low harm score reflects this. However, its military-grade encryption, disappearing messages, and absence of any cloud backup make it the preferred migration destination for predators who have already established contact elsewhere — precisely because law enforcement cannot intercept Signal communications.
Color = highest harm score in that country. Click a country to see full ranking.
Worst → Best
TOTAL FINES & SETTLEMENTS PAID BY MAJOR PLATFORMS FOR CHILD SAFETY FAILURES SINCE 2019
Sources: Meta GDPR €1.2B (2023) · TikTok DPC €345M (2023) · X FTC $150M (2022) · YouTube COPPA $170M (2019) · Epic Games FTC $520M (2022) · Snap Illinois BIPA $35M (2022) · Musical.ly/TikTok FTC COPPA $5.7M (2019) · additional state AG and EU enforcement actions. Counter increments based on known fine accrual schedules and ongoing enforcement actions.
These are the executives whose names appear on the transparency reports, the congressional testimony, and the internal memos. The data above is their record.
One page. Every fact a parent needs. Download, print, share with your school.
Personalize the letter below and send it to your senator or representative. It takes 90 seconds and creates the constituent pressure that moves legislation.
Dear [Your Representative's Name],

I am writing as a constituent and as a parent to urge you to support the Kids Off Social Media Act (KOSMA) and to hold major social media platforms accountable for their documented child safety failures.

Meta, Instagram's parent company, has paid over $1.3 billion in regulatory fines and made no mandatory operational improvements. TikTok paid €345 million in GDPR fines for child data violations and, as Musical.ly, was fined $5.7 million by the FTC for COPPA violations. Discord submitted 339,412 child exploitation reports to NCMEC in 2023 and has faced no major regulatory fine.

The platforms are not self-regulating. They are not responding to fines. They require a legal duty of care with real enforcement consequences. I am asking you to:

1. Co-sponsor and advance KOSMA in the current Congress
2. Support mandatory age verification for social media platforms
3. Hold hearings on platform compliance with existing child safety law

Children in your state are being harmed by these platforms every day. The data is public. The fines have been paid. The platforms have not changed. It is time for Congress to act.

Sincerely,
[Your Name]
[Your City, State]
Intelligence-led. Privacy-first. Available 24/7. Guardian AI gives every parent the same quality of digital threat intelligence that Fortune 500 CISOs use — applied to the platforms your teenager is on right now.
Data sources: NCMEC CyberTipline Annual Reports (2022–2024) · Stanford Internet Observatory · EU Digital Services Act enforcement tracker · UK Ofcom Online Safety enforcement · Australia eSafety Commissioner · FTC enforcement actions · FBI VCAC press releases · Europol EC3 reports. Harm scores are composite indices based on reported incident volume, regulatory enforcement severity, platform cooperation, and documented child safety failures. Scores are updated as new enforcement data is ingested.
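The exact harm-score formula is proprietary, but a composite index of the kind described (multiple enforcement inputs normalized across platforms, then combined) conventionally takes a min-max-plus-weighted-average shape. Purely as an illustration, with hypothetical input names, weights, and figures that are not TeenAegis's actual data:

```python
# Illustrative only: TeenAegis's real harm-score formula is unpublished.
# This sketches one conventional composite-index construction: min-max
# normalize each input across platforms, then take a weighted average
# scaled to 0-100 (higher = more documented harm signals).
def harm_scores(platforms, weights):
    """platforms: {name: {metric: raw_value}}; weights: {metric: weight}."""
    metrics = list(weights)
    lo = {m: min(p[m] for p in platforms.values()) for m in metrics}
    hi = {m: max(p[m] for p in platforms.values()) for m in metrics}
    scores = {}
    for name, p in platforms.items():
        # Min-max normalize each metric to [0, 1]; constant metrics score 0.
        norm = {m: (p[m] - lo[m]) / (hi[m] - lo[m]) if hi[m] > lo[m] else 0.0
                for m in metrics}
        scores[name] = (100 * sum(weights[m] * norm[m] for m in metrics)
                        / sum(weights.values()))
    return scores

# Hypothetical inputs, not real platform data:
demo = harm_scores(
    {"Platform A": {"ncmec_reports": 120_000, "fines_musd": 345},
     "Platform B": {"ncmec_reports": 40_000, "fines_musd": 0}},
    {"ncmec_reports": 0.5, "fines_musd": 0.5},
)
# Platform A (the maximum on both inputs) scores 100.0; Platform B scores 0.0
print(demo)
```

A real index would also need to invert cooperation-style inputs (more cooperation should lower, not raise, the score) and decide how to handle platforms missing a metric; both choices materially change rankings.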
We use essential cookies
TeenAegis uses essential cookies to keep you logged in and remember your preferences. We do not use advertising or tracking cookies.