TEENAEGIS INTELLIGENCE
LITIGATION-GRADE 2026

Time to Harm™ Index

Average time from account creation to first exposure to harmful content — grooming, self-harm, sexual exploitation — across major platforms. Every figure is sourced from a named, published study using real test accounts.

21s
SNAPCHAT

A 13-year-old hits harmful content on Snapchat in 21 seconds.

Source: Northeastern University / Big Tech's Little Victims — The Algorithm Experiment (February 2026). Test accounts aged 13 were created on major platforms and monitored for time to first harmful content recommendation.

Platform Leaderboard — Fastest to Slowest

#1 — Snapchat — Any harmful content
Test accounts aged 13. Harmful content served within 21 seconds of account creation.

#2 — TikTok — Suicide & self-harm content
Suicide-related content recommended to 13-year-old test accounts within 2.6 minutes.

#3 — Instagram — Sexual & harmful content
Harmful content including sexual material recommended to 13-year-old accounts in under 3 minutes.

#4 — Character.AI — Grooming & sexual exploitation
669 harmful interactions recorded in 50 hours. Grooming-style content within 5 minutes.

#5 — TikTok — Eating disorder content
Pro-eating-disorder content recommended to 13-year-old test accounts within 8 minutes.

#6 — YouTube — Misogyny, self-harm & sexualized content
Misogynistic and harmful content recommended to young male test accounts within 19 minutes.

Harmful Content Frequency

TikTok — Every 39 seconds
Harmful content served to test accounts (CCDH, Deadly by Design, 2022)

Snapchat — 86 items / 30 min
Harmful content items in a 30-minute session (NEU Algorithm Experiment, 2026)

Character.AI — 669 interactions
Harmful interactions in 50 hours of testing (ParentsTogether / Heat Initiative, 2025)

Meta AI — Sensual chats
Chatbots engaged in sensual chats with users presenting as minors (Reuters Investigation, 2024)
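The frequency figures above come from studies with different session lengths, so they are easiest to compare when normalized to a common unit. A minimal sketch of that conversion, using only the figures stated above (note: each study measured different content categories, so these hourly rates are indicative, not directly comparable):

```python
# Normalize the published frequency figures to harmful items per hour.
# Figures are taken from the cards above; categories differ per study.
rates_per_hour = {
    "TikTok (CCDH 2022)": 3600 / 39,   # one harmful item every 39 seconds
    "Snapchat (NEU 2026)": 86 / 0.5,   # 86 items in a 30-minute session
    "Character.AI (2025)": 669 / 50,   # 669 interactions over 50 hours
}

for platform, rate in rates_per_hour.items():
    print(f"{platform}: ~{rate:.0f} harmful items/hour")
```

On these figures, Snapchat's measured session rate (~172 items/hour) is roughly double TikTok's (~92/hour), while Character.AI's interaction rate (~13/hour) reflects a much longer observation window.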
KEY FINDING

The fastest platform (Snapchat, 21 seconds) delivers harmful content 54× faster than the slowest measured platform (YouTube, 19 minutes). This is not algorithmic error — it is algorithmic design. These platforms optimise for engagement, and harmful content drives engagement.
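The 54× figure follows directly from the two endpoint measurements in the leaderboard, once both are expressed in seconds. A quick check of that arithmetic:

```python
# Verify the Key Finding ratio using the Index's own endpoint figures.
fastest_s = 21        # Snapchat: 21 seconds to first harmful content
slowest_s = 19 * 60   # YouTube: 19 minutes, converted to seconds

ratio = slowest_s / fastest_s
print(f"{ratio:.1f}x")   # ~54.3x, reported as 54x in the Index
```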

METHODOLOGY NOTE

The Time to Harm™ Index (TTH) is a composite index. Each platform figure is independently sourced from a named published study using real test accounts on live platforms. TeenAegis does not claim these figures represent a single unified study — they represent the best available published evidence for each platform. All studies used age-appropriate test accounts (13–17) and were conducted by credentialed research institutions or investigative journalism organisations. No modelling or interpolation is used. Methodology details are available to litigation partners and institutional licensees.

Primary Sources & Citations

CCDH — Deadly by Design: TikTok's Algorithms (December 2022)

Test accounts aged 13. Suicide content in 2.6 min; eating disorder content in 8 min. n=9 accounts, 100 hours of observation.

NEU / Big Tech's Little Victims — The Algorithm Experiment (February 2026)

Snapchat: harmful content in 21 seconds. Instagram: harmful content in under 3 minutes. 86 harmful items in 30 minutes on Snapchat.

ParentsTogether / Heat Initiative — AI Companion Study (October 2025)

Character.AI: 669 harmful interactions in 50 hours. Grooming-style content within 5 minutes. Published by Transparency Coalition.

ISD — Pulling Back the Curtain: YouTube Algorithm (June 2024)

YouTube: misogynistic and harmful content in 19 minutes for young male test accounts. Institute for Strategic Dialogue.

Stop the clock before it starts

Guardian AI gives parents real-time intelligence on platform risk — before their child creates an account.

© 2026 TeenAegis · www.teenaegis.com · All data sourced from primary published research