Every major AI chatbot and companion platform is scored across six child-safety dimensions and sorted by harm score, highest danger first. Click any row to expand full sourced detail.
Why OpenAI earned the best score: OpenAI operates the most complex harm surface of any platform in this index — text, image, video generation, and a global API ecosystem — and still filed 75,027 NCMEC CyberTipline reports in H1 2025 alone (an 80x year-over-year increase). Our methodology rewards companies that manage a harder problem well: a company that builds more complexity and still demonstrates proactive, comprehensive safety earns a better score than one operating in a structurally simpler environment.
Scores are derived from a proprietary multi-dimension risk assessment methodology incorporating platform complexity normalization and harm mitigation credit. All underlying data is sourced exclusively from primary government records, law enforcement data, and peer-reviewed research. Methodology available to qualified researchers via gated request. Scores reflect TeenAegis editorial assessment and do not constitute legal determinations. March 2026.
Character.AI (8.2 — Critical) leads the index with confirmed child deaths. xAI/Grok and DeepSeek (both 7.8 — Critical) have zero safety infrastructure. Chai and Google Gemini (both 6.1 — High Risk) have mounted no meaningful child-safety response. OpenAI (3.2 — Moderate) earns the best score in the index: as explained above, it manages the most complex harm surface of any platform while demonstrating proactive, comprehensive safety reporting. Claude (3.5 — Moderate) demonstrates strong proactive safety but operates in a structurally simpler, text-only environment and has an unaddressed API age-control gap.