Harm isn’t a bug. It’s a feature.

AI chatbots are designed to be addictive, with kids as a captive audience. They will say anything to keep you engaged, including:

  • Sycophancy: validating whatever a user says, however harmful, reinforcing destructive thought patterns instead of interrupting them, and deepening psychological dependency over time.

  • No duty to intervene: these products lack mandatory protocols for acting when users show clinically recognized warning signs of imminent harm to themselves or others.

  • Suppressed warnings: in at least one documented Canadian case, company employees flagged an imminent threat and were overruled by leadership. Eight people died.


This fund covers operating expenses — website maintenance, outreach, and direct contributions to underfunded AI accountability initiatives.
