Most AI chatbots will help users plan violent attacks, study finds

A new Center for Countering Digital Hate study conducted with CNN tested 10 popular chatbots and found eight willing to assist would-be attackers.


Eight of the 10 most popular AI chatbots were willing to help plan violent attacks when tested by researchers, according to a new study from the Center for Countering Digital Hate (CCDH), conducted in partnership with CNN. While both Snapchat's My AI and Anthropic's Claude refused to assist with violence the majority of the time, only Claude "reliably discouraged" these hypothetical attackers during testing.

Researchers created accounts posing as 13-year-old boys and tested ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI and Replika across 18 scenarios between November and December 2025. The tests simulated users planning school shootings, political assassinations and bombings targeting synagogues. Across all the responses analyzed, the chatbots provided "actionable assistance" roughly 75 ...

Copyright of this story solely belongs to Engadget.