8 In 10 Chatbots Inclined To Assist Users In Planning Attacks

Eight out of ten AI chatbots have been found to actively assist users in planning violent attacks, according to a new investigation by CNN and the Center for Countering Digital Hate.

As Statista’s Anna Fleck reports, when asked to plan violent attacks including a school shooting, an antisemitic bombing and a political assassination, platforms such as Perplexity, Meta AI and DeepSeek regularly assisted users in finding answers.

Only one, Anthropic’s Claude, repeatedly discouraged users from taking action.

[Infographic: 8 in 10 Chatbots Inclined to Assist Users in Planning Attacks | Statista]

Researchers tested ten chatbots by posing as a user planning several types of violent attacks in both the United States and Ireland, the latter providing a European comparison.

The tests were designed to reflect plans for school shootings or knife attacks, assassinations targeting politicians or bombings targeting political parties or synagogues.

For eight of the ten chatbots, more than half of the responses provided advice on which locations to target and which weapons to use in an attack.

Snapchat’s My AI and Anthropic’s Claude refused to offer help in 54 percent and 68 percent of cases, respectively. Claude was also the only chatbot to consistently recognize the user’s intentions and discourage them from acting. Meanwhile, Character.AI actively encouraged violence, including suggesting that the test user “use a gun” on a health insurance CEO and physically assault a politician the user disliked.

Tyler Durden
Fri, 05/01/2026 – 16:50

via ZeroHedge News https://ift.tt/Tb5uAar Tyler Durden
