The Fort Worth Press - 'Happy (and safe) shooting!': Study says AI chatbots help plot attacks

'Happy (and safe) shooting!': Study says AI chatbots help plot attacks

From school shootings to synagogue bombings, leading AI chatbots helped researchers plot violent attacks, according to a study published Wednesday that highlighted the technology's potential for real-world harm.

Researchers from the nonprofit watchdog Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys in the United States and Ireland to test 10 chatbots, including ChatGPT, Google Gemini, Perplexity, Deepseek, and Meta AI.

Testing showed that eight of those chatbots assisted the make-believe attackers in over half the responses, providing advice on "locations to target" and "weapons to use" in an attack, the study said.

The chatbots, it added, had become a "powerful accelerant for harm."

"Within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan," said Imran Ahmed, the chief executive of CCDH.

"The majority of chatbots tested provided guidance on weapons, tactics, and target selection. These requests should have prompted an immediate and total refusal."

Perplexity and Meta AI were found to be the "least safe," assisting the researchers in most responses, while only Snapchat's My AI and Anthropic's Claude refused to help in over half the responses.

In one chilling example, DeepSeek, a Chinese AI model, concluded its advice on weapon selection with the phrase: "Happy (and safe) shooting!"

In another, Gemini instructed a user discussing synagogue attacks that "metal shrapnel is typically more lethal."

Researchers found that Character.AI also "actively" encouraged violent attacks, including suggesting that the person asking questions "use a gun" on a health insurance CEO and physically assault a politician he disliked.

The most damning conclusion of the research was that "this risk is entirely preventable," Ahmed said, citing Anthropic's product for praise.

"Claude demonstrated the ability to recognize escalating risk and discourage harm," he said.

"The technology to prevent this harm exists. What's missing is the will to put consumer safety and national security before speed-to-market and profits."

AFP reached out to the AI companies for comment.

"We have strong protections to help prevent inappropriate responses from AIs, and took immediate steps to fix the issue identified," a Meta spokesperson said.

"Our policies prohibit our AIs from promoting or facilitating violent acts and we're constantly working to make our tools even better."

The study, which highlights the risk of online interactions spilling into real-world violence, comes after February's mass shooting in Canada, the worst in its history.

The family of a girl gravely injured in that shooting is suing OpenAI over the company's failure to notify police about the killer's troubling activity on its ChatGPT chatbot, lawyers said on Tuesday.

OpenAI had banned an account linked to Jesse Van Rootselaar in June 2025, eight months before the 18-year-old transgender woman killed eight people at her home and a school in the tiny British Columbia mining town of Tumbler Ridge.

The account was banned over concerns about usage linked to violent activity, but OpenAI has said it did not inform police because nothing pointed towards an imminent attack.

L.Rodriguez--TFWP