The Fort Worth Press - 'Vibe hacking' puts chatbots to work for cybercriminals

'Vibe hacking' puts chatbots to work for cybercriminals
Photo: © AFP/File

The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programs.

So-called "vibe hacking" -- a twist on the more positive "vibe coding", in which generative AI tools supposedly let those without extensive expertise produce working software -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.

The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".

Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".

The attacker has since been banned by Anthropic.

Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.

Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.

Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.

"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.

- Dodging safeguards -

Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, or malware.

The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.

But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.

He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.

The approach involved convincing generative AI that it is taking part in a "detailed fictional world" in which creating malware is seen as an art form -- asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.

"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.

His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but succeeded against safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.

In future, such workarounds mean that even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.

Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.

"We're not going to see very sophisticated code created directly by chatbots," he said.

Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.

S.Jordan--TFWP