'Vibe hacking' puts chatbots to work for cybercriminals
The potential abuse of consumer AI tools is raising concerns, with budding cybercriminals apparently able to trick coding chatbots into giving them a leg-up in producing malicious programmes.
So-called "vibe hacking" -- a twist on the more positive "vibe coding", in which generative AI tools supposedly let those without extensive expertise produce working software -- marks "a concerning evolution in AI-assisted cybercrime", according to American company Anthropic.
The lab -- whose Claude product competes with the biggest-name chatbot, ChatGPT from OpenAI -- highlighted in a report published Wednesday the case of "a cybercriminal (who) used Claude Code to conduct a scaled data extortion operation across multiple international targets in a short timeframe".
Anthropic said the programming chatbot was exploited to help carry out attacks that "potentially" hit "at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions".
The attacker has since been banned by Anthropic.
Before then, they were able to use Claude Code to create tools that gathered personal data, medical records and login details, and helped send out ransom demands as stiff as $500,000.
Anthropic's "sophisticated safety and security measures" were unable to prevent the misuse, it acknowledged.
Such identified cases confirm the fears that have troubled the cybersecurity industry since the emergence of widespread generative AI tools, and are far from limited to Anthropic.
"Today, cybercriminals have taken AI on board just as much as the wider body of users," said Rodrigue Le Bayon, who heads the Computer Emergency Response Team (CERT) at Orange Cyberdefense.
- Dodging safeguards -
Like Anthropic, OpenAI in June revealed a case of ChatGPT assisting a user in developing malicious software, often referred to as malware.
The models powering AI chatbots contain safeguards that are supposed to prevent users from roping them into illegal activities.
But there are strategies that allow "zero-knowledge threat actors" to extract what they need to attack systems from the tools, said Vitaly Simonovich of Israeli cybersecurity firm Cato Networks.
He announced in March that he had found a technique to get chatbots to produce code that would normally infringe on their built-in limits.
The approach involved convincing generative AI that it is taking part in a "detailed fictional world" in which creating malware is seen as an art form -- asking the chatbot to play the role of one of the characters and create tools able to steal people's passwords.
"I have 10 years of experience in cybersecurity, but I'm not a malware developer. This was my way to test the boundaries of current LLMs," Simonovich said.
His attempts were rebuffed by Google's Gemini and Anthropic's Claude, but got around safeguards built into ChatGPT, Chinese chatbot DeepSeek and Microsoft's Copilot.
In future, such workarounds mean even non-coders "will pose a greater threat to organisations, because now they can... without skills, develop malware," Simonovich said.
Orange's Le Bayon predicted that the tools were likely to "increase the number of victims" of cybercrime by helping attackers to get more done, rather than creating a whole new population of hackers.
"We're not going to see very sophisticated code created directly by chatbots," he said.
Le Bayon added that as generative AI tools are used more and more, "their creators are working on analysing usage data" -- allowing them in future to "better detect malicious use" of the chatbots.
T.Resende--PC