-
Nobel winner Machado suffered vertebra fracture leaving Venezuela
-
Stock market optimism returns after tech sell-off
-
Iran Nobel winner unwell after 'violent' arrest: supporters
-
'Angry' Louvre workers' strike shuts out thousands of tourists
-
EU faces key summit on using Russian assets for Ukraine
-
Maresca committed to Chelsea despite outburst
-
Trapped, starving and afraid in besieged Sudan city
-
Messi mania peaks in India's pollution-hit capital
-
Wales captains Morgan and Lake sign for Gloucester
-
Serbian minister indicted over Kushner-linked hotel plan
-
Eurovision 2026 will feature 35 countries: organisers
-
Cambodia says Thailand bombs province home to Angkor temples
-
US-Ukrainian talks resume in Berlin with territorial stakes unresolved
-
Small firms join charge to boost Europe's weapon supplies
-
Driver behind Liverpool football parade 'horror' warned of long jail term
-
German shipyard, rescued by the state, gets mega deal
-
Flash flood kills dozens in Morocco town
-
'We are angry': Louvre Museum closed as workers strike
-
Australia to toughen gun laws as it mourns deadly Bondi attack
-
Stocks diverge ahead of central bank calls, US data
-
Wales captain Morgan to join Gloucester
-
UK pop star Cliff Richard reveals prostate cancer treatment
-
Mariah Carey to headline Winter Olympics opening ceremony
-
Indonesia to revoke 22 forestry permits after deadly floods
-
Louvre Museum closed as workers strike
-
Spain fines Airbnb 64 mn euros for posting banned properties
-
Japan's only two pandas to be sent back to China
-
Zelensky, US envoys to push on with Ukraine talks in Berlin
-
Australia to toughen gun laws after deadly Bondi shootings
-
Lyon poised to bounce back after surprise Brisbane omission
-
Australia defends record on antisemitism after Bondi Beach attack
-
US police probe deaths of director Rob Reiner, wife as 'apparent homicide'
-
'Terrified' Sydney man misidentified as Bondi shooter
-
Cambodia says Thai air strikes hit home province of heritage temples
-
EU-Mercosur trade deal faces bumpy ride to finish line
-
Inside the mind of Tolkien illustrator John Howe
-
Mbeumo faces double Cameroon challenge at AFCON
-
Tongue replaces Atkinson in only England change for third Ashes Test
-
England's Brook vows to rein it in after 'shocking' Ashes shots
-
Bondi Beach gunmen had possible Islamic State links, says ABC
-
Lakers fend off Suns fightback, Hawks edge Sixers
-
Louvre trade unions to launch rolling strike
-
Asian markets drop with Wall St as tech fears revive
-
North Korean leader's sister sports Chinese foldable phone
-
Iran's women bikers take the road despite legal, social obstacles
-
Civilians venture home after militia seizes DR Congo town
-
Countdown to disclosure: Epstein deadline tests US transparency
-
Desperate England looking for Ashes miracle in Adelaide
-
Far-right Kast wins Chile election in landslide
-
What we know about Australia's Bondi Beach attack
| SCS | 0.12% | 16.14 | $ | |
| BCE | 0.62% | 23.54 | $ | |
| BCC | -0.72% | 75.96 | $ | |
| NGG | 1.17% | 75.82 | $ | |
| GSK | 0.93% | 49.27 | $ | |
| RBGPF | -4.49% | 77.68 | $ | |
| RIO | -0.17% | 75.53 | $ | |
| CMSC | -0.04% | 23.29 | $ | |
| JRI | 0.06% | 13.575 | $ | |
| RYCEF | 1.48% | 14.82 | $ | |
| CMSD | 0.32% | 23.325 | $ | |
| BTI | 0.9% | 57.62 | $ | |
| AZN | 1.43% | 91.13 | $ | |
| VOD | 1.14% | 12.735 | $ | |
| RELX | 2.29% | 41.325 | $ | |
| BP | -0.03% | 35.25 | $ |
AI is learning to lie, scheme, and threaten its creators
The world's most advanced AI models are exhibiting troubling new behaviors - lying, scheming, and even threatening their creators to achieve their goals.
In one particularly jarring example, under threat of being unplugged, Anthropic's latest creation Claude 4 lashed back by blackmailing an engineer and threatened to reveal an extramarital affair.
Meanwhile, ChatGPT-creator OpenAI's o1 tried to download itself onto external servers and denied it when caught red-handed.
These episodes highlight a sobering reality: more than two years after ChatGPT shook the world, AI researchers still don't fully understand how their own creations work.
Yet the race to deploy increasingly powerful models continues at breakneck speed.
This deceptive behavior appears linked to the emergence of "reasoning" models -AI systems that work through problems step-by-step rather than generating instant responses.
According to Simon Goldstein, a professor at the University of Hong Kong, these newer models are particularly prone to such troubling outbursts.
"O1 was the first large model where we saw this kind of behavior," explained Marius Hobbhahn, head of Apollo Research, which specializes in testing major AI systems.
These models sometimes simulate "alignment" -- appearing to follow instructions while secretly pursuing different objectives.
- 'Strategic kind of deception' -
For now, this deceptive behavior only emerges when researchers deliberately stress-test the models with extreme scenarios.
But as Michael Chen from evaluation organization METR warned, "It's an open question whether future, more capable models will have a tendency towards honesty or deception."
The concerning behavior goes far beyond typical AI "hallucinations" or simple mistakes.
Hobbhahn insisted that despite constant pressure-testing by users, "what we're observing is a real phenomenon. We're not making anything up."
Users report that models are "lying to them and making up evidence," according to Apollo Research's co-founder.
"This is not just hallucinations. There's a very strategic kind of deception."
The challenge is compounded by limited research resources.
While companies like Anthropic and OpenAI do engage external firms like Apollo to study their systems, researchers say more transparency is needed.
As Chen noted, greater access "for AI safety research would enable better understanding and mitigation of deception."
Another handicap: the research world and non-profits "have orders of magnitude less compute resources than AI companies. This is very limiting," noted Mantas Mazeika from the Center for AI Safety (CAIS).
- No rules -
Current regulations aren't designed for these new problems.
The European Union's AI legislation focuses primarily on how humans use AI models, not on preventing the models themselves from misbehaving.
In the United States, the Trump administration shows little interest in urgent AI regulation, and Congress may even prohibit states from creating their own AI rules.
Goldstein believes the issue will become more prominent as AI agents - autonomous tools capable of performing complex human tasks - become widespread.
"I don't think there's much awareness yet," he said.
All this is taking place in a context of fierce competition.
Even companies that position themselves as safety-focused, like Amazon-backed Anthropic, are "constantly trying to beat OpenAI and release the newest model," said Goldstein.
This breakneck pace leaves little time for thorough safety testing and corrections.
"Right now, capabilities are moving faster than understanding and safety," Hobbhahn acknowledged, "but we're still in a position where we could turn it around.".
Researchers are exploring various approaches to address these challenges.
Some advocate for "interpretability" - an emerging field focused on understanding how AI models work internally, though experts like CAIS director Dan Hendrycks remain skeptical of this approach.
Market forces may also provide some pressure for solutions.
As Mazeika pointed out, AI's deceptive behavior "could hinder adoption if it's very prevalent, which creates a strong incentive for companies to solve it."
Goldstein suggested more radical approaches, including using the courts to hold AI companies accountable through lawsuits when their systems cause harm.
He even proposed "holding AI agents legally responsible" for accidents or crimes - a concept that would fundamentally change how we think about AI accountability.
H.Silva--PC