AI anxiety as computers get super smart
From Hollywood's death-dealing Terminator to warnings from the late physicist Stephen Hawking and Silicon Valley stars, fears have been fueled that artificial intelligence (AI) could one day destroy humanity.
Tech titans are racing to create AI far smarter than people, prompting US President Joe Biden to issue an executive order on the technology and the European Union to seek major legislation to be agreed by the end of this year.
A two-day summit starting Wednesday in London will explore regulatory safeguards against AI risks such as those below.
- Job stealer? -
The success of OpenAI's ChatGPT has ignited debate about whether "generative AI", which can quickly produce text, images and audio from simple commands in everyday language, is a tremendous threat to jobs held by people.
Automated machinery is already used to do labor in factories, warehouses, and fields.
Generative AI, however, takes aim at white-collar work done by lawyers, doctors, teachers, journalists and even computer programmers.
A report from the McKinsey consulting firm estimates that by the end of this decade, as much as 30 percent of the hours worked in the United States could be automated in a trend accelerated by generative AI.
Boosters of such technology have invoked the notion of a universal basic income in which machines generate wealth that is shared with people freed of the burdens of work.
But it is also possible that companies would reap the profits of improved efficiency, leaving those out of work to fend for themselves.
- Copycat? -
Artists were quick to protest software such as Dall-E, Midjourney and Stable Diffusion, which can create images in nearly any style on demand.
Computer coders and writers followed suit, critiquing AI creators for "training" software on their work, enabling it to replicate their styles or skills without permission or compensation.
AI models have been taught using massive amounts of information and imagery found online.
"That's what it trains on, a fraction of the huge output of humanity," OpenAI co-founder Sam Altman said at a conference in September.
"I think this will be a tool that amplifies human beings, not replace them."
- Disinformation tools? -
Fake news and deepfakes have been around for years, but the ease with which they can be churned out using generative AI raises fears of rampant online deception.
Elections run the risk of being won by those most adept at spreading disinformation, contends cognitive scientist and AI expert Gary Marcus.
"Democracy depends on access to the information needed to make the right decisions," Marcus said.
"If no one knows what's true and what's not, it's all over".
- Fraud? -
Generative AI makes it easier for scammers to create convincing phishing emails, perhaps even learning enough about targets to personalize approaches.
The technology lets them copy a face or a voice, and thus trick people into falling for deceptions such as a claim that a loved one is in danger.
US President Biden called the ability of AI to imitate people's voices "mind blowing" while signing his recent executive order aimed at the technology.
There are even language models trained specifically to produce such malicious content.
- Human role models -
As with other technologies with the potential for good or evil, the main danger is posed by humans who wield it.
Since AI is trained on data put on the web by humans, it can mirror society's prejudices, biases, and injustices.
AI also has the potential to make it easier to create bioweapons, hack banks or power grids, run oppressive government surveillance, and more.
- AI overlord? -
Some industry players fear AI could become so smart that it could seize control from humans.
"It is not difficult to imagine that at some point in the future, our intelligent computers will become as smart or smarter than people," OpenAI co-founder and chief scientist Ilya Sutskever said at a recent TED AI conference.
"The impact of such artificial intelligence is going to be truly vast."
OpenAI and rivals maintain the goal is for AI to benefit humanity, solving long-intractable problems such as climate change.
At the same time, AI industry leaders are calling for thoughtful regulation to prevent risks such as human extinction.