US judge backs using copyrighted books to train AI
A US federal judge has sided with Anthropic over the training of its artificial intelligence models on copyrighted books without authors' permission, a decision that could set a major legal precedent for AI development.
District Court Judge William Alsup ruled on Monday that the company's training of its Claude AI models with books bought or pirated was allowed under the "fair use" doctrine in the US Copyright Act.
"Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use," Alsup wrote in his decision.
"The technology at issue was among the most transformative many of us will see in our lifetimes," Alsup added in his 32-page decision, comparing AI training to how humans learn by reading books.
Tremendous amounts of data are needed to train large language models powering generative AI.
Musicians, book authors, visual artists and news publications have sued various AI companies that used their data without permission or payment.
AI companies generally defend their practices by claiming fair use, arguing that training AI on large datasets fundamentally transforms the original content and is necessary for innovation.
"We are pleased that the court recognized that using 'works to train LLMs was transformative,'" an Anthropic spokesperson said in response to an AFP query.
The judge's decision is "consistent with copyright's purpose in enabling creativity and fostering scientific progress," the spokesperson added.
- Blanket protection rejected -
The ruling stems from a class-action lawsuit filed by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, who accused Anthropic of illegally copying their books to train Claude, the company's AI chatbot that rivals ChatGPT.
However, Alsup rejected Anthropic's bid for blanket protection, ruling that the company's practice of downloading millions of pirated books to build a permanent digital library was not justified by fair use protections.
Along with downloading of books from websites offering pirated works, Anthropic bought copyrighted books, scanned the pages and stored them in digital format, according to court documents.
Anthropic's aim was to amass a library of "all the books in the world" and train AI models on that content as it saw fit, the judge said in his ruling.
While training AI models on the content itself did not violate the law, the judge found, downloading pirated copies to build a general-purpose library constituted copyright infringement regardless of any eventual training use.
The case will now proceed to trial over the pirated library copies, where Anthropic could face financial damages.
Anthropic said it disagreed with going to trial on this part of the decision and was evaluating its legal options.
Valued at $61.5 billion and heavily backed by Amazon, Anthropic was founded in 2021 by former OpenAI executives.
The company, known for its Claude chatbot and AI models, positions itself as focused on AI safety and responsible development.
P.Queiroz--PC