Meta, Google, OpenAI Face New Pressure After Anthropic’s $1.5B Shadow Library Deal

Around half a million writers are set to be eligible for at least $3,000 each under a $1.5 billion class action settlement reached with Anthropic, the company behind the Claude AI system. According to Yahoo Finance, the agreement follows allegations that Anthropic downloaded millions of books from so-called shadow libraries to train its models.

The settlement is described as the largest payout in the history of U.S. copyright law, but it does not resolve the broader question of whether using copyrighted works to train AI is unlawful. In June, federal judge William Alsup ruled that training AI on copyrighted material can qualify as fair use, calling it a “transformative” practice. As cited by Yahoo Finance, the judge wrote that Anthropic’s models trained on works “not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”

The case, Bartz v. Anthropic, had centered on two distinct issues: the legality of AI training on copyrighted content and the alleged piracy of books from shadow libraries. It was the piracy allegation—rather than the training itself—that advanced the suit toward trial before the settlement eliminated the need for one. Yahoo Finance notes that the decision offers a reference point for other courts as numerous similar cases proceed against companies including Meta, Google, OpenAI, and Midjourney.

Anthropic’s response and what comes next

Company statement and case implications

“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel at Anthropic, in a statement quoted by Yahoo Finance. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”

The payments to eligible writers are tied to the alleged unauthorized downloading of books, not to the act of training AI models on copyrighted material itself. Yahoo Finance reports that this distinction reflects Judge Alsup’s earlier finding that AI training can be protected by fair use, while highlighting the separate legal exposure posed by how training data is acquired.

With dozens of related cases still in motion, Bartz v. Anthropic now provides a recent precedent on fair use in the context of AI, while the settlement underscores the legal risks of sourcing data from shadow libraries. As courts continue to parse these issues, outcomes in other jurisdictions may differ.
