Roughly half a million authors are expected to be eligible for at least $3,000 each under a $1.5 billion class action settlement reached with Anthropic, the company behind the Claude AI models. According to Yahoo Finance, the agreement follows allegations that Anthropic downloaded millions of books from so-called shadow libraries to train its models.
Largest payout meets ongoing legal debate
The settlement is described as the largest payout in the history of U.S. copyright law, but it does not resolve the broader question of whether training AI on copyrighted works is unlawful. In June, U.S. District Judge William Alsup ruled that training AI on copyrighted material can qualify as fair use, calling the practice “transformative.” As cited by Yahoo Finance, the judge wrote that Anthropic’s models trained on works “not to race ahead and replicate or supplant them — but to turn a hard corner and create something different.”
The case, Bartz v. Anthropic, had centered on two distinct issues: the legality of AI training on copyrighted content and the alleged piracy of books from shadow libraries. It was the piracy allegation—rather than the training itself—that advanced the suit toward trial before the settlement eliminated the need for one. Yahoo Finance notes that the decision offers a reference point for other courts as numerous similar cases proceed against companies including Meta, Google, OpenAI, and Midjourney.
Anthropic’s response and what comes next
Company statement and case implications
“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims,” said Aparna Sridhar, deputy general counsel at Anthropic, in a statement quoted by Yahoo Finance. “We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.”
The payments to eligible writers are tied to the alleged unauthorized downloading of books, not to the act of training AI models on copyrighted material itself. Yahoo Finance reports that this distinction reflects Judge Alsup’s earlier finding that AI training can be protected by fair use, while highlighting the separate legal exposure posed by how training data is acquired.
With dozens of related cases still in motion, Bartz v. Anthropic now offers a recent reference point on fair use in the context of AI training, while the settlement underscores the legal risks of sourcing data from shadow libraries. As a district court ruling, it does not bind other courts, and outcomes in other jurisdictions may differ as judges continue to parse these issues.