Fair Use in 2025: The Courts Draw New Lines
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
Imagine an AI trained on millions of books – and a federal judge saying that’s fair use. That’s exactly what happened this summer in Bartz v. Anthropic, a case now shaping how creators, publishers, and tech giants fight over the limits of copyright.
Judges in California have sent a strong signal: training large language models (LLMs) on copyrighted works can qualify as fair use if the material is lawfully obtained. In Bartz, Judge William Alsup compared Anthropic’s use of purchased books to an author learning from past works. That kind of transformation, he said, doesn’t substitute for the original.
But Alsup drew a hard line against piracy. If a dataset includes books from unauthorized “shadow libraries,” the fair use defense disappears. Those claims are still heading to trial in December, underscoring that source matters just as much as purpose.
Two days later, Judge Vince Chhabria reached a similar conclusion in Kadrey v. Meta. He called Meta’s training “highly transformative,” but dismissed the lawsuit because the authors failed to show real market harm. Together, the rulings show that transformation is a strong shield, but it isn’t absolute. Market evidence and lawful acquisition remain decisive.
AI training fights aren’t limited to novelists. The New York Times v. OpenAI case is pressing forward after a judge refused to dismiss claims that OpenAI and Microsoft undermined the paper’s market by absorbing its reporting into AI products. And in Hollywood, Disney and Universal are suing Midjourney, alleging its system lets users generate characters like Spider-Man or Shrek – raising the unsettled question of whether AI outputs themselves can infringe.
The lesson is straightforward: fair use is evolving, but not limitless. Courts are leaning toward protecting transformative uses of content – particularly when it’s lawfully sourced – but remain wary of piracy and economic harm.
That means media professionals can’t assume that sharing content online makes it free for training. Courts consistently recognize that freely available journalism, interviews, and broadcasts still carry market value through advertising, sponsorship, and brand equity. If AI systems cut into those markets, the fair use defense weakens.
For now, creators should watch the December Anthropic trial and the Midjourney litigation closely. The courts have blessed AI’s right to learn – but they haven’t yet decided how far those lessons can travel once the outputs begin to look and feel like the originals.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com
In a ruling that should catch the attention of every talk host and media creator dabbling in AI, a Georgia court has dismissed “Armed American Radio” syndicated host Mark Walters’ defamation lawsuit against OpenAI. The case revolved around a disturbing but increasingly common glitch: a chatbot “hallucinating” categorically false but believable information.
The complaint states that journalist Fred Riehl was researching the case of The Second Amendment Foundation v. Robert Ferguson and asked ChatGPT to summarize that complaint. The summary he received stated that the suit’s plaintiff, Second Amendment Foundation founder Alan Gottlieb, accuses Walters, as treasurer and chief financial officer, of embezzling funds. Walters says, and Gottlieb confirms, that he never served in either position and didn’t steal anything. In the AI world, false text from services like ChatGPT is called a “hallucination.” Walters claimed that the output stating he was accused of embezzling funds from the Second Amendment Foundation defamed him. No such accusation ever actually took place.
In its Motion to Dismiss, OpenAI argued several points, including that Georgia is not the proper jurisdiction, but it summarized its argument that Walters’ claims didn’t meet the burden of defamation when it said, “Even more fundamentally, Riehl’s use of ChatGPT did not cause a ‘publication’ of the outputs. OpenAI’s Terms of Use make clear that ChatGPT is a tool that assists the user in the writing or creation of draft content and that the user owns the content they generate with ChatGPT. Riehl agreed to abide by these Terms of Use, including the requirement that users ‘verify’ and ‘take ultimate responsibility for the content being published.’ As a matter of law, this creation of draft content for the user’s internal benefit is not ‘publication.’”
As with any defamation case, Walters had to prove he suffered damages, and the case was closely watched as it appears to be the first such legal action involving the work of AI.