A 20th Century Rulebook Officiating a 2026 Game
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
Every media creator knows this moment. You are building a segment, you find the clip that makes the point land, and then the hesitation kicks in. Can I use this? Or am I about to invite a problem that distracts from the work itself?
That question has always lived at the center of fair use. What has changed is not the question, but the context around it. Over the past year, two federal court decisions involving AI training have quietly clarified how judges are thinking about copying, transformation, and risk in a media environment that looks nothing like the one for which these rules were originally written.
Fair use was never meant to be static. Anyone treating it as a checklist with guaranteed outcomes is working from an outdated playbook. What we actually have is a 20th century rulebook being used to officiate a game that keeps inventing new positions mid-play. The rules still apply. But how they are interpreted depends heavily on what the technology is doing and why.
That tension showed up clearly in two cases out of the Northern District of California last summer, Bartz v. Anthropic and Kadrey v. Meta. In both, the courts addressed whether training AI systems on copyrighted books could qualify as fair use. These were not headline-grabbing decisions, but they mattered. The judges declined to declare AI training inherently illegal. At the same time, they refused to give it a free pass.
What drove the analysis was context. What material was used. How it was ingested. What the system produced afterward. And, critically, whether the output functioned as a replacement for the original works or something meaningfully different. Reading the opinions, you get the sense that the courts are no longer talking about “AI” as a single concept. Each model is treated almost as its own actor, with its own risk profile.
A simple medical analogy helps. Two patients can take the same medication and have very different outcomes. Dosage matters. Chemistry matters. Timing matters. Courts are beginning to approach AI the same way. The same training data does not guarantee the same behavior, and fair use analysis has to account for that reality.
So why should this matter to someone deciding whether to play a 22-second news clip?
Because the courts relied on the same four factors that govern traditional media use. Purpose. Nature. Amount. Market effect. They did not invent a new test for AI. They applied the existing one with a sharper focus on transformation and substitution. That tells us something important. The framework has not changed. The scrutiny has.
Once you see that, everyday editorial decisions become easier to evaluate. Commentary versus duplication. Reporting versus repackaging. Illustration versus substitution. These are not abstract legal concepts. They are practical distinctions creators make every day, often instinctively. The courts are signaling that those instincts still matter, but they need to be exercised with awareness, not habit.
The mistake I see most often is treating fair use as permission rather than analysis. Fair use is not a shield you invoke after the fact. It is a lens you apply before you hit publish. The recent AI cases reinforce that point. Judges are not interested in labels. They are interested in function and effect.
Fair use has always evolved alongside technology. Printing presses, photocopiers, home recording, digital editing, streaming. AI is just the newest stress test. The takeaway is not panic, and it is not complacency. It is attention.
If you work in the media today, the smart move is to understand how the rulebook is being interpreted while you are busy playing the game. The rules still count. The field just looks different now.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.