A 20th-Century Rulebook Officiating a 2026 Game
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
Every media creator knows this moment. You are building a segment, you find the clip that makes the point land, and then the hesitation kicks in. Can I use this? Or am I about to invite a problem that distracts from the work itself?
That question has always lived at the center of fair use. What has changed is not the question, but the context around it. Over the past year, two federal court decisions involving AI training have quietly clarified how judges are thinking about copying, transformation, and risk in a media environment that looks nothing like the one for which these rules were originally written.
Fair use was never meant to be static. Anyone treating it as a checklist with guaranteed outcomes is working from an outdated playbook. What we actually have is a 20th-century rulebook being used to officiate a game that keeps inventing new positions mid-play. The rules still apply. But how they are interpreted depends heavily on what the technology is doing and why.
That tension showed up clearly in two cases out of the Northern District of California last summer. In both, the courts addressed whether training AI systems on copyrighted books could qualify as fair use. These were not headline-grabbing decisions, but they mattered. The judges declined to declare AI training inherently illegal. At the same time, they refused to give it a free pass.
What drove the analysis was context. What material was used. How it was ingested. What the system produced afterward. And, critically, whether the output functioned as a replacement for the original works or something meaningfully different. Reading the opinions, you get the sense that the courts are no longer talking about “AI” as a single concept. Each model is treated almost as its own actor, with its own risk profile.
A simple medical analogy helps. Two patients can take the same medication and have very different outcomes. Dosage matters. Chemistry matters. Timing matters. Courts are beginning to approach AI the same way. The same training data does not guarantee the same behavior, and fair use analysis has to account for that reality.
So why should this matter to someone deciding whether to play a 22-second news clip?
Because the courts relied on the same four factors that govern traditional media use. Purpose. Nature. Amount. Market effect. They did not invent a new test for AI. They applied the existing one with a sharper focus on transformation and substitution. That tells us something important. The framework has not changed. The scrutiny has.
Once you see that, everyday editorial decisions become easier to evaluate. Commentary versus duplication. Reporting versus repackaging. Illustration versus substitution. These are not abstract legal concepts. They are practical distinctions creators make every day, often instinctively. The courts are signaling that those instincts still matter, but they need to be exercised with awareness, not habit.
The mistake I see most often is treating fair use as permission rather than analysis. Fair use is not a shield you invoke after the fact. It is a lens you apply before you hit publish. The recent AI cases reinforce that point. Judges are not interested in labels. They are interested in function and effect.
Fair use has always evolved alongside technology. Printing presses, photocopiers, home recording, digital editing, streaming. AI is just the newest stress test. The takeaway is not panic, and it is not complacency. It is attention.
If you work in the media today, the smart move is to understand how the rulebook is being interpreted while you are busy playing the game. The rules still count. The field just looks different now.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.