Why “Play the Clip” Still Matters
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
Every talk host knows the move: play the clip. It might be a moment from late-night TV, a political ad, or a viral post that sets the table for the segment. It’s how commentary comes alive – listeners hear it, react to it, and stay tuned for your take.
That simple act is powered by a fragile piece of legal machinery: the fair use balancing test. Without it, half of talk radio, podcasting, and online news/talk commentary wouldn’t exist. Fair use allows creators to quote, parody, or critique copyrighted material without permission – but only when the new use transforms the old. It’s the backbone of what we now call “react” or “remix” culture.
Fair use isn’t a license; it’s a defense. When you rely on it, you admit you used someone else’s work and trust that a judge will see your purpose – criticism, news reporting, education – as transformative. That’s a high-wire act few think about when the mic is hot.
The doctrine works on a sliding scale: courts weigh the four factors set out in Section 107 of the Copyright Act – purpose, nature, amount, and market effect. In plain English, they ask: Did you change the meaning? Did you take too much? Did you cost the owner money? There are no checklists and no guarantees.
That flexibility is what makes American media vibrant – and also what keeps lawyers busy. Each decision takes time, context, and money. The price of creative freedom is uncertainty.
The same logic now drives the debate over AI training and voice cloning. Machines don’t “comment” on your broadcast; they absorb it. And if courts treat that as transformative analysis instead of reproduction, the next generation of “hosts” may not need microphones at all.
For broadcasters, that’s the new frontier: your archives, tone, and phrasing are training data. Once ingested, they can be repurposed, remixed, and re-voiced without violating traditional copyright rules. The fair use balancing test may protect innovation – but it rarely protects the innovator.
Fair use was designed to keep culture evolving, not to leave creators behind. It balances a creator’s right to profit against society’s right to build upon shared work. But balance only works if both sides know the weight they’re carrying.
Every time you play the clip, remember you’re exercising one of the oldest and most essential freedoms in media. Just make sure the next voice that plays you is doing the same thing – for the right reasons, and under the same rules.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com