Creators, Commentators, or Publishers: Liability Remains the Same
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
The rise of independent, talk-show-style political commentary on YouTube has created a new class of media actors who do not see themselves as broadcasters, journalists, or publishers. They see themselves as creators. That distinction is real in terms of identity, tone, and platform. It is not real where it matters most: liability.
The difference exists in how the work is produced and presented. It disappears the moment the content is published.
In practice, these creators are engaging in acts that courts have long recognized as publication. They are selecting topics, framing narratives, editing clips, and distributing content to large audiences. Those decisions are not neutral. They are editorial.
The absence of FCC regulation in this space has created a persistent misunderstanding. Traditional broadcasters operate under a regulatory framework that includes licensing and content restrictions. Independent creators do not. But the lack of FCC oversight does not reduce exposure. It removes one layer of regulation while leaving the core legal risk fully intact.
Defamation law applies equally to both groups. A false statement of fact about a real person that causes reputational harm can give rise to liability whether it is spoken on a licensed radio station or uploaded to a monetized YouTube channel. The standards may differ depending on whether the subject is a public or private figure, but the underlying obligation remains the same: accuracy matters.
There is no YouTube exception. There is no creator carveout. The law does not care how the content was distributed, what the platform calls you, or how you see yourself. It cares who made the statement, who chose to publish it, and whether it was false.
The structure of YouTube content introduces additional risk. Many creators rely on rapid production cycles and clip-based commentary. This increases the likelihood of error, particularly when context is compressed or omitted. Editing choices that seem minor from a production standpoint can materially change meaning, which is precisely the type of conduct that courts examine in defamation and false light claims.
Monetization further complicates the analysis. Revenue from ads, memberships, or sponsorships strengthens the argument that content is commercial in nature. That does not eliminate First Amendment protections, but it can influence how a court evaluates intent and reasonableness.
There is also a tendency to assume that platform norms provide a form of protection. If a piece of content is allowed to remain online, or even promoted by an algorithm, it can feel implicitly validated. That assumption is misplaced. Platform enforcement decisions are not legal determinations. They are business judgments.
The most important point is simple and often overlooked. Liability does not turn on good intentions. It turns on what was said, whether it was false, and whether reasonable steps were taken to verify it before publication.
The platform may change how content looks. It may change how fast it spreads. It may change who gets to participate.
It does not change the consequences of getting it wrong.
Time passes. Technology and packaging change. Exposure and liability do not.
Matthew B. Harrison is a media and intellectual property attorney who advises talk show hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.
