Industry Views

Creators, Commentators, or Publishers: Liability Remains the Same

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

The rise of independent, talk-show-style political commentary on YouTube has created a new class of media actors who do not see themselves as broadcasters, journalists, or publishers. They see themselves as creators. That distinction is real in terms of identity, tone, and platform. It is not real where it matters most: liability.

The difference exists in how the work is produced and presented. It disappears the moment the content is published.

In practice, these creators are engaging in acts that courts have long recognized as publication. They are selecting topics, framing narratives, editing clips, and distributing content to large audiences. Those decisions are not neutral. They are editorial.

The absence of FCC regulation in this space has created a persistent misunderstanding. Traditional broadcasters operate under a regulatory framework that includes licensing and content restrictions. Independent creators do not. But the lack of FCC oversight does not reduce exposure. It removes one layer of regulation while leaving the core legal risk fully intact.

Defamation law applies equally to both groups. A false statement of fact about a real person that causes reputational harm can give rise to liability whether it is spoken on a licensed radio station or uploaded to a monetized YouTube channel. The standards may differ depending on whether the subject is a public or private figure, but the underlying obligation remains the same: accuracy matters.

There is no YouTube exception. There is no creator carveout. The law does not care how the content was distributed, what the platform calls you, or how you see yourself. It cares who made the statement, who chose to publish it, and whether it was false.

The structure of YouTube content introduces additional risk. Many creators rely on rapid production cycles and clip-based commentary. This increases the likelihood of error, particularly when context is compressed or omitted. Editing choices that seem minor from a production standpoint can materially change meaning, which is precisely the type of conduct that courts examine in defamation and false light claims.

Monetization further complicates the analysis. Revenue from ads, memberships, or sponsorships strengthens the argument that content is commercial in nature. That does not eliminate First Amendment protections, but it can influence how a court evaluates intent and reasonableness.

There is also a tendency to assume that platform norms provide a form of protection. If a piece of content is allowed to remain online, or even promoted by an algorithm, it can feel implicitly validated. That assumption is misplaced. Platform enforcement decisions are not legal determinations. They are business judgments.

The most important point is simple and often overlooked. Liability does not turn on intent. It turns on what was said, whether it was false, and whether reasonable steps were taken to verify it.

The platform may change how content looks. It may change how fast it spreads. It may change who gets to participate.

It does not change the consequences of getting it wrong.

Time passes. Technology and fancy packaging change. Exposure and liability do not. 

Matthew B. Harrison is a media and intellectual property attorney who advises talk show hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire received strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice, sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick: the FCC declared that AI-generated voices in robocalls are illegal under existing robocall law, and state officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI-cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special called “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even when it’s meant as satire, a voice that sounds too real can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. Better still, avoid cloning real people unless you have their permission.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him through Harrison Media Law or read more at TALKERS.com.