Creators, Commentators, or Publishers: Liability Remains the Same
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
The rise of independent, talk show-style political commentary on YouTube has created a new class of media actors who do not see themselves as broadcasters, journalists, or publishers. They see themselves as creators. That distinction is real in terms of identity, tone, and platform. It is not real where it matters most: liability.
The difference exists in how the work is produced and presented. It disappears the moment the content is published.
In practice, these creators are engaging in acts that courts have long recognized as publication. They are selecting topics, framing narratives, editing clips, and distributing content to large audiences. Those decisions are not neutral. They are editorial.
The absence of FCC regulation in this space has created a persistent misunderstanding. Traditional broadcasters operate under a regulatory framework that includes licensing and content restrictions. Independent creators do not. But the lack of FCC oversight does not reduce exposure. It removes one layer of regulation while leaving the core legal risk fully intact.
Defamation law applies equally to both groups. A false statement of fact about a real person that causes reputational harm can give rise to liability whether it is spoken on a licensed radio station or uploaded to a monetized YouTube channel. The standards may differ depending on whether the subject is a public or private figure, but the underlying obligation remains the same: accuracy matters.
There is no YouTube exception. There is no creator carveout. The law does not care how the content was distributed, what the platform calls you, or how you see yourself. It cares who made the statement, who chose to publish it, and whether it was false.
The structure of YouTube content introduces additional risk. Many creators rely on rapid production cycles and clip-based commentary. This increases the likelihood of error, particularly when context is compressed or omitted. Editing choices that seem minor from a production standpoint can materially change meaning, which is precisely the type of conduct that courts examine in defamation and false light claims.
Monetization further complicates the analysis. Revenue from ads, memberships, or sponsorships strengthens the argument that content is commercial in nature. That does not eliminate First Amendment protections, but it can influence how a court evaluates intent and reasonableness.
There is also a tendency to assume that platform norms provide a form of protection. If a piece of content is allowed to remain online, or even promoted by an algorithm, it can feel implicitly validated. That assumption is misplaced. Platform enforcement decisions are not legal determinations. They are business judgments.
The most important point is simple and often overlooked. Liability does not turn on intent. It turns on what was said, whether it was false, and whether reasonable steps were taken to verify it.
The platform may change how content looks. It may change how fast it spreads. It may change who gets to participate.
It does not change the consequences of getting it wrong.
Time passes and the packaging changes. The exposure does not.
Matthew B. Harrison is a media and intellectual property attorney who advises talk show hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

AI is now embedded in the modern newsroom. Not as a headline, not as a novelty, but as infrastructure. It drafts outlines, summarizes complex reporting, surfaces background details, and accelerates prep for live conversations. For media creators operating under relentless deadlines, that efficiency is not theoretical. It is practical and daily.
The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality
Every talk host knows the move: play the clip. It might be a moment from late-night TV, a political ad, or a viral post that sets the table for the segment. It’s how commentary comes alive – listeners hear it, react to it, and stay tuned for your take.
When we first covered this case, it felt like only 2024 could invent it – a disgraced congressman, George Santos, selling Cameos and a late-night host, Jimmy Kimmel, buying them under fake names to make a point about truth and ego. A year later, the Second Circuit turned that punchline into precedent.
In the golden age of broadcasting, the rules were clear. If you edited the message, you owned the consequences. That was the tradeoff for editorial control. But today’s digital platforms – YouTube, X, TikTok, Instagram – have rewritten that deal. Broadcasters and those who operate within the FCC regulatory framework are paying the price.
When Georgia-based nationally syndicated radio personality and Second Amendment advocate Mark Walters (longtime host of “Armed American Radio”) learned that ChatGPT had falsely claimed he was involved in a criminal embezzlement scheme, he did what few in the media world have dared to do: he took on one of the world’s most powerful tech companies in a court of law.
Harrison served as a judge for a national moot court competition last evening (2/22) at the 1st Circuit Court of Appeals in Boston, MA. The American Bar Association, Law Student Division holds a number of annual national moot court competitions. One such event, the National Appellate Advocacy Competition, emphasizes the development of oral advocacy skills through a realistic appellate experience, with competitors arguing a hypothetical appeal before the United States Supreme Court.

This year’s legal question focused on the Communications Decency Act – “Section 230” – and the application of its liability exception for internet service providers hosting third-party content. The hypothetical involved a journalist’s photo, turned into a meme, being used in advertising (CBD, ED treatment, gambling) without permission or compensation, in violation of applicable state right of publicity statutes.

Harrison tells TALKERS, “We are at one of those sensitive times in history where technology is changing at a quicker pace than the legal system and legislators can keep up with – particularly at the consequential juncture of big tech and mass communications. I was impressed and heartened by the articulateness and grasp of the Section 230 issue displayed by the law students arguing before me.”