Industry Views

Navigating the Deepfake Dilemma in the Age of AI Impersonation

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality

Every seasoned broadcaster or media creator has a radar for nonsense. You have spent years vetting sources, confirming facts, and throwing out anything that feels unreliable. The complication now is that artificial intelligence can wrap unreliable content in a polished package that looks and sounds legitimate.

This article is not aimed at people creating AI impersonation channels. If that is your hobby, nothing here will make you feel more confident about it. This is for the professionals whose job is to keep the information stream as clean as possible. You are not making deepfakes. You are trying to avoid stepping in them and trying even harder not to amplify them.

Once something looks real and sounds real, a significant segment of your audience will assume it is real. That changes the amount of scrutiny you need to apply. The burden now falls on people like you to pause before reacting. 

Two Clips That Tell the Whole Story

Consider two current examples. The first is the synthetic Biden speech that appears all over social media. It presents a younger, steadier president delivering remarks that many supporters wish he would make. It is polished, convincing, and created entirely by artificial intelligence.

The second is the cartoonish Trump fighter jet video that shows him dropping waste on unsuspecting civilians. No one believes it is real. Yet both types of content live in the same online ecosystem and both get shared widely.

The underlying facts do not matter once the clip begins circulating. If you repeat it on the air without checking it, you become the next link in the distribution chain. Not every untrue clip is disinformation. People get things wrong without intending to deceive, and the law recognizes that. What changes here is the plausibility. When an artificial performance can fool a reasonable viewer, the difference between an honest mistake and a misleading impression becomes something a finder of fact sorts out later. Your audience cannot make that distinction in real time.

Parody and Satire Still Exist, but AI Is Blurring the Edges

Parody imitates a person to comment on that person. Satire uses the imitation to comment on something else. These categories worked because traditional impersonations were obvious. A cartoon voice or exaggerated caricature did not fool anyone.

A convincing AI impersonation removes the cues that signal it is a joke. It sounds like the celebrity. It looks like the celebrity. It uses words that fit the celebrity’s public image. It stops functioning as commentary and becomes a manufactured performance that appears authentic. That is when broadcasters get pulled into the confusion even though they had nothing to do with the creation. 

When the Fake Version Starts Crowding Out the Real One

Public figures choose when and where to speak. A Robert De Niro interview has weight because he rarely gives one. A carefully planned appearance on a respected platform signals importance.

When dozens of artificial De Niros begin posting daily commentary, the significance of the real appearance is reduced. The market becomes crowded. Authenticity becomes harder to protect. This is not only a reputational issue. It is an economic one rooted in scarcity and control.

You may think you are sharing a harmless clip. In reality, you might be participating in the dilution of someone’s legitimate business asset. 

Disclaimers Are Not Shields

Many deepfake channels use disclaimers. They say things like “this is parody” or “this is not the real person.” A parking garage can also post a sign saying it is not responsible for damage to your car. That does not absolve the garage when something collapses on your vehicle.

A disclaimer that no one negotiates or meaningfully acknowledges does not protect the creator or the people who share the clip. If viewers believe it is real, the disclaimer (often hidden in plain sight) is irrelevant. 

The Liability No One Expects: Damage You Did Not Create

You can become responsible for the fallout without ever touching the original video. If you talk about a deepfake on the air, share it on social media, or frame it as something that might be true, you help it spread. Your audience trusts you. If you repeat something inaccurate, even unintentionally, they begin questioning your judgment. One believable deepfake can undermine years of credibility. 

Platforms Profit From the Confusion

Here is the structural issue that rarely gets discussed. Platforms have every financial incentive to push deepfakes. They generate engagement. Engagement generates revenue. Revenue satisfies stockholders. This stands in tension with the spirit of Section 230, which was designed to protect neutral platforms, not platforms that amplify synthetic speech they know is likely to deceive.

If a platform has the ability to detect and label deepfakes and chooses not to, the responsibility shifts to you. The platform benefits. You absorb the risk. 

What Media Professionals Should Do

You do not need new laws. You do not need to give warnings to your audience. You do not need to panic. You do need to stay sharp.

Here is the quick test. Ask yourself four questions.

Is the source authenticated?
Has the real person ever said anything similar?
Is the platform known for synthetic or poorly moderated content?
Does anything feel slightly off even when the clip looks perfect?

If any answer gives you pause, treat the clip as suspect. Treat it as content, not truth. 

Final Thought (at Least for Now)

Artificial intelligence will only become more convincing. Your role is not to serve as a gatekeeper. Your role is to maintain professional judgment. When a clip sits between obviously fake and plausibly real, that is the moment to verify and, when necessary, seek guidance. There is little doubt that the inevitable proliferation of phony internet “shows” is about to bloom into a controversial legal, ethical, and financial industry issue.  

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Why “Play the Clip” Still Matters

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Every talk host knows the move: play the clip. It might be a moment from late-night TV, a political ad, or a viral post that sets the table for the segment. It’s how commentary comes alive – listeners hear it, react to it, and stay tuned for your take.

That simple act is powered by a fragile piece of legal machinery known as the fair use balancing test. Without it, half of talk radio, podcasting, and online news/talk commentary wouldn’t exist. Fair use allows creators to quote, parody, or critique copyrighted material without permission – but only when the new use transforms the old. It’s the backbone of what we now call “react” or “remix” culture.

Fair use isn’t a license; it’s a defense. When you rely on it, you admit you used someone else’s work and trust that a judge will see your purpose – criticism, news, education – as transformative. That’s a high-wire act few think about when the mic is hot.

The doctrine works on a sliding scale: courts weigh four factors – purpose, nature, amount, and market effect. In plain English, they ask: Did you change the meaning? Did you take too much? Did you cost the owner money? There are no checklists and no guarantees.

That flexibility is what makes American media vibrant – and also what keeps lawyers busy. Each decision takes time, context, and money. The price of creative freedom is uncertainty.

The same logic now drives the debate over AI training and voice cloning. Machines don’t “comment” on your broadcast; they absorb it. And if courts treat that as transformative analysis instead of reproduction, the next generation of “hosts” may not need microphones at all.

For broadcasters, that’s the new frontier: your archives, tone, and phrasing are training data. Once ingested, they can be repurposed, remixed, and re-voiced without violating traditional copyright rules. The fair use balancing test may protect innovation – but it rarely protects the innovator.

Fair use was designed to keep culture evolving, not to leave creators behind. It balances a creator’s right to profit against society’s right to build upon shared work. But balance only works if both sides know the weight they’re carrying.

Every time you play the clip, remember you’re exercising one of the oldest and most essential freedoms in media. Just make sure the next voice that plays you is doing the same thing – for the right reasons, and under the same rules.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com

Industry Views

When Satire Stands Its Ground

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

When we first covered this case, it felt like only 2024 could invent it – a disgraced congressman, George Santos, selling Cameos and a late-night host, Jimmy Kimmel, buying them under fake names to make a point about truth and ego. A year later, the Second Circuit turned that punchline into precedent. (Read story here: https://talkers.com/2024/12/19/jimmy-kimmels-fair-use-victory-what-it-means-for-content-creators/)

And just to clear the record: this has nothing to do with Jimmy Kimmel’s unrelated dust-up with FCC Commissioner Brendan Carr. Different story, different planet. This one’s about copyright and commentary – and it’s a clear win for both.

The Set-Up

After his expulsion from Congress, George Santos began offering paid video shout-outs on Cameo. Kimmel’s writers sent absurd requests under pseudonyms for a segment called “Will Santos Say It?” – and he did. The show aired those clips to highlight how easily a public figure would say anything for a fee.

(If you want a taste, look up “Jimmy Kimmel Pranks George Santos on Cameo” on YouTube. That’s the kind of transformative satire the court later called “sarcastic criticism and commentary.”)

Santos sued Kimmel, ABC, and Disney for copyright infringement, fraud, and breach of contract, claiming the videos were sold for “personal use.” The district court tossed it; Santos appealed.

The Ruling

On September 15, 2025, the Second Circuit unanimously affirmed the dismissal. The panel said Kimmel’s use was transformative: he turned Santos’s self-promotion into political satire. Even Santos’s complaint described the bit as sarcastic commentary.

Claims of “market harm” fell flat. Airing a few clips on network TV doesn’t compete with Cameo. Embarrassment isn’t economic loss.

And the supposed bad faith – using fake names to order the clips – didn’t undo fair use. The court stuck to the statutory factors: purpose, nature, amount, and effect. Mischief isn’t a fifth one.

The rest of the claims – fraud, contract, enrichment – stayed dismissed as pre-empted or too thin to matter.

Why It Matters

This decision lands as courts wrestle with whether AI’s use of copyrighted works can ever be “transformative.” Santos v. Kimmel shows what that word really means: a human taking existing material and using it to say something new.

Fair use protects meaning, not mimicry. That’s why satire, commentary, and criticism still stand when they have a point.

For media creators, the lesson is simple: transformation beats permission. If you use third-party material, make sure you’re adding perspective – not just recycling content. That, more than any fine print, is what keeps you on the right side of the line.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Neutraliars: The Platforms That Edit Like Publishers but Hide Behind Neutrality

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In the golden age of broadcasting, the rules were clear. If you edited the message, you owned the consequences. That was the tradeoff for editorial control. But today’s digital platforms – YouTube, X, TikTok, Instagram – have rewritten that deal. Broadcasters and those who operate within the FCC regulatory framework are paying the price.

These companies claim to be neutral conduits for our content. But behind the curtain, they make choices that mirror the editorial judgment of any news director: flagging clips, muting interviews, throttling reach, and shadow banning accounts. All while insisting they bear no responsibility for the content they carry.

They want the control of publishers without the accountability. I call them neutraliars.

A “neutraliar” is a platform that claims neutrality while quietly shaping public discourse. It edits without transparency, enforces vague rules inconsistently, and hides bias behind shifting community standards.

Broadcasters understand the weight of editorial power. Reputation, liability, and trust come with every decision. But platforms operate under a different set of rules. They remove content for “context violations,” downgrade interviews for being “borderline,” and rarely offer explanations. No appeals. No accountability.

This isn’t just technical policy – it’s a legal strategy. Under Section 230 of the Communications Decency Act, platforms enjoy broad immunity from liability related to user content. What was originally intended to allow moderation of obscene or unlawful material has become a catch-all defense for everything short of outright defamation or criminal conduct.

These companies act like editors when it suits them, curating and prioritizing content. But when challenged, they retreat behind the label of “neutral platform.” Courts, regulators, and lawmakers have mostly let it slide.

But broadcasters shouldn’t.

Neutraliars are distorting the public square. Not through overt censorship, but through asymmetry. Traditional broadcasters play by clear rules – standards of fairness, disclosure, and attribution. Meanwhile, tech platforms make unseen decisions that influence whether a segment is heard, seen, or quietly buried.

So, what’s the practical takeaway?

Don’t confuse distribution with trust.

Just because a platform carries your content doesn’t mean it supports your voice. Every upload is subject to algorithms, undisclosed enforcement criteria, and decisions made by people you’ll never meet. The clip you expected to go viral? Silenced. The balanced debate you aired? Removed for tone. The satire? Flagged for potential harm.

The smarter approach is to diversify your presence. Own your archive. Use direct communication tools – e-mail lists, podcast feeds, and websites you control. Syndicate broadly but never rely solely on one platform. Monitor takedowns and unexplained drops in engagement. These signals matter.

Platforms will continue to call themselves neutral as long as it protects their business model. But we know better. If a company edits content like a publisher and silences creators like a censor, it should be treated like both.

And when you get the inevitable takedown notice wrapped in vague policy language and polished PR spin, keep one word in mind.

Neutraliars.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick. The FCC banned AI robocalls. State officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special, “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And best of all, avoid cloning real people unless you have their OK.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Harrison Legal Group or read more at TALKERS.com.

Industry Views

Mark Walters v. OpenAI: A Landmark Case for Spoken Word Media

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

When Georgia-based, nationally syndicated radio personality and Second Amendment advocate Mark Walters (longtime host of “Armed American Radio”) learned that ChatGPT had falsely claimed he was involved in a criminal embezzlement scheme, he did what few in the media world have dared to do. Walters stood up when others were silent and took on an incredibly powerful tech company, one of the biggest in the world, in a court of law.

Taking the Fight to Big Tech

By filing suit against OpenAI, the creator of ChatGPT, Walters became the first person in the United States to test the boundaries of defamation law in the age of generative artificial intelligence.

His case was not simply about clearing his name. It was about drawing a line. Can artificial intelligence generate and distribute false and damaging information about a real person without any legal accountability?

While the court ultimately ruled in OpenAI’s favor on narrow legal grounds, the impact of this case is far from finished. Walters’ lawsuit broke new ground in several important ways:

— It was the first known defamation lawsuit filed against an AI developer based on content generated by an AI system.
— It brought into the open critical questions about responsibility, accuracy, and liability when AI systems are used to produce statements that sound human but carry no editorial oversight.
— It added fuel to the conversation about the effectiveness of “use at your own risk” disclaimers when real-world reputational damage hangs in the balance.

Implications for the Radio and Podcasting Community

For spoken-word creators on any platform – terrestrial, satellite, or the open internet – this case is a wake-up call, a canary in the coal mine. Many shows rely on AI tools for research, summaries, voice generation, or even show scripts. But what happens when those tools get it wrong (beyond embarrassment, and in some cases fines or terminations)? And worse, what happens when those errors affect real people?

The legal system, as has often been noted, is still playing catch-up. Although the court ruled that the fabricated ChatGPT statement lacked the necessary elements of defamation under Georgia law, including provable harm and demonstrable fault, the decision highlighted how unprepared current frameworks are for this fast-moving, voice-driven digital landscape.

Where the Industry Goes from Here

Walters’ experience points to the urgent need for new protections and clearer guidelines:

— Creators deserve assurance that the tools they use are built with accountability in mind. This would extend to copyright infringement and to defamation.
— Developers must be more transparent about how their systems operate and the risks they create. This would identify bias and attempt to counteract it.
— Policymakers need to bring clarity to who bears responsibility when software, not a person, becomes the speaker.

A Case That Signals a Larger Reckoning

Mark Walters may not have won this round in court, but his decision to take on a tech giant helped illuminate how quickly generative AI can create legal, ethical, and reputational risks for anyone with a public presence. For those of us working in media, especially in formats built on trust, voice, and credibility, his case should not be ignored.

“This wasn’t about money. This was about the truth,” Walters tells TALKERS. “If we don’t draw a line now, there may not be one left to draw.”

To listen to a longform interview with Mark Walters conducted by TALKERS publisher Michael Harrison, please click here

Media attorney, Matthew B. Harrison is VP/Associate Publisher at TALKERS; Senior Partner at Harrison Media Law; and Executive Producer at Goodphone Communications. He is available for private consultation and media industry contract representation. He can be reached by phone at 724-484-3529 or email at matthew@harrisonmedialaw.com. He teaches “Legal Issues in Digital Media” and serves as a regular contributor to industry discussions on fair use, AI, and free expression.

Industry News

Matthew B. Harrison Holds Court Over Section 230 Explanation for Law Students at 1st Circuit Court of Appeals in Boston

As an attorney with extensive front-line expertise in media law, TALKERS associate publisher and senior partner in the Harrison Legal Group Matthew B. Harrison (pictured at right on the bench) was selected to hold court as “acting” judge in a moot trial involving Section 230 for law students engaged in a national competition last evening (2/22) at the 1st Circuit Court of Appeals in Boston, MA. The American Bar Association, Law Student Division holds a number of annual national moot court competitions. One such event, the National Appellate Advocacy Competition, emphasizes the development of oral advocacy skills through a realistic appellate advocacy experience, with moot court competitors participating in a hypothetical appeal to the United States Supreme Court. This year’s legal question focused on the Communications Decency Act – “Section 230” – and the application of its exception from liability for internet service providers for the acts of third parties to the realistic scenario of a journalist’s photo-turned-meme being used in advertising (CBD, ED treatment, gambling) without permission or compensation in violation of applicable state right of publicity statutes. Harrison tells TALKERS, “We are at one of those sensitive times in history where technology is changing at a quicker pace than the legal system and legislators can keep up with – particularly at the consequential juncture of big tech and mass communications. I was impressed and heartened by the articulateness and grasp of the Section 230 issue displayed by the law students arguing before me.”

Industry News

Michael Harrison Says AI is One of the Most Important Talk Topics of Our Times

TALKERS founder Michael Harrison has kicked off a nationwide guesting tour of talk shows promoting discussion of the upside and downside of AI in conjunction with the release of the new song, “I Got a Line in New York City,” by the long-established classic rock group, Gunhill Road. Harrison performs lead vocals on the track alongside band members Steve Goldrich, Paul Reisch, and Brian Koonin. The music video of the song (produced by Harrison’s son and TALKERS associate publisher Matthew B. Harrison) has been described as a computer’s “fever dream about the Big Apple.” Although the music is totally organic, all of the visual graphics on the video have been assisted in their creation by generative artificial intelligence. Harrison says, “There’s huge interest in the topic of AI including the existential issues of its potential impact on our species. In the art community, debate is raging over whether AI enhances originality and creativity or if it is ushering in the death of individual artists and the role they play in the humanities.” See that video here.

Harrison launched the tour late last week, appearing on the Rich Valdes show on Westwood One, and has subsequently appeared on network programs hosted by Doug Stephan, Dr. Daliah Wachs, and WABC’s Frank Morano, as well as Harry Hurley on WPG, Atlantic City; Todd Feinburg on WTIC-AM, Hartford; and Michael Zwerling on KSCO, Santa Cruz. WOR, New York has posted the video and an accompanying story here.

To book Michael Harrison, please call Barbara Kurland at 413-565-5413 or email info@talkers.com.

Industry News

Panel Discussion to Tackle the Talk Media Industry’s Key Concerns

One of the most popular sessions at the annual TALKERS Conference is “The Big Picture” panel and this year’s planned installment of the discussion promises to continue in that tradition of perspective and pertinence. The panel will be introduced by TALKERS associate publisher/media attorney, Matthew B. Harrison, Esq. and moderated by TALKERS publisher Michael Harrison. Panelists include (in alphabetical order): Arthur Aidala, Esq., founding partner, Aidala, Bertuna & Kamins, PC/host, AM 970 The Answer, New York; Dr. Asa Andrew, CEO/host, The Doctor Asa Network; Lee Habeeb, host/producer, Our American Stories; Lee Harris, director of Integrated Operations, NewsNation; and Kraig Kitchin, CEO, Sound Mind, LLC/chairman, Radio Hall of Fame. One more panelist has yet to be named. The issues that the session will cover include: the existential cultural, technological and financial issues facing radio and talk media; the medium’s role in the national political conversation and culture wars; the impact of artificial intelligence on intellectual property and creative originality; the evolution of ethics, justice and journalism in American society; and an examination of potential topics and concerns that will keep the medium vibrant as we move deeper into the 21st century. “It’s all about perspective,” says panel moderator Michael Harrison. “If we are to survive as an industry as well as a community, we have to step back and look at the big picture within which we operate… and it is getting bigger and bigger with each passing moment. We must avoid becoming smaller and smaller.” More than 60 luminaries from the talk media industry are set to speak at a power-packed day of fireside chats, solo addresses, panel discussions, workshops, award presentations, new equipment showcases and endless networking opportunities. TALKERS 2023 is nearing an advance sellout. See more about the agenda, registration, sponsorship and hotel information here.