Industry Views

Navigating the Deepfake Dilemma in the Age of AI Impersonation

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality

Every seasoned broadcaster or media creator has a radar for nonsense. You have spent years vetting sources, confirming facts, and throwing out anything that feels unreliable. The complication now is that artificial intelligence can wrap unreliable content in a polished package that looks and sounds legitimate.

This article is not aimed at people creating AI impersonation channels. If that is your hobby, nothing here will make you feel more confident about it. This is for the professionals whose job is to keep the information stream as clean as possible. You are not making deepfakes. You are trying to avoid stepping in them and trying even harder not to amplify them.

Once something looks real and sounds real, a significant segment of your audience will assume it is real. That changes the amount of scrutiny you need to apply. The burden now falls on people like you to pause before reacting. 

Two Clips That Tell the Whole Story

Consider two current examples. The first is the synthetic Biden speech that appears all over social media. It presents a younger, steadier president delivering remarks that many supporters wish he would make. It is polished, convincing, and created entirely by artificial intelligence.

The second is the cartoonish Trump fighter jet video that shows him dropping waste on unsuspecting civilians. No one believes it is real. Yet both types of content live in the same online ecosystem and both get shared widely.

The underlying facts do not matter once the clip begins circulating. If you repeat it on the air without checking it, you become the next link in the distribution chain. Not every untrue clip is misinformation. People get things wrong without intending to deceive, and the law recognizes that. What changes here is the plausibility. When an artificial performance can fool a reasonable viewer, the difference between a mistake and a misleading impression becomes something a finder of fact sorts out later. Your audience cannot make that distinction in real time. 

Parody and Satire Still Exist, but AI Is Blurring the Edges

Parody imitates a person to comment on that person. Satire uses the imitation to comment on something else. These categories worked because traditional impersonations were obvious. A cartoon voice or exaggerated caricature did not fool anyone.

A convincing AI impersonation removes the cues that signal it is a joke. It sounds like the celebrity. It looks like the celebrity. It uses words that fit the celebrity’s public image. It stops functioning as commentary and becomes a manufactured performance that appears authentic. That is when broadcasters get pulled into the confusion even though they had nothing to do with the creation. 

When the Fake Version Starts Crowding Out the Real One

Public figures choose when and where to speak. A Robert De Niro interview has weight because he rarely gives one. A carefully planned appearance on a respected platform signals importance.

When dozens of artificial De Niros begin posting daily commentary, the significance of the real appearance is reduced. The market becomes crowded. Authenticity becomes harder to protect. This is not only a reputational issue. It is an economic one rooted in scarcity and control.

You may think you are sharing a harmless clip. In reality, you might be participating in the dilution of someone’s legitimate business asset. 

Disclaimers Are Not Shields

Many deepfake channels use disclaimers. They say things like "this is parody" or "this is not the real person." A parking garage can also post a sign saying it is not responsible for damage to your car. That sign does not absolve the garage when something collapses on your vehicle.

A disclaimer that no one negotiates or meaningfully acknowledges does not protect the creator or the people who share the clip. If viewers believe it is real, the disclaimer (often hidden in plain sight) is irrelevant. 

The Liability No One Expects: Damage You Did Not Create

You can become responsible for the fallout without ever touching the original video. If you talk about a deepfake on the air, share it on social media, or frame it as something that might be true, you help it spread. Your audience trusts you. If you repeat something inaccurate, even unintentionally, they begin questioning your judgment. One believable deepfake can undermine years of credibility. 

Platforms Profit From the Confusion

Here is the structural issue that rarely gets discussed. Platforms have every financial incentive to push deepfakes. They generate engagement. Engagement generates revenue. Revenue satisfies stockholders. This stands in tension with the spirit of Section 230, which was designed to protect neutral platforms, not platforms that amplify synthetic speech they know is likely to deceive.

If a platform has the ability to detect and label deepfakes and chooses not to, the responsibility shifts to you. The platform benefits. You absorb the risk. 

What Media Professionals Should Do

You do not need new laws. You do not need to give warnings to your audience. You do not need to panic. You do need to stay sharp.

Here is the quick test. Ask yourself four questions.

Is the source authenticated?
Has the real person ever said anything similar?
Is the platform known for synthetic or poorly moderated content?
Does anything feel slightly off even when the clip looks perfect?

If any answer gives you pause, treat the clip as suspect. Treat it as content, not truth. 

Final Thought (at Least for Now)

Artificial intelligence will only become more convincing. Your role is not to serve as a gatekeeper. Your role is to maintain professional judgment. When a clip sits between obviously fake and plausibly real, that is the moment to verify and, when necessary, seek guidance. There is little doubt that the inevitable proliferation of phony internet “shows” is about to bloom into a controversial legal, ethical, and financial industry issue.  

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Is That Even Legal? Talk Radio in the Age of Deepfake Voices: Where Fair Use Ends and the Law Steps In

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

In early 2024, voters in New Hampshire got strange robocalls. The voice sounded just like President Joe Biden, telling people not to vote in the primary. But it wasn’t him. It was an AI clone of his voice – sent out to confuse voters.

The calls were meant to mislead, not entertain. The response was quick. The FCC banned AI robocalls. State officials launched investigations. Still, a big question remains for radio and podcast creators:

Is using an AI cloned voice of a real person ever legal?

This question hits hard for talk radio, where satire, parody, and political commentary are daily staples. And the line between creative expression and illegal impersonation is starting to blur.

It’s already happening online. AI-generated clips of Howard Stern have popped up on TikTok and Reddit, making him say things he never actually said. They’re not airing on the radio yet – but they could be soon.

Then came a major moment. In 2024, a group called Dudesy released a fake comedy special, “I’m Glad I’m Dead,” using AI to copy the voice and style of the late George Carlin. The hour-long show sounded uncannily like Carlin, and the creators claimed it was a tribute. His daughter, Kelly Carlin, strongly disagreed. The Carlin estate sued, calling it theft, not parody. That lawsuit could shape how courts treat voice cloning for years.

The danger isn’t just legal – it’s reputational. A cloned voice can be used to create fake outrage, fake interviews, or fake endorsements. Even if meant as satire, if it’s too realistic, it can do real damage.

So, what does fair use actually protect? It covers commentary, criticism, parody, education, and news. But a voice isn’t just creative work – it’s part of someone’s identity. That’s where the right of publicity comes in. It protects how your name, image, and voice are used, especially in commercial settings.

If a fake voice confuses listeners, suggests false approval, or harms someone’s brand, fair use probably won’t apply. And if it doesn’t clearly comment on the real person, it’s not parody – it’s just impersonation.

For talk show hosts and podcasters, here’s the bottom line: use caution. If you’re using AI voices, make it obvious they’re fake. Add labels. Give context. And best of all, avoid cloning real people unless you have their OK.

Fair use is a shield – but it’s not a free pass. When content feels deceptive, the law – and your audience – may not be forgiving.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Harrison Legal Group or read more at TALKERS.com.

Front Page News / Industry News

Wednesday, September 14, 2022

WIBC-FM, Indianapolis Personality Rob Kendall Sues Political Activist for Defamation. WIBC-FM, Indianapolis late morning personality Rob Kendall is suing conservative political activist Gabriel Whitley, Spencer McDaniel, and Young Conservatives of Southern Indiana for defamation after the group posted comments on Facebook referring to Kendall as “Pedo Rob,” implying that he’s a pedophile. The Evansville Courier & Press reports the story, noting that Whitley is a declared candidate for mayor of Evansville. Kendall co-hosts the late morning program on Radio One’s WIBC-FM with Casey Daniels. Kendall is being represented by fellow WIBC personality Abdul Hakim-Shabazz. McDaniel says the Facebook page is a satire page and as such is protected speech. He says, “Gabriel Whitley had no part in this, I run the page but it was a joke and I take full responsibility for the joke. We will win and all charges will be dropped.” The suit states, “The satiric effect emerges only as the reader concludes by the very outrageousness of the words that the whole thing is a put-on.” Hakim-Shabazz says that “the context of the post does not allow a reasonable reader to think the defendant’s words were a ‘put on.’” Kendall is seeking $50,000 in damages, punitive damages as the court deems proper, attorney fees and court costs, and other relief the court deems proper.

Comrex Unveils New Remote Contribution Solution. The new cloud service for remote, on-air or production contribution from Comrex is called Gagl, and it allows between one and five users to send and receive audio from computers and smartphones. Each user receives their own mix-minus to hear other connected guests, and the Gagl audio is delivered to a Comrex hardware codec (such as ACCESS or BRIC-Link, usually in a studio). All participants can hear one another, and the codec “sends” audio back to them. Comrex says that participants can connect and send audio by simply clicking a link in any common web browser. Gagl is designed to be used with consumer-grade equipment, so contributors only need a device and a headset to get on the air. Gagl uses the Opus audio encoder, with a bit rate that delivers both voice and music in excellent quality. Gagl also delivers audio directly to a Comrex codec with all the stability enhancements, pro-grade audio connections, and features that hardware codecs provide. The simple interface makes Gagl easy to use regardless of technical experience. Comrex adds that Gagl could serve as the hub for a news program or a morning radio show, supporting multiple simultaneous contributor connections. Because it offers low latency, it’s appropriate for call-in talk radio. Gagl could also be used to allow a single contributor to connect back to the studio from a computer or smartphone. Gagl works with Comrex hardware IP audio codecs including the AES67-compatible ACCESS NX Rack IP audio codec and ACCESS MultiRack multi-channel IP audio codec as well as the BRIC-Link series. Find out more about Gagl here.

Edison Research: Spotify Takes Over as Top U.S. Podcast Network. The latest Q2 2022 data from Edison Research’s ranking of the Top Podcast Networks in the U.S. reveals that Spotify, SXM Media, and iHeartRadio hold the top three spots, in that order. Spotify surpassed SXM Media’s weekly reach by a tiny margin. Edison’s ranking is based on surveys of 8,000 podcast listeners in the U.S. and measures reach as a percentage of the weekly podcasting audience. Rankings are compiled by measuring the total unduplicated reach of all the shows represented by a given network. See the complete list of the top 30 podcast networks here.

Podtrac: iHeartRadio Top Podcast Publisher for August 2022. The ranking of top podcast publishers from Podtrac has been released for August and, based on Unique Monthly Audience, iHeartRadio is ranked #1 with more than 35.5 million UMA for its 689 active shows. Wondery is ranked #2 (24.5 million UMA) for 204 shows, and NPR comes in at #3 (19.7 million UMA) for 49 shows. See Podtrac’s entire chart of the top 20 podcast publishers for August 2022 here.

Brian Kilmeade in Albany. Pictured above is FOX News Channel and FOX News Radio star Brian Kilmeade (center) on stage at The Egg Performing Arts Center in Albany on September 8. With him are FOX News Channel producer Alyson Mansfield (left) and news/talk WGDJ, Albany owner Paul Vandenburgh (right). Kilmeade’s radio program is heard from 10:00 am to 12:00 noon on WGDJ.

KFAN’s Dan Barreiro to Be Enshrined in Hall of Fame. Longtime Twin Cities sports talk personality Dan Barreiro – host of “Bumper to Bumper with Dan Barreiro” and “Sunday Sermons with Dan Barreiro” – is being honored this Saturday (9/17) at the Minnesota Broadcasting Hall of Fame Induction Ceremony. Barreiro was a sports columnist at the Star Tribune for 17 years. He joined iHeartMedia’s KFXN-FM, Minneapolis “KFAN 100.3” in 1992. iHeartMedia Minneapolis market president Greg Alexander says, “Dan epitomizes the make-up of a tremendous talk show in the Twin Cities. He puts together compelling interviews and has the ability to break down any and every topic to entertain our listeners. He is a true talent.” Market SVP of programming Gregg Swedberg adds, “There isn’t a radio personality who has had the kind of ratings success over the last three decades that Dan Barreiro has. He has done it by hosting intelligent, entertaining, and compelling radio. He’s more than the ‘Big Ticket,’ he is the blueprint.”

Audacy’s I’m Listening Mental Health Campaign to Produce Sixth Annual Special Program. The ongoing mental health public service campaign from Audacy includes the two-hour audio program, “I’m Listening,” once again co-hosted by Carson Daly and Dr. Alfiee M. Breland-Noble. Each year, Audacy activates “I’m Listening” through national campaigns featuring artists, celebrities, and athletes who share their experiences with mental health. Partnering with the American Foundation for Suicide Prevention, these events help raise awareness and support of issues that we all face in our daily lives. Guests on this year’s program will include Carrie Underwood, Ed Sheeran, Adele, Vice President Kamala Harris, Lizzo, Ricky Williams, Maren Morris, Charlie Puth, and Stephen A. Smith, who will share personal mental health stories. The special airs nationwide on Wednesday, September 21 from 6:00 pm to 8:00 pm local time across more than 230 Audacy stations and will be live streamed via its digital app and website. Also, returning to the Hollywood Bowl for its ninth year, the star-studded concert “We Can Survive” takes place on Saturday, October 22 with featured acts including Alanis Morissette, Garbage, Halsey, OneRepublic, Weezer, and more.

Trump vs. DOJ, Inflation/Financial Markets/Railroad Strike, Lindell Search Warrant, Russia-Ukraine War, British Royalty, Musk-Twitter Case, and Ken Starr’s Death Among Top News/Talk Stories Yesterday (9/13). Former President Donald Trump’s battle with the Department of Justice over its investigation into documents he kept at Mar-a-Lago; the still-rising rate of inflation, Tuesday’s financial markets beating, and the looming railroad strike; MyPillow founder Mike Lindell having his phone seized by the FBI under a search warrant; Ukraine’s retaking of territory from Russian forces; the royal family’s activities in the aftermath of Queen Elizabeth’s death; the court battle between Elon Musk and Twitter over his canceled acquisition bid; and the death of former special prosecutor Ken Starr were some of the most-talked-about stories on news/talk radio yesterday, according to ongoing research from TALKERS magazine.