Navigating the Deepfake Dilemma in the Age of AI Impersonation
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality
Every seasoned broadcaster or media creator has a radar for nonsense. You have spent years vetting sources, confirming facts, and throwing out anything that feels unreliable. The complication now is that artificial intelligence can wrap unreliable content in a polished package that looks and sounds legitimate.
This article is not aimed at people creating AI impersonation channels. If that is your hobby, nothing here will make you feel more confident about it. This is for the professionals whose job is to keep the information stream as clean as possible. You are not making deepfakes. You are trying to avoid stepping in them and trying even harder not to amplify them.
Once something looks real and sounds real, a significant segment of your audience will assume it is real. That changes the amount of scrutiny you need to apply. The burden now falls on people like you to pause before reacting.
Two Clips That Tell the Whole Story
Consider two current examples. The first is the synthetic Biden speech that appears all over social media. It presents a younger, steadier president delivering remarks that many supporters wish he would make. It is polished, convincing, and created entirely by artificial intelligence.
The second is the cartoonish Trump fighter jet video that shows him dropping waste on unsuspecting civilians. No one believes it is real. Yet both types of content live in the same online ecosystem and both get shared widely.
The underlying facts do not matter once the clip begins circulating. If you repeat it on the air without checking it, you become the next link in the distribution chain. Not every untrue clip is disinformation. People get things wrong without intending to deceive, and the law recognizes that. What changes here is the plausibility. When an artificial performance can fool a reasonable viewer, the difference between an honest mistake and a misleading impression becomes something a finder of fact sorts out later. Your audience cannot make that distinction in real time.
Parody and Satire Still Exist, but AI Is Blurring the Edges
Parody imitates a person to comment on that person. Satire uses the imitation to comment on something else. These categories worked because traditional impersonations were obvious. A cartoon voice or exaggerated caricature did not fool anyone.
A convincing AI impersonation removes the cues that signal it is a joke. It sounds like the celebrity. It looks like the celebrity. It uses words that fit the celebrity’s public image. It stops functioning as commentary and becomes a manufactured performance that appears authentic. That is when broadcasters get pulled into the confusion even though they had nothing to do with the creation.
When the Fake Version Starts Crowding Out the Real One
Public figures choose when and where to speak. A Robert De Niro interview has weight because he rarely gives them. A carefully planned appearance on a respected platform signals importance.
When dozens of artificial De Niros begin posting daily commentary, the significance of the real appearance is reduced. The market becomes crowded. Authenticity becomes harder to protect. This is not only a reputational issue. It is an economic one rooted in scarcity and control.
You may think you are sharing a harmless clip. In reality, you might be participating in the dilution of someone’s legitimate business asset.
Disclaimers Are Not Shields
Many deepfake channels use disclaimers. They say things like "this is parody" or "this is not the real person." A parking garage can also post a sign saying it is not responsible for damage to your car. That does not absolve the garage when something collapses on your vehicle.
A disclaimer that no one negotiates or meaningfully acknowledges does not protect the creator or the people who share the clip. If viewers believe it is real, the disclaimer (often hidden in plain sight) is irrelevant.
The Liability No One Expects: Damage You Did Not Create
You can become responsible for the fallout without ever touching the original video. If you talk about a deepfake on the air, share it on social media, or frame it as something that might be true, you help it spread. Your audience trusts you. If you repeat something inaccurate, even unintentionally, they begin questioning your judgment. One believable deepfake can undermine years of credibility.
Platforms Profit From the Confusion
Here is the structural issue that rarely gets discussed. Platforms have every financial incentive to push deepfakes. Deepfakes generate engagement. Engagement generates revenue. Revenue satisfies stockholders. This stands in tension with the spirit of Section 230, which was designed to protect neutral platforms, not platforms that amplify synthetic speech they know is likely to deceive.
If a platform has the ability to detect and label deepfakes and chooses not to, the responsibility shifts to you. The platform benefits. You absorb the risk.
What Media Professionals Should Do
You do not need new laws. You do not need to give warnings to your audience. You do not need to panic. You do need to stay sharp.
Here is the quick test. Ask yourself four questions.
Is the source authenticated?
Has the real person ever said anything similar?
Is the platform known for synthetic or poorly moderated content?
Does anything feel slightly off even when the clip looks perfect?
If any answer gives you pause, treat the clip as suspect. Treat it as content, not truth.
Final Thought (at Least for Now)
Artificial intelligence will only become more convincing. Your role is not to serve as a gatekeeper. Your role is to maintain professional judgment. When a clip sits between obviously fake and plausibly real, that is the moment to verify and, when necessary, seek guidance. There is little doubt that the coming proliferation of phony internet “shows” will bloom into a contentious legal, ethical, and financial issue for the industry.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.