By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality
Every seasoned broadcaster or media creator has a radar for nonsense. You have spent years vetting sources, confirming facts, and throwing out anything that feels unreliable. The complication now is that artificial intelligence can wrap unreliable content in a polished package that looks and sounds legitimate.
This article is not aimed at people creating AI impersonation channels. If that is your hobby, nothing here will make you feel more confident about it. This is for the professionals whose job is to keep the information stream as clean as possible. You are not making deepfakes. You are trying to avoid stepping in them and trying even harder not to amplify them.
Once something looks real and sounds real, a significant segment of your audience will assume it is real. That changes the amount of scrutiny you need to apply. The burden now falls on people like you to pause before reacting.
Two Clips That Tell the Whole Story
Consider two current examples. The first is the synthetic Biden speech that appears all over social media. It presents a younger, steadier president delivering remarks that many supporters wish he would make. It is polished, convincing, and created entirely by artificial intelligence.
The second is the cartoonish Trump fighter jet video that shows him dropping waste on unsuspecting civilians. No one believes it is real. Yet both types of content live in the same online ecosystem and both get shared widely.
The underlying facts do not matter once the clip begins circulating. If you repeat it on the air without checking it, you become the next link in the distribution chain. Not every untrue clip is disinformation. People get things wrong without intending to deceive, and the law recognizes that. What changes here is the plausibility. When an artificial performance can fool a reasonable viewer, the difference between an honest mistake and a deliberately misleading impression becomes something a finder of fact sorts out later. Your audience cannot make that distinction in real time.
Parody and Satire Still Exist, but AI Is Blurring the Edges
Parody imitates a person to comment on that person. Satire uses the imitation to comment on something else. These categories worked because traditional impersonations were obvious. A cartoon voice or exaggerated caricature did not fool anyone.
A convincing AI impersonation removes the cues that signal it is a joke. It sounds like the celebrity. It looks like the celebrity. It uses words that fit the celebrity’s public image. It stops functioning as commentary and becomes a manufactured performance that appears authentic. That is when broadcasters get pulled into the confusion even though they had nothing to do with the creation.
When the Fake Version Starts Crowding Out the Real One
Public figures choose when and where to speak. A Robert De Niro interview carries weight because he rarely gives one. A carefully planned appearance on a respected platform signals importance.
When dozens of artificial De Niros begin posting daily commentary, the significance of the real appearance is reduced. The market becomes crowded. Authenticity becomes harder to protect. This is not only a reputational issue. It is an economic one rooted in scarcity and control.
You may think you are sharing a harmless clip. In reality, you might be participating in the dilution of someone’s legitimate business asset.
Disclaimers Are Not Shields
Many deepfake channels use disclaimers. They say things like "this is parody" or "this is not the real person." A parking garage can also post a sign saying it is not responsible for damage to your car. That sign does not absolve the garage when something collapses on your vehicle.
A disclaimer that no one negotiates or meaningfully acknowledges does not protect the creator or the people who share the clip. If viewers believe it is real, the disclaimer is irrelevant, no matter how plainly it is posted.
The Liability No One Expects: Damage You Did Not Create
You can become responsible for the fallout without ever touching the original video. If you talk about a deepfake on the air, share it on social media, or frame it as something that might be true, you help it spread. Your audience trusts you. If you repeat something inaccurate, even unintentionally, they begin questioning your judgment. One believable deepfake can undermine years of credibility.
Platforms Profit From the Confusion
Here is the structural issue that rarely gets discussed. Platforms have every financial incentive to push deepfakes. They generate engagement. Engagement generates revenue. Revenue satisfies stockholders. This stands in tension with the spirit of Section 230, which was designed to protect neutral platforms, not platforms that amplify synthetic speech they know is likely to deceive.
If a platform has the ability to detect and label deepfakes and chooses not to, the responsibility shifts to you. The platform benefits. You absorb the risk.
What Media Professionals Should Do
You do not need new laws. You do not need to give warnings to your audience. You do not need to panic. You do need to stay sharp.
Here is the quick test. Ask yourself four questions.
Is the source authenticated?
Has the real person ever said anything similar?
Is the platform known for synthetic or poorly moderated content?
Does anything feel slightly off even when the clip looks perfect?
If any answer gives you pause, treat the clip as suspect. Treat it as content, not truth.
Final Thought (at Least for Now)
Artificial intelligence will only become more convincing. Your role is not to serve as a gatekeeper. Your role is to maintain professional judgment. When a clip sits between obviously fake and plausibly real, that is the moment to verify and, when necessary, seek guidance. There is little doubt that the coming proliferation of phony internet "shows" will bloom into a contentious legal, ethical, and financial issue for the industry.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.