Industry Views

A 20th-Century Rulebook Officiating a 2026 Game

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

Every media creator knows this moment. You are building a segment, you find the clip that makes the point land, and then the hesitation kicks in. Can I use this? Or am I about to invite a problem that distracts from the work itself?

That question has always lived at the center of fair use. What has changed is not the question, but the context around it. Over the past year, two federal court decisions involving AI training have quietly clarified how judges are thinking about copying, transformation, and risk in a media environment that looks nothing like the one for which these rules were originally written.

Fair use was never meant to be static. Anyone treating it as a checklist with guaranteed outcomes is working from an outdated playbook. What we actually have is a 20th-century rulebook being used to officiate a game that keeps inventing new positions mid-play. The rules still apply. But how they are interpreted depends heavily on what the technology is doing and why.

That tension showed up clearly in two cases out of the Northern District of California last summer. In both, the courts addressed whether training AI systems on copyrighted books could qualify as fair use. These were not headline-grabbing decisions, but they mattered. The judges declined to declare AI training inherently illegal. At the same time, they refused to give it a free pass.

What drove the analysis was context. What material was used. How it was ingested. What the system produced afterward. And, critically, whether the output functioned as a replacement for the original works or something meaningfully different. Reading the opinions, you get the sense that the courts are no longer talking about “AI” as a single concept. Each model is treated almost as its own actor, with its own risk profile.

A simple medical analogy helps. Two patients can take the same medication and have very different outcomes. Dosage matters. Chemistry matters. Timing matters. Courts are beginning to approach AI the same way. The same training data does not guarantee the same behavior, and fair use analysis has to account for that reality.

So why should this matter to someone deciding whether to play a 22-second news clip?

Because the courts relied on the same four factors that govern traditional media use. Purpose. Nature. Amount. Market effect. They did not invent a new test for AI. They applied the existing one with a sharper focus on transformation and substitution. That tells us something important. The framework has not changed. The scrutiny has.

Once you see that, everyday editorial decisions become easier to evaluate. Commentary versus duplication. Reporting versus repackaging. Illustration versus substitution. These are not abstract legal concepts. They are practical distinctions creators make every day, often instinctively. The courts are signaling that those instincts still matter, but they need to be exercised with awareness, not habit.

The mistake I see most often is treating fair use as permission rather than analysis. Fair use is not a shield you invoke after the fact. It is a lens you apply before you hit publish. The recent AI cases reinforce that point. Judges are not interested in labels. They are interested in function and effect.

Fair use has always evolved alongside technology. Printing presses, photocopiers, home recording, digital editing, streaming. AI is just the newest stress test. The takeaway is not panic, and it is not complacency. It is attention.

If you work in the media today, the smart move is to understand how the rulebook is being interpreted while you are busy playing the game. The rules still count. The field just looks different now.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.

Industry Views

Navigating the Deepfake Dilemma in the Age of AI Impersonation

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer

The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality

Every seasoned broadcaster or media creator has a radar for nonsense. You have spent years vetting sources, confirming facts, and throwing out anything that feels unreliable. The complication now is that artificial intelligence can wrap unreliable content in a polished package that looks and sounds legitimate.

This article is not aimed at people creating AI impersonation channels. If that is your hobby, nothing here will make you feel more confident about it. This is for the professionals whose job is to keep the information stream as clean as possible. You are not making deepfakes. You are trying to avoid stepping in them and trying even harder not to amplify them.

Once something looks real and sounds real, a significant segment of your audience will assume it is real. That changes the amount of scrutiny you need to apply. The burden now falls on people like you to pause before reacting. 

Two Clips That Tell the Whole Story

Consider two current examples. The first is the synthetic Biden speech that appears all over social media. It presents a younger, steadier president delivering remarks that many supporters wish he would make. It is polished, convincing, and created entirely by artificial intelligence.

The second is the cartoonish Trump fighter jet video that shows him dropping waste on unsuspecting civilians. No one believes it is real. Yet both types of content live in the same online ecosystem and both get shared widely.

The underlying facts do not matter once the clip begins circulating. If you repeat it on the air without checking it, you become the next link in the distribution chain. Not every untrue clip is misinformation. People get things wrong without intending to deceive, and the law recognizes that. What changes here is the plausibility. When an artificial performance can fool a reasonable viewer, the difference between a mistake and a misleading impression becomes something a finder of fact sorts out later. Your audience cannot make that distinction in real time. 

Parody and Satire Still Exist, but AI Is Blurring the Edges

Parody imitates a person to comment on that person. Satire uses the imitation to comment on something else. These categories worked because traditional impersonations were obvious. A cartoon voice or exaggerated caricature did not fool anyone.

A convincing AI impersonation removes the cues that signal it is a joke. It sounds like the celebrity. It looks like the celebrity. It uses words that fit the celebrity’s public image. It stops functioning as commentary and becomes a manufactured performance that appears authentic. That is when broadcasters get pulled into the confusion even though they had nothing to do with the creation. 

When the Fake Version Starts Crowding Out the Real One

Public figures choose when and where to speak. A Robert De Niro interview carries weight because he rarely gives one. A carefully planned appearance on a respected platform signals importance.

When dozens of artificial De Niros begin posting daily commentary, the significance of the real appearance is reduced. The market becomes crowded. Authenticity becomes harder to protect. This is not only a reputational issue. It is an economic one rooted in scarcity and control.

You may think you are sharing a harmless clip. In reality, you might be participating in the dilution of someone’s legitimate business asset. 

Disclaimers Are Not Shields

Many deepfake channels use disclaimers. They say things like "this is parody" or "this is not the real person." A parking garage can also post a sign saying it is not responsible for damage to your car. That does not absolve the garage when something collapses on your vehicle.

A disclaimer that no one negotiates or meaningfully acknowledges does not protect the creator or the people who share the clip. If viewers believe it is real, the disclaimer (often hidden in plain sight) is irrelevant. 

The Liability No One Expects: Damage You Did Not Create

You can become responsible for the fallout without ever touching the original video. If you talk about a deepfake on the air, share it on social media, or frame it as something that might be true, you help it spread. Your audience trusts you. If you repeat something inaccurate, even unintentionally, they begin questioning your judgment. One believable deepfake can undermine years of credibility. 

Platforms Profit From the Confusion

Here is the structural issue that rarely gets discussed. Platforms have every financial incentive to push deepfakes. They generate engagement. Engagement generates revenue. Revenue satisfies stockholders. This stands in tension with the spirit of Section 230, which was designed to protect neutral platforms, not platforms that amplify synthetic speech they know is likely to deceive.

If a platform has the ability to detect and label deepfakes and chooses not to, the responsibility shifts to you. The platform benefits. You absorb the risk. 

What Media Professionals Should Do

You do not need new laws. You do not need to give warnings to your audience. You do not need to panic. You do need to stay sharp.

Here is the quick test. Ask yourself four questions.

Is the source authenticated?
Has the real person ever said anything similar?
Is the platform known for synthetic or poorly moderated content?
Does anything feel slightly off even when the clip looks perfect?

If any answer gives you pause, treat the clip as suspect. Treat it as content, not truth. 

Final Thought (at Least for Now)

Artificial intelligence will only become more convincing. Your role is not to serve as a gatekeeper. Your role is to maintain professional judgment. When a clip sits between obviously fake and plausibly real, that is the moment to verify and, when necessary, seek guidance. There is little doubt that the inevitable proliferation of phony internet "shows" is about to bloom into a controversial legal, ethical, and financial issue for the industry.

Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.