Reckless Disregard in the Age of AI: What Verification Now Requires
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
AI is now embedded in the modern newsroom. Not as a headline, not as a novelty, but as infrastructure. It drafts outlines, summarizes complex reporting, surfaces background details, and accelerates prep for live conversations. For media creators operating under relentless deadlines, that efficiency is not theoretical. It is practical and daily.
That reality raises a quiet but consequential legal question. When AI contributes to your research, what does verification now require?
Professional hosts are not reading raw chatbot answers on air and calling it journalism. That caricature misses the real issue. What is actually happening is subtler and far more common.
AI now sits inside research workflows. Producers use it for background. Hosts use it to summarize reporting. Teams use it to outline controversies or draft rundowns. Most of the time, it works. Sometimes, however, it invents.
When that invention involves a real person and a serious allegation, the legal analysis looks familiar.
For public figures, defamation requires proof of actual malice – knowledge of falsity or reckless disregard for the truth. For private figures, negligence is usually enough. In both cases, the focus is not on the tool. It is on the content creator’s conduct.
AI does not change the elements. It changes the context in which reasonableness is judged.
Courts have long held that repeating a defamatory statement can create liability, even if someone else said it first. If you rely on a blog, and that blog relied on AI, and the allegation is false, the question becomes whether your reliance was reasonable.
Was the source reputable? Was the claim inherently improbable? Were there obvious red flags? Was contradictory information readily available?
AI’s reputation for “hallucinating” facts now forms part of that backdrop. Widespread awareness that these systems can fabricate citations, merge identities, or invent accusations becomes relevant when a court evaluates your verification choices.
This does not mean using AI indicates reckless disregard. It means using AI does not excuse skipping verification when the stakes are high.
The more specific and damaging the claim, the greater the duty to confirm it through independent, reliable sources. Not another prompt. Not a circular reference to the same unverified blog. Rather, a primary record, official statement, or established reporting.
Documentation matters. If a claim is challenged, being able to show that you checked multiple sources before broadcast can be decisive.
None of this is new doctrine. What is new is how seamlessly AI blends into ordinary research habits. That integration makes it easier to forget that the legal question is still about human judgment.
The law will not ask whether your workflow was efficient. It will ask whether your conduct was reasonable under the circumstances.
In the age of AI, verification is not a courtesy. It is risk management.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.
