Reckless Disregard in the Age of AI: What Verification Now Requires
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
AI is now embedded in the modern newsroom. Not as a headline, not as a novelty, but as infrastructure. It drafts outlines, summarizes complex reporting, surfaces background details, and accelerates prep for live conversations. For media creators operating under relentless deadlines, that efficiency is not theoretical. It is practical and daily.
That reality raises a quiet but consequential legal question. When AI contributes to your research, what does verification now require?
Professional hosts are not reading raw chatbot answers on air and calling it journalism. That caricature misses the real issue. What is actually happening is subtler and far more common.
AI now sits inside research workflows. Producers use it for background. Hosts use it to summarize reporting. Teams use it to outline controversies or draft rundowns. Most of the time, it works. Sometimes, however, it invents.
When that invention involves a real person and a serious allegation, the legal analysis looks familiar.
For public figures, defamation requires proof of actual malice – knowledge of falsity or reckless disregard for truth. For private figures, negligence is usually enough. In both cases, the focus is not on the tool. It is on the content creator’s conduct.
AI does not change the elements. It changes the context in which reasonableness is judged.
Courts have long held that repeating a defamatory statement can create liability, even if someone else said it first. If you rely on a blog, and that blog relied on AI, and the allegation is false, the question becomes whether your reliance was reasonable.
Was the source reputable? Was the claim inherently improbable? Were there obvious red flags? Was contradictory information readily available?
AI’s reputation for “hallucinating” facts now forms part of that backdrop. Widespread awareness that these systems can fabricate citations, merge identities, or invent accusations becomes relevant when a court evaluates your verification choices.
This does not mean using AI indicates reckless disregard. It means using AI does not excuse skipping verification when the stakes are high.
The more specific and damaging the claim, the greater the duty to confirm it through independent, reliable sources. Not another prompt. Not a circular reference to the same unverified blog. Rather, a primary record, official statement, or established reporting.
Documentation matters. If challenged, being able to show that you checked multiple sources before broadcast can be decisive.
None of this is new doctrine. What is new is how seamlessly AI blends into ordinary research habits. That integration makes it easier to forget that the legal question is still about human judgment.
The law will not ask whether your workflow was efficient. It will ask whether your conduct was reasonable under the circumstances.
In the age of AI, verification is not a courtesy. It is risk management.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.
The Problem Is No Longer Spotting a Joke. The Problem Is Spotting Reality
Every talk host knows the move: play the clip. It might be a moment from late-night TV, a political ad, or a viral post that sets the table for the segment. It’s how commentary comes alive – listeners hear it, react to it, and stay tuned for your take.
When we first covered this case, it felt like only 2024 could invent it – a disgraced congressman, George Santos, selling Cameos and a late-night host, Jimmy Kimmel, buying them under fake names to make a point about truth and ego. A year later, the Second Circuit turned that punchline into precedent.
In the golden age of broadcasting, the rules were clear. If you edited the message, you owned the consequences. That was the tradeoff for editorial control. But today’s digital platforms – YouTube, X, TikTok, Instagram – have rewritten that deal. Broadcasters and those who operate within the FCC regulatory framework are paying the price.
When Georgia-based, nationally syndicated radio personality and Second Amendment advocate Mark Walters (longtime host of “Armed American Radio”) learned that ChatGPT had falsely claimed he was involved in a criminal embezzlement scheme, he did what few in the media world have dared to do: he stood up when others were silent and took on an incredibly powerful tech company, one of the biggest in the world, in a court of law.
competition last evening (2/22) at the 1st Circuit Court of Appeals in Boston, MA. The American Bar Association’s Law Student Division holds a number of annual national moot court competitions. One such event, the National Appellate Advocacy Competition, emphasizes the development of oral advocacy skills through a realistic appellate experience, with moot court competitors participating in a hypothetical appeal to the United States Supreme Court. This year’s legal question focused on the Communications Decency Act – “Section 230” – and the application of its exemption from liability for internet service providers for the acts of third parties. The hypothetical involved a journalist’s photo-turned-meme being used in advertising (CBD, ED treatment, gambling) without permission or compensation, in violation of applicable state right of publicity statutes. Harrison tells TALKERS, “We are at one of those sensitive times in history where technology is changing at a quicker pace than the legal system and legislators can keep up with – particularly at the consequential juncture of big tech and mass communications. I was impressed and heartened by the articulateness and grasp of the Section 230 issue displayed by the law students arguing before me.”