By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
AI is now embedded in the modern newsroom. Not as a headline, not as a novelty, but as infrastructure. It drafts outlines, summarizes complex reporting, surfaces background details, and accelerates prep for live conversations. For media creators operating under relentless deadlines, that efficiency is not theoretical. It is practical and daily.
That reality raises a quiet but consequential legal question. When AI contributes to your research, what does verification now require?
Professional hosts are not reading raw chatbot answers on air and calling them journalism. That caricature misses the real issue. What is actually happening is subtler and far more common.
AI now sits inside research workflows. Producers use it for background. Hosts use it to summarize reporting. Teams use it to outline controversies or draft rundowns. Most of the time, it works. Sometimes, however, it invents.
When that invention involves a real person and a serious allegation, the legal analysis looks familiar.
For public figures, defamation requires proof of actual malice: knowledge of falsity or reckless disregard for the truth. For private figures, negligence is usually enough. In both cases, the focus is not on the tool. It is on the content creator’s conduct.
AI does not change the elements. It changes the context in which reasonableness is judged.
Courts have long held that repeating a defamatory statement can create liability, even if someone else said it first. If you rely on a blog, and that blog relied on AI, and the allegation is false, the question becomes whether your reliance was reasonable.
Was the source reputable? Was the claim inherently improbable? Were there obvious red flags? Was contradictory information readily available?
AI’s reputation for “hallucinating” facts now forms part of that backdrop. Widespread awareness that these systems can fabricate citations, merge identities, or invent accusations becomes relevant when a court evaluates your verification choices.
This does not mean using AI indicates reckless disregard. It means using AI does not excuse skipping verification when the stakes are high.
The more specific and damaging the claim, the greater the duty to confirm it through independent, reliable sources. Not another prompt. Not a circular reference to the same unverified blog. Rather, a primary record, official statement, or established reporting.
Documentation matters. If challenged, being able to show that you checked multiple sources before broadcast can be decisive.
None of this is new doctrine. What is new is how seamlessly AI blends into ordinary research habits. That integration makes it easier to forget that the legal question is still about human judgment.
The law will not ask whether your workflow was efficient. It will ask whether your conduct was reasonable under the circumstances.
In the age of AI, verification is not a courtesy. It is risk management.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.