Industry Views

If the Bot Lies, Who Pays?

By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer 

A reporter recently asked a clean question with sharp edges: “Who is responsible when an AI defames someone?”
It sounds futuristic. It isn’t. It’s a standard defamation analysis dressed in new technology.
The most publicized early test involved radio host Mark Walters, who sued OpenAI after ChatGPT falsely stated he had been accused of embezzlement. The case was dismissed in federal court in Georgia in 2024. The court concluded the complaint did not plausibly allege the required level of fault. No federal appellate court has yet imposed defamation liability on an AI developer for a hallucinated statement alone.
That matters.
Defamation still requires a false statement of fact, publication to a third party, fault, and damages. An AI system cannot form intent. It cannot know falsity. It is not a legal person. But an AI output can absolutely contain a false statement about a real individual.
Courts will not ask whether “the AI defamed.” They will ask who published the statement.
Publication is broader than many assume. It does not require a broadcast tower. It requires communication to at least one third party. If a chatbot produces a false statement visible only to the person who prompted it and that person is the subject of the statement, there is typically no publication. The moment that output is emailed, posted, quoted, aired, or incorporated into a script, publication is satisfied.
The AI session itself is not the problem. Distribution is.
That is where fault enters the picture.
For public figures, plaintiffs must prove actual malice: knowledge of falsity or reckless disregard for truth. “The computer said it” is not a defense. If a host repeats a serious allegation generated by a system widely known to hallucinate and fails to verify it, a plaintiff will argue reckless disregard. For private figures, negligence is usually enough. Failing to check an AI-generated accusation against readily available sources may meet that standard.
The technology does not lower the bar. Nor does it create a new type of immunity. It simply changes the source of the words.
The unsettled frontier is developer exposure under Section 230 and product liability theories. Courts have not yet produced a controlling appellate decision holding a model developer liable in defamation solely because a model generated a false statement. The question remains open; so far, no court has resolved it in plaintiffs' favor.
Here is the practical reality for media professionals.
An AI can generate the sentence. You are the one who makes it public. That is where liability is found.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.