Reckless Disregard in the Age of AI: What Verification Now Requires
By Matthew B. Harrison
TALKERS, VP/Associate Publisher
Harrison Media Law, Senior Partner
Goodphone Communications, Executive Producer
AI is now embedded in the modern newsroom. Not as a headline, not as a novelty, but as infrastructure. It drafts outlines, summarizes complex reporting, surfaces background details, and accelerates prep for live conversations. For media creators operating under relentless deadlines, that efficiency is not theoretical. It is practical and daily.
That reality raises a quiet but consequential legal question. When AI contributes to your research, what does verification now require?
Professional hosts are not reading raw chatbot answers on air and calling it journalism. That caricature misses the real issue. What is actually happening is subtler and far more common.
AI now sits inside research workflows. Producers use it for background. Hosts use it to summarize reporting. Teams use it to outline controversies or draft rundowns. Most of the time, it works. Sometimes, however, it invents.
When that invention involves a real person and a serious allegation, the legal analysis looks familiar.
For public figures, defamation requires proof of actual malice – knowledge of falsity or reckless disregard for the truth. For private figures, negligence is usually enough. In both cases, the focus is not on the tool but on the content creator's conduct.
AI does not change the elements. It changes the context in which reasonableness is judged.
Courts have long held that repeating a defamatory statement can create liability, even if someone else said it first. If you rely on a blog, and that blog relied on AI, and the allegation is false, the question becomes whether your reliance was reasonable.
Was the source reputable? Was the claim inherently improbable? Were there obvious red flags? Was contradictory information readily available?
AI’s reputation for “hallucinating” facts now forms part of that backdrop. Widespread awareness that these systems can fabricate citations, merge identities, or invent accusations becomes relevant when a court evaluates your verification choices.
This does not mean using AI indicates reckless disregard. It means using AI does not excuse skipping verification when the stakes are high.
The more specific and damaging the claim, the greater the duty to confirm it through independent, reliable sources. Not another prompt. Not a circular reference to the same unverified blog. What counts is a primary record, an official statement, or established reporting.
Documentation matters. If challenged, being able to show that you checked multiple sources before broadcast can be decisive.
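For teams that already keep digital prep notes, that documentation habit can be as simple as a structured log of what was checked and when. Here is a minimal, hypothetical sketch in Python – the field names and structure are purely illustrative, not a legal standard, and nothing in it is legal advice:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class VerificationEntry:
    """One record of a source checked before broadcast."""
    claim: str       # the specific allegation being verified
    source: str      # outlet, document, or person consulted
    kind: str        # e.g. "primary record", "official statement", "established reporting"
    confirmed: bool  # did this source independently support the claim?
    checked_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class VerificationLog:
    """Append-only log a producer could keep alongside segment prep notes."""

    def __init__(self) -> None:
        self.entries: list[VerificationEntry] = []

    def record(self, entry: VerificationEntry) -> None:
        self.entries.append(entry)

    def independent_confirmations(self, claim: str) -> int:
        # Count distinct sources that confirmed this claim;
        # the same source checked twice is still one confirmation.
        return len({e.source for e in self.entries
                    if e.claim == claim and e.confirmed})

    def export(self) -> str:
        # A timestamped JSON export is easy to preserve and,
        # if challenged later, easy to produce.
        return json.dumps([asdict(e) for e in self.entries], indent=2)
```

The point is not the tooling but the record: dated entries showing that a specific claim was run against multiple independent sources before it went to air.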
None of this is new doctrine. What is new is how seamlessly AI blends into ordinary research habits. That integration makes it easier to forget that the legal question is still about human judgment.
The law will not ask whether your workflow was efficient. It will ask whether your conduct was reasonable under the circumstances.
In the age of AI, verification is not a courtesy. It is risk management.
Matthew B. Harrison is a media and intellectual property attorney who advises radio hosts, content creators, and creative entrepreneurs. He has written extensively on fair use, AI law, and the future of digital rights. Reach him at Matthew@HarrisonMediaLaw.com or read more at TALKERS.com.