Walters v. OpenAI — Georgia Court Holds ChatGPT Hallucination About Radio Host Wasn’t Defamation

Case: Walters v. OpenAI, LLC
Court: Superior Court of Gwinnett County, Georgia
Date Decided: May 19, 2025
Docket No.: 23-A-04860-2
Topics: Generative AI, Defamation, Hallucinations, ChatGPT, Disclaimers, Negligence, Actual Malice

Background

Mark Walters is a nationally syndicated talk-radio host. In 2023, journalist Fred Riehl asked ChatGPT to summarize a federal complaint in Second Amendment Foundation v. Ferguson. ChatGPT's reply incorrectly named Walters as a defendant in the suit and said he had been accused of embezzling funds from the Second Amendment Foundation (SAF). None of that was true: Walters has no role with SAF, and the actual complaint says nothing about him.

Walters sued OpenAI for defamation, arguing the company was responsible for the false output and that he had been injured by it. Riehl himself never published the hallucinated summary — he double-checked the underlying complaint, found ChatGPT’s output to be wrong, and discarded it. OpenAI moved for summary judgment.

The Court’s Holding

The court granted OpenAI summary judgment on three independent grounds.

First, the output was not defamatory as a matter of law. A reasonable user, primed by ChatGPT’s well-known disclaimers and OpenAI’s repeated warnings about hallucinations, would not treat the chatbot’s response as an assertion of fact about a real person. Riehl’s own behavior — verifying the output against the actual complaint before doing anything with it — reinforced the point.

Second, OpenAI was not negligent. Imposing liability simply because hallucinations are statistically possible would, the court reasoned, collapse a negligence regime into strict liability. OpenAI's documented anti-hallucination work, training procedures, and conspicuous user warnings showed it had taken reasonable care. And Walters, a public figure, could not establish actual malice: there was no evidence that OpenAI knew any specific output was false or recklessly disregarded its falsity.

Third, Walters had no damages. He admitted in his deposition that no one believed the false statement, that he suffered no business or reputational harm, and that he lost no opportunities. Without harm, there was no defamation claim to pursue.

Key Takeaways

  • This is one of the first substantive rulings on whether an AI provider can be held liable in defamation for a hallucinated factual assertion. The court’s answer, on these facts, is no.
  • The decision is heavily fact-dependent. The court emphasized OpenAI's prominent hallucination warnings and the user's verification habits. A different case, with an output accepted at face value by a less careful user, might come out differently.
  • Public-figure plaintiffs face a steep climb under New York Times v. Sullivan’s actual-malice standard when suing AI companies, because there is rarely evidence that any specific hallucination was known to the provider.
  • The damages ruling is an important reminder: a hallucination that no one believed, and that caused no follow-on harm, will not support a defamation claim.

Why It Matters

Generative-AI companies have spent the last two years bracing for a wave of defamation suits over hallucinated outputs. Walters is the first major win for the industry on the merits, and it gives AI providers a roadmap: prominent disclaimers, documented anti-hallucination work, and reliance on the sophistication of the user can together blunt defamation exposure for one-off false outputs.

The decision is from a Georgia state trial court and is not binding on other courts, but it will be cited heavily by defendants in the many AI-defamation cases now in the pipeline. It also sharpens the strategic question for plaintiffs: to prevail, they will likely need facts where someone reasonably believed and acted on the output, where damages are concrete, and where the provider’s safeguards can be portrayed as inadequate. Walters had none of those.

Full Opinion

Download the full opinion (PDF)
