

Dec. 12, 2025

AI defamation suits test anti-SLAPP shields and speaker rights

Lan P. Vu

Attorney
Harder Stonerock LLP

Lan P. Vu is an attorney at Harder Stonerock LLP, which specializes in reputation protection and business litigation, with a focus on media, entertainment, intellectual property, defamation, privacy and First Amendment matters.


Judges have increasingly reprimanded attorneys for citing fictional cases in briefs generated by artificial intelligence (AI). In those situations, everyone agrees that the attorneys should be held accountable. But what happens when AI generates false content that defames you or your business? Who should be held accountable then? That is the novel question courts are grappling with as defamation suits over AI-generated content start to pop up around the country. (None of these cases has been filed in California...yet.)

Defamation laws vary from state to state, but the basic elements are generally the same: a publication that is false, defamatory, unprivileged and has a natural tendency to injure or cause special damage. Taus v. Loftus, 40 Cal.4th 683, 720 (2007). A plaintiff must also establish the requisite level of fault: negligence for a private figure or actual malice for a public figure. Khawar v. Globe Intern., Inc., 19 Cal.4th 254, 274 (1998). Actual malice requires clear and convincing evidence that the defendant made the statements with knowledge of their falsity or with reckless disregard for the truth. Id. at 275.

Walters v. OpenAI, L.L.C., Ga. Super. Ct. Case No. 23-A-04860-2 (2023), demonstrates how one court applied these traditional defamation principles to modern AI technology. In May 2025, a Georgia court granted summary judgment to OpenAI (creator of the AI chatbot ChatGPT) in a defamation lawsuit brought by radio host Mark Walters. Walters sued after ChatGPT gave a journalist a summary falsely stating that Walters had been accused of embezzlement in a lawsuit in which he had no involvement. See Order, available at https://images.assettype.com/barandbench/2025-06-19/0rtb594b/Mark_Walters_v_OpenAI_1750311692.pdf. The journalist, however, knew that ChatGPT could provide "fictional responses" and quickly determined that the summary was false. See Order, p. 3.

The court made several key findings: (i) no reasonable reader could have concluded that ChatGPT was communicating "actual facts" about Walters, given ChatGPT's warnings about potential inaccuracies, its inconsistent responses, and the journalist's doubts and quick determination that the summary did not contain actual facts; (ii) Walters, as a public figure, could not prove actual malice or refute OpenAI's evidence of its "industry-leading efforts" to reduce AI hallucinations (fabricated facts) and its warnings about potential inaccuracies (these efforts and warnings also showed that OpenAI was not negligent); and (iii) Walters had no recoverable damages because he conceded he was not harmed. See Order, pp. 5-21. The decision, while limited to the unique facts of that case, suggests that an AI developer may not be liable for defamation based on isolated inaccuracies when it makes efforts to reduce those inaccuracies and warns users about them.

It will be interesting to see how other courts deal with these issues, including in LTL LED, LLC, d/b/a Wolf River Electric, et al. v. Google, a defamation case filed in March 2025 in Minnesota. Wolf River sued Google for publishing search results falsely claiming, among other things, that Wolf River had been sued for deceptive sales practices. Wolf River claims significant damages, including cancelled contracts, in the range of $110 million to $210 million. The case is in its initial stages, with the plaintiffs seeking to remand it to state court after Google removed it to federal court. That procedural detail raises a natural question for defamation litigators, one that applies to the Walters case as well: Why did the defendants not file special motions to strike in these cases?

These motions, known as anti-SLAPP motions, are routine in defamation litigation. "SLAPP" is an acronym for "strategic lawsuit against public participation." Anti-SLAPP laws usually allow defendants to file an early "special motion to strike" to dismiss entire complaints or specific causes of action that they claim "chill" their exercise of constitutionally protected rights. Since most anti-SLAPP statutes apply to speech made in connection with an issue of public interest, defamation defendants almost always claim that they were exercising their free speech rights. More stringent anti-SLAPP laws (like California's Code of Civil Procedure Section 425.16) stay discovery, award prevailing defendants their attorney's fees and costs, and grant losing defendants the right to an immediate appeal. 

Both Georgia and Minnesota have strict anti-SLAPP laws. Ga. Code Ann. Section 9-11-11.1; Minn. Stat. Ann. Sections 554.07 et seq. Walters and Wolf River filed their complaints in their respective state courts, but rather than seek early dismissal under those anti-SLAPP laws, the defendants in both cases opted to remove the suits to federal court, where the anti-SLAPP laws either do not apply or are of uncertain application.

The federal circuits are split on whether state anti-SLAPP statutes apply in federal court. The 11th Circuit, which covers Georgia, does not apply state anti-SLAPP laws. Carbone v. Cable News Network, Inc., 910 F.3d 1345, 1357 (11th Cir. 2018). The 8th Circuit, which covers Minnesota, has not decided the issue, but the U.S. District Court for the District of Minnesota held that Minnesota's prior anti-SLAPP statute did not apply. Unity Healthcare, Inc. v. County of Hennepin, 308 F.R.D. 537, 554 (D. Minn. 2015). That statute was later found unconstitutional in Mobile Diagnostic Imaging, Inc. v. Hooten, 889 N.W.2d 27, 35 (Minn. Ct. App. 2016). It is undecided whether Minnesota's new anti-SLAPP law -- effective May 2024 -- will apply in federal court, though Unity Healthcare suggests it will not.

In our experience, defendants generally prefer federal courts, which tend to be less favorable to state law defamation claims, but defendants also routinely take advantage of state anti-SLAPP statutes. It therefore stood out to us that the Walters and Wolf River defendants did not file anti-SLAPP motions. There may be nuances in the particular anti-SLAPP statutes that affect their applicability in those cases, but the choice also raises broader questions about whether and how AI-generated content triggers constitutional free speech protections for purposes of anti-SLAPP laws generally. Either way, we can expect these anti-SLAPP issues to come up in future cases and are curious to see how courts will address them.



I would like to thank Charles Cardinal for his assistance on this article.



