The A.I. Defamation Cases Are Here: ChatGPT Sued for Spreading Misinformation


A Georgia man is suing the makers of ChatGPT for defamation. In a new lawsuit filed in Gwinnett County, Georgia, Mark Walters alleges that OpenAI, the company behind the popular artificial intelligence (A.I.) chatbot ChatGPT, published libelous information about him. The first-of-its-kind lawsuit raises novel questions about A.I.’s liability for spreading misinformation.

The case stems from reporting that journalist Fred Riehl was doing about a Second Amendment Foundation (SAF) lawsuit against Bob Ferguson, Washington state’s attorney general. Alan Gottlieb is one of the plaintiffs in that lawsuit.

Riehl linked to SAF’s complaint and asked ChatGPT to summarize it. It allegedly responded that the complaint was “filed by Alan Gottlieb … against Mark Walters, who is accused of defrauding and embezzling funds from the SAF.” The ChatGPT summary continued by stating that Walters was the group’s treasurer and chief financial officer and that he had “misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures,” per Walters’ complaint.

ChatGPT was wrong across the board. Walters is neither a plaintiff nor a defendant in the lawsuit. He never served as SAF’s treasurer or chief financial officer. And he has not been legally accused of any crimes against SAF.

“ChatGPT’s allegations concerning Walters were false and malicious, expressed in print, writing, pictures, or signs, tending to injure Walter’s reputation and exposing him to public hatred, contempt, or ridicule,” states Walters’ complaint. “By sending the allegations to Riehl, OAI published libelous matter regarding Walters.”

Furthermore, Walters alleges that OpenAI is aware that ChatGPT “sometimes makes up facts” and therefore “knew or should have known its communication to Riehl regarding Walters was false, or recklessly disregarded the falsity of the communication.”

But there’s a difference between a company knowing that an artificial intelligence tool can make mistakes and a company knowing that the tool would make a specific mistake. The fact that OpenAI is aware ChatGPT sometimes errs seems like spurious grounds for claiming that it knew or should have known ChatGPT would provide false information about Walters. And it seems even more dubious to allege that OpenAI acted with malicious intent here.

And Riehl, the journalist, didn’t end up publishing any of the false information about Walters, which makes it harder to argue that Walters was harmed by ChatGPT’s mistake.

So does Walters’ case have any legal merit?

Law professor and blogger Eugene Volokh suggests that “such libel claims are in principle legally viable. But this particular lawsuit should be hard to maintain.”

Volokh—who has an upcoming paper on libel and A.I. output (a draft of which can be read here)—notes that when it comes to speech about matters of public interest or concern, defamation liability generally arises only when one of two things can be shown: that the defendant knew a statement was false, or recklessly disregarded a substantial likelihood that it was false; or that the person being defamed is a private figure who suffered actual damages (things like a loss of income or business opportunities) because of a false statement that the defendant was negligent in making.

In this case, “it doesn’t appear from the complaint that Walters put OpenAI on actual notice that ChatGPT was making false statements about him, and demanded that OpenAI stop that, so theory 1 is unavailable,” writes Volokh.

“And there seem to be no allegations of actual damages—presumably Riehl figured out what was going on, and thus Walters lost nothing as a result—so theory 2 is unavailable,” Volokh continues. “(Note that Mark Walters might be a public figure, because he’s a syndicated radio talk show host; but even if he is a private figure, that just potentially opens the door to recovery under theory 2 if he can show actual damages, and again that seems unlikely given the allegations in the complaint.)”

“Now I suppose that Walters could argue that OpenAI knows that ChatGPT often does publish false statements generally (it does, and indeed has acknowledged that), even if it didn’t know about the false statements about Walters in particular,” he adds. “But I don’t think this general knowledge is sufficient, just like you can’t show that a newspaper had knowledge or recklessness as to falsehood just because the newspaper knows that some of its writers sometimes make mistakes. For liability in such cases (again, absent actual damages to a private figure), there has to be a showing that the allegedly libelous ‘statement was made with “actual malice”—that is, with knowledge that it was false or with reckless disregard of whether it was false or not.’ And here no one at OpenAI knew about those particular false statements, at least unless Walters had notified OpenAI about them.”

Jess Miers, a lawyer with the business group Chamber of Progress, addresses some other potential concerns about the case, such as whether Section 230—the law protecting online platforms from some legal liability for content derived from third parties—will factor in. Because the underlying complaint doesn’t make a plausible case for defamation, Miers tweeted yesterday that she “can see the complaint failing without needing to even reach the 230 issues.”

Miers notes that when it comes to whether OpenAI should have known ChatGPT might make this mistake, we’re looking at an issue similar to the one in the recent Supreme Court case Twitter v. Taamneh. The Court found that Twitter was not liable for aiding and abetting terrorists just because it hosted Islamic State content.

“Just because a company has general knowledge that their products and services could be used to perform illegal uses doesn’t mean that the company is liable for any instance of those uses,” Miers summarized.

Far from being something that should subject OpenAI to legal liability, the fact that OpenAI knows ChatGPT has some issues is a good sign. It means the company can work on fixing those issues and/or work on making sure people who use ChatGPT know not to take its outputs as gospel.

We should also think carefully about what we want OpenAI to do here, suggests Miers. “Perhaps they could provide more disclosures that urge folks not to rely on anything ChatGPT says as fact. But that’s about it. It’s pretty much all or nothing with this kind of technology. In using it, we accept that there will be a lot of junk. But the alternative very well might be ripping the service off the market entirely. Is that the desired outcome?”


FREE MINDS

How age-verification laws threaten our First Amendment right to anonymity. “Since the early history of the United States, Americans have enjoyed the right to anonymous speech,” notes Shoshana Weissmann of the R Street Institute. “The First Amendment protects this right, and the Supreme Court has long recognized it. The tradition dates back even farther than the anonymous signers of the Federalist Papers in the 1780s and includes a unanimous Supreme Court case decision in which it was ruled that the National Association for the Advancement of Colored People (NAACP) did not have to disclose names on membership lists to Alabama officials in 1958.”

Laws that mandate age-verification schemes for social media and other online platforms are proliferating before Congress and in statehouses around the country. But these schemes seriously threaten anonymized speech online, points out Weissmann:

With currently proposed legislation and laws, age-verification methods from facial recognition to providing one’s government ID or home address threaten to destroy the possibility of remaining anonymous online (to the degree that is currently possible). And the technology used to verify age ends up verifying more than age. Facial scanning provides a picture or video. Government IDs verify more than just the age of the person logging in, and they cannot account for the possibility that the person logging in could be a child misusing their guardian’s ID. Furthermore, if a person has to verify that their child really is their child as part of parental consent verification, then that adult’s information will be disclosed, too. …

Age-verification mandates could also implicate the rights of individuals with the concept of the “chilling effect” in court. This effect occurs when people voluntarily filter their speech due to laws and can cause courts to overturn these laws that cause the “chilling effect.”

More here.


FREE MARKETS

New York is considering setting minimum prices for nail services. In a crazy foray into state-managed economies, the proposed Nail Salon Minimum Standards Council Act would not only set new workplace standards and rules for nail salons but also “establish a minimum pricing model for nail services in the state,” notes The New Republic in a piece portraying the bill as a boon to nail salon workers and businesses.

But low prices are one way that new businesses, small businesses, those with lower marketing budgets, and those in less desirable locations can compete with more established, centrally located, or chain establishments. Taking away salon owners’ ability to set their own prices seems to only benefit currently flourishing or big corporate salons, and could be a net negative to workers at smaller and more independent places.

The bill also sets up a slippery slope. What makes nail salons unique here? Nothing. And if the state can set minimum prices for manicures and pedicures, it can set prices for haircuts, tomatoes, fitness classes, or just about anything else.

The Nail Salon Minimum Standards Council Act would start by simply creating a commission on minimum pricing to study the issue and make recommendations. But this recommendation process would pave the way for a proposed regulation that, if all goes according to the bill’s plan, would “have the force and effect of law.”


QUICK HITS

• “AI will not destroy the world, and in fact may save it,” writes venture capitalist Marc Andreessen.

• A Connecticut couple is challenging the warrantless surveillance of their property by camera-carrying bears.

• “After days of silence, officials in Florida confirmed on Tuesday that the administration of Gov. Ron DeSantis had orchestrated two recent charter flights that carried groups of migrants from New Mexico to Sacramento,” reports The New York Times.

• A federal judge has halted Florida’s ban on gender transition treatments for minors.

• Ohio Secretary of State Frank LaRose admitted that he supports a measure to raise the threshold for amending the state constitution from a simple majority vote to 60 percent in order to make it harder for an abortion rights amendment to pass.

• A bill that just passed the Louisiana Senate and House with a veto-proof majority would require teachers and schools to get parental permission to refer to a student by any name that is not “the name, or a derivative thereof… that is listed on the student’s birth certificate.” The measure would also require school employees to “use the pronouns for a student that align with the student’s sex unless the student’s parent provides written permission to do otherwise.”

• A Wisconsin bill would let people claim a $1,000 tax exemption for any “unborn children for whom a fetal heartbeat has been detected.”

• Tucker Carlson’s new Twitter show has launched.
