“Large Libel Models” Lawsuits, the Aggregate Costs of Liability, and Possibilities for Changing Existing Law

Last week and this, I’ve been serializing my Large Libel Models? Liability for AI Output draft. For some earlier posts on this (including § 230, disclaimers, publication, and more), see here; one particularly significant point is at Communications Can Be Defamatory Even If Readers Realize There’s a Considerable Risk of Error. Today, I turn to two arguments against liability.

[* * *]

[A.] Aggregate Costs of Liability

To be sure, once one allows any sort of legal claim against AI companies based on their programs’ output, this will lead to many more claims, sound or not. Even if the first victories happen where the claims seem strongest—for instance, as to fabricated quotes, or continued communication of fake quotes after the company has been alerted to them—later claims may be much more contestable and complicated. Yet each one will have to be defended, at great expense, even if the AI company prevails. Lay juries may err in deciding that some alternative design would be feasible, thus leading to some erroneous liability verdicts. And common-law courts may likewise extend plausible precedents for liability into much more radical and unjustified liability rules.[1]

As a result, AI companies that produce such software may find it impossible to get liability insurance. And while the richest companies may be able to self-insure, upstart competitors might not be able to. This might end up sharply chilling innovation, in an area where innovation may be especially important, given the significance of AI to national security and international competitiveness.

These are, I think, serious concerns. I am not a cheerleader for the American tort liability system.[2] Perhaps, as the next Part discusses, these concerns can justify statutory immunity, or judicial decisions foreclosing common-law liability.

But these concerns can be, and have been, raised with regard to liability—especially design defect liability—for many other industries.[3] Yet, rightly or wrongly, the legal system has generally allowed such liability claims, despite their financial costs and the danger they pose to innovation. Innovation, the theory has been, shouldn’t take place at the expense of people who are injured by the new products; indeed, the threat of liability is an important tool for pushing innovators toward designs that offer both innovation and safety. And whatever Congress may decide as a statutory matter (as it did in providing immunity to Internet companies under § 230), existing common-law principles seem to support some kinds of liability for AI companies.

[B.] Should Current Law Be Changed?

Of course, the legal rules discussed above aren’t the end of the story. Congress could, for instance, preempt defamation liability in such cases, just as it did with § 230. And courts can themselves revise the common-law tort law rules, in light of the special features of AI technology. Courts made these rules, and they can change them. Should they do so? In particular, should they do so as to negligence liability?

The threat of liability, of course, can deter useful, reasonable designs as well as unreasonable ones. Companies might worry, for instance, that juries might tend to side with injured individuals and against large corporations, and conclude that even the best possible designs are still “unreasonable” because they allowed some false and defamatory statements to be output.

True, the companies may put on experts who can explain why some risk of libel is unavoidable (or avoidable only by withdrawing highly valuable features of the program). But plaintiffs will put on their own experts, and lay juries are unlikely to be good at sorting the strong expert evidence from the weak—and the cost of litigation is likely to be huge, win or lose. As a result, the companies will err on the side of limiting their AIs’ output, or at least output that mentions the names of real people. This in turn will limit our ability to use the AIs to learn even accurate information about people.

And this may be a particular problem for new entrants into the market. OpenAI appears to have over $10 billion in funding, and to be valued at almost $30 billion. It can afford to hire the best lawyers, to buy potentially expensive libel insurance, to pay the occasional damages verdict, and to design various features that might diminish the risk of litigation. But potential upstart rivals might not have such resources, and might thus be discouraged from entering the market.

To be sure, this is a problem for all design defect liability, yet such liability is a norm of our legal system. We don’t immunize driverless car manufacturers in order to promote innovation. But, the argument would go, injury to life and limb from car crashes is a more serious harm to society than injury to reputation. On this view, we should limit negligent design liability to negligent harm to person or property (since risk to property generally goes hand in hand with risk of physical injury) and exclude negligent harm to reputation.

This argument might be buttressed by an appeal to the First Amendment. Gertz v. Robert Welch, Inc. upheld negligence claims in some defamation cases on the theory that “there is no constitutional value in false statements of fact.”[4] But that decision stemmed from particular judgments about the chilling effect of negligence-based defamation liability in lawsuits over individual stories. Perhaps the result should be different when AI companies are facing liability for supposed negligent design, especially when the liability goes beyond claims such as failure to check quotes or URLs.

Among other things, a reporter writing about a private figure can diminish (though not eliminate) the risk of negligence liability by taking extra care to check the facts of that particular story. An AI company might not be able to take such care. Likewise, reporters writing about a public official or obvious public figure can feel secure that they won’t be subject to negligence liability; an AI likely can’t reliably tell whether some output is about a public official or public figure, or is instead about a private figure. The precautions that an AI company might thus need to take to avoid negligence liability might end up softening its answers about public officials as much as its answers about private figures.

How exactly this should play out is a hard call. Indeed, there is much to be said for negligence liability as well as against it, even when it comes to defamation. In the words of Justice White,

It could be suggested that even without the threat of large presumed and punitive damages awards, press defendants’ communication will be unduly chilled by having to pay for the actual damages caused to those they defame. But other commercial enterprises in this country not in the business of disseminating information must pay for the damage they cause as a cost of doing business ….

Whether or not this is so as to the news media, it can certainly reasonably be argued as to AI companies.

[1] See generally Eugene Volokh, The Mechanisms of the Slippery Slope, 116 Harv. L. Rev. 1026 (2003).

[2] See, e.g., Eugene Volokh, Tort Law vs. Privacy, 114 Colum. L. Rev. 879 (2014).

[3] See generally Walter K. Olson, The Litigation Explosion: What Happened When America Unleashed the Lawsuit (1991); Peter W. Huber, Liability: The Legal Revolution and Its Consequences (1988).

[4] 418 U.S. 323, 340 (1974). First Amendment law generally precludes claims for negligence when speech that is seen as valuable—opinions, fictions, or true statements of fact—helps cause some listeners to engage in harmful conduct. But that stems from the speech being valuable; Gertz makes clear that some forms of negligence liability based on false statements of fact are constitutionally permissible.

The post "Large Libel Models" Lawsuits, the Aggregate Costs of Liability, and Possibilities for Changing Existing Law appeared first on Reason.com.

from Latest https://ift.tt/QMVjKtb
via IFTTT

Leave a Reply

Your email address will not be published. Required fields are marked *