Another Note from a Judge About Generative AI Programs

It’s an aside in In re: Vital Pharmaceutical, a decision by Bankruptcy Judge Peter Russin (released June 16, but I just came across it):

In preparing the introduction for this Memorandum Opinion, the Court prompted ChatGPT to prepare an essay about the evolution of social media and its impact on creating personas and marketing products. Along with the essay it prepared, ChatGPT included the following disclosure: “As an AI language model, I do not have access to the sources used for this essay as it was generated based on the knowledge stored in my database.” It went on to say, however, that it “could provide some general sources related to the topic of social media and its impact on creating personas and marketing products.” It listed five sources in all. As it turns out, none of the five seem to exist. For some of the sources, the author is a real person; for other sources, the journal is real. But all five of the citations seem made up, which the Court would not have known without having conducted its own research. The Court discarded the information entirely and did its own research the old-fashioned way. Well, not quite old fashioned; it’s not like the Court used actual books or anything. But this is an important cautionary tale. Reliance on AI in its present development is fraught with ethical dangers.

Should be a familiar cautionary tale by now, but I thought it was worth noting again. (I was just testing Claude 2 by asking it for cases on pseudonymity in libel litigation, and it hallucinated some up for me, much as other AI programs have been known to do.)

The post Another Note from a Judge About Generative AI Programs appeared first on Reason.com.

