The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases. When the circumstance was called to the Court’s attention by opposing counsel, the Court issued Orders requiring plaintiff’s counsel to provide an affidavit annexing copies of certain judicial opinions of courts of record cited in his submission, and he has complied. Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations. Set forth below is an Order to show cause why plaintiff’s counsel ought not be sanctioned.
The Court begins with a more complete description of what is meant by a nonexistent or bogus opinion. In support of his position that there was tolling of the statute of limitations under the Montreal Convention by reason of a bankruptcy stay, the plaintiff’s submission leads off with a decision of the United States Court of Appeals for the Eleventh Circuit, Varghese v. China South Airlines Ltd., 925 F.3d 1339 (11th Cir. 2019). Plaintiff’s counsel, in response to the Court’s Order, filed a copy of the decision, or at least an excerpt therefrom.
The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to this Court’s inquiry, has confirmed that there has been no such case before the Eleventh Circuit with a party named Vargese or Varghese at any time since 2010, i.e., the commencement of that Court’s present ECF system. He further states that the docket number appearing on the “opinion” furnished by plaintiff’s counsel, Docket No. 18-13694, is for a case captioned George Cornea v. U.S. Attorney General, et al. Neither Westlaw nor Lexis has the case, and the cited page, 925 F.3d 1339, falls within A.D. v. Azar, 925 F.3d 1291 (D.C. Cir. 2019). The bogus “Varghese” decision contains internal citations and quotes, which, in turn, are non-existent: …
The following five decisions submitted by plaintiff’s counsel contain similar deficiencies and appear to be fake as well ….
The court therefore ordered plaintiff’s counsel to show cause why he shouldn’t be sanctioned, and plaintiff’s counsel responded that he had been relying on the work of another lawyer at his firm, and that this second lawyer (who had 30 years of practice experience) had in turn been relying on ChatGPT. The court ordered a further round of explanations, and here’s the heart of the filing yesterday from the second lawyer (paragraph numbering removed):
The Opposition to the Motion to Dismiss
After this case was removed, Defendant filed a motion to dismiss, arguing, among other things, that this case was subject to a two-year statute of limitations under the Montreal Convention 1999 (the “Montreal Convention”), and that Mr. Mata had somehow filed the Complaint too late. As the partner who had been lead counsel on the case since its inception almost three years before, I took responsibility for preparing a response to the motion.
Our position on the motion would be that the claims were timely filed because either the Montreal Convention’s shortened statute of limitations was inapplicable or, in the alternative, any period of limitations was tolled by Avianca’s bankruptcy.
As discussed in our memorandum of law as well as the declaration of Thomas Corvino, the Firm practices primarily in New York state courts as well as New York administrative tribunals. The Firm’s primary tool for legal research is a program called Fastcase, which is an online legal research program made available to all lawyers at the Firm.
Based on my experience using Fastcase, I understood that the Firm had access to the New York State database as well as to at least some federal cases.
I first attempted to use Fastcase to perform legal research in this case; however, it became apparent that I was not able to search the federal database.
My Use of ChatGPT
In an effort to find other relevant cases for our opposition, I decided to try and use ChatGPT to assist with legal research.
I had never used ChatGPT for any professional purpose before this case. I was familiar with the program from my college-aged children as well as the articles I had read about the potential benefits of artificial intelligence (AI) technology for the legal and business sectors.
At the time I used ChatGPT for this case, I understood that it worked essentially like a highly sophisticated search engine where users could enter search queries and ChatGPT would provide answers in natural language based on publicly available information.
I realize now that my understanding of how ChatGPT worked was wrong. Had I understood what ChatGPT was or how it actually worked, I would never have used it to perform legal research.
Based on my erroneous understanding of how ChatGPT worked, I used the program to try and find additional caselaw support for our arguments.
I conducted the search in the same general manner I searched any legal research database. I first asked ChatGPT a broader question about tolling under the Montreal Convention and was provided with a general answer that appeared consistent with my pre-existing understanding of the law. I then began to ask ChatGPT to find cases in support of more specific principles applicable to this case, including the tolling of the Montreal Convention’s statute of limitations as a result of Defendant’s bankruptcy. Each time, ChatGPT provided me with an affirmative answer including case citations that appeared to be genuine. Attached as Exhibit A are copies of my chat history with ChatGPT when I was performing research for our opposition to the motion to dismiss.
In connection with my research, I also asked ChatGPT to provide the actual cases it was citing, not just the summaries. Each time, ChatGPT provided me with what it described as a “brief excerpt” that was complete with a case caption, legal analysis, and other internal citations. I recognize in hindsight that I should have been more skeptical when ChatGPT did not provide the full case I requested. Nevertheless, given the other features of the “cases” it was providing, I attributed this to the program’s effort to highlight the relevant language based on my search queries.
As noted above, when I was entering these search queries, I was under the erroneous impression that ChatGPT was a type of search engine, not a piece of technology designed to converse with its users. I simply had no idea that ChatGPT was capable of fabricating entire case citations or judicial opinions, especially in a manner that appeared authentic.
I therefore cited several of the cases that ChatGPT provided in our opposition to the motion to dismiss.
The April 25 Affidavit
In response to Defendant’s reply brief—which stated that defense counsel could not locate several of the cases cited in our opposition papers—the Court ordered [the lead lawyer] to file an affidavit annexing nine of the cases cited in our opposition….
I was unable to find one of the cases (which was cited within another case that I had found on ChatGPT). Of the remaining eight cases, I obtained two from Fastcase. The remaining six, I obtained from ChatGPT (the “ChatGPT Cases”).
Similar to how I used ChatGPT when I was preparing the opposition papers, I asked ChatGPT to provide copies of the six ChatGPT Cases. ChatGPT provided me with what appeared to be partial versions of the six cases. Because I did not have access to another research database covering the federal reporters, I did not take these citations and obtain full copies of the cases from another source. (I realize now that I could have gone to a bar association library or a colleague who had access to Westlaw and Lexis, but it did not occur to me at the time.) However, when I was responding, I still did not believe it was possible that the cases ChatGPT was providing were completely fabricated. I therefore attached the ChatGPT Cases to the April 25 Affidavit….
The First OSC and My Realization That the ChatGPT Cases Were Not Authentic
In response to the April 25 Affidavit as well as a letter from defense counsel representing that it still could not find the ChatGPT Cases, this Court issued an Order to Show Cause on May 4, 2023 …. The First OSC stated that the Court could not find any record of six of the cases that we attached to the April 25 Affidavit (i.e. the ChatGPT Cases).
When I read the First OSC, I realized that I must have made a serious error and that there must be a major flaw with the search aspects of the ChatGPT program. I have since come to realize, following the Order, that the program should not be used for legal research and that it did not operate as a search engine at all, which was my original understanding of how it worked.
Before the First OSC, however, I still could not fathom that ChatGPT could produce multiple fictitious cases, all of which had various indicia of reliability such as case captions, the names of the judges from the correct locations, and detailed fact patterns and legal analysis that sounded authentic. The First OSC caused me to have doubts. As a result, I asked ChatGPT directly whether one of the cases it cited, “Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019),” was a real case. Based on what I was beginning to realize about ChatGPT, I highly suspected that it was not. However, ChatGPT again responded that Varghese “does indeed exist” and even told me that it was available on Westlaw and LexisNexis, contrary to what the Court and defendant’s counsel were saying. This confirmed my suspicion that ChatGPT was not providing accurate information and was instead simply responding to language prompts without regard for the truth of the answers it was providing. However, by this time the cases had already been cited in our opposition papers and provided to the Court.
In an effort to be fully transparent, I provided an affidavit … to submit [to the court]. In my affidavit, I made clear that I was solely responsible for the research and drafting of the opposition and that I used ChatGPT for some of the legal research. I also apologized to the Court and reiterated that it was never my intention to mislead the Court.
The Consequences of this Matter
As detailed above, I deeply regret my decision to use ChatGPT for legal research, and it is certainly not something I will ever do again.
I recognize that if I was having trouble finding cases on the Firm’s existing research platform, I should have either asked the Firm to obtain a more comprehensive subscription or used the research resources such as Westlaw and LexisNexis maintained by the law libraries at the local bar associations….