Draft:Mata v. Avianca

Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023) was a personal injury case decided by the United States District Court for the Southern District of New York in 2023. It became widely known for the plaintiff's lawyers' use of ChatGPT for legal research: the AI tool "hallucinated" a series of non-existent cases and provided fictitious summaries of them. The case was dismissed and the lawyers were sanctioned.

Facts
The case was brought by Roberto Mata, who sought damages from Avianca for injuries to his left knee sustained when a metal food-serving cart struck it during a 2019 flight from El Salvador to New York City. The suit was initially filed in state court, but Avianca removed it to the federal district court.

In federal court, Avianca filed a motion to dismiss the lawsuit as time-barred by the two-year limitation period contained in the Montreal Convention, a multilateral treaty governing the rules on international air travel to which both the United States and El Salvador are parties.

In response, the plaintiff's lawyers—Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman—filed an opposition brief citing a number of cases for the proposition that the Montreal Convention's limitation period could be disregarded in favour of New York state law, which provided a more generous three-year limitation period. They also argued that Avianca's bankruptcy had tolled the limitation period.

The defence filed a letter stating that they could not locate many of the cases and authorities cited by the plaintiff. The plaintiff's lawyers did not immediately withdraw the Affirmation in Opposition or otherwise address the apparent non-existence of these cases.

Sanctions
The Court issued a supplemental order directing Mr. Schwartz to show cause why he ought not be sanctioned pursuant to Rule 11(b)(2) of the Federal Rules of Civil Procedure (as well as the court's inherent powers) for citing non-existent cases and for annexing copies of non-existent judicial opinions to his filing.

In response to the court's order to show cause, the plaintiff's lawyers filed a memorandum in which they admitted to using ChatGPT to research the cases cited. Mr. Schwartz stated that he had asked the chatbot whether one case in particular—Varghese—was real, and that ChatGPT assured him it "is a real case". When asked for its source, the chatbot stated that the case "does indeed exist and can be found on legal research databases such as Westlaw and LexisNexis".

The plaintiff's lawyers acknowledged their mistake and stated that they had no intention to defraud the Court.

The Federal Rules of Civil Procedure require that a party presenting a filing to the court "[certify] that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances, the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law".

Judge Castel found that the lawyers had violated Rule 11 of the Federal Rules of Civil Procedure by submitting false information and fake legal authorities, failing to read the cases they cited, and swearing to the truth of their affidavit with no basis for doing so.

The court found that the attorneys had acted in bad faith: had they apologised and withdrawn their brief after opposing counsel questioned it, sanctions would probably not have been at issue.

The plaintiff's lawyers were fined $5,000, and the court required them to send letters, with copies of the opinion and sanctions order, both to their client and to each of the six judges that ChatGPT had falsely identified as authors of the fake authorities.

Aside from the sanctions issue, the court considered and rejected the plaintiff's argument that the limitation period specified by state law could be applied instead of that of the Montreal Convention.

Reception and implications
Judge Castel's decision warned legal professionals against adopting artificial intelligence tools without verifying the accuracy and reliability of their results:

"In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings..."

The court affirmed the role of technology and the use of artificial intelligence in practice, while emphasising that lawyers must comply with the existing rules to ensure that their filings are accurate. This position aligns with Comment 8 to Rule 1.1 of the ABA Model Rules of Professional Conduct, which requires lawyers to maintain the requisite knowledge and skill, and to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. The Mata case was notable both for the brazenness of the reliance on ChatGPT and because it is believed to be the first instance of sanctions for improper reliance on generative artificial intelligence.