Draft:Mata v. Avianca, Inc.

Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023), was a case before the United States District Court for the Southern District of New York. The plaintiff, Roberto Mata, sought damages from Avianca, Inc. for injuries he sustained during a flight from El Salvador to New York City. The defendant's lawyers raised a federal question under the Convention for the Unification of Certain Rules for International Carriage by Air, done at Montreal, Canada (the Montreal Convention). However, the case took a different turn when the court discovered that the opposition filings submitted by the plaintiff's lawyers, Peter LoDuca and Steven A. Schwartz of Levidow, Levidow & Oberman, P.C., were obtained from a generative artificial intelligence tool.

Background
The plaintiff, through his lawyers, commenced the suit on or about February 2, 2022, by filing a Verified Complaint asserting that he was injured when a metal serving cart struck his left knee during a flight from El Salvador to New York. The defendant's counsel filed a motion to dismiss the suit, asserting federal question jurisdiction under the Convention for the Unification of Certain Rules for International Carriage by Air, done at Montreal, Canada (the Montreal Convention). The plaintiff's lawyers filed an Affirmation in Opposition to the motion to dismiss.

The defendant's counsel filed a letter stating that it could not locate many of the cases and authorities cited in the Affirmation in Opposition prepared by the plaintiff's lawyers, Mr. LoDuca and Mr. Schwartz. The lawyers did not immediately withdraw the Affirmation in Opposition or otherwise address the apparent non-existence of these cases.

The court issued a supplemental Order directing Mr. Schwartz to show cause why he ought not be sanctioned, pursuant to Rule 11(b)(2) of the Federal Rules of Civil Procedure and the court's inherent powers, for aiding and causing the citation of non-existent cases in the Affirmation in Opposition and the submission of the non-existent judicial opinions annexed to it. In an earlier affidavit, the plaintiff's lawyers had annexed what were purported to be copies of the decisions required by the court's orders. The plaintiff's lawyers filed a memorandum in response to the Order to show cause. In the memorandum, the lawyers admitted that they had used a research tool that fabricated the cited authorities: they had relied on ChatGPT, which assured them that the cases were real and could be found on Westlaw and LexisNexis, and which continued to provide extended excerpts and favorable quotations. The plaintiff's lawyers acknowledged their mistake and stated that they had no intention to defraud the court.

Decision
The court found that the plaintiff's lawyers, Mr. LoDuca and Mr. Schwartz, violated Rule 11 of the Federal Rules of Civil Procedure by: (1) submitting false information and fake legal arguments in their submissions to the court; (2) failing to read the cited cases or otherwise take any action to ensure that the legal assertions in the Affirmation in Opposition were warranted by existing law; and (3) swearing to the truth of an affidavit with no basis for doing so. The court also found that the attorneys acted in bad faith: after opposing counsel and the court questioned the cases' existence, Mr. Schwartz prepared an affidavit purporting to contain excerpts from the cases generated by ChatGPT, even though he knew one of the cases did not exist.

The court relied on Rule 11(b)(2) of the Federal Rules of Civil Procedure, which states: "By presenting to the court a pleading, written motion, or other paper—whether by signing, filing, submitting, or later advocating it—an attorney or unrepresented party certifies that to the best of the person's knowledge, information, and belief, formed after an inquiry reasonable under the circumstances ... the claims, defenses, and other legal contentions are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law ..." The court sanctioned Mr. LoDuca, Mr. Schwartz, and the Levidow firm under Rule 11 and, alternatively, under the court's inherent power. It imposed a $5,000 penalty and ordered the lawyers and the law firm to send to their client, Mr. Mata, and to "each judge falsely identified as the author of the fake" opinions a copy of the court's Sanctions Opinion and Order, a transcript of the sanctions hearing, and a copy of the April Affirmation in Opposition, including the "fake 'opinion'" attributed to the recipient judge. On the jurisdictional question before the court, the court granted Avianca, Inc.'s motion to dismiss under Fed. R. Civ. P. 12(b)(6) on the ground that the plaintiff's claim was untimely under the Montreal Convention's strict two-year time bar.

Implications for legal practice
The decision was written by Judge P. Kevin Castel, who stated as follows with regard to the use of artificial intelligence technology in legal practice: "In researching and drafting court submissions, good lawyers appropriately obtain assistance from junior lawyers, law students, contract lawyers, legal encyclopedias and databases such as Westlaw and LexisNexis. Technological advances are commonplace and there is nothing inherently improper about using a reliable artificial intelligence tool for assistance. But existing rules impose a gatekeeping role on attorneys to ensure the accuracy of their filings ..." The court thus affirmed the role of technology, including artificial intelligence, in practice, while holding that lawyers must comply with existing rules to ensure that their filings are accurate. This position aligns with Rule 1.1 of the ABA Model Rules of Professional Conduct, which requires lawyers to maintain the requisite knowledge and skill and to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. The Mata case was notable both for the brazenness of the reliance on ChatGPT and for the fact that it is believed to be the first instance of sanctions for improper reliance on generative artificial intelligence. Judges in several jurisdictions have since issued rules or practice directions prohibiting the use of generative AI, requiring certification of its non-use, or requiring detailed disclosure of the use of generative AI applications in counsel's court submissions.