AI in New York Litigation: Current Practice and Proposed Changes
Artificial intelligence (AI) is already reshaping litigation practice in New York courts, particularly in the Supreme Court and its Commercial Division. While there are no statewide rules addressing practitioners’ use of AI, judges are increasingly addressing misuse of AI through sanction orders, and rule makers have advanced targeted proposals to bring clarity and accountability to its use. This article surveys how courts are responding to AI misuse, proposed amendments to the CPLR and Commercial Division Rules, and the emerging judge-specific AI procedures.
How New York Judges Are Handling AI Misuse Today
At present, New York does not have a statewide rule addressing the use of AI. Even so, judges, particularly those within the Commercial Division, are responding decisively when parties rely on AI-generated content without independent verification, especially when filings contain “fabricated” citations or misstate authority. Two recent Commercial Division decisions, by Justice Andrea Masley and Justice Joel M. Cohen, addressed commonly known AI mishaps: briefs with citations to nonexistent authority and nonexistent or incorrect quotations from real cases. See Ader v. Ader, 87 Misc. 3d 1213(A), 2025 N.Y. Slip Op. 51563(U) (Sup Ct, NY County 2025); Enterprise v. Shvo, NYSCEF 154, Index No. 653221/2024 (Sup Ct, NY County, Dec. 24, 2024). The Justices admonished counsel who presented AI-generated citations without proper verification and imposed fee-shifting sanctions under 22 NYCRR §130-1.1. The court in Ader also indicated that it would make another judge, before whom counsel appeared, aware of the situation, and it directed service on the grievance committees of counsel’s states of admission.
As Justice Masley stated in Enterprise, “[t]he court relies on attorneys to do their jobs: advocate for their clients using law and facts—real law and real facts.” These decisions underscore a common theme: there is nothing inherently improper about using AI, but counsel must check AI-generated work, ensure accuracy, and avoid misleading content. When counsel fail to do so, New York judges are imposing sanctions.
Proposed Rule 6(e) for the Commercial Division
In a move from case-by-case admonitions to clearer guidance, the Commercial Division Advisory Council has recommended adding a new Commercial Division Rule 6(e). The proposed rule would expressly address the use of AI tools in the preparation of court filings. The proposed Rule 6(e) states:
“The court recognizes that counsel (or a self-represented party) may determine that the use of generative artificial intelligence tools in preparing to file briefs, letters, or memoranda of law with the court is in its client’s (or its) best interests. However, any person who files material with this court remains responsible for providing the court with complete and accurate representations in any such submission consistent with Part 130 of the Rules of the Chief Administrator and any other applicable legal or ethical guidance. Accordingly, any person who files any such material with this court is certifying the accuracy and reliability of such material and any statements made therein.”
This proposal reflects a measured approach. It recognizes that AI may be used, but it imposes one core expectation: counsel must independently review AI-assisted submissions to ensure accuracy, integrity, and compliance with existing obligations.
Proposed Part 161 to the Rules of the Chief Administrator of the Courts
New York’s court system has proposed a new statewide policy, Part 161, to guide the responsible use of AI in drafting court documents. The proposed policy rejects an outright ban and instead reaffirms existing obligations, including those under 22 NYCRR §130-1.1, making clear that counsel and litigants must independently verify filings and may face sanctions for fictitious facts or citations, regardless of AI use. The policy also provides that users would not need to disclose AI use when submitting court papers, although it does not prohibit a later inquiry by the court.
The proposal further includes a model part rule that emphasizes understanding AI’s limitations, including the risk of “hallucinations,” and reminds counsel that by signing court papers, they certify accuracy. The Advisory Committee on Artificial Intelligence and the Courts, which recommended the proposal, argues that a disclosure mandate or ban would be impractical and counterproductive, advocating instead a uniform standard paired with education to promote AI literacy and prevent misuse.
Commercial Division Justices’ Individual Procedures Addressing AI
In the absence of statewide uniformity, several Commercial Division justices have begun to incorporate AI directives into their individual practices. For example, Justice Nancy M. Bannon’s part procedures require a certification with any motion, hearing, or trial submission that no AI program was used or, if used, that all AI-generated text, citations, quotations, and analyses were reviewed and approved by counsel, with identification of the AI tool and the specific portions drafted by AI. The part procedures underscore that a violation may result in sanctions. Justice Peter Allen Weinmann’s part procedures likewise require disclosure of the AI tool used, identification of the portions of the submissions drafted by AI, and certification that a human reviewed the AI work product for accuracy and applicability.
Although formats vary, these part procedures share a common denominator: appropriate disclosure, human review, and accuracy. They reinforce the theme that AI can be a helpful aid but never a substitute for legal judgment. While proposed Rule 6(e), if adopted, may obviate the need for Commercial Division judges’ individual AI procedures, practitioners should be mindful that some judges may choose to impose additional AI disclosure and certification requirements.
Legislative Activity: Proposed CPLR Amendment
New rules on the use of AI in civil practice may also be coming at the statewide legislative level. Senate Bill S2698, currently pending in the New York State Senate Rules Committee, would amend the CPLR to require disclosure and certification when AI is used in civil filings. Specifically, the bill would add a new section 2107 mandating that any submission drafted with the assistance of generative AI include an affidavit disclosing that use and certifying that a human reviewed the source material and verified the accuracy of the AI-generated content. Where no generative AI is used, no disclosure would be required. The bill also includes a conforming amendment to CPLR 5528(a), adding a paragraph that would require appellate briefs to include, if applicable, the same disclosure of AI use and a certification of human oversight.
AI Policy for Judges and Court Staff
New York courts have recently issued guidance on their internal use of AI. In October 2025, the Unified Court System (UCS) issued an Interim Policy on the Use of Artificial Intelligence applicable to all judges and nonjudicial employees. The policy underscores that AI is a tool and that users remain fully accountable for their work. Key requirements include (i) use of only UCS-approved AI tools, (ii) mandatory initial and continuing training, and (iii) strict confidentiality safeguards, including a prohibition on inputting confidential, private, or privileged information into any AI program that does not operate on a private model under UCS’s control. The policy also bars uploading any court-filed documents into public-model AI systems. In sum, the UCS approach encourages careful use of AI with a focus on protecting confidentiality and ensuring accuracy.
Practical Takeaways
New York’s courts are not banning AI; they are demanding oversight and accountability. For Commercial Division practitioners, that means taking steps to ensure court filings are accurate, regardless of the tools counsel use to draft them. Practitioners should treat AI outputs as unverified drafts and independently confirm all citations, quotations, facts, and law. They should also monitor judges’ part procedures and any new rules to ensure compliance with AI disclosure and certification requirements.
Reprinted with permission from the January 5, 2026 issue of the New York Law Journal© 2025 ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.

