Courts, judges, and state bar associations are actively developing ethics rules, opinions, local rules, and guidelines on the use of AI technology in legal practice. Several resources are available that track these developments across jurisdictions.
Generative Artificial Intelligence (AI) Federal and State Court Rules Tracker (Lexis+): This tracker notes the individual civil rules and standing orders implemented by certain federal and state court judges, court administrations, and bar associations governing the use of generative AI in court filings.
Use of Artificial Intelligence in the Practice of Law (Westlaw): A 50-state survey covering enacted laws and rule changes relating to the use of AI tools by attorneys or parties in any legal matter.
Judicial Standing Orders on Artificial Intelligence Tracker (Bloomberg Law): This tracker includes federal court judicial standing orders and guidance related to the use of artificial intelligence tools in court filings.
State Information on AI (National Center for State Courts): Information on state activities related to artificial intelligence and documentation to assist courts in developing policies and procedures for using generative AI.
Judicial Standing Orders: AI to Z (on-demand webinar by Bloomberg Law): Insights on current legal practices regulated by judicial standing orders and future practices influenced by technology.
Since ChatGPT's debut in November 2022, the practical risks of using generative AI have become widely recognized. These risks include lack of transparency, biased training data, generation of false information (known as "hallucinations"), and privacy concerns. The integration of generative AI into legal practice raises significant ethical considerations for law students and attorneys.
Understanding these issues is crucial for ensuring compliance with ethical standards. Legal scholars emphasize the importance of law students and practitioners developing the skills and knowledge to use these tools responsibly and effectively.
ABA Resolution 112 urges lawyers to address ethical issues related to AI, including bias, explainability, and transparency.
ABA Resolution 604 emphasizes the need for organizations to follow guidelines when designing, developing, and deploying AI systems.
The ABA Ethics Committee's Formal Opinion 512 emphasizes that lawyers must consider their ethical duties when using AI, including competent representation, client confidentiality, communication, supervision, advancing meritorious claims, candor to the court, and charging reasonable fees.
The California State Bar approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law on November 16, 2023. This guidance is designed to assist lawyers in navigating their ethical obligations when using generative AI and is intended to be a living document, updated periodically as the technology evolves and new issues arise.
Stories about lawyers misusing generative AI spread quickly, but courts are not rejecting AI use outright. Instead, judges are setting expectations for accuracy, disclosure, and professional responsibility.
Below are recent cases where courts have addressed generative AI use. These cases highlight an essential point: lawyering still requires human expertise. Verification is critical, and courts may require lawyers to disclose AI-generated content.
March 26, 2025: A New York appeals court reprimanded a pro se litigant for using an AI-generated avatar in a video argument without disclosing it. The litigant, who had permission to submit a video presentation due to a speech-related disability, failed to inform the court that the speaker in the video was an AI avatar. Justice Sallie Manzanet-Daniels criticized him for misleading the court, stopped the video, and allowed him five minutes to present argument in person. He later apologized, explaining that he had initially planned to use a digital version of himself but faced technical issues.
February 27, 2025: A federal district court in Pennsylvania sanctioned an attorney for citing nonexistent cases generated by ChatGPT in two motions. The court found that the attorney "outsourced his job to an algorithm" without verifying the AI-generated citations, which included not only fabricated cases but also overruled and inapposite ones. The judge found the attorney violated Rule 11(b)(2) of the Federal Rules of Civil Procedure, imposed a $2,500 fine, and ordered the attorney to complete a CLE program on AI and legal ethics. The judge emphasized that while technology evolves, the duty of attorneys to "Stop, Think, Investigate and Research" before filing papers remains unchanged.
February 24, 2025: A federal district court in Wyoming sanctioned three attorneys for citing nonexistent cases in a motion in limine. The court could not find eight of the nine cited cases; the attorneys initially responded that some of the cases "can be found on ChatGPT." The motion’s author used MX2.law, an AI tool developed by his firm, Morgan & Morgan, to generate case law without verifying its accuracy. After the court issued a show cause order, the firm withdrew the motion, covered opposing counsel’s fees for defending the motion, and implemented AI-use policies and training to prevent similar issues in the future. The court found that the attorney who authored the motion and his co-signing attorneys violated Rule 11(b) of the Federal Rules of Civil Procedure by failing to conduct a reasonable inquiry into their sources. The judge sanctioned the author $3,000, the co-signing attorneys $1,000 each, and revoked the author’s pro hac vice admission. While acknowledging AI’s potential benefits, the court warned of risks like AI hallucinations and reaffirmed that attorneys must verify citations. The firm warned its attorneys that using AI-generated false case information in court filings could lead to their termination.
February 21, 2025: An Indiana magistrate judge recommended a $15,000 sanction for a Texas attorney who cited three nonexistent AI-generated cases in court filings. The attorney admitted to using AI without verifying citations and later completed AI-related legal education. He said he had used AI for drafting agreements but was unaware it could generate fake cases. The court stressed that attorneys must verify AI-generated content and noted that the sanction was intentionally high because lighter penalties in similar cases had not been effective deterrents.
February 20, 2025: ChatGPT figured in a District of Columbia Court of Appeals opinion involving an animal cruelty conviction for leaving a dog in a hot car. The majority overturned the conviction, finding insufficient evidence that the dog suffered. In dissent, Judge Joshua Deahl used ChatGPT to confirm that leaving a dog in a hot car is widely considered harmful. A concurring opinion by Judge John P. Howard III warned about AI's risks, including security, privacy, reliability, and bias.
For a more comprehensive list of cases involving the misuse of generative AI tools in court, see the linked case tracker.