Courts, judges, and state bar associations are actively developing ethics rules, opinions, local rules, and guidelines on the use of AI technology in legal practice. Several resources are available that track these developments across jurisdictions.
Generative Artificial Intelligence (AI) Federal and State Court Rules Tracker (Lexis+): This tracker covers the individual civil rules and standing orders implemented by certain federal and state court judges, court administrations, and bar associations governing the use of generative AI in court filings.
Use of Artificial Intelligence in the Practice of Law (Westlaw): A 50-state survey covering enacted laws and rule changes relating to the use of AI tools by attorneys or parties in any legal matter.
Judicial Standing Orders on Artificial Intelligence Tracker (Bloomberg Law): This tracker includes federal court judicial standing orders and guidance related to the use of artificial intelligence tools in litigation court filings.
State Information on AI (National Center for State Courts): Information on state activities related to artificial intelligence and documentation to assist courts in developing policies and procedures for using generative AI.
Judicial Standing Orders: AI to Z (on-demand webinar by Bloomberg Law): Insights on current legal practices regulated by judicial standing orders, as well as future practices likely to be shaped by the technology.
Generative AI tools are now common in law schools and legal practice. Although they can streamline some tasks, their misuse poses significant risks to accuracy, confidentiality, and professional responsibility. To use these tools responsibly, law students, legal researchers, and attorneys must understand their limitations and remain accountable for their output. This section explains the primary concerns associated with generative AI in legal work, including hallucinations, bias, outdated knowledge, privacy issues, intellectual property conflicts, and emerging professional standards.
Understanding these risks is necessary to ensure compliance with professional responsibility standards. Law students and attorneys must develop not only technical familiarity with AI tools but also the judgment and discipline to use them ethically and effectively.
ABA Resolution 112 urges lawyers to address ethical issues related to AI, including bias, explainability, and transparency.
ABA Resolution 604 emphasizes the need for organizations to follow guidelines when designing, developing, and deploying AI systems.
The ABA Ethics Committee's Formal Opinion 512 emphasizes that lawyers must consider their ethical duties when using AI, including competent representation, client confidentiality, communication, supervision, advancing meritorious claims, candor to the court, and charging reasonable fees.
The California State Bar approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law on November 16, 2023. This guidance is designed to assist lawyers in navigating their ethical obligations when using generative AI and is intended to be a living document, updated periodically as the technology evolves and new issues arise.
On May 2, 2025, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules approved proposed Rule 707: Machine-Generated Evidence, which would apply the same reliability standards used for expert witnesses under Rule 702 to AI-generated evidence submitted without a human expert. The goal is to guard against unreliable or unauthenticated machine outputs in litigation. The Committee emphasized that the rule is not meant to encourage substituting AI for live expert testimony. The Department of Justice dissented, arguing that Rule 702 already covers such material. The Standing Committee will review proposed Rule 707 and is expected to approve it for public release and comment.
Courts are responding to the increasing use of generative AI in legal practice. Typically, judges are not rejecting the use of AI outright, but they continue to stress that lawyers must verify citations, understand the content they submit to the court, and comply with rules of professional responsibility, regardless of whether AI was involved. Recent cases show that reliance on AI does not excuse errors or misconduct.
For examples of generative AI use and misuse in court, consult the case trackers below: