
Generative AI Tools and Resources for Law Students


Risks and Limitations of Generative AI Tools

Since ChatGPT's debut in November 2022, the practical risks of using generative AI have become widely recognized. These risks include lack of transparency, biased training data, generation of false information (known as "hallucinations"), and privacy concerns. The integration of generative AI into legal practice raises significant ethical considerations for law students and attorneys:

  • Hallucinations: Generative AI tools can produce "hallucinations," false but plausible-seeming outputs. For example, a tool might fabricate case citations, misrepresent older versions of statutes as current law, or invent non-existent legal principles. These errors can be subtle and convincing, making human verification essential. No AI model, even one designed specifically for legal research, is completely hallucination-free.
  • Data bias: AI models can reflect and amplify biases present in their training data. For instance, an AI trained on past cases with racial bias might perpetuate this bias in its recommendations or analysis.
  • Knowledge cutoff: Each AI tool has access to information only up to a specific date. When using AI for research, it's essential to identify this cutoff date and supplement the AI outputs with the most up-to-date sources.
  • Privacy risks: Using AI tools in legal practice may inadvertently expose sensitive client information or confidential legal data. This could potentially violate attorney-client privilege and ethical obligations. Lawyers must be cautious about what information they input into AI systems.
  • Intellectual property concerns: Many AI tools are trained on vast amounts of data that may include copyrighted materials, so using these tools could lead to unintentional infringement. Who owns AI-generated content, and whether it can be copyrighted, also remain unsettled questions that vary by jurisdiction.

Understanding these issues is crucial for ensuring compliance with ethical standards. Legal scholars emphasize the importance of law students and practitioners developing the skills and knowledge to use these tools responsibly and effectively.

Professional Responsibility Guidelines and Court Rules

ABA Resolution 112 urges lawyers to address ethical issues related to AI, including bias, explainability, and transparency.

ABA Resolution 604 emphasizes the need for organizations to follow guidelines when designing, developing, and deploying AI systems.

The ABA Ethics Committee's Formal Opinion 512 emphasizes that lawyers must consider their ethical duties when using AI, including competent representation, client confidentiality, communication, supervision, advancing meritorious claims, candor to the court, and charging reasonable fees.

The California State Bar approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law on November 16, 2023. This guidance is designed to assist lawyers in navigating their ethical obligations when using generative AI and is intended to be a living document, updated periodically as the technology evolves and new issues arise.

Generative AI in Court: Cases and Consequences

Stories about lawyers misusing generative AI spread quickly, but courts are not rejecting AI use outright. Instead, judges are setting expectations for accuracy, disclosure, and professional responsibility.

Below are recent cases where courts have addressed generative AI use. These cases highlight an essential point: lawyering still requires human expertise. Verification is critical, and courts may require lawyers to disclose AI-generated content.

  • March 26, 2025: A New York appeals court reprimanded a pro se litigant for using an AI-generated avatar in a video argument without disclosing it. The litigant, who had permission to submit a video presentation because of a speech-related disability, failed to inform the court that the speaker in the video was an AI avatar. Justice Sallie Manzanet-Daniels criticized him for misleading the court, stopped the video, and allowed him five minutes to present his argument in person. The litigant later apologized, explaining that he had initially planned to use a digital version of himself but faced technical issues.

  • February 27, 2025: A federal district court in Pennsylvania sanctioned an attorney for citing non-existent cases generated by ChatGPT in two motions. The court found that the attorney "outsourced his job to an algorithm" without verifying the AI-generated citations, which included not only fabricated cases but also overruled and inapposite cases. The judge found that the attorney violated Rule 11(b)(2) of the Federal Rules of Civil Procedure, imposed a $2,500 fine, and ordered the attorney to complete a CLE program on AI and legal ethics. The judge emphasized that while technology evolves, an attorney's duty to "Stop, Think, Investigate and Research" before filing papers remains unchanged.

  • February 24, 2025: A federal district court in Wyoming sanctioned three attorneys for citing nonexistent cases in a motion in limine. The court could not find eight of the nine cited cases; the attorneys initially responded that some of the cases "can be found on ChatGPT." The motion’s author used MX2.law, an AI tool developed by his firm, Morgan & Morgan, to generate case law without verifying its accuracy. After the court issued a show cause order, the firm withdrew the motion, covered opposing counsel’s fees for defending the motion, and implemented AI-use policies and training to prevent similar issues in the future. The court found that the attorney who authored the motion and his co-signing attorneys violated Rule 11(b) of the Federal Rules of Civil Procedure by failing to conduct a reasonable inquiry into their sources. The judge sanctioned the author $3,000, the co-signing attorneys $1,000 each, and revoked the author’s pro hac vice admission. While acknowledging AI’s potential benefits, the court warned of risks like AI hallucinations and reaffirmed that attorneys must verify citations. The firm warned its attorneys that using AI-generated false case information in court filings could lead to their termination. 

  • February 21, 2025: An Indiana magistrate judge recommended a $15,000 sanction for a Texas attorney who cited three non-existent AI-generated cases in court filings. The attorney admitted to using AI without verifying the citations and later completed AI-related legal education. He claimed he had used AI only for drafting agreements and was unaware it could generate fake cases. The court stressed that attorneys must verify AI-generated content and noted that the sanction was intentionally high because lighter penalties in similar cases had not been effective deterrents.

  • February 20, 2025: ChatGPT figured in a District of Columbia Court of Appeals opinion involving an animal cruelty conviction for leaving a dog in a hot car. The majority overturned the conviction, finding insufficient evidence that the dog suffered. In dissent, Judge Joshua Deahl used ChatGPT to confirm that leaving a dog in a hot car is widely considered harmful. A concurring opinion by Judge John P. Howard III warned about AI's risks, including security, privacy, reliability, and bias.

For a more comprehensive list of cases involving the misuse of generative AI tools in court, click here.