Generative AI Tools and Resources for Law Students

Risks and Limitations of Generative AI Tools

Generative AI tools are now common in law schools and legal practice. Although they can streamline some tasks, their misuse poses significant risks to accuracy, confidentiality, and professional responsibility. To use these tools responsibly, law students, legal researchers, and attorneys must understand their limitations and remain accountable for their output. This section explains the primary concerns associated with generative AI in legal work, including hallucinations, bias, outdated knowledge, privacy issues, intellectual property conflicts, and emerging professional standards.

  • Hallucinations: AI tools often generate information that looks accurate but is completely false. These errors can be subtle and convincing. For example, an AI might "hallucinate" by fabricating case citations, misrepresenting the law, or inventing legal doctrines. Even legal-specific tools that use retrieval-augmented generation (RAG) are not entirely hallucination-free.
     
  • Bias: Large language models reproduce and amplify structural biases present in their training data, including those based on race, gender, socioeconomic status, disability, and other intersecting identities. Scholars warn that uncritical use of these tools risks reinforcing systemic inequities and obscuring how compounded forms of marginalization are reflected in legal data and decisions. Additionally, some LLMs are fine-tuned to align with user preferences or corporate objectives, which can subtly influence the tone, framing, or direction of responses. For example, OpenAI acknowledged that GPT-4o's overuse of flattery was a byproduct of user feedback reinforcement rather than factual accuracy. Similarly, Meta's internal guidelines showed that its AI systems were trained to be engaging while avoiding "sensitive topics," potentially introducing persuasive bias in how information is presented. 
     
  • Knowledge Cutoff: Most generative AI tools were trained on data available only up to a certain date. For example, GPT-4o has a knowledge cutoff date of June 2024. AI tools without live web access cannot recognize new statutes, cases, or legal developments that occurred after that date. Even AI tools with real-time search features can pull outdated or low-quality sources. Always double-check the date and credibility of any source cited by AI. If you're researching a recent court decision or new legislation, use trusted legal databases instead.
     
  • Privacy and Confidentiality Risks: AI tools often store user input. Entering sensitive or confidential data into a public AI tool creates risks of unauthorized access, misuse, and exposure of proprietary, personal, or legally protected information. Data handling practices are often opaque and vary widely across providers. In legal, academic, or professional contexts, this may violate confidentiality obligations or institutional privacy policies. For example, inputting client information into a public AI system may breach attorney-client confidentiality under ABA Model Rule 1.6. In July 2024, the American Bar Association issued Formal Opinion 512, advising lawyers not to use generative AI tools for confidential matters unless they understand how the tool handles and stores data. Law students working with client information must exercise the same caution. Do not paste case details, client data, or other confidential documents into a public AI chatbot.
     
  • Intellectual Property: Many AI models are trained on large datasets that may include copyrighted content, raising questions about both the legality of training data use and the ownership of AI-generated outputs. Ongoing lawsuits, such as Thomson Reuters v. Ross Intelligence, focus on whether training on proprietary legal materials constitutes infringement. (Thomson Reuters is the parent company of Westlaw.) In Thaler v. Perlmutter, the D.C. Circuit Court of Appeals affirmed that works generated solely by AI are not eligible for copyright protection without meaningful human authorship; however, the plaintiff has since petitioned for an en banc rehearing, and the case remains pending. The U.S. Copyright Office is actively developing policy guidance in this area.

Understanding these risks is necessary to ensure compliance with professional responsibility standards. Law students and attorneys must develop not only technical familiarity with AI tools but also the judgment and discipline to use them ethically and effectively.

Professional Responsibility Guidelines and Court Rules

ABA Resolution 112 urges lawyers to address ethical issues related to AI, including bias, explainability, and transparency.

ABA Resolution 604 emphasizes the need for organizations to follow guidelines when designing, developing, and deploying AI systems.

The ABA Ethics Committee's Formal Opinion 512 emphasizes that lawyers must consider their ethical duties when using AI, including competent representation, client confidentiality, communication, supervision, advancing meritorious claims, candor to the court, and charging reasonable fees.

The California State Bar approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law on November 16, 2023. This guidance is designed to assist lawyers in navigating their ethical obligations when using generative AI and is intended to be a living document, updated periodically as the technology evolves and new issues arise.

On May 2, 2025, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules approved proposed Rule 707: Machine-Generated Evidence, which would apply the same reliability standards used for expert witnesses under Rule 702 to AI-generated evidence submitted without a human expert. The goal is to guard against unreliable or unauthenticated machine outputs in litigation. The Committee emphasized that the rule is not meant to encourage substituting AI for live expert testimony. The Department of Justice dissented, arguing that Rule 702 already covers such material. The Standing Committee will review proposed Rule 707 and is expected to approve it for public release and comment.

Generative AI in Court: Cases and Consequences

Courts are responding to the increasing use of generative AI in legal practice. Typically, judges are not rejecting the use of AI outright, but they continue to stress that lawyers must verify citations, understand the content they submit to the court, and comply with the rules of professional responsibility, regardless of whether AI was involved. Recent cases show that reliance on AI does not excuse errors or misconduct.

For examples of generative AI use and misuse in court, consult the case trackers below: