Generative AI Tools and Resources for Law Students


Risks and Limitations of Generative AI Tools

Since ChatGPT's debut in November 2022, the practical risks of using generative AI have become widely recognized. The integration of generative AI into legal research practice raises significant ethical considerations for law students and attorneys:

  • Hallucinations: Generative AI tools can produce false but plausible outputs. For example, an AI might fabricate case citations, misrepresent the law, or invent legal doctrines. These errors can be subtle and convincing, making human verification non-negotiable. Even legal-specific tools that use retrieval-augmented generation (RAG) are not entirely hallucination-free.
     
  • Bias: Large language models (LLMs) may reproduce and amplify structural biases present in their training data, including those based on race, gender, socioeconomic status, disability, and other intersecting identities. Scholars warn that uncritical use of these tools risks reinforcing systemic inequities and obscuring how compounded forms of marginalization are reflected in legal data and decisions. Additionally, some LLMs are fine-tuned to align with user preferences or corporate objectives, which can subtly influence the tone, framing, or direction of responses. For example, OpenAI acknowledged that GPT-4o's overuse of flattery was a byproduct of training that overweighted short-term user feedback rather than factual accuracy. Similarly, Meta's internal guidelines showed that its AI systems were trained to be engaging while avoiding "sensitive topics," potentially introducing persuasive bias in how information is presented.
     
  • Knowledge Cutoff: Many AI models are trained on static datasets and have a fixed knowledge cutoff date. For example, GPT-4o's training data cutoff is June 2024. As a result, these models may not reflect events or developments that occurred afterward. Some tools now offer real-time web access to partially address this limitation; however, the reliability of information retrieved from the web varies, and users should evaluate results critically.
     
  • Privacy Risks: Entering sensitive or confidential information into AI tools, especially those hosted in the cloud or operated by third parties, raises significant data privacy concerns. These tools may log or store user inputs, creating risks that proprietary, personal, or legally protected data could be accessed, misused, or exposed. Even when tools claim not to retain prompts, their data handling practices are often opaque and vary widely across providers. In legal, academic, or professional contexts, this may violate confidentiality obligations or institutional privacy policies. For example, inputting client information into a public AI system may breach attorney-client confidentiality under ABA Model Rule 1.6. In July 2024, the American Bar Association issued Formal Opinion 512, advising lawyers to understand how AI tools collect, store, and use data before relying on them in practice. Users should review privacy policies carefully and avoid sharing sensitive information unless the tool is explicitly designed for secure, private use.
     
  • Intellectual Property: Many AI models are trained on large datasets that may include copyrighted content, raising questions about both the legality of training data use and the ownership of AI-generated outputs. Ongoing lawsuits, such as Thomson Reuters v. Ross Intelligence, focus on whether training on proprietary legal materials constitutes infringement. (Thomson Reuters is the parent company of Westlaw.) In Thaler v. Perlmutter, the U.S. Court of Appeals for the D.C. Circuit affirmed that works generated solely by AI are not eligible for copyright protection without meaningful human authorship; however, the plaintiff has since petitioned for an en banc rehearing, and the case remains pending. The U.S. Copyright Office is actively developing policy guidance in this area.

Understanding these risks is necessary to ensure compliance with professional responsibility standards. Law students and attorneys must develop not only technical familiarity with AI tools but also the judgment and discipline to use them ethically and effectively.

Professional Responsibility Guidelines and Court Rules

ABA Resolution 112 urges lawyers to address ethical issues related to AI, including bias, explainability, and transparency.

ABA Resolution 604 emphasizes the need for organizations to follow guidelines when designing, developing, and deploying AI systems.

The ABA Standing Committee on Ethics and Professional Responsibility's Formal Opinion 512 emphasizes that lawyers must consider their ethical duties when using AI, including competent representation, client confidentiality, communication, supervision, advancing meritorious claims, candor to the court, and charging reasonable fees.

The California State Bar approved the Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law on November 16, 2023. This guidance is designed to assist lawyers in navigating their ethical obligations when using generative AI and is intended to be a living document, updated periodically as the technology evolves and new issues arise.

Generative AI in Court: Cases and Consequences

As generative AI becomes more common in legal work, courts are beginning to clarify expectations around its use. While high-profile cases involving fake citations have made headlines, judges are not rejecting AI outright. Instead, they are emphasizing the continued importance of due diligence and professional responsibility. These recent cases show that lawyers, whether assisted by AI or not, are still expected to verify their sources, understand the material, and take full responsibility for what they submit to the court.

  • May 21, 2025: In Walters v. OpenAI, the first known defamation case involving generative AI, a Georgia state court granted summary judgment in favor of defendant OpenAI. Radio host Mark Walters alleged that ChatGPT falsely stated he was being sued for defrauding and embezzling funds from a gun rights advocacy organization. The Gwinnett County Superior Court found that ChatGPT's statements did not meet the legal standard for defamation, emphasizing that Walters suffered no damages and did not seek a correction or retraction. The court also found that a reasonable person would not interpret ChatGPT's output as "actual facts" given that the chatbot provided a warning about its limitations, and that Walters had not established OpenAI acted negligently or with actual malice. The court denied Walters's request for punitive damages on First Amendment grounds.
     
  • May 13, 2025: The U.S. District Court for the Eastern District of New York sanctioned an attorney and her law firm $1,000 for citing four non-existent cases in a court filing. The attorney had relied on a paralegal who used AI-based research tools to generate the citations, which the attorney did not verify before filing. The court found that the attorney violated Rule 11 by failing to conduct a reasonable inquiry and by submitting citations that were not warranted by existing law. It also found subjective bad faith, noting the attorney made no independent effort to confirm the citations. The court emphasized that fake citations are sanctionable and increasingly common due to AI misuse. While the attorney accepted responsibility and implemented new verification protocols, the court held her firm jointly liable and ordered that her client be notified of the violation. 
     
  • May 13, 2025: In Concord Music Group Inc. v. Anthropic PBC, plaintiffs asked a federal district court judge to strike an expert declaration and sanction Anthropic's attorneys after discovering that the declaration, submitted by an Anthropic data scientist, cited a nonexistent journal article. (Anthropic is the company that developed Claude, a family of generative AI models and chatbots.) The cited study, allegedly published in The American Statistician, does not exist. The court called the issue "serious and grave," ordered preservation of all related materials, and requested Anthropic's written response before deciding whether to strike the declaration or allow sanctions. In a May 23 order, the court called the error a "plain and simple AI hallucination," questioned how such an error escaped a manual citation check, and noted that lead counsel failed to file the required certification confirming the accuracy of the filing. The court struck the paragraph containing the error and stated that the incident undermined the overall credibility of the expert's declaration.
     
  • May 12, 2025: The Intermediate Court of Appeals of Hawai'i sanctioned an attorney $100 for citing a nonexistent case in a filing. The court found that a motion to dismiss had relied on a fabricated citation to a decision that does not exist in Hawai'i or any reported jurisdiction. The attorney who signed the motion attributed the error to a per diem attorney he had hired and admitted he failed to verify the citation. Although he stated he did not personally use AI, he acknowledged his failure to detect the fabricated source before filing. The court imposed a monetary sanction and emphasized that citing fake opinions violates Hawai'i's Rule 11. The court referenced rising concerns over AI-generated legal error and noted that federal courts have imposed more severe penalties in similar circumstances. 
     
  • May 7, 2025: The UK High Court sanctioned a barrister and solicitors for submitting pleadings containing five fake case citations. The court found their conduct "improper, unreasonable, [and] negligent," rejecting the barrister's explanation that the citations came from a personal list of cases. Although the court could not verify whether AI was involved, it emphasized that submitting unverified material, regardless of its source, constitutes professional misconduct. The court imposed a wasted costs order, assigned monetary penalties, and reduced recoverable fees. The judge ordered the decision sent to professional regulators and described the conduct as "appalling professional misbehavior" that misled the court and undermined the integrity of the legal profession.

For more cases involving the use of generative AI in court, click here.