
Generative AI Tools and Resources for Law Students


Guide Overview

This guide introduces developments and tools in generative artificial intelligence (generative AI) as they relate to legal research, legal education, and legal practice. It is designed for a wide range of users, from law students just beginning to explore AI to more advanced users integrating these tools into their workflows.

Resources, examples, and tool comparisons will be added and updated regularly. We encourage law students and legal professionals to revisit this guide often to stay current on developments in the field.

What Is Generative AI?

Generative artificial intelligence refers to a category of AI that produces new content. While some AI systems are designed to retrieve or extract specific information from existing materials, generative AI tools create text, code, images, audio, or other forms of content by identifying and applying patterns learned during training. Understanding how these systems work is essential to using them responsibly in legal research and practice.

These tools don’t pull information from a single authoritative source. Instead, they respond to prompts by generating new material that resembles the kinds of texts or outputs they’ve seen before. This means that the responses they generate may be original, persuasive, and well-structured, but not necessarily accurate, complete, or up to date.

AI Terminology and Technical Foundations

To make informed decisions about AI tools, it helps to understand the technology behind them. This section introduces terms that often come up in conversations about legal AI tools.

Large Language Models (LLMs)

Most generative AI tools used in law today rely on Large Language Models (LLMs). These are systems trained on massive collections of text so they can generate language that sounds fluent and relevant.

At their core, LLMs are pattern recognition tools. They do not understand language or concepts as humans do. Instead, they detect statistical relationships between words and phrases during training. This allows them to generate new text that resembles the patterns in their training data.

How Do LLMs Work?

The process can be summarized in four steps:

1. Training and Pattern Recognition

The model is trained on massive text datasets. It learns how words and phrases tend to appear together. For example, it may recognize that "liability" often occurs near "negligence" or "damages." The model doesn't understand these terms; it maps their associations across billions of examples.

2. Tokenization

When a user enters a prompt, the model breaks it into tokens: small units such as whole words, subwords, punctuation, or spaces. For example, "negligence" might be split into "neglig," "ence," and a space. This allows the model to work with the input in a standardized format.
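The splitting step can be illustrated with a toy tokenizer. The vocabulary below is invented for this example; real models learn their subword vocabularies from data (for instance, via byte-pair encoding), so actual splits will differ from tool to tool.

```python
# Toy greedy subword tokenizer. The vocabulary is hypothetical and
# exists only to illustrate how text becomes a sequence of tokens.
VOCAB = {"neglig", "ence", "liab", "ility", " "}

def tokenize(text: str) -> list[str]:
    """At each position, match the longest known piece;
    fall back to a single character if nothing matches."""
    tokens = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):  # try longest match first
            if text[i:j] in VOCAB:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("negligence "))  # ['neglig', 'ence', ' ']
```

The key point for researchers is that the model never "sees" the word "negligence" as a legal concept; it sees a sequence of learned fragments.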

3. Prediction

Using the input and its training data, the model predicts the most likely next token. It doesn’t search for correct answers; it calculates probabilities and selects the most likely continuation based on prior patterns.

4. Response Generation

The model repeats this prediction process, one token at a time, until it completes a response. The final output is a sequence of probabilistic guesses, assembled in real time.
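Steps 3 and 4 together can be sketched as a simple loop. The probability table below is made up for illustration; a real LLM computes these probabilities with a neural network over the entire preceding context, not a fixed lookup table.

```python
# Toy next-token "model": for each preceding token, a hypothetical
# probability distribution over possible continuations.
NEXT = {
    "<start>": {"duty": 0.6, "breach": 0.4},
    "duty":    {"of": 0.9, "<end>": 0.1},
    "of":      {"care": 0.8, "loyalty": 0.2},
    "care":    {"<end>": 1.0},
    "breach":  {"<end>": 1.0},
    "loyalty": {"<end>": 1.0},
}

def generate(start: str = "<start>") -> list[str]:
    """Repeat the prediction step one token at a time, always taking
    the most probable continuation (so-called greedy decoding)."""
    token, output = start, []
    while True:
        probs = NEXT[token]
        token = max(probs, key=probs.get)  # pick the most likely next token
        if token == "<end>":
            return output
        output.append(token)

print(generate())  # ['duty', 'of', 'care']
```

Notice that nothing in the loop checks whether "duty of care" is legally correct for the user's question; the output is simply the highest-probability continuation, which is why fluency is no guarantee of accuracy.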

What Does This Mean for Legal Research?

LLM outputs may sound fluent and persuasive, but they're not based on legal reasoning, factual verification, or expertise. The model generates plausible-sounding responses, not authoritative or reliable ones. Even models advertised as having "reasoning" capabilities still follow statistical patterns. They are not performing legal analysis or exercising judgment.

That said, LLMs can be useful for tasks like drafting, summarizing, or idea generation. But they are not a substitute for careful legal research or professional judgment. Understanding this distinction is fundamental to using AI ethically and effectively in law school and legal practice.

Understanding Different Types of AI in Legal Tech

Generative AI is one of several types of artificial intelligence used in legal tools. Understanding how it differs from other AI systems can help you choose the right tool and recognize its limits for a given task. 

Types of AI in Legal Technology
| AI Type | What It Does | Common Legal Use Cases | Strengths | Limitations/Cautions |
|---|---|---|---|---|
| Extractive AI | Finds and retrieves existing information from a dataset or document | Retrieving case summaries; extracting clauses from contracts; pulling statutes or citations | Fast and focused; based on real, existing content | Limited to available data; no generation or rephrasing; quality depends on source |
| Generative AI | Creates new content based on learned patterns from massive training datasets | Drafting memos, outlines, or summaries; rephrasing legal information; brainstorming | Can accelerate processes; adapts to different tones or styles | May fabricate sources or details ("hallucinate"); no fact-checking |
| Agentic AI | Takes initiative to plan, decide, and act toward goals with minimal user input | Automating entire workflows; monitoring litigation dockets | Automates multi-step processes; can save significant time | Can make autonomous errors; requires strong guardrails and human oversight |

Impact of Generative AI on Legal Practice

Generative AI is already influencing how legal work gets done, from drafting documents to researching unfamiliar areas of law. Major law firms, courts, and bar associations across the U.S. have issued guidance addressing its use, signaling the profession's growing recognition of both the potential and the risks. In August 2024, the ABA Task Force on Law and Artificial Intelligence released a report examining how AI is reshaping legal work. The report addresses legal ethics and professional responsibility; court systems and procedural fairness; access to justice and systemic equity; legal education and workforce development; and risk management and AI governance.

These tools can assist with:

  • Legal research (e.g., generating preliminary overviews or helping locate relevant authority),
  • Drafting (e.g., creating rough outlines of memos, contracts, or client communications), and
  • Analysis (e.g., reviewing contracts or summarizing long documents).

But they also raise significant concerns. Among them:

  • Confidentiality: Is the data entered into the tool stored or shared?
  • Accuracy: Are the outputs factually and legally correct?
  • Ethics and oversight: Is the tool’s use consistent with professional obligations?
  • Unauthorized practice: Are non-lawyers using AI-generated content in ways that cross legal boundaries?

These considerations and others are discussed in greater detail in the section Risks and Ethical Considerations in Using Gen AI. Before using generative AI in a legal setting, consider the following questions to help you identify potential ethical, practical, and professional concerns:

  • What sources does the tool use? Does it draw from reliable legal materials, or is it trained on general web content? Has it been fine-tuned for legal use?
  • Does it cite its sources? Can you verify the legal authority it references, or does it “hallucinate” citations that don’t exist?
  • What is the tool designed for? Is it built for legal research, legal drafting, or general writing? A tool made for creative writing may not be appropriate for legal analysis.
  • When in the workflow should you use it? Is it best for brainstorming, early drafting, or summarizing? Use it to support, not replace, legal reasoning and validation.
  • Are there limits or restrictions on its use? Your school, employer, or jurisdiction may have policies that govern acceptable use of AI tools.
  • Is your input secure and confidential? Check the platform’s privacy policy. Some tools store prompts, user data, or outputs, which may pose risks when handling sensitive information.