This guide introduces developments and tools in generative artificial intelligence (generative AI) as they relate to legal research, legal education, and legal practice. It is designed for a wide range of users, from law students just beginning to explore AI to more advanced users integrating these tools into their workflows.
Resources, examples, and tool comparisons will be added and updated regularly. We encourage law students and legal professionals to revisit this guide often to stay current on developments in the field.
Generative artificial intelligence refers to a category of AI that produces new content. While some AI systems are designed to retrieve or extract specific information from existing materials, generative AI tools create text, code, images, audio, or other forms of content by identifying and applying patterns learned during training. Understanding how these systems work is essential to using them responsibly in legal research and practice.
These tools don’t pull information from a single authoritative source. Instead, they respond to prompts by generating new material that resembles the kinds of texts or outputs they’ve seen before. This means that the responses they generate may be original, persuasive, and well-structured, but not necessarily accurate, complete, or up to date.
To make informed decisions about AI tools, it helps to understand the technology behind them. This section introduces terms that often come up in conversations about legal AI tools.
Most generative AI tools used in law today rely on Large Language Models (LLMs). These are systems trained on massive collections of text so they can generate language that sounds fluent and relevant.
At their core, LLMs are pattern recognition tools. They do not understand language or concepts as humans do. Instead, they detect statistical relationships between words and phrases during training. This allows them to generate new text that resembles the patterns in their training data.
The process can be summarized in four steps:
1. Training and Pattern Recognition
The model is trained on massive text datasets. It learns how words and phrases tend to appear together. For example, it may recognize that "liability" often occurs near "negligence" or "damages." The model doesn't understand these terms; it maps their associations across billions of examples.
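To make "association by co-occurrence" concrete, here is a minimal Python sketch. The toy corpus and the pair-counting approach are illustrative simplifications, not how production models are actually trained; real systems learn far richer statistical relationships from billions of examples, but the underlying idea is the same.

```python
from collections import Counter

# A toy "training corpus" standing in for billions of real examples.
corpus = [
    "the defendant's negligence caused damages",
    "liability for negligence requires proof of damages",
    "strict liability does not require negligence",
]

# Count how often each pair of adjacent words appears together.
pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for first, second in zip(words, words[1:]):
        pair_counts[(first, second)] += 1

# Pairs that co-occur frequently get higher counts -- the raw material
# for statistical association, with no understanding of the terms involved.
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```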
2. Tokenization
When a user enters a prompt, the model breaks it into tokens, small units such as whole words, subwords, punctuation, or spaces. For example, "negligence" might be split into "neglig" and "ence," often with a preceding space folded into the first token. This allows the model to work with the input in a standardized format.
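You can see tokenization directly with an open-source tokenizer. The sketch below uses the tiktoken library (one of several tokenizers; exact splits differ from model to model, so the pieces you see may not match the "neglig"/"ence" example above):

```python
import tiktoken  # OpenAI's open-source tokenizer library (pip install tiktoken)

# Load a tokenizer; cl100k_base is the encoding used by several OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "The defendant's negligence caused damages."
token_ids = enc.encode(text)

# Show how the sentence is carved into subword tokens.
for token_id in token_ids:
    print(token_id, repr(enc.decode([token_id])))
```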
3. Prediction
Using the input and its training data, the model predicts the most likely next token. It doesn’t search for correct answers; it calculates probabilities and selects the most likely continuation based on prior patterns.
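In miniature, that calculation looks like the following Python sketch. The candidate tokens and scores are invented for illustration; a real model computes scores over a vocabulary of tens of thousands of tokens.

```python
import math

# Hypothetical scores ("logits") a model might assign to candidate
# next tokens after the prompt "The defendant was found ..."
logits = {"liable": 3.1, "negligent": 2.4, "guilty": 1.9, "happy": -1.0}

# Softmax turns raw scores into a probability distribution.
total = sum(math.exp(score) for score in logits.values())
probs = {token: math.exp(score) / total for token, score in logits.items()}

# The model samples from (or takes the most likely of) these probabilities;
# it never checks whether the continuation is legally correct.
for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{token}: {p:.2%}")
```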
4. Response Generation
The model repeats this prediction process, one token at a time, until it completes a response. The final output is a sequence of probabilistic guesses, assembled in real time.
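The loop itself can be sketched in a few lines of Python. Everything here (the toy transition table and its weights) is invented to show the mechanism; a real model conditions each prediction on the entire preceding text, not just the last token.

```python
import random

# A toy next-token table standing in for a trained model's predictions:
# each entry maps the current token to possible continuations and weights.
toy_model = {
    "<start>": [("The", 1.0)],
    "The": [("court", 0.6), ("defendant", 0.4)],
    "court": [("held", 0.7), ("found", 0.3)],
    "defendant": [("was", 1.0)],
    "held": [("<end>", 1.0)],
    "found": [("<end>", 1.0)],
    "was": [("liable", 0.8), ("negligent", 0.2)],
    "liable": [("<end>", 1.0)],
    "negligent": [("<end>", 1.0)],
}

token = "<start>"
output = []
while token != "<end>":
    choices, weights = zip(*toy_model[token])
    token = random.choices(choices, weights=weights)[0]  # pick the next token
    if token != "<end>":
        output.append(token)

print(" ".join(output))  # e.g. "The court held" -- assembled guess by guess
```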
LLM outputs may sound fluent and persuasive, but they’re not based on legal reasoning, factual verification, or expertise. The model generates plausible-sounding responses, not authoritative or reliable ones. Even models advertised as having “reasoning” capabilities still follow statistical patterns. They are not performing legal analysis or exercising judgment. That said, LLMs can be useful for tasks like drafting, summarizing, or idea generation. But they are not a substitute for careful legal research or professional judgment. Understanding this distinction is fundamental to using AI ethically and effectively in law school and legal practice.
Generative AI is one of several types of artificial intelligence used in legal tools. Understanding how it differs from other AI systems can help you choose the right tool and recognize its limits for a given task.
| AI Type | What It Does | Common Legal Use Cases | Strengths | Limitations/Cautions |
|---|---|---|---|---|
| Extractive AI | Finds and retrieves existing information from a dataset or document | Retrieving case summaries; extracting clauses from contracts; pulling statutes or citations | Fast and focused; based on real, existing content | Limited to available data; no generation or rephrasing; quality depends on source |
| Generative AI | Creates new content based on patterns learned from massive training datasets | Drafting memos, outlines, or summaries; rephrasing legal information; brainstorming | Can accelerate processes; adapts to different tones or styles | May fabricate sources or details ("hallucinate"); no fact-checking |
| Agentic AI | Takes initiative to plan, decide, and act toward goals with minimal user input | Automating entire workflows; monitoring litigation dockets | Automates multi-step processes; can save significant time | Can make autonomous errors; requires strong guardrails and human oversight |
Generative AI is already influencing how legal work gets done, from drafting documents to researching unfamiliar areas of law. Major law firms, courts, and bar associations across the U.S. have issued guidance addressing its use, signaling the profession’s growing recognition of both the potential and the risks. In August 2024, the ABA Task Force on Law and Artificial Intelligence released a report examining how AI is reshaping legal work. The report addresses legal ethics and professional responsibility; court systems and procedural fairness; access to justice and systemic equity; legal education and workforce development; and risk management and AI governance.
These tools can assist with tasks such as drafting, summarizing, and idea generation. But they also raise significant concerns, among them fabricated sources and other accuracy problems, as well as questions of legal ethics and professional responsibility.
These considerations and others are discussed in greater detail in the section Risks and Ethical Considerations in Using Gen AI. Before using generative AI in a legal setting, consider the following questions to help you identify potential ethical, practical, and professional concerns: