Tool Taxonomy
Legal AI tools can be classified along two independent dimensions: scope (what range of tasks the tool addresses) and domain (whether the tool was designed for legal work specifically). Crossing the two dimensions yields four categories, each with a distinct risk profile.
Scope asks: Can this tool do many different kinds of things, or was it built to do one thing well?
Domain asks: Was this tool designed, trained, or specifically configured for legal work?
The classification turns on how the tool was built, not how a user chooses to use it.
TAG (Task-Agnostic General): broad AI, not designed for law
TAL (Task-Agnostic Legal): legal AI, many tasks
TSG (Task-Specific General): one function, not legal-specific
TSL (Task-Specific Legal): one specific legal task
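For readers who find pseudocode clearer than prose, here is a minimal sketch of the classification logic, assuming illustrative Scope, Domain, and Tool types of our own; the example tools and the attributes assigned to them are hypothetical, not vendor metadata.

from dataclasses import dataclass
from enum import Enum


class Scope(Enum):
    TASK_AGNOSTIC = "task-agnostic"  # can do many kinds of things
    TASK_SPECIFIC = "task-specific"  # built to do one thing well


class Domain(Enum):
    GENERAL = "general"  # not designed for any particular profession
    LEGAL = "legal"      # designed, trained, or configured for legal work


@dataclass
class Tool:
    name: str
    scope: Scope
    domain: Domain


# (scope, domain) -> taxonomy category
CATEGORIES = {
    (Scope.TASK_AGNOSTIC, Domain.GENERAL): "TAG",
    (Scope.TASK_AGNOSTIC, Domain.LEGAL): "TAL",
    (Scope.TASK_SPECIFIC, Domain.GENERAL): "TSG",
    (Scope.TASK_SPECIFIC, Domain.LEGAL): "TSL",
}


def classify(tool: Tool) -> str:
    # Classification turns on how the tool was built, not how it is used.
    return CATEGORIES[(tool.scope, tool.domain)]


# Illustrative attributes only; a general chatbot steered toward legal work is still TAG.
print(classify(Tool("general-purpose chatbot", Scope.TASK_AGNOSTIC, Domain.GENERAL)))   # TAG
print(classify(Tool("meeting transcription service", Scope.TASK_SPECIFIC, Domain.GENERAL)))  # TSG

The point of the sketch is the design choice it encodes: a tool's category follows from how it was built (its scope and domain), not from the prompts a user happens to give it.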
TAG Task-Agnostic General
Broad-capability AI tools not designed for any particular professional domain. These tools can generate text, analyze documents, write code, summarize content, and perform many other tasks across all subjects.
Examples: ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity
License Tiers and Why They Matter
The same TAG tool can present very different risk profiles depending on its license tier, because the terms of service — particularly around data handling, training, and confidentiality — vary dramatically.
Free tier: Highest confidentiality risk. Provider often retains broad rights to use inputs for model training. Generally unsuitable for any client-related information.
Paid individual tier: Reduced but not eliminated risk. Typically offers training opt-out. Terms vary significantly across providers and change frequently.
Enterprise / Education tier: Lowest confidentiality risk among TAG tools. Typically includes contractual data protection commitments (zero-data-retention, no training on inputs). The protective terms exist because the institution negotiated them — verify what your specific agreement covers.
These tiers are generalizations. The specific terms of service for any given tool at any given time govern.
TAL Task-Agnostic Legal
AI tools designed for the legal domain but not limited to a single task; they handle a range of legal work, including research, drafting, summarization, and analysis. They are built on legal-specific data infrastructure (case law databases, statutory repositories), with features like citation verification, jurisdiction filtering, and retrieval-augmented generation from authoritative legal sources.
Examples: Lexis+ AI, Westlaw CoCounsel, vLex Vincent AI, Casetext
Key advantages over TAG tools: Stronger citation grounding, legal-specific retrieval pipelines, and institutional data-handling terms designed for law firm and legal department use. Not immune to hallucination, but architecture reduces certain categories of risk.
TSG Task-Specific General
AI tools built to perform one specific function well, without being designed for any particular professional domain. Lawyers use them, but they were not built with legal practice in mind.
Examples: Grammarly (writing assistance), Otter.ai (transcription), Adobe Acrobat AI (PDF summarization), Calendly AI (scheduling), Zoom AI Companion (meeting summaries)
Why this category matters: Lawyers use these tools constantly, often without thinking of them as "AI tools" subject to the same scrutiny as a chatbot. But a meeting transcription tool that processes attorney-client conversations raises confidentiality concerns just as serious as pasting those conversations into ChatGPT. The risks differ in kind (these tools typically do not generate legal analysis, so hallucination is less relevant) but not necessarily in severity.
TSL Task-Specific Legal
AI tools built to perform a single legal task. These are the most narrowly scoped tools in the taxonomy, designed for one function within legal practice and built on legal-specific training, data, or workflows.
Examples: Spellbook (contract drafting and review), EvenUp (demand letter generation), Clearbrief (brief-writing with automated citation checking), Luminance (due diligence document review)
Key consideration: TSL tools tend to operate within constrained parameters, which can reduce certain risks. However, their narrow scope can also create a false sense of security — users may over-trust outputs precisely because the tool seems authoritative within its lane.
A Note on Boundaries
These categories describe a tool's design, not its ceiling. Some tools resist clean classification:
A TAG tool with a legal-focused custom configuration (e.g., a GPT with a legal prompt library) remains TAG. The underlying model was not trained for legal work; the user has simply steered it.
A TAL tool that adds a contract-review feature might blur into TSL territory for that feature, while remaining TAL overall. Classify by the feature being used when evaluating fitness for a specific task.
When in doubt, classify conservatively: choose the category that demands more caution from the user.