Artificial Intelligence 101 in the Legal Industry: Capabilities, shortfalls & fears

Artificial intelligence has captured the attention of legal professionals, with 9 in 10 aware of such tools — about a third more than among consumers in general, a recent LexisNexis survey found.

Nearly half of the legal professionals surveyed said that generative AI would have a significant or transformative impact on the practice of law. Just 7 percent thought it would have no impact.

Suffice it to say: legal professionals have tuned into this evolving technology, and the vast majority expect their work to change.

But with all the noise out there — ranging from predictions of utopia to doomsday and a plethora of points in between — many remain unclear about what exactly generative AI can do.

What is AI?

The artificial intelligence label gets tossed around broadly, but informed use requires some definitions.

AI is a catchall term that covers machine or computer system simulation of human intelligence to accomplish tasks. Many people have been using AI for years without consciously recognizing it: take, for example, the Siri assistant on Apple smartphones, Netflix’s “What to Watch Next” recommendations, or Amazon’s suggestions for additional products to purchase.

Within that broad AI category are multiple subcategories based on specific uses or capabilities. We’ll discuss a few of the most relevant.

For example, machine learning lets computer systems adapt without explicit instructions. They do this by building and applying algorithms and statistical models derived from large sets of historical data: in short, rapid analysis that enables prediction. Examples include image or speech recognition, traffic predictions on a smartphone GPS app, or even the weather forecast on the 6 o’clock news.
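The core loop described above, fitting a model to historical data and then predicting an unseen case, can be sketched in a few lines. Everything here is an illustrative assumption: the scenario (estimating review hours from document counts) and the numbers are invented, and a simple least-squares line stands in for far more sophisticated real-world models.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical historical data: documents in a production vs. hours to review.
docs = [1000, 2000, 3000, 4000]
hours = [10.0, 21.0, 29.0, 41.0]

slope, intercept = fit_line(docs, hours)

# Predict a case the model has never seen: a 5,000-document production.
predicted = slope * 5000 + intercept
print(round(predicted, 1))  # prints 50.5
```

The point is not the arithmetic but the pattern: past data in, a statistical model out, and predictions about new situations from that model.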

A related subcategory, natural language processing, enables computers to interpret and respond to human language, whether text or speech, rather than just computer code. Again, think of Apple’s Siri, Amazon’s Alexa, or a Google search. This capability is typically used in combination with other techniques, such as machine learning. It’s more difficult than it sounds: humans often rely on context or ambiguous concepts to interpret meaning, not to mention slang and colloquialisms that don’t abide by their dictionary definitions.

Generative AI, which has sparked much of the current conversation, can respond to prompts by creating unique text, images, or other media that doesn’t yet exist in the world. ChatGPT, one of the best-known chatbots, combines generative AI with natural language processing, simulating human-like conversations and helping with tasks such as brainstorming, drafting essays or job descriptions, coding, and other applications. Again, these applications rely on the ability to quickly analyze vast amounts of historical data and make predictions.

What can AI do?

In a nutshell, AI applications offer the potential to speed up mundane or repetitive tasks, allowing humans to concentrate on more complex or creative jobs. Used appropriately, the tools could save time and money for companies and for their clients.

Specific to the legal profession, 65% of lawyers expect generative AI tools to be helpful in research — digging up case law or precedents with fact patterns similar to the case at hand. Next came drafting documents (56%), document analysis (44%), and writing emails (35%), the LexisNexis survey found.

Array, for example, uses AI as an assistive tool for document review, decreasing the time it takes to locate relevant documents in a large set of electronically stored information (ESI). AI offers a first-pass review that’s quicker than a human first pass but still relies on humans to provide examples of relevant documents for the AI model to reference.
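The idea behind that first pass can be sketched as a simple similarity ranking: attorneys label a few example documents as relevant, and the system scores the rest of the ESI set against those examples so the likeliest matches surface first. This is a minimal, hypothetical illustration, not Array’s actual system; real review tools use far richer models, and the document text here is invented.

```python
from collections import Counter
import math

def vectorize(text):
    """Turn a document into a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Attorney-labeled examples of relevant material (invented text).
relevant_examples = [
    "breach of contract damages invoice dispute",
    "contract termination notice payment dispute",
]
profile = vectorize(" ".join(relevant_examples))

# Unreviewed ESI gets scored against the profile; highest scores surface first.
corpus = {
    "doc1": "invoice dispute over contract payment terms",
    "doc2": "office holiday party catering menu",
}
ranked = sorted(corpus, key=lambda d: cosine(vectorize(corpus[d]), profile),
                reverse=True)
print(ranked)  # prints ['doc1', 'doc2']
```

The human-provided examples do the steering: the model never decides what “relevant” means on its own, which is why attorney input and attorney checks remain essential.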

Many of these benefits came up in the explanation of AI, but here’s a quick rundown of some of the biggest: efficiency in handling massive amounts of data; prediction based on that data; basic analysis to help with repetitive jobs and yield faster decisions; the potential to learn and accomplish jobs more in line with a user’s needs; and natural language processing and generation.

What are AI’s limitations?

Many people have gotten excited about the mundane tasks that AI can take off their plates, but that excitement must not turn into complacency.

Although generative AI can, for example, write a 2,000-word essay almost instantly, humans must verify that the content is trustworthy. For one thing, the output is only as good as the input. Often, a user must ask a question multiple ways, or engineer their prompts, before getting the desired response, and nuance or bias in the way a question is asked can skew the result. Further refinement is usually needed to match the style, tone, or attitude required in the final piece; the tool lacks the discernment, common sense, and emotional intelligence to judge how a response will land with a human reader.

Nor does ChatGPT explain exactly how it arrived at its conclusions or cite specific sources, so performing adequate checks can be difficult, especially if the topic is unfamiliar to the user or the tool simply made up, or “hallucinated,” the facts. And AI tools only mimic human creativity.

Users also should keep in mind that chatbots may be more limited than search engines: a given chat service may not have access to current, real-time, or even accurate data.

A big word of caution for the legal profession

As if questions about data veracity weren’t enough, a key pain point remains for those in the legal profession: privacy.

Samsung employees found that out the hard way earlier this year: Engineers accidentally shared top secret company data while using ChatGPT to help check source code. Samsung reportedly banned employee use of generative AI tools as a result.
Users may not realize that anything shared with ChatGPT is kept and used to further train the model. There is now an option to turn off chat history, but putting anything online can come with vulnerabilities — particularly for legal professionals handling sensitive client data.

It’s a balancing act. At Array, for example, we set clear boundaries in our Subpoena Division on what information is appropriate for non-human interaction — and what is not.

Array also limits its trust of AI in other ways: Although we may use AI for a first pass through discovery records to narrow down potentially relevant information, our attorneys then check the computer model’s output to make sure nothing is missing and validate the results with statistical analysis.
But even if users figure out acceptable boundaries for today, generative AI tools continue to grow and evolve, so users must, as well.
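The validation step attorneys perform can be sketched as a random-sample check on the documents the model set aside. This is a hedged illustration of one common technique, sometimes called an elusion sample, not a description of Array’s actual process, and all of the numbers are invented.

```python
import random

random.seed(7)  # fixed seed so the sketch is reproducible

# Hypothetical: the model set aside 10,000 documents as non-relevant.
# In this toy setup, 50 of them are secretly relevant (unknown in practice).
discarded = ["relevant"] * 50 + ["non-relevant"] * 9950
random.shuffle(discarded)

# Attorneys review only a random sample of the set-aside pile...
sample = random.sample(discarded, 400)
hits = sample.count("relevant")

# ...and project the sample's miss rate onto the whole pile.
elusion_rate = hits / len(sample)
estimated_missed = elusion_rate * len(discarded)
print(hits, round(elusion_rate, 4))
```

If the estimated number of missed documents is unacceptably high, the first pass gets retrained or re-run; if it is low, the team gains statistical confidence that little was left behind.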

All in all, AI offers great potential for efficiency in the legal profession, but anyone who uses the tools must keep both eyes wide open. Constant education about the benefits, limits and potential risks of AI is essential.
