AI vs. The Eye:
5 Common Questions About Technology-Assisted Review
Advances in artificial intelligence, including predictive coding and continuous machine learning, are creating opportunities to make eDiscovery more efficient and more effective.
Yet, according to eDiscovery Today’s 2022 State of the Industry report, only 25.9 percent of active eDiscovery professionals surveyed said they use predictive coding technology in all or most of their cases, while 36.3 percent said they use it in very few or none of their cases.
Here are five common questions about the use of AI in eDiscovery.
What is TAR?
TAR stands for Technology-Assisted Review. There are two generations of TAR, commonly referred to as TAR 1.0 and TAR 2.0.
What’s TAR 1.0?
TAR 1.0 is commonly referred to as sample-based learning. It represents the first generation of TAR systems.
In TAR 1.0, teams engaging in eDiscovery would pull samples of documents from a larger batch, review them and mark them as responsive or unresponsive to the discovery request.
They would then feed the coded samples into predictive coding software, essentially teaching the computer what to look for in responsive documents and how to weed out unresponsive ones. Under this approach, review teams had to pull several sample populations, stopping and restarting the process to test for accuracy and further refine the model based on the results of each sample.
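The sample-and-retrain cycle described above can be sketched in a few lines of Python. This is a hypothetical toy model: the word-count scoring, the document texts, and the 0.9 accuracy threshold are illustrative assumptions, not any vendor’s actual predictive coding implementation.

```python
# Toy sketch of the TAR 1.0 sample-based workflow (illustrative only).

def train(labeled_docs):
    """Learn per-word weights from a batch of human-coded samples.

    labeled_docs: list of (text, is_responsive) pairs.
    Each word votes +1 when seen in a responsive document, -1 otherwise.
    """
    weights = {}
    for text, is_responsive in labeled_docs:
        for word in set(text.lower().split()):
            weights[word] = weights.get(word, 0) + (1 if is_responsive else -1)
    return weights

def predict(weights, text):
    """Positive total score -> predicted responsive."""
    return sum(weights.get(w, 0) for w in set(text.lower().split())) > 0

# Round 1: reviewers code a small sample and feed it to the software.
sample_1 = [
    ("muffin recipe with white flour", True),
    ("quarterly spaghetti sales report", False),
]
model = train(sample_1)

# Stop and test against a control sample; if accuracy is too low, the
# team codes another sample and retrains (the "stop and start" cycle).
control = [("blueberry muffin order", True), ("spaghetti invoice", False)]
accuracy = sum(predict(model, t) == label for t, label in control) / len(control)
if accuracy < 0.9:
    sample_2 = [("muffin bakery contract", True), ("pasta shipping notice", False)]
    model = train(sample_1 + sample_2)
```

The key point the sketch illustrates is that learning happens in discrete batches: the model only improves when reviewers deliberately pull, code, and feed it another sample.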
How is TAR 2.0 different from TAR 1.0?
Today, TAR 2.0 is considered the standard form of technology-assisted review. It relies on continuous active learning. Teams still provide coding decisions to the algorithm through samples, but instead of the team pre-selecting those samples, the computer chooses the documents whose coding will do the most to teach the system to differentiate responsive from non-responsive documents. As the human reviewer codes more documents, the system continually learns in the background, updating its understanding of what makes a document responsive.
The clear advantage of TAR 2.0 is a much more seamless review process: it requires fewer stops and starts and tends to produce a stable model much earlier in the process.
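The continuous active learning loop can be sketched as follows. Again, this is a hypothetical illustration: the scoring model and the uncertainty-based selection rule (pick the document the model is least sure about) are common active learning techniques, not a specific product’s algorithm.

```python
# Toy sketch of a TAR 2.0 continuous active learning loop (illustrative only).

def train(labeled):
    """Per-word vote counts from all coding decisions so far."""
    weights = {}
    for text, is_responsive in labeled:
        for word in set(text.lower().split()):
            weights[word] = weights.get(word, 0) + (1 if is_responsive else -1)
    return weights

def score(weights, text):
    """Signed responsiveness score; near zero means the model is unsure."""
    return sum(weights.get(w, 0) for w in set(text.lower().split()))

def next_to_review(weights, unreviewed):
    """The system, not the reviewer, picks the next document:
    the one whose score is closest to zero (uncertainty sampling)."""
    return min(unreviewed, key=lambda text: abs(score(weights, text)))

labeled = [("muffin recipe", True), ("spaghetti invoice", False)]
unreviewed = ["muffin order form", "pasta shipping memo", "muffin spaghetti menu"]

# Each human decision immediately updates the model in the background;
# there is no separate stop-and-retrain step.
while unreviewed:
    weights = train(labeled)
    doc = next_to_review(weights, unreviewed)
    unreviewed.remove(doc)
    is_responsive = "muffin" in doc  # stand-in for the human reviewer's call
    labeled.append((doc, is_responsive))
```

Note the contrast with the batch approach: the model retrains after every single coding decision, and it spends the reviewer’s attention on the documents it finds most ambiguous rather than on random samples.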
Where does TAR 2.0 shine?
TAR 2.0 is excellent at quickly weeding out unresponsive documents, and it does especially well at sorting clearly distinct information into separate buckets of “responsive” and “unresponsive.” It can struggle, however, with finer distinctions. For example, if a team is reviewing documents in a lawsuit involving baked goods, the computer might be really good at telling muffins from spaghetti, but have difficulty differentiating responsive documents about muffins made with white flour from documents about muffins made with wheat flour.
Does TAR replace the need for human review?
Using TAR 2.0 in your eDiscovery review process can significantly cut down on the manpower necessary to review potentially thousands of documents and determine their responsiveness.
Teams can use it to quickly shrink the review pool and focus their reviewers on the documents most likely to be responsive. They can also use TAR 2.0 in conjunction with search terms to zero in on what matters most in the litigation.
At the end of the day, legal teams still need to review records for privileged materials and know what they’re producing, so humans have a role to play in the technology-assisted review process.
If you’re not sure if AI is right for your project, engaging an eDiscovery vendor that is well-versed in the technology can be helpful. Working with an eDiscovery vendor at the outset of a project can put you on track to finish your eDiscovery project with greater efficiency and help contain costs.