When legal teams take on complex litigation, they often outsource the records retrieval process for efficiency reasons. Less time and money spent on preparing subpoenas, tracking down requests, and scanning documents means more time to review your case and work on a winning strategy.
With budgets tightening, law firms and in-house teams expect records retrieval providers to be efficient as well. And with ChatGPT and artificial intelligence making headlines seemingly hourly, lawyers might wonder whether records retrieval could simply be assigned to an algorithm. In fact, some providers now offer fully AI-powered records retrieval solutions.
At Array, we’re interested in AI as well, but we operate from a standpoint of data protection first and foremost. Large Language Models (LLMs) can ingest and analyze enormous amounts of data, but that data isn’t always secure in the hands of the AI. Array’s Records Retrieval team handles large volumes of personal information, from healthcare records to financial data, and we take that responsibility seriously.
Records retrieval is also about time. There are established timeframes for requesting records, and clients have expectations for when services will be rendered. So while Array is interested in how AI can increase efficiency, we’re not interested in full automation or in replacing experienced professionals who take care to retrieve records promptly and securely.
How Array balances AI, efficiency, and privacy
When Array started to explore how AI could cut down the time it takes our professionals to complete tasks, we focused on applying the technology to relatively benign, publicly available information, such as business addresses, in ways that enhance the services our team provides rather than eliminate them. Array’s Subpoena Division maintains very clear boundaries around which tasks are viable for automation and which are not.
The solution: Array uses AI for verification, cutting down the time it takes records professionals to confirm where records are held before documents are served. After our team runs an AI first pass, which is significantly faster than a human first pass, we check the computer’s work and make sure nothing is missing. The result is time saved for our team and for our clients.
Why we’re cautious about AI
When LLMs like ChatGPT, Google Bard and Bing AI became popular this year, users raced to try them — without appreciating the potential privacy risks.
When you submit a question or feed data to these AIs for analysis, there’s a chance those tools are storing your queries, including any sensitive information you upload, and using them to improve their models.
That might not seem like a big deal if you’re asking the AI to write a toast for your friend’s wedding. But records retrieval is exponentially more sensitive.
The companies behind the AIs give you some control over your data privacy. Even so, Samsung reportedly had some of its trade secrets compromised when employees fed company data into ChatGPT.
In spring 2023, OpenAI unveiled an easier “opt-out” security option that lets ChatGPT users disable their chat history so that it can’t be used to train the AI’s models. OpenAI technically holds onto the logs for 30 days and says it will only review them if it’s investigating reports of abuse.
Bing has launched an enterprise model, Bing Chat Enterprise, which won’t save user data, send it to Microsoft or use it to refine Bing’s model. (The original Bing Chat does collect search data, though users can delete their search and chat history.)
Google Bard lets users prevent it from saving their Bard activity and delete older activity, though there are loopholes: “Even when Bard Activity is off, your conversations will be saved with your account for up to 72 hours to allow us to provide the service and process any feedback.” Still, Google cautions Bard users not to include information that could identify themselves or others.
Where we’re headed
Like all technology, AI is evolving. While Array is currently testing the AI waters for efficiency opportunities that don’t conflict with our privacy standards, we are monitoring and researching ways LLMs can increase the value of the records services we provide.