Artificial intelligence (AI)-enabled legal tools are reaching the market at an incredible pace. Understanding and vetting such technology is critical to ensuring better outcomes for legal projects.
Every day, news articles in my social media feed address some new legal technology powered by AI. The promise of legal AI is incredible. We are looking forward to a future where lawyers' efficiency and effectiveness are dramatically improved by technology's virtually unlimited ability to scale. This blog is focused on promoting responsible and knowledgeable adoption of legal AI.
The Millionth Map.
I love old books – actually reading them, that is. Their text is a snapshot in history. One of my recent finds was a series of geography books published in 1960. The volumes included an article regarding “The Millionth Map.” This mapping project was then underway to reduce the world into 961 sections, where one inch of the map would equate to one million inches on the earth (about 16 miles).
The author's breathless anticipation of this project was palpable. He opined at length on the benefits of a standardized map that would make comparisons easier and ensure everyone had access to the same information.
I'll be candid. I caught the excitement, at least initially. It was quickly quelled when Wikipedia told me that the project was later abandoned and dismissed as useless.
This is often our relationship with new technology. Expectations start high, but time and actual engagement are the true test for sustainability. The key is being among the first to find those technologies that have staying power.
We have all felt the promise of a new technology. We watch a tech demonstration – usually using Enron data in the eDiscovery world – and the technology appears to solve real problems, real fast. But when we do a proof of concept, or purchase it, we run into its limitations and workflow issues, or otherwise experience that great "tech let-down." High costs and confusion result.
But, how can we embrace the change without the pain? This is a “now-not yet” problem. We want the technology now, so that it can be a differentiator, but it may be “not yet” ready to solve all of our problems exactly how we want it to.
Legal AI is the perfect example of the now-not yet syndrome of legal technology. It promises a lot, and many believe in its power to transform the practice of law. But, right now, its current capabilities are the subject of much debate.
A recent book explores some of the questions on this subject: “Artificial Intelligence and Legal Analytics,” written by University of Pittsburgh law professor Kevin D. Ashley. I spoke with Professor Ashley in Pittsburgh a few weeks ago about his book and its conclusions. (His office is in the Learning Research and Development Center pictured above.) This discussion really underscores the relationship of the lawyer with technology.
Professor Ashley defines “AI” in the same words as noted MIT scientist Marvin Minsky: “the science of making machines do things that would require intelligence if done by [humans].” While the book is very dense reading, and is not for the technologically faint of heart, there are several fundamental principles about Legal AI that are approachable and important.
Computers do not read. "Rather," explained the professor in my meeting, "they can extract semantic meaning from texts and that is useful for various tasks. For instance, they can identify a particular holding in a case and rank it using the meaning to improve the legal information task." In addition, a machine-learning system cannot explain its answers the way a human would, which makes it harder for an attorney to determine whether an answer applies to the issue at hand.
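To make the "extract and rank" idea concrete, here is a minimal sketch of ranking candidate case passages against a query by word-overlap similarity. This is my own toy illustration, not Professor Ashley's method; real legal-research systems use far richer semantic models.

```python
from collections import Counter
import math

def cosine_similarity(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def rank_passages(query, passages):
    """Return passages ordered by similarity to the query, best first."""
    q = Counter(query.lower().split())
    scored = [(cosine_similarity(q, Counter(p.lower().split())), p)
              for p in passages]
    return [p for score, p in sorted(scored, key=lambda t: t[0], reverse=True)]
```

Even this crude scoring surfaces the passage about contract formation ahead of an unrelated procedural passage when the query concerns offer and acceptance.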
Computers can, however, reason rigorously within a carefully limited construct. This has been demonstrated in many ways, and there are plenty of examples of computers solving concrete legal questions. For instance, determining whether a contract has been formed under UCC Article 2 has been modeled, because there is a limited number of variables that must be addressed and accounted for.
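A hand-built rule base of this kind can be sketched in a few lines. The facts and rule names below are my own drastically simplified stand-ins (not a statement of what UCC Article 2 actually requires, and not legal advice); the point is only that when the variables are enumerable, a machine can apply the rules and report which ones failed.

```python
def contract_formed(facts):
    """Toy rule base: every rule must hold over the fact pattern for
    formation. Returns (formed?, list of failed rule names)."""
    rules = {
        "offer made": lambda f: f.get("offer", False),
        "offer accepted": lambda f: f.get("acceptance", False),
        "consideration exchanged": lambda f: f.get("consideration", False),
        "quantity term present": lambda f: f.get("quantity_term", False),
    }
    failed = [name for name, rule in rules.items() if not rule(facts)]
    return (not failed, failed)
```

Note that the machine can report *which* rule failed, but a human had to decide what the rules are in the first place, which is exactly the bottleneck discussed next.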
But, up until now, the rules defining the construct for reasoning had to be designed by humans. Practically, this means a computer can execute an instruction, but can't discern and extrapolate the instruction from the data independently. (Frankly, that sounds like one of my children, who shall remain nameless.)
Creating these rules is, as you can imagine, a very time-consuming task and has become a bottleneck to progress. But, Ashley posits, this bottleneck will be broken when machines can use semantic analysis to extract the rules themselves – training the computer to derive the rules from text so that the machine, rather than a human, builds the construct.
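To illustrate what "extracting rules from text" might look like at its very simplest, here is a naive sketch of my own: scan sentences for an "if <condition>, <consequence>" pattern and emit candidate rules. Real systems would need far richer semantic parsing; this only conveys the concept.

```python
import re

def extract_rule_candidates(text):
    """Return (condition, consequence) pairs from sentences shaped
    like 'If <condition>, <consequence>.' -- a toy pattern matcher."""
    candidates = []
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        m = re.match(r"\s*If (?P<cond>[^,]+), (?P<cons>.+?)\.?\s*$",
                     sentence, re.IGNORECASE)
        if m:
            candidates.append((m.group("cond").strip(),
                               m.group("cons").strip()))
    return candidates
```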
Looking forward, Ashley opines that bridging that gap will eventually result in a new type of tool that enables a collaboration between the human lawyer and the AI-empowered system. Such an application will be a “new breed . . . in which computer and human user collaborate, each performing the intelligent tasks it performs best.” (p. 35) The human will contribute the judgment, imagination, empathy and intuition; the machine will use its precise, consistent logic to provide the support.
Where we are on that spectrum remains to be seen. But, understanding exactly what the machines are contributing is a key to adoption and setting expectations. Ashley remarked that he “had seen many ‘hills and valleys’ with commercial AI and, every time expectations are not met, it ‘puts you right back in the valley.'”
Study First, Implement Second.
Lawyers must understand the technology we are using. This implies partnership with the professionals creating the tools. Ashley suggests testing on a known dataset, using standard validation methodologies. Careful planning and testing will demonstrate the application to the real world, calibrate the return on investment, and ensure that the new Legal AI tools are, in fact, leading to better outcomes, better efficiencies, and better predictability.
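One standard validation methodology of the kind Ashley suggests is to run the tool over a hand-labeled "gold" dataset and measure precision (how many of the tool's hits are correct) and recall (how many of the correct items the tool found). A minimal sketch:

```python
def precision_recall(predicted, actual):
    """Compare a tool's flagged items against a hand-labeled gold set.
    Both arguments are sets of item identifiers."""
    true_positives = len(predicted & actual)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(actual) if actual else 0.0
    return precision, recall
```

A tool with high precision but low recall misses responsive documents; high recall with low precision buries reviewers in false hits. Measuring both on a known dataset, before production use, is what calibrates the return on investment.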
When you are running a factory, as we do in eDiscovery, technology implementation should be well-studied and planned. Technology and support should be thoroughly vetted before implementation. Importantly, once you understand how the technology actually works, the lawyers have to understand how to integrate it, and themselves, into a new workflow. And, to the extent you are using a vendor, you need a partnership focused on client value, as I blogged about in The eDiscovery Factory and Supply Chain. Great technology implemented without thorough testing will only cause confusion, re-work, and cost overruns on the factory floor.
Legal AI holds true promise. The raw horsepower of technology to churn through more data and provide lawyers actionable insights is exciting. To avoid disappointment, we have to clearly understand, implement carefully, and fully consider its impact on our factory processes before we buy and adopt. It may mean we all have to work differently, and accept that technology is playing a different role.
Want to learn more? Baker Donelson is hosting a webinar on analyzing the impact of artificial intelligence in legal September 19, 2018. This session will present a slate of seven industry professionals, each giving exciting, short presentations on the impact of artificial intelligence in the legal industry. Register here.
Next week we will return to discuss the many surprising ways that eDiscovery is just like a manufacturing factory. We will talk then.