Machines vs. lawyers

We all know the market for lawyers is shrinking, but not every part of the legal services sector is in retreat. John O. McGinnis writes:

The job category that the Bureau of Labor Statistics calls “other legal services”—which includes the use of technology to help perform legal tasks—has already been surging, over 7 percent per year from 1999 to 2010.

Much of the rest of the piece details how various legal functions can be taken over, if only slowly, by smart software. Here is a bit more:

Until now, computerized legal search has depended on typing in the right specific keywords. If I searched for “boat,” for instance, I couldn’t bring up cases concerning ships, despite their semantic equivalence. If I searched for “assumption of risk,” I wouldn’t find cases that may have employed the same concept without using the same words. IBM’s Watson suggests that such limitations will eventually disappear. Just as Watson deployed pattern recognition to capture concepts rather than mere words, so machine intelligence will exploit pattern recognition to search for semantic meanings and legal concepts. Computers will also use network analysis to assess the strength of precedent by considering the degree to which other cases and briefs rely on certain decisions. Some search engines, such as Ravel Law, already graphically display how much a particular precedent affected the subsequent course of law. As search progresses, then, machine intelligence not only will identify precedents; it will also guide a lawyer’s judgment about where, when, and how to cite them.
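The network-analysis point is more concrete than it sounds: ranking precedents by how much later cases rely on them is essentially citation-graph analysis, the same idea behind PageRank. Here is a minimal sketch with invented case names and a plain-Python PageRank iteration (an illustration of the idea, not how Ravel Law actually works):

```python
# Minimal PageRank-style ranking over a hypothetical citation graph.
# Keys cite the cases in their value lists; all names are invented.
cites = {
    "Smith v. Jones": ["Palsgraf", "Erie"],
    "Doe v. Roe":     ["Palsgraf"],
    "Acme v. Beta":   ["Erie", "Smith v. Jones"],
    "Palsgraf":       [],
    "Erie":           [],
}

def pagerank(graph, damping=0.85, iterations=50):
    n = len(graph)
    rank = {case: 1.0 / n for case in graph}
    for _ in range(iterations):
        new_rank = {case: (1 - damping) / n for case in graph}
        for case, cited in graph.items():
            if cited:  # spread this case's rank over the cases it cites
                share = damping * rank[case] / len(cited)
                for target in cited:
                    new_rank[target] += share
            else:      # a case that cites nothing: spread rank uniformly
                for target in new_rank:
                    new_rank[target] += damping * rank[case] / n
        rank = new_rank
    return rank

for case, score in sorted(pagerank(cites).items(), key=lambda kv: -kv[1]):
    print(f"{case}: {score:.3f}")
```

On this toy graph the heavily cited "Palsgraf" and "Erie" rank highest, which is exactly the "strength of precedent" signal the excerpt describes.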

The entire piece is here, interesting throughout, via B.A.

Comments

Well, if this story is really true (I have not checked the facts; it might be a hoax), then lawyers can probably be replaced by robots easily. "Computer says NO." It sounds plausible, at least:
http://zoekeating.tumblr.com/post/87134171959/as-if-this-isnt-hard-enough

Real automation of legal decision making is actually very low tech. It involves simple paper-and-pencil tests: a small number of questions whose answers have weighted scores that are statistically validated against empirical data. For example, these kinds of tests are routinely used to evaluate the recidivism risks of parolees, and they routinely outperform the predictions of highly trained mental health and legal professionals whose judgments are not statistically calibrated and validated.
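A minimal sketch of what such an actuarial instrument looks like; the items, weights, and cutoffs here are invented for illustration, not taken from any validated instrument:

```python
# Hypothetical actuarial risk score: weighted yes/no items summed and
# mapped to a risk band. Items and weights are invented for illustration;
# real instruments derive them from empirical outcome data.
ITEMS = [
    ("prior_convictions_3_plus", 3),
    ("age_under_25_at_release",  2),
    ("unstable_employment",      2),
    ("substance_abuse_history",  1),
]

def risk_score(answers):
    """answers: dict mapping item name -> True/False."""
    return sum(weight for item, weight in ITEMS if answers.get(item))

def risk_band(score):
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

answers = {"prior_convictions_3_plus": True, "unstable_employment": True}
print(risk_band(risk_score(answers)))  # -> "moderate" (score 5)
```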

It seems that Watson is just about to revolutionize every single field, if you believe the hype. There is potential there, but the current reality of the technology is that quite a lot of training is needed for it to be deployed in a specific task. Once you get into the details of what it is actually doing and what is required to set something like this up, it looks like an incremental technology improvement rather than a revolution. IBM is just one of many competitors in this field.

One thing is for sure: going on Jeopardy was a stroke of PR genius by IBM. Its performance was a great achievement, but the limits of that achievement should be considered. A technical article (published a few years ago, shortly before the contest) states that Watson had an 85% correctness rate when answering 70% of the questions, and a 70% correctness rate when answering all questions. This is plenty for winning Jeopardy, but how many tasks tolerate that level of error rate? And 20 people worked on this system for 3 years to get it to that point. Further, a large part of Watson's advantage came from its ability to press the buzzer far more quickly than humans--something that doesn't relate to its overall "intelligence" as a system.

Overall, this is definitely a field with huge potential, but I am skeptical of the way it is being oversold at the moment.

Very interesting technical article on Watson here:
http://www.aaai.org/Magazine/Watson/watson.php

Yay! Now the robots can tell us which laws we break every day! This way we can make the law even more complex. Can we have robots write the laws too?

There was a great article in The Onion years ago about how a robot was eliminating jobs in Congress. It doesn't seem to be on the internet any more, though.

Serious point: would 100% detection of laws we break make things better or worse? It would certainly reveal the problems and contradictions that currently exist in the system.

It isn't a lack of detection that's the real problem; we have a pretty good idea of how often many (if not most) laws are being broken.

It's prosecutorial discretion that causes problems: overcharging in order to leverage a plea, or dropping charges entirely to conserve resources, with predictable socioeconomic disparities between the two.

Pretty old story. It's just that some lawyers did not know what was going on under the hood. Boolean search is old, old, old.

Legal research has been doing the network analysis referred to in the story for years, long before it was used in other areas of research. Very inexpensive systems, and larger systems run by consulting firms, do network analysis on litigation document databases. For years, we've been able to download briefs written by other lawyers to use in writing our own, so less research time is spent on writing briefs.

All of these advances have occurred because litigation and lawyer time are very expensive. A good legal secretary or paralegal who knows database management software, where to find and copy documents written by others, and how to do network-analysis types of manipulation is more valuable, and cheaper, than a first-year associate.

The area that is in the dark ages for research is some parts of academia. That's probably because their time is not worth as much. I live in an area with academics. Mention network analysis to an English literature professor who is an expert in Moby Dick, or to a former dean of the medical school, and their eyes will glaze over. But not to the mathematician down the block, who 15 years ago was talking about emergent systems at the edge of chaos, network analysis in designing electrical systems, and fuzzy logic in managing a medical device.

To give you an example of how antiquated academia is relative to large law firm practice, consider this:

Well over 15 years ago most large law firms created internal systems for collecting and coding work product, such as pleadings, forms, and research memoranda, so that the work of a lawyer could be used by other lawyers in the firm.

Now, ask yourself this question: If you are an academic, are your research results and databases shared within your department with other colleagues and graduate students? Or are they hoarded on your hard drive or a thumb drive, just for you?

Lawyers hate the new versions of Lexis-Nexis and Westlaw search that take away precise boolean searching and instead return google-like "here's what we think you meant" results. Among other things, it makes it very difficult to justify a claim that there's no case law on subject X.

brad, That is a good point; sometimes boolean, non-network analysis is better. But what I found interesting in the article cited above was that it was written by a con law professor at Northwestern...who presented as "new" stuff that has been going on for years. It may be that lawyers are unaware of what is going on under the hood, just as you are probably unaware of the PageRank algorithm when you use Google.

That's true. When Lexis changed their UI I heard a lot of complaints about things that had been going on for a while, but people didn't realize because it didn't look radically different.

"If I searched for “assumption of risk,” I wouldn’t find cases that may have employed the same concept without using the same words" - not true. AOR is a legal term of art, so if a case does not mention it, it probably is not applicable to the case. But conceptually I get what the author is saying. BTW I doubt lawyers will let robot replace them. When that time comes, they will change the law to make it just enough more complex to avoid robots--and/or they will make robot law illegal. Speaking as a law school dropout.

Yes, this is a major barrier to these technologies in practice. There is a huge amount of domain-specific knowledge that is needed to implement the technology in any given application. You can't just turn on Watson and feed it all of your legal documents. You have to create a modified version of the software that understands legal terminology, which is very labor-intensive.

Also, the cost of labor to do it the old-fashioned way (digesting each new published case by hand) isn't great, and there is a huge work force to do it. For example, in Colorado, a third of the people admitted to the practice of law don't actually practice law. Many are homemakers and retirees for whom a part-time, work-from-home job digesting new decisions from the state's two appellate courts, while still using their professional educations and skills, is very attractive. Nationwide there are roughly four hundred people who are admitted to the practice of law but don't actively practice it for every appellate judge.

In contrast, IT professionals skilled enough to write search software sophisticated enough to do anything remotely at the level needed to rival the brute-force approach of human case digesting are one-in-a-hundred-million individuals who can command salaries of millions of dollars per year and are in high demand for other work (like plain old natural-language searching with online legal research resources and Google and other search engines) that can better afford to pay for it.

I'm not nearly as pessimistic about IT solutions to this problem as you are. A computerised approach is the winning one, clearly superior to the brute-force method. But once you get down to the nuts and bolts of how this technology works and what is required to deploy it, it is more incremental improvement than revolution.

In reality, you learn by your second year of law school that, instead of "assumption of risk," you can type in "assum! /3 risk" and get every linguistically likely permutation of those two words. Legal automation is on its way, not this year or the next but within ten. But this is a very poor example.
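For non-lawyers: "assum! /3 risk" means a word starting with "assum" within three words of "risk." Here is a toy version of that terms-and-connectors logic, with crude tokenization and invented snippets (real Westlaw/Lexis engines are far more sophisticated):

```python
import re

def matches(text, prefix, other, window):
    """True if a word starting with `prefix` appears within
    `window` words of `other` (a crude terms-and-connectors check)."""
    words = re.findall(r"[a-z]+", text.lower())
    hits_a = [i for i, w in enumerate(words) if w.startswith(prefix)]
    hits_b = [i for i, w in enumerate(words) if w == other]
    return any(abs(i - j) <= window for i in hits_a for j in hits_b)

snippets = [
    "The plaintiff assumed the risk of injury.",       # True: 2 words apart
    "Risk allocation clauses and the assumption doctrine.",  # False: 5 apart
    "No relevant doctrine applies here.",              # False: no match
]
for s in snippets:
    print(matches(s, "assum", "risk", 3), "-", s)
```

The second snippet shows why the proximity connector matters: both terms appear, but too far apart to count as a hit.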

We are already seeing a masterful piece of robotics at work: sprinkle articles about food, a random assortment of Futility Closet style oddities and a few contrarian comments about global warming into an entirely predictable series of articles about macroeconomics and hardly anyone notices.

I know the comments at MR have been declining lately, but man, sometimes I see a comment like this and just shake my head. The internet is a big and diverse place. Feel free to find something more to your liking.

Concept-based searches for legal precedents date back to the Roman Empire, when case digests indexed by subject were invented. Legal encyclopedias and modern digest publishers like Am Jur, Corpus Juris Secundum, and the West headnote system, all of which are concept-based and use essentially the same methods as the Romans, were the predominant legal research tools until keyword searching was introduced in the 1980s by firms like Lexis-Nexis and West. Computerized concept-based searching is still available via computerized headnote searches with West.

The innovation here is in using AI to produce headnotes, rather than in searching legal resources based on concepts rather than keywords. But, I'm skeptical that AI will be good enough to surpass the work of a substantial work force of legally trained case digesters any time soon.

I remember in the '90s our law school forcing us to learn that headnote system and paper precedent searches (I forgot the specific term) even while Westlaw and Lexis gave us free access to their services. We students all recognized that their services obsoleted the law school's mandatory "Roman" approach, but the admin didn't want to hear it.

I wonder how Watson would do in Law School? Forced to learn junk that he'll never deploy after the final (except at cocktail parties).

"paper precedent searches (I forgot the specific term)" - the term is Shepardizing, something that I did often as a law student and young associate attorney.

The logical outcome of the process is the eternal enshrinement of bad precedent.

A cheaper and more amusing alternative would be to kill 50% of the lawyers. If genocide is not your thing, then maybe just close the law schools for a decade and put a moratorium on admission to the bar.

Lawyers would love that

My son is a tax lawyer--essentially this means he only handles quite complex tax returns.

He says he would not be able to calculate most of his clients' taxes without the various software programs he uses.

He told me this after I complained about how complicated it was to calculate how much of my social security income to claim. He has a software package that just requires him to enter the total social security income, and it tells you how much to claim.
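That social security calculation is a good example of arithmetic that is tedious by hand but trivial in software. Here is a simplified sketch of the worksheet logic for a single filer; the $25,000/$34,000 thresholds are the standard single-filer amounts as I understand the worksheet, but check the actual IRS worksheet before relying on any of this:

```python
def taxable_social_security(ss_benefits, other_income, tax_exempt_interest=0.0):
    """Simplified version of the IRS worksheet for a single filer.
    The $25,000 / $34,000 thresholds are the single-filer amounts."""
    base, upper = 25_000, 34_000
    # "Provisional income" adds back tax-exempt interest plus half of benefits.
    provisional = other_income + tax_exempt_interest + 0.5 * ss_benefits
    if provisional <= base:
        return 0.0
    if provisional <= upper:
        return min(0.5 * ss_benefits, 0.5 * (provisional - base))
    # Above the upper threshold, up to 85% of benefits become taxable.
    return min(0.85 * ss_benefits,
               0.85 * (provisional - upper) + min(0.5 * ss_benefits,
                                                  0.5 * (upper - base)))

print(taxable_social_security(ss_benefits=20_000, other_income=30_000))  # 9600.0
```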

The problem for tax lawyers is that nearly everybody could just go online and use the tax software themselves.

If software can do the thinking for you, why do people go to a tax consultant in the first place?

Tax law is mostly about definitions of terms and issue spotting, neither of which online software does a good job of addressing, although some are getting better at issue spotting with scripted "client interviews".

I found the contract drafting portion most interesting. There's already software that writes news stories which are hard to tell apart from those written by human journalists; how much longer before all those publicly available contracts on .gov sites are indexed, ranked, and sorted, then pulled apart and re-formed depending on the individual contract provisions necessary, with usable first drafts generated? Goodbye, most junior associates.
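Even a crude sketch shows how mechanical the first-draft step could be; the clause library and deal parameters below are invented for illustration, not drawn from any real drafting product:

```python
# Toy first-draft generator: pick clauses from a library based on deal
# parameters. Clause texts and parameters are invented for illustration.
CLAUSES = {
    "non_assignment": "Neither party may assign this Agreement without "
                      "the prior written consent of the other party.",
    "arbitration":    "Any dispute arising under this Agreement shall be "
                      "resolved by binding arbitration in {venue}.",
    "termination":    "Either party may terminate on {notice_days} days' "
                      "written notice.",
}

def first_draft(deal):
    parts = ["DRAFT AGREEMENT between {buyer} and {seller}.".format(**deal)]
    for clause in deal["clauses"]:
        parts.append(CLAUSES[clause].format(**deal))
    return "\n\n".join(parts)

deal = {"buyer": "Acme Corp", "seller": "Beta LLC", "venue": "Denver",
        "notice_days": 30, "clauses": ["non_assignment", "termination"]}
print(first_draft(deal))
```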

"depending on the individual contract provisions necessary"
Well, there's the rub isn't it. As Bill says, law firms (even small ones) already have prepared contractual provisions to use as a first draft, what you're describing has already taken place. But the lawyer's not adding value by cutting-and-pasting a basic non-assignment clause. He's adding value by determining whether a non-assignment clause is necessary in the first place, what the likely benefits & drawbacks to his client are, and how to best tweak that clause for the specific circumstances in this case. "Usable first draft" ain't much.

I'm a lawyer. I agree with most of this article. Three points:

(1) A big chunk of litigation involves a mix of legal analysis (which may eventually be automated) and people skills (which will likely be one of the last things to be automated). For example, it will be tough to automate a cross-examination or deposition, which requires the lawyer to talk to a witness, assess their demeanor, etc. etc. The same thing can be said about trial, oral argument, talking with clients, etc.

(2) If people skills can't be automated soon, there will be a countervailing effect. Legal analysis, document review, etc., may become cheaper, which may make litigation cheaper overall. Fewer cases will settle. There will be more work for the attorneys who are good at doing the "people skills" stuff, like taking depositions.

(3) I'm excited about the rise of analytics. It really has not been applied to law yet. It will be, and I look forward to the results.

(One last point: the automation will take a few years. Conceptual searching is still terrible. The products that I use--Google Scholar and WestLaw Next--have not been improving. Hopefully they have some sort of prototype in development that actually is getting better. This market may not be big enough to attract serious attention from Google.)
