AI foundation models and outcome prediction in legal services

AI is predicted to have an outsized impact on legal services: a recent Goldman Sachs Economics Research[1] paper estimates that almost half of current work tasks and processes in the legal sector could be automated by AI.

But AI foundation models predict; they don’t reason. The model doesn’t apply judgment: it is simply trained to predict what is most statistically probable. In the words of The Economist,

“it is much more like an abacus than it is like a mind.”[2]

So where does this put the upper end of legal services that can’t (at least yet) be replaced by statistically probable predictions? And will AI have any impact on legal advice and services based on judgment, not process?

For business lawyers, a good analogy for the ‘technologisation’ of legal services is an accelerating down escalator, where the bottom is marked ‘commoditisation, process and decline’ and the top is marked ‘specialisation, judgment and growth’. What the Goldman Sachs report shows is that AI will speed up the escalator, and firms will need to move upmarket faster just to stand still.

You wouldn’t have wanted to bet against Big Law in the UK at any time since the decision was taken to lift the 20-partner limit in 1967. Defying manufacturing recessions and financial crises, legal services have grown to account for 2% of GDP: £1 in every £50 of UK national income comes from legal services. And although the pace of growth changes over the economic cycle, ever-widening regulation and complexity have driven growth: governments like to regulate, and regulation makes raw material for lawyers.

Against this background, will we see any difference at the top of the escalator as AI turns professional services into computer services?

One possibility is that competitive pressures will lead to more structured and statistical prediction around the application of judgment. A reputation for good judgment in assessing risk in the most complex cases is what sets Big Law apart. At the moment, this remains largely a combination of analysis, experience and intuition. When the partner tells the client ‘you have a 60% chance of winning’, they’re not saying ‘every ten times this fact situation plays out, you’ll win six and lose four’. They’re really saying: ‘based on the facts as I’ve analysed them, my experience of similar situations, and allowing for the fact that some things that can’t be predicted will happen, my advice is that it’s more likely than not that you’ll win’.
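To see how far apart the two readings are, here is a minimal sketch (Python, purely illustrative; nothing in it beyond the 60% figure comes from the article). Even if ‘60%’ were a true long-run win rate, ten similar cases would land on exactly six wins only about a quarter of the time:

```python
from math import comb

# Purely illustrative: treat the partner's '60% chance of winning' as a
# long-run frequency and ask what ten similar, independent cases would give.
p_win = 0.60
n_cases = 10

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k wins in n independent cases with win rate p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

exactly_six = binom_pmf(6, n_cases, p_win)
six_or_more = sum(binom_pmf(k, n_cases, p_win) for k in range(6, n_cases + 1))

print(f"P(exactly 6 wins in 10): {exactly_six:.1%}")   # ~25.1%
print(f"P(6 or more wins in 10): {six_or_more:.1%}")   # ~63.3%
```

Even on its own terms, then, the ‘win six, lose four’ gloss isn’t what a 60% figure promises: the partner is offering a judgment about this case, not a frequency claim about a population of cases.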

This in turn throws you back on the meaning of ‘likely’ and other words of ‘estimative probability’. Here, context can be everything. Take the IPCC’s confidence level that climate change is man-made. Where ‘more likely than not’ means 50% or above, the IPCC set its level at ‘likely’ (>66%) in 2001, ‘very likely’ (>90%) in 2007 and ‘extremely likely’ (>95%) in 2013.

Or take the world of medicine, where the risk of an adverse reaction to a medical procedure is described as ‘likely’ if it affects more than 50% of patients; ‘frequent’ if it affects 10–50%; ‘occasional’ if 1–10%; and ‘rare’ if less than 1%.
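Put side by side, the definitional gap is easy to miss in prose but hard to miss as data. Here is a minimal sketch (Python, purely illustrative; the dictionaries paraphrase the two vocabularies above and the lookup function is hypothetical):

```python
# Two estimative-probability vocabularies, paraphrased from the examples
# above. The same word maps to different numeric ranges in each context.
IPCC_TERMS = {
    "more likely than not": (0.50, 1.00),
    "likely": (0.66, 1.00),
    "very likely": (0.90, 1.00),
    "extremely likely": (0.95, 1.00),
}

MEDICAL_TERMS = {
    "likely": (0.50, 1.00),
    "frequent": (0.10, 0.50),
    "occasional": (0.01, 0.10),
    "rare": (0.00, 0.01),
}

def meaning(term: str) -> None:
    """Show what a probability word commits you to in each vocabulary."""
    for name, vocab in [("IPCC", IPCC_TERMS), ("medicine", MEDICAL_TERMS)]:
        if term in vocab:
            lo, hi = vocab[term]
            print(f"{name:>8}: '{term}' covers {lo:.0%}-{hi:.0%}")

meaning("likely")
#     IPCC: 'likely' covers 66%-100%
# medicine: 'likely' covers 50%-100%
```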

Are there lessons for lawyers here: could they gain competitive advantage by offering a more statistically predictive approach to assessing risk and chances of success using AI? Possibly, but first define your terms. Even in the climate change and medical contexts quoted above, ‘likely’ means different things: more than two-thirds in one context and more than half in the other. There may also be more than one meaning, as the UK House of Lords said in a case from 1996:

“In everyday usage one meaning of the word likely, perhaps its primary meaning, is probable, in the sense of more likely than not. This is not its only meaning. If I go walking … and ask whether it is likely to rain, I am using likely in a different sense. I am inquiring whether there is a real risk of rain, a risk that ought not to be ignored.”[3]

Drawing on the meaning of the standard of proof in civil proceedings (the balance of probabilities), John Kay and Mervyn King discuss this in the chapter of their 2020 book ‘Radical Uncertainty’[4] titled ‘Uncertainty, Probability and the Law’. They reject an approach based on bare statistical evidence in favour of one based on a ‘search for the best explanation’, quoting Oliver Wendell Holmes:

“The life of the law has not been logic; it has been experience … The law … cannot be dealt with as if it contained only the axioms and corollaries of a book of mathematics.”

And in searching for the best explanation they say:

“narrative reasoning is at the heart of legal decision making. The … claimant is required to present an account of relevant events. The outcome depends on the quality of that explanation. In civil proceedings, the narrative must be a good one, better than any alternative. If so the claimant succeeds on the balance of probabilities.” (page 211)

So, whilst AI-enabled decision support for assessing risk in complex cases is likely to become more popular, foundation models perhaps have some way to go before they reach the top of that escalator.

Endnotes


[1] Goldman Sachs Economics Research, Global Economics Analyst, ‘The Potentially Large Effects of Artificial Intelligence on Economic Growth’ (Briggs and Kodnani), 26 March 2023

[2] ‘Large, creative AI models will transform lives and labour markets’, The Economist, 22 April 2023

[3] Per Lord Nicholls, Re H and Others (Minors) [1995] UKHL 16; [1996] AC 563 at 584G

[4] John Kay and Mervyn King, Radical Uncertainty, The Bridge Street Press, 2020, Chapter 11, ‘Uncertainty, Probability and the Law’
