
Law Practice,
Judges and Judiciary

Mar. 4, 2020

Algorithms and the persuadable judge

Algorithms might persuade, but only humans can be persuaded.


Curtis E.A. Karnow

Judge, San Francisco County Superior Court


Judge Karnow is the author of "Litigation in Practice" (2017) and a current co-author of Weil & Brown et al., "California Practice Guide: Civil Procedure Before Trial" (Rutter).

Bowman: Open the pod bay doors, please, HAL. Open the pod bay doors, please, HAL. Hello, HAL. Do you read me? Hello, HAL. Do you read me? Do you read me, HAL?

HAL: Affirmative, Dave. I read you.

Bowman: Open the pod bay doors, HAL.

HAL: I'm sorry, Dave. I'm afraid I can't do that...

Bowman: HAL, I won't argue with you anymore. Open the doors!

HAL: Dave, this conversation can serve no purpose anymore. Goodbye.

You have a dispute in court and are given a choice: pick one of two doors for your decisionmaker. The dispute is anything you like -- whether you'll be released on bail, whether you win or lose your suit. Behind door #1 is a judge (or a jury). Behind door #2 is an algorithm: software. Assume the algorithm is very, very good -- statistically speaking. It's based on a lot of data and has proven, say, 95% correct. Its results in civil cases -- defense verdicts and plaintiffs' verdicts for various sums -- have been tested against real-world outcomes and are 95% spot on. (That may sound odd, but work with me here.) You have no idea what the judge's track record is for this sort of dispute; there likely isn't one. There's no track record for a jury, because this particular group of 12 has never sat together as a jury before, and never will again.

The algorithm is a "black box": You have no idea what's going on inside it. Actually no one does, because the algorithm is self-trained. It's not classic software written and tested by humans, but something far, far more powerful -- a "general reinforcement learning algorithm." It built itself.

Recall that the judge (or jury) is a black box too: No one knows how the human mind decides. It's commonplace to say one "rolls the dice" with a jury trial. Maybe you think the odds of securing the right result with a jury are 50-50, or 70-30. But with the algorithm there isn't much rolling; it outputs the correct answer 95% of the time. The algorithm doesn't get tired or grumpy or care that you were late to court or that you failed to cite-check your brief. But if you choose the algorithm, there are at least two reasons you will have no assurance that the outcome is based on what you think are the significant aspects of your case. First, you don't know what data the algorithm trained on, so you will never know which aspects of your case it evaluates. Second, even if it evaluates the significant aspects of your case, it is still, at the core of its beating silicon heart, just a probability function -- it is usually correct, but might get your case wrong.

So which door do you pick?

The question may depend on one's view of the role of process: whether it's an end in itself ("procedural fairness," as a Judicial Council report termed it) or a means to an accurate result ("outcome fairness"). Do we have judges and juries because they best ensure accuracy? Or because the process is valuable, telling people they have been heard and, at least where the rules allow it, that their particular situation has been considered?

The answer may be a bit of both.

The issue is confounded because often there isn't a "right" result, which is why my hypothetical algorithm doesn't sound quite coherent. This is especially so in civil cases, where there is a pretty broad range of reasonable results for compensatory and other sorts of damages, and for calculations of percentage of fault among tortfeasors. But the binary decision of plaintiff vs. defendant is more susceptible to this sort of calculation, as is the guilty vs. not guilty decision in criminal cases. So too is the prediction of whether an arrested person will or won't commit crimes pending trial. Indeed, as most readers know, there are many algorithms in use designed to predict recidivism.

The issue is clouded for another reason: algorithms are trained (either by humans or on their own) on existing data, and that data may embed bias. For example, software that seeks to mimic human decisions on recidivism or sentencing might bake racism into the results. And to make matters worse, other factors, such as parties' zip codes, may be vectors of bias.

But with my invented accuracy of 95% and a self-taught program, I have wished away all these confounding issues to allow me to look at something else: the intuition that we prefer a human decisionmaker, even if we don't know that the human is more likely to reach the right result. The reason, I suggest, is not an abstract desire to be heard, but the belief that we can affect the result. Every case is different from the rest, and we want to argue that our case is, in a material way, peculiar. The felt power to affect the court's decision, this sort of agency, lies somewhere near the heart of what due process means. It doesn't matter that the judge is inscrutable, or that jury verdicts are unpredictable. What matters is the sense that the party has the ability to affect the outcome. Algorithms don't serve that interest. And the most powerful algorithms, the ones that teach themselves, are the most opaque in their operations and so the least likely to be seen as responsive to the parties' specific efforts to influence outcomes.

There are other problems, too. While human judges may be black boxes of a sort, they are subject to express constraints, such as a list of factors for a given legal test. The human black box is, as it were, permeable. And human decisions can be reviewed on appeal for abuse of discretion, leading in turn to further constraints on future human decisions. It's not clear what appellate review of an algorithm's decision would look like.

Don't misunderstand me -- like many others in the legal profession, I greatly favor the use of software. We can't do without technology-assisted review in large document productions, and I have gone so far as to suggest the use of software as an expert witness. ("The Opinion of Machines," 19 Columbia Science & Technology Law Review 136 (2017).) In these cases, software is the tool and the means, never the decisionmaker. Algorithms might persuade, but only humans can be persuaded.
