I am far from an expert on the growing trend in law and life towards “algorithmic justice,” or decision-making by machines. But a report released by the Law Foundation of New Zealand and the University of Otago got me thinking about the use of neural networks, predictive modelling, and other forms of algorithmic learning in the field of administrative law. Specifically, as these complex models and machines develop, there will be an urgent need for administrative law—conceived as a form of control over delegated decision-making—to adapt to its new subjects. The key question is whether new rules governing “machine-learning” administrative law need to be developed, or whether existing rules can be massaged to apply to new contexts. With some trepidation, I think our existing rules of administrative law, developed over centuries, can meet the task of regulating the brave new world of algorithmic justice. The New Zealand report raises a number of interesting issues, and I want to moot a few of them to show how our rules of administrative law and judicial review can evolve to meet the challenge of machine learning.
Consider first the problems of delegation that might occur when machines are used to make decisions. One can imagine two scenarios. In scenario one, Parliament could, in an enabling statute, delegate decision-making power to a machine, such that the machine’s decisions are binding. In scenario two, Parliament could delegate decision-making power to a human, but the human—perhaps under internal agency rules or guidance documents—might in turn subdelegate to a machine.
Each situation presents important challenges that traditional Canadian doctrines of delegation will need to meet. Take the first scenario. Why would Parliament ever delegate like this? The New Zealand report notes a worrying trend among experts and non-experts alike: automation bias. Automation bias occurs when human operators “trust the automated system so much that they ignore other sources of information, including their own systems”. We might imagine a world in the not-too-distant future where Parliament, as entranced by “experts” as it already is in traditional administrative law, might trust machines more than humans.
For the New Zealand report, the real problem in such scenarios is the “abdication” of decision-making responsibility. For Canadians, this language is familiar—as I noted in a recent blog post, Canada’s only restriction on delegation articulated by the Supreme Court is a prohibition on “abdication” of legislative power. What if a machine is given power to formulate and apply rules? This may constitute an abdication of legislative power because a machine is not responsible to Parliament, and it is worth asking whether a machine could ever be traditionally responsible—or whether a human could be made fully responsible for a neural network, given how difficult it is to disentangle the factors on which the network relies. Rather than delving into this morass, courts might adopt an easily administrable rule grounded in the Constitution and the precedents of the Supreme Court: they may need to be more willing to apply a version of the non-abdication rule in the machine context than they would in the human context.
Scenario #2 is trickier. Here, there is no abdication problem at first blush, because the delegation runs from Parliament to a responsible Minister or decision-maker formally answerable to Parliament. But what happens when subdelegation occurs to a machine, and the machine makes the decision for the responsible delegated party? The existing law in this area does not seem to see a problem with this. Take for instance the rule that a decision-maker is permitted to adopt a subdelegated investigative report as the final decision (Sketchley, at para 36 et seq). Here, courts do not apply a more searching standard of review to decisions prepared by subdelegated parties than to those of primary delegates.
But the existing rule presents new challenges in the context of machine learning. In the human context, where an agency head adopts a subdelegated party’s report, the lines of accountability and authority are clear. Courts can scrutinize the subdelegated report as the reasons of the agency. But the same possibility is probably precluded in the machine learning context, at least at first blush. Courts would need to know how and why humans have accepted the “thinking” of an algorithm, or would otherwise need to understand the modelling underpinning the machine. While these sorts of factors would be apparent in an ideal subdelegated human report, they would not appear at first impression in a decision by a machine—again, especially if the way the machine made the decision is not easily amenable to human scrutiny. In such a context, if humans cannot deduce the basis on which machines made decisions, courts should afford little weight to a machine decision, or otherwise prohibit subdelegation to such machines.
This might appear a drastic response to the seemingly boundless potential of machines. But much like expertise as a reason for deference, courts should only countenance machine decision-making to the extent that it is compatible with fundamental premises of the legal system, like the rule of law. While one could hold different conceptions of the rule of law, most would concede that the ability of parties to seek judicial review is one of its fundamental elements (see, on this note, Crevier). Where a court cannot meaningfully conduct judicial review, administrative decisions are effectively immunized from scrutiny through the ordinary channels. Courts already worry about this in the context of deficient administrative records on judicial review (see Tsleil-Waututh, at paras 50-51). The same concern arises where humans, for reasons of lack of expertise or technological impediments, cannot look behind the veil of the machine in a way that is cognizable to a court.
In situations where it is possible to deconstruct an algorithm, courts should, as an element of reasonableness review, insist that humans present the modelling to courts in a way that courts can understand. Just as when courts are asked to review economic analysis and modelling, they should insist that experts be able to explain, from the complex formulae, what the machine is actually doing and how it reached its decision. Subjecting machines to the ordinary world of judicial review is important as a matter of the rule of law.
Of course, all these thoughts are extremely tentative, and subject to change as I learn more. But it seems to me that courts will need to, at the very least, adjust existing rules of judicial review to suit the modern world of machine decision-making. Importantly, we need not move machines out of the realm of normal judicial review. The rule of law says that all are subject to the law, regardless of status. Even experts—machines or humans—are subject to this fundamental tenet.
One thought on “Virtual Insanity: AI and Judicial Review”
I have a slightly different take, probably given my background, which is part law (a year and a half of law school before switching to public admin), part public admin (25+ years, some of it regulatory and international), and part computers (more hobby than career at this point). Most of what people call AI is far from it, and until people really understand the difference, we can be misled about the dangers.
Watson and Siri are great achievements, but they are as far from AI as a PC is from an abacus. A huge leap, but we would have to go as far again to reach anything resembling actual AI. At the moment, we are mainly in the world of predictive algorithms.
To use your regulatory-type scenario 2 above, what we’re talking about is often plugging a bunch of complex variables into an algorithm, doing some complicated analysis, and having it spit out a “decision”. Except it isn’t really a decision; it is a single line of output after thousands of lines of hard-coded analysis. It’s no different from an algorithm searching for the next prime number: we program it to know what to look for, and once we have the various variables weighted, the computer runs the numbers.
For a legal “decision”, the early stages are going to look identical to stare decisis — we are going to ask the giant calculator to show us its math and the “rules” it followed based on precedent and programming, so that we understand the decision and whether we agree with it, just as we demand written decisions from judges. Both have to show their work for us to trust the outcome. Computers are very good at showing their work; we just have to tell them to do so. And if we disagree with the outcome, it means we either fed in the wrong data or coded it wrong. So we adjust it and, voila, the “right” answer. Although some would argue it is ALWAYS the right answer, at least in that it will always give you the answer you told it to give you, even if what you told it isn’t what you meant.
Which isn’t to say we can’t have automation bias…but what we’re actually trusting is that we coded it right in the first place, not the machine itself. Most uses are going to be regulatory in nature, in my view…enter 20 variables, adjust for the current economic environment, risk tolerance, profiles, and quotas, and BAM, the computer calculates the likelihood the application fits what we consider “approvable” right now.
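A minimal sketch of that kind of regulatory scoring (the variable names, weights, and threshold here are all hypothetical, invented for illustration) shows both points at once: the “decision” is just weighted arithmetic crossing a threshold, and the program can trivially be made to show its work.

```python
def score_application(applicant, weights, threshold):
    """Return (approvable, score, trace): a yes/no decision plus an
    itemized audit trail showing how each variable contributed."""
    score = 0.0
    trace = []
    for factor, weight in weights.items():
        value = applicant.get(factor, 0.0)
        contribution = weight * value
        score += contribution
        trace.append(f"{factor}: {value} x {weight} = {contribution:.2f}")
    approvable = score >= threshold
    trace.append(f"total {score:.2f} {'>=' if approvable else '<'} threshold {threshold}")
    return approvable, score, trace

# Hypothetical example: three of the "20 variables", with invented weights.
weights = {"income_ratio": 0.5, "compliance_history": 0.3, "risk_flag": -0.4}
applicant = {"income_ratio": 0.8, "compliance_history": 1.0, "risk_flag": 1.0}
approvable, score, trace = score_application(applicant, weights, threshold=0.25)
```

Adjusting for the “current economic environment” or “risk tolerance” is then just changing the weights or the threshold, and the trace is the computer showing its math, line by line, exactly as the comment describes.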
Expecting anything approaching a sentient, Star Trek-style computer making decisions for us is dreaming in technicolor. Having the world’s infobase at the ready to search doesn’t make it intelligence; it just makes it a really good algorithm.