Virtual Insanity: AI and Judicial Review

I am far from an expert on the growing trend in law and life towards “algorithmic justice,” or decision-making by machines. But a report released by the Law Foundation of New Zealand and the University of Otago got me thinking about the use of neural networks, predictive modelling, and other forms of algorithmic learning in the field of administrative law. Specifically, as these complex models and machines develop, there will be an urgent need for administrative law—conceived as a form of control over delegated decision-making—to adapt to its new subjects. The key question is whether new rules governing “machine-learning” administrative law need to be developed, or whether existing rules can be massaged to apply to new contexts. My view, offered with some trepidation, is that our existing rules of administrative law, developed over centuries, can meet the task of regulating the brave new world of algorithmic justice. The New Zealand report raises a number of interesting issues, but I want to moot a few of them to show how our rules of administrative law and judicial review can evolve to meet the challenge of machine learning.

Consider first the problems of delegation that might arise when machines are used to make decisions. One can imagine two scenarios. In scenario one, Parliament delegates decision-making power to a machine directly in an enabling statute, such that the machine’s decisions are binding. In scenario two, Parliament delegates to a human decision-maker, but the human—perhaps pursuant to internal agency rules or guidance documents—in turn subdelegates to a machine.

Each situation presents important challenges that traditional Canadian doctrines of delegation will need to meet. Take the first scenario. Why would Parliament ever delegate like this? The New Zealand report notes a worrying trend, among experts and non-experts alike: automation bias. Automation bias occurs when human operators “trust the automated system so much that they ignore other sources of information, including their own systems” [37]. We might imagine a world in the not-too-distant future where Parliament, as entranced by “experts” as it already is in traditional administrative law, trusts machines more than humans.

For the New Zealand report, the real problem in such scenarios is the “abdication” of decision-making responsibility [40]. For Canadians, this language is familiar—as I noted in a recent blog post, the only restriction on delegation articulated by the Supreme Court of Canada is a prohibition on the “abdication” of legislative power. What if a machine is given power to formulate and apply rules? This may constitute an abdication of legislative power, because a machine is not responsible to Parliament. It is worth asking whether a machine could ever be responsible in the traditional sense—or whether a human could be made fully responsible for a neural network, given how difficult it is to disentangle the factors on which the network relies [42]. Rather than delving into this morass, courts might adopt an easily administrable rule grounded in the Constitution and the precedents of the Supreme Court: they may need to be more willing to apply a version of the non-abdication rule in the machine context than they would in the human context.
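
For readers who, like me, find the opacity point abstract, a toy illustration may help. The sketch below (in Python, with entirely hypothetical features and synthetic data, and no connection to any real system) trains a small neural network and then prints what it has actually “learned”: matrices of numbers, not named criteria that a Minister could answer for in Parliament.

```python
# Toy illustration only: hypothetical features and synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical inputs an agency might consider: income, years resident, prior refusals
X = rng.normal(size=(200, 3))
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)  # synthetic "outcomes"

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# What the machine "relies on" is stored as matrices of real numbers.
# Nothing here says "income mattered because..."; that is the disentangling problem.
for i, weights in enumerate(model.coefs_):
    print(f"layer {i} weights, shape {weights.shape}:\n{weights.round(2)}")
```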

Scenario two is trickier. Here, there is no abdication problem at first blush, because the delegation runs from Parliament to a responsible Minister or a decision-maker formally answerable in Parliament. But what happens when subdelegation occurs to a machine, and the machine makes the decision for the responsible delegated party? The existing law in this area does not seem to see a problem with this. Take, for instance, the rule that a decision-maker is permitted to adopt a subdelegated investigative report as the final decision (Sketchley, at para 36 et seq). Courts do not apply a more searching standard of review to decisions prepared by subdelegated parties than to those made under primary delegations.

But the existing rule presents new challenges in the context of machine learning. In the human context, where an agency head adopts a subdelegated party’s report, the lines of accountability and authority are clear. Courts can scrutinize the subdelegated report as the reasons of the agency. The same possibility is probably precluded in the machine-learning context, at least at first blush. Courts would need to know how and why humans accepted the “thinking” of an algorithm, or they would otherwise need to understand the modelling underpinning the machine. While these sorts of factors would be apparent in an ideal subdelegated human report, they would not appear at first impression in a decision by a machine—again, especially if the way the machine made the decision is not easily amenable to scrutiny by a human. In such a context, if humans cannot deduce the basis on which a machine made a decision, courts should afford little weight to the machine’s decision, or otherwise prohibit subdelegation to such machines.
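
What might a machine decision that is amenable to scrutiny look like? Very tentatively, and only by way of illustration, one could imagine agencies being required to produce something like the following record alongside any machine-generated decision. Every field name here is hypothetical; the point is simply that the basis of the decision, and the human adoption of it, would be visible to a reviewing court.

```python
# Illustrative only: every field name and value here is hypothetical.
from dataclasses import dataclass
from typing import Dict

@dataclass
class MachineDecisionRecord:
    model_version: str                # which model produced the output
    inputs: Dict[str, float]          # the facts actually fed to the model
    output: str                       # the recommendation the human adopted
    factor_weights: Dict[str, float]  # how much each input drove the output
    human_reviewer: str               # who adopted the machine's output
    adoption_reasons: str             # why they accepted its "thinking"

record = MachineDecisionRecord(
    model_version="risk-model-2024-03",
    inputs={"income": 42000.0, "prior_refusals": 1.0},
    output="refuse",
    factor_weights={"income": -0.3, "prior_refusals": 0.9},
    human_reviewer="delegated officer",
    adoption_reasons="machine output consistent with the file evidence",
)
print(record)
```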

This might appear to be a drastic response to the seemingly boundless potential of machines. But much like expertise as a reason for deference, courts should only countenance machine decision-making to the extent that it is compatible with fundamental premises of the legal system, like the rule of law. While one could hold different conceptions of the rule of law, most would concede that the ability of parties to seek judicial review is one of its fundamental elements (see, on this note, Crevier). Where a court cannot look behind a decision, that decision is effectively immunized from review through the ordinary channels. Courts already worry about this in the context of deficient administrative records on judicial review (see Tsleil-Waututh, at paras 50-51). The same concern is present where humans, whether for lack of expertise or because of technological impediments, cannot look behind the veil of the machine in a way that is cognizable to a court.

In situations where it is possible to deconstruct an algorithm, courts should, as an element of reasonableness review, insist that humans present the modelling in a way that courts can understand. Just as when courts are asked to review economic analysis and modelling, they should insist that experts be able to deduce from complex formulae what the machine is actually doing and how it made its decision. Subjecting machines to the ordinary world of judicial review is important as a matter of the rule of law.
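
Again only by way of illustration, here is a sketch of the sort of translation an expert might offer a court where the model is simple enough to deconstruct: a logistic regression over hypothetical features, whose coefficients can be read, factor by factor, as the weight the machine gives to each consideration.

```python
# Illustrative only: a deliberately simple, interpretable model with hypothetical features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
features = ["income", "years_resident", "prior_refusals"]
X = rng.normal(size=(300, 3))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # synthetic outcomes

model = LogisticRegression().fit(X, y)

# A factor-by-factor account of what the formula is doing, in plain terms.
for name, coef in zip(features, model.coef_[0]):
    direction = "favours" if coef > 0 else "counts against"
    print(f"{name}: weight {coef:+.2f} ({direction} a positive decision)")
```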

Of course, all these thoughts are extremely tentative, and subject to change as I learn more. But it seems to me that courts will need to, at the very least, adjust existing rules of judicial review to suit the modern world of machine decision-making. Importantly, we need not move machines out of the realm of normal judicial review. The rule of law says that all are subject to the law, regardless of status. Even experts—machines or humans—are subject to this fundamental tenet.