Interpretation and the Value of Law II

This post is written by Leonid Sirota and Mark Mancini.

We read with interest Stéphane Sérafin, Kerry Sun, and Xavier Foccroulle Ménard’s reply to our earlier post on legal interpretation. In a nutshell, we argued that those who interpret legal texts such as constitutions or statutes should apply established legal techniques without regard for the political valence of outcomes. Only in this way can law function as a common reference and guide in a pluralistic, democratic society in which, as Madison eloquently argued in Federalist No. 10, disagreement about fundamental values and the policies required to implement them is pervasive and bound to remain so “[a]s long as the reason of man continues fallible, and he is at liberty to exercise it”.

Our interlocutors claim that our argument leads legal interpretation into “insipid literalism” and, ultimately, sees law as nothing more than a form given to the outcome of power struggles, rather than as the product of reason striving to advance the common good. We remain unconvinced. Our interlocutors seem to wish to escape the more controversial uses to which the “common good” term has been put, but rely on ambiguous claims in doing so. We write today to address some of these claims.

The bottom line is this: if our interlocutors wish to fundamentally change the way we understand texts by sotto voce urging interpreters to adopt a “substantively conservative” position at the outset of the interpretive task, we must dissent. If they wish simply to “tune up” the way we use purpose and context to enrich our understanding of bare texts, then that is a worthy contribution to the ongoing effort in which many of us are engaged: trying to make Canadian interpretation more workable, less results-oriented, and more focused on the text itself, understood in light of its legislative context in real, practical cases.

Our response is divided into two parts. First, we describe how our interlocutors misunderstand the relationship between, as Jeremy Waldron put it, “The Concept and the Rule of Law”. Second, we catalogue the ways in which our interlocutors’ position is muddled.

1. The Rule of Law and the Concept of Law, Again

For our interlocutors, “it is clear” that when we say that interpretation must strive for neutrality in order to enable law to guide the members of a pluralistic society, we are “operating within a positivist legal framework”. At the same time, they suspect us of wanting to smuggle a substantive agenda of expanding pluralism into our interpretive views. Respectfully, they are simply mistaken about this. To be sure, as they suggest, the idea of law as a guide for citizens, and hence of the importance of the law’s compliance with the requirements of the Rule of Law that make its guidance effective, is an important feature in the work of some positivists, such as Joseph Raz. But it is not the positivists’ exclusive preserve.

Consider Professor Waldron’s argument that we need “to overcome casual positivism―to keep faith with a richer and more discriminating notion of law” (19) ― and further, that “[i]t is a mistake to think that a system of rule could be a legal system if there is no publicly accessible way of identifying the general norms that are supposed to govern people’s behavior” (26). Guiding behaviour, including by enabling and encouraging self-application of publicly available rules by those subject to them, and so upholding human dignity, is a key feature of the Rule of Law discourse, but also, Professor Waldron urges, of the very concept of law. This argument was as much on our minds as Professor Raz’s.

And if Professor Waldron might still be regarded as a positivist, trying to merely formulate a better version of that school’s doctrine, Lon Fuller is, alongside John Finnis and Ronald Dworkin, the epitome of Anglo-American non-positivism. And the idea of law as a guide is perhaps best represented in his famous parable of King Rex, the hapless legislator who repeatedly failed to make laws that his subjects could follow. For Fuller too, the requirement that law be framed so as to outline the state’s expectations of its citizens is a matter of respecting human dignity. It is also a matter of what he describes as reciprocity between those in power and those subject to their decisions. The former can expect compliance if, and only if, they frame their demands in such a way that the latter can make sense of them.

The real issue between our interlocutors and us, we suspect, is not a conflict between positivism and natural law, to which one of us (Sirota) is rather sympathetic. Nor is it our commitment to some nihilistic form of neutrality or, conversely, pluralism. As to the former, substantive legislation is of course not neutral―it embodies the commitments of its makers. The task of an interpreter is to ascertain and give effect to these commitments. To do so well, the interpreter must try to bring both established semantic, contextual, and substantive interpretive tools, and (most importantly) an equanimous disposition to his work―precisely to give effect to the commitments made by those with the authority to enact legislation and avoid imposing his own. A judge interpreting the law will never be perfectly neutral in fact, but an interpreter has no business abusing his position to advance pluralism in law, any more than he is free to make the law more conservative, more progressive, or anything in between (this point was put eloquently by Justice Stratas in Kattenburg, at para 45).

Lastly, the issue between our interlocutors and us is not a disagreement about whether law should be infused with reason rather than being a matter of raw power. What we disagree about is how reason matters. For us, as for Fuller, what matters is “the inner morality of law”, or its “artificial reason” as Coke put it ― the morality or reason of legal craft and technique, which ensures that law is intelligible to all those subject to it, simply because they are thinking, reasoning human beings, and which is inherent in the enterprise of governing through law, properly understood, rather than emanating from some benevolent ruler whom the  “[s]ubjects will come to thank”. Our interlocutors’ focus is less on form and more on the content of the law; the reason they appeal to is more substantive than the one on which we focus. We turn now to the substance of their argument.

2. The Motte and Bailey of the Common Good Approach

As we note above, the second broad point we wish to make relates to the ambiguities, whether studied or inadvertent, in our interlocutors’ arguments. We outline three areas where our interlocutors’ positions are confusing. In each, our interlocutors could, on one hand, be advancing controversial propositions about the way texts are interpreted—propositions which could run against the need to avoid outcome-based reasoning. On the other hand, our interlocutors’ position could be wholly uncontroversial, simply relating to the relative place of various interpretive tools (like purpose). If it is the former, our interlocutors should say so clearly. If it’s the latter, our interlocutors should disclaim some of the more controversial purposes for which their arguments could be used.

(A) The Natural Law Motte-and-Bailey

Our interlocutors spend a lot of time talking about natural law. They see it as reflected in the legislative process itself—to them, the natural law tradition asks us to “construe the law itself as permeated by reason.” In a passage bound to feel rather opaque to non-aficionados of the tradition, our interlocutors argue that “[n]atural law reflects an idea of reason immanent in the positive law and lends it intelligibility; while in making its general precepts more specific, the positive law realizes and makes concrete the otherwise abstract elements of the natural law.” More specifically, our interlocutors suggest (putatively relying on Justice Miller in Walsh) that all legislation is designed for the “common good.” So, for our interlocutors, it appears that a reflection on the natural law and the “common good” is inherent in the activity of legislating itself. Even the Constitution, they claim, is influenced by the idea of the “common good.”

We question whether the “common good” can mean the same thing in all these contexts. Hand-waving towards Aquinas or a “model opinion” does not adequately answer this question. Our interlocutors seem to assume that the “common good” as a theoretical matter has been stable across time—from the Angelic Doctor to Justice Miller in 2021. This seems intuitively wrong. Even according to those who subscribe to the natural law tradition, there are debates about what the natural law prescribes.

But ultimately, what we are interested in is how this all bears on legal interpretation; how jurists have applied this idea of the “common good” in relation to real cases and current circumstances. Here, we notice that our interlocutors’ suggestion that appeals to natural law and to the common good are nothing more than reminders of the law’s rationality and pursuit of ascertainable purposes is by no means the only view. Adrian Vermeule, for his part, argues for a “substantively conservative” approach to interpretation designed to support the rulers in endeavours—as Vermeule describes it—to “legislate morality” and to support “the traditional family.” This seems to be a fundamentally different use of the term “common good” than our interlocutors propose.

These two radically different approaches are deployed in typical motte-and-bailey fashion. When outlining their own agenda, the latter-day promoters of the “common good” and natural law support Vermeule’s project to use interpretation to stop the “urban-gentry liberals” from prioritizing their own “financial and sexual” satisfactions, on the basis of external values that exist outside of constitutional and statutory texts. When pressed, however, they retreat to the seemingly innocuous claims about law’s rationality, made to appear rooted in legislation and the Constitution.

These two positions are incompatible. If our interlocutors wish to claim that the pursuit of the “common good” is inherent in the act of legislating, that is a proposition we would be prepared to entertain within the context of deciding what a particular text means, although at least some (and perhaps a good deal of) legislation is demonstrably directed at the private benefit of the law-makers or their constituents, or at entrenching outright bigotry, with appeals to the common good nothing more than a smokescreen. But if our interlocutors wish, instead, to impose an “illiberal legalism,” as Vermeule does, that does not “play defensively within the procedural rules of the liberal order,” then that is a different matter entirely. The former deals with matters of interpretation. The latter concerns itself with the culture wars of the day. Our interlocutors should either disclaim Vermeule’s use of their “common good” or accept it.

(B) The Purposivism Confusion

Our interlocutors’ position on interpretation itself is also equivocal. The language of the “common good”, as used by our interlocutors, seems to invoke one rather uncontroversial argument with which we completely agree: text cannot be understood without understanding its abstract and particular purposes. That is a proposition that textualists and non-textualists alike accept (see A. Scalia and B. Garner, Reading Law: The Interpretation of Legal Texts, at 20), and which is hornbook law in Canada. But at the same time, that basic argument raises more questions than it answers.

Our interlocutors claim that there is “one truth” in the idea of “purposive interpretation”—the premise that law is designed to fulfill an “end” that is “intelligible to reason.” Our interlocutors embrace a “teleological outlook on the essential nature of legislation.” This seems right so far as it goes. As Max Radin notes in his famous article “A Short Way with Statutes,” “the legislature that put the statute on the books had the constitutional right and power to set [the statute’s] purpose as a desirable one for the community” (398). We agree that texts must be read in light of their purposes if we wish to understand why a legislature used certain words in creating a particular rule ― though again we caution that the legislature’s motives may not have been at all noble or reasoned.

If this is all our interlocutors are suggesting, their use of the “common good” phraseology is benign and probably a distraction. Like Asher Honickman in his response to our interlocutors, we do not see these invocations as adding anything to current debates about understanding legal texts. But we take our interlocutors to be saying something more, and simply asserting that law is a teleological enterprise is incomplete without specifying how text drives the interpretive process. What needs to be decided is how we choose what purposes are relevant to interpretation. Here, we could speak of “ulterior” purposes—à la “mischief”—or “implementational purposes”—the legal rules (such as rules, standards, or delegations) that legislatures use, in text, to enact particular ulterior purposes (see, for a discussion of these different purposes, Max Radin, “Statutory Interpretation” at 863, 876). At the highest level of abstraction, one could say that laws are designed to achieve “justice and security” or the “common good” or the “public interest.” This does not tell us much about how a legal instrument should be interpreted, because legislatures do not implement ulterior purposes at all costs or in totality, and courts err when they interpret statutes with this assumption, as one of us has argued here based on the Supreme Court’s decision in West Fraser. Interpreters must work between purposes, keeping a clear eye on the text and the way it enacts particular legal rules (see Sullivan, Statutory Interpretation, at 187).

At times our interlocutors seem to agree with this position. They say that courts cannot “override the terms or the finitude of a statute” and that “no human law-giver can conceivably grant benediction to the common good across the whole of human affairs.” We agree. And yet, we note that an assumption that the legislature’s “reasoned choice is rendered intelligible by the idea of the common good” ignores that language may only imperfectly capture that aim. Our interlocutors’ position is similar to the old “strong purposivist” view represented in the Hart & Sacks Legal Process materials: legislatures consist of reasonable people pursuing reasonable purposes reasonably. If one takes this view, then it is possible to claim that the idea of the “common good” contains within it substantive aims that could and should override the terms of a statute. If this is what our interlocutors argue, we must disagree, simply because the implementational means employed by legislatures will always be over- and underinclusive in relation to purposes stated at a high level of abstraction. Overriding the text of a statute in favour of a court’s appreciation of purpose risks ignoring the means the legislature chose.

Lest this discussion seem abstract, let us conclude with a reminder of what this “strong purposivist” view means in practice: the late-19th-century Holy Trinity case of the Supreme Court of the United States. The Alien Contract Labor Law prohibited the immigration to the US of “foreigners and aliens under contract or agreement to perform labor or service of any kind in the United States”. It was intended to ban the immigration of Chinese workers―but did not specifically say so. The language of the statute also covered an Anglican priest engaged to work in the United States. Yet the Court held that it did not apply to him, because the United States was a “Christian nation,” and hence the law could not have been meant to exclude Christians as well as minorities. Here, we see that the court took a highly abstract background principle and used it to supplement the terms of a statute. This appears to be fine under at least one reading of the “common good” interpretive idea. And yet, this is an outrageous violation of the Rule of Law’s requirement that law be publicly stated and applied in accordance with its enacted terms. It is also, and not coincidentally, an example of intolerable partiality and bigotry.

We conclude this section by restating the point: our interlocutors’ embrace of teleology in law is interesting and welcome, but not helpful by itself. This is because it does not answer fundamental questions about the relationship between text and purpose; and, at best, a perspective focused on “the common good” adds no conceptual heft to relevant and current interpretive debates. We are left wondering whether our interlocutors simply believe in purposive interpretation, or whether they are advancing some other case.  

(C) The Political Confusion

Last but not least, it is important to emphasize that the idea of the “common good”, which our interlocutors present as having a consistent, definite meaning over time, has been put to very different uses by very different people. Our interlocutors claim, for example, that Josh Hammer’s idea of “common good originalism” is perfectly within the tradition of textualism and positivism. Our interlocutors want to reassure us that interpretation drawing on the “common good” does not pursue external policy goals, but rather seeks to determine the meaning of the law from within.

This is a valiant effort, but it flies in the face of the expressly political valence of Hammer’s essay. Hammer makes the following points about his proposed method:

I call my jurisprudential framework “common good originalism,” and I would humbly submit that it be adopted as conservatives’ new legal standard-bearer—a worthy complement to other simultaneously unfolding New Right/“new consensus” intellectual efforts.

[…]

Put more simply: The concerns of nation, community, and family alike must be prioritized over the one-way push toward ever-greater economic, sexual, and cultural liberationism. And this must be true not merely as a matter of public policy, but as a matter of legal interpretation.

Indeed, the entire first part of Hammer’s essay (and another more recent one) trades in politics. The point for Hammer seems to be the development of a certain type of conservative interpretive method that is an adjunct to a political project. One wonders why Hammer needed or wanted to include expressly political statements in a piece that is—our interlocutors tell us—wholly about interpretation. Do our interlocutors disclaim this part of Hammer’s essay, and more generally, how do they distinguish between legitimate and illegitimate uses of the concept of the “common good”?

That the “common good conservative” movement is a political project is clear from the reaction to the US Supreme Court’s Bostock case. As one of us wrote here, in that case, Gorsuch J decided that Title VII protected against discrimination on the basis of sexual orientation and gender identity, despite their not being expressly listed in the statute, because such discrimination necessarily and logically involves discrimination on the basis of sex. In all likelihood, the framers of Title VII did not foresee that the statute would protect sexual orientation and gender identity. Indeed, as Alito J pointed out in dissent, Congress had declined to add sexual orientation and identity to Title VII in the past.

Now, what divided the majority and the dissent in Bostock was a question of pure textual interpretation. As Tara-Leigh Grove argues, Bostock is representative of “two textualisms.” And as Asher Honickman points out, there are reasons to debate the respective roles of social context, expectations, and semantic context in Bostock. This debate has nothing to do with the political valence of one or the other interpretation.

And yet the conservative meltdown over Bostock focused squarely on the results of the case. Here we see the worry about “economic, sexual, and cultural liberationism.” For Hammer, Bostock was not a mistaken application of textualism, but a showcase of its fundamental faults, laying “bare the moral and intellectual bankruptcy of the conservative legal movement.” Hence Hammer’s proposal of common good originalism, designed to solve this very “failure.”

Bostock raises many questions about the aims of the “common good” movement more generally, and its relationship to interpretive method. One is hard-pressed to find how the concept of “the common good” solves any legal problems in Bostock that cannot be solved by robust debate among textualists about the role of expectations, intentions, and purpose. While one of our interlocutors seems to suggest that the result in Bostock was wrong because judges should take account of the underlying “metaphysics” of words, we view this perspective as a distraction for judges working through real cases—and this is clearly not what Hammer et al seem to be getting at. They have identified a “failure” in interpretive method—a result that they, for one reason or another, do not like. They have designed an interpretive method to solve that problem. Without Gorsuch J’s political “mistake” in Bostock, “common good originalism” was unlikely to ever enter the conversation as it has (which is all the odder since Bostock is a statutory case). As a result, we cannot endorse this fundamentally political project.

Conclusion

Those who subscribe to the “common good” in interpretation are on the horns of a dilemma. There are those who seek to use the concept for expressly political ends, treating interpretation as a sort of “living tree” for conservatives. And then there are our interlocutors, who appear to defend the concept as limited, well-understood, and innocuous. We hope our interlocutors can determine which of these options is theirs—and if they simply wish to change emphasis in textual interpretation, then they can join the ongoing debate on that question.

Against Pure Pragmatism in Statutory Interpretation III: A Way Forward and Walsh (ONCA)

About a month ago, I wrote two posts attacking the concept of “pragmatism” in Canadian statutory interpretation. So my argument goes, the seminal Rizzo case, while commonly said to herald a “purposive” approach to interpretation, is actually methodologically pragmatic. This is because the famous paragraph from Rizzo, which contains a list of things an interpreter must take into account, does not assign ex ante weights to these factors. That is, it is up to the interpreter to choose, in the circumstances of particular cases, which factors will be most relevant. In short, while everyone in theory agrees on what the goal of interpretation is, that agreement rapidly breaks down in the context of particular cases.

In these circumstances, methodological pragmatism is attractive because it permits interpreters to use an entire array of tools as they see fit. So the story goes, this freedom leads to “flexibility.” But it can also lead to a number of pathologies in interpretation that should be avoided. In this final post of the series, I outline these pathologies, sketch a path forward, and then highlight a recent example case (Walsh) from the Ontario Court of Appeal that demonstrates why methodological pragmatism unleashes judges to an unacceptable degree. The point here is that interpretation is designed to determine what the legislature meant when it enacted words. Purpose is important in ascertaining that meaning, but ascertaining purpose is not the point of interpretation. This leads to an approach that prefers some ordering among the relevant interpretive tools (for want of a better phrase), rather than a flexible doctrinal standard motivated by methodological pragmatism.

The Pathologies of Pragmatism

By now, and as I have outlined above and in my previous posts, Canada’s approach to statutory interpretation is oddly enigmatic. On one hand, everyone seems to agree on the goal of the enterprise: when courts interpret statutes, they are seeking to discover what Parliament intended when it enacted a particular provision or provisions. Putting aside thorny issues of what “legislative intent” might mean (and see here Richard Ekins’ important work), in practical terms, we are seeking to discover the legal meaning and effect of language enacted by Parliament; we are, put differently, seeking to discover what change has been effected in the law (either common law or existing statute law) by Parliament’s intervention (see Justice Miller’s opinion in Walsh, at para 134).

When a law is adopted, one can speak of ends and means, and it’s this framework that guides the discussion to follow. It would be strange to claim that Parliament speaks for no reason when it legislates. We presume, in fact, that every word enacted by Parliament means something (represented in canons like the presumption against surplusage, see also Sullivan at 187). And so it only makes sense to take account of a particular provision’s purpose when considering interpretation. Those are the ends for which Parliament strove when adopting the legislation. Selecting the proper ends of interpretation—at the proper level of abstraction, bearing on the actual text under consideration—is an integral part of interpretation. To avoid a strictly literal approach, text must be read in this context.

But, importantly, this is not the end. What about means? In some ways, and as I will show through the example case, means are the real subject of debate in statutory interpretation. Parliament can achieve an objective in many different ways. In general, Parliament can enact broad, sweeping, mandatory language that covers off a whole host of conduct (within constitutional limits). It could leave it at that. Or it could enact permissive exceptions to general mandatory language. It can enact hard-and-fast rules or flexible standards. Administrative schemes can delegate power to “independent” actors to promulgate their own rules. The point here is that Parliament can decide to pursue a particular, limited purpose, through limited or broad means. This is Parliament’s choice, not the court’s.

While free-wheeling pragmatism can lead to all sorts of pathologies, I want to focus here on the relationship between ends and means, between purpose and text. Pragmatism can distort the proper ascertainment of ends and means. In some cases, the problem will be that the court, without any doctrinal guidance, chooses a purpose at an unacceptably high level of abstraction (see, for example, the debate in Telus v Wellman, and Hillier), perhaps even to achieve some pre-ordained result. The courts can do so because, if one simply follows Rizzo, there is no requirement that a judge seek textual evidence for the establishment of a purpose. Yet we know that, as a descriptive matter, it is most common that purpose is sourced in text (see Sullivan, at 193): an interpreter can usually glean the purpose of the legislation, not from legislative history, subsequent legislative enactments, or even the judge’s own imagination, but from the text itself.

This descriptive state of affairs is normatively desirable for two reasons. First, the point of interpretation is not to establish the purpose or mischief the legislature was intending to solve when it legislated (despite Heydon’s Case). The point is to discover the intent of the legislature as represented in the meaning of the words it used. The words are the law. Purpose assists us in determining the meaning of those words, but it cannot be permitted to dominate the actual goal of the enterprise. A pragmatist approach permits that domination to occur in at least some cases: if purpose is better evidence of intention than text, then it can be permitted to override text. But this undermines the point of interpretation.

Secondly, for all we might say about legislative intentions, the best practical evidence of intention is what has been reduced to paper, read reasonably, fairly, and in context. Since statutory interpretation is not a theoretical exercise but a problem-solving one, the practicality of doctrine is central. For this reason, purpose can best assist us when it is related to and grounded in the text; when the text can bear the meaning that the purpose suggests the words should carry. To the extent pragmatism suggests something else, it is undesirable.

Sometimes, however, the problem will lie in the means; while the relevant purpose may be common ground between the parties, there may be a dispute over the meaning of language used to achieve those ends. Such disputes tend to focus on, for example, the choice between ordinary and technical meanings, the role of particular canons of interpretation, and (importantly for our purposes) the relationship between the properly-scoped purpose and the language under interpretation. It is the job of the interpreter to work among these tools synthetically, while not replacing the means Parliament chose to accomplish whatever purpose it set out to accomplish. But with pragmatism, no matter the means chosen by Parliament, there is always the chance that the court can dream up different means (read: words) to accomplish an agreed-upon purpose. Often, these dreams begin with a seemingly benign observation: for example, a court might simply conclude that it cannot be the case that a posited interpretation is the meaning of the words, because it would achieve some purpose only ineffectually.

These pathologies can work together in interesting ways. For example, an expansive purpose can cause distortions at the means-selection stage of the analysis; a court entranced by a highly abstract purpose could similarly expand the means chosen by the legislature to achieve those ends. But even in the absence of a mistake at the sourcing stage, courts can simply think that Parliament messed up; that it failed to achieve the purpose it set out to achieve because the means it chose are insufficient, in the court’s eyes.

A Way Forward

When constructing doctrine, at least two considerations should be kept in mind: flexibility and formality, for want of better words. Flexibility is not an inherently good or bad thing. Being flexible can permit a court to use a host of different tools to resolve disputes before it, disputes that sometimes cannot be reduced to a formula. Too much flexibility, however, and the judicial reasoning process can be hidden behind multi-factor tests and general bromides. Ideally, one wants to strike a balance between formal limits on how courts must reason and some built-in flexibility that permits courts room to react to different interpretive challenges.

The point I have made throughout this series is that Rizzo—to the extent it is followed for what it says—is pragmatic, methodologically. Whatever the benefits of pragmatism, such a model fails to establish any real sequencing of interpretive tools; it does not describe the relationship between the interpretive tools; and it leaves the choice of the proper tools to the judge’s discretion. While subsequent Supreme Court cases might have hemmed in this pragmatic free-wheeling, they have not gone far enough to clarify the interpretive task.

A way forward might begin with the argument that there must be some reasons, ex ante, why we should prefer certain interpretive tools to others. This starting point is informed by a great article written by Justice David Stratas and his law clerk, David Williams. As I wrote here:

The piece offers an interesting and well-reasoned way of ordering tools of interpretation. For Stratas & Williams, there are certain “green light,” “yellow light,” and “red light” tools in statutory interpretation. Green light tools include text and context, as well as purpose when it is sourced in text. Yellow light tools are ones that must be used with caution—for example, legislative history and social science evidence. Red light tools are ones that should never be used—for example, personal policy preferences.

In my view, this sort of approach balances formalism and flexibility in interpretation. For the reasons I stated above, the legislative text is really the anchor for interpretation (this is distinct from another argument, often made, that we “start with the text” in interpretation). That is, the text is the best evidence we have of intention, often because it contains within it the relevant purpose that should guide us in discovering the meaning of the text. For this reason, legislative text is a green light consideration. Purpose is also a green light consideration, but this is because it is sourced in text; if it were not, purpose would be misused in a way that might only be recognizable to a methodological pragmatist. Other tools of interpretation, such as legislative history and social science evidence, can be probative in limited circumstances.

The key innovation here is that the Stratas & Williams approach does not rule out so-called “external sources” of meaning, but it does structure the use of various tools for interpretation. For example, the approach does not raise a categorical bar to the consideration of legislative history. But it does make some ex ante prediction about the value of various tools, reasoning for example that purpose is most relevant when it is sourced in text.

This is an immediate improvement over the pragmatist methodology, at least when it comes to my core area of concern, the relationship between purpose and text. In the pragmatist model, purpose can be erroneously sourced and then used to expand the means chosen by the legislature; in other words, it can be used to override the language chosen by the legislature. Under the Stratas & Williams model, such a situation is impossible. Any purpose that is helpful and relevant to the interpretive task will be contained within the language Parliament chose, even if that language is limited, imperfect, or unclear.

An Example Case: Walsh

Much of this can be illustrated by a recent case, Walsh, from the Ontario Court of Appeal. While Walsh is a very interesting case for many reasons, I want to focus here on a key difference between the majority decision of Gillese JA and the dissent of Miller JA. Gillese JA seems to implicitly adopt a pragmatic approach, arguably making purpose rather than text the anchor of interpretation—presumably because the case called for it. Miller JA, instead, makes text the anchor of interpretation. The difference is subtle, but immensely important, because each opinion takes a different view of the “means” chosen by Parliament.

At issue in Walsh was s.162.1(1) and (2) of the Criminal Code. Section 162.1(1), in short, “makes it an offence for a person to knowingly disseminate an ‘intimate image’ of a person without their consent” [61]. An “intimate image” is defined by s.162.1(2), and relates to a “visual recording of a person made by any means including a photographic, film or video recording.”

Stripping the dispute down to brass tacks, the issue in this case was whether a FaceTime call that displays certain explicit content could constitute a recording. The problem, of course, is that FaceTime video calls cannot be conventionally saved and reproduced, like a photo (putting aside, for a moment, the possibility of recording a FaceTime video call). The Crown, at trial, argued that the provisions are written broadly, and must be read “in the context of the harm that s.162.1 was enacted to address: sexual exploitation committed through technology, including cyberbullying and revenge porn” [23, 55]. For the Crown, the answer was found by reasoning from this general “mischief” that the statute was designed to address: the harm would still exist even despite “the recipient’s inability to further share or preserve the moment…” [23]. The defense, on the other hand, reasoned from the ordinary meaning of the word “recording,” concluding that “recording” alludes to the “creation of an image that can be stored, viewed later, and reproduced” [57].

Gillese JA for the majority agreed with the Crown’s argument. She listed five reasons for her agreement, but one is particularly relevant on the issue of the relationship between text and purpose. Gillese JA writes, at paras 68 and 70:

[68] Fourth, restricting the meaning of “recording” to outdated technology—by requiring that it be capable of reproduction—would fail to respond to the ways in which modern technology permits sexual exploitation through the non-consensual sharing of intimate images. In so doing, it would undermine the objects of s.162.1 and the intention of Parliament in enacting it.

[…]

[70] …Giving “visual recording” a broad and inclusive interpretation best accords with the objects of s.162.1 and Parliament’s intention in enacting it.

As we will see, this is precisely backwards.

Miller JA’s dissent should be read in full. It is a masterclass in statutory interpretation, and it is particularly representative of the approach I favour. But most importantly, Miller JA outlines why the majority’s approach demonstrates a means problem, as described above. For Miller JA, there is no purpose-sourcing problem here, since, as he says, there is common ground about the mischief that these provisions were designed to address [179]. However, for Miller JA, a proper application of the various tools of interpretation counselled an approach that did not rewrite the terms of the statute: the means chosen by the legislature. This approach is supported by a number of considerations. First, as Miller JA says, the term “recording” must be given its ordinary meaning. This is the going-in presumption, absent good reasons otherwise. But for Miller JA, the Crown offered no objective support for its assumption that the term “recording” must encompass the FaceTime video at issue. While dictionary meaning and ordinary meaning are two different things, dictionary meaning can shed light on ordinary meaning, and Miller JA noted that there was no instance of the term “recording” being used to describe a “visual display created by any means” [159].

This might have been enough, but the Crown offered another argument: that the term “recording” must be understood as encompassing new forms of technology [162]. Of course, because of the original meaning canon, it could not be said that any linguistic drift in the term “recording” is legally relevant in this case [166]. However, it is a common application of the original meaning rule that where words are written in a broad and dynamic manner, they could capture phenomena not known to drafters at the time of enactment. For Miller JA, however, this argument failed when it came to the word “recording.” For him, FaceTime was clearly a phenomenon that existed at the time these provisions were drafted, and in fact, the context of the provisions indicated that Parliament had actually distinguished, in other places, recordings versus “visually observing a person…” [174-176]. The term “recording,” then, must rely on the concept of reproducibility, as distinguished from other sorts of displays that cannot be saved and reproduced. This latter category of displays was known to Parliament when it crafted these provisions, but it is conspicuously absent from the provisions themselves.

Miller JA, having disposed of these arguments, then clearly contrasts his approach to Gillese JA’s:

[171] What the Crown is left with is the proposition that a reauthoring of the provision would better achieve s.162.1’s purpose….But where Parliament chooses specific means to achieve its ends, the court is not permitted to choose different means any more than it would be permitted to choose different ends. The interpretive question is not what best promotes the section’s purpose, such that courts can modify the text to best bring about that result, but rather how Parliament chose to promote its purpose

[172] …Although the Crown’s argument is framed in ascertaining the conventional, ordinary meaning of language, it is actually an argument about what meaning ought to be imposed on s.162.1, so as to best achieve the purposes of this section.

These paragraphs are remarkable because they clearly set up the difference between Gillese JA’s approach and Miller JA’s approach; the difference between a methodologically pragmatic approach, and an approach that roots ends in means, purpose in text. For Gillese JA, one of her five reasons for accepting the Crown argument pertained to the fact that the defense’s offered interpretation would fail to achieve the agreed-upon purpose of the provisions. This sort of reasoning is only possible under a pragmatic approach, which permits courts to prioritize different interpretive tools as they see fit. The result is a Holy Trinity abomination: where purpose is the anchor for interpretation, and the text is massaged to achieve that purpose, in the court’s view.

Miller JA’s approach is better, if one follows the argument in this post. His approach clearly sees text as an interpretive “tool” that is prior to all the others, in the sense that (1) it is what the legislature enacted to achieve some goal, and (2) it is, practically, the best evidence we have of what the purpose of the legislation is. Under this formulation, it is not up to the courts to decide whether better means exist to achieve the purpose of the legislation. If this were the case, the point of interpretation would be to identify the meaning of purpose, rather than the meaning of language as evidence of intention. Miller JA explicitly assigns more weight to the text in cabining the purposive analysis.

The Walsh case illustrates the problem that pragmatism has created. While all agree on the point of interpretation, that agreement tends to break down when we begin to apply the tools we have to determine the meaning of the text. Methodological pragmatism offers no hope for solving this problem, because it fails to take a stand on which tools are best. The Stratas & Williams approach, and the approach offered by Miller JA in Walsh, envisions some ranking of the interpretive tools, with text playing a notable role. This approach is better. It moves us away from the endless flexibility of pragmatism, while still leaving the judge as the interpreter of the law.

Against Pure Pragmatism in Statutory Interpretation II: Evaluating Rizzo

Part II in a three-part Double Aspect series

Please read Part I of this series before reading this post.

In the first post of this series, I set out to explain the concept of pragmatism in statutory interpretation, as explained by Ruth Sullivan. My contention was that Rizzo, arguably Canada’s seminal statutory interpretation judgment, is a pragmatic judgment. Relatedly, I argued that a purely pragmatic approach to statutory interpretation, while providing interpreters with maximum flexibility, also fails in two potential ways: (1) it permits judges to assign weights to interpretive tools that may run counter to the point of statutory interpretation: to discern what this particular text means; and (2) it could lead to methodological unpredictability–a problem that I will outline in Part III of this series.

In this post, I will address why Rizzo is a fundamentally pragmatic judgment. It is pragmatic because it leaves open the possibility, particularly in the use of purpose, for text to be supplanted if other interpretive tools point in another direction. In other words, it does not make a claim that some interpretive tools are more appropriate than others in the abstract. In the pragmatic approach, it is up to the judge to assign the weights; rather than the methodological doctrine guiding this selection, the judges themselves have unbridled discretion to mould statutory interpretation methods to the case in front of them, based on factual contexts, contemporary values, or otherwise. As I will note in Part III, this sounds good in theory—but in practice is less than desirable.

Rizzo was a garden-variety statutory interpretation case, and I need not go deep into the facts to show what is at stake. Basically, the key question was whether employees of a now-bankrupt company could claim termination and severance payments after bankruptcy [1]. The key problem was whether the relevant legislation permitted the benefits to accrue to the employees, even though their employment was terminated by bankruptcy rather than by normal means. The relevant provisions of the Bankruptcy Act and the Employment Standards Act, on a plain reading, seemed to prevent the employees from claiming these benefits if their employment was terminated by way of bankruptcy [23].

The Supreme Court chastised the Court of Appeal for falling into this plain meaning trap. To the Supreme Court, the Court of Appeal “…did not pay attention to the scheme of the ESA, its object or the intention of the legislature; nor was the context of the words in issue appropriately recognized” [23]. The Supreme Court endorsed this now-famous passage as the proper method of interpretation in Canada:

21 Although much has been written about the interpretation of legislation (see, e.g., Ruth Sullivan, Statutory Interpretation (1997); Ruth Sullivan, Driedger on the Construction of Statutes (3rd ed. 1994) (hereinafter “Construction of Statutes”); Pierre-André Côté, The Interpretation of Legislation in Canada (2nd ed. 1991)), Elmer Driedger in Construction of Statutes (2nd ed. 1983) best encapsulates the approach upon which I prefer to rely. He recognizes that statutory interpretation cannot be founded on the wording of the legislation alone. At p. 87 he states:

Today there is only one principle or approach, namely, the words of an Act are to be read in their entire context and in their grammatical and ordinary sense harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.

Using this approach, the Court reasoned that the provisions in question needed to be interpreted with their objects in mind—specifically, the relevant provisions were designed to “protect employees” [25]. For example, section 40 of the Employment Standards Act, one of the provisions in question, “requires employers to give their employees reasonable notice of termination based upon length of service” [25]. Such a notice period (with termination pay where the employer does not adhere to the notice period) is designed to “provide employees with an opportunity to take preparatory measures and seek alternative employment” [25]. Ditto for the provisions governing severance pay [26].

The Court also relied on a number of other interpretive factors to reach the conclusion that the severance and termination pay provisions governed even in cases of bankruptcy.  Two are important here. First, the Court relied on the absurdity canon: where possible, interpretations of statutes that lead to “absurd results” should be avoided. Particularly, the Court, endorsing Sullivan, notes that “…a label of absurdity can be attached to interpretations which defeat the purpose of a statute or render some aspect of it pointless or futile…” [27]. In this case, the fact that an employee could be terminated a day before the bankruptcy—and receive benefits—and another employee could be terminated after bankruptcy—and not receive benefits—was an absurdity that ran counter to the purpose of the statute to provide a cushion for terminated employees [30].  The Court also focused on legislative history, which it acknowledged can play a “limited role in the interpretation of legislation” [35].

All of this to say, Rizzo is, to my mind, a pragmatic judgment for statutory interpretation. This is because, when it endorses the classic Driedger formula at paragraph 21, it does not venture further to show which of the interpretive tools it relies on are to be given the most weight in interpretation; and accordingly, Rizzo could lead to courts assigning weights to interpretive tools that could distort the process of interpretation. For example, the Rizzo Court does not say—as later Supreme Court cases do—that purpose cannot supplant text in interpretation (Placer Dome, at para 23). In other words, when courts source purpose, text is given more weight in interpretation because it is the anchor for purpose (see, for example, the Court’s analysis in Telus v Wellman, at paras 79, 82-83). This can be seen as the Court saying that text is assigned the most weight in interpretation, and that purpose is parasitic on text. When sourced in this way, then, there is no reason to assume that there will ever be a conflict between purpose and text, because purpose is merely one way to understand text. But Rizzo does not say this, instead suggesting that in some cases, purpose can supplant text.

This is the product of pragmatism. Taken on its own, Rizzo’s endorsement of Driedger permits “…each judge [to take advantage] of the full range of interpretive resources available….and deploys those resources appropriately given the particularities of the case” (see here). The possibility for highly abstract purposes to, in appropriate cases, subvert text is a function of the failure of Rizzo to assign clear weights to the interpretive tools in a way that reflects Canada’s fundamental constitutional principles, including the task of courts to discover what the text of statutes means. I should note, though, that this is not a bug of pragmatism to its adherents; rather, it is a feature. The pragmatists conclude that text should have no special role in interpretation if other factors push against giving effect to text. As I will point out in my next post, this liberates judges to an unacceptable extent when measured in relation to the basic task of interpretation.

Against Pure Pragmatism in Statutory Interpretation I

The first post in a three-part Double Aspect series.

Rizzo & Rizzo, arguably Canada’s leading case on statutory interpretation, has now been cited at least 4581 times according to CanLII. Specifically, the following passage has been cited by courts at least 2000 times. This passage, to many, forms the core of Canada’s statutory interpretation method:

21                              Although much has been written about the interpretation of legislation (see, e.g., Ruth Sullivan, Statutory Interpretation (1997); Ruth Sullivan, Driedger on the Construction of Statutes (3rd ed. 1994) (hereinafter “Construction of Statutes”); Pierre-André Côté, The Interpretation of Legislation in Canada (2nd ed. 1991)), Elmer Driedger in Construction of Statutes (2nd ed. 1983) best encapsulates the approach upon which I prefer to rely.  He recognizes that statutory interpretation cannot be founded on the wording of the legislation alone.  At p. 87 he states:

Today there is only one principle or approach, namely, the words of an Act are to be read in their entire context and in their grammatical and ordinary sense harmoniously with the scheme of the Act, the object of the Act, and the intention of Parliament.

This paragraph has reached the status of scripture for Canadian academics. To many, it stands as a shining example of how Canadian law has rejected “plain meaning,” or “textualist” approaches to law (though these are not the same thing at all, even scholars as eminent as Ruth Sullivan have confused them). Most notably, as Sullivan argues, the practice of the Supreme Court of Canada under the auspices of the modern approach could be considered pragmatist. Pragmatism is considered by many in related fields to be an implicitly desirable good. Pragmatism in statutory interpretation, to its adherents, pulls the curtain back on judicial reasoning in statutory cases, asking courts to candidly weigh the factors they think are most important to reaching the proper result.

Pragmatism can be seen as a sliding scale—where one factor (such as text) is most persuasive, other factors (such as extrinsic evidence) will need to be stronger to overcome the text. In other cases, the opposite may be true. Notably, as championed by people like Richard Posner, pragmatism is focused on achieving sensible results. Therefore, the methodological approach used to achieve those results matters less than the results themselves.

While I am not sure proponents of pragmatism would classify Rizzo, particularly its leading paragraph, as a pragmatic judgment, in my view, Rizzo alone illustrates the key problem with pragmatism as an organizing and standalone theory of statutory interpretation. The Rizzo formula simply presents a laundry list of factors which should guide judicial decision-making, but fails to prescribe weights ex ante to those factors. It seems to assume that, in each case, the weights to the various factors are either (1) equal or (2) assigned by the judge in a given case. This is the key virtue of pragmatism. But it is also its vice, because “…without an advance commitment to basic interpretive principles, who can anticipate how a judiciary of Posnerian pragmatists would articulate and apply that law?” (see here, at 820). In other words, in a pragmatic approach “[e]verything is up for grabs” (820). Specifically, pure pragmatism has a number of potential issues:

  • It ignores that, in our legal system, the text of the statute (read in light of its context and purpose, sourced in text) is what governs, and for that reason, should be given the most weight in all interpretation, even if the text is open-textured. Courts must do the best they can to extract meaning from the text, read in light of its context. Call this formalism, call it textualism, call it whatever. The Supreme Court has said that the task of interpretation cannot be undertaken in order to impeach the meaning of text with extra-textual considerations (Telus v Wellman, at para 79).
  • Aside from the in-principle objection, there is a practical problem. While pragmatists claim that they are bringing the judicial reasoning process into the open, forcing judges to justify the weights they assign to various interpretive factors, in truth a fully-discretionary approach permits judges to reach any result they might wish, especially if they take into account broad “values-based” reasoning, as Sullivan advocates, or source purpose at some high level of abstraction, untethered to text.
  • Finally, the invitation to consider all factors in statutory interpretation, invited by Rizzo and the pragmatists, seems to assume that each interpretive factor will have something to say in a range of cases. But there are inherent problems with each interpretive factor, including text. The question for statutory interpretation methodology is, in the run of cases, which factors are more persuasive and controlling? By failing to provide an ex ante prediction about this question, pragmatists run close to abridging the idea that courts are supposed to develop norms—guiding principles—for statutory interpretation (see 2747-3174 Quebec Inc, at 995-996).

In order to develop these arguments, and address powerful (and some not-so-powerful) counter-arguments, I will be launching a series on Double Aspect on statutory interpretation, designed around the idea of pragmatism. The second post in the series will summarize Rizzo and why it is indicative of a pragmatist approach. The third post in the series will point out, using Rizzo itself, the flaws of pragmatism. It will also laud the Supreme Court and lower courts for, in recent years, blunting the edge of the pragmatist approach. Overall, this series will be designed to show that while text, context, and purpose are relevant interpretive factors, the task of interpretation is one that must be guided by ex ante guiding principles, not an “anything goes” approach. To this end, a recent attempt by Justice David Stratas and David Williams to assign ex ante weights to statutory interpretive factors is laudable and desirable. It should be followed.

A note of caution: the point of this series is not to advocate for a purely text-based approach, or a “plain-meaning approach.” Many have fallen into the trap of simply labelling arguments that highlight the primacy of text as being “textualism” or “plain-meaning.” Many resist the idea of text as a governing factor in interpretation because they believe it is equal to a literal reading, or because it does not take context into account. Virtually no one advocates for this line of thinking anymore. It is a strawman.

Additionally, the point of this series is not to impugn pragmatism wholesale. Instead, the point of this series is to point out that while pragmatism and flexibility have their place in interpretation, those things cannot come at the expense of an interpretive methodology that guides judges according to the core tenets of our legal system, including the separation of powers, as understood by the Supreme Court (see again Telus v Wellman, at para 79).

Stay tuned.

The Top Statutory Interpretation Cases of 2020

A banner year for interpretation

Introduction

To say that one believes in “purposive interpretation” has been the calling card of Canadian legal scholars for some time. Saying this, as some do, is radically incomplete. That is because competing schools of thought also look to purpose. Textualists, for example, look to the context in which words are used, as well as the purpose evident in those words (Scalia & Garner, at 20). To say that one is a purposivist might as well mean nothing, because everyone—even textualists—“routinely take[] purpose into account…” (Scalia & Garner, at 20).

Far from just being a lazy turn of phrase, though, the routine deployment of the term “purposivism” as a distinct school of thought blocks us from a clearer conversation about what should matter in statutory interpretation. For example, the real division between textualists and others is how purpose is sourced in statutory interpretation: textualists are wary of importing some abstract purpose to subvert a “close reading” of the text (see Scalia & Garner, at 20; see also the opinion of Côté J in West Fraser), while others might source purpose differently. Saying that one is a “purposivist” also does not answer an important question: which purpose should count more in interpretation, since statutes often pursue multiple purposes at different levels of abstraction? (see, for an example of this, Rafilovich). These are real interpretive questions that are only now receiving any sort of sustained attention in the case law.

I should not hide my priors here. I too think that purpose is a relevant consideration in statutory interpretation, because it assists in the task of reading text to mean all it fairly encompasses. But purpose can be abused: indeed, “[t]he most destructive (and most alluring) feature of purposivism is its manipulability” (Scalia & Garner, 20). Because purposes can be stated in all sorts of ways, it is up to the judge, in many cases, to choose the most appropriate purpose to assist in interpreting the text. Sometimes, purpose can subvert text—which, of course, is problematic if the purpose is not sourced in text (McLachlin CJC’s opinion in West Fraser is a classic example of this).  Put simply: purpose informs text, it does not supplant it (Placer Dome, at para 23).

For that reason, we must come to sound and principled ways of sourcing purpose, rather than simply stating that we look to purpose. It is this theme that defined, in my view, the task for judicial interpreters in 2020. The following three cases are, to my mind, exemplars of dealing with some of these deeper questions in statutory interpretation. Rather than simply reciting the Rizzo & Rizzo formula and taking an “anything goes” approach to interpretation, these cases delve deeper and answer some knotty interpretive questions in a way that furthers a discussion about statutory interpretation in Canada—particularly with reference to the so-called “purposive” approach. Because these cases start a conversation on these issues (and because I happen to agree with the methodology employed by the judges writing the lead opinions in each case), these are the top statutory interpretation cases of 2020, in no particular order:

Michel v Graydon, 2020 SCC 24

In this case, the Supreme Court of Canada dealt with the question whether it is “possible to vary a child support order under the [Family Law Act] after the order has expired, and after the child support beneficiary ceases to be a “child” as defined in the [Family Law Act]” [2]. This seemingly technical question of family law, however, gave rise to all sorts of interpretive problems: the role of social science evidence in statutory interpretation, the problem of unbridled consequential analysis in statutory interpretation, and the problem raised when judges invoke both “liberal” and “purposive interpretation” in the same breath.

For Brown J, the answer to the question in the case was found largely within the legislative text and scheme. Starting from the text of the provision, Brown J concluded that the relevant text of the Family Law Act “creates an avenue for courts to retroactively change any child support order, irrespective of the beneficiary’s dependent status and irrespective of whether the order is extant at the time of the application” [20]. This was because of the provision’s placement within the relevant statutory scheme. Among other things, s.152 contained no textual restriction on the courts—for example, s.152(1) “contains no reference to the defined term ‘child’ that might serve to qualify the authority of a court to vary child support” [22]. The scheme of the Family Law Act supported this conclusion [23].

For Brown J, this textual conclusion was basically the end of the story (see also schematic considerations at paras 24, 26, 27). Importantly, though, Brown J’s textual conclusion was supported by a properly-scoped purpose. Brown J identified that one of the dominant features of the Family Law Act—given the statute it replaced—was a desire to “expan[d] on the circumstances under which a court may vary a child support order” [28]. Read in light of the text, the result was clear.

Martin J concurred in the result, but conducted a policy analysis to support her concurrence. In Martin J’s view, child support cases called for (that old standard) a “fair, large, and liberal construction” [40]. For Martin J, this sort of construction required a “contextual and purposive reading of s.152” that looks to “its wider legislative purposes, societal implications, and actual impacts” [40] in a way that “takes into account the policies and values of contemporary Canadian society” [70]. Martin J concluded that “a jurisdictional bar preventing these cases from being heard not only rests in unsound legal foundations, it is inconsistent with the bedrock principles underlying child support and contributes to systemic inequalities” [40].

I agree with both judges that the text and context in this case support this reading of s.152. But while both judges agreed on the ultimate result, the method they used to reach the result differs in important ways. While Brown J focuses largely on a contextual reading, Martin J incorporates other information, statistics, and an evaluation of the consequences of the interpretation into the result. As I will note, in this case, these approaches do not lead to dramatically different conclusions, because the tools all pointed to a certain result: text, social science, context, consequences. But where text and such other factors conflict, Martin J’s opinion raises a number of problems, in my view.

There are three comments to make about this case, and why it is important. First, Brown J’s opinion avoids the pitfalls that might be associated with external aids to interpretation.  Specifically, Martin J looked to various social science data related to poverty, family relationships, and marginalization. These are important topics, and in this case, the evidence supported the interpretation that Brown J undertook on the text. But the question arises: what to do when current social science evidence contradicts an analysis undertaken on the text? Put differently, if the text points in one direction, and that direction exacerbates problematic trends in social science evidence, which governs?

It is one thing to suggest that where the text is ambiguous, an interpretation which solves the supposed “mischief” the statute was aimed to solve should be preferred. One could make a case for that argument. But where the text and the evidence are directly contradictory, courts must follow the text because that is what the legislature enacted. This may sometimes lead to interpretations that do not make sense to contemporary society, or are unjust in the face of empirical evidence, because the text was enacted at a particular time. But this is simply a function of the task of statutory interpretation, which is to determine what the legislature meant at the time of enactment (as I note below, this itself is a rule of interpretation). It must be remembered that external aids can be used to assist in interpreting the text. They cannot be used to subvert it. Martin J’s approach could lead to that result—though, as I note, the problem does not arise in this case because the text and evidence pointed to the same interpretive result.

Secondly, both opinions could be read as cabining the role of pure policy or consequential analysis in statutory interpretation, an exercise which, left unchecked, could be an invitation for results-oriented reasoning. It is true that evaluating the competing consequences of interpretive options is a fair part of statutory interpretation (see Sullivan at 212 et seq; see also Atlas Tube, at para 10; Williams, at para 52). But there is a caveat: consequences cannot be used to dispense with the written text. This arises most often in the context of the absurdity canon, under which absurd interpretations of statutes are to be avoided. However, an overapplication of the absurdity canon can lead to many “false positives,” where consequences are labelled absurd in the judge’s opinion even if those consequences are arguably a product of the text. This undermines the legislature’s role in specifying certain words. Instead, consequences can only be used to determine which of various “rival interpretations” is most consistent with the text, context, and purpose of the statute (see Williams, at para 10). In this way, consequences are not used to determine which interpretation is just or unjust in an abstract sense, but which interpretation is most consistent with the statute’s text, context, and purpose.

Brown J clearly used this sort of justified consequential analysis in his opinion. In connecting his preferred interpretation to the properly-scoped purpose of s.152(1), it was clear that his interpretation furthered that purpose. This is a proper use of consequences consistent with the text as the dominant driver of purpose.

On the other hand, Martin J’s opinion could be read in two ways: one undesirable, one not. First, it could be read as endorsing a wide-ranging assessment of consequences, at a high level of abstraction (for example, justifying her consequential analysis with reference to the need to abolish systemic inequalities: see paras 40, 70, 101). This might be a very good thing in the abstract, but not all legislation is designed to achieve such lofty goals. If interpreted in such a way as to reach a result the statute does not reach, statutes can be conceived as addressing or solving every societal problem, and therefore as resolving every unjust consequence—and this could lead to overextensions of the text beyond its ordinary meaning (see Max Radin, “Statutory Interpretation” 43 Harv L Rev 863, 876 (1930)). This reading of Martin J’s opinion is not desirable, for that reason.

Another reading of Martin J’s opinion is that she roots her consequential analysis in the purpose of the statute as she sees it. For example, Martin J notes that her approach interprets s.152 with its “underlying purposes in mind,” which include the best interests of the child [76]. Martin J also notes that her interpretation favours access to justice, under-inclusivity, and socio-economic equality [72]. These factors may or may not be rooted in the statute under consideration.

If Martin J’s opinion is rooted in the recognized purpose of the best interests of the children, one can make the case that her opinion is justified as Brown J’s is. However, if read more broadly, Martin J could fairly be seen as addressing issues or consequences that may not fall within the consideration of the text. In the circumstances, I prefer to read Martin J’s opinion as consonant with Brown J’s. If that is done, there is no warrant to look to consequences that fall outside the purpose of the statute. But note: much will depend, as I note below, on how the purpose of a statutory provision is pitched.

Finally, Brown J’s opinion is tighter than Martin J’s in the sense that it does not raise conflicts between statutory interpretation principles. Martin J’s opinion arguably does so in two ways. First, it is well-known (despite the controversy of this practice in constitutional interpretation) that statutes must generally be interpreted as they would have been the day after the statute was passed (Perka, at 264-5). While there is some nuance on this point (see Sullivan, at 116-117), words cannot change legal meaning over time—but note that broad, open-textured terms can be flexibly applied to new conditions if the words can bear that meaning (see here). The key is that words can only cover off the situations that they can fairly encompass. But the injunction—repeated throughout Martin J’s opinion—that statutes must be interpreted in light of the “policies and values of contemporary Canadian society” [72] at least facially conflicts with the original meaning canon. In Martin J’s defence, she is not the first to say this in the context of family law and child support (see Chartier, at paras 19, 21). But nonetheless, the court cannot have it both ways, and Martin J’s opinion cannot be taken to mean that the legal meaning of texts must be interpreted to always be consistent with contemporary Canadian society.

At best, it might be said that Martin J’s opinion in this respect permits the taking into account of contemporary considerations where the text clearly allows for such considerations, or perhaps where the text is ambiguous and one interpretation would best fit modern circumstances in a practical sense. But these modern circumstances cannot be shoehorned into every interpretation.

Secondly, there is a conflict in Martin J’s opinion, in a theoretical sense, between her invocation of a “fair, large and liberal” interpretation (see paras 58, 71) and her invocation of a “purposive” interpretation (see para 71). As Karl Llewellyn pointed out long ago, it is not unheard of for tools of interpretation to conflict. But as much as possible, judges should not invite such conflicts, and I fear Martin J did this in her opinion by conflating liberal interpretation with purposive interpretation. As I have written before, these things are not the same—in fact they are opposites. The Interpretation Act does instruct a “large and liberal” interpretation, but only as the objects of a statute permit. The Supreme Court continues to insist on an approach to statutory interpretation that uses text to ground the selection of purpose (see here). As such, text and purpose, read synthetically, govern—not some judge-made conception of what constitutes a “large and liberal interpretation.” This statement cannot be used to overshoot the purposes of a statute, properly scoped.

Perhaps in this case the purposes permit a large and liberal interpretation, in which case Martin J can use both of these tools interchangeably. As I said, the problem isn’t this case specifically, but what would happen if Martin J’s approach is used in the general run of cases. But it is far from clear that purposive and generous interpretation will always–or even often–lead to similar results. More likely, purpose will limit the ways in which text can be read—it will not liberate the judge to take into account any policy considerations she wishes.

Michel v Graydon raises all sorts of interesting issues. But taking Brown J’s opinion on its own terms, it is a clinic in how to clearly interpret a statute in light of its properly-scoped purpose. While Martin J’s opinion could also be read in this way, it could be read to permit a more free-flowing policy analysis that subverts legislative language. In this sense, Martin J’s opinion should be affixed with a “caution” label.

Entertainment Software Association, 2020 FCA 100

ESA will stand, I think, for some time as the definitive statement in the Federal Courts on how to conduct statutory interpretation, and the role of international law in that endeavour.

In this case, the facts of which I summarized here,  the Copyright Board offered an interpretation of the Copyright Modernization Act that arguably placed extraneous materials ahead of the governing text. Here is what I wrote about the Board’s conduct at the time:

The Board’s chosen materials for the interpretive exercise were stated, according to the Court, at a high level of generality (see paras 53-54). For example, the Board focused on the preamble to the Copyright Modernization Act to divine a rather abstract interpretation that supported its view on international law (paras 53-54). It also invoked government statements, but the Court rightly noted that these statements construed s.2.4(1.1) as a “narrow, limited-purpose provision” [56], not as an all-encompassing provision that permitted the collection of tariffs in both instances. These materials were used by the Board to herald a different, broader interpretation than what the text and context of the provision indicated.

The Court rebuffed the Board’s effort in this regard. By noting that the provision under interpretation was a “narrow, limited-purpose provision,” the Court rejected attempts by the Board to drive the interpretation higher than the text can bear.  This is a worthy affirmation of the importance of text in the interpretive process, and a warning about the malleability of purposive interpretation.

Why is this opinion so important? It makes a now oft-repeated point that purposive interpretation is not conducted “at large.” That is, it matters how judges state the purposes they hope to use in the interpretive task. As the Supreme Court noted in Telus v Wellman, courts cannot use abstract purposes to “distort the actual words of the statute” (see Telus, at para 79). This counts as an endorsement of the traditional separation of powers, under which “…the responsibility for setting policy in a parliamentary democracy rests with the legislature, not the courts” [79].

ESA is important because it implements what the Supreme Court has now repeated in Telus, Rafilovich, and other cases. It is now clear law that purposes cannot be used to subvert text; that text is the starting point in legislative interpretation, and that in sourcing purpose, text confines the scope of the exercise. In my view, ESA (expertly written by Justice Stratas) makes the clearest case yet for a sort of text-driven purposivism in the context of Canadian statutory interpretation.

Canada v Kattenburg, 2020 FCA 164

One underlying theme of much of what I have written thus far is a worry about results-oriented reasoning in statutory interpretation. To some, this might not be a risk at all. Or it might be a desirable feature: after all, if all law should simply be an adjunct of politics, then the policy preferences of judges are fair game. Of course, I readily admit that no legal system can reduce the risk of subjective policy-driven interpretation to 0; nor should it. But the Rule of Law, at its most basic, means that the law governs everyone—including judges. Part of the law judges must apply are the rules of statutory interpretation. Those rules are designed not to vindicate what the “just” result is in the abstract, what is “just” at international law (except where international law and domestic legislation meet in defined ways), or even what is “just” to the judge at equity or common law–except, of course, when statutes implicate common law rules. Statutory interpretation is a task that requires determining what the legislature thought was just to enact. As such, the rules of interpretation are guided towards that goal, and are necessarily designed to limit or exclude the preferences of judges or others, even if we reach that goal only imperfectly.

This important theoretical point was made in relation to the ascertainment of legislative purpose and international law in Kattenburg by Stratas JA. In Kattenburg, the underlying substantive issue was simple and narrow, as I wrote in my post on the case:

The Canadian Food Inspection Agency decided that certain wine imported to Canada from the West Bank are “products of Israel” (see the Federal Court’s decision in 2019 FC 1003 at para 3). The judicial review, among other issues, concerned whether the wine could be labelled as “products of Israel.” That’s it. Under ordinary administrative law principles, the court will assess whether the decision of the CFIA is reasonable. A typical legal task.

However, on the intervention motions in Kattenburg, Stratas JA noted that some intervenors attempted to further bootstrap the record with “hyperlinks to find reports, opinions, news articles and informal articles to buttress their claims about the content of international law and the illegality of Israel’s occupation of the West Bank” (Kattenburg, at para 32). Stratas JA rejected such efforts.

Stratas JA’s rejection of these intervenors, and his strong words in denouncing them, raised the ire of some on law twitter. But anything worth doing won’t be easy, and Stratas JA said what needed to be said, particularly when he noted that, with respect to the intervenors, “[s]o much of their loose policy talk, untethered to proven facts and settled doctrine, can seep into reasons for judgment, leading to inaccuracies with real-life consequences” (Kattenburg, at para 44).

 There’s no denying Stratas JA is pointing to an important methodological problem that is deserving of our attention. One way that purposes can be misstated, or used to subvert clear text, is by advancing broad understandings of international law to expand the purpose. As I’ve noted before, it is true that “international law can…be relevant to the interpretation of Canadian law where it is incorporated in domestic law explicitly, or where there is some ambiguity” (see here). But in many cases, international law will simply not be relevant to the interpretation of legislative texts, or the ascertainment of legislative purposes.

The attempt in Kattenburg to cast the legislative purpose to encompass some statement—any statement—on the legality of Israel’s conduct in the Middle East is a classic end-run around legislative text. Some of the intervenors may have wanted the Court to interpret the legislation in a particular way, so as to encompass substantive policy goals found in international law. But that effort not only runs afoul of fundamental principle—international law only enters the task in defined, narrow ways—but is also contrary to precedent (see Vavilov, at para 121 and the litany of Federal Court of Appeal and Supreme Court cases on this point). Such efforts should be rejected.

Conclusion

In many ways, the three cases I have chosen as important for interpretation in 2020 are all representative of a broader theme of which lawyers should be aware. That is, there is much more happening behind the curtains in Canadian statutory interpretation than might appear at first blush. “Purposive interpretation” is not the end of the story. What matters is how we source purpose, the sources we assess to assist the interpretive task, and the role of text in grounding the interpretive process. These cases all come to defensible conclusions on these questions. The insights of these cases can be distilled into a few key propositions:

  1. Purpose must be sourced in relation to the relevant text under consideration. In this way, we are interpreting text as the legislature enacted it, and we are not using purpose to subvert that authentic reading of the text.
  2. There are reasons to be worried about consequential analysis, to the extent it could permit an expansion of legislative purposes beyond text.
  3. There are reasons to be worried about international law, to the extent it could permit the expansion of legislative purposes beyond text.

All for the better.

A Happy New Year for interpretive nerds!

Constitutional Law Ruins Everything. A (sort of) response to Mancini’s “Neutrality in Legal Interpretation.”

This post is by Andrew Bernstein.

No! I am not an academic nor was meant to be.
Am a mere practitioner, one that will do
To settle a dispute, argue an appeal or two
When advising clients, the law’s my tool.
Deferential, if it helps me sway the court
Argumentative, and (aspirationally) meticulous.
Case-building is my professional sport
Trying my hand at theory’s ridiculous!
But I’ll dip a toe into this pool.

(With apologies to T.S. Eliot and anyone who appreciates poetry)

Also, this is a blog post, so no footnotes or citations. Sorry!

As a lawyer whose most enduring interest for the last 30 years has been Canada’s constitutional arrangements, I am pained to confess to you that I have concluded that constitutional law ruins everything. Or, perhaps put more judiciously, the kinds of debates that we have about constitutional interpretation are not especially instructive in dealing with other types of legal questions, such as statutory or common-law interpretation. There are many reasons for this, but the central one, in my view, has to do with the fact that while reasonable people may disagree on the outcome of a statutory interpretation, or a question of common law, those people will largely agree on the method of conducting those analyses. In constitutional interpretation, we don’t have consensus on “how,” so it’s no wonder that the outcomes can be so radically different.

What are we really asking courts to do when we ask them to resolve a dispute? There are no doubt some high-minded theoretical answers to this (“do justice between the parties,” “ensure that capitalism is never threatened,” “enforce institutional sexism, racism, ageism, ableism and homophobia”) but from a practitioner’s perspective, the answer is actually straightforward: sort out the facts and apply a set of legal rules to those facts. Overwhelmingly those rules come from a variety of legal instruments, such as statutes, regulations, by-laws, and other “outputs” of political institutions such as Parliament, legislatures or municipal councils. If these institutions don’t like the judicial interpretation of what they have passed, they can change the instrument accordingly. Moreover, these institutions are democratically elected, so if citizens do not agree with the laws that get made, they can replace the lawmakers at the next election. Although this “feedback loop” suffers from many inefficiencies and obstacles in practice, it is essential to maintaining the concept of self-government by majority rule. What this means is that courts know what they are supposed to be doing when they interpret statutes: they look for legislative intention, as expressed by the words of the document. While courts are entitled to employ whatever clues they might be able to find in things like the legislative history, they appreciate that those clues must be used judiciously, as one speech by one MP does not a legislative intention make. And courts appreciate that the words of the document ultimately govern – although compliance is less than perfect, courts generally understand that they are not to circumvent the meaning of legislation with some kind of analysis based on the instrument’s supposed “purpose.”

While it is frequently accepted that the objective of statutory interpretation is to discern legislative intent, the question of why we would want to do so is not frequently interrogated. After all, while it may make eminent sense to give effect to a law that was passed a week ago, why would a self-governing people want to be governed by legislation that was passed by a legislature that is no longer in session? Perhaps by a different political party? The answer is partially pragmatic (it would be awfully cumbersome to have to re-enact every law each time a legislature was dissolved) but the real reason is the existence of the democratic feedback loop. Statutory interpretation operates on the presumption that, if no legislature has repealed or amended the statute, the people (as represented by the legislature) are content with it as it stands. In fact, this is the reason why no legislature can bind a future one to things like supermajority requirements: it is the people’s current intention – and not their past intention – that governs.

Constitutional law is designed to be immune to the democratic feedback loop. At least some aspects of the constitution are specifically intended to limit democratic institutions. The essence of that aspect of constitutionalism is the protection of vulnerable and/or minority groups from the potential for ill-treatment by the majority. Sometimes these protections take the form of institutional structures (such as federalism, regional representation in central institutions, and, according to some, a separation of powers) and other times they are specific guarantees of rights that specifically limit government action: freedom of expression, equality, or even “life, liberty and security of the person.” Cumulatively, this constitutional architecture is supposed to create a balance between self-government and limited government, ensuring that Canadians can govern ourselves, while not permitting the majority to oppress minorities.

This sounds great in theory, but immediately creates a dilemma: who gets to decide on the limits of “limited government?” Someone has to, and (if the constitution is going to be effective at curbing democratic excess) it has to be a different “someone” than the majoritarian institutions that actually do the governing. And although there are different models around the world, in Canada (like our American neighbours), we entrust that job to the Courts. This is not an uncontroversial decision, for a number of reasons. First, it is not clear that courts are institutionally well-suited to the job, with their adversarial model of fact-finding and decision-making. Second, courts are presided over by judges, who are just (as Justice Stratas recently said) lawyers who have received a judicial commission. There is no reason to think they are especially well suited to weighing the interests that a complex society needs to achieve an ideal balance between, for example, liberty and security, or equality and religious freedom. Third, judges are famously unrepresentative: they are whiter, richer, more male, more Christian, older and more conservative than the population. Nowhere is this more apparent than the apex of judicial decision-making, the Supreme Court of Canada, which got its first female judge in the 1980s and has never had an indigenous or any type of non-white judge or a judge from the LGBTQ community. Eighty-five of Canada’s ninety Supreme Court judges have been Christian; the other five have been Jewish. No Muslims, Hindus, Sikhs, or even (admitted) atheists. Nevertheless, these nine judges get to make significant decisions that have a major impact on social policy. Since the Charter was enacted, the Supreme Court has had a major role in liberalizing access to abortion, permitting medical assistance in dying, liberalizing prostitution laws, freeing access to cannabis, prohibiting the death penalty, enhancing public employees’ right to strike, and in many other social policy decisions that were different from the democratic choices made by legislatures. In Canada, most decisions to strike down legislation have tilted towards the liberal side of the political spectrum, but there have also been decisions (most infamously, relating to private health care in Quebec) that tilt more towards the conservative side. This is not inherent to the process of adjudicating rights: the United States Supreme Court has grown increasingly conservative in the last 20 years, striking down liberal legislation relating to campaign finance, voting rights, and only yesterday striking down pandemic limitations on gatherings in houses of worship.

The combination of anti-democratic process and anti-democratic outcomes that constitutional adjudication creates has been the subject of concern and criticism since judicial review was created in Marbury v. Madison. This, in turn, has led to the development of theories that are designed to constrain judicial decision-making. While some of this may be results-oriented, at its core, the goal of all “court-constraining theories” of constitutional interpretation is to give constitutional decision-making a touchstone by which decisions can be evaluated. Readers of this blog will no doubt be familiar with these theories, such as textualism, or public-meaning originalism, which stand in contrast to what is sometimes referred to as “living tree constitutionalism” or (in Leonid’s catchy turn of phrase, “constitutionalism from the cave”). While I will undoubtedly not do them justice, the “touchstone theories” posit that the meaning of constitutional rights is more-or-less fixed (although it may need to be applied in novel situations) and it’s the job of the courts to find and apply those fixed meanings, while “living tree constitutionalism” allows the meaning of these rights to evolve over time, and it’s the job of the courts to decide when and how to permit that evolution to take place.

To use an over-simplified example, imagine a constitutional guarantee of “equality,” which (it is agreed) was understood to mean “equality of opportunity” at the time it was enacted. And imagine that 40 years later, it is established that the historical and systemic disadvantages suffered by certain groups mean that merely providing equal opportunity proves insufficient to provide those groups with a fair outcome. Touchstone constitutionalists could suggest that although what constitutes “equality of opportunity” may have to change to meet changing social circumstances, the guarantee does not permit courts to go further and use the constitutional guarantee of “equality” to impose equality of outcomes. Living tree constitutionalists could posit that the guarantee of equality was intended to ensure that people do not suffer disadvantage because of their immutable characteristics, and if we now recognize that this can only be done by providing equality of outcome, then this is what courts should do.

What’s important to appreciate is that our protagonists on both sides are not disagreeing just on the outcome. They are disagreeing on the fundamental nature of the exercise. Touchstone constitutionalists believe that the courts’ job is essentially to be the “seeker” in a game of hide and seek, while the living tree constitutionalists believe that the courts are playing Jenga, carefully removing blocks from the bottom and building the tower ever higher, with its ultimate height limited only by how far they can reach.

Who is right and who is wrong in this debate? No one and everyone. In fact, as I read Mark’s post to which I am (ostensibly) responding, I understand his plea to be not that touchstones – regardless of how old they may be – are normatively a fantastic way to adjudicate modern problems but rather that the alternative to touchstones is anarchy (or Kritarchy), and that has to be worse. Similarly, critics of touchstone constitutionalism are concerned about being forever bound by the past, without providing a particularly good explanation of what could or should reasonably replace it without ultimately resorting to the idea that we have to trust our judges to make good decisions. This, of course, begs the question “if we are relying on someone’s judgment, why is it the judges’ and not the people’s, through their democratically elected representatives?”

What am I saying? I’m saying that the “touchstone vs. tree” debate is actually a normative question that people like to dress up as one that has an objectively ascertainable answer. But in truth, where you stand on this will really depend on your own personal value system, as informed by your own experiences. If you value predictability and stability, and/or the idea of judges making decisions about what is right, fair or socially appropriate is offensive to you, you may be inclined towards touchstone constitutionalism. If you value substantive outcomes, and see the judicial role as guaranteeing and enforcing rights as they evolve, you will be inclined towards the living tree. Of course, this is to some degree all a false dichotomy. There are many places available between either end of this spectrum and everyone ultimately ends up tending towards one of the more central positions. For example, it is difficult to find anyone who seriously doubts the correctness of Brown v. Board of Education, even though there’s at least an argument that certain touchstones informing the meaning of equal protection in the United States’ 14th amendment contemplated segregation. On the other hand, no matter how alive one’s tree might be, respect for a system of precedent is necessary if you are going to continue to call what you are doing “law” as opposed to policymaking by an unaccountable institution that has only faint markings of democratic accountability.

So why does constitutional law ruin everything? As I see it, this unresolvable dilemma in constitutional law has a tendency to bring its enormous baggage to other areas, and leave it there. But it’s not clear that these oversized duffles filled with decades of counter-majoritarian sentiment are really going to assist what I would consider to be the very different exercise of statutory interpretation (I’m well aware of the argument that the constitution is just an uber-statute and should be interpreted accordingly, but that’s really just an argument for touchstone constitutionalism so I will conveniently ignore it). Why? Because unlike in constitutional interpretation, we have broad consensus on what the exercise of statutory interpretation entails: it entails trying to determine what the legislature intended by the text that it enacted. And although this exercise can be difficult at times, and reasonable (and unreasonable) people can often disagree, they are disagreeing on the outcome and not the process. No one truly suggests that the courts should play Jenga when interpreting statutes; they are always the seeker in a game of hide-and-seek, using well-understood tools and rules. Of late, we have been describing those as “text, context and purpose” but long before that catch phrase existed, we had the lawyer’s toolbox of logic, common sense, experience, and approximately 400 years of common-law jurisprudence on canons of statutory construction (well-defended by Leonid in his recent post). It’s true that these rules are convoluted and it’s not always straightforward to apply them. Some judges and courts give more weight to (for example) the purpose of a statute and the presumption against absurdity, while others might be more interested in the intricacies of grammatical structure. But these are matters of emphasis, and the degree of variation is relatively modest. In fact, there is a pretty strong consensus, at least among Canadian courts, about how the exercise of statutory interpretation ought to be conducted, and, in the main, it is done with amazing regularity.

OK so we have covered the constitution (where there is no agreement on the game, much less the rules) and statutes (where everyone is singing from the same hymnbook). What remains is common law, and it is probably the strangest of all these creatures because it is, by necessity, hide-and-seek, but what you are looking for is Jenga blocks. There is, of course, an important touchstone courts and judges look to: precedent. But if you stretch far enough back, the touchstone itself has no touchstone other than “what judges think is best.” In many ways, it’s “law from the cave” but the cave is extremely old, dark, and you probably can’t see the exit, so you are stuck inside unless or until the legislature “rescues” you and replaces the common law rules. This leads to a fascinating problem: because it’s based on precedent, common law derives its authority from consistency. But because it’s judge-made, judges feel relatively free to remake it in appropriate circumstances. In many ways, it’s the worst of both theoretical worlds: it is bound by (some may say stuck in) the past and also readily changeable by judges. But somehow it works anyway, and with far fewer lamentations from the theorists who worry about either of these things (excluding, of course, administrative law, which by unwritten constitutional principle must be comprehensively re-written every ten years to keep a group of frustrated practitioners on their toes).

So in short, I endorse Mark’s sentiment that we need neutral principles in adjudication. But I disagree that they are in short supply. We have neutral principles in statutory interpretation, and they work as well as any system that is administered by a few hundred people across the country possibly could. We have essentially one neutral governing principle in common law analysis, which is “mostly follow precedent.” So what we are really talking about is constitutional law, where the debate between the touchstone cops and the living tree arborists is essentially unresolvable because when you scrape to the bottom it asks “what do you value in a legal system” and it’s no surprise that there isn’t universal agreement on this. But there is a strong consensus on how to engage in interpretation outside the constitutional context, and we should not let the constitutional disagreements obscure that.

In other words, constitutional law ruins everything. But I told you that at the beginning.

Textualism for Hedgehogs

Why substantive canons belong in textualist interpretation, and what this tells us about neutral interpretive principles

I hope that you have read co-blogger Mark Mancini’s post on “Neutrality in Legal Interpretation“. In a nutshell, Mark argues for the application of politically neutral principles to the interpretation of legal texts, and against the fashionable view that it is inevitable, or indeed desirable, that interpreters will seek to fashion texts into instruments for the advancement of their preferred policy outcomes. It is a superb essay, and I agree with almost everything Mark says there.

Almost. In this post, I would like to explore one point of disagreement I have with Mark. Although it concerns a minor issue and does not detract from Mark’s overall argument at all, I think it helps us clarify our thinking both about legal interpretation and also about the meaning and purpose of legal neutrality. This point of disagreement concerns, of all things, “substantive canons of construction”.


Mark argues that textualism is a set of morally-neutral interpretive techniques that allow an interpreter to (my words, but Mark’s meaning, I think) serve as a faithful agent of the body enacting the legal text. (Mark focuses on statutes, but the same considerations apply to constitutional texts.) Other approaches allow or even require the interpreter to impose a certain set of substantive commitments, which may or may not be shared by the authors of the legal texts, on them. Textualism seeks to avoid doing so by asking the interpreter to focus on the text itself, relying on its letter and its spirit alone, rather than on any external commitments. In this context, Mark notes a possible (and indeed common) objection:

[O]ne might say that textualism and its family of tools are not themselves neutral. For example, some of the substantive canons of construction might be said to be imbued with presuppositions about the ways laws must be interpreted. For example, there is the rule that statutes altering the common law require a clear statement in order to do so.  This is not a value-neutral tool, it could be said, because it makes it difficult for statutes to override what one might call a generally “conservative” common law. 

Mark appears to grant this objection to the use of substantive canons in statutory interpretation, while denying that it undermines his broader argument:

I do see the merit of this argument, which is why I (and some other textualists) may wish to assign a lesser role to substantive canons. Indeed, since I believe in legislative sovereignty, the legislature should be able to change the common law without a clear statement. 

But then Mark walks back the concession to some extent, writing that “these canons could be justified on other grounds”, for example “as a matter of precedent, or as a matter of ‘stabilizing’ the law.”

By my lights, Mark’s initial concession is a mistake, and the walk-back too half-hearted. Substantive interpretive canons ― interpretive presumptions such as those requiring clear statements for statutes to derogate from common law or statutory rights, to change the law retroactively or to create exorbitant powers (for example Henry VIII clauses), or calling for narrow constructions of penal or taxing statutes ― deserve a more robust defence, which I will offer here. Most of them are not only “justified on other grounds” but are actually closely connected to the reasons for endorsing textualism and neutral interpretation more broadly.

These reasons include the separation of powers and democracy, which, taken together, mean that law should be changed in consequence of the choices of democratically elected legislatures and of such other actors to whom legislatures have properly delegated their law-making powers (assuming that such delegation can ever be proper). But they also include the Rule of Law, notably the idea that the law ought to be sufficiently public and certain to guide the subject. Textualism gives effect to the separation of powers and democracy by asking judges to give effect to legislatures’ choices and warning them not to override these choices by applying their own subjective preferences or substantive values not endorsed by the legislature. It also gives effect to the Rule of Law by ensuring that subjects, or at least their legal advisors, have access to the same information that will be used by those who interpret and apply the law. They can thus anticipate the law’s application better than if it can be given a meaning based on unenacted values available only to judges or administrators at the point of application.

Consider now how substantive canons serve the same ends. Their contribution to upholding the Rule of Law values of notice and guidance is perhaps most obvious. When courts refuse to read unclear or ambiguous statutes as imposing criminal or tax liability, they are ensuring that people are warned before their liberty and property are put in jeopardy, and can guide themselves accordingly. Similarly, when courts apply the principle of legality, which requires clear statutory language to override or oust established common law rights, be they the right to access court (as in Justice Cromwell’s concurring opinion in Trial Lawyers Association of British Columbia v British Columbia (Attorney General), 2014 SCC 59, [2014] 3 SCR 31) or property rights (as in Wells v Newfoundland, [1999] 3 SCR 199), they ensure that people are given warning before these rights are abrogated. Justice Major, writing for the unanimous court in Wells, explained this:

In a nation governed by the rule of law, we assume that the government will honour its obligations unless it explicitly exercises its power not to.  In the absence of a clear express intent to abrogate rights and obligations – rights of the highest importance to the individual – those rights remain in force.  To argue the opposite is to say that the government is bound only by its whim, not its word.   In Canada this is unacceptable, and does not accord with the nation’s understanding of the relationship between the state and its citizens. [46]

The argument about the relationship between textualism and separation of powers and democracy is perhaps somewhat less straightforward. But I think it’s not unfair to say that the obverse of insisting that it is the prerogative of legislatures, as the bodies representing the electorate, to have the law reflect their choices is that the law should reflect their choices. Textualism does this by emphasizing the primacy of text, which the legislature actually enacted, as the object of interpretation. Substantive canons are nothing more than an insistence that certain choices clearly appear to have been made in the text. Mark writes that “legislative sovereignty” means that “the legislature should be able to change the common law without a clear statement”, but I’m not sure that legislative supremacy requires deference to sotto voce or accidental legal change.

On the contrary, I think that for an interpreter to insist that the legislature spell out the consequences of its enactments rather than let them be inferred promotes legislative authority by requiring the democratic sovereign to squarely address the issues instead of leaving them to be worked out by unelected officials and judges. At the same time, however, it also promotes the more “negative” aspect of the separation of powers by freeing judges from becoming the legislatures’ accomplices in abuse of power. Subject to constitutional constraints, it is wrong for the courts not to give effect to legislation, but they are not, I think, under a duty to add to legislated iniquity if the legislature itself has not dared require it.

To be sure, it is possible for judges to misapply substantive interpretive canons so as to make them into instruments for refashioning legislation in accordance with their own preferences and values. Judges can be skillful practitioners of Nelsonian blindness and refuse to see in a statute that which is clearly there ― just as, on other occasions, they can see there that which is not. But I do not think that this necessarily makes substantive canons anathema to textualism. As then-Judge Amy Barrett has explained in a lecture devoted largely to a defence of textualism (which I summarized here), textualist adjudication is not mechanical. It requires judgment. A sparing ― judicious ― application of substantive canons calls for good judgment, but in this it is no different from other aspects of textualist interpretation or judicial decision-making more generally.

All that having been said, the impulse to disclaim and renounce the use of interpretive techniques that seem to bias adjudication in favour of particular outcomes is understandable as part of a broader appeal for neutrality. But here, I think, an appeal to precedent is relevant. Judges applying established substantive canons (or any other established interpretative techniques) are not introducing their own values into the law. They are not ― again, assuming they are not abusing their power ― wielding discretionary authority to bring the law into alignment with their policy preferences. They are not springing a surprise on the legislature (or the litigants). They are following established conventions for reading legal texts, which legislatures (or at least the people drafting bills for them) can and ought to know.

Now, perhaps there is a further point of subtle disagreement between Mark and me here. Mark writes that “while the making of law may be a political activity, that does not mean that the rules we use for interpretation should be”. I think this is a little imprecise. Like other legal rules, the established conventions of interpretation are not, themselves, value-free; I don’t think they could be. The conventions of textualism promote democratic authority, the separation of powers, and the Rule of Law. These are political values, in a broad sense, and I think that a defence of textualism should proceed on the basis that these are good values, not that textualism has nothing to do with them. What should indeed be apolitical, to the extent possible for human beings, is the application of interpretive rules, not their content. However, an interpretive rule whose content is such as to make apolitical application impossible is, of course, flawed from this standpoint.


What we should be looking for, then, are interpretive rules that can be applied impartially ― not mechanically, to be sure, but without the interpreter drawing on his or her subjective values, preferences, and beliefs about good policy. At least some forms of purposivism, as well as living constitutionalism and its analogues in statutory interpretation, fail this test. Textualism, as Mark argues, is a more promising approach. But at the same time ― and not coincidentally ― textualism promotes important constitutional values: the Rule of Law, democracy, and the separation of powers.

Substantive interpretive canons, I have argued, promote the same values, and thus have a place in textualist interpretation. Indeed, I would go so far as to say that substantive canons are pre-eminently textualist interpretive tools, rather than those of some other interpretive approach. Like other kinds of interpretive canons, to which Mark refers, they are rules about reading texts ― albeit more than the other kinds, perhaps, they are rules for reading legal and, even more specifically, legislative texts. Their use has little to do with legislative purpose, for example, and they may sit uneasily with a pragmatist or evolutionist approach to interpretation. They are not attempts to divine a legislature’s intentions hidden between textual lines, but rather rules about the legal meaning of enacted texts. Textualists should embrace substantive canons, not just as a grudging concession to precedent, but as a set of tools to wield with discernment, but also with confidence.

Neutrality in Legal Interpretation

Nowadays, it is unfashionable to say that legal rules, particularly rules of interpretation, should be “neutral.” Quite the opposite: now it is more fashionable to say that results in cases depend on the “politics” of a court on a particular day. Against this modern trend, not so long ago, it was Herbert Wechsler in his famous article “Towards Neutral Principles of Constitutional Law” who first advanced the idea of neutral principles. He wrote that, because courts must not act as a “naked power organ,” they must be “entirely principled” (Wechsler, at 19). They are principled when they rest their decisions “on reasons with respect to all the issues in the cases, reasons that in their generality and their neutrality transcend any immediate result that is involved” (Wechsler, at 19). The goal of these so-called “neutral principles” was to avoid “ad hoc evaluation” which Wechsler called “the deepest problem of our constitutionalism” (Wechsler, at 12). While Wechsler did not put it this way, I think textualism—particularly in statute law—is the closest thing to neutrality we have, and should be defended as such.

Wechsler’s idea of neutral principles, and textualism itself, are subject to much controversy. But, in my view, it is without a doubt that a deep problem in Canadian law remains “ad hoc evaluation,” otherwise known as “results-oriented reasoning.” Some judges are starting to recognize this. In constitutional law, Justices Brown and Rowe in the recent s.15 Fraser case noted that “substantive equality”—while a laudable doctrinal goal—has been ill-defined in the cases, and “has become an open-ended and undisciplined rhetorical device by which courts may privilege, without making explicit, their own policy preferences” (Fraser, at para 146). The same potential problem attends statutory interpretation, where results-oriented reasoning is possible (Entertainment Software Association, at para 76), and administrative law, where Vavilov was concerned with providing a rules-based framework for the application of deference. All of this is positive, because it provides a guide for judges in applying rules, ensuring that the reasoning process is transparent, bounded, and fair to the parties.

But, in many ways, neutrality as a principle in our law is under attack. A common adage has become “law=politics,” and this broad, simple statement has elided the nuances that must apply when we speak of interpretation. This is true on both sides of the “political aisle” (a reference I make not out of any desire to do so, but out of necessity). Some who believe in notions of living constitutionalism or unbounded purposivism would tie the meaning of law to whatever a particular political community thinks in the current day, ostensibly because the current day is more enlightened than days past. In some ways this might be true as a factual matter (putting aside questions of legitimacy). But, as we are learning in real time, we have no guarantee that the present will be any more enlightened than the past. Still others now advance a novel idea of “common good constitutionalism,” under which the meaning of constitutional text—whatever it is—must align with a “robust, substantively conservative approach to constitutional law and interpretation.” The goal is a “substantive moral constitutionalism…not enslaved to the original meaning of the Constitution.” These views have something in common: they purport to view the interpretation of law as a means to an end, reading into legal texts contentious political values that may or may not be actually reflected in the laws themselves.

The attacks on neutrality from these camps—camps that span the spectrum—follow a familiar path, at least implicitly. They reason from an end. In other words, the argument assumes that some end is coextensive with moral justice, whatever that is; it assumes that the end is a good thing; and it then says that the law should encompass that end because it is good.

Legal interpretation should not work this way. Laws, whether statutes or Constitutions, embody certain value choices and purposes. They have an internal meaning, quite apart from what other people want a particular law to mean. In this way, it is true that law is a purposive activity, in that law does pursue some end. But, as is well known, law is not co-extensive with justice, nor is it helpful to the interpretation of laws to say they pursue the “common good” or some other bromide. Even if one could settle on a stable definition of such terms (a tall task indeed) capable of guiding legal interpretation, it isn’t clear that all of the goals associated with some external philosophy are co-extensive with the law as adopted. Laws do pursue purposes, but they do not do so at all costs—they often pursue limited or specific goals that are evident only when one reads the text (see the debate in West Fraser between the opinions of McLachlin CJC and Côté J on this point). This is why purpose is usually best sourced in text, not in some external philosophy.

If we accept that law is indeed a purposive endeavour, and that the words used by legislatures and drafters are the means by which purposes are enacted, then textualism is a defensible way of discovering those purposes. Textualism is simply the idea that we must read text to discover all that it fairly encompasses. Textualism is really a family of tools that we can use to discover that meaning. There are the linguistic canons—ejusdem generis, and the like—that are generally based on the way humans tend to speak in ordinary terms. There are contextual canons, such as the rule that statutes must be interpreted holistically. There are substantive canons of construction (which I will get to later). And there are other tools, like purpose, which can guide textualist interpretation so long as it is sourced properly. Unlike other theories of “interpretation,” these tools are designed to find the meaning of the law from within, rather than imposing some meaning on it from without.

I can think of at least three (and probably more) objections to the point I am making here. First, one might say that textualism and its family of tools are not themselves neutral. Some of the substantive canons of construction, for example, might be said to be imbued with presuppositions about the way laws must be interpreted. Consider the rule that statutes altering the common law require a clear statement to do so. This is not a value-neutral tool, it could be said, because it makes it difficult for statutes to override what one might call a generally “conservative” common law. I do see the merit of this argument, which is why I (and some other textualists) may wish to assign a lesser role to substantive canons. Indeed, since I believe in legislative sovereignty, the legislature should be able to change the common law without a clear statement. Of course, these canons could be justified on other grounds that I do not have space to explore here. For example, they could be justified as a matter of precedent, or as a way of “stabilizing” the law.

Second, one might trot out the familiar canard that textualism as a general matter leads to “conservative” outcomes. To put this argument in its most favourable light, one might say that textualism leads to cramped interpretations of statutes, robbing them of the majestic generalities that could serve to achieve certain political aims. It’s worth noting two responses to this position. First, the “cramped interpretation” argument tends to conflate strict constructionism and textualism. Indeed, textualism may sometimes lead to “broad” interpretation of statutes if text and purpose, working synthetically, lead to that conclusion. A great recent example is the Bostock decision from the United States Supreme Court, which I wrote about here. There, textualism led to a result that was actually more protective of certain rights. Second, the use of political labels to describe legal doctrines is a pernicious trend that must come to an end. Even if these labels were actually stable in meaning, and not themselves tools of cultural warfare, it is unfair to assume that any one legal theory always produces a particular kind of result. I understand the contemporary urge to box everything into neat categories. But sometimes, law can mean many different things. And the tools used to interpret those laws should, as much as possible, remain apart from the political aims those laws wish to pursue.

Third, it might be said that true neutrality is not of this world. That is, it could be argued that a truly Solomonic law is impossible, and that, no matter what, the act of interpretation is a fundamentally human activity that will be imbued with human biases. I accept this point. Because judges are humans, no system of rules will ever fully remove the human aspect of judging, nor should it. The best we can do is design a system of rules, mindful of the tradeoffs, that limits the pernicious forms of bias and political reasoning that could infect the law. We won’t always get it right, but that is no reason to take the nihilistic view that the entire enterprise of law as something separate from politics is not worth pursuing.

Finally, one might argue that law is inextricably political. It is cooked up in legislatures made up of thoroughly political individuals, with agendas. It is enforced by people who have biases of their own. I also accept this point. But this argument, to me, runs up against two major problems that limit its force. First, while the making of law may be a political activity, that does not mean that the rules we use for interpretation should be. Not at all. In fact, one might say that the rules of interpretation should be used to discover the meaning of the law, whatever political result it encompasses. Second, there is a major is/ought problem here. Just because the making of law is political does not mean we should not be concerned with a system of rules designed to limit biases that might infect the judging process. All people, regardless of ideology, should find this goal laudable.

I close with this. I understand that we live in sclerotic times in which there are passionate political views on many sides. There is a natural tendency to import those views into law. We lose something when this happens. While perhaps not a sufficient condition for legitimacy, it is central to the Rule of Law that laws be promulgated and interpreted in a fair way. Generality, as Wechsler notes, is one guarantee of fairness. If we give up on generality and neutrality in interpretation, then we must admit that judges are simply political actors, agents of politicians, with no need for independence. It is self-evident that this is undesirable.

Linguistic Nihilism

One common line of attack against textualism—the idea that “the words of a governing text are of paramount concern, and what they convey, in their context, is what the text means” (Scalia & Garner, at 56)—is that language is never clear or, put differently, is hopelessly vague or ambiguous. On this view, the task of interpretation based on text is a fool’s game. Inevitably, so the argument goes, courts will need to resort to extraneous purposes, “values,” social science evidence, pre- or post-enactment legislative history, or consequential analysis to impose meaning on text that cannot otherwise be interpreted.

I cannot agree with this argument. For one, the extraneous sources marshalled by anti-textualists bristle with probative problems, and so are not reliable indicators of legislative meaning themselves. More importantly, an “anything goes” approach to interpretation offers no guidance to judges who must, in tough cases, actually interpret the law in a predictable way. In this post, I will explore these arguments. My point is that the sort of linguistic nihilism that characterizes anti-textualist arguments is not conclusive, but merely invites further debate about the relative roles of text and other interpretive tools.

**

Putting aside frivolous arguments one often hears about textualism (i.e., “it supports a conservative agenda” or “it is the plain meaning approach”), one clear criticism of textualism is that interpretation is not self-executing. Jorge Gracia, for example, writes:

…texts are always given in a certain language that obeys rules and whose signs denote and connote more or less established meanings. In addition, the audience cannot help but bring to the text its own cultural, psychological, and conceptual context. Indeed, the understanding of the meaning of a text can be carried out only by bringing something to the text that is not already there…

Gracia, A Theory of Textuality: The Logic and Epistemology, at 28

Sullivan calls this situation the “pervasive indeterminacy of language” (see here, at 206). On this view, as Sullivan notes, it is impossible for judges to interpret a text solely within its linguistic context:

It is not possible for judges who interpret a provision of the Criminal Code or the Income Tax Act to wipe out the beliefs, values and expectations that they bring to their reading. They cannot erase their knowledge of law or the subject of legislation. They cannot cast aside legal culture, with its respect for common law and evolving constitutional values…Like any other readers, if they want to make sense of a text, judges must rely on the context that they themselves bring to the text (see 208).

This form of linguistic nihilism is highly attractive. The argument goes: if texts cannot be interpreted on their own, judges should and must bring their own personal biases and values to the text, whether as a desirable or simply an inevitable consequence of unclear text. And if that is the case, we should adopt another type of interpretive approach—perhaps one that centres what a judge in a particular case thinks the equities ought to be.

**

Attractive as this argument may be, I find it hard to accept. First, the tools that are supposed to resolve these ambiguities or vagueness are themselves ambiguous and vague; it is therefore hard to hold them up as paragons of clarity against text said to be hopelessly unclear.

Let’s consider, first, the tools often advanced by non-textualists that are supposed to bring clarity to the interpretive exercise. Purpose is one such tool. In Canadian statutory interpretation, purpose and context must be sourced in every case, even when the text is admittedly clear at first blush (ATCO, at para 48). Text, context, and purpose must then be read together harmoniously (Canada Trustco, at para 47). But sometimes, purpose is offered by anti-textualists as an “out” from ambiguity or vagueness in the text itself. The problem is that sourcing purpose is not self-executing either. Purpose can be stated at various levels of abstraction (see here, and in general, Hillier). In other words, purpose can be the most abstract purpose of the statute possible (say, to achieve justice, as Max Radin once said), or it can be found in the minute details of particular provisions. There can be many purposes in a statute, stated in opposing terms (see Rafilovich for an example of this). Choosing among purposes in these cases can be just as difficult as figuring out what words mean. This is especially so because the Supreme Court has never really provided guidance on the interaction between text and purpose, instead simply stating that these things must be read “harmoniously.” What this means in particular cases is unclear. This is why it is best to source purpose with reference to the text itself (see here).

Legislative history also presents well-known problems. One might advance the case that a Minister, when introducing a bill, speaks to it and gives his view of the bill’s purpose; but other legislators may say something quite different. In some cases, legislative history can be probative. But in many cases, legislative history is not useful at all. For one, and this is true in both Canada and the US, we are bound by laws, not by the intentions of draftspeople. What a Minister thinks is enacted in text does not necessarily equate to what is actually enacted (see my post here on the US case of Bostock). There may be many reasons why bills were drafted the way they were in particular cases, but it is not plausible to treat legislative history (which can be manipulated) as some cure-all for textual ambiguity or vagueness.

Finally, one might say that it is inevitable and desirable for judges to bring their own personal values and experiences to judging and interpreting statutes. This is a common refrain these days. To some extent, I agree with those who say that such value-based judging is inevitable. Judges are human beings, not robots. We cannot expect them to put aside all implicit value judgments in all cases. But one of the purposes of law, and of the rules of interpretation, is to ensure that decisions are reasoned according to a uniform set of rules applicable across the mass of cases. We have to limit idiosyncratic reasoning to the extent we can. If we give up on defining such rules with clarity—in order to liberate judges and their own personal views—we no longer have a system of interpretation defined by law. Rather, we have a system of consequences, where judges reach the results they like based on the cases in front of them. This might sound like a nice idea to some, but in the long run, it is an unpredictable way to solve legal disputes.

**

If all of the tools of interpretation, including text, are imperfect, what is an interpreter to do? One classic answer to this problem is what I call the “anything goes” approach. Sullivan seems to say that this is what the Supreme Court actually does in its statutory interpretation cases (see here, at 183-184). While I question this orthodox view in light of certain cases, I take Sullivan’s description to be indicative of a normative argument. If the Supreme Court cannot settle on one theory of interpretation, perhaps it is best to settle on multiple theories. Maybe, in some cases, legislative history is extremely probative, and it takes precedence over text. Maybe, in some cases, purpose carries more weight than text. This is a sort of pragmatic approach that allows judges to use the tools of interpretation in response to the facts of particular cases.

This is attractive because it does not put blinders on the interpreter. It also introduces “nuance” and “context” to the interpretation exercise. All of this sounds good. But in reality, I am not sure that the “anything goes” approach, where judges assign weight to various tools in various cases, is all that helpful. I will put aside the normative objections—for example, the idea that text is adopted by the legislature or its delegates and legislative history is not—and instead focus on the pragmatic problems. Good judicial decisions depend on good judicial reasoning. Good judicial reasoning is more likely to occur if it depends less on a particular judge’s writing prowess and more on sourcing that reasoning in precedential and well-practiced rules. But there is no external, universal rule to guide the particular weights that judges should assign to the various tools of interpretation, or even to identify the factors that should guide the assignment of those weights. At the same time, some might argue that rules that are too stringent will stymie the human aspect of judging.

In my view, an answer to this was provided by Justice Stratas in a recent paper co-authored with his clerk, David Williams. The piece offers an interesting and well-reasoned way of ordering the tools of interpretation. For Stratas & Williams, there are certain “green light,” “yellow light,” and “red light” tools in statutory interpretation. Green light tools include text and context, as well as purpose when it is sourced in text. Yellow light tools are ones that must be used with caution—for example, legislative history and social science evidence. Red light tools are ones that should never be used—for example, personal policy preferences.

I think this is a sound way of viewing the statutory interpretation problem. The text is naturally the starting point, since text is what is adopted by the legislature or its delegates, and is often the best evidence of what the legislature meant. Context is necessary as a pragmatic tool to understand text. Purpose can be probative as well, if sourced in text.

Sometimes, as I mentioned above, legislative history can be helpful. But it must be used with caution. The same goes for social science evidence, which might be helpful if it illustrates the consequences of different interpretations, and roots those consequences back in internal statutory tools like text or purpose. But again, social science evidence cannot be used to contradict clear text.

Finally, I cannot imagine a world in which a judge’s personal views on what legislation should mean should be at all probative. Hence, it is a red light tool.

In this framework, judges are not asked to assign weights, on a case-by-case basis, to whichever tools they think most helpful. Instead, the tools are ranked according to their probative value. This setup has the benefit of rigidity, in that it assigns objective weight to the factors before interpretation begins. At the same time, it keeps the door open to using various tools that could deal with textual ambiguity or vagueness.

The point is that textualism cannot be said to be implausible simply because it takes some work to squeeze meaning out of text. The alternatives are not any better. If we can place text at the top of a hierarchy of other tools, that may be a solid way forward.

“Purposive” Does Not Equal “Generous”: The Interpretation Act

It is often said in Canada that statutes must be interpreted “purposively” and “generously.” Many cite the federal Interpretation Act’s s.12, which apparently mandates this marriage between purposive and generous interpretation:

12 Every enactment is deemed remedial, and shall be given such fair, large and liberal construction and interpretation as best ensures the attainment of its objects.

The Supreme Court has also accepted this general principle in the context of the judge-made rule that benefits-conferring legislation should be interpreted liberally (see Rizzo, and more recently, Michel v Graydon).

Putting aside the judge-made rule itself, which raises similar but somewhat separate questions, I write today to make a simple point: this injunction in the Interpretation Act cannot be read so as to render purposive interpretation the same as a “generous” interpretation. Doing so would conflict with the Supreme Court’s statutory interpretation jurisprudence, which promotes an authentic determination of purpose according to the legislative language under consideration (see my post on Rafilovich). Indeed, as is clear in the constitutional context, purposive interpretation will often lead to the narrowing of a right, rather than a generous interpretation of that right (see, for a recent example, R v Poulin). Similarly, a purposive interpretation in statute law will often narrow the meaning of a particular statutory provision to its purposes. Those purposes will best be reflected in text (see Sullivan, at 193; see also here). For that reason, the Interpretation Act can only mandate a simple canon of interpretation: “The words of a governing text are of paramount concern, and what they convey, in their context, is what the text means” (Scalia & Garner, at 56). Words should be interpreted fairly, but only insofar as the purpose reflected in the text dictates.

One cannot read the Interpretation Act to mandate a generous interpretation over a purposive one. The text of the provision in question says that a “fair, large and liberal construction” must be rendered in a way that “best ensures the attainment of the [enactment’s] objects.” This means that purpose is the anchor: a “generous” interpretation operates only within those purposes. Put differently, we should read words to mean all that they can fairly mean, but we cannot use some injunction of “generosity” to supplant the words or the purposes they reflect.

Prioritizing “generosity” over the natural reading of text in its context would lead to all sorts of practical problems. For one, it is difficult to determine what a “generous” interpretation of a statute would mean in practical terms (see Scalia & Garner, at 365). Does it simply mean that “[a]ny doubt arising from difficulties of language should be resolved in favour of the claimant”? (see Rizzo, at para 36). This could be defensible. But the risk is that the language of “generosity” could invite judges to expand the scope of language and purpose to favour the policy outcomes or parties they prefer.

For this reason, we should be careful with this language. More importantly, if “generosity” means that the legitimately-sourced purpose of legislation can be abrogated, the language is quite inconsistent with the Supreme Court’s actual approach to interpretation in recent cases (see Telus v Wellman and Rafilovich).

Rather, the relevant section of the Interpretation Act must be read to conform with the Supreme Court’s governing approach to statutory interpretation. In this sense, the “fair, large and liberal” interpretive approach mandated by the Interpretation Act might be explained by contrasting it with an old form of interpretation that virtually no one adopts now: strict constructionism. Strict constructionism, most commonly captured in the adage that “statutes in derogation of the common law were to be strictly construed” (Scalia & Garner, at 365), was unjustified because it violated the “fair meaning rule”: the text, in its context, must be interpreted fairly. No one today—not even textualists—is a strict constructionist, because everyone accepts that text must be interpreted fairly. If the Interpretation Act is a response to strict constructionism, its language can perhaps be forgiven. But it should be taken no further than the fair-meaning rule, which rests on identifying relevant purposes in text and using those purposes to guide textual interpretation.

An example of a party attempting to use the Interpretation Act in a manner I consider impermissible occurred in Hillier. There, Ms. Hillier relied on the Interpretation Act and the general canon of interpretation that benefits-conferring legislation is to be liberally interpreted. Putting aside that canon (dealt with in Hillier, at para 38), the Interpretation Act was marshalled by Ms. Hillier to suggest that the court should rule in her favour. Stratas JA rejected this erroneous reliance on the Interpretation Act, concluding (at para 39):

[39]  To similar effect is the interpretive rule in section 12 of the Interpretation Act. It provides that “[e]very enactment is deemed remedial, and shall be given such fair, large and liberal construction and interpretation as best ensures the attainment of its objects.” Section 12 is not a licence for courts and administrative decision-makers to substitute a broad legislative purpose for one that is genuinely narrow or to construe legislative words strictly for strictness’ sake—in either case, to bend the legislation away from its authentic meaning. Section 12 instructs courts and administrative decision-makers to interpret provisions to fulfil the purposes they serve, broad or narrow, no more, no less.

This is an accurate description of the function of the Interpretation Act, one which, so far as I can discern, accords with the Supreme Court’s statutory interpretation jurisprudence. Purpose—usually sourced in text—guides textual interpretation. Purpose and text should be read synthetically together to render a fair meaning of the language at hand. But broad notions of “generosity” or “fairness” should not be used to supplant the authentic purpose(s) of legislation, derived from the text. And “generosity” is not an end-run around the language the legislature actually uses.