One common line of attack against textualism—the idea that “the words of a governing text are of paramount concern, and what they convey, in their context, is what the text means” (Scalia & Garner, at 56)—is that language is never clear; put differently, that it is hopelessly vague or ambiguous. On this view, the task of interpretation based on text is a fool’s game. Inevitably, so the argument goes, courts will need to resort to extraneous purposes, “values,” social science evidence, pre- or post-enactment legislative history, or consequential analysis to impose meaning on text that cannot be interpreted on its own.
I cannot agree with this argument. For one, the extraneous sources marshalled by anti-textualists bristle with probative problems, and so are not reliable indicators of legislative meaning themselves. More importantly, an “anything goes” approach to interpretation offers no guidance to judges who must, in tough cases, actually interpret the law in a predictable way. In this post, I will explore these arguments. My point is that the sort of linguistic nihilism that characterizes anti-textualist arguments is not conclusive, but merely invites further debate about the relative roles of text and other interpretive tools.
Putting aside frivolous arguments one often hears about textualism (i.e., “it supports a conservative agenda” or “it is the plain meaning approach”), one clear criticism of textualism is that interpretation is not self-executing. Jorge Gracia, for example, writes:
Sullivan calls this situation the “pervasive indeterminacy of language” (see here, at 206). On this view, as Sullivan notes, it is impossible to interpret text in its linguistic context alone:
This form of linguistic nihilism is superficially attractive. If texts cannot be interpreted on their own, so the argument goes, judges can and will bring their own personal biases and values to the text, as either a desirable or an inevitable response to unclear language. And if that is the case, we should adopt another type of interpretive method—perhaps one that centres what a judge in a particular case thinks the equities ought to be.
Attractive as this argument may be, I find it hard to accept. First, the tools that are supposed to resolve these ambiguities or vagueness are themselves ambiguous and vague; it is hard to hold them up as paragons of clarity against supposedly hopeless text.
Let’s consider, first, the tools often advanced by non-textualists that are supposed to bring clarity to the interpretive exercise. Purpose is one such tool. In Canadian statutory interpretation, purpose and context must be sourced in every case, even when the text is admittedly clear at first blush (ATCO, at para 48). Text, context, and purpose must then be read together harmoniously (Canada Trustco, at para 47). But sometimes, purpose is offered by anti-textualists as an “out” from ambiguity or vagueness in the text itself. The problem is that sourcing purpose is not self-executing either. Purpose can be stated at various levels of abstraction (see here, and in general, Hillier). In other words, purpose can be the most abstract purpose of the statute possible (say, to achieve justice, as Max Radin once said); or it could be the minute details of particular provisions. There can be many purposes in a statute, stated in opposing terms (see Rafilovich for an example of this). Choosing among purposes in these cases can be just as difficult as figuring out what words mean. This is especially so because the Supreme Court has never really provided guidance on the interaction between text and purpose, instead simply stating that these things must be read “harmoniously.” What this means in particular cases is unclear. This is why it is best to source purpose with reference to the text itself (see here).
Legislative history also presents well-known problems. One might point out that a Minister, when introducing a bill, speaks to the bill and gives her view of its purpose; other legislators may say something quite different. In some cases, legislative history can be probative. But in many cases, it is not useful at all. For one, and this is true in both Canada and the US, we are bound by laws, not by the intentions of draftspeople. What a Minister thinks is enacted in text does not necessarily equate to what is actually enacted (see my post here on the US case of Bostock). There may be many reasons why bills were drafted the way they were in particular cases, but it is not sound to treat legislative history (which can be manipulated) as a cure-all for textual ambiguity or vagueness.
Finally, one might say that it is inevitable and desirable for judges to bring their own personal values and experiences to judging and interpreting statutes. This is a common refrain these days. To some extent, I agree with those who say that such value-based judging is inevitable. Judges are human beings, not robots. We cannot expect them to put aside all implicit value judgments in all cases. But one of the purposes of law, and of the rules of interpretation, is to ensure that decisions are reasoned according to a uniform set of rules applicable across the mass of cases. We have to limit idiosyncratic reasoning to the extent we can. If we give up on defining such rules with clarity—in order to liberate judges and their own personal views—we no longer have a system of interpretation defined by law. Rather, we have a system of consequences, where judges reach the results they like based on the cases in front of them. This might sound like a nice idea to some, but in the long run, it is an unpredictable way to solve legal disputes.
If all of the tools of interpretation, including text, are imperfect, what is an interpreter to do? One classic answer to this problem is what I call the “anything goes” approach. Sullivan seems to say that this is what the Supreme Court actually does in its statutory interpretation cases (see here, at 183-184). While I question this orthodox view in light of certain cases, I take Sullivan’s description to be indicative of a normative argument. If the Supreme Court cannot settle on one theory of interpretation, perhaps it is best to settle on multiple theories. Maybe, in some cases, legislative history is extremely probative, and it takes precedence over text. Maybe, in some cases, purpose carries more weight than text. This is a sort of pragmatic approach that allows judges to use the tools of interpretation in response to the facts of particular cases.
This is attractive because it does not put blinders on the interpreter. It also introduces “nuance” and “context” to the interpretive exercise. All of this sounds good. But in reality, I am not sure that the “anything goes” approach, where judges assign weight to various tools in various cases, is all that helpful. I will put aside the normative objections—for example, the idea that text is adopted by the legislature or its delegates and legislative history is not—and instead focus on the pragmatic problems. Good judicial decisions depend on good judicial reasoning. Good judicial reasoning is more likely to occur if it depends less on a particular judge’s writing prowess and more on sourcing that reasoning in precedential and well-practiced rules. But there is no external, universal rule to guide the particular weights that judges should assign to the various tools of interpretation, let alone the factors that should guide the assignment of those weights. At the same time, some might argue that rules that are too stringent will stymie the human aspect of judging.
In my view, an answer to this was provided by Justice Stratas in a recent paper co-authored with his clerk, David Williams. The piece offers an interesting and well-reasoned way of ordering the tools of interpretation. For Stratas & Williams, there are certain “green light,” “yellow light,” and “red light” tools in statutory interpretation. Green light tools include text and context, as well as purpose when it is sourced in text. Yellow light tools are ones that must be used with caution—for example, legislative history and social science evidence. Red light tools are ones that should never be used—for example, personal policy preferences.
I think this is a sound way of viewing the statutory interpretation problem. The text is naturally the starting point, since text is what is adopted by the legislature or its delegates, and is often the best evidence of what the legislature meant. Context is necessary as a pragmatic tool to understand text. Purpose can be probative as well, if sourced in text.
Sometimes, as I mentioned above, legislative history can be helpful. But it must be used with caution. The same goes with social science evidence, which might be helpful if it illustrates the consequences of different interpretations, and roots those consequences back to internal statutory tools like text or purpose. But again, social science evidence cannot be used to contradict clear text.
Finally, I cannot imagine a world in which a judge’s personal views on what legislation should mean would be at all probative. Hence, it is a red light tool.
In this framework, judges are not asked to assign, on a case-by-case basis, weights to whichever tools they think are most helpful. Instead, the tools are ranked according to their probative value. This setup has the benefit of rigidity, in that it assigns objective weight to the factors before interpretation begins. At the same time, it keeps the door open to using various tools that could deal with textual ambiguity or vagueness.
The point is that textualism cannot be said to be implausible simply because it takes some work to squeeze meaning out of text. The alternatives are not any better. If we can arrange text at the top of a hierarchy of interpretive tools, that may be a solid way forward.