“Clear Enough”

Some thoughts on statutory interpretation.

As I finish my graduate studies at Chicago, it strikes me that a major theme of legal design is the degree of perfection (if any) we should expect from legal rules. Drafted legal rules—whether written by the legislature or the judiciary—will always be over- and underbroad, because rules of general application cannot foresee every idiosyncratic individual application. How close we can come to a perfect rule therefore depends on how we balance the error rate of individual applications against the ease of administering a straightforward rule. We will never strike a perfect balance, but we can aim for something defensible and workable.

The same sort of consideration applies in the field of statutory interpretation. The most important issue in statutory interpretation is the clarity exercise—how clear is clear enough? Finding that a statutory text is clear on its face leads to a number of important consequences. For one, the Supreme Court has said that where text is “precise and unequivocal, the ordinary meaning of the words play a dominant role in the interpretive process” as opposed to purpose (Canada Trustco, at para 10). Additionally, Charter values may be used to gap-fill in statutory interpretation only where there is ambiguity in the ordinary textual meaning (Bell ExpressVu, at para 28). And, as Gib Van Ert points out, the Federal Court of Appeal seems to be adopting a similar rule in the context of international law.

Some may object at the outset to a consideration of “clarity” as a means of discerning legislative intent on a particular subject. This line of opposition is deeply rooted in legal realism, with its skepticism of judicial modes of reasoning and its rejection of abstract legal thought as a means of reaching clear answers on the law. Representative works in this regard include John Willis’ “Statutory Interpretation in a Nutshell,” where he argues that literal interpretation does little good with modern legislation, which uses wide language (often to delegate authority to others), essentially because that language is broad and unclear. He also notes that even where a text could be called clear or plain on its face, judges differ over what counts as “plain” (see 10 and 11). Additionally, Karl Llewellyn’s classic article on the “dueling canons of interpretation” casts doubt on the use of the canons of statutory interpretation to reach any clear meaning that is not inspired by motivated reasoning. Underlying each of these important critiques is a belief in the relativism and contingency of language. Clarity, on this account, is probably a fool’s errand, in part because ascribing an intent to the legislature is difficult with open-textured language, and in part because language itself is inherently unclear. If this is true, it will be the rare case indeed where a court should be convinced that a text is clear.

While this might sound good to a lawyer’s ear—especially a lawyer who is paid to exploit ambiguities—it does not comport with the way we use language in the majority of cases. And this is where the example of crafting legal rules comes in handy. One might wish to craft a legal rule to cover all of the interstitial, idiosyncratic applications—the ones that are weird or abnormal. But then we create a rule that might work well in the individual case, and not in the general run of cases. Instead, we should craft legal rules based on the 98% of cases, not the 2%: see Richard Epstein’s Simple Rules for a Complex World on this score. In the realm of statutory interpretation, this means that we should start with the going-in, commonsense presumption that language is generally clear in the majority of circumstances, after a bit of listening and synthesis. People transact in the English language every day without major kerfuffles, and even conduct complex business and legal dealings without requiring a court to opine on the language they are using. This underlying mass of cases never makes it to court precisely because English works. The problem with statutory interpretation cases, then, is the major selection effect they present. The cases that make it to court, where the rules are developed, are the most bizarre ones, or those that raise the most technical questions. Those are not the cases on which we should base rules of general application. Instead, the rule should simply be that English works in most circumstances, as evidenced by the fact that each of us can generally communicate—with only small hiccups—in the day-to-day world.

If that is the rule adopted, and if legal language is really no different in kind from ordinary language (only in degree of specificity and technicality), then a court should not be exacting in its determination of the clarity of a statutory provision. That is, if language generally works on first impression, there is no need for a court to adopt a presumption that it doesn’t work, and hence that something greater than “clear enough” is required to definitively elucidate the meaning of a text. We should merely assume that language probably works, that legislatures know language, and that courts have the tools to discern its meaning. While we should not assume that language is perfect, we should at least assume that it is workable in an ordinary-meaning sense.

This approach also has the benefit of common sense. Perfection is not of this world. The legal realists set too high a standard for the clarity of language, demanding something approaching perfect linguistic precision rather than semantic workability. We should not craft legal rules around the fact that, in some far-off circumstance, we can imagine language not working.

What does this mean in operation? The American debate over Chevron deference supplies a good example. Chevron holds that where Congress has spoken to the precise question at issue, courts should not afford deference to an agency’s interpretation of law. This is Chevron Step One. If Congress has not spoken clearly, the court moves to Chevron Step Two, where it will defer to the agency’s interpretation and uphold it if it is a reasonable interpretation of the law. In a recent case, Justice Gorsuch concluded at Chevron Step One that the text was “clear enough,” so that deference should not be afforded. The “clear enough” formulation is reminiscent of Justice Kavanaugh’s article, where he explains the various divisions among judges about clarity:

I tend to be a judge who finds clarity more readily than some of my colleagues but perhaps a little less readily than others. In practice, I probably apply something approaching a 65-35 rule. In other words, if the interpretation is at least 65-35 clear, then I will call it clear and reject reliance on ambiguity-dependent canons. I think a few of my colleagues apply more of a 90-10 rule, at least in certain cases. Only if the proffered interpretation is at least 90-10 clear will they call it clear. By contrast, I have other colleagues who appear to apply a 55-45 rule. If the statute is at least 55-45 clear, that’s good enough to call it clear.

Kavanaugh’s approach is probably closer to the right one, if we accept the general proposition that language will be workable in the majority of cases. If there is no reason to doubt language, then clarity will be easier to come by. It is only if we go in assuming unworkability that clarity becomes a fool’s errand. And from the perspective of legal design, that is not desirable.

Law has a reputation for being a highly technical field, with a laser focus on commas, semicolons, and the passive voice. But at the level of designing legal rules, including rules governing language, the best we can hope for is workability, not technical precision. This is because designing rules involves tradeoffs between incentives, administrability, and fit. And because humans are not perfect, we cannot design perfect rules at this level of abstraction. As a result, in the language context, the best we can and should do is workability in the general run of cases.

Romancing the Law

An ode to formalism and reflections on Runnymede’s Law and Freedom Conference

I had the pleasure of attending last weekend’s Runnymede Society conference in Toronto. As always, the conference was a welcome opportunity to meet with old friends and new, and to reflect on a number of pertinent issues in Canadian law.

Perhaps because of my own research interests in the last year, I was particularly interested in a theme that seemed to run throughout the conference: the degree of confidence that each of us has in law, particularly statutory law. Justice Stratas’ talk with Asher Honickman highlighted that there are many in the legal community who, if not giving up on law, are questioning its relevance in a society now defined by greater calls for context, nuance, individualized application, and discretion. The virtues of rules—the creation of economies of scale, the structuring of norms and expectations according to positive orders, the costs saved at the ex post application stage—are apparently outweighed by the potential for overbroad application, rank injustice, and otherwise discriminatory treatment.

The degree to which we are worried about these vices, or encouraged by these virtues, is probably a function of our belief in legislatures and their work product. Even if legislatures do not get things “right,” there are good reasons to believe that what the legislature does is owed a wide degree of respect: the value of legislative compromises, the “finely-wrought” legislative procedure, and the representative nature of the legislature. Nowadays, though, a commitment to the law passed by the legislature is labelled pejoratively as “formalist.” In administrative law, offshoots of this belief are characterized, dismissively and without analysis, as “Diceyan” or as an unwelcome throwback to the days of “ultra vires” (take a look at the oral argument in the Bell/NFL & Vavilov cases for many examples of this). In statutory interpretation, a belief that text in its context will generally contain answers is dismissed as a belief in “the plain meaning rule” or mere “textualism,” notwithstanding the important distinction between those two methods. In constitutional law, a focus on constitutional text is “originalism.” None of these labels is an argument, yet they have infiltrated the orthodoxy of the academy.

The consequences of this argument-by-label should not be underestimated. Take the case of statutory interpretation. The Supreme Court of Canada tells us that we should interpret statutes purposively, but at the same time, that the text will play a dominant role in the process when it is clear (Canada Trustco, at para 10). This implies that purposes, while helpful to the interpretive process, should not dominate where the text is clearly pointing in another direction.

But a focus on statutory text—especially the contention that text can ever be clear—is often derided as inconsistent with the contingent and “ambiguous” nature of language. So the argument goes: text can never truly be “clear,” and so textualism falls away. But whether the text of a statute will contain answers to an interpretive difficulty is, in part, a function of the judge’s belief in the coherence and determinacy of law—in other words, her appreciation of the point at which “law runs out.” A judge inclined to believe that the tools of statutory interpretation can be used to come to a defensible answer will commit herself to that task, and will probably not consider legislative language “ambiguous” in its purposive context. For her, law may never run out, or if it does, it will do so only in the extreme case of true ambiguity, where no meaning can be discerned at all. A judge less committed to the determinacy of law will be more willing to introduce extraneous materials—legislative history, Charter values—in order to come to a meaning that makes sense to her. For her, the law may “run out” quite early. The risk here, of course, is the enlargement of the scope for judicial discretion. For those who believe in the general soundness of statutory law, this creates the potential for conflict with the elected representative body.

This is not a hypothetical problem. In the United States, Chevron administrative law deference rests on the judge’s appreciation of statutory language. At step one, courts apply the ordinary tools of statutory interpretation to determine whether Congress spoke clearly on a particular matter. If so, that meaning binds the agency. If not, at step two, courts defer to a reasonable agency interpretation of the ambiguous statute. As Justice Scalia observed, a judge committed to the text at step one will rarely need to move to step two. In this way, there would be less scope for agencies to exercise virtually unreviewable discretion. Those who believe that law runs out earlier will, ceteris paribus, be more willing to allow multiple decision-makers to come to very different decisions on a matter, so long as those decisions are roughly justified by the statute.

The various points on the spectrum of “giving up on law” will be the product of many factors, including factors particular to the cases before courts. But at some level, a belief that text can, or should, contain answers seems to undergird the entire process of determining the meaning of a statute. I think there are good reasons to believe that what the legislature produces is generally sound, for reasons particular to the legislative process and the law in question. To my mind, judges should be wary of letting text “run out,” in part because of what replaces it: more abstract, generally less clear “second-tier” sources of legislative meaning. (Sometimes, of course, text will be truly unclear, and a statutory purpose can be clearly gleaned from the text. Our law sees no problem with this, and neither do I.)

This is not to presuppose that legislatures always make sense in their enactments. The process of making law is not designed to be a perfect application of human rationality or even of expertise. Legislatures sometimes don’t make sense. But there are good reasons to respect the legislative process. Importantly, seemingly nonsensical legislative compromises, run through readings in Parliament and the committee process, are sometimes the product of concessions to minority groups, represented through their Members of Parliament. These legislative compromises are sometimes essential, and should be respected even if they do not make sense. Judge Easterbrook puts it well: “If this [an outcome of statutory interpretation] is unprincipled, it is the way of compromise. Law is a vector rather than an arrow. Especially when you see the hand of interest groups.”

If the legislative process is imperfect, so is the process of statutory interpretation. Statutory interpretation will not always yield easy answers, or even the ex ante “correct” answer. The tools of statutory interpretation are often contradictory, some say outmoded, and sometimes unwieldy. But as Judge Posner said in his book Reflections on Judging, the tools of statutory interpretation are designed to impose meaning. Used authentically and faithfully, with a concomitant belief in the legitimacy of the law passed by the legislature, they help courts come to a defensible conclusion on the meaning of a provision: one that is consonant with the universe of laws in the statute book, with the particular statute’s larger purposes, and with the statute’s immediate context.

It worries me that some no longer believe in this process—in the formal quality of law as law, in the idea that when the legislature speaks, it does so for a reason. Similarly, I worry that the invitation for judges to rely on values and principles extraneous to a statute—for example, Charter values, legislative history, etc—to impose a meaning on a statute is based on the wrongheaded idea that judicial discretion is somehow inherently better than legislative power. I, for one, think that we should expect judges in a constitutional democracy to believe in the law passed by the legislature. This is not judicial acquiescence; it is a recognition that there is value in formalism. Parliament, to be sure, does not always get everything right. But the way in which Parliament passes laws is subject to a formal process, interposed with legislative study. The way we elect our leaders and the way Parliament operates is, in a way, formal. The law Parliament creates should be owed respect by those sworn to uphold it.

The debate over rules versus standards or discretion is one that has raged throughout history. But underlying the debate, I always thought, was a belief in law itself. Those of us at Runnymede this weekend were invited to question whether that belief exists any longer.

Law Like Love

“What is law like? What can we compare it with in order to illuminate its character and suggest answers to some of the perennial questions of jurisprudence?”

That’s the opening of Jeremy Waldron’s “Planning for Legality,” 109 Mich. L. Rev. 883 (2010), a review of Scott Shapiro’s book Legality. When I read it recently, it immediately reminded me of W.H. Auden’s magnificent poem, “Law Like Love,” where Auden suggests that the question is perhaps absurd, but irresistible. Here’s a recording of Auden reading it.

I don’t know if Waldron’s line is a deliberate allusion. But my guess is that it is not. Law review articles, after all, are not Umberto Eco’s novels. They deal in footnotes, not allusions. If I’m right about this, I think it confirms just how brilliant Auden’s poem is – not only as a matter of literary merit, but also as the best summary of the field of legal philosophy ever produced.