As I finish my graduate studies at Chicago, it strikes me that a major theme of legal design is the degree of perfection (if any) we should expect from legal rules. Drafted legal rules—whether legislative or judicial—will always be over- and underbroad, because rules of general application cannot foresee every idiosyncratic individual application. The extent to which a perfect rule can be created thus depends on how we balance the error rate of application against the ease of administering a straightforward rule. We will never strike a perfect balance, but we can try to reach something defensible and workable.
The same sort of consideration applies in the field of statutory interpretation. The most important issue in statutory interpretation is the clarity exercise—how clear is clear enough? Finding that a statutory text is clear on its face leads to a number of important consequences. For one, the Supreme Court has said that where text is “precise and unequivocal, the ordinary meaning of the words play a dominant role in the interpretive process,” as opposed to purpose (Canada Trustco, at para 10). Additionally, the use of Charter values to gap-fill in statutory interpretation arises only where there is ambiguity in the ordinary textual meaning (Bell ExpressVu, at para 28). And, as Gib Van Ert points out, the Federal Court of Appeal seems to be adopting a similar rule in the context of international law.
Some may object at the outset to a consideration of “clarity” as a means of discerning legislative intent on a particular subject. This line of opposition is deeply rooted in legal realism, with its skepticism of judicial modes of reasoning and its rejection of abstract legal thought as a means of reaching clear answers on the law. Representative works in this regard include John Willis’ “Statutory Interpretation in a Nutshell,” where he argues that, in modern legislation that uses wide language (often to delegate authority to others), literal interpretation does no good, essentially because the language is broad and unclear. And he notes that even if a text could be clear or plain on its face, judges differ over what counts as “plain” (see 10 and 11). Additionally, Karl Llewellyn’s classic article on the “dueling canons of interpretation” casts doubt on whether the canons of statutory interpretation can yield any clear meaning that is not the product of motivated reasoning. Underlying each of these important critiques is a belief in the relativism and contingency of language. Clarity, on this account, is probably a fool’s errand, in part because ascribing an intent to the legislature is difficult with open-textured language, and in part because language itself is inherently unclear. If this is true, it will be the rare case indeed where a court should be convinced that a text is clear.
While this might sound good to a lawyer’s ear—especially a lawyer who is paid to exploit ambiguities—it does not comport with the way we use language in the majority of cases. And this is where the example of crafting legal rules comes in handy. One might wish to craft a legal rule that covers all of the interstitial, idiosyncratic applications—the weird or abnormal ones. But then we create a rule that might work well in the individual case, and not in the general run of cases. Instead, we should craft legal rules based on the 98% of cases, not the 2%: see Richard Epstein’s Simple Rules for a Complex World on this score. In the realm of statutory interpretation, this means that we should start with the going-in, commonsense presumption that language is generally clear in the majority of circumstances, after a bit of listening and synthesis. People transact in the English language every day without major kerfuffles, and even conduct complex business and legal dealings without requiring a court to opine on the language they are using. This underlying mass of cases never makes it to court precisely because English works. The problem with statutory interpretation cases, then, is the major selection effect they present. The cases that make it to court, where the rules are developed, are the most bizarre cases or those raising the most technical questions. Those are not the cases on which we should base rules of general application. Instead, the rule should simply be that English works in most circumstances, as evidenced by the fact that each of us can generally communicate—with only small hiccups—in the day-to-day world.
If that is the rule adopted, and if legal language is really no different in kind (only in degree of specificity and technicality), then a court should not be exacting in its determination of the clarity of a statutory provision. That is, if language generally works on first impression, then there is no need for a court to adopt a presumption that it doesn’t work, and hence that something greater than “clear enough” is required to definitively elucidate the meaning of a text. We should merely assume that language probably works, that legislatures know language, and that courts have the tools to discern that language. While we should not assume that language is perfect, we should at least assume that it is workable in an ordinary meaning sense.
This approach also has the benefit of common sense. Perfection is not of this world. The legal realists set far too high a standard for the clarity of language, demanding something approaching perfect linguistic clarity rather than semantic workability. We should not craft legal rules around the fact that, in some far-off circumstance, we can imagine language not working.
What does this mean in operation? The American debate over Chevron deference supplies a good example. Chevron holds that where Congress has spoken to the precise question at issue, courts should not afford deference to an agency’s interpretation of law. This is Chevron Step One. If Congress has not spoken clearly, the court moves to Chevron Step Two, where it will defer to the agency’s interpretation and uphold it if it is a reasonable interpretation of law. In a recent case, Justice Gorsuch concluded at Chevron Step One that the text was “clear enough,” so that deference should not be afforded. The “clear enough” formulation is reminiscent of Justice Kavanaugh’s article, where he explains the various divisions among judges about clarity:
I tend to be a judge who finds clarity more readily than some of my colleagues but perhaps a little less readily than others. In practice, I probably apply something approaching a 65-35 rule. In other words, if the interpretation is at least 65-35 clear, then I will call it clear and reject reliance on ambiguity-dependent canons. I think a few of my colleagues apply more of a 90-10 rule, at least in certain cases. Only if the proffered interpretation is at least 90-10 clear will they call it clear. By contrast, I have other colleagues who appear to apply a 55-45 rule. If the statute is at least 55-45 clear, that’s good enough to call it clear.
Kavanaugh’s approach is probably closer to the right one, if we accept the general proposition that language will be workable in the majority of cases. If there is no reason to doubt language, then clarity will be easier to come by. It is only if we go in assuming unworkability that clarity becomes a fool’s errand. And from the perspective of legal design, that is not desirable.
Law has a reputation for being a highly technical field, with a laser focus on commas, semicolons, and policing the passive voice. But at the level of designing legal rules, including rules governing language, the best we can hope for is workability, not technical precision. This is because designing rules involves tradeoffs among incentives, administrability, and fit. And because humans are not perfect, we cannot design rules at this level of abstraction that are perfect. As a result, in the language context, the best we can and should do is workability in the general run of cases.