Was Lon Fuller an Originalist?

Some thoughts on Lon Fuller, the Rule of Law, and constitutional interpretation

I think that the best argument for originalism is that it is required by the principle of the Rule of Law. (Jeffrey Pojanowski’s contribution to an online symposium on originalism organized by Diritto Pubblico Comparato ed Europeo earlier this year makes this argument nicely and concisely.) So I probably brought some confirmation bias to a re-reading of Lon Fuller’s discussion of the Rule of Law requirement of “congruence between official action and the law” in The Morality of Law, which makes me think that he would have been at least sympathetic to originalism.

If law is to guide the behaviour of those to whom it is addressed, it is not enough that it be public, intelligible, stable, and so on. It must also be applied and enforced consistently with its terms. A failure of congruence, Fuller explains, amounts to nothing less than “the lawless administration of the law”. (81) It can result from a number of causes, some perhaps innocent, like “mistaken interpretation”; others having to do with a lack of competence or intelligence; and in extreme cases “bribery”, “prejudice”, and “drive towards personal power”. (81) (The attempt at classification is mine; Fuller, somewhat oddly, presents these various causes pell-mell.)

Importantly, although one might be tempted to think that it is primarily the executive that has to be vigilant to ensure that it applies the law as written, Fuller was clear that the requirement of congruence is addressed to the judiciary too. Lower courts must ensure that they apply the law as set out by the higher ones, but even an apex court has responsibilities towards the Rule of Law. After a detour into the importance of generality, coherence, constancy, and prospectivity in the articulation of adjudicative law, Fuller writes:

The most subtle element in the task of maintaining congruence between law and official action lies, of course, in the problem of interpretation. Legality requires that judges and other officials apply statutory law, not according to their fancy or with crabbed literalness, but in accordance with principles of interpretation that are appropriate to their position in the whole legal order. (82)

He proceeds to recommend the principle of interpretation articulated in Heydon’s Case, (1584) 3 Co Rep 7a:

for the sure and true interpretation of all statutes in general (be they penal or beneficial, restrictive or enlarging of the common law,) four things are to be discerned and considered:

1st. What was the common law before the making of the Act.
2nd. What was the mischief and defect for which the common law did not provide.
3rd. What remedy the Parliament hath resolved and appointed to cure the disease of the commonwealth.
And, 4th. The true reason of the remedy; and then the office of all the Judges is always to make such construction as shall suppress the mischief, and advance the remedy.

Now, this quotation, which I have presented in the same way as Fuller does, is somewhat incomplete. Here is the full statement of “the office of all the Judges” according to Heydon’s Case:

always to make such construction as shall suppress the mischief, and advance the remedy, and to suppress subtle inventions and evasions for continuance of the mischief, and pro privato commodo, and to add force and life to the cure and remedy, according to the true intent of the makers of the Act, pro bono publico.

Fuller, instead of the reference to “the true intent of the makers of the Act”, adds one further element of his own,

a fifth point to be “discerned and considered,” which might read somewhat as follows: “How would those who must guide themselves by its [i.e. the Act’s] words reasonably understand the intent of the Act, for the law must not become a snare for those who cannot know the reasons of it as fully as do the Judges.” (83)

In subsequent discussion, Fuller criticises what he calls “an atomistic conception of intention”, which “conceives the mind to be directed … toward distinct situations of fact rather than toward some significance in human affairs that these situations may share”, (84) and which accordingly denies the relevance of intention in interpretation, or at any rate in difficult interpretative questions, which arise in individual situations ostensibly not anticipated by the legislator. Intention matters, Fuller insists, but it is clear from the example he uses ― that of a dead inventor whose work must be continued from an incomplete design by another person ― that it is not an actual, specific intention that he has in mind, but the general purpose of the document to be interpreted, which can be ascertained from its contents; indeed, Fuller commends the exclusion of “any private and uncommunicated intention of the draftsman of a statute” (86) from its legal interpretation.

How does this all translate into approaches to constitutional interpretation ― which, after all, Fuller does not actually discuss? Many Canadian readers will no doubt be inclined to think that Fuller is advocating something like purposive interpretation, to which the Supreme Court of Canada sometimes professes to adhere. But, as Benjamin Oliphant and I have explained in our work on originalism in Canada, purposivism, especially as articulated in …, is arguably compatible with some forms of originalism. Fuller’s purposivism, it seems to me, translates fairly well into public meaning originalism, given its emphasis, on the one hand, on the circumstances of the law’s making as key to interpreting it and, on the other, on the reasonable understanding of those to whom the statute is addressed as one of the guidelines for its interpreters. Fuller’s exclusion of the “private and uncommunicated intention” of the draftsman reinforces my view that it is public meaning originalism, rather than original intentions originalism, that he supported, while his rejection of the “atomistic conception of intention” shows that he would have had no time for original expected applications ― which, of course, most originalists have no time for either.

Of course, Fuller was writing before originalism became a word, and a topic for endless debate. It is perhaps presumptuous, as well as anachronistic, to claim him for my side of this debate. Then again, Fuller himself insisted that texts are not meant to apply only to the finite sets of factual circumstances within their authors’ contemplation. So long as the mischiefs they are meant to rectify remain, they can properly be applied to new facts ― something with which public meaning originalists fully agree. In the case of the dead inventor, were we to summon his “spirit for help, the chances are that this help would take the form of collaborating … in the solution of a problem … left unresolved” (85) ― not of the dictation of an answer. And failing that, if we stay within the inventor’s framework, and remain true to his general aim, we have done the best we could. This is a standard by which I am happy to be judged.

Constitutional Amendment and the Law

I have been a bit harsh on the Supreme Court in my first post on its opinion in the Reference re Senate Reform, 2014 SCC 32, saying that it had reduced the constitutional text to the status of a façade that hid as much as it revealed of the real constitutional architecture, which only the Court itself could see. But one must recognize that the Court’s position was very difficult. The amending formulae codified in Part V of the Constitution Act, 1982, are a nightmare to interpret, at once too precise and too vague. Although in our legal system text, especially constitutional text, is supposed to be the legal form par excellence, superior to any unwritten norm, Part V shows that this is not always so.

It is often said that, before Part V was added to the constitution in 1982, there was no general amending formula in the Canadian constitution. That is only true if “constitution” is understood as “constitutional text.” In reality, there was an amending formula ― the Canadian constitution could be amended by the Imperial (i.e. British) Parliament, which, in accordance with a “constitutional position” (i.e. convention) recognized by the Preamble of the Statute of Westminster, 1931, would only act on an address of the Canadian Parliament, which, in accordance with a further convention whose existence the Supreme Court recognized in the Patriation Reference, could only make such an address with “substantial provincial consent.”

This last convention, requiring substantial provincial consent to constitutional changes, was obviously somewhat vague. And indeed it is often said that vagueness is an inherent limitation of constitutional conventions, and perhaps one of the reasons which prevent conventions from attaining legal status. More generally, in his great work on The Concept of Law, H.L.A. Hart argued that the passage from somewhat uncertain traditional rules to formal ones was part of a movement from a pre-legal to a legal system. The replacement of the convention requiring “substantial provincial consent” with specific, written amending formulae forming part of the constitutional text ought to have clarified the constitutional rules, and made them more law-like.

Instead, what we got is a system which is in many ways no clearer than the old conventional rule. Indeed, Part V illustrates Lon Fuller’s insight that an ostensibly legal rule or system of rules can fail certain formal requirements (of what he called the “inner morality of law” and what we usually refer to as the Rule of Law) to the point where it fails to guide behaviour and, thus, to be law at all.

The system of a general rule (s. 38 of the Constitution Act, 1982), examples of the general rule (s. 42), and exceptions to the general rule (ss. 41, 43, 44, and 45, some of which (ss. 44 and 45) themselves sound like plausible general rules) does not make for consistency, which is one of the Rule of Law requirements outlined by Fuller. (I note, however, that this system is somehow very Canadian, in that it parallels the one we have adopted for dividing powers between Parliament and the provinces: there, the “peace, order and good government” clause of s. 91 of the Constitution Act, 1867 is the general rule, followed by examples of federal powers in s. 91 and exceptions in s. 92, at least one of which, subs. 92(13), is itself very broad. Not coincidentally, this complex scheme arguably contributed to the distribution of powers being interpreted in a way that is probably far from what its authors had intended.) The mention of the Supreme Court in the amending formula ― combined with the conspicuous absence of the Supreme Court Act from the list of enactments composing the “constitution of Canada” ― is another glaring example of the inconsistency of Part V.

What is more, its rules are not exemplars of clarity (does, for instance, the “selection of Senators” refer only to their formal selection by the Governor General, as the federal government argued, or to the whole process leading up to it?). Some of these rules also seem to produce results so absurd as to border on the impossible (for instance, as one of the judges suggested at the hearing of the Senate Reference, the amending formula seems to indicate that Canada could be turned into a dictatorship more easily than into a democratic republic).

Add this all up, and we have a set of amending formulae that, as Fuller predicted, fail to guide behaviour ― not only that of the politicians to whom they are addressed in the first instance, but also that of the courts to which the politicians turn for help in understanding them. We have, in other words, a set of rules which, although purportedly legal, indeed purportedly part of the “higher law,” in some circumstances fail to be law at all. (One should not exaggerate the scope of the problem. In many cases ― say, transforming Canada into a republic ― the import of Part V will be perfectly clear. But the Senate Reference as well as l’Affaire Nadon show the importance of the cases where this is not so.)

Yet if one thing is unmistakable after the entrenchment of Part V, it is that the “procedure for amending the constitution of Canada” is a legal, and no longer a conventional, matter. The courts are stuck with it, and cannot offload the problem of interpreting it onto politicians. (In reality, the Supreme Court’s engagement with the conventions of constitutional amendment in the Patriation Reference and the subsequent Quebec Veto Reference illustrates the limits of its willingness, or ability, to do so even under the old, conventional regime.) And so the Supreme Court really had no choice but to try somehow to bring the less-than-fully-legal mess of Part V into the realm of legality. Inevitably, it had to do some violence to the text. It would not be fair to fault it for having done so. However, the difficulty of the Court’s position should not shield it from criticism of the way it went about its task, or absolve it of responsibility for the problems which its endeavour will create. In particular, the concept of “constitutional architecture” which it used deserves critical attention. I hope to provide it shortly.

To Track or Not to Track?

There was an interesting article in the New York Times this weekend about the brewing fight around the “do not track” features of internet browsers (such as Firefox or Internet Explorer), which are meant to tell the websites a user visits not to collect information about that user’s activity for the purposes of online advertising. Here’s a concrete example that makes sense of the jargon. A friend recently asked me to look at a camera she was considering buying, so I checked it out on Amazon. Thereafter, for days on end, I was served ads for this and similar cameras on any number of websites I visited. Amazon had recorded my visit, concluded (wrongly, as it happens) that I was considering buying the camera in question, and transmitted the information to advertisers, whose algorithms then targeted me for camera ads. I found the experience a bit creepy, and I’m not the only one. Hence the appearance of “do not track” functionalities: if I had been using a browser with a “do not track” feature enabled, this would presumably not have happened.
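
(For the technically curious, here is a minimal sketch of what the “do not track” signal amounts to, assuming the HTTP header mechanism that browsers such as Firefox implemented; it is my own illustration, not anything described in the Times article. When the feature is turned on, the browser adds a one-line “DNT: 1” header to every request it sends, and honouring that preference is left entirely to the receiving website. The URL in the example is a placeholder.)

import urllib.request

# A browser with "do not track" enabled attaches the header below to each
# HTTP request; nothing technically prevents the website from ignoring it.
req = urllib.request.Request(
    "https://example.com/",        # placeholder URL, for illustration only
    headers={"DNT": "1"},          # 1 = the user prefers not to be tracked
)
with urllib.request.urlopen(req) as response:
    print(response.status)

The simplicity of the mechanism is the point: everything interesting about “do not track” lies in who agrees to respect that one line.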

Advertisers, of course, are not happy about “do not track.” Tracking our online activities allows them to target very specific ads at us, ads for stuff we have some likelihood of being actually interested in. As the Times explains,

[t]he advent of Do Not Track threatens the barter system wherein consumers allow sites and third-party ad networks to collect information about their online activities in exchange for open access to maps, e-mail, games, music, social networks and whatnot. Marketers have been fighting to preserve this arrangement, saying that collecting consumer data powers effective advertising tailored to a user’s tastes. In turn, according to this argument, those tailored ads enable smaller sites to thrive and provide rich content.

The Times reports that advertisers have been fighting the attempts of an NGO called the W3C (for “World Wide Web Consortium”) to develop standards for “do not track” features. They have also publicly attacked Microsoft for its plans to make “do not track” a default (albeit changeable) setting on the next version of Internet Explorer. And members of the U.S. Senate are getting into the fight as well. Some are questioning the involvement of an agency of the US government, the Federal Trade Commission, with W3C’s efforts, while others seem to side against the advertisers.

The reason I am writing about this is that this may be another example of the development of new rules happening before our eyes, and it gives us another opportunity to reflect on the various mechanisms by which social and legal rules emerge and interact, as well as on the way our normative systems assimilate technological development. (Some of my previous posts on these topics are here, here, and here.)

W3C wants to develop rules―not legally binding rules, of course, but a sort of social norm which it hopes will be widely adopted―regulating the use of “do not track” features. But as with any would-be rule-maker, a number of questions arise. The two big ones are ‘what legitimacy does it have?’ and ‘is it competent?’ As the Times reports, some advertisers are, in fact, raising the question of W3C’s competence, claiming the matter is “entirely outside their area of expertise.” This is self-serving, of course. W3C asserts that it “bring[s] diverse stake-holders together, under a clear and effective consensus-based process,” but that’s self-serving too, not to mention wishy-washy. And of course a claim can be both self-serving and true.

If not W3C, who should be making rules about “do not track”? Surely not advertisers’ trade groups? What about legislatures? In theory, legislatures possess democratic legitimacy, and also have the resources to find out a great deal about social problems and the best ways to solve them. But in practice, it is not clear that they are really able and, especially, willing to put these resources to good use. Especially on a somewhat technical problem like this, where the interests on one side (those of the advertisers) are concentrated while those on the other (the privacy of consumers) are diffuse, legislatures are vulnerable to capture by interest groups. But even quite apart from that problem, technology moves faster than the legislative process, so legislation is likely to come too late, and not to be adapted to the (rapidly evolving) needs of the internet universe. And as for legitimacy, given the global impact of the rules at issue, what is, actually, the legitimacy of the U.S. Congress―or, say, the European Parliament―as a rule-maker?

If legislatures do not act, there are still other possibilities. One is that the courts will somehow get involved. I’m not sure what form lawsuits related to “do not track” might take―what cause of action anyone involved might have against anyone else. Perhaps “do not track” users might sue websites that refuse to comply with their preferences. Perhaps websites will make the use of tracking a condition of visiting them, and sue those who try to avoid it. I’m not sure how that might work, but I am pretty confident that lawyers more creative than I will think of something, and force the courts to step in. But, as Lon Fuller argued, courts aren’t good at managing complex policy problems which concern the interests of multiple parties, not all of them involved in litigation. And as I wrote before, courts might be especially bad at dealing with emerging technologies.

A final possibility is that nobody makes any rules at all, and we just wait until some rules evolve because behaviours converge on them. F.A. Hayek would probably say that this is the way to go, and sometimes it is. As I hope my discussion of the severe limitations of various rule-making fora shows, making rules is a fraught enterprise, which is likely to go badly wrong due to lack of knowledge, if not capture by special interests. But sometimes it doesn’t make sense to wait for rules to grow―there are cases where having a rule is much more important than having a good rule (what side of the road to drive on is a classic example). The danger in the case of “do not track” might be an arms race between browser-makers striving to give users the ability to avoid targeted ads, or indeed any ads at all, and advertisers (and content providers) striving to throw them at users. Pace the chairman of the Federal Trade Commission, whom the Times quotes as being rather optimistic about this prospect, such an arms race might actually be a bad thing, if the “barter system” that sustains the Internet as we know it is caught in the crossfire.

Once again, I have no answers, only questions. Indeed my knowledge of the internet is too rudimentary for me to have answers. But I think what I know of legal philosophy allows me to ask some important questions.

I apologize, however, for doing it at such length.

Unsettling Settlement

I blogged some time ago about a settlement between an unspecified group of plaintiffs and Facebook regarding Facebook’s approach to what it calls “sponsored stories”, which tell us that such and such friends “like” a certain company’s page. I raised some questions about the way in which this settlement works to create new rules, social and/or legal. Is the influence which the plaintiffs (rather than any number of similarly situated individuals or groups) acquire over the formation of these rules by virtue of being the first to sue and settle with Facebook legitimate? Even apart from legitimacy, is it a good thing from a policy standpoint? For example, how do we know that this particular group is motivated by the public interest and, assuming that it is, capable of evaluating it correctly and of being an effective negotiator?

As the New York Times reports today, the judge who had to approve the settlement for it to go into effect also has questions, and will not give his approval until the parties come up with some answers.

As part of the proposed deal, Facebook agreed to better inform users about sponsored stories, to limit their use and to allow people under 18 to opt out of the function. The company also agreed to pay $10 million to a dozen research and advocacy groups that work on digital privacy rights, and $10 million to cover legal fees for the plaintiffs. But the settlement did not inhibit Facebook from continuing to serve up sponsored stories.

On Friday, Judge Richard G. Seeborg of United States District Court in San Francisco rejected the draft order and asked both sides to justify how they had negotiated the dollar amounts. “There are sufficient questions regarding the proposed settlement,” he wrote.

Judge Seeborg said he wanted clarification on whether there could be relief for the millions of Facebook users whose names and photographs had already been used.

From this report, it looks like Judge Seeborg is worried, as I was, about the legitimacy of the settlement as a rule-making procedure, as a “mode of social ordering,” to use Lon Fuller’s language. How do we know, he asks, that the agreement the parties reached makes sense? Is it fair to those who did not take part in the settlement negotiations but will end up living by the rules with which the parties have come up as a result of a non-transparent process? Are we sure the settlement does not just benefit the parties, their pet charities, and the plaintiffs’ lawyers?

Those are sensible questions. The trouble is, as I wrote in my first post on this topic, that even if we conclude that the settlement is not an appropriate mode of social ordering, the alternatives aren’t great either. Legislation is slow and thus ill-suited to regulating an area in which change is constant and very fast. (A post by Stewart Baker at the Volokh Conspiracy, describing a proposed law that would have killed Gmail in its infancy by requiring the consent of both sender and receiver of an email for the email service to be able to scan its contents to serve up ads, shows just how ill-suited it can be. Social expectations of privacy have moved faster than the legislative process; Gmail now has close to half a billion users; and the proposed law is no more than a somewhat embarrassing memory.) And adjudication comes with serious problems of its own, which I described in the original post.

As then, I still don’t see any good way out of this conundrum.

In with the New?

Last week, I suggested that “[n]ew technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.” But there is no doubt that our legal rules, unlike perhaps moral ones, need updating when new technology comes along. How this updating is to happen is a difficult question. Lon Fuller, in his great article on “The Forms and Limits of Adjudication,” distinguished “three ways of reaching decisions, of settling disputes, of defining men’s relations to one another,” which he also called “forms of social ordering”: elections (and, one has to assume, resulting legislation), contract, and adjudication. All three can be and are used in developing rules surrounding new technologies, and the distinctions between them are not as sharp as Fuller suggested, because they are very much intertwined. Some recent stories are illustrative.

One is a report in the New York Times about a settlement between an unspecified group of plaintiffs and Facebook regarding Facebook’s approach to what it calls “sponsored stories”, which tell us that such and such friends “like” a certain company’s page. Pursuant to the settlement, Facebook “will amend its terms of use to explain that users give the company permission to use their name, profile picture and content [and] offer settings that let users control which of their actions — which individual like, listen, or read — will appear in Sponsored Stories.” More than the (substantial) costs to Facebook, what interests me here is the way in which this settlement establishes or changes a rule – not a legal rule in a positivist sense, but a social rule – regulating the use of individuals’ names and images in advertising, introducing a requirement of consent and an opt-out opportunity.

What form of social ordering is at work here? Contract, in an immediate sense, since a settlement is a contract. But adjudication too, in important ways. For one thing, the settlement had to be approved by a court. And for another, and more importantly, it seems more than likely that the negotiation would not have happened outside the context of a lawsuit which it was meant to settle. Starting, or at least credibly threatening, litigation is probably the only way for a group of activists and/or lawyers to get a giant such as Facebook to negotiate with them – in preference to any number of other similar groups – and thus to gain a disproportionate influence on the framing of the rules the group is interested in. Is this influence legitimate? Even apart from legitimacy, is it a good thing from a policy standpoint? For example, how do “we” – or does anyone – know that this particular group is motivated by the public interest and, assuming that it is, capable of evaluating it correctly and of being an effective negotiator? I think these are very troubling questions, but there are also no obvious ways of preventing social ordering through adjudication/negotiation even if we do conclude that it is problematic.

That is because alternative modes of social ordering are themselves flawed. Legislation is slow and thus a problematic response to new and fast-developing technologies. And adjudication (whether in a “pure” form – just letting courts develop rules in the process of deciding cases – or in the shape of more active judicial supervision of negotiated settlements) comes with problems of its own.

One is the subject of a post for Forbes by Timothy B. Lee, who describes how the fact that judges are removed from the communities that are subject to and have to live with the rules that they develop leads them to produce rules that do not correspond to the needs of these communities. One example he gives is that “many computer programmers think they’d be better off without software patents,” yet one of the leading judges who decides cases on whether there should be such patents “doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them.” Mr. Lee believes that this would be different if the judges in question happened to have friends or family members among the ranks of software developers. Perhaps – but, as he acknowledges, it is not possible for judges to have personal connections in every walk of life. Even trying to diversify the courts will only do so much. Furthermore, the individual experiences on which Mr. Lee thinks judges should rely might be atypical and thus tend to produce worse, rather than better, rules. Here too, questions about just how much judging ought to be informed by personal experience – as a matter both of policy and of legitimacy – are pressing.

Another set of questions about the courts’ handling of new technologies is the subject of a great paper by Kyle Graham, a professor at Santa Clara University and the author of the entertaining Non Curat Lex blog. Focusing on the development of liability rules surrounding new technologies, and using the examples of some once-new gadgets, mostly cars and planes, Prof. Graham points out that

[t]he liability rules that come to surround an innovation do not spring immediately into existence, final and fully formed. Instead, sometimes there are false starts and lengthy delays in the development of these principles. These detours and stalls result from five recurring features of the interplay between tort law and new technologies … First, the initial batch of cases presented to courts may be atypical of later lawsuits that implicate the innovation, yet relate rules with surprising persistence. Second, these cases may be resolved by reference to analogies that rely on similarities in form, and which do not wear well over time. Third, it may be difficult to isolate the unreasonable risks generated by an innovation from the benefits it is perceived to offer. Fourth, claims by early adopters of the technology may be more difficult to recover upon than those that arise later, once the technology develops a mainstream audience. Fifth, and finally, with regard to any particular innovation, it may be impossible to predict whether, and for how long, the recurring themes within tort law and its application that tend to yield a “grace” period for an invention will prevail over those tendencies with the opposite effect. (102)

I conclude, with my customary optimism, that there seem to be no good ways of developing rules surrounding new technologies, though there is a great variety of bad ones. But some rules there must be, so we need to learn to live with rotten ones.

A Pull Towards Goodness?

WARNING: This post is an adapted version of a passage in my “candidacy paper,” which is meant eventually to be part of the first chapter of my dissertation. Caveat lector.

***

Explaining their decisions is an important part of the judges’ work. It is valuable for all sorts of reasons. It forces judges to be honest – not just with the parties and their colleagues, but also, and perhaps most importantly, with themselves – about the issues at stake and the reasons that lead them to resolve the issues this way or that. It reassures the parties that the court has listened to their arguments and given them some thought, even if it ultimately rejected them. It makes judicial decisions more public, more transparent, and more amenable to criticism (and eventually reform). In these different ways it also disciplines the judges – it forces them to produce decisions that are more legally sound, because they address the relevant legal issues and materials. But could it do even more?

Some theorists, notably Lon Fuller, have argued that reason-giving can make judicial decisions not merely legally sounder, but also better on some substantive criterion. As Fuller wrote in the context of his famous debate with H.L.A. Hart, “when men are compelled to explain and justify their decisions, the effect will generally be to pull those decisions toward goodness.” (Lon L. Fuller, “Positivism and Fidelity to Law ― A Reply to Professor Hart”, (1958) 71 Harv. L. Rev. 630, 636.) In a similar vein, in an interesting (and/but incredibly romantic) essay on the role of the judge in relation to the corpus juris, especially in a common law system, Sarah M.R. Cravens contends that, as part of “virtuous judging,” reason-giving can help “take decision-making beyond simply the legally correct” and “is a component of a larger cycle that defines, develops, and achieves justice.” (1643)

Is that right? I am very skeptical, despite my sympathy for the view of law, and especially the common law, as inherently valuable and good. Fuller might just be right that reason-giving cannot lead “toward a more perfect realization of iniquity,” (636) because iniquity dares not speak its name, although we know that it does sometimes, as for example in Justice Holmes’ opinion in Buck v. Bell, which I described as “angry [and] heartless” here. But there is a great deal of disagreement about what iniquity is, and even more about what goodness or justice are, making it impossible to say whether reason-giving, or any other practice, actually helps realize them. One way around this problem is to say, as Prof. Cravens seems to, that goodness or justice are to be found within the four corners of the legal system itself, so that reason-giving helps achieve them merely by situating judicial decisions within the system, but surely many will dispute that our legal system, as it currently exists, is substantively good or just.

The most that can be said is that the existence of a legal system, or more specifically of a body of law comprising and connecting individual judicial decisions, is itself valuable and good, as for example Jeremy Waldron argues in his essay on “The Concept and the Rule of Law.” Fuller (and probably Prof. Cravens) would agree with that claim, but his (and her) view goes rather beyond it and, much as I admire him, I cannot follow him there.