Arguing against Originalism Badly

Noura Karazivan’s flawed argument against using originalism to understand constitutional structure

Noura Karazivan has recently published an article called “Constitutional Structure and Original Intent: A Canadian Perspective” in the University of Illinois Law Review. Prof. Karazivan raises interesting questions: what is, and what should be, the mix of originalism and living constitutionalism in the Supreme Court’s treatment of constitutional structure ― understood as the set of institutions that make up Canada’s government, and the relations among them. Unfortunately, prof. Karazivan’s argument suffers from her failure to engage seriously with contemporary originalist thought, or indeed to take note of recent work exploring it in the Canadian context, and her answer to the normative question, which decisively favours living constitutionalism, is unsatisfactory.

* * *

Prof. Karazivan’s starting point is an orthodox proposition: “[i]n Canadian constitutional law, there is no doubt that a broad, purposive, and progressive approach”, described by the famous “living tree” metaphor, “is preferred” for the interpretation of any and all constitutional provisions, (630) though she acknowledges that the Supreme Court uses other interpretive methods too. In addition to being used in the interpretation of constitutional text, living constitutionalism has played a crucial role in a number of decisions concerning constitutional structure. For example, in l’Affaire Nadon, Reference re Supreme Court Act, ss 5 and 6, 2014 SCC 21, [2014] 1 SCR 433, the Court’s “conclusion would probably have been different” had it not engaged in “actualizing” its place in the constitutional structure, and had only looked at “its role in 1875”. (648)

Yet in a couple of recent decisions, says Prof. Karazivan, the Court adopted a more originalist approach to constitutional structure, rather than the evolutionist one that it normally favours. Prof. Karazivan focuses on Reference re Senate Reform, 2014 SCC 32, [2014] 1 SCR 704, but also mentions Trial Lawyers Association of British Columbia v. British Columbia (Attorney General), 2014 SCC 59, [2014] 3 SCR 31. In the former, “the Court greatly relied on the intent of the 1867 framers”, (646) who wished the Upper House to supply “sober second thought”. The Court disregarded the practice of partisan appointments to the Senate, the Senate’s contemporary role, and even “the impact of the enactment of the Constitution Act, 1982”, (647) which arguably transferred the role of protector of constitutional rights from the Senate to the judiciary. Meanwhile, in Trial Lawyers, the superior courts’ historic dispute-settling role was crucial to the decision.

Prof. Karazivan argues that the Supreme Court was wrong to resort to originalism in these decisions. She gives four reasons. First, she takes Re B.C. Motor Vehicle Act, [1985] 2 SCR 486 to stand for the proposition that the judiciary is not bound by the intent of constitutional framers. Second, originalism can make no democratic claim in Canada, since the Constitution Act, 1867 was the work of “a group of white men, mostly Parliamentarians, concerned with the preservation of British institutions on Canadian soil”, while “[t]he constitutional negotiations in 1982 were even less ‘democratic'”. (651; scare quotes in the original) In short, “Canada does not have a great constitutional moment”. (651) Third, the Canadian constitution is simply too rigid for the courts not to update it from time to time. Finally, a “living tree” approach to interpretation yields a fuller understanding of both the constitution as a whole and its various components, as well as being “in line with Canadian constitutional structure and tradition”. (654)

* * *

As I said at the outset, this is unconvincing. Prof. Karazivan repeats pieties about the superiority of living constitutionalism to originalism without understanding what originalism actually is. Although she refers, in passing, to the distinction between originalist interpretation that seeks the intent of constitutional framers and that which centres on the constitution’s original public meaning, her article focuses on original intent ― which relatively few contemporary originalists are still committed to. Prof. Karazivan also enlists a number of cases, such as the BC Motor Vehicle Act Reference and Reference re Employment Insurance Act (Can.), ss. 22 and 23, 2005 SCC 56, [2005] 2 SCR 669, in support of the proposition that living constitutionalism is the dominant approach to interpretation in Canada, while originalism has been rejected. Yet Benjamin Oliphant and I have shown that not only do these cases not support the claim of a wholesale rejection of originalism, but they are arguably (in the case of the BC Motor Vehicle Act Reference) or quite clearly (in the case of the Employment Insurance Reference) consistent with public meaning originalism.

More broadly, we have also shown that the Supreme Court has never squarely rejected the more plausible forms of originalism, and indeed that various forms of originalist reasoning make frequent, if erratic, appearances in the Court’s reasoning. In particular, as we, as well as J. Gareth Morley and Sébastien Grammond, have observed, originalist reasoning features heavily not only in the Senate Reform Reference, which prof. Karazivan decries, but also in the Nadon Reference, which she commends. Mr. Oliphant and I have also pointed out that cases on the jurisdiction of superior courts had an originalist bent well before Trial Lawyers. In short, at the level of description, prof. Karazivan’s story, in which a largely living constitutionalist Supreme Court issued a couple of aberrant originalist decisions, is much too simple.

Prof. Karazivan’s normative argument is even weaker. Her appeal to the authority of Justice Lamer’s opinion in the BC Motor Vehicle Act Reference has to be set against not only the arguable consistency of this opinion with public meaning originalism, but also its author’s resort to more explicitly originalist reasoning elsewhere. For instance, in B(R) v Children’s Aid Society of Metropolitan Toronto, [1995] 1 SCR 315, he wrote that

[t]he flexibility of the principles [the Charter] expresses does not give [the courts] authority to distort their true meaning and purpose, nor to manufacture a constitutional law that goes beyond the manifest intention of its framers. (337)

Prof. Karazivan’s denial that Canada had “a great constitutional moment”, and her insistence that the drafting of the Constitution Act, 1867 (by “white men”) and that of the Constitution Act, 1982 (presumably by persons unknown) were undemocratic, would be simply bizarre were they not sadly typical of the ritual denigration of Canadian constitutional history in which even Supreme Court judges have been known to engage. The truth, though, is that Canada did have not one, but two great constitutional moments ― in the mid-1860s and the early 1980s. My friend Alastair Gillespie has been exploring the first of these in a compelling (and ongoing) series of papers for the Macdonald-Laurier Institute, which, as I have written in a recent post for the CBA National,

make clear [that] the Fathers of Confederation wrestled with such seemingly contemporary questions as whether diversity is a source of weakness or strength for a political community, what claims such a community may legitimately make on minorities within its midst, and what rights these minorities may assert against the community. The settlement of 1867 was a remarkable achievement in this regard.

To be sure, the Fathers of Confederation were indeed white men ― as were those who took part in the framing of the US Constitution, to which prof. Karazivan does not deny the status of a “great constitutional moment”. This is one reason, among others, why I do not find the democratic case for originalism very compelling. But the sexism and racism of our 19th-century forebears are not a reason for dismissing the substance of their achievements; and least of all for allowing a group of nine men and women, who are, if anything, even less representative of society than the Fathers of Confederation on every dimension except for gender, the power to re-write the constitution. As for the enactment of the Canadian Charter of Rights and Freedoms, it was preceded by wide-ranging public consultations which resulted, for example, in the adoption of section 28 at the urging of feminist groups, as Kerri Froc has shown. Why prof. Karazivan claims it was undemocratic, I cannot understand.

That the constitution is rigid and difficult to amend is a feature, not a bug that needs to be removed by the backdoor expedient of judicial reinterpretation. The politicians who came up with and agreed to the amending formula in Part V of the Constitution Act, 1982 obviously thought it was flexible enough. Why were they wrong? That said, had prof. Karazivan taken public meaning originalism, and in particular the work of those originalists who recognize the distinction between constitutional interpretation and constitutional construction, seriously, she would have realized that many, perhaps most, originalists do not advocate for a static constitutional law. They insist that the meaning of the constitution’s text is fixed, but recognize that this text can in fact be applied, through the development of constitutional doctrine, to facts and circumstances quite unforeseen at the time of its drafting.

Finally, I fail to see how living constitutionalism can lead us to a better understanding of the constitution. The argument, insofar as I understand it, seems question-begging. Saying that treating the constitution as a “living tree” allows us to understand it better presupposes that the object of constitutional interpretation is the contemporary constitution rather than the intention of the constitutional text’s drafters or its original public meaning ― which is very much the point in issue. To be sure, Canadian constitutional tradition is laden with denunciations ― usually quite ignorant denunciations ― of originalism. But as the emerging Canadian scholarship that takes originalism seriously shows, these denunciations do not tell us the whole story. Nor can they serve as a normative justification in the absence of any more compelling ones.

* * *

As I mentioned at the outset, prof. Karazivan addresses an important question, that of the place of originalism in the Supreme Court’s understanding of constitutional structure. Unfortunately, she does so in a way that reflects a simplistic or outdated understanding of originalism, and as a result oversimplifies relevant precedents and offers thoroughly unconvincing arguments against originalism. That her arguments do not succeed does not show that the Court is right to be as originalist as it is, or that it ought to be more so. That case remains to be made. But so does prof. Karazivan’s in favour of living constitutionalism. Her article does not advance it.

Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and especially of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” to do these sorts of things. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment, and towards “global consciousness” ― whatever that too may be. Google can demote websites that it deems to be promoters of “hate” in its search results, as indeed it already demotes those it considers to be copyright-infringers. Apple could deny access to its App Store to news sources it considers biased, as indeed it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its Newsfeed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just as we can with spell-check. But I’m pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and other tech companies, which are at once the regulators of their users’ speech and the subjects of government regulation, and may well be tempted to act in the former role so as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we’d be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to be in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Interception Followed by a Fumble

What sort of authorization do the police need in order to obtain copies of text messages a person sends or receives? That was the issue which the Supreme Court decided today in R. v. TELUS Communications Co., 2013 SCC 16. If obtaining copies of text messages is an “intercept” within the meaning of s. 183 of the Criminal Code, or something like it, then a special warrant for intercepting private communications, governed by Part VI of the Code, is required. Otherwise, it was enough for the police to obtain a “general warrant,” which is rather less difficult to get than a warrant to intercept. The parties agreed that, generally speaking, a Part VI warrant is required for reading a person’s text messages as soon as they are sent, by means of special equipment installed by the telecommunications provider, which apparently is the only way to read texts sent between users of most Canadian telecommunications companies. But, unlike its competitors, Telus stores copies of texts sent by or to its users on its computers. So when the police sought a warrant to force Telus to hand over copies of texts that two of its users would send in the following couple of weeks, they thought that this was not going to count as an intercept within the meaning of Part VI of the Code, because they wouldn’t be reading the messages as they would be sent, but only accessing copies after a (little) while. But the majority of the Supreme Court found that what the police did was in fact an interception or something essentially similar, and, therefore, that they fumbled in not obtaining the appropriate Part VI warrant.

Three judges―Justices LeBel, Fish, and Abella―found that what the police did amounted to an intercept. Justice Abella’s opinion notes that the definition of “intercept” in s. 183 of the Code is broad and not exhaustive―it “includes listen to, record or acquire a communication or acquire the substance, meaning or purport” of a communication intended to be private. Justice Abella also insists that the understanding of “intercept” must evolve to protect private communications that use new technologies no less than those that use technologies that existed at the time the statutory provision was drafted. It “must … focus on the acquisition of informational content and the individual’s expectation of privacy at the time the communication was made” (par. 36). Text messaging is not fundamentally different from ordinary conversation, and must be protected in the same way; nor should the specific way in which one telecommunications provider handles text messages deprive its clients of their privacy rights. The fact that Telus stores its clients’ messages on its computers is thus immaterial. The police ought to have obtained a Part VI warrant to read the messages they were interested in.

Two other judges―Justices Moldaver and Karakatsanis―agree with this result, but they prefer not to decide whether what the police did really was an intercept. It is enough that it was functionally similar to one. Because the benefit the police derived from proceeding as they did was the same as they would have derived from reading text messages as they were sent, for which a Part VI warrant is incontrovertibly required, it was not enough to proceed under a general warrant, as they did, since a general warrant is only available when no other procedure provided by the Code is relevant. Justice Moldaver accepts “that as a technical matter, what occurred here was different from what would occur pursuant to a Part VI authorization,” but not “however, that that fact is determinative in light of the identical privacy interests at stake” (par. 68). The privacy interests at stake help us understand the purpose of the protections which Parliament crafted before any investigative technique contemplated by the Code can be authorized, and the general warrant should not be available for police to circumvent these purposes by somewhat modifying the form, without altering the substance, of the investigative techniques they use.

Chief Justice McLachlin and Justice Cromwell dissent. They argue that what the police sought to do here was not to intercept private communications, but to obtain the disclosure of communications that had been (lawfully) intercepted by someone else (namely Telus), which is outside the scope of s. 183 of the Code, making a Part VI warrant unnecessary. They also disagree with Justice Moldaver’s arguments that general warrants should be used only exceptionally and not as a matter of course.

Interestingly, Justice Cromwell’s opinion looks more like a majority one (for example it uses the internal headings that are usual in majority opinions) than Justice Abella’s does. Justice Abella’s opinion also reads like a response to Justice Cromwell’s arguments―something more commonly seen in, and perhaps more suitable for, a dissent than a plurality opinion. I wonder, though of course we are likely never to know for sure, whether Justice Cromwell’s opinion was intended to be that of the majority, or at least the plurality, albeit a plurality dissenting as to the result. Perhaps Justice Abella’s would-be dissent persuaded Justice LeBel, and maybe Justice Fish, and they switched from agreeing with Justice Cromwell to agreeing with her. (I’m guessing that Justice Fish, who is the Court’s most consistent civil libertarian, more likely agreed with Justice Abella from the start.)

For what it’s worth, I agree with the outcome of the case. The majority is right that Telus’ peculiar ways of handling text messages shouldn’t matter, and that privacy protections should be, as far as possible, consistent across the different ways in which we communicate. Whether Justice Abella’s approach or Justice Moldaver’s is preferable, I cannot tell. I think both opinions are thoughtful and interesting (and I have only given the bare bones of Justice Moldaver’s here). I hope that they will come to some form of synthesis in the future.

NOTE: This happens to be my 200th post. It took me a year less 10 days. Not too bad, I daresay.

A Forecast on Suing the Weatherman

First of all, apologies for my disappearance. The last two weeks have been hectic, and I have been most neglectful of the blogging duties. I hope to resume them now.

My first comeback post will be a lighthearted one though. The New York Times Magazine has an entertaining piece―actually, an adaptation of an excerpt from a book on prognostication by Nate Silver―on the science and art (there is, still, a good deal of art) of the weather forecast. Whatever we might think in our more cynical days, Mr. Silver says that the accuracy of these forecasts has increased a great deal over the last few decades:

In 1972, the [National Weather S]ervice’s high-temperature forecast missed by an average of six degrees [Fahrenheit] when made three days in advance. Now it’s down to three degrees. … Perhaps the most impressive gains have been in hurricane forecasting. Just 25 years ago, when the National Hurricane Center tried to predict where a hurricane would hit three days in advance of landfall, it missed by an average of 350 miles. If Hurricane Isaac, which made its unpredictable path through the Gulf of Mexico last month, had occurred in the late 1980s, the center might have projected landfall anywhere from Houston to Tallahassee, canceling untold thousands of business deals, flights and picnics in between — and damaging its reputation when the hurricane zeroed in hundreds of miles away. Now the average miss is only about 100 miles.

Now 100 miles still seems like a lot to me. (It’s comparable with the radius of a hurricane itself.) Still, the accuracy of the forecasts is improving – fast. Now to the legal part of the post.

For the moment, even if I rely on a negligently prepared forecast (and as Mr. Silver notes, it is not “unheard-of for a careless forecaster to send in a 50-degree reading as 500 degrees”―with all the consequences imaginable, if the error is not caught before the figure is fed into the computer and wrecks all the forecasts it produces) and have a miserable day as a result, or even suffer material losses, I cannot sue the weatherman. Technically, that is because he owes no duty of care to the general public. But then the manufacturer of ginger beer in Donoghue v. Stevenson also thought he owed no duty of care to those who drank his snail-infested brew.

It is said that courts quietly subsidized the industrial revolution by employing all manner of legal doctrines to deny compensation to its victims―workers, consumers, and bystanders alike. But as the revolution became the new normal, the subsidy was no longer necessary, and so it was withdrawn, sometimes by legislation, and sometimes by courts themselves. One way of seeing the Donoghue v. Stevenson case is that it was an indication that thenceforth, manufacturers would be held to a duty of care towards consumers, where none was imposed before―not only because of the injustice of denying compensation to the consumer, but also because manufacturing was no longer a new, and therefore inherently uncertain, process that could never really be trusted to deliver consistently reliable results.

So here’s my question. As the accuracy and reliability of the forecasts increase, will there come a point at which courts impose a duty of care on forecasters, so that suing the weatherman will no longer be impossible?

In with the New?

Last week, I suggested that “[n]ew technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.” But there is no doubt that our legal rules, unlike perhaps moral ones, need updating when new technology comes along. How this updating is to happen is a difficult question. Lon Fuller, in his great article on “The Forms and Limits of Adjudication,” distinguished “three ways of reaching decisions, of settling disputes, of defining men’s relations to one another,” which he also called “forms of social ordering”: elections (and, one has to assume, resulting legislation), contract, and adjudication. All three can be and are used in developing rules surrounding new technologies, and the distinctions between them are not as sharp as Fuller suggested, because they are very much intertwined. Some recent stories are illustrative.

One is a report in the New York Times about a settlement between an unspecified group of plaintiffs and Facebook regarding Facebook’s approach to what it calls “sponsored stories”, which tell us that such and such friends “like” a certain company’s page. Pursuant to the settlement, Facebook “will amend its terms of use to explain that users give the company permission to use their name, profile picture and content [and] offer settings that let users control which of their actions — which individual like, listen, or read — will appear in Sponsored Stories.” More than the (substantial) costs to Facebook, what interests me here is the way in which this settlement establishes or changes a rule – not a legal rule in a positivist sense, but a social rule – regulating the use of individuals’ names and images in advertising, introducing a requirement of consent and an opt-out opportunity.

What form of social ordering is at work here? Contract, in an immediate sense, since a settlement is a contract. But adjudication too, in important ways. For one thing, the settlement had to be approved by a court. And for another, and more importantly, it seems more than likely that the negotiation would not have happened outside the context of a lawsuit which it was meant to settle. Starting, or at least credibly threatening, litigation is probably the only way for a group of activists and/or lawyers to get a giant such as Facebook to negotiate with them – in preference to any number of other similar groups – and thus to gain a disproportionate influence on the framing of the rules the group is interested in. Is this influence legitimate? Even apart from legitimacy, is it a good thing from a policy standpoint? For example, how do “we” – or does anyone – know that this particular group is motivated by the public interest and, assuming that it is, capable of evaluating it correctly and of being an effective negotiator? I think these are very troubling questions, but there are also no obvious ways of preventing social ordering through adjudication/negotiation even if we do conclude that it is problematic.

That is because alternative modes of social ordering are themselves flawed. Legislation is slow and thus a problematic response to new and fast-developing technologies. And adjudication (whether in a “pure” form – just letting courts develop rules in the process of deciding cases – or in the shape of more active judicial supervision of negotiated settlements) comes with problems of its own.

One is the subject of a post for Forbes by Timothy B. Lee, who describes how judges, because they are removed from the communities that are subject to and have to live with the rules they develop, produce rules that do not correspond to the needs of those communities. One example he gives is that “many computer programmers think they’d be better off without software patents,” yet one of the leading judges who decides cases on whether there should be such patents “doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them.” Mr. Lee believes that this would be different if the judges in question happened to have friends or family members among the ranks of software developers. Perhaps – but, as he acknowledges, it is not possible for judges to have personal connections in every walk of life. Even trying to diversify the courts will only do so much. Furthermore, the individual experiences on which Mr. Lee thinks judges should rely might be atypical and thus tend to produce worse, rather than better, rules. Here too, questions about just how much judging ought to be informed by personal experience – as a matter both of policy and of legitimacy – are pressing.

Another set of questions about the courts’ handling of new technologies is the subject of a great paper by Kyle Graham, a professor at Santa Clara University and the author of the entertaining Non Curat Lex blog. Focusing on the development of liability rules surrounding new technologies, and using the examples of some once-new gadgets, mostly cars and planes, prof. Graham points out that

[t]he liability rules that come to surround an innovation do not spring immediately into existence, final and fully formed. Instead, sometimes there are false starts and lengthy delays in the development of these principles. These detours and stalls result from five recurring features of the interplay between tort law and new technologies … First, the initial batch of cases presented to courts may be atypical of later lawsuits that implicate the innovation, yet relate rules with surprising persistence. Second, these cases may be resolved by reference to analogies that rely on similarities in form, and which do not wear well over time. Third, it may be difficult to isolate the unreasonable risks generated by an innovation from the benefits it is perceived to offer. Fourth, claims by early adopters of the technology may be more difficult to recover upon than those that arise later, once the technology develops a mainstream audience. Fifth, and finally, with regard to any particular innovation, it may be impossible to predict whether, and for how long, the recurring themes within tort law and its application that tend to yield a “grace” period for an invention will prevail over those tendencies with the opposite effect. (102)

I conclude, with my customary optimism, that there seem to be no good ways of developing rules surrounding new technologies, though there is a great variety of bad ones. But some rules there must be, so we need to learn to live with rotten ones.

Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of both a speaker potentially subject to censorship and an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Columbia. Back in April, prof. Volokh and a lawyer from California, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of the internet search results is done by computer algorithms rather than by human beings is of no import. It “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).

In a recent op-ed in the New York Times, prof. Wu challenges the latter claim. For him, it matters a lot whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people, sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and doing so to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter that computer algorithms are designed by humans. A machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster.

Prof. Volokh responds to the arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so if made by computers; but to the extent they are constitutionally protected – and it is a large one – the interposition of an algorithm should not matter at all.

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is somewhat less protective of free speech than its American counterpart. However, I do not think that these differences, however significant they are in some cases, would or should matter here.

The argument prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That concern is one it shares with the traditional media to which prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit to all manner of content which it neither created nor even vetted. It is different too in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.” To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or host his or her content. Still, when its decisions not to do so are motivated not by commercial considerations, but by requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. For example, one has to wonder whether there is a risk of a conflict of interest between its roles as speaker and censors’ agent – whether it will not be tempted to trade greater compliance with the regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.