Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and, especially, of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” to do these sorts of things. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment, and towards “global consciousness” ― whatever that too may be. Google can demote, in its search results, websites that it deems to be promoters of “hate,” as indeed it already demotes those it considers to be copyright-infringers. Apple could block news sources it considers biased from its App Store, as indeed it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its News Feed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just as we can with spell-check. But I’m pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and other tech companies, which are at once the regulators of their users’ speech and the subjects of government regulation, and may well be tempted to act in the former role so as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.
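To make the spell-check analogy concrete, here is a minimal sketch of what such an advisory tool might look like. It is purely hypothetical: the word list is a made-up stand-in for whatever classifier a platform might actually use, and nothing here reflects any real product. The point is only that the tool advises, while the user remains free to ignore it, just as with spell-check:

```python
# A toy "spell-checker for hate": the word list below is a made-up
# stand-in for a real classifier. Crucially, the user can always
# override the advice, just as one can ignore spell-check.

FLAGGED_TERMS = {"idiot", "moron", "scum"}  # hypothetical examples

def flagged_words(draft: str) -> list[str]:
    """Return any flagged words found in the draft."""
    words = (w.strip(".,!?;:").lower() for w in draft.split())
    return [w for w in words if w in FLAGGED_TERMS]

def post(draft: str) -> bool:
    """Advise on the draft, but leave the final decision to the user."""
    flags = flagged_words(draft)
    if flags:
        answer = input(f"This really isn't very nice ({', '.join(flags)}). "
                       "Are you sure you want to post it? [y/n] ")
        if answer.strip().lower() != "y":
            print("Draft not posted.")
            return False
    print("Posted:", draft)
    return True
```

Even this toy shows where the trouble starts: everything turns on who compiles the word list, and a platform that wanted to be heavy-handed could simply make the prompt impossible to dismiss.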

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we’d be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to be in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Online Gambling

Over at the EconLog, David Henderson has an interesting post that allows me to come back to some themes I used to carp on quite a bit, but haven’t returned to in a while now. In a nutshell, it is the story of antiwar.com, a website that, naturally enough, illustrates its message with some graphic imagery. Google concluded that the images contravened its policies, and withdrew the ads it placed on the website, causing the website to lose revenue on which it had relied. Apparently, Google does not want its ads to appear next to any picture that would not be “okay for a child in any region of the world to see,” which would disqualify many iconic pictures taken in wars past ― and not just wars, one might surmise.

Prof. Henderson points out that this is not “censorship,” since Google is a private firm acting in a purely commercial capacity here. But, he argues, this is still a “gamble” on Google’s part:

Google faces a tradeoff. On the one hand, there are probably many advertisers, possibly the vast majority, who don’t want their ads to appear alongside pictures of blood and gore, people being tortured, etc. So by being careful that web sites where ads appear do not have such pictures, Google gets more ad revenue than otherwise. On the other hand, Google is upsetting a lot of people who see it as dictating content. This will cause some people to shun Google. … [I]f the upset spreads, there could be a role for another competitor.

Perhaps so, although as I noted before, Google’s competitors ― such as Apple, with its iTunes store ― also seem to be using their own platforms to present sanitized versions of reality.

And as I also pointed out in the past, Google’s position with respect to freedom of expression is inherently conflicted. On the one hand, Google sees itself as engaged in the business of expression, arguing that its search algorithms reflect choices of an editorial nature that deserve constitutional protection. On the other, when it exercises control over its various platforms (whether the search engine itself or YouTube, the ad service, etc.), it can, and is frequently asked to, act as an agent for governments ― and not only democratic governments either ― who seek to censor expression they dislike. There is a danger that Google will choose to sacrifice some of its users’ freedom in order to protect its own by ingratiating itself with these governments. Furthermore, in exercising control over its platforms, Google may be coming under pressure not only from governments, but also from commercial partners it needs to keep on board ― or at bay ― and, possibly, from various “civil society” actors too. The antiwar.com story is only one small part of this broader trend.

This is, or should be, well understood ― which makes me think that Google is not the only party in this story who took a gamble. Antiwar.com did too, as does anyone else who comes to rely on Google or other similar platforms, despite knowing the pressures, commercial and otherwise, that these platforms will come under. If anything, it is remarkable how successful this gamble usually turns out to be. Still, it is a bet, and will sometimes turn out badly.

I blogged last year about an argument by Ethan Zuckerman to the effect that the ad-based business model was the internet’s “original sin.” Mr. Zuckerman made his case from the perspective of the users, who must accept privacy losses resulting from tracking and profiling by advertisers in exchange for free access to ad-supported content. The antiwar.com story suggests that, for some content-producers at least, accepting the revenue and, as prof. Henderson points out, the convenience that come with the current business model and its major players was also a Faustian bargain. And yet, as with users, it is not quite clear what alternative arrangement would be viable.

In the face of what some may well be tempted to interpret as a market failure, it seems reasonable to expect calls for regulation, despite what libertarian types such as prof. Henderson or the antiwar.com people themselves may say. There will be, and indeed, as I noted in the post about Apple linked to above, there already are, people calling for the regulation of online platforms, in order to make their behaviour conform to the regulators’ ideas about freedom of expression. Yet we should not forget that, on the whole, the net contribution of Google and the rest of them to our ability to express ourselves and to find and access the thoughts of others has clearly been positive ― and certainly much more positive than that of governments. While attempts at making a good thing even better would be understandable, they too would be a gamble, and a risky one.

The Power of Google, Squared

I wrote, a while ago, about “the power of Google” and its role in the discussion surrounding the “right to be forgotten” ― a person’s right to force search engines to remove links to information about that person that is “inadequate, irrelevant or excessive,” whatever these things mean, even if factually true. Last week, the “right to be forgotten” was the subject of an excellent debate ― nuanced, informative, and with interesting arguments on both sides ― hosted by Intelligence Squared U.S. I encourage you to watch the whole thing, because there is really too much there for a blog post.

I will, however, sketch out what I think was the most persuasive argument deployed by the opponents of the “right to be forgotten” ― with whom, admittedly, I agreed before watching the debate, and still do. I will also say a few words about the alternative solutions they proposed to what they agreed is a real and serious problem ― the danger that the prominence, in search results, of a story about some stupid mistake or, worse, an unfounded allegation made about a person comes to mar his or her life forever, with no second chances possible.

Although the opponents of the “right to be forgotten,” as well as its proponents (I will refer to them as, simply, the opponents and the proponents, for brevity’s sake), made arguments sounding in high principle as well as more practical ones, the one on which the debate mostly focused, and which resonated most with me, concerned the institutional arrangements that are needed to implement the “right to be forgotten.” The way it works ― and the only way it can work, according to one of the opponents, Andrew McLaughlin (the CEO of Digg and a former Director of Public Policy for Google) ― is that the person who wants a link to information about him or her removed applies to the search engine, and the search engine decides, following a secretive process and applying criteria of which it alone is aware. If the request is denied, the person who made it can apply to privacy authorities or go to court to reverse the decision. If, however, the request is granted, nobody can challenge that decision. Indeed, if the European authorities had their way, nobody would even know that the decision had been made. (Telling the owner of the page to which a link is being deleted, as Google has been doing, more or less defeats the purpose of the “right to be forgotten.”)

According to the opponents, this has some very unfortunate consequences. For one thing, the search engines have an incentive to err on the side of granting deletion requests ― at the very least, this spares them the hassle of fighting appeals. One of the proponents, Chicago professor Eric Posner, suggested that market competition could check this tendency, but the opponents were skeptical that, even if users know that one search engine tends to delete more links than another, this would make any noticeable difference to its bottom line. Mostly, the proponents argued that we can rely on the meaning of the admittedly vague terms “inadequate, irrelevant or excessive” to be worked out over time, so that the decisions to delete a link or not become easier and less controversial. But another consequence of the way in which the “right to be forgotten” is implemented would actually prevent that, the opponents, especially Harvard professor Jonathan Zittrain, argued. Since nobody can challenge a decision to delete a link, the courts will have no opportunity to refine the understanding of the concepts involved in the “right to be forgotten.” The upshot is that, according to the opponents anyway, the search engines (which, these days, mostly means Google) end up with a great deal of unchecked discretionary power. This is, of course, ironic, because the proponents of the “right to be forgotten” emphasize concerns about “the power of Google” as one of the reasons to support it, as typically do others who agree with them.

If the opponents are right that the “right to be forgotten” cannot be implemented in a way that is transparent, fair to all the parties concerned, at least reasonably objective, and that does not increase, instead of checking, “the power of Google,” what are the alternatives? The opponents offered at least three, each of them interesting in its own way. First, Mr. McLaughlin suggested that, instead of a “right to be forgotten,” people should have a right to provide a response, which search engines would have to display among their results. Second, we could have category-specific measures directed at some types of information particularly likely to be prejudicial to people, or of little public interest. (It is worth noting, for example, that in Canada at least, we already do this with criminal court decisions involving minors, which are anonymized; as are family law cases in Québec.) And third, Mr. McLaughlin insisted that, with the increased availability of all sorts of information about everyone, our social mores will need to change. We must become more willing to forgive, and to give people second chances.

This is perhaps optimistic. Then again, so is the proponents’ belief that a corporation can be made to weigh, impartially and conscientiously, considerations of the public interest and the right to “informational self-determination” (which is, apparently, the theoretical foundation of the “right to be forgotten”). And I have argued already that new social norms will in fact emerge as we get more familiar with the internet environment in which we live, and in which our digital shadows are permanently unstuck in time. In any case, what is certain is that these issues are not going to go away anytime soon. It is also clear that this Intelligence Squared debate is an excellent place to start, or to continue, thinking about them. Do watch it if you can.

Disrupting C-36

The Economist has published a lengthy and informative “briefing” on the ways in which the internet is changing prostitution ― often, although not always, for the benefit of sex workers. As it explains, the effects of new technologies on what is usually said to be the oldest profession are far-reaching, and mostly positive ― insofar as they make sex work safer than it used to be. If the federal government had been concerned with protecting sex workers, and if Parliament truly had “grave concerns about … the risks of violence posed to those who engage in” prostitution, as it professed to in the preamble of the so-called Protection of Communities and Exploited Persons Act, S.C. 2014 c. 25, better known as Bill C-36, they would have considered the internet’s potential for benefiting sex workers.

But as the government’s and Parliament’s chief concern was apparently to make prostitution vanish by a sleight of criminal law’s heavy hand, its middle finger raised at the Supreme Court, they instead sought to drive sex workers off the internet. The new section 286.4 of the Criminal Code, created by C-36, criminalizes “[e]veryone who knowingly advertises an offer to provide sexual services for consideration,” although section 286.5 exempts those advertising “their own sexual services.” In other words, if a sex worker has her own website, that’s tolerated ― but if she uses some other service, at least one geared specifically to sex workers and their potential customers, the provider of that service is acting illegally.

Meanwhile, according to the Economist, in the market for sex, as in so many others,

specialist websites and apps are allowing information to flow between buyer and seller, making it easier to strike mutually satisfactory deals. The sex trade is becoming easier to enter and safer to work in: prostitutes can warn each other about violent clients, and do background and health checks before taking a booking. Personal web pages allow them to advertise and arrange meetings online; their clients’ feedback on review sites helps others to proceed with confidence.

Above all, the ability to advertise, screen potential clients, and pre-arrange meetings online means that sex workers need not look for clients in the most dangerous environment for doing so ― on the street. Besides, “the internet is making it easier to work flexible hours and to forgo a middleman,” and indeed “it is independent sex workers for whom the internet makes the biggest difference.”

The internet is also making sex work safer. Yet the work of websites that “let [sex workers] vouch for clients they have seen, improving other women’s risk assessments,” or “where customers can pay for a background check to present to sex workers” is probably criminalized under the new section 286.2(1) added to the Criminal Code by C-36, which applies to “[e]veryone who receives a financial or other material benefit, knowing that it is obtained by or derived directly or indirectly from the commission of an offence under subsection 286.1(1)” ― the “obtaining sexual services for consideration” offence. Forums where sex workers can provide each other with tips and support can be shut down if they are associated with or part of websites that advertise “sexual services.”

As the Economist points out, the added safety (both from violent clients and law enforcement), convenience, and discretion can attract more people into sex work. So trying to eliminate the online marketplace for sex makes sense if one’s aim is, as I put it here, “to drive people out of sex work by making it desperately miserable” ― but that’s a hypocritical approach, and not what C-36 purports to do.

In any case, criminalization complicates the work of websites that help sex workers and their clients, but does not stop them. They are active in the United States, despite prostitution being criminalized in almost every State ― though they pretend that their contents are fictional. They base their activities in more prostitution-friendly jurisdictions. A professor interviewed by the Economist points out that a ban on advertising sexual services in Ireland “has achieved almost nothing.” There seems to be little reason to believe that the ban in C-36, which has a large exemption for sex workers advertising themselves, would fare differently.

The Economist concludes that “[t]he internet has disrupted many industries. The oldest one is no exception.” Yet the government and Parliament have been oblivious to this trend, as they have been oblivious to most of the realities of sex work. One must hope that courts, when they hear the inevitable challenge to the constitutionality of C-36, will take note.

Felix Peccatum

There was an interesting piece in The Atlantic a couple of weeks ago, in which Ethan Zuckerman argued that we should, as the subtitle would have it, “ditch the [internet’s] ad-based business model and build a better web.” Accepting that internet content should be free to access, that online services should be free to use, and that the costs of hosting the content and providing the services can be paid for by tying them to advertising was, Mr. Zuckerman says, “the original sin of the web.” It sounded like a good idea at the time, but turned out badly. It is time to repent, and to mend our ways. But is it?

Mr. Zuckerman argues that the ad-based business model created an “internet [that] spies at us at every twist and turn.” In order to persuade potential investors to support a nascent website, its creators must convince them that the ads on that site “will be worth more than everyone else’s ads.” And even if the ads are not actually worth very much, the potential for improvement is in itself something that can be marketed to investors. Making the ads on a website worth more than those on others ― say, on Facebook ― requires “target[ing] more and better than Facebook.” And that, in turn, “requires moving deeper into the world of surveillance,” to learn ever more information about the users, so as to make the targeting of ads to them ever more precise.

Over the years, the progressive creep of online tracking and surveillance has

trained Internet users to expect that everything they say and do online will be aggregated into profiles (which they cannot review, challenge, or change) that shape both what ads and what content they see.

Despite occasional episodes of unease over what is going on, even outright manipulation by the providers of online services is not enough to turn their users off. As with private service providers, says Mr. Zuckerman, so with governments:

[u]sers have been so well trained to expect surveillance that even when widespread, clandestine government surveillance was revealed by a whistleblower, there has been little organized, public demand for reform and change.

Trust in government generally has never been lower, yet it seems that online, anything goes.

Mr. Zuckerman points out that the ad-based business model had ― and still has ― upsides too. When it took off, it was pretty much the only way “to offer people free webpage hosting and make money.” Initially at least, most people lacked the means ― the technical means, never mind financial resources ― to pay for online services. Offering them “free” ― that is to say, by relying on advertising instead of user fees to pay for them ― allowed people to start using them who would never have done so otherwise:

[t]he great benefit of an ad supported web is that it’s a web open to everyone. It supports free riders well, which has been key in opening the web to young people and those in the developing world. Ad support makes it very easy for users to “try before they buy.”

Indeed,

[i]n theory, an ad-supported system is more protective of privacy than a transactional one. Subscriptions or micropayments resolved via credit card create a strong link between online and real-world identity, while ads have traditionally been targeted to the content they appear with, not to the demo/psychographic identity of the user.

In practice, well, we know how that worked out.

Besides, says Mr. Zuckerman, not only has the ad-based internet done away with our privacy, it also produces “clickbait” that nobody really wants to read, is increasingly centralized, and breaks down into interest-based echo chambers.

The solution on which Mr. Zuckerman rests most of his hopes for a redemption of the web is a move from ad-based to subscription-based business models. He points out that Google already offers companies and universities the possibility of paying for its products in exchange for not showing their employees or students the ads that support its free Gmail service. And he is confident that “[u]sers will pay for services that they love,” even if a shift to subscription-based business models would also mean that users would simply abandon those for which they have no deep affection. This, in turn, would produce “more competition, less centralization and more competitive innovation.” A shift to subscription-based web services would require new means of payment ― something with lower transaction costs than credit-card systems or PayPal. Such technologies do not yet exist, or at least are not yet fully ready, but Mr. Zuckerman is hopeful that they will come along, and allow us to move away from the “fallen” ad-based internet.

But even if a return to the online garden of Eden ― which, much like the “real” one, never actually existed ― were technically possible, would it be desirable? Mr. Zuckerman acknowledges that whatever business model we turn to, “there are bound to be unintended consequences.” Unintended, perhaps, but not entirely unforeseeable. Even if transaction costs can be lowered, a subscription-based internet would be less accessible for many people, in particular those in the less well-off countries, the young, and the economically disadvantaged. Those who, in many ways, need it most. Besides, it seems doubtful to me that a subscription-based internet would generate more innovation than the current version. As Mr. Zuckerman points out, the ad-based model has the virtue of letting users try new services easily. It also means that abandoning a service does not mean throwing away the money paid to subscribe to it. It is thus friendlier to newcomers, and less favourable to incumbents, than a subscription-based model. (Just think of the number of new media sources that developed online in the last 15 years ― and compare it with, say, the number of new newspapers that appeared in the previous decades.)

The tracked, surveilled ad-based web has its downsides. But it lowered barriers to entry and allowed the emergence of new voices which, I strongly suspect, could not have been heard without it. (By way of anecdote, I had enough doubt about this blogging thing to begin with that I’m pretty sure I wouldn’t have started if I had to pay for it too. Alternatively, I don’t suppose anyone reading this now would have been willing to pay me!) If embracing ads was indeed the internet’s original sin, then I believe that it was, as Augustine suggested of the original original one, felix peccatum ― a fortunate one.

Searching Freedom

I have already blogged (here and here) about the debate on whether the output of search engines such as Google should be protected by constitutional guarantees of freedom of expression, summarizing arguments by Eugene Volokh and Josh Blackman. These arguments are no longer merely the stuff of academic debate. As both prof. Volokh and prof. Blackman report, the U.S. District Court for the Southern District of New York yesterday endorsed the position (which prof. Volokh and others defend) that search results are indeed entitled to First Amendment protection, in Zhang v. Baidu. Although I do not normally comment on American judicial decisions, this one is worth looking at, because it both gives us an idea of the issues that are likely to arise in Canada sooner rather than later, and can serve as a reminder that these issues will have to be approached somewhat differently from the way they are in the United States.

Zhang was a suit by a group of pro-democracy activists claiming that Baidu, a Chinese search engine, acted illegally in excluding, from the search results it displays in the United States, those that have to do with the Chinese democracy movement and a number of topics such as the Tiananmen Square protests, including articles the plaintiffs themselves had written. The plaintiffs alleged that, in doing so, Baidu engages in censorship at the behest of the Chinese government. Legally, they claimed that Baidu conspired to violate and violated their civil rights under federal and state law.

Baidu moved to dismiss, arguing that the constitutional protection of freedom of speech applied to its search results, preventing the imposition of liability. Relying on jurisprudence protecting a speaker’s right to choose the contents of its message, and in particular not to convey a message it does not want to convey (whether a newspaper’s right not to print a reply from a candidate for public office whom it criticized, or parade organizers’ right not to allow the participation of a group they disagreed with), the Court agreed:

In light of those principles, there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation. … The central purpose of a search engine is to retrieve relevant information from the vast universe of data on the Internet and to organize it in a way that would be most helpful to the searcher. In doing so, search engines inevitably make editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information (for example, on the first page of the search results or later). (7)

The search engines’ “editorial judgments” are constitutionally protected, in the same way as the editorial judgments of newspapers, guidebook authors, or any other speakers who choose what message or information to convey.

Nor does the fact that search-engine results may be produced algorithmically matter for the analysis. After all, the algorithms themselves were written by human beings, (8)

says the Court, endorsing prof. Volokh’s (and others’) view of the matter.

The Court makes a couple of other points that are worth highlighting. One is that

search engine operators (at least in the United States and given today’s technology) lack the physical power to silence anyone’s voices, no matter what their alleged market shares may be, (12)

and that an internet user who fails to find relevant information with one search engine can easily turn to another one. (The matter, really, seems to be not so much “physical power” as monopoly.) Another is that the ads displayed by a search engine might be entitled to less protection than the actual search results, at least insofar as “commercial speech” is less protected than other sorts. Last but not least, the Court finds

no irony in holding that Baidu’s alleged decision to disfavor speech concerning democracy is itself protected by the democratic ideal of free speech. … [T]he First Amendment protects Baidu’s right to advocate for systems of government other than democracy (in China or elsewhere) just as surely as it protects Plaintiffs’ rights to advocate for democracy.

I find this largely persuasive. Still, we might want to ask some questions. For instance, the point about search engines not being monopolists and users having alternative means of finding information is only true so long as the users know what it is they are looking for. If one doesn’t know that, say, there are other views about democracy in China than whatever the Communist Party line happens to be, one will not think that something is missing from Baidu’s search results, and one will not try using its competitors to find it. But, of course, the same could be said about partisan media, or other biased sources of information. For all the problems that these create, we still think that the problems that regulating them would cause would be even worse. Perhaps there is something special about the internet that makes this calculation inapplicable ― but, if so, the onus is on those who think so to prove it.

Quite apart from the constitutional issues, there is also the question ― which the Court does not address ― of whether the plaintiffs’ claims could have succeeded anyway. At first sight ― and admittedly I know little about American civil rights legislation ― they do not seem especially plausible. As I pointed out in a previous post on this topic, it is by no means clear that there is, whether under anti-discrimination law or otherwise, “some kind of baseline right to have Google [or another search engine] take notice of you”.

This brings me to the point I wanted to make about the differences between American and Canadian law in this context. As the Supreme Court of Canada held in RWDSU v. Dolphin Delivery, [1986] 2 S.C.R. 573, the Charter does not apply to purely private disputes resolved under common law rules (although its “values” are to be taken into account in the development of the common law). This is in contrast to the situation in the United States, where courts consider themselves bound by the First Amendment even when resolving disputes between private parties. If a case such as Zhang arose in Canada, and the plaintiffs formulated their claims in tort (rather than as violations of, say, the Canadian Human Rights Act), the defendant search engine would not have been able to invoke the Charter’s guarantee of freedom of expression. This doesn’t mean that the outcome would, or should, be different ― but the route by which it could be reached would have to be.

Charter, Meet Google

Josh Blackman has just published a fascinating new essay, “What Happens if Data Is Speech?” in the University of Pennsylvania Journal of Constitutional Law Online, asking some important questions about how courts should treat ― and how we should think about ― attempts to regulate the (re)creation and arrangement of information by “algorithms parsing data” (25). For example, Google’s algorithms suggest search queries on the basis of our and other users’ past searches, and then sort the available links once we hit ‘enter’. Can Google be ordered to remove a potential query from the suggestions it displays, or a link from search results? Can it be asked to change the way in which it ranks these results? These and other questions will only become more pressing as these technologies become ever more important in our lives, and as the temptation to regulate them one way or another increases.

One constant theme in the literature on this topic that prof. Blackman reviews is the significance, if any, of the fact that “with data, it is often difficult to find somebody with the traits of a typical speaker” (27). It thus becomes tempting to conclude that algorithms working with data can be regulated without regard for freedom of speech, since no person’s freedom is affected by such regulation. If at least some uses of data are, nevertheless, protected as free speech, there arises another issue which prof. Blackman highlights ― the potential for conflict between any such protection, and the protection of privacy rights, which takes the form of prohibitions on speaking about someone (in some way).

The focal point of these concerns, for now anyway, is search engines, and particularly Google. Prof. Blackman points out that, as Google becomes our gateway to more and more of the information we need, it acquires a great deal of power over what information we ever get to access. Not showing up high in Google’s search results becomes, in effect, a sentence of obscurity and irrelevance. And while it will claim that it only seeks to make its output more relevant for users, the definition of “relevance” gives Google the ability to pursue an agenda of its own, whether it is punishing those who, in its own view, are trying to game its ranking system, as prof. Blackman describes, or currying favour with regulators or commercial partners, or even implementing some kind of moral vision for what the internet should be like (I describe these possibilities here and here). All that, combined with what seems to some to be the implausibility of algorithms as bearers of the right to freedom of speech, can make it tempting for legislators to regulate search engines. “But,” prof. Blackman asks, “what poses a greater threat to free speech ― the lack of regulations or the regulations themselves?” (31) Another way of looking at this problem is to ask whether the creators and users of websites should be protected by the state from, in effect, regulation by Google, or whether Google should be protected from regulation by the state (32).

The final parts of prof. Blackman’s essay address the question of what happens next, when ― probably in the near future ― algorithms become not only tools for accessing information but, increasingly, extensions of individual action and creativity. If the line between user and algorithm is blurred, regulating the latter means restricting the freedom of the former.

Prof. Blackman’s essay is a great illustration of the fact that the application of legal rules and principles to technologies which did not exist when they were developed can often be difficult, not least because these new technologies sometimes force us to confront the theoretical questions which we were previously able to ignore or at least to fudge in the practical development of legal doctrine. (I discussed one example of this problem, in the area of election law, here.) For instance, we have so far been able to dodge the question whether freedom of expression really serves the interests of the speaker or the listener, because for just about any expressive content there is at least one speaker and at least one listener. But when algorithms (re-)create information, this correspondence might no longer hold.

There are many other questions to think about. Is there some kind of baseline right to have Google take notice of you? Is the access to online information of such public importance that its providers, even private ones, effectively take on a public function, and maybe incur constitutional obligations in the process? How should we deal with the differences of philosophies and constitutional frameworks between countries?

This last question leads me to my final observation. So far as I can tell ― I have tried some searching, though one can always search more ― nothing at all has been written on these issues in Canada. Yet the contours of the protection of freedom of expression under the Canadian Charter of Rights and Freedoms are in some ways quite different from those under the First Amendment. When Canadian courts come to confront these issues ― when the Charter finally meets Google ― they might find some academic guidance helpful (says one conceited wannabe academic!). As things stand now, they will not find any.