The Power of Google

I seem never to have blogged about the “right to be forgotten” enshrined into European law by the European Court of Justice (ECJ) in a judgment issued in May. An interesting recent blog post by Paul Bernal allows me to offer a few random observations on the matter. Better late than never, right?

In a nutshell, the “right to be forgotten” allows a person to request a search provider (for example, Google) to remove links to “inadequate, irrelevant or excessive” ― even if factually correct ― information about that person from search results. If the search provider refuses, the person can ask national privacy authorities to compel the removal. Google is most dissatisfied with being asked to handle thousands of such requests and to weigh the privacy interests of those who make them against the public interest in access to information (as well as the freedom of expression of those providing the information in the first instance). It says that it cannot perform this balancing act, and indeed its first stabs at it have sometimes been very clumsy ― so much so that, as prof. Bernal explains, people have suspected it of doing a deliberately poor job so as to discredit the whole concept of the right to be forgotten.
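
Since the mechanics matter to the argument, here is a deliberately crude sketch of what delisting, as opposed to deletion, amounts to. Everything in it is invented for illustration (the names, the URLs, and the assumption that suppression applies to name-based queries, which reflects how the ruling has generally been understood); it is not a description of Google’s actual system.

```python
# Toy illustration of "right to be forgotten" delisting (not Google's
# actual system; all names and URLs are hypothetical). The page is not
# taken down; it is only suppressed from name-based search results.

# Hypothetical delisting table: name query -> URLs to suppress.
DELISTED = {
    "jane doe": {"https://example.com/old-story"},
}

def search(query: str, results: list[str]) -> list[str]:
    """Return results, dropping any URLs delisted for this name query."""
    suppressed = DELISTED.get(query.lower(), set())
    return [url for url in results if url not in suppressed]

hits = ["https://example.com/old-story", "https://example.com/news"]
print(search("jane doe", hits))         # only the news URL survives
print(search("unrelated query", hits))  # both URLs remain
```

The page itself stays online; only one route to it is closed off.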

Google has responded by setting up a group of experts ― ostensibly to advise on implementing the right to be forgotten but really, prof. Bernal implies, to make sure that the conversation about it happens on its own terms. And that, according to prof. Bernal, includes not paying attention to “the power of Google” ― its “[p]ower over what is found – and not found” about anyone, reflected by the way we use the phrase “to google someone”; its agenda-setting power; and its ability to influence not only journalists and experts, but also policy-makers. Prof. Bernal points out that Google creates (and tweaks) the algorithms which determine what results appear, and in what order, when a search is run, and that it has not always upheld freedom of expression at the expense of all other values. Google systematically removes links to websites due to copyright infringement, as well as for a variety of other reasons. Its right-to-be-forgotten tantrum should be viewed in that context, says prof. Bernal; we mustn’t forget Google’s power, and the variety of ways in which it exercises it.

Fair enough. I have myself written (notably here and here) about Google’s dual, and conflicted, role as at once a speaker and a censor. Google wants to be treated as a speaker ― and granted freedom of speech ― in designing its search algorithms. It also takes on the role of regulator or censor, whether on behalf of its own values and priorities (commercial or otherwise), those of its clients or partners, or those of governments. And there is a standing danger that Google will be tempted to play its role as regulator and censor of the speech of others in such a way as to gain more leeway (especially from governments) when it comes to its own.

Yet to my mind, this inherent conflict is, if anything, more reason to believe that making Google into an arbiter of private and public interests is a bad idea. The ECJ offloads the responsibility of balancing individual privacy rights against the public interest in access to information onto Google and its competitors, at least in the first instance ― but why would we want to give such a responsibility to companies with such a twisted set of incentives? Prof. Bernal is right that Google is not an unconditional defender of freedom of expression ― but instead of concluding that it might as well compromise it some more, this time in the name of privacy, isn’t that a reason for thinking that we cannot rely on it to strike the right balance between the rights and interests implicated by the right to be forgotten?

Another thing that we might want to keep in mind when we think of “the power of Google” in the context of the right to be forgotten is the nature of that power. It is not, like the power of the state, a coercive one. In a sense, Google has a great deal of market power, but the users of its search service hardly feel it as “power.” We know that we have easily accessible alternatives to Google (notably, Microsoft’s Bing, and Yahoo!). We just don’t feel (for the most part) like using them ― for whatever reason, but not because anybody forces us to. And I think it matters that the power of Google is not a collective power of people acting together (like the power of the state) but, if that’s the right word, a composite power ― the sum of a great number of individual actions, more or less insignificant by themselves. Despite the fact that, as prof. Bernal rightly points out, Google’s algorithms are not somehow natural or neutral, it is, in a real sense, a conduit for the disparate actions and interests of isolated individuals, rather than a vehicle for the expression of their collective will. To me, that makes the power of Google, at least this aspect of it, a rather less threatening one.

It is also a democratizing one. By making it easier to find information about someone, it makes such research accessible not only to those who have a staff of researchers (or police officers, or intelligence agents!) at their disposal, but to ordinary citizens. And this is precisely what worries the advocates of the right to be forgotten. It is indeed a curious right, one that apparently only exists online. Nobody says that libraries or archives should purge information about people once it becomes “irrelevant or excessive.” (Indeed, at least for now, the right to be forgotten does not even require substantive information to be taken down from the Internet, or even links to such information to be removed from ordinary websites. They must, it seems, only be expunged from search results.) So someone with a lot of time and/or money on his or her hands can still find that information. It’s those without resources to expend on an extended investigation who must be deprived of it. That too, I think, is something to keep in mind when thinking about the right to be forgotten.

This all might not amount to very much. Insofar as prof. Bernal calls for nuance and a fuller appreciation of the facts in thinking about the right to be forgotten and Google’s role in implementing it, I second him. If I have a distinct message of my own, it is probably that an actor having “power” is not, without more, a reason for pinning any particular responsibility on it. We should be wary of power, whatever its form, but it doesn’t follow that we should burden anyone powerful in whatever way we can think of. If anything, power should be checked and balanced ― balanced, that is, with countervailing powers, not with responsibilities that can, in the hands of the powerful, become excuses for further self-aggrandizement more than limits on their action.

H/t: Yves Faguy

Searching Freedom

I have already blogged (here and here) about the debate on whether the output of search engines such as Google should be protected by constitutional guarantees of freedom of expression, summarizing arguments by Eugene Volokh and Josh Blackman. These arguments are no longer merely the stuff of academic debate. As both prof. Volokh and prof. Blackman report, the U.S. District Court for the Southern District of New York yesterday endorsed the position (which prof. Volokh and others defend) that search results are indeed entitled to First Amendment protection, in Zhang v. Baidu. Although I do not normally comment on American judicial decisions, this one is worth looking at, because it both gives us an idea of the issues that are likely to arise in Canada sooner rather than later, and can serve as a reminder that these issues will have to be approached somewhat differently from the way they are in the United States.

Zhang was a suit by a group of pro-democracy activists who claimed that Baidu, a Chinese search engine, acted illegally by excluding from the search results it displays in the United States material relating to the Chinese democracy movement and a number of topics such as the Tiananmen Square protests, including articles the plaintiffs themselves had written. The plaintiffs alleged that, in doing so, Baidu engaged in censorship at the behest of the Chinese government. Legally, they claimed that Baidu conspired to violate, and violated, their civil rights under federal and state law.

Baidu moved to dismiss, arguing that the constitutional protection of freedom of speech applied to its search results, preventing the imposition of liability. Relying on jurisprudence protecting a speaker’s right to choose the contents of his message, and in particular not to convey a message he did not want to convey (whether a newspaper’s right not to print a reply from a candidate for public office whom it criticized, or parade organizers’ right not to allow the participation of a group they disagreed with), the Court agreed:

In light of those principles, there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation. … The central purpose of a search engine is to retrieve relevant information from the vast universe of data on the Internet and to organize it in a way that would be most helpful to the searcher. In doing so, search engines inevitably make editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information (for example, on the first page of the search results or later). (7)

The search engines’ “editorial judgments” are constitutionally protected, in the same way as the editorial judgments of newspapers, guidebook authors, or any other speakers who choose what message or information to convey.

Nor does the fact that search-engine results may be produced algorithmically matter for the analysis. After all, the algorithms themselves were written by human beings, (8)

says the Court, endorsing prof. Volokh’s (and others’) view of the matter.

The Court makes a couple of other points that are worth highlighting. One is that

search engine operators (at least in the United States and given today’s technology) lack the physical power to silence anyone’s voices, no matter what their alleged market shares may be, (12)

and that an internet user who fails to find relevant information with one search engine can easily turn to another one. (The matter, really, seems to be not so much “physical power” as monopoly.) Another is that the ads displayed by a search engine might be entitled to less protection than the actual search results, at least insofar as “commercial speech” is less protected than other sorts. Last but not least, the Court finds

no irony in holding that Baidu’s alleged decision to disfavor speech concerning democracy is itself protected by the democratic ideal of free speech. … [T]he First Amendment protects Baidu’s right to advocate for systems of government other than democracy (in China or elsewhere) just as surely as it protects Plaintiffs’ rights to advocate for democracy.

I find this largely persuasive. Still, we might want to ask some questions. For instance, the point about search engines not being monopolists, and users having alternative means of finding information is only true so long as the users know what it is they are looking for. If one doesn’t know that, say, there are other views about democracy in China than whatever the Communist Party line happens to be, one will not think that something is missing from Baidu’s search results, and one will not try using its competitors to find it. But, of course, the same could be said about partisan media, or other biased sources of information. For all the problems that these create, we still think that the problems that regulating them would cause would be even worse. Perhaps there is something special about the internet that makes this calculation inapplicable ― but, if so, the onus is on those who think so to prove it.

Quite apart from the constitutional issues, there is also the question ― which the Court does not address ― of whether the plaintiffs’ claims could have succeeded anyway. At first sight ― and admittedly I know little about American civil rights legislation ― they do not seem especially plausible. As I pointed out in a previous post on this topic, it is by no means clear that there is, whether under anti-discrimination law or otherwise, “some kind of baseline right to have Google [or another search engine] take notice of you”.

This brings me to the point I wanted to make about the differences between American and Canadian law in this context. As the Supreme Court of Canada held in RWDSU v. Dolphin Delivery, [1986] 2 S.C.R. 573, the Charter does not apply to purely private disputes resolved under common law rules (although its “values” are to be taken into account in the development of the common law). This is in contrast to the situation in the United States, where courts consider themselves bound by the First Amendment even when resolving disputes between private parties. If a case such as Zhang arose in Canada, and the plaintiffs formulated their claims in tort (rather than as violations of, say, the Canadian Human Rights Act), the defendant search engine would not be able to invoke the Charter‘s guarantee of freedom of expression. This doesn’t mean that the outcome would, or should, be different ― but the route by which it could be reached would have to be.

Charter, Meet Google

Josh Blackman has just published a fascinating new essay, “What Happens if Data Is Speech?”, in the University of Pennsylvania Journal of Constitutional Law Online, asking some important questions about how courts should treat ― and how we should think about ― attempts to regulate the (re)creation and arrangement of information by “algorithms parsing data” (25). For example, Google’s algorithms suggest search queries on the basis of our and other users’ past searches, and then sort the available links once we hit ‘enter’. Can Google be ordered to remove a potential query from the suggestions it displays, or a link from search results? Can it be asked to change the way in which it ranks these results? These and other questions will only become more pressing as these technologies become ever more important in our lives, and as the temptation to regulate them one way or another increases.
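
To fix ideas, here is a toy version of the first of those functions, query suggestion. It is a sketch of the general idea only (prefix matching over a log of past queries, ranked by frequency); the queries and counts are invented, and Google’s real system is, of course, far more elaborate and not public.

```python
# Illustrative only: one naive way suggestions could be derived from
# past searches. Queries and counts below are invented.
from collections import Counter

past_queries = Counter({
    "tiananmen square": 120,
    "tiananmen square protests": 80,
    "tim wu free speech": 15,
})

def suggest(prefix: str, k: int = 3) -> list[str]:
    """Return up to k past queries starting with prefix, most frequent first."""
    matches = [q for q in past_queries if q.startswith(prefix)]
    matches.sort(key=lambda q: past_queries[q], reverse=True)
    return matches[:k]

print(suggest("tia"))  # ['tiananmen square', 'tiananmen square protests']
```

Even this crude version shows why the regulatory questions arise: removing a suggestion or a link means overriding whatever the underlying data would otherwise produce.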

One constant theme in the literature on this topic that prof. Blackman reviews is the significance, if any, of the fact that “with data, it is often difficult to find somebody with the traits of a typical speaker” (27). It thus becomes tempting to conclude that algorithms working with data can be regulated without regard for freedom of speech, since no person’s freedom is affected by such regulation. If at least some uses of data are, nevertheless, protected as free speech, there arises another issue, which prof. Blackman highlights ― the potential for conflict between any such protection and the protection of privacy rights, which takes the form of prohibitions on speaking about someone (in some way).

The focal point of these concerns, for now anyway, is search engines, and particularly Google. Prof. Blackman points out that, as Google becomes our gateway to more and more of the information we need, it acquires a great deal of power over what information we ever get to access. Not showing up high in Google’s search results becomes, in effect, a sentence of obscurity and irrelevance. And while Google claims that it only seeks to make its output more relevant for users, the definition of “relevance” gives it the ability to pursue an agenda of its own, whether it is punishing those who, in its view, are trying to game its ranking system, as prof. Blackman describes, or currying favour with regulators or commercial partners, or even implementing some kind of moral vision of what the internet should be like (I describe these possibilities here and here). All that, combined with what seems to some the implausibility of algorithms as bearers of the right to freedom of speech, can make it tempting for legislators to regulate search engines. “But,” prof. Blackman asks, “what poses a greater threat to free speech ― the lack of regulations or the regulations themselves?” (31) Another way of looking at this problem is to ask whether the creators and users of websites should be protected by the state from, in effect, regulation by Google, or Google should be protected from regulation by the state (32).

The final parts of prof. Blackman’s essay address the question of what happens next, when ― probably in the near future ― algorithms become not only tools for accessing information but, increasingly, extensions of individual action and creativity. If the line between user and algorithm is blurred, regulating the latter means restricting the freedom of the former.

Prof. Blackman’s essay is a great illustration of the fact that the application of legal rules and principles to technologies which did not exist when they were developed can often be difficult, not least because these new technologies sometimes force us to confront the theoretical questions which we were previously able to ignore or at least to fudge in the practical development of legal doctrine. (I discussed one example of this problem, in the area of election law, here.) For instance, we have so far been able to dodge the question whether freedom of expression really serves the interests of the speaker or the listener, because for just about any expressive content there is at least one speaker and at least one listener. But when algorithms (re-)create information, this correspondence might no longer hold.

There are many other questions to think about. Is there some kind of baseline right to have Google take notice of you? Is the access to online information of such public importance that its providers, even private ones, effectively take on a public function, and maybe incur constitutional obligations in the process? How should we deal with the differences of philosophies and constitutional frameworks between countries?

This last question leads me to my final observation. So far as I can tell ― I have tried some searching, though one can always search more ― nothing at all has been written on these issues in Canada. Yet the contours of the protection of freedom of expression under the Canadian Charter of Rights and Freedoms are in some ways quite different from those under the First Amendment. When Canadian courts come to confront these issues ― when the Charter finally meets Google ― they might find some academic guidance helpful (says one conceited wannabe academic!). As things stand now, they will not find any.

Google as Regulator, Part Deux

A recent story, reported for example by the Globe and Mail, nicely illustrates Google’s dual, and perhaps ambiguous, role as “speaker and censor,” at once exercising, or claiming to exercise, an editorial judgment and making itself the agent of speech-restricting governments, about which I blogged some time ago. According to the Globe, “Google’s search algorithm will begin demoting websites that are frequently reported for copyright violations, a move that will likely make it more difficult to find file-sharing, Torrent and so-called file locker sites.” These websites will not be removed from search results, but they will be harder to find.

This is, it seems to me, an obvious example of “editorial judgment,” which – as I explain in more detail in the post linked to above – Google claims to exercise when designing its search algorithms. At the same time, it is an example of Google acting, in effect, as a regulator, if not, in this case, as a censor. The decision to demote allegedly copyright-infringing websites is not, one suspects, motivated by commercial considerations; at least not immediately commercial ones, since, as the Globe puts it, the move “should please Hollywood” – and other content producers – and perhaps Google considers pleasing them an investment that will pay off. Google’s stated reason for this decision is that it will “help users find legitimate, quality sources of content more easily” (my emphasis). One usually associates concerns for legitimacy with public authorities rather than private corporations.
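
For readers curious about what “demoting” might look like mechanically, here is a minimal sketch, assuming a score penalty proportional to the number of infringement notices a site has attracted. It is my own toy reconstruction, with invented figures; it is not Google’s algorithm.

```python
# Toy re-ranking: sites with many copyright notices sink in the results
# but are not removed. Not Google's algorithm; all inputs are invented.

results = [("https://filelocker.example", 0.95),  # (url, relevance score)
           ("https://studio.example", 0.90)]
notices = {"https://filelocker.example": 400, "https://studio.example": 0}

def rerank(results, notices, penalty_per_notice=0.001):
    """Subtract a per-notice penalty from each score, then re-sort."""
    scored = [(url, score - penalty_per_notice * notices.get(url, 0))
              for url, score in results]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(rerank(results, notices))
# studio.example now outranks filelocker.example; both remain listed.
```

The design point is that demotion, unlike removal, leaves the site in the results; it merely sinks.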

Indeed, some might want Google to take an even more public-spirited position. As Deven Desai, of the Thomas Jefferson School of Law, notes in a post on Concurring Opinions, “this shift may open the door to more arguments for Google to be a gatekeeper and policer of content.” Indeed, although he does not favour such an approach, he points out that it is a “difficult question … why or why not act on some issues but not others.” Why, for example, copyright infringement but not hate speech? For now, even Google might lack the data and/or content-analyzing capacities effectively to recognize hate speech. But given how fast technology evolves, this might change sooner rather than later. As prof. Desai observes, if Google becomes a more overt internet regulator, it will be criticized, for example from a competition-law standpoint. But of course it will also be criticized if it refuses to take on that role.

Either way, there will be a lot of interesting questions for lawyers. At what point does Google, acting as a quasi-regulator, become a state agent subject to constitutional constraints? How does competition law, with its prohibition on abuse of a dominant position, interact with the constitutional protection of freedom of speech, if the latter encompasses Google’s freedom of editorial judgment about its algorithm? What sort of due process rights do or should people affected by Google’s editorial decisions have, and what legal framework (administrative law, for example, or maybe tort law) is appropriate for settling that question? This is a lot to think about. No answers from me for now.

The Only Thing Worse Than Being Talked About

Is being talked about in a court decision that’s available online for all to see. At least if you’ve sued a former employer, and are looking for a new job. At the Volokh Conspiracy, Eugene Volokh reports on a case in which a man who believes he lost employment opportunities because prospective employers found out about his lawsuit against a previous employer sued companies providing both general internet search and specialized legal databases for making available online materials relating to that litigation. The complaint alleged violations of a variety of statutory and common law rules, but the court dismissed all these claims. The court added that publication of matters of public record, such as court proceedings and materials is, in any event, constitutionally protected.

I think that, in these circumstances, the outcome would be the same in Canada. I cannot see how the publication of court materials, unless the court itself ordered them to remain confidential, can amount to a common law tort; nor am I aware of any statutes that would prohibit it regardless of the circumstances (more on limited exceptions shortly). The constitutional situation is a bit different, since the Canadian Charter of Rights and Freedoms does not directly apply to the common law, though it would apply to a statute. That difference wouldn’t matter here, however.

In any case, what concerns me right now is not the current legal situation or the question, which prof. Volokh addresses, whether there “is an adequate justification for suppressing speech about legal documents that have been released by the courts as a public record.” (His response is negative, and I think he is right.) It is the antecedent question whether any and all legal documents should be made matters of public record.

Generally speaking, our legal system favours publicity. The publicity of judicial proceedings helps ensure the impartiality, and perhaps also the quality, of judicial work. As with other branches of government, publicity is important for accountability. Closed, secret, or inaccessible courts are a hallmark of authoritarian political systems. In Edmonton Journal v. Alberta (Attorney General), [1989] 2 S.C.R. 1326, the Supreme Court has held that the openness of court proceedings, including the ability of the media to report on them, is an important constitutional value.

Important, but not absolute. The usual presumption of publicity can be overturned in particular cases, where the disclosure of elements of the evidence, normally a matter of public record, may compromise the impartiality of the proceedings (for example by influencing potential jurors) or reveal privileged information, such as commercial secrets. Such cases are regarded as exceptional; importantly, a party who wants the court to make some element of the case confidential has to ask the court to do so, which can be expensive and which many will not think of doing. (For example, refugee claimants rarely ask that their cases before the Immigration and Refugee Board or the Federal Court be anonymized, although if memory serves well, they are entitled to do so.)

But there are also some categorical rules which apply automatically, without a party having to do anything. At issue in Edmonton Journal was one such rule, prohibiting the publication of all sorts of details about family law cases. The Supreme Court held that the law was much too restrictive, and thus an unconstitutional restriction of the freedom of expression. But narrower restrictions exist. For instance, the names of minors involved in criminal cases are not published – the defendants are known by their initials. And in Québec, family law cases are identified by a number rather than the names of the parties, with the parties’ names and the places where they live replaced by initials in the court’s reasons (incidentally, the Alberta statute in Edmonton Journal allowed the publication of these details; Québec’s rule is essentially its mirror image).

The idea – and I think it is a sound one – seems to be that (many of) the positive effects of publicity can result from publishing the court’s decision but not the parties’ names. From the perspective of keeping the courts accountable, the publication of the parties’ names probably matters little; what is important is that journalists, lawyers, and interested citizens know what evidence was before the court and what the court did with it. On the other hand, there is also a legitimate public interest in knowing what is happening to whom, or who exactly is involved in stories that attract attention.

And now I’m coming back to the case I considered at the beginning of this post. So long as access to court materials, or even to judgments, was time-consuming, difficult and expensive, it mattered little that publicity was the rule in most cases. Realistically, only news media would bother accessing these records, and then only in a few cases which attracted sufficient attention to make the effort and expense worthwhile. The internet changes that. It is fairly easy, and relatively cheap or even free, to find materials (at least judgments) from any case one is interested in. Indeed, one need not even know there is a case. It is enough to google someone’s name to find court decisions involving that person. An employer who would not have gone to the courthouse to rummage through files just to see if a prospective employee had ever been involved in litigation can find this out in a matter of seconds from the comfort of his office. Indeed, he may find it accidentally – he might google an applicant’s name looking for something else, without any intention of finding out about the applicant’s litigation history – and it just comes up. However the information comes out, it can be very – and unfairly – damaging. As prof. Volokh points out,

[m]any employers would likely be wary of hiring someone who had sued a past employer, because they might view this as a sign of possible litigiousness. Even if the earlier lawsuit was eminently well-founded, a prospective employer might not take the time and effort to investigate this, but might just move on to the next candidate, especially if [the candidate] is one of several comparably well-credentialed candidates for the same spot.

So here are some questions. Does our general presumption of publicity of court materials still make sense in this new reality that the internet has brought about? Or should we re-balance free speech and privacy, perhaps by making anonymization the default rule? If so, should we make exceptions? A blanket anonymity rule might be problematic, because there are cases where knowing who is involved is very much in the public interest. But are exceptions workable? If not, does this mean we should abandon anonymity after all?

I don’t have answers to these questions. I would love to hear from you.

Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of both a speaker potentially subject to censorship and an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Columbia. Back in April, prof. Volokh and a California lawyer, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of the internet search results is done by computer algorithms rather than by human beings is of no import. It “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).
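
A trivial sketch may make the “editorial judgment” point concrete: in any ranking formula, the weights attached to the various signals are human choices, whoever (or whatever) applies them. The signal names and numbers below are purely illustrative and imply nothing about any real engine.

```python
# Toy ranking score: the weights encode the designers' editorial
# priorities. Signal names and numbers are purely illustrative.

def rank_score(page: dict, w_text=0.6, w_links=0.3, w_fresh=0.1) -> float:
    """Combine signals using designer-chosen weights."""
    return (w_text * page["text_match"]
            + w_links * page["inbound_links"]
            + w_fresh * page["freshness"])

page = {"text_match": 0.8, "inbound_links": 0.5, "freshness": 0.9}
print(rank_score(page))                            # 0.72: one editorial mix
print(rank_score(page, w_links=0.1, w_fresh=0.3))  # 0.80: another mix
```

Shifting weight from one signal to another changes who is seen and who is buried; on the White Paper’s view, that choice is editorial whether a person or a program carries it out.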

In a recent op-ed in the New York Times, prof. Wu challenges the latter claim. For him, it matters a lot whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people, sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and doing so to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter that computer algorithms are designed by humans. A machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster.

Prof. Volokh responds to these arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so if made by computers; but to the extent that they are constitutionally protected – and that extent is a large one – the interposition of an algorithm should not matter at all.

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is somewhat less protective of free speech than its American counterpart. However, I do not think that these differences, however significant they are in some cases, would or should matter here.

The argument prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That concern is one it shares with the traditional media to which prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit for all manner of content which it neither created nor even vetted. It is different, too, in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.” To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or host his or her content. Still, when its decisions not to do so are motivated not by commercial considerations but by the requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. For example, one has to wonder whether there is a risk of a conflict of interest between its roles as speaker and as censors’ agent – whether it will not be tempted to trade greater compliance with regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.