The Power of Google, Squared

I wrote, a while ago, about “the power of Google” and its role in the discussion surrounding the “right to be forgotten” ― a person’s right to force search engines to remove links to information about that person that is “inadequate, irrelevant or excessive,” whatever these things mean, even if factually true. Last week, the “right to be forgotten” was the subject of an excellent debate ― nuanced, informative, and with interesting arguments on both sides ― hosted by Intelligence Squared U.S. I encourage you to watch the whole thing, because there is really too much there for a blog post.

I will, however, sketch out what I think was the most persuasive argument deployed by the opponents of the “right to be forgotten” ― with whom, admittedly, I agreed before watching the debate, and still do. I will also say a few words about the alternative solutions they proposed to what they agreed is a real and serious problem ― the danger that the prominence, in search results, of a story about some stupid mistake or, worse, an unfounded allegation made about a person will come to mar his or her life forever, with no second chances possible.

Although the opponents of the “right to be forgotten,” as well as its proponents (I will refer to them as, simply, the opponents and the proponents, for brevity’s sake), made arguments sounding in high principle as well as more practical ones, the one on which the debate mostly focused, and which resonated most with me, concerned the institutional arrangements that are needed to implement the “right to be forgotten.” The way it works ― and the only way it can work, according to one of the opponents, Andrew McLaughlin (the CEO of Digg and a former Director of Public Policy for Google) ― is that the person who wants a link to information about him or her removed applies to the search engine, and the search engine decides, following a secretive process and applying criteria of which it alone is aware. If the request is denied, the person who made it can apply to privacy authorities or go to court to reverse the decision. If, however, the request is granted, nobody can challenge that decision. Indeed, if the European authorities had their way, nobody would even know that the decision had been made. (Telling the owner of the page to which a link is being deleted, as Google has been doing, more or less defeats the purpose of the “right to be forgotten.”)

According to the opponents, this has some very unfortunate consequences. For one thing, the search engines have an incentive to err on the side of granting deletion requests ― at the very least, this spares them the hassle of fighting appeals. One of the proponents, Chicago professor Eric Posner, suggested that market competition could check this tendency, but the opponents were skeptical that, even if users knew that one search engine tends to delete more links than another, this would make any noticeable difference to its bottom line. The proponents mostly argued that we can rely on the meaning of the admittedly vague terms “inadequate, irrelevant or excessive” to be worked out over time, so that the decisions to delete a link or not become easier and less controversial. But another consequence of the way in which the “right to be forgotten” is implemented would actually prevent that, the opponents, especially Harvard professor Jonathan Zittrain, argued. Since nobody can challenge a decision to delete a link, the courts will have no opportunity to refine the understanding of the concepts involved in the “right to be forgotten.” The upshot is that, according to the opponents anyway, the search engines (which, these days, mostly means Google) end up with a great deal of unchecked discretionary power. This is, of course, ironic, because the proponents of the “right to be forgotten” emphasize concerns about “the power of Google” as one of the reasons to support it, as typically do others who agree with them.

If the opponents are right that the “right to be forgotten” cannot be implemented in a way that is transparent, fair to all the parties concerned, at least reasonably objective, and does not increase, instead of checking, “the power of Google,” what are the alternatives? The opponents offered at least three, each of them interesting in its own way. First, Mr. McLaughlin suggested that, instead of a “right to be forgotten,” people should have a right to provide a response, which search engines would have to display among their results. Second, we could have category-specific measures directed at some types of information particularly likely to be prejudicial to people, or of little public interest. (It is worth noting, for example, that in Canada at least, we already do this with criminal court decisions involving minors, which are anonymized, as are family law cases in Québec.) And third, Mr. McLaughlin insisted that, with the increased availability of all sorts of information about everyone, our social mores will need to change. We must become more willing to forgive, and to give people second chances.

This is perhaps optimistic. Then again, so is the proponents’ belief that a corporation can be made to weigh, impartially and conscientiously, considerations of the public interest and the right to “informational self-determination” (which is, apparently, the theoretical foundation of the “right to be forgotten”). And I have argued already that new social norms will in fact emerge as we get more familiar with the internet environment in which we live, and in which our digital shadows are permanently unstuck in time. In any case, what is certain is that these issues are not going to go away anytime soon. It is also clear that this Intelligence Squared debate is an excellent place to start, or to continue, thinking about them. Do watch it if you can.

Forgotten Balance

Over at Concurring Opinions, Frank Pasquale has a post defending the EU Court of Justice’s decision that enshrined the “right to be forgotten” in European law. Arguing against “a reflexively rejectionist position” which he sees emerging among some American commentators, prof. Pasquale writes that it fails to “recognize the power of certain dominant firms to shape impressions of individuals,” and might lead, by design or otherwise, to an undermining even of the (limited) protections for privacy and reputation which American law recognizes. For my part, I think that prof. Pasquale sets up something of a false dichotomy. There are options other than a free-for-all in which any disclosure of any information is permissible, on the one hand, and acceptance of the “right to be forgotten,” on the other.

Prof. Pasquale worries about the possibility that people’s medical records or intimate photos will be stolen and posted online. If that happens, he asks,

[a]re the critics of the [right to be forgotten] really willing to just shrug and say, “Well, they’re true facts and the later-publishing websites weren’t in on the hack, so leave them up”?

American law, he explains, provides for some penalties against those who publish purely private information. “Perhaps,” he says, “critics of the [right to be forgotten] want to sweep away these penalties, too. But if they succeed, there will be real human costs.” The right to be forgotten, he concludes, is essential to “guaranteeing a digital future where our reputations aren’t at the mercy of malicious hackers and careless search engines.”

I’m unconvinced. Prof. Pasquale’s concerns are serious, but the right to be forgotten is at once insufficient and excessive to address them.

The information the disclosure of which rightly worries prof. Pasquale is intrinsically private. Companies which compile it, or to which people entrust it for storage or safekeeping, should not disclose it without the consent of the individuals concerned; those who receive such information from people not authorized to communicate it have no business publishing it. The publication of such information is a harm which the law should sanction. But the “right to be forgotten,” at least as articulated by the EU Court of Justice, is at best an indirect protection against this harm. As its name suggests, it is not a right against having private information about you published in the first place. It is not even a right to have private information removed from the websites that originally published it, but only a right to have links to that information removed from search results. Of course, removing the links will make the information that much more difficult to find. More difficult, but not impossible. Something like a (much narrower, as I’ll presently explain) version of the right to be forgotten might be useful to protect us from disclosure of private information, but only as a complement, not an alternative, to going after the actual publishers of such information.

At the same time, the “right to be forgotten” potentially extends to all sorts of information that is not necessarily intrinsically private in the way medical records or intimate pictures are. For instance, back in August, the BBC explained that many of the 12 pages from its website that had been removed from Google’s search results up to that point concerned court cases ― including those where a defendant had been convicted of a serious crime. (Now, I’ve already written about the difficulties that being mentioned in a court decision can create, and wondered whether anonymizing at least some of them would not be better. But, for now at least, the prevailing view is that court cases, including the parties’ names, are generally public matters.) In such cases, there can surely be no question of forcing the actual publishers of the stories to remove them, and the “right to be forgotten” only means, as I recently explained here, that ordinary people, those who do not have much time and/or money for research, will not be able to find them. Even if in some cases a version of the “right to be forgotten” would help us protect what most people will agree is private information, the current European version of this “right” is vastly overbroad.

So it seems to me that one can easily be against the recognition of a “right to be forgotten” in the shape in which the EU Court of Justice created it, and in favour of protecting people from “malicious hackers and careless search engines” disclosing intrinsically private information about them. It should be possible to craft more narrowly-tailored and more effective regulation, directed in the first instance against the publication of such information and, as a secondary measure, allowing links to infringing information to also be removed. In the inevitable conflict between privacy and freedom of expression, we shouldn’t forget nuance and balance.

 

The Power of Google

I seem never to have blogged about the “right to be forgotten” enshrined into European law by the European Court of Justice (ECJ) in a judgment issued in May. An interesting recent blog post by Paul Bernal allows me to offer a few random observations on the matter. Better late than never, right?

In a nutshell, the “right to be forgotten” allows a person to request that a search provider (for example, Google) remove links to “inadequate, irrelevant or excessive” ― even if factually correct ― information about that person from search results. If the search provider refuses, the person can ask national privacy authorities to compel the removal. Google is most dissatisfied with being asked to handle thousands of such requests and to weigh the privacy interests of those who make them against the public interest in access to information (as well as the freedom of expression of those providing the information in the first instance). It says that it cannot perform this balancing act, and indeed its first stabs at it have sometimes been very clumsy ― so much so that, as prof. Bernal explains, people have suspected it of doing a deliberately poor job so as to discredit the whole concept of the right to be forgotten.

Google has responded by setting up a group of experts ― ostensibly to advise on implementing the right to be forgotten but really, prof. Bernal implies, to make sure that the conversation about it happens on its own terms. And that, according to prof. Bernal, includes not paying attention to “the power of Google” ― its “[p]ower over what is found – and not found” about anyone, reflected by the way we use the phrase “to google someone”; its agenda-setting power; and its ability to influence not only journalists and experts, but also policy-makers. Prof. Bernal points out that Google creates (and tweaks) the algorithms which determine what results appear, and in what order, when a search is run, and that it has not always upheld freedom of expression above all other values. Google systematically removes links to websites for copyright infringement, as well as for a variety of other reasons. Its right to be forgotten tantrum should be viewed in that context, says prof. Bernal; we mustn’t forget Google’s power, and the variety of ways in which it exercises it.

Fair enough. I have myself written (notably here and here) about Google’s dual, and conflicted, role as at once a speaker and a censor. Google wants to be treated as a speaker ― and granted freedom of speech ― in designing its search algorithms. It also takes on the role of regulator or censor, whether on behalf of its own values and priorities (commercial or otherwise), those of its clients or partners, or those of governments. And there is a standing danger that Google will be tempted to play its role as regulator and censor of the speech of others in such a way as to gain more leeway (especially from governments) when it comes to its own.

Yet to my mind, this inherent conflict is, if anything, more reason to believe that making Google into an arbiter of private and public interests is a bad idea. The ECJ offloads the responsibility of balancing individual privacy rights and the public interest in access to information onto Google and its competitors, at least in the first instance, but why would we want to give such a responsibility to companies with such a twisted set of incentives? Prof. Bernal is right that Google is not an unconditional defender of freedom of expression ― but instead of concluding that it might as well compromise it some more, this time in the name of privacy, isn’t that a reason for thinking that we cannot rely on it to strike the right balance between the rights and interests implicated by the right to be forgotten?

Another thing that we might want to keep in mind when we think of “the power of Google” in the context of the right to be forgotten is the nature of that power. It is not, like the power of the state, a coercive one. In a sense, Google has a great deal of market power, but the users of its search service hardly feel it as “power.” We know that we have easily accessible alternatives to Google (notably, Microsoft’s Bing, and Yahoo!). We just don’t feel (for the most part) like using them ― for whatever reason, but not because anybody forces us to. And I think it matters that the power of Google is not a collective power of people acting together (like the power of the state) but, if that’s the right word, a composite power ― the sum of a great number of individual actions more or less insignificant by themselves. Despite the fact that, as prof. Bernal rightly points out, Google’s algorithms are not somehow natural or neutral, it is, in a real sense, a conduit for the disparate actions and interests of isolated individuals, rather than a vehicle for the expression of their collective will. To me, that makes the power of Google, at least this aspect of it, a rather less threatening one.

It is also a democratizing one. By making it easier to find information about someone, it makes such research accessible not only to those who have a staff of researchers (or police officers, or intelligence agents!) at their disposal, but to ordinary citizens. And this is precisely what worries the advocates of the right to be forgotten. It is indeed a curious right, one that apparently only exists online. Nobody says that libraries or archives should purge information about people once it becomes “irrelevant or excessive.” (Indeed, at least for now, the right to be forgotten does not even require substantive information to be taken down from the Internet, or even links to such information to be removed from ordinary websites. They must, it seems, only be expunged from search results.) So someone with a lot of time and/or money on his or her hands can still find that information. It’s those without resources to expend on an extended investigation who must be deprived of it. That too, I think, is something to keep in mind when thinking about the right to be forgotten.

This all might not amount to very much. Insofar as prof. Bernal calls for nuance and a fuller appreciation of the facts in thinking about the right to be forgotten and Google’s role in implementing it, I second him. If I have a distinct message of my own, it is probably that an actor’s having “power” is not, without more, a reason for pinning any particular responsibility on it. We should be wary of power, whatever its form, but it doesn’t follow that we should burden anyone powerful in whatever way we can think of. If anything, power should be checked and balanced ― balanced, that is, with countervailing powers, not with responsibilities that can, in the hands of the powerful, become excuses for further self-aggrandizement more than limits on their action.

H/t: Yves Faguy