Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and especially of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” to do these sorts of things. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment, and towards “global consciousness” ― whatever that too may be. Google can demote websites that it deems to be promoters of “hate” in its search results, as indeed it already demotes those it considers to be copyright infringers. Apple could block news sources it considers biased from its App Store, as indeed it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its Newsfeed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just like we can with spell-check. But I’m pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and other tech companies, which are at once the regulators of their users’ speech and the subjects of government regulation, and may well be tempted to act in the former role so as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we’d be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to be in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Online Gambling

Over at the EconLog, David Henderson has an interesting post that allows me to come back to some themes I used to carp on quite a bit, but haven’t returned to in a while now. In a nutshell, it is the story of antiwar.com, a website that, naturally enough, illustrates its message with some graphic imagery. Google concluded that the images contravened its policies, and withdrew the ads it placed on the website, causing the website to lose revenue on which it had relied. Apparently, Google does not want its ads to appear next to any picture that would not be “okay for a child in any region of the world to see,” which would disqualify many iconic pictures taken in wars past ― and not just wars, one might surmise.

Prof. Henderson points out that this is not “censorship,” since Google is a private firm acting in a purely commercial capacity here. But, he argues, this is still a “gamble” on Google’s part:

Google faces a tradeoff. On the one hand, there are probably many advertisers, possibly the vast majority, who don’t want their ads to appear alongside pictures of blood and gore, people being tortured, etc. So by being careful that web sites where ads appear do not have such pictures, Google gets more ad revenue than otherwise. On the other hand, Google is upsetting a lot of people who see it as dictating content. This will cause some people to shun Google. … [I]f the upset spreads, there could be a role for another competitor.

Perhaps so, although as I noted before, Google’s competitors ― such as Apple, with its iTunes store ― also seem to be choosing to use their own platforms to present sanitized versions of reality.

And as I also pointed out in the past, Google’s position with respect to freedom of expression is inherently conflicted. On the one hand, Google sees itself as engaged in the business of expression, arguing that its search algorithms reflect choices of an editorial nature that deserve constitutional protection. On the other, when it exercises control over its various platforms (whether the search engine itself or YouTube, the ad service, etc.), it can, and is frequently asked to, act as an agent for governments ― and not only democratic governments either ― who seek to censor expression they dislike. There is a danger that Google will choose to sacrifice some of its users’ freedom in order to protect its own by ingratiating itself with these governments. Furthermore, in exercising control over its platforms, Google may come under pressure not only from governments, but also from commercial partners it needs to keep on board ― or at bay ― and, possibly, from various “civil society” actors too. The antiwar.com story is only one small part of this broader trend.

This is, or should be, well understood ― which makes me think that Google is not the only party in this story who took a gamble. Antiwar.com did too, as does anyone else who comes to rely on Google or other similar platforms, despite knowing the pressures, commercial and otherwise, that these platforms will come under. If anything, it is remarkable how successful this gamble usually turns out to be. Still, it is a bet, and will sometimes turn out badly.

I blogged last year about an argument by Ethan Zuckerman to the effect that the ad-based business model was the internet’s “original sin.” Mr. Zuckerman made his case from the perspective of the users, who must accept privacy losses resulting from tracking and profiling by advertisers in exchange for free access to ad-supported content. The antiwar.com story suggests that, for some content-producers at least, accepting the revenue and, as prof. Henderson points out, the convenience that come with the current business model and its major players was also a Faustian bargain. And yet, as for users, it is not quite clear what alternative arrangement would be viable.

In the face of what some may well be tempted to interpret as a market failure, it seems reasonable to expect calls for regulation, despite what libertarian types such as prof. Henderson or the antiwar.com people themselves may say. There will be, and indeed, as I noted in the post about Apple linked to above, there already are, people calling for the regulation of online platforms, in order to make their behaviour conform to the regulators’ ideas about freedom of expression. Yet we should not forget that, on the whole, the net contribution of Google and the rest of them to our ability to express ourselves and to find and access the thoughts of others has clearly been positive ― and certainly much more positive than that of governments. While attempts at making a good thing even better would be understandable, they too would be a gamble, and a risky one.

The Power of Google, Squared

I wrote, a while ago, about “the power of Google” and its role in the discussion surrounding the “right to be forgotten” ― a person’s right to force search engines to remove links to information about that person that is “inadequate, irrelevant or excessive,” whatever these things mean, even if factually true. Last week, the “right to be forgotten” was the subject of an excellent debate ― nuanced, informative, and with interesting arguments on both sides ― hosted by Intelligence Squared U.S. I encourage you to watch the whole thing, because there is really too much there for a blog post.

I will, however, sketch out what I think was the most persuasive argument deployed by the opponents of the “right to be forgotten” ― with whom, admittedly, I agreed before watching the debate, and still do. I will also say a few words about the alternative solutions they proposed to what they agreed is a real and serious problem ― the danger that the prominence in search results of a story about some stupid mistake or, worse, an unfounded allegation made about a person comes to mar his or her life forever, with no second chances possible.

Although the opponents of the “right to be forgotten,” as well as its proponents (I will refer to them as, simply, the opponents and the proponents, for brevity’s sake), made arguments sounding in high principle as well as more practical ones, the one on which the debate mostly focused, and which resonated most with me, concerned the institutional arrangements that are needed to implement the “right to be forgotten.” The way it works ― and the only way it can work, according to one of the opponents, Andrew McLaughlin (the CEO of Digg and a former Director of Public Policy for Google) ― is that the person who wants a link to information about him or her removed applies to the search engine, and the search engine decides, following a secretive process and applying criteria of which it alone is aware. If the request is denied, the person who made it can apply to privacy authorities or go to court to reverse the decision. If, however, the request is granted, nobody can challenge that decision. Indeed, if the European authorities had their way, nobody would even know that the decision had been made. (Telling the owner of the page a link to which is being deleted, as Google has been doing, more or less defeats the purpose of the “right to be forgotten.”)

According to the opponents, this has some very unfortunate consequences. For one thing, the search engines have an incentive to err on the side of granting deletion requests ― at the very least, this spares them the hassle of fighting appeals. One of the proponents, Chicago professor Eric Posner, suggested that market competition could check this tendency, but the opponents were skeptical that, even if users know that one search engine tends to delete more links than another, this would make any noticeable difference to its bottom line. Mostly, the proponents argued that we can rely on the meaning of the admittedly vague terms “inadequate, irrelevant or excessive” to be worked out over time, so that the decisions to delete a link or not become easier and less controversial. But another consequence of the way in which the “right to be forgotten” is implemented would actually prevent that, the opponents, especially Harvard professor Jonathan Zittrain, argued. Since nobody can challenge a decision to delete a link, the courts will have no opportunity to refine the understanding of the concepts involved in the “right to be forgotten.” The upshot is that, according to the opponents anyway, the search engines (which, these days, mostly means Google) end up with a great deal of unchecked discretionary power. This is, of course, ironic, because the proponents of the “right to be forgotten” emphasize concerns about “the power of Google” as one of the reasons to support it, as typically do others who agree with them.

If the opponents are right that the “right to be forgotten” cannot be implemented in a way that is transparent, fair to all the parties concerned, at least reasonably objective, and does not increase, instead of checking, “the power of Google,” what are the alternatives? The opponents offered at least three, each of them interesting in its own way. First, Mr. McLaughlin suggested that, instead of a “right to be forgotten,” people should have a right to provide a response, which search engines would have to display among their results. Second, we could have category-specific measures directed at some types of information particularly likely to be prejudicial to people, or of little public interest. (It is worth noting, for example, that in Canada at least, we already do this with criminal court decisions involving minors, which are anonymized; as are family law cases in Québec.) And third, Mr. McLaughlin insisted that, with the increased availability of all sorts of information about everyone, our social mores will need to change. We must become more willing to forgive, and to give people second chances.

This is perhaps optimistic. Then again, so is the proponents’ belief that a corporation can be made to weigh, impartially and conscientiously, considerations of the public interest and the right to “informational self-determination” (which is, apparently, the theoretical foundation of the “right to be forgotten”). And I have argued already that new social norms will in fact emerge as we get more familiar with the internet environment in which we live, and in which our digital shadows are permanently unstuck in time. In any case, what is certain is that these issues are not going to go away anytime soon. It is also clear that this Intelligence Squared debate is an excellent place to start, or to continue, thinking about them. Do watch it if you can.

The Power of Google

I seem never to have blogged about the “right to be forgotten” enshrined into European law by the European Court of Justice (ECJ) in a judgment issued in May. An interesting recent blog post by Paul Bernal allows me to offer a few random observations on the matter. Better late than never, right?

In a nutshell, the “right to be forgotten” allows a person to request a search provider (for example, Google) to remove links to “inadequate, irrelevant or excessive” ― even if factually correct ― information about that person from search results. If the search provider refuses, the person can ask national privacy authorities to compel the removal. Google is most dissatisfied with being asked to handle thousands of such requests and to weigh the privacy interests of those who make them against the public interest in access to information (as well as the freedom of expression of those providing the information in the first instance). It says that it cannot perform this balancing act, and indeed its first stabs at it have sometimes been very clumsy ― so much so that, as prof. Bernal explains, people have suspected it of doing a deliberately poor job so as to discredit the whole concept of the right to be forgotten.

Google has responded by setting up a group of experts ― ostensibly to advise on implementing the right to be forgotten but really, prof. Bernal implies, to make sure that the conversation about it happens on its own terms. And that, according to prof. Bernal, includes not paying attention to “the power of Google” ― its “[p]ower over what is found – and not found” about anyone, reflected by the way we use the phrase “to google someone”; its agenda-setting power; and its ability to influence not only journalists and experts, but also policy-makers. Prof. Bernal points out that Google creates (and tweaks) the algorithms which determine what results appear and in what order when a search is run, and that it has not always upheld freedom of expression at the expense of all other values. Google systematically removes links to websites due to copyright infringement, as well as for a variety of other reasons. Its right to be forgotten tantrum should be viewed in that context, says prof. Bernal; we mustn’t forget Google’s power, and the variety of ways in which it exercises it.

Fair enough. I have myself written (notably here and here) about Google’s dual, and conflicted, role as at once a speaker and a censor. Google wants to be treated as a speaker ― and granted freedom of speech ― in designing its search algorithms. It also takes on a role of regulator or censor, whether on behalf of its own values and priorities (commercial or otherwise), those of its clients or partners, or those of governments. And there is a standing danger that Google will be tempted to play its role as regulator and censor of the speech of others in such a way as to gain more leeway (especially from governments) when it comes to its own.

Yet to my mind, this inherent conflict is, if anything, more reason to believe that making Google into an arbiter of private and public interests is a bad idea. The ECJ offloads the responsibility of balancing individual privacy rights and the public interest in access to information onto Google and its competitors, at least in the first instance, but why would we want to give such a responsibility to companies that have such a twisted set of incentives? Prof. Bernal is right that Google is not an unconditional defender of freedom of expression ― but instead of concluding that it might as well compromise it some more, this time in the name of privacy, isn’t that a reason for thinking that we cannot rely on it to strike the right balance between the rights and interests implicated by the right to be forgotten?

Another thing that we might want to keep in mind when we think of “the power of Google” in the context of the right to be forgotten is the nature of that power. It is not, like the power of the state, a coercive one. In a sense, Google has a great deal of market power, but the users of its search service hardly feel it as “power.” We know that we have easily accessible alternatives to Google (notably, Microsoft’s Bing, and Yahoo!). We just don’t feel (for the most part) like using them ― for whatever reason, but not because anybody forces us to. And I think it matters that the power of Google is not a collective power of people acting together (like the power of the state) but, if that’s the right word, a composite power ― the sum of a great number of individual actions more or less insignificant by themselves. Despite the fact that, as prof. Bernal rightly points out, Google’s algorithms are not somehow natural or neutral, it is, in a real sense, a conduit for the disparate actions and interests of isolated individuals, rather than a vehicle for the expression of their collective will. To me, that makes the power of Google, at least this aspect of it, a rather less threatening one.

It is also a democratizing one. By making it easier to find information about someone, it makes such research accessible not only to those who have a staff of researchers (or police officers, or intelligence agents!) at their disposal, but to ordinary citizens. And this is precisely what worries the advocates of the right to be forgotten. It is indeed a curious right, one that apparently only exists online. Nobody says that libraries or archives should purge information about people once it becomes “irrelevant or excessive.” (Indeed, at least for now, the right to be forgotten does not even require substantive information to be taken down from the Internet, or even links to such information to be removed from ordinary websites. They must, it seems, only be expunged from search results.) So someone with a lot of time and/or money on his or her hands can still find that information. It’s those without resources to expend on an extended investigation who must be deprived of it. That too, I think, is something to keep in mind when thinking about the right to be forgotten.

This all might not amount to very much. Insofar as prof. Bernal calls for nuance and a fuller appreciation of the facts in thinking about the right to be forgotten and Google’s role in implementing it, I second him. If I have a distinct message of my own, it is probably that an actor having “power” is not, without more, a reason for pinning any particular responsibility on it. We should be wary of power, whatever its form, but it doesn’t follow that we should burden anyone powerful in whatever way we can think of. If anything, power should be checked and balanced ― balanced, that is, with countervailing powers, not with responsibilities that can, in the hands of the powerful, become excuses for further self-aggrandizement more than limits on their action.

H/t: Yves Faguy

Charter, Meet Google

Josh Blackman has just published a fascinating new essay, “What Happens if Data Is Speech?” in the University of Pennsylvania Journal of Constitutional Law Online, asking some important questions about how courts should treat ― and how we should think about ― attempts to regulate the (re)creation and arrangement of information by “algorithms parsing data” (25). For example, Google’s algorithms suggest search queries on the basis of our and other users’ past searches, and then sort the available links once we hit ‘enter’. Can Google be ordered to remove a potential query from the suggestions it displays, or a link from search results? Can it be asked to change the way in which it ranks these results? These and other questions will only become more pressing as these technologies become ever more important in our lives, and as the temptation to regulate them one way or another increases.

One constant theme in the literature on this topic that prof. Blackman reviews is the significance, if any, of the fact that “with data, it is often difficult to find somebody with the traits of a typical speaker” (27). It thus becomes tempting to conclude that algorithms working with data can be regulated without regard for freedom of speech, since no person’s freedom is affected by such regulation. If at least some uses of data are, nevertheless, protected as free speech, there arises another issue which prof. Blackman highlights ― the potential for conflict between any such protection and the protection of privacy rights, which takes the form of prohibitions on speaking against someone (in some way).

The focal point of these concerns, for now anyway, is search engines, and particularly Google. Prof. Blackman points out that, as Google becomes our gateway to more and more of the information we need, it acquires a great deal of power over what information we ever get to access. Not showing up high in Google’s search results becomes, in effect, a sentence of obscurity and irrelevance. And while it will claim that it only seeks to make its output more relevant for users, the definition of “relevance” gives Google the ability to pursue an agenda of its own, whether it is punishing those who, in its own view, are trying to game its ranking system, as prof. Blackman describes, or currying favour with regulators or commercial partners, or even implementing some kind of moral vision for what the internet should be like (I describe these possibilities here and here). All that, combined with what strikes some as the implausibility of algorithms as bearers of the right to freedom of speech, can make it tempting for legislators to regulate search engines. “But,” prof. Blackman asks, “what poses a greater threat to free speech ― the lack of regulations or the regulations themselves?” (31) Another way of looking at this problem is to ask whether the creators and users of websites should be protected by the state from, in effect, regulation by Google, or whether Google should be protected from regulation by the state (32).

The final parts of prof. Blackman’s essay address the question of what happens next, when ― probably in the near future ― algorithms become not only tools for accessing information but, increasingly, extensions of individual action and creativity. If the line between user and algorithm is blurred, regulating the latter means restricting the freedom of the former.

Prof. Blackman’s essay is a great illustration of the fact that the application of legal rules and principles to technologies which did not exist when they were developed can often be difficult, not least because these new technologies sometimes force us to confront the theoretical questions which we were previously able to ignore or at least to fudge in the practical development of legal doctrine. (I discussed one example of this problem, in the area of election law, here.) For instance, we have so far been able to dodge the question whether freedom of expression really serves the interests of the speaker or the listener, because for just about any expressive content there is at least one speaker and at least one listener. But when algorithms (re-)create information, this correspondence might no longer hold.

There are many other questions to think about. Is there some kind of baseline right to have Google take notice of you? Is the access to online information of such public importance that its providers, even private ones, effectively take on a public function, and maybe incur constitutional obligations in the process? How should we deal with the differences of philosophies and constitutional frameworks between countries?

This last question leads me to my final observation. So far as I can tell ― I have tried some searching, though one can always search more ― nothing at all has been written on these issues in Canada. Yet the contours of the protection of freedom of expression under the Canadian Charter of Rights and Freedoms are in some ways quite different from those under the First Amendment. When Canadian courts come to confront these issues ― when the Charter finally meets Google ― they might find some academic guidance helpful (says one conceited wannabe academic!). As things stand now, they will not find any.

Google as Regulator, Part Deux

A recent story, reported for example by the Globe and Mail, nicely illustrates Google’s dual, and perhaps ambiguous, role as “speaker and censor,” at once exercising, or claiming to exercise, an editorial judgment and making itself the agent of speech-restricting governments, about which I blogged some time ago. According to the Globe, “Google’s search algorithm will begin demoting websites that are frequently reported for copyright violations, a move that will likely make it more difficult to find file-sharing, Torrent and so-called file locker sites.” These websites will not be removed from search results, but they will be harder to find.

This is, it seems to me, an obvious example of “editorial judgment,” which – as I explain in more detail in the post linked to above – Google claims to exercise when designing its search algorithms. At the same time, it is an example of Google acting, in effect, as a regulator, if not, in this case, as a censor. The decision to demote allegedly-copyright-infringing websites is not, one suspects, motivated by commercial considerations; at least not immediately commercial considerations, since, as the Globe puts it, the move “should please Hollywood” – and other content producers – and perhaps Google considers pleasing them as an investment that will pay off. Google’s stated reason for this decision is that it will “help users find legitimate, quality sources of content more easily” (my emphasis). One usually associates concerns for legitimacy with public authorities rather than private corporations.

Indeed, some might want Google to take an even more public-spirited position. As Deven Desai, of the Thomas Jefferson School of Law, notes in a post on Concurring Opinions, “this shift may open the door to more arguments for Google to be a gatekeeper and policer of content.” Indeed, although he does not favour such an approach, he points out that it is a “difficult question … why or why not act on some issues but not others.” Why, for example, copyright infringement but not hate speech? For now, even Google might lack the data and/or content-analyzing capacities effectively to recognize hate speech. But given how fast technology evolves, this might change sooner rather than later. As prof. Desai observes, if Google becomes a more overt internet regulator, it will be criticized, for example from a competition-law standpoint. But of course it will also be criticized if it refuses to take on that role.

Either way, there will be a lot of interesting questions for lawyers. At what point does Google, acting as a quasi-regulator, become a state agent subject to constitutional constraints? How does competition law, and its prohibition on abuse of a dominant position, interact with the constitutional protection of freedom of speech, if the latter encompasses Google’s freedom of editorial judgment about its algorithm? What sort of due process rights do or should people affected by Google’s editorial decisions have, and what legal framework – administrative law, for example, or maybe tort – is appropriate for settling this question? This is a lot to think about. No answers from me for now.

Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of both a speaker potentially subject to censorship and an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Harvard. Back in April, prof. Volokh and a lawyer from California, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of the internet search results is done by computer algorithms rather than by human beings is of no import. It “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).

In a recent op-ed in the New York Times, prof. Wu challenges the latter claim. For him, it matters a lot whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people, sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and doing so to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter that computer algorithms are designed by humans. A machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster.

Prof. Volokh responds to the arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so if made by computers; but to the extent they are constitutionally protected – and it is a large one – the interposition of an algorithm should not matter at all.

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is less protective of free speech than its American counterpart. However, I do not think that these differences, significant as they may be in some cases, would or should matter here.

The argument prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That concern is one it shares with the traditional media to which prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit for all manner of content which it neither created nor even vetted. It is different too in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.” To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or host his or her content. Still, when its decisions not to do so are motivated not by commercial considerations, but by requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. For example, one has to wonder whether there is a risk of a conflict of interest between its roles as speaker and censors’ agent – whether it will not be tempted to trade greater compliance with the regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.