Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as a provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of a speaker potentially subject to censorship and of an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Columbia. Back in April, Prof. Volokh and a California lawyer, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or that of a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of the internet search results is done by computer algorithms rather than by human beings is of no import. It “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but the technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).

In a recent op-ed in the New York Times, Prof. Wu challenges the latter claim. For him, it matters a great deal whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people – sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and extending them to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter that computer algorithms are designed by humans. A machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster.

Prof. Volokh responds to these arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so when made by computers; but to the extent that they are constitutionally protected – and that extent is a large one – the interposition of an algorithm should not matter at all.

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is less protective of free speech than its American counterpart. However, I do not think that these differences, significant though they are in some cases, would or should matter here.

The argument Prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That is a concern it shares with the traditional media to which Prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit for all manner of content which it neither created nor even vetted. It is different too in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts that role. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.”

To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or host their content. Still, when its decisions not to do so are motivated not by commercial considerations, but by the requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. For example, one has to wonder whether there is a risk of a conflict of interest between its roles as speaker and as censors’ agent – whether it will be tempted to trade greater compliance with the regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.

Author: Leonid Sirota

Law nerd. I teach constitutional law at the Auckland University of Technology Law School, in New Zealand. I studied law at McGill, clerked at the Federal Court of Canada, and then did graduate work at the NYU School of Law.
