The Course of Human Events

David R. Johnson and David Post have published a fascinating essay, “Governing Online Spaces: Virtual Representation,” at the Volokh Conspiracy, arguing that Facebook ought to move towards becoming something like a representative democracy. While various attempts at regulating Facebook and other online services and communities from the outside are a frequent topic of discussion, including, for example, here and here, Mr. Johnson and Prof. Post raise a different, albeit related, issue: that of internal governance.

At present, Facebook’s relationship with its users is akin to that of a “benevolent dictator[],” or perhaps an enlightened absolute monarch, a sort of digital Frederick the Great, with his subjects. That relationship is governed by the Terms of Service (TOS) that users must accept in order to use Facebook. And the company reserves the right to change those Terms of Service at will. As the law now stands, it is entitled to do so. But, say Mr. Johnson and Prof. Post, this is wrong as a matter of principle. The principles of “self governance and self-determination” mean

that all users have a right to participate in the processes through which the rules by which they will be bound are made.  This principle is today widely accepted throughout the civilized world when applied to formal law-making processes, and we believe it applies with equal force to the new forms of TOS-based rule-making now emerging on the Net.

Market discipline―the threat of users leaving Facebook in favour of a competitor―is not enough, because the cost to the user of doing so is unusually high, due both to the users having “invested substantial amounts of time and effort in organizing their own experience at the site” and to network effects.

But attempts to have users provide input on Facebook’s Terms of Service have not been very successful. Most users simply cannot be bothered to engage in this sort of self-governance; others are ignorant or otherwise incompetent; but even the small portion of users who are willing and able to contribute something useful to Facebook’s governance comprises way too many people to engage in meaningful deliberation. Mr. Johnson and Prof. Post propose to get around these problems by setting up a system of representation. Instead of users engaging in governance directly, they would

be given the ability to grant a proxy to anyone who has volunteered to act on his/her behalf in policy discussions with Facebook management. These proxy grants could be made, revoked, or changed at any time, at the convenience of the user. Those seeking proxies would presumably announce their general views, proposals, platforms, and positions. Anyone receiving some minimum number of proxies would be entitled to participate in discussions with management — and their views would presumably carry more or less weight depending upon the number of users they could claim to represent.
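
To make the mechanics concrete, here is a minimal sketch of what such a proxy registry might look like, in Python. The names and the threshold are hypothetical; Mr. Johnson and Prof. Post do not specify any implementation, so this is only an illustration of the idea:

    from collections import Counter

    class ProxyRegistry:
        """A toy model of the proposed proxy scheme (all names hypothetical)."""

        def __init__(self, minimum_proxies=1000):
            self.minimum_proxies = minimum_proxies  # threshold for a seat at the table
            self.grants = {}  # user -> chosen representative

        def grant(self, user, representative):
            # Proxies can be granted or changed at any time, at the user's convenience.
            self.grants[user] = representative

        def revoke(self, user):
            # ... and revoked at any time as well.
            self.grants.pop(user, None)

        def seated_representatives(self):
            # Anyone with enough proxies participates in discussions with
            # management; their weight is the number of users they represent.
            counts = Counter(self.grants.values())
            return {rep: n for rep, n in counts.items() if n >= self.minimum_proxies}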

This mechanism of virtual representation would, Mr. Johnson and Prof. Post argue, have several benefits. Those seeking and obtaining proxies―the representatives in a virtual democracy―would be people with the motivation and, one expects, the knowledge seriously to participate in Facebook’s governance. Representation sidelines extremists and gives power to the moderate voices and the silent majority ignored by direct democracy. At the same time, it gives Facebook the means of knowing how users feel about what it does and what it proposes to do differently in the future, which is handy for keeping them happy and avoiding having them rebel and desert to a competitor.

The proposal is not―”yet”―for a full-scale virtual democracy. Mr. Johnson and Prof. Post accept that Facebook will retain something like a monarchical veto over the demands of its users’ representatives. Still, it is pretty radical―and pretty compelling. By all means, read it in full.

As Mr. Johnson and Prof. Post recognize, “there are many unanswered questions.” Many of those concern the details of the virtual mixed constitution (to borrow a term from 18th-century political philosophy) that they are proposing, and the details of its implementation. But here’s another question, one at which their discussion hints without quite reaching it.

Suppose Facebook reorganizes itself into a self-governing polity of some sort, whether with a mixed constitution or a truly democratic one. What effect would this have on its dealings with those who wish to govern it from the outside? Mr. Johnson and Prof. Post write that “Facebook’s compliance with the clearly expressed will of the online polity would also surely help to keep real-space regulators at bay.” But what if it doesn’t? Not all of those regulators, after all, care a whole lot for democracy, and even if they do, their democratic constituents are citizens of local polities, not of a global one. Could this global democratic polity fight back? Could its members

dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them?

Mr. Johnson and Prof. Post allude to Alexander Hamilton and James Madison as their inspiration. But what about Thomas Jefferson?

iPrudes

There was an interesting story by Michael Posner in The Globe and Mail yesterday on Apple’s decision not to allow the sale of books and apps telling the story of Danish hippies on its commercial platforms, iBookstore and the App Store, because they contain some photographs featuring naked men and women. Apple says the pictures breach its policy against sexually explicit images. Mr. Posner accuses the company of hypocrisy, because it has not banned other books “filled with pictures of naked bodies [and] continues to sell apps for Playboy and Sports Illustrated, which feature partially naked women.” So does the author of the books, who points out that Apple’s founder, Steve Jobs, claimed to be a spiritual descendant of the hippie movement and to share some of its ideals, which he accuses Apple of betraying. The publishers, for their part, insist that the books are in no way pornographic or arousing, so that they do not breach Apple’s guidelines.

Be that as it may, the Danish authorities are not amused. Mr. Posner writes that

[l]ast week, Uffe Elbaek, the country’s culture minister, wrote to his European counterparts, and to European Union commissioners Neelie Kroes and Androulla Vassiliou, seeking to have the issue debated within the EU.

“This is a history book,” Elbaek said in an interview. “It documents how we behaved in those days. Is it fair that an American company without any real dialogue … can apply American moral standards to a product that only interests a Danish audience with vastly different moral standards?”

The minister worries that corporations “will decide how freedom of speech will be arbitrated and who is allowed artistic freedoms” and argues that “it’s important that we have these discussions at regional and national levels.” Mr. Posner too worries about freedom of speech. Indeed, he accuses Apple of “de facto censorship.”

This brings to mind several issues about which I have already blogged. One is the dual and ambiguous position of technology companies as speakers and censors, about which I have written in Google’s case. Apple might argue that a decision not to allow the sale of a book it deems offensive or otherwise unsuitable is a form of editorial judgment and, thus, protected speech, just as Google argues its decision to disfavour copyright-infringing websites in ranking its search results is. At the same time, as the provider of a platform through which others express themselves, Apple takes on a speech-regulating role; and the importance of this role is proportionate to that platform’s popularity.

But there is a crucial difference between Google removing content from, say, YouTube at the request of a government agency, and Apple removing content from its stores on its own, without any government involvement. In my view, it is not fair to refer to such decisions as censorship. A private company, at least so long as it is not a monopolist, has no power to prohibit speech. If a speaker is not allowed to use one private platform, he or she can turn to another. As Mr. Posner notes, the books Apple has banned from its stores are best-sellers in print. Their author is not exactly being silenced.

Besides, we accept that newspapers or publishers do not print everything that is submitted to them. The question, then, is whether there is a reason for holding technology companies to a different standard. Dominant market position or, a fortiori, monopoly might be one such reason. But I doubt that Apple actually has a dominant market position, even in the app market (considering Android’s popularity); it surely doesn’t have one in the book market. And I’m not sure I can think of anything else that would justify, even as a matter of morality, never mind law, saying that Apple (or Google, or whoever) has more onerous duties towards freedom of expression than traditional media companies, as Mr. Elbaek, the Danish minister, seems to think.

As always in the face of such disagreement, there also arises the question of who (if anyone) ought to be making the rules, and how―the question of the appropriate “mode of social ordering,” to use Lon Fuller’s phrase, about which I blogged here, here, and here. Mr. Elbaek seems to think that the rules regulating the ability of platforms such as Google’s or Apple’s to select and “censor” their contents should be set by national governments (by legislatures presumably, or maybe by executives through international treaties) or by supra-national bodies such as, presumably, the EU. (Note that he spoke of “discussions at regional and national levels”―not at the UN, which he probably knows is not too keen on certain kinds of offensive speech the Danes see nothing wrong with.) But it’s not clear that governments, at whatever level, should be making these rules. As I wrote in my earlier posts, legislation is often a clumsy tool for dealing with emerging technologies and new business models, because the latter develop faster than the former can adapt. And private ordering through the market might be enough to take care of the problem here, if there even is one. Apple is not a monopolist; it has competitors who might be willing to give the books which it does not like a platform, and profit from them. Authors and readers are free to use these competing platforms. Apple will remain a prude―hypocritical (as prudes often are) or not―if it thinks there is a profit to be made in prudishness, or it will convert to more liberal ways if that is more profitable.

The Future is Even Creepier

There is an interesting story in today’s New York Times that brings together a couple of my recent topics, the tracking of internet users by the websites they visit and the use of the data thus generated in advertising, about which I wrote here, and the use of target-specific outreach and advertising by President Obama’s re-election campaign, about which I wrote here. There are even, for good measure, overtones of human dignity there.

The story is about the way the data gathered when we use the internet, whether just browsing or searching for something in particular, are then used to throw those annoying targeted ads at us wherever we go. The data are collected by computers of course; it is computer algorithms, too, that analyze them and use them to assign us to some fine-grained category (depending on our inferred interests and means); and it is still computers that sell the right to show us a display ad to companies that might be interested in the specific category of consumer each of us is deemed to belong to.
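
In outline, the pipeline the story describes can be caricatured in a few lines of Python. The categories, pages and prices below are invented for illustration and are not drawn from any actual ad network:

    # A caricature of the targeting pipeline: browsing events go in, a
    # fine-grained consumer category comes out, and what is sold is the
    # right to show that category an ad. All names and rates are invented.

    def categorize(browsing_history):
        pages = set(browsing_history)
        if "camera-review" in pages and "legal-blog" in pages:
            return "legal-reader-shopping-for-a-camera"
        if "camera-review" in pages:
            return "camera-shopper"
        return "general-audience"

    def price_of_showing_an_ad_to(category):
        # Advertisers bid more for narrower, more "valuable" categories.
        rates = {"camera-shopper": 0.95, "legal-reader-shopping-for-a-camera": 1.20}
        return rates.get(category, 0.10)

    print(price_of_showing_an_ad_to(categorize(["legal-blog", "camera-review"])))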

This is roughly similar, if I understand correctly, to what the Obama campaign did in studying the data it had collected about voters and using it to target each person specifically according to his or her likely interests and concerns, except that the field of application here is commerce rather than politics. And just as some people have doubts about the morality of that tactic in the political realm, there are those who are convinced that its application in the commercial one is immoral. The Times quotes a consumer-rights advocate as saying that “[o]nline consumers are being bought and sold like chattel [sic]. … It’s dehumanizing.” As with what the Obama campaign did, I’m not sure about that. I’m not convinced by the description of the process as selling people―it involves selling information about me, and the right to show me a message on which I remain free to act or not, not my personhood. I don’t feel dehumanized by those ads―just creeped out, which, I think, is a very human reaction, by the way (I doubt that cattle are creeped out by being sold).

Perhaps there is an echo here of the debate, in human dignity scholarship, over whether dignity and its violations are an objective matter, meaning that one’s dignity can be violated even though one doesn’t feel that it is, or a subjective one, meaning that one’s perception is determinative. (A classic example of this problem is the controversy over dwarf-tossing: the dwarf consents to being thrown around for sport and makes money out of it―but can the state prohibit the activity regardless, on the ground that it is a violation of his dignity even if he doesn’t think it is?)

I should note one possible difference between what is happening in the commercial advertising context and what the Obama campaign did. The companies that track internet users claim that those whom they track are not identified in any recognizable fashion. When they sell the right to show me ads to advertisers, they might describe me as something like “the guy who reads legal blogs and news websites a lot and has been looking at cell phones recently.” The Obama campaign, of course, was identifying people by name, address, etc., in order to reach out to them. So maybe the internet-ad people are less creepy than the politicians. But maybe not. The Times’ article suggests people are very skeptical about the actual anonymity of internet users tracked by advertisers, so the difference might be illusory.

As I said above and in my previous posts, even if this is not immoral and/or illegal, it is creepy. Perhaps “do not track” features of internet browsers will save us from the onslaught of creepiness. But advertisers are not only fighting these features; they also point out that their use might undermine the bargain at the foundation of the internet―in exchange for putting up with ads, we get to enjoy all sorts of great content (such as this blog, right?) for free. Perhaps we are now finding out that the bargain was a Faustian one. But it’s likely too late to get out of it.

To Track or Not to Track?

There was an interesting article in the New York Times this weekend about the brewing fight around “do not track” features of internet browsers (such as Firefox or Internet Explorer) that are meant to tell websites visited by the user who has enabled the features not to collect information about the user’s activity for the purposes of online advertising. Here’s a concrete example that makes sense of the jargon. A friend recently asked me to look at a camera she was considering buying, so I checked it out on Amazon. Thereafter, for days on end, I was being served with ads for this and similar cameras on any number of websites I visited. Amazon had recorded my visit, concluded (wrongly, as it happens) that I was considering buying the camera in question, transmitted the information to advertisers, and their algorithms targeted me for camera ads. I found the experience a bit creepy, and I’m not the only one. Hence the appearance of the “do not track” functionalities: if I had been using a browser with a “do not track” feature, this would presumably not have happened.
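
Mechanically, “do not track” is a very simple protocol: the browser adds a DNT: 1 header to each request it sends, and it is then entirely up to the receiving site to honour that preference or ignore it. Here is a minimal sketch in Python (the URL is a placeholder, and the helper function is hypothetical):

    import requests  # third-party HTTP library

    # The browser's side: every request carries a DNT: 1 header
    # announcing the user's preference not to be tracked.
    response = requests.get(
        "https://example.com/cameras",
        headers={"DNT": "1"},
    )

    # The site's side: nothing forces it to honour the preference,
    # which is precisely what the fight described below is about.
    def should_track(request_headers):
        """Return False if the visitor has asked not to be tracked."""
        return request_headers.get("DNT") != "1"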

Advertisers, of course, are not happy about “do not track.” Tracking our online activities allows them to target very specific ads at us, ads for stuff we have some likelihood of being actually interested in. As the Times explains,

[t]he advent of Do Not Track threatens the barter system wherein consumers allow sites and third-party ad networks to collect information about their online activities in exchange for open access to maps, e-mail, games, music, social networks and whatnot. Marketers have been fighting to preserve this arrangement, saying that collecting consumer data powers effective advertising tailored to a user’s tastes. In turn, according to this argument, those tailored ads enable smaller sites to thrive and provide rich content.

The Times reports that advertisers have been fighting the attempts of an NGO called the W3C (for “World Wide Web Consortium”) to develop standards for “do not track” features. They have also publicly attacked Microsoft for its plans to make “do not track” a default (albeit changeable) setting on the next version of Internet Explorer. And members of the U.S. Senate are getting into the fight as well. Some are questioning the involvement of an agency of the US government, the Federal Trade Commission, with W3C’s efforts, while others seem to side against the advertisers.

The reason I am writing about this is that this may be another example of the development of new rules happening before our eyes, and it gives us another opportunity to reflect on the various mechanisms by which social and legal rules emerge and interact, as well as on the way our normative systems assimilate technological development. (Some of my previous posts on these topics are here, here, and here.)

W3C wants to develop rules―not legally binding rules of course, but a sort of social norm which it hopes will be widely adopted―regulating the use of “do not track” features. But as with any would-be rule-maker, a number of questions arise. The two big ones are ‘what legitimacy does it have?’ and ‘is it competent?’ As the Times reports, some advertisers are, in fact, raising the question of W3C’s competence, claiming the matter is “entirely outside their area of expertise.” This is self-serving of course. W3C asserts that it “bring[s] diverse stake-holders together, under a clear and effective consensus-based process,” but that’s self-serving too, not to mention wishy-washy. And of course a claim can be both self-serving and true.

If not W3C, who should be making rules about “do not track”? Surely not advertisers’ trade groups? What about legislatures? In theory, legislatures possess democratic legitimacy, and also have the resources to find out a great deal about social problems and the best ways to solve them. But in practice, it is not clear that they are really able and, especially, willing to put these resources to good use. Especially on a somewhat technical problem like this, where the interests on one side (that of the advertisers) are concentrated while those on the other (the privacy of consumers) are diffused, legislatures are vulnerable to capture by interest groups. But even quite apart from that problem, technology moves faster than the legislative process, so legislation is likely to come too late, and not to be adapted to the (rapidly evolving) needs of the internet universe. And as for legitimacy, given the global impact of the rules at issue, what is, actually, the legitimacy of the U.S. Congress―or, say, the European Parliament―as a rule-maker?

If legislatures do not act, there are still other possibilities. One is that the courts will somehow get involved. I’m not sure what form lawsuits related to “do not track” might take―what cause of action anyone involved might have against anyone else. Perhaps “do not track” users might sue websites that refuse to comply with their preferences. Perhaps websites will make the use of tracking a condition of visiting them, and sue those who try to avoid it. I’m not sure how that might work, but I am pretty confident that lawyers more creative than I will think of something, and force the courts to step in. But, as Lon Fuller argued, courts aren’t good at managing complex policy problems which concern the interests of multiple parties, not all of them involved in litigation. And as I wrote before, courts might be especially bad at dealing with emerging technologies.

A final possibility is that nobody makes any rules at all, and we just wait until some rules evolve because behaviours converge on them. F.A. Hayek would probably say that this is the way to go, and sometimes it is. As I hope my discussion of the severe limitations of various rule-making fora shows, making rules is a fraught enterprise, which is likely to go badly wrong due to lack of knowledge if not capture by special interests. But sometimes it doesn’t make sense to wait for rules to grow―there are cases where having a rule is much more important than having a good rule (what side of the road to drive on is a classic example). The danger in the case of “do not track” might be an arms race between browser-makers striving to give users the ability to avoid targeted ads, or indeed any ads at all, and advertisers (and content providers) striving to throw them at users. Pace the chairman of the Federal Trade Commission, whom the Times quotes as being rather optimistic about this prospect, it might actually be a bad thing, if the “barter system” that sustains the Internet as we know it is caught in the crossfire.

Once again, I have no answers, only questions. Indeed my knowledge of the internet is too rudimentary for me to have answers. But I think what I know of legal philosophy allows me to ask some important questions.

I apologize, however, for doing it at such length.

Oui… Non… Peut-Être?

The question of the application of the rules of Quebec’s Election Act concerning citizens’ election expenses to activities on the internet, which I have already addressed here and here, has surfaced yet again. According to a Radio-Canada report, the Chief Electoral Officer (Directeur général des élections, or DGE) initially concluded that liberaux.net, a site fiercely opposed to the Quebec Liberal Party, contravened the Election Act, which severely limits the expenses that any person other than a political party or a candidate may incur during an election period to promote or oppose the election of a party or a candidate; less than 24 hours later, the DGE changed his mind.

According to Radio-Canada, the DGE concluded that liberaux.net was a “citizen medium [similar] to those that benefit from the exception set out in the first paragraph of section 404 of the Election Act, which guarantees the freedom of expression of the media by specifying that it does not count as an election expense.” The site’s creator also insists that she is an ordinary citizen. She says she spent nothing to create the site, other than her own work of course, and that its hosting was provided to her free of charge.

In my view, the DGE’s interpretation of the Election Act is wrong. He reads the statute as saying what it perhaps ought to say, but does not in fact say. The relevant provision, paragraph 1 of section 404, excludes from the definition of “election expenses” (in the French text):

la publication, dans un journal ou autre périodique, d’articles, d’éditoriaux, de nouvelles, d’entrevues, de chroniques ou de lettres de lecteurs, à la condition que cette publication soit faite sans paiement, récompense ou promesse de paiement ou de récompense, qu’il ne s’agisse pas d’un journal ou autre périodique institué aux fins ou en vue de l’élection et que la distribution et la fréquence de publication n’en soient pas établies autrement qu’en dehors de la période électorale.

The English text of this provision speaks of

the cost of publishing articles, editorials, news, interviews, columns or letters to the editor in a newspaper, periodical or other publication, provided that they are published without payment, reward or promise of payment or reward, that the newspaper, periodical or other publication is not established for the purposes or in view of the election and that the circulation and frequency of publication are as what obtains outside the election period.

The problem for liberaux.net is that it is not “un journal ou autre périodique” (a newspaper or other periodical). A “périodique,” according to the Dictionnaire de l’Académie française, is a publication “qui paraît par livraisons successives, dans des temps fixes et réglés”: one that appears in successive installments, at fixed and regular intervals. The Act’s reference to the frequency of publication of the “journal ou autre périodique” confirms that the legislature had this meaning in mind. A daily, a weekly, a review that comes out ten times a year: these are periodicals within the meaning of the Election Act. A website that is updated according to its author’s motivation and whims is not.

One might be tempted to fall back on the English text, which seems more permissive, since it speaks of a “newspaper, periodical or other publication” (my italics). But even setting aside the Oxford English Dictionary’s definition of “publication” as “a book or journal issued for public sale,” which a website simply does not fit, I think it is the French text that reflects the legislature’s intent, given the reference, in both official languages, to the frequency of publication.

Moreover, the DGE’s “technologically neutral” interpretation runs counter to the scheme of section 404 of the Election Act, which contains separate provisions, in paragraphs 1, 2 and 3, applying respectively to the periodical press, to books, and to telecommunication media (radio and television). In my view, this interpretation is therefore mistaken.

It is no doubt regrettable, indeed ridiculous, that the Election Act makes no accommodation whatever for citizens’ expression on the internet. By comparison, the Canada Elections Act exempts from its definition of “election advertising,” in paragraph 319(d), “la diffusion par un individu, sur une base non commerciale, de ses opinions politiques sur le réseau communément appelé Internet” (the transmission by an individual, on a non-commercial basis, of his or her political opinions on the network commonly known as the Internet). One might of course ask whether this exemption is sufficient. (Why does it apply to individuals, but not to groups, for example?) One might also ask whether a “technologically neutral” provision, applying to every form of citizen expression, would not be preferable to provisions specific to each medium. Be that as it may, the federal provision is better than nothing.

But the Quebec statute contains no equivalent provision. It is not for the DGE, whose task is to apply the law, to rewrite it, however desirable that rewriting might be.

In with the New?

Last week, I suggested that “[n]ew technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.” But there is no doubt that our legal rules, unlike perhaps moral ones, need updating when new technology comes along. How this updating is to happen is a difficult question. Lon Fuller, in his great article on “The Forms and Limits of Adjudication,” distinguished “three ways of reaching decisions, of settling disputes, of defining men’s relations to one another,” which he also called “forms of social ordering”: elections (and, one has to assume, resulting legislation), contract, and adjudication. All three can be and are used in developing rules surrounding new technologies, and the distinctions between them are not as sharp as Fuller suggested, because they are very much intertwined. Some recent stories are illustrative.

One is a report in the New York Times about a settlement between an unspecified group of plaintiffs and Facebook regarding Facebook’s approach to what it calls “sponsored stories,” which tell us that such and such friends “like” a certain company’s page. Pursuant to the settlement, Facebook “will amend its terms of use to explain that users give the company permission to use their name, profile picture and content [and] offer settings that let users control which of their actions — which individual like, listen, or read — will appear in Sponsored Stories.” More than the (substantial) costs to Facebook, what interests me here is the way in which this settlement establishes or changes a rule – not a legal rule in a positivist sense, but a social rule – regulating the use of individuals’ names and images in advertising, introducing a requirement of consent and opt-out opportunity.
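
The rule the settlement establishes is, in effect, a consent-and-opt-out filter. A toy rendering in Python, with invented names, might look like this:

    # A toy rendering of the settlement's rule: an action ("like",
    # "listen", "read") is eligible for a Sponsored Story only if the
    # user has not opted that kind of action out. Names are invented.

    def sponsorable_actions(actions, opt_outs):
        """Keep only the actions the user still permits in Sponsored Stories."""
        return [a for a in actions if a["kind"] not in opt_outs]

    actions = [
        {"user": "alice", "kind": "like", "target": "Acme Co."},
        {"user": "alice", "kind": "read", "target": "Some Article"},
    ]
    print(sponsorable_actions(actions, opt_outs={"read"}))
    # Only the "like" remains eligible for use in a Sponsored Story.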

What form of social ordering is at work here? Contract, in an immediate sense, since a settlement is a contract. But adjudication too, in important ways. For one thing, the settlement had to be approved by a court. And for another, and more importantly, it seems more than likely that the negotiation would not have happened outside the context of a lawsuit which it was meant to settle. Starting, or at least credibly threatening, litigation is probably the only way for a group of activists and/or lawyers to get a giant such as Facebook to negotiate with them – in preference to any number of other similar groups – and thus to gain a disproportionate influence on the framing of the rules the group is interested in. Is this influence legitimate? Even apart from legitimacy, is it a good thing from a policy standpoint? For example, how do “we” – or does anyone – know that this particular group is motivated by the public interest and, assuming that it is, capable of evaluating it correctly and of being an effective negotiator? I think these are very troubling questions, but there are also no obvious ways of preventing social ordering through adjudication/negotiation even if we do conclude that it is problematic.

That is because alternative modes of social ordering are themselves flawed. Legislation is slow and thus a problematic response to new and fast-developing technologies. And adjudication (whether in a “pure” form – just letting courts develop rules in the process of deciding cases – or in the shape of more active judicial supervision of negotiated settlements) comes with problems of its own.

One is the subject of a post for Forbes by Timothy B. Lee, who describes how the fact that judges are removed from the communities that are subject to and have to live with the rules that they develop leads them to produce rules that do not correspond to the needs of these communities. One example he gives is that “many computer programmers think they’d be better off without software patents,” yet one of the leading judges who decides cases on whether there should be such patents “doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them.” Mr. Lee believes that this would be different if the judges in question happened to have friends or family members among the ranks of software developers. Perhaps – but, as he acknowledges, it is not possible for judges to have personal connections in every walk of life. Even trying to diversify the courts will only do so much. Furthermore, the individual experiences on which Mr. Lee thinks judges should rely might be atypical and thus tend to produce worse, rather than better, rules. Here too, questions about just how much judging ought to be informed by personal experience – as a matter both of policy and of legitimacy – are pressing.

Another set of questions about the courts’ handling of new technologies is the subject of a great paper by Kyle Graham, a professor at Santa Clara University and the author of the entertaining Non Curat Lex blog. Focusing on the development of liability rules surrounding new technologies, and using the examples of some once-new gadgets, mostly cars and planes, Prof. Graham points out that

[t]he liability rules that come to surround an innovation do not spring immediately into existence, final and fully formed. Instead, sometimes there are false starts and lengthy delays in the development of these principles. These detours and stalls result from five recurring features of the interplay between tort law and new technologies … First, the initial batch of cases presented to courts may be atypical of later lawsuits that implicate the innovation, yet relate rules with surprising persistence. Second, these cases may be resolved by reference to analogies that rely on similarities in form, and which do not wear well over time. Third, it may be difficult to isolate the unreasonable risks generated by an innovation from the benefits it is perceived to offer. Fourth, claims by early adopters of the technology may be more difficult to recover upon than those that arise later, once the technology develops a mainstream audience. Fifth, and finally, with regard to any particular innovation, it may be impossible to predict whether, and for how long, the recurring themes within tort law and its application that tend to yield a “grace” period for an invention will prevail over those tendencies with the opposite effect. (102)

I conclude, with my customary optimism, that there seem to be no good ways of developing rules surrounding new technologies, though there is a great variety of bad ones. But some rules there must be, so we need to learn to live with rotten ones.

Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of both a speaker potentially subject to censorship and an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Columbia. Back in April, Prof. Volokh and a lawyer from California, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of the internet search results is done by computer algorithms rather than by human beings is of no import. It “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).
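
The point about algorithmic “editorial judgment” can be made concrete with a toy ranking function. This is emphatically not Google’s algorithm; it only illustrates that choosing signals and weights is itself a judgment about what users ought to see first:

    # A toy ranking function: the choice of signals and weights (favouring
    # relevance over popularity, penalizing suspected spam) is an
    # editorial judgment encoded in software. All values are invented.

    def score(page):
        return (3.0 * page["relevance"]
                + 1.0 * page["popularity"]
                - 5.0 * page["spam_likelihood"])

    results = [
        {"url": "a.example", "relevance": 0.9, "popularity": 0.2, "spam_likelihood": 0.0},
        {"url": "b.example", "relevance": 0.4, "popularity": 0.9, "spam_likelihood": 0.1},
    ]
    results.sort(key=score, reverse=True)  # the "editorial" ordering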

In a recent op-ed in the New York Times, Prof. Wu challenges the latter claim. For him, it matters a lot whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people, sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and doing so to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter that computer algorithms are designed by humans. A machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster.

Prof. Volokh responds to the arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so if made by computers; but to the extent they are constitutionally protected – and it is a large one – the interposition of an algorithm should not matter at all.

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is somewhat less protective of free speech than its American counterpart. But I do not think that these differences, however significant they are in some cases, would or should matter here.

The argument Prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That concern is one it shares with the traditional media to which Prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit for all manner of content which it neither created nor even vetted. It is different too in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.” To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or host his or her content. Still, when its decisions not to do so are motivated not by commercial considerations, but by requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. For example, one has to wonder whether there is a risk of a conflict of interest between its roles as speaker and censors’ agent – whether it will not be tempted to trade greater compliance with the regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.

No New Thing in the Cloud

The Stanford Encyclopedia of Philosophy has a new entry on “Information Technology and Moral Values,” by John Sullins, a professor of philosophy at Sonoma State University. It is a useful summary of (many of) the moral issues that information technology raises, and a reminder that issues that we are used to considering from a policy standpoint also have moral dimensions. At the same time, it is a reminder that there is no new thing under the sun – itself an old observation.

Generally speaking, the moral issues which Prof. Sullins thinks information technology raises are pretty much the same moral issues that you would expect a left-leaning intellectual to worry about in just about any context – income inequalities, gender inequality, “justice”. (I might be wrong to attribute these leanings to Prof. Sullins of course; I have no other ground for this attribution than the article. And yet it feels like ground enough.) A libertarian or a conservative would probably have written a substantially different-sounding piece on the same topic; different-sounding, but equally predictable. New technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.

A couple of specific examples seem also to confirm the timeless cynicism (or is it wisdom?) of Ecclesiastes. One is given by Prof. Sullins himself:

The move from one set of dominant information technologies to another is always morally contentious. Socrates lived during the long transition from a largely oral tradition to a newer information technology consisting of writing down words and information and collecting those writings into scrolls and books. Famously Socrates was somewhat antagonistic to writing and he never wrote anything down himself. Ironically, we only know about Socrates’ argument against writing because his student Plato ignored his teacher and wrote it down.

Socrates worried that writing would cause people to stop learning stuff – why bother when you can look it up in a book? Just imagine what the grumpy old man would have said about Google and Wikipedia.

The second example came to mind when reading Prof. Sullins’s discussion of the concerns raised by the “Moral Values in Communicating and Accessing Information.” Among the concerns he explores under this rubric are the question of “[w]ho has the final say whether or not some information … is communicated or not” and that of the accuracy of the information communicated about someone or something (and the problem of who bears the burden of ensuring accuracy, or perhaps of dealing with the consequences of inaccurate information being communicated). This reminded me of the passage in The Master and Margarita where Yeshua Ha-Notsri – Jesus – tells Pilate that he “is starting to worry that this whole confusion” about what he told the people “will go on for a very long time. And it’s all because he is writing down my words incorrectly.” “He” is Levi Matvei – Matthew. As Yeshua goes on to explain, Matvei follows him “with a goat-skin and writes all the time. But I once looked at this goat-skin, and was horrified. I never said anything, anything at all of what’s written there. I begged him: for God’s sake, burn your goat-skin! But he tore it from my hands and ran away.” He might as well have been trying to get Facebook to delete some information about him, right? As the ensuing confusion shows, there are indeed dangers in recording information about someone without his consent, and then communicating it to all sorts of not always well-intentioned people.

So there is nothing new in the cloud, where this text will be stored, any more than under the sun, on goat-skins, or anywhere else, is there? Yet it is just possible that there is nothing new only because we do not see it. Perhaps new technologies really do create new problems – but we are so busy trying to deal with old ones that we do not notice.