Yes… No… Maybe?

The question whether the rules of Québec's Election Act (Loi électorale) on citizens' election expenses apply to online activities, which I have already discussed here and here, has surfaced yet again. According to a Radio-Canada report, the Chief Electoral Officer (Directeur général des élections) first concluded that liberaux.net, a website fiercely opposed to the Québec Liberal Party, contravened the Election Act, which severely limits the expenses that anyone other than a political party or a candidate may incur during an election period to promote or oppose the election of a party or a candidate; less than 24 hours later, the DGE changed his mind.

According to Radio-Canada, the DGE concluded that liberaux.net was a "citizen medium [similar] to those that benefit from the exception provided for in the first paragraph of section 404 of the Election Act, which guarantees the freedom of expression of the media by specifying that it is not an election expense." The site's creator insists, for her part, that she is an ordinary citizen. She says she spent nothing to create the site, apart from her own work of course, and that its hosting was provided to her free of charge.

In my view, the DGE's interpretation of the Election Act is wrong. He reads the statute as saying what it perhaps ought to say, but does not say. The relevant provision, paragraph 1 of section 404, excludes from the definition of "election expenses" (in the French text)

la publication, dans un journal ou autre périodique, d’articles, d’éditoriaux, de nouvelles, d’entrevues, de chroniques ou de lettres de lecteurs, à la condition que cette publication soit faite sans paiement, récompense ou promesse de paiement ou de récompense, qu’il ne s’agisse pas d’un journal ou autre périodique institué aux fins ou en vue de l’élection et que la distribution et la fréquence de publication n’en soient pas établies autrement qu’en dehors de la période électorale.

The English text of this provision speaks of

the cost of publishing articles, editorials, news, interviews, columns or letters to the editor in a newspaper, periodical or other publication, provided that they are published without payment, reward or promise of payment or reward, that the newspaper, periodical or other publication is not established for the purposes or in view of the election and that the circulation and frequency of publication are as what obtains outside the election period.

The problem for liberaux.net is that it is not "un journal ou autre périodique" (a newspaper or other periodical). A periodical, according to the Dictionnaire de l'Académie française, is a publication "qui paraît par livraisons successives, dans des temps fixes et réglés" (one that appears in successive instalments, at fixed and regular intervals). The Act's reference to the frequency of publication of the "journal ou autre périodique" confirms that this is the meaning the legislature had in mind. A daily, a weekly, a magazine that comes out ten times a year are periodicals within the meaning of the Election Act. A website updated according to its author's motivation and whims is not.

One might be tempted to fall back on the English text, which appears more permissive, since it speaks of a "newspaper, periodical or other publication" (my emphasis). But even setting aside the Oxford English Dictionary's definition of "publication" as "a book or journal issued for public sale," which a website does not fit at all, I think it is the French text that reflects the legislature's intent, given the reference, in both official languages, to the frequency of publication.

Moreover, the DGE's "technologically neutral" interpretation runs counter to the scheme of section 404 of the Election Act, which contains separate provisions, in paragraphs 1, 2 and 3, applying respectively to the periodical press, to books, and to telecommunication media (radio and television). In my view, this interpretation is therefore mistaken.

It is no doubt regrettable, indeed I would say ridiculous, that the Election Act makes no allowance whatever for citizens' expression on the internet. By comparison, the Canada Elections Act exempts from its definition of "election advertising," at paragraph 319(d), the transmission by an individual, on a non-commercial basis, of his or her political views on the network commonly known as the Internet. One might of course ask whether this exemption is sufficient. (Why does it apply to individuals but not to groups, for example?) One might also ask whether a "technologically neutral" provision, applying to every form of citizen expression, would not be preferable to provisions specific to each medium. Be that as it may, the federal provision is better than nothing.

The Québec statute, however, contains no equivalent provision. And it is not for the DGE, whose job is to apply the law, to rewrite it, however desirable that rewriting might be.

In with the New?

Last week, I suggested that “[n]ew technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.” But there is no doubt that our legal rules, unlike perhaps moral ones, need updating when new technology comes along. How this updating is to happen is a difficult question. Lon Fuller, in his great article on “The Forms and Limits of Adjudication,” distinguished “three ways of reaching decisions, of settling disputes, of defining men’s relations to one another,” which he also called “forms of social ordering”: elections (and, one has to assume, resulting legislation), contract, and adjudication. All three can be and are used in developing rules surrounding new technologies, and the distinctions between them are not as sharp as Fuller suggested, because they are very much intertwined. Some recent stories are illustrative.

One is a report in the New York Times about a settlement between an unspecified group of plaintiffs and Facebook regarding Facebook’s approach to what it calls “sponsored stories,” which tell us that such and such friends “like” a certain company’s page. Pursuant to the settlement, Facebook “will amend its terms of use to explain that users give the company permission to use their name, profile picture and content [and] offer settings that let users control which of their actions — which individual like, listen, or read — will appear in Sponsored Stories.” More than the (substantial) costs to Facebook, what interests me here is the way in which this settlement establishes or changes a rule – not a legal rule in a positivist sense, but a social rule – regulating the use of individuals’ names and images in advertising, introducing a requirement of consent and an opt-out opportunity.

What form of social ordering is at work here? Contract, in an immediate sense, since a settlement is a contract. But adjudication too, in important ways. For one thing, the settlement had to be approved by a court. And for another, and more importantly, it seems more than likely that the negotiation would not have happened outside the context of a lawsuit which it was meant to settle. Starting, or at least credibly threatening, litigation is probably the only way for a group of activists and/or lawyers to get a giant such as Facebook to negotiate with them – in preference to any number of other similar groups – and thus to gain a disproportionate influence on the framing of the rules the group is interested in. Is this influence legitimate? Even apart from legitimacy, is it a good thing from a policy standpoint? For example, how do “we” – or does anyone – know that this particular group is motivated by the public interest and, assuming that it is, capable of evaluating it correctly and of being an effective negotiator? I think these are very troubling questions, but there are also no obvious ways of preventing social ordering through adjudication/negotiation even if we do conclude that it is problematic.

That is because alternative modes of social ordering are themselves flawed. Legislation is slow and thus a problematic response to new and fast-developing technologies. And adjudication (whether in a “pure” form – just letting courts develop rules in the process of deciding cases – or in the shape of more active judicial supervision of negotiated settlements) comes with problems of its own.

One is the subject of a post for Forbes by Timothy B. Lee, who describes how judges’ remoteness from the communities that are subject to, and have to live with, the rules they develop leads them to produce rules that do not correspond to those communities’ needs. One example he gives is that “many computer programmers think they’d be better off without software patents,” yet one of the leading judges who decides cases on whether there should be such patents “doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them.” Mr. Lee believes that this would be different if the judges in question happened to have friends or family members among the ranks of software developers. Perhaps – but, as he acknowledges, it is not possible for judges to have personal connections in every walk of life. Even trying to diversify the courts will only do so much. Furthermore, the individual experiences on which Mr. Lee thinks judges should rely might be atypical and thus tend to produce worse, rather than better, rules. Here too, questions about just how much judging ought to be informed by personal experience – as a matter both of policy and of legitimacy – are pressing.

Another set of questions about the courts’ handling of new technologies is the subject of a great paper by Kyle Graham, a professor at Santa Clara University and the author of the entertaining Non Curat Lex blog. Focusing on the development of liability rules surrounding new technologies, and using the examples of some once-new gadgets, mostly cars and planes, prof. Graham points out that

[t]he liability rules that come to surround an innovation do not spring immediately into existence, final and fully formed. Instead, sometimes there are false starts and lengthy delays in the development of these principles. These detours and stalls result from five recurring features of the interplay between tort law and new technologies … First, the initial batch of cases presented to courts may be atypical of later lawsuits that implicate the innovation, yet relate rules with surprising persistence. Second, these cases may be resolved by reference to analogies that rely on similarities in form, and which do not wear well over time. Third, it may be difficult to isolate the unreasonable risks generated by an innovation from the benefits it is perceived to offer. Fourth, claims by early adopters of the technology may be more difficult to recover upon than those that arise later, once the technology develops a mainstream audience. Fifth, and finally, with regard to any particular innovation, it may be impossible to predict whether, and for how long, the recurring themes within tort law and its application that tend to yield a “grace” period for an invention will prevail over those tendencies with the opposite effect. (102)

I conclude, with my customary optimism, that there seem to be no good ways of developing rules surrounding new technologies, though there is a great variety of bad ones. But some rules there must be, so we need to learn to live with rotten ones.

Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of a speaker potentially subject to censorship and of an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Columbia. Back in April, prof. Volokh and a lawyer from California, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of the internet search results is done by computer algorithms rather than by human beings is of no import. It “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).

In a recent op-ed in the New York Times, prof. Wu challenges the latter claim. For him, it matters a lot whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people, sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and doing so to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter that computer algorithms are designed by humans. A machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster could inherit his.

Prof. Volokh responds to these arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so if made by computers; but to the extent they are constitutionally protected – and it is a large one – the interposition of an algorithm should not matter at all.
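To make the disagreement concrete, here is a deliberately toy sketch in Python. It is not Google’s actual ranking method, which is not public; the signals, weights, and names are invented for illustration. Its only point is that the “editorial judgment” prof. Volokh describes lives in choices, here the hand-picked weights, that people make before the machine ever runs.

# Hypothetical illustration only: the weights below are editorial choices
# made by the (imaginary) designers, which the machine merely applies.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float   # how well the page matches the query (0 to 1)
    authority: float   # how trusted the source is deemed to be (0 to 1)
    freshness: float   # how recently the page was updated (0 to 1)

# Preferring authority over freshness (or the reverse) is a human judgment
# about what counts as "helpful and useful information".
WEIGHTS = {"relevance": 0.6, "authority": 0.3, "freshness": 0.1}

def score(page: Page) -> float:
    """Combine the signals according to the designers' chosen weights."""
    return (WEIGHTS["relevance"] * page.relevance
            + WEIGHTS["authority"] * page.authority
            + WEIGHTS["freshness"] * page.freshness)

def rank(pages: list[Page]) -> list[Page]:
    """Sort the results the way the designers decided they should be sorted."""
    return sorted(pages, key=score, reverse=True)

if __name__ == "__main__":
    for p in rank([
        Page("example.org/a", relevance=0.9, authority=0.2, freshness=0.8),
        Page("example.org/b", relevance=0.7, authority=0.9, freshness=0.1),
    ]):
        print(f"{score(p):.2f}  {p.url}")

Change the weights and the ranking changes; on prof. Volokh’s view that is precisely why the output remains the designers’ expression, while on prof. Wu’s view the fact that a machine applies them is what matters.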

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is less protective of free speech than its American counterpart. But I do not think that these differences, significant though they are in some cases, would or should matter here.

The argument prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That concern is one it shares with the traditional media to which prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit for all manner of content which it neither created nor even vetted. It is different too in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.” To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or to host their content. Still, when its decisions not to do so are motivated not by commercial considerations, but by requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. For example, one has to wonder whether there is a risk of a conflict of interest between its roles as speaker and censors’ agent – whether it will not be tempted to trade greater compliance with the regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.

No New Thing in the Cloud

The Stanford Encyclopedia of Philosophy has a new entry on “Information Technology and Moral Values,” by John Sullins, a professor of philosophy at Sonoma State University. It is a useful summary of (many of) the moral issues that information technology raises, and a reminder that issues that we are used to considering from a policy standpoint also have moral dimensions. At the same time, it is a reminder that there is no new thing under the sun – itself an old observation.

Generally speaking, the moral issues which prof. Sullins thinks information technology raises are pretty much the same moral issues that you would expect a left-leaning intellectual to worry about in just about any context – income inequalities, gender inequality, “justice”. (I might be wrong to attribute these leanings to prof. Sullins of course; I have no other ground for this attribution than the article. And yet it feels like ground enough.) A libertarian or a conservative would probably have written a substantially different-sounding piece on the same topic; different-sounding, but equally predictable. New technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.

A couple of specific examples seem also to confirm the timeless cynicism (or is it wisdom?) of Ecclesiastes. One is given by prof. Sullins himself:

The move from one set of dominant information technologies to another is always morally contentious. Socrates lived during the long transition from a largely oral tradition to a newer information technology consisting of writing down words and information and collecting those writings into scrolls and books. Famously Socrates was somewhat antagonistic to writing and he never wrote anything down himself. Ironically, we only know about Socrates’ argument against writing because his student Plato ignored his teacher and wrote it down.

Socrates worried that writing would cause people to stop learning stuff – why bother, when you can look it up in a book? Just imagine what the grumpy old man would have said about Google and Wikipedia.

The second example came to mind when reading prof. Sullins’ discussion of the concerns he groups under the heading “Moral Values in Communicating and Accessing Information.” Among the concerns he explores under this rubric are the question of “[w]ho has the final say whether or not some information … is communicated or not,” and that of the accuracy of the information communicated about someone or something (and the problem of who bears the burden of ensuring accuracy, or perhaps of dealing with the consequences of inaccurate information being communicated). This reminded me of the passage in The Master and Margarita where Yeshua Ha-Notsri – Jesus – tells Pilate that he “is starting to worry that this whole confusion” about what he told the people “will go on for a very long time. And it’s all because he is writing down my words incorrectly.” “He” is Levi Matvei – Matthew. As Yeshua goes on to explain, Matvei follows him “with a goat-skin and writes all the time. But I once looked at this goat-skin, and was horrified. I never said anything, anything at all of what’s written there. I begged him: for God’s sake, burn your goat-skin! But he tore it from my hands and ran away.” He might as well have been trying to get Facebook to delete some information about him, right? As the ensuing confusion shows, there are indeed dangers in recording information about someone without his consent, and then communicating it to all sorts of not always well-intentioned people.

So there is nothing new in the cloud, where this text will be stored, any more than under the sun, on goat-skins, or anywhere else, is there? Yet it is just possible that there is nothing new only because we do not see it. Perhaps new technologies really do create new problems – but we are so busy trying to deal with old ones that we do not notice.