The Separation of Spending and Speech

I commented yesterday on Vincent Marissal’s column in La Presse about the impact of social media on the upcoming election campaign in Québec – and the way in which social media undermine the regulation of the electoral process that limits the electoral expenses of “third parties” – citizens, groups, or organizations that are neither political parties nor candidates for office. I want to return to this topic, focusing now on its theoretical, rather than its practical, implications.

The current schemes for the regulation of electoral campaigns in Canada are premised on the idea that one must, generally, spend in order to speak – or at least, in order to make one’s speech heard by any significant number of people. So long as this premise holds, a limit on electoral spending is a limit on electoral speech. And, subject to a few exceptions (such as letters to the editor or op-eds published at the newspapers’ expense), which were also exempt from electoral regulation, that premise did in fact hold true until the advent of social media.

It no longer does. A tweet might be read by thousands, even hundreds of thousands of people. A YouTube video can be seen by millions. And their authors will not have to pay a dime for the dissemination of their messages. Spending and speech have come apart – and a key assumption underlying the regulation of elections in Canada no longer holds true. So what becomes of our current regulatory schemes? Should we discard them as obsolete? And if so, what should we replace them with?

The answer to these questions depends on the purpose for which we regulate electoral campaigns. The trouble is that our current regulations have not one, but two purposes. On the one hand, as I noted in an op-ed Cyberpresse published in April, our electoral regulations aim to suppress the influence of money on the electoral process, which they assume to be unfair and/or pernicious. On the other, they aim, as I suggested in a recent post, to put political parties at the centre of the electoral process, by consigning “third parties” to the margins. These two purposes worked together so long as the spend-to-speak model of electoral communications held, because limiting electoral expenses by third parties served both. But now it no longer does. It still works to reduce the influence of money, but limiting or prohibiting electoral expenditures by third parties no longer prevents them from speaking, loudly and to very large audiences, through social media. That is a central point of Mr. Marissal’s column – political parties can no longer be sure of controlling the electoral debate, and outsiders can easily play an important role in it.

So if our main concern is with the role of money, we can keep our electoral regulations as they are. Indeed, they are arguably less troubling now than they once were, since they do not actually prevent people from speaking out on political issues. In effect, they only direct that third parties must, during election campaigns, speak through social media. I wonder, though, whether such a rule has any point. It is not money, after all, that our current regulations try to subdue, but the people who have a lot of it, individually or collectively. And if these people are able to speak anyway, through social media, why should we care to prevent them from spending their money on something they can now get for free? If, however, our concern is to maintain the party- and candidate-centred model of elections, the current regulations are obsolete and utterly inadequate to the task. New rules are required – as well as the will and the means to police their application to the internet’s wilderness. I doubt that our governments have either.

A 1.9 Campaign

Vincent Marissal published an interesting column in La Presse this morning on “the first true 2.0 campaign,” which Québec will experience once the election is called – probably in the coming months. Unlike in the United States, where the internet and, above all, social media transformed electoral campaigns as early as 2004, and certainly by 2008, the change has been slow to make itself felt in Québec. Mr. Marissal notes another difference: whereas in the United States it was the candidates (notably Barack Obama) who gave the new media a central role in electoral campaigns, “the 2.0 revolution in Québec will probably come from the voters more than from the political parties.” Like any revolution worthy of the name, this one will collide with established habits and norms, not only politically – which is not my department here – but also legally. In this post, I focus on the practical aspects of the changes it brings, saving theoretical reflection for another post, coming soon.

As Mr. Marissal points out, Québec’s Election Act (Loi électorale) tries to confine interventions in an election campaign to political parties. The expenses of “third parties” – that is, of everyone except registered political parties and candidates – are very severely limited. Yet, he writes,

Twitter, Facebook and above all YouTube allow what the Québec election law prohibits: interventions by third parties, not officially associated with a political party, most often anonymous, and whose interventions are not counted among electoral expenses. … [S]everal groups, particularly among artists, are strongly mobilized against the Charest government and … they will not hesitate to intervene on social media during the next election campaign. In fact, it has already begun. … There again, however, the 2.0 universe belongs to everyone, and nothing prevents groups favourable to the Liberals (or opposed to the PQ, the CAQ or Québec solidaire) from playing that card as well [as some are already doing].

Things are not so simple, however. The Election Act applies, in principle, to interventions on social media. In this respect, as in others, it is more restrictive than the Canada Elections Act, as well as the equivalent legislation of some other provinces. Section 319 of the federal statute, for example, excludes from its definition of the “election advertising” that it regulates and limits “the transmission by an individual, on a non-commercial basis on what is commonly known as the Internet, of his or her personal political views.” The Québec statute contains no equivalent of this exemption (which is itself rather narrow, since it does not apply, notably, to pre-electoral expression by groups).

On the other hand, the Act controls only “election expenses,” that is, “the cost of any goods or services used during the election period” to help or harm a candidate or a party (s. 404). Assuming that this means the cost to the person who communicates a message, communicating an electoral message on social media is not covered by this definition, since it is free. However, whatever the chosen means of communication, the production of an electoral message will be covered by the Act’s definition if it entails expenses.

So if you type up an anti-PLQ missive at home and post it on Facebook, you do not contravene the law, since you spend nothing but your time. But if you shoot a video disparaging that same PLQ, whose production and editing cost a few hundred dollars, and you post it on that same Facebook or on YouTube, you have incurred an election expense – something the law prohibits you from doing.

In short, Mr. Marissal is right to say that social media change, or at least make it possible to circumvent, the rules of the game established before their appearance. But they do not allow us to escape those rules entirely. As after most revolutions, the old law is tenacious. We may not quite get a fully 2.0 campaign – but at least a 1.9 one.

In with the New?

Last week, I suggested that “[n]ew technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.” But there is no doubt that our legal rules, unlike perhaps moral ones, need updating when new technology comes along. How this updating is to happen is a difficult question. Lon Fuller, in his great article on “The Forms and Limits of Adjudication,” distinguished “three ways of reaching decisions, of settling disputes, of defining men’s relations to one another,” which he also called “forms of social ordering”: elections (and, one has to assume, resulting legislation), contract, and adjudication. All three can be and are used in developing rules surrounding new technologies, and the distinctions between them are not as sharp as Fuller suggested, because they are very much intertwined. Some recent stories are illustrative.

One is a report in the New York Times about a settlement between an unspecified group of plaintiffs and Facebook regarding Facebook’s approach to what it calls “sponsored stories,” which tell us that such-and-such friends “like” a certain company’s page. Pursuant to the settlement, Facebook “will amend its terms of use to explain that users give the company permission to use their name, profile picture and content [and] offer settings that let users control which of their actions — which individual like, listen, or read — will appear in Sponsored Stories.” More than the (substantial) costs to Facebook, what interests me here is the way in which this settlement establishes or changes a rule – not a legal rule in a positivist sense, but a social rule – regulating the use of individuals’ names and images in advertising, by introducing a requirement of consent and an opt-out opportunity.

What form of social ordering is at work here? Contract, in an immediate sense, since a settlement is a contract. But adjudication too, in important ways. For one thing, the settlement had to be approved by a court. For another, and more importantly, it seems more than likely that the negotiation would not have happened outside the context of the lawsuit it was meant to settle. Starting, or at least credibly threatening, litigation is probably the only way for a group of activists and/or lawyers to get a giant such as Facebook to negotiate with them – in preference to any number of other similar groups – and thus to gain a disproportionate influence on the framing of the rules the group is interested in. Is this influence legitimate? Even apart from legitimacy, is it a good thing from a policy standpoint? For example, how do “we” – or does anyone – know that this particular group is motivated by the public interest and, assuming that it is, capable of evaluating it correctly and of being an effective negotiator? These are troubling questions, I think, but there is also no obvious way to prevent social ordering through adjudication/negotiation even if we do conclude that it is problematic.

That is because alternative modes of social ordering are themselves flawed. Legislation is slow and thus a problematic response to new and fast-developing technologies. And adjudication (whether in a “pure” form – just letting courts develop rules in the process of deciding cases – or in the shape of more active judicial supervision of negotiated settlements) comes with problems of its own.

One is the subject of a post for Forbes by Timothy B. Lee, who describes how judges’ distance from the communities that are subject to, and have to live with, the rules they develop leads them to produce rules that do not correspond to those communities’ needs. One example he gives is that “many computer programmers think they’d be better off without software patents,” yet one of the leading judges who decides cases on whether there should be such patents “doesn’t have a very deep understanding of the concerns of many in the software industry. And, more to the point, he clearly wasn’t very interested in understanding those concerns better or addressing them.” Mr. Lee believes that this would be different if the judges in question happened to have friends or family members among the ranks of software developers. Perhaps – but, as he acknowledges, it is not possible for judges to have personal connections in every walk of life. Even trying to diversify the courts will only do so much. Furthermore, the individual experiences on which Mr. Lee thinks judges should rely might be atypical, and thus tend to produce worse, rather than better, rules. Here too, questions about just how much judging ought to be informed by personal experience – as a matter both of policy and of legitimacy – are pressing.

Another set of questions about the courts’ handling of new technologies is the subject of a great paper by Kyle Graham, a professor at Santa Clara University and the author of the entertaining Non Curat Lex blog. Focusing on the development of liability rules surrounding new technologies, and using the examples of some once-new gadgets, mostly cars and planes, prof. Graham points out that

[t]he liability rules that come to surround an innovation do not spring immediately into existence, final and fully formed. Instead, sometimes there are false starts and lengthy delays in the development of these principles. These detours and stalls result from five recurring features of the interplay between tort law and new technologies … First, the initial batch of cases presented to courts may be atypical of later lawsuits that implicate the innovation, yet relate rules with surprising persistence. Second, these cases may be resolved by reference to analogies that rely on similarities in form, and which do not wear well over time. Third, it may be difficult to isolate the unreasonable risks generated by an innovation from the benefits it is perceived to offer. Fourth, claims by early adopters of the technology may be more difficult to recover upon than those that arise later, once the technology develops a mainstream audience. Fifth, and finally, with regard to any particular innovation, it may be impossible to predict whether, and for how long, the recurring themes within tort law and its application that tend to yield a “grace” period for an invention will prevail over those tendencies with the opposite effect. (102)

I conclude, with my customary optimism, that there seem to be no good ways of developing rules surrounding new technologies, though there is a great variety of bad ones. But some rules there must be, so we need to learn to live with rotten ones.

Google, Speaker and Censor

Some recent stories highlight Google’s ambiguous role as provider and manager of content, which, from a free-speech perspective, puts it at once in the shoes of both a speaker potentially subject to censorship and an agent of the censors.

The first of these is an interesting exchange between Eugene Volokh, of UCLA and the Volokh Conspiracy, and Tim Wu, of Columbia. Back in April, prof. Volokh and a California lawyer, Donald Falk, published a “White Paper” commissioned by Google, arguing that search results produced by Google and its competitors are covered by the First Amendment to the U.S. Constitution, which protects freedom of speech. The crux of their argument is that “search engines select and sort the results in a way that is aimed at giving users what the search engine companies see as the most helpful and useful information” (3). This is an “editorial judgment,” similar to other editorial judgments – that of a newspaper publisher selecting and arranging news stories, letters from readers, and editorials, or that of a guidebook editor choosing which restaurants or landmarks to include and review and which to omit. The fact that the actual selecting and sorting of internet search results is done by computer algorithms rather than by human beings is of no import. Automation “is necessary given the sheer volume of information that search engines must process, and given the variety of queries that users can input,” but the technology does not matter: the essence of the decision is the same whether it is made by men or by machines (which, in any event, are designed and programmed by human engineers with editorial objectives in mind).
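Since the crux of the white paper is that a ranking algorithm merely encodes its designers’ editorial choices, a deliberately toy sketch may make the point concrete. This is my own illustration, not anything drawn from the white paper or from any real search engine; every signal and weight in it is invented:

```python
# A toy illustration (mine, not the white paper's or any real engine's):
# a "search ranking" whose numeric weights are an editorial judgment
# frozen into code. All signals and weights here are invented.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    relevance: float   # how well the page matches the query (0 to 1)
    authority: float   # how trustworthy the designers deem the source (0 to 1)
    freshness: float   # how recent the content is (0 to 1)

def editorial_score(page: Page) -> float:
    # The weights ARE the editorial judgment: the designers have chosen
    # to value relevance over authority, and authority over freshness.
    # Change the weights and the "opinion" the results express changes.
    return 0.6 * page.relevance + 0.3 * page.authority + 0.1 * page.freshness

def rank(pages: list[Page]) -> list[Page]:
    # Selecting and sorting -- the acts the white paper compares to a
    # newspaper editor's choices -- happen in this one line.
    return sorted(pages, key=editorial_score, reverse=True)

if __name__ == "__main__":
    for page in rank([
        Page("example.com/a", relevance=0.9, authority=0.2, freshness=0.5),
        Page("example.com/b", relevance=0.6, authority=0.9, freshness=0.1),
    ]):
        print(page.url, round(editorial_score(page), 2))
```

However crude, the sketch shows why the argument treats the algorithm as a vehicle for its programmers’ judgments rather than an independent decision-maker.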

In a recent op-ed in the New York Times, prof. Wu challenges the latter claim. For him, it matters a lot whether we are speaking of choices made by human beings or by computers. Free speech protections are granted to people, sentient beings capable of thought and opinion. Extending them to corporations is disturbing, and doing so to machines would be a mistake.

As a matter of legal logic, there is some similarity among Google, [a newspaper columnist], Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship.

And it does not matter, for prof. Wu, that computer algorithms are designed by humans: a machine can no more “inherit” the constitutional rights of its creator than Dr. Frankenstein’s monster could.

Prof. Volokh responds to these arguments in a blog post. He thinks it is a mistake to treat the intervention of the algorithm as an entirely new event that breaks the constitutional protection to which editorial decisions of human beings are entitled. The algorithms are only tools; their decisions are not autonomous, but reflect the choices of their designers. To the extent that similar choices by human beings are prohibited or regulated, they remain so if made by computers; but to the extent that they are constitutionally protected – and that extent is a large one – the interposition of an algorithm should not matter at all.

This is only a bare-bones summary of the arguments; they are worth a careful reading. Another caveat is that the constitutional analysis might be somewhat different in Canada, since our law is somewhat less protective of free speech than its American counterpart. But I do not think that these differences, significant though they are in some cases, would or should matter here.

The argument prof. Volokh articulates on Google’s behalf reflects its concern about having its own speech regulated. That is a concern it shares with the traditional media to which prof. Volokh repeatedly compares it. But Google is also different from traditional media, in that it serves as a host or conduit for all manner of content which it neither created nor even vetted. It is different, too, in being (almost) omnipresent, and thus subject to the regulation and pressure of governments the world over. For this reason, it is often asked to act as an agent of the regulators or censors of the speech of others to which it links or which its platforms host – and, as much as it presents itself as a speaker worried about censorship of its own speech, it often enough accepts. It provides some of the details – numbers mostly, and a selection of examples – in its “Transparency Report.” To be sure, much of the content that Google agrees to remove is, in one way or another, illegal – for example defamatory, or contrary to hate speech legislation. And as a private company, Google does not necessarily owe it to anyone to link to or host his or her content. Still, when its decisions not to do so are motivated not by commercial considerations, but by requests of government agencies – and not necessarily courts, but police and other executive agencies too – its position becomes more ambiguous. One has to wonder, for example, whether there is a risk of a conflict of interest between its roles as speaker and as censors’ agent – whether it will not be tempted to trade greater compliance with the regulators’ demands when it comes to others’ content for greater leeway when it comes to its own.

No New Thing in the Cloud

The Stanford Encyclopedia of Philosophy has a new entry on “Information Technology and Moral Values,” by John Sullins, a professor of philosophy at Sonoma State University. It is a useful summary of (many of) the moral issues that information technology raises, and a reminder that issues that we are used to considering from a policy standpoint also have moral dimensions. At the same time, it is a reminder that there is no new thing under the sun – itself an old observation.

Generally speaking, the moral issues which prof. Sullins thinks information technology raises are pretty much the same moral issues that you would expect a left-leaning intellectual to worry about in just about any context – income inequality, gender inequality, “justice”. (I might be wrong to attribute these leanings to prof. Sullins, of course; I have no ground for this attribution other than the article. And yet it feels like ground enough.) A libertarian or a conservative would probably have written a substantially different-sounding piece on the same topic; different-sounding, but equally predictable. New technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.

A couple of specific examples also seem to confirm the timeless cynicism (or is it wisdom?) of Ecclesiastes. One is given by prof. Sullins himself:

The move from one set of dominant information technologies to another is always morally contentious. Socrates lived during the long transition from a largely oral tradition to a newer information technology consisting of writing down words and information and collecting those writings into scrolls and books. Famously Socrates was somewhat antagonistic to writing and he never wrote anything down himself. Ironically, we only know about Socrates’ argument against writing because his student Plato ignored his teacher and wrote it down.

Socrates worried that writing would cause people to stop learning stuff – why bother, when you can look it up in a book? Just imagine what the grumpy old man would have said about Google and Wikipedia.

The second example came to mind when reading prof. Sullins’ discussion of the concerns he groups under the rubric of “Moral Values in Communicating and Accessing Information.” Among the concerns he explores there are the question of “[w]ho has the final say whether or not some information … is communicated or not,” and that of the accuracy of the information communicated about someone or something (and the problem of who bears the burden of ensuring accuracy, or perhaps of dealing with the consequences of inaccurate information being communicated). This reminded me of the passage in The Master and Margarita where Yeshua Ha-Notsri – Jesus – tells Pilate that he “is starting to worry that this whole confusion” about what he told the people “will go on for a very long time. And it’s all because he is writing down my words incorrectly.” “He” is Levi Matvei – Matthew. As Yeshua goes on to explain, Matvei follows him “with a goat-skin and writes all the time. But I once looked at this goat-skin, and was horrified. I never said anything, anything at all of what’s written there. I begged him: for God’s sake, burn your goat-skin! But he tore it from my hands and ran away.” He might as well have been trying to get Facebook to delete some information about him, right? As the ensuing confusion shows, there are indeed dangers in recording information about someone without his consent, and then communicating it to all sorts of not always well-intentioned people.

So there is nothing new in the cloud, where this text will be stored, any more than under the sun, on goat-skins, or anywhere else, is there? Yet it is just possible that there is nothing new only because we do not see it. Perhaps new technologies really do create new problems – but we are so busy trying to deal with old ones that we do not notice.

Rants and Freedoms

Some university students think the lecturer whose class they are taking is doing a lousy job. Someone creates a hyperbolically-named Facebook group to rant; others join; a few post derogatory messages on the group’s wall. So far, so normal. But after the semester ends and the lecturer, for reasons unknown, is no longer employed by the university, she somehow learns of the Facebook group and complains to the university’s authorities. A kangaroo court is held, and finds the members of the group ― including those who posted no messages at all, and those whose messages were quite innocuous ― guilty of “non-academic misconduct.” Some of the students are required to write a letter of apology to the former lecturer and are put on probation. An appeal to a higher university instance is fruitless, and the university’s Board of Governors refuses to hear a further appeal. Judicial review and an appeal ensue.

That’s the scary story of Keith and Steven Pridgen, (former) students at the University of Calgary, whose right to rant the Alberta Court of Appeal vindicated in a recent decision. One has to hope that it will serve as a lesson for professors and university administrators (as well as teachers and school principals) in the future. Students, in case such people forget, have always ranted about their professors, and always will. It’s not always nice, and it’s not always fair; get over it. (This is, as much as anything else, a note to self as an aspiring academic.) The fact that rants now leave a digital record does not change anything, it seems to me: rants were no less pervasive or durable when they circulated (as of course they still do) only by word of mouth. Stories about professors are handed down from one cohort of students to the next; they are an ineradicable part of the university environment.

Legally, the Alberta Court of Appeal’s decision is interesting in a number of ways. Each of the three judges wrote a separate opinion. They all agree in finding the university’s decision unreasonable, and hence invalid on administrative law grounds, because it bore little, if any, relationship to the evidence on which it ought to have been based ― evidence of harm to the lecturer, or of the specific actions of each accused student. Justice O’Ferrall also finds that the utter failure to consider the students’ free speech rights contributes to making the decision unreasonable. The judges disagree, however, on whether to address the other issue debated by the parties (and several interveners) – the applicability of the Charter, and of its guarantee of freedom of expression.

Justice Paperny thinks the question deserves to be addressed, since it was debated at length by the parties and is important; her colleagues disagree, because it is not necessary to the resolution of the case (since it can be resolved on administrative law grounds) and important constitutional questions should not be addressed unless it is necessary to do so. Both arguments have merit; I’m not sure on whose side I would have come out if I had to vote. Justice Paperny devotes much of her opinion to arguing that the Charter does indeed apply to universities, at least in their disciplinary dealings with their students. Her review of the case law is comprehensive, her argument about the universities’ and the government’s roles in contemporary society sometimes sweeping. And it is persuasive (and Justice Paperny’s colleagues, one senses, do not actually disagree with its substance).

One final thought. The court did not pause to consider whether the university even had the power to punish students for something they wrote on Facebook. Yet it seems to me that it’s a crucial jurisdictional question. (Needless to say, the university did not consider it either.) I can see why a university might be interested in what is being said in its lecture halls, or online on forums it maintains (in connection with courses for example). It does have an interest in maintaining a welcoming, respectful learning environment, although arguably this interest does not play out in the same way as a school’s, since everyone at a university is an adult and is there by choice. But does this interest give a university the right to police the conduct of its students off-campus or online? I think not; but in any case, it’s too bad the court did not ask itself the question.

Privacy in the Past, Present, and Future

Our own actions – individual and collective – set the upper limit of our privacy rights. We will never have more privacy rights than we care to have, although we often have fewer. One stark illustration of this idea comes in Isaac Asimov’s short story “The Dead Past,” in which a group of scientists build and, despite the government’s best efforts, thoughtlessly disseminate the instructions for building a “chronoscope” – a machine for viewing any events in the (recent) past. Their original purpose was historical research, but the chronoscope is not very useful for that; what it is very good for is snooping and voyeurism. The story ends with the government official who tried and failed to stop the protagonists wishing “[h]appy goldfish bowl to you, to me, to everyone.”

The internet, especially Web 2.0, is (almost) as good as the chronoscope, argues Alex Kozinski, Chief Judge of the U.S. Court of Appeals for the 9th Circuit, in a short essay published in the Stanford Law Review Online. It also allows everyone to learn all about anyone, provided that the person – or indeed someone else – posted the information on the internet at some point. And the fact that people share their every thought and deed online shapes society’s expectations of privacy, which are the key to what constitutional protections we have in this area. Those parts of our lives which we do not expect to be private are not protected from observation at will by the government. And if we do not expect anything to be private, then nothing will be.

“Reasonable expectations of privacy” are also key to defining privacy rights under the Canadian Charter of Rights and Freedoms. The Supreme Court’s latest engagement with the question of just what expectations of privacy are reasonable, in R. v. Gomboc, 2010 SCC 55, [2010] 3 S.C.R. 211, produced something of a mess. The issue was whether the installation, without a warrant, of a device that measures the electricity consumption of a house breached the owner’s reasonable expectation of privacy. Four judges said no, because general information about electricity consumption does not reveal enough to make it private. Three said no because the law entitled the owner to ask the utility not to hand over such information to the police, and he had not exercised this right. Two said that the information was private. But what seems clear is that for Canadian law too, what we think about our privacy, and what we do about it, individually and collectively, matters.

Are we then doomed, as Judge Kozinski suggests we might be? Perhaps not. With respect, his claims are a little too pessimistic. Judge Kozinski collects a great many frightening anecdotes about people’s willingness to wash their – and others’ – dirty laundry in public. But anecdotes seldom justify sweeping conclusions. And some studies, at least, seem to show that people do care about their privacy more than the pessimists assume, if not always in ways or to an extent that would satisfy the pessimists. Old expectations of privacy might be fading, but new ones could emerge, along different lines. Judge Kozinski is right that the law cannot do much to protect people who do not care. But we must hope that he and his colleagues, as well as legislators on both sides of the 49th parallel, will be mindful of the possibility that changes in privacy expectations can go in both directions.