Maneant Scripta

The Supreme Court protects its sources from “link rot”

This will be an unusual post. First, it will be short. Second, it will praise the Supreme Court of Canada, for a change. Some years ago, I wrote here about the problem of “link rot” as it affects judicial decisions. Courts refer to online materials ― sometimes even blog posts, though I don’t think the Supreme Court of Canada has done that yet ― and provide references to these sources in their reasons. Unfortunately, the online addresses of these sources ― the URLs that enable readers to find them ― can change. Indeed, the materials can simply be taken down. Finding the sources on which judges rely becomes difficult in the former case, and impossible in the latter. Unless, that is, the courts actually do something about it. And now the Supreme Court has.

Here is the Court’s announcement:

Recognizing that web pages or websites that the Court cites in its judgments may subsequently vary in content or be discontinued, the Office of the Registrar of the SCC has located and archived the content of most online sources that had been cited by the Court between 1998 and 2016. These sources were captured with a content as close as possible to the original content cited. Links to the archived content can be found here: Internet Sources Cited in SCC Judgments (1998 – 2016).

From 2017 onward, online internet sources cited in the “Authors Cited” section in SCC judgments will be captured and archived.  When a judgment cites such a source, an “archived version” link will be provided to facilitate future research.

The Supreme Court of the United States has for some time maintained an archive of “Internet sources cited in opinions”, albeit one going back only to 2005. Having taken a quick look at the websites of the UK and New Zealand Supreme Courts, I cannot find any equivalent archive, though perhaps I haven’t searched carefully enough.

It is great that the Supreme Court of Canada follows, and indeed improves on, the initiative of its American counterpart, and rescues its sources from oblivion. This is going to be very helpful to anyone ― a journalist, a researcher, or just a citizen ― who is interested in understanding what information the court relied on in making its decisions. As I wrote in my original post on this issue, the problem of “link rot” in the Supreme Court’s decisions was quite serious:

Of the links in the five oldest cases to cite any, not a single one still works, though one … leads to an automatic re-direct, and so is still useful. The rest lead either to error messages or even to an offer to buy the domain on which the page linked to had once been posted (a page belonging to the BC Human Rights Commission ― which has since been abolished).
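The audit quoted above boils down to classifying each cited URL as live, redirected, or dead. Here is a minimal sketch of that classification in Python ― a hypothetical helper of my own devising, not a tool used by any court; it assumes the caller has already fetched the URL and knows the final status code and address:

```python
def classify_link(status: int, cited_url: str, final_url: str) -> str:
    """Classify a cited URL after it has been fetched.

    status:    the HTTP status code the server returned
    cited_url: the address as it appears in the judgment
    final_url: the address actually served, after any redirects
    """
    if status >= 400:
        # Error page, or an offer to buy the lapsed domain: the source is gone.
        return "dead"
    if final_url != cited_url:
        # An automatic re-direct: the citation still leads somewhere useful.
        return "redirected"
    return "live"
```

An archive of the kind the Court has now created makes the first branch moot: the “archived version” link keeps working even after the original address starts returning errors.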

The Court’s effort to remedy this problem is to be applauded.

Selfie Slow-Down

I have already blogged about one American judicial decision on the constitutionality of a “ballot selfie” ban, which has since been upheld on appeal by the Court of Appeals for the 1st Circuit. And I have also written about the history of the secret ballot, which in my view explains why measures to protect ballot secrecy ― including bans on something that might at first glance appear quite innocuous, like a selfie showing for whom a person has voted ― are actually more important than they seem. Another American decision issued last week, this one by the Court of Appeals for the 6th Circuit, provides some additional food for thought on this issue.

Much of the discussion in Judge Sutton’s majority opinion in Crookston v Johnson is procedural. The case came up as an application for a preliminary injunction preventing the enforcement of Michigan’s prohibition on “exposing marked ballots to others”, (1) and Judge Sutton concludes that it is simply too late to grant one now in anticipation of the elections to be held on November 8. The people who will be running the election have already been trained and have received specific guidance on photography at the polling stations. Changing the rules at this point would create unnecessary confusion. So Judge Sutton does not rule on the merits of the case, which will be assessed later, assuming the applicant still cares. (This situation is reminiscent of the Canadian cases about election debates, which are invariably brought on an emergency basis when the debates are set up, and invariably abandoned before a full merits hearing once the election has taken place.)

But Judge Sutton does make some comments that bear on the merits of the dispute, and, although preliminary, these comments strike me as quite sensible and interesting. One observation is that

many Michigan voting stalls … are simply tall desks, placed next to each other, with three short dividers shielding the writing surface from view. In this setting, posing for a ballot selfie could compromise the secrecy of another’s ballot, distract other voters, and force a poll worker to intervene. (4)

My memory of Canadian voting stalls is a bit hazy ― I skipped the last election because I couldn’t tell which of the parties was worst ― but something like that might be true of them too. And indeed, even if it is not in any given case, it is worth thinking about whether our voting arrangements must actually be planned so as to cater to the “needs” of people wishing to snap a selfie.

Another practical point is that allowing ballot selfies could create a “risk of delay” at the polling stations, “as ballot-selfie takers try to capture the marked ballot and face in one frame—all while trying to catch the perfect smile”. (5) In a brief concurrence focusing entirely on the issue of delay, Judge Guy makes the additional point that “with digital photography, if you don’t like the way you look in the first one, you take another and so on ad infinitum.” (7) He wonders, too, whether “the allowance of taking a selfie also include use of the ubiquitous selfie stick”. (7)

And then, there are the issues that I have already discussed here ― whether the absence of evidence of ballot selfies’ harm shows that there is no reason for banning them or, on the contrary, demonstrates the effectiveness of the bans as a prophylactic measure. Judge Sutton clearly thinks that the latter is the case. Moreover, “[t]he links between [voter corruption and intimidation] and the prohibition on ballot exposure are not some historical accident; they are ‘common sense'”. (5, quoting US Supreme Court precedent.) Chief Judge Cole, dissenting, takes the contrary view, as have other American courts that have addressed selfie bans.

For my own part, without expressing an opinion as to which of these views is correct as a matter of U.S. law, I have more sympathy for Judge Sutton’s. While I have been dwelling on the importance of evidence in constitutional adjudication for some time now, and was critical of restricting rights on the basis of assumptions no later than yesterday, here the evidence is actually there, albeit mostly historical. Moreover, a court should be able to pronounce on the issue of delay without waiting for an “experiment” to take place. Common sense can be an unreliable guide to adjudication, but ― absent evidence to the contrary ― courts should be able to rely on it sometimes.

Prohibitions of ballot selfies might seem counter-intuitive or even quaint. In the United States, they run counter to the very strong tradition of virtually untrammelled freedom of expression. While I sometimes wish that Canadians took more inspiration from that tradition than they do (for example when it comes to the criminalization of “hate speech”), this is one instance where a more even-handed weighing of competing interests might be in order. Judges Sutton and Guy provide a useful reminder of what some of these interests are.

L’Uber et l’argent d’Uber

A lawsuit against Uber runs on economic ignorance

Some of the people who used Uber’s services on New Year’s Eve paid dearly for them. Very dearly, in some cases. For, unlike traditional taxis, whose fares are always the same, Uber practises what it calls “dynamic pricing” ― a price that fluctuates, sometimes very quickly, with the demand for its cars at a given place and time. Since demand was very high and very concentrated at the end of the New Year’s Eve festivities, ordinary prices were multiplied by a factor that was sometimes very high ― a factor of which a customer ordering a ride was advised, and which the customer even had to enter, manually, into the app in order to place the order.

Yet we learned on Friday that one of Uber’s customers, Catherine Papillon, who paid 8.9 times the ordinary price for her ride, wants to bring a class action against the company, unless it refunds her. Represented by Juripop, she claims to have been the victim of lesion because of the “abnormally high” price she paid. As for the consent she gave when ordering her ride, she argues that it cannot be set up against her, given the lesion she suffered. She says, moreover, that she did not understand what the “8.9” she entered when placing her order meant.

Patrick Lagacé explains well, in a column published in La Presse, why people in Ms. Papillon’s situation do not deserve our sympathy:

New Year’s Eve, as you probably know, is the worst night on which to try to find a taxi. I personally came close to having my right big toe amputated, one January 1st in the late 20th century, while trying to find a taxi in downtown Montreal to take me home in the early morning. Hundreds, maybe thousands of other blockheads in the same situation as me were also looking for taxis, which were nowhere to be found…

A telling detail in Ms. Papillon’s account suggests that she too found herself in a similar situation: she “explained that she signed up for Uber ‘in five minutes’, shortly before calling on the company on the night of December 31 to January 1.” We do not yet know why she did so, but we can guess, can’t we? (And Uber’s lawyers, I am sure, will not fail to ask her the question, to confirm the answer we all suspect.) So, writes Mr. Lagacé, when people agree to pay a price, however exorbitant, that is clearly announced to them, in order to spare themselves a futile search for a taxi that never comes, well, that is a choice they make, and one for which they should take responsibility.

Does the law see things differently? I am not a civil law scholar, still less a specialist in consumer law. I will therefore not venture a prognosis on the outcome of Ms. Papillon’s case. I believe, however, that I can offer an opinion on what the result of this lawsuit should be, if the judges who dispose of it keep to the elementary principles it puts in issue.

These principles are not only, and perhaps not even so much, moral as economic. Goods and services have no intrinsic value that could serve to determine their “just” price. Their price on a free market depends on supply and demand. It is, in fact, a signal. If a service ― say, a taxi ride ― commands a high price, sellers ― say, drivers ― know that they will make a lot of money by offering that service. More sellers therefore come onto the market ― say, into the streets of Old Montreal ― to offer their services to buyers. At the same time, the high price signals to buyers that, if they want to acquire the service, it will cost them dearly. Those who are keen to obtain it will do so, while others will find alternatives or wait. This is how the numbers of sellers and buyers balance out, and how those who are willing to pay are served quickly. Uber claims that those who wanted to use its service on New Year’s morning waited only a little more than four minutes, on average, thanks to the record number of drivers who were on the road. None of them, we can bet, “came close to having the right big toe amputated”, as Mr. Lagacé once did.

His story is, moreover, a good reminder of what happens when prices cannot rise in response to strong demand ― for instance because they are fixed by government decree, as traditional taxi fares are. Since prices do not rise, sellers have no additional reason to enter the market, and there are no more of them than usual. If, in addition, the number of sellers is capped ― for instance, because the government sets a maximum number of taxi licences ― it cannot increase to meet a demand that is, by definition, exceptional. Waiting and luck, rather than willingness to pay, then determine who will and who will not receive the service ― and frostbite ensues.
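The signalling mechanism described in the last two paragraphs can be reduced to a toy formula: when would-be riders outnumber available drivers, the multiplier rises above 1, rationing demand and rewarding drivers for getting on the road. The sketch below is purely illustrative ― it is not Uber’s actual algorithm, and the function name and inputs are my own assumptions:

```python
def surge_fare(base_fare: float, riders: int, drivers: int) -> tuple[float, float]:
    """Return (multiplier, fare) under a toy dynamic-pricing rule.

    Illustrative only: the multiplier is simply the ratio of would-be
    riders to available drivers, floored at 1 so that the price never
    drops below the base fare.
    """
    multiplier = max(1.0, riders / drivers)
    return round(multiplier, 1), round(base_fare * multiplier, 2)

# On an ordinary night, supply matches demand and the base fare applies;
# on New Year's Eve, excess demand pushes the price up until enough
# riders drop out and enough extra drivers log on.
```

On this toy rule, a $10 base fare with 89 riders chasing 10 cars yields a multiplier of 8.9 and an $89 ride; a fixed taxi fare, by contrast, leaves the same imbalance to be resolved by queuing and luck.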

This brings me to the legal questions that will arise in the lawsuit against Uber. Ms. Papillon and her lawyers will no doubt argue that a contract under which a taxi ride costs $80 instead of ten-odd dollars “disadvantages the consumer […] in an excessive and unreasonable way”, which, under article 1437 of the Civil Code of Québec, opens the door to a reduction of the obligation arising from the contract ― in this case, of the price paid by Ms. Papillon. They will also argue that this is a case where “the disproportion between the respective obligations of the parties is so great as to amount to exploitation of the consumer, or the obligation of the consumer is excessive, harsh or unconscionable”, which likewise allows the consumer to demand a reduction of his or her obligations, this time under section 8 of the Consumer Protection Act. They will be wrong.

For to think that Uber’s performance is limited to transporting its customer is to ignore the fundamental economic principles I have just set out. Uber does not merely move its passenger from a point of departure to a point of arrival. Before it can even do that, Uber first makes sure that there will be a car to pick the customer up ― and that it will be there in good time or, at least, quickly enough that the customer’s extremities do not freeze. That, too, is Uber’s performance, and the reason people call on its services even when they are more expensive than a traditional taxi. And it is to make sure of delivering that performance that Uber must raise its prices when the demand for its services is especially strong. Ms. Papillon’s lawsuit, which ignores this reality, is accordingly founded on ignorance of basic economic rules, or on wilful blindness to them. If the judges who rule on this lawsuit understand those rules, they will dismiss it out of hand.

If they wish to reason a contrario, the judges can, moreover, ask themselves what remedy they would have to grant if they allowed Ms. Papillon’s claim. It would obviously be a reduction of the price paid ― but a reduction to what level? If a merchant succeeds in fleecing a consumer by making him pay an exorbitant price, while otherwise fulfilling his obligations under the contract, it seems fair to reduce the price to the one prevailing on the market. But is there such a price in the circumstances that concern us? The traditional taxi fare has nothing to do with a market price, not only because it is the product of government fiat, but also because it is obviously not an equilibrium price, that is, a price at which supply and demand meet. At the traditional taxi fare, demand far exceeds supply ― hence Mr. Lagacé’s frozen toe. How would a court go about determining the market price in the absence, precisely, of a market ― other than the one Uber has created? It could only do so in a perfectly arbitrary fashion, which would be contrary to our usual understanding of the role of the courts. (I had already raised a similar problem in discussing a lawsuit against the SAQ, also based on section 8 of the Consumer Protection Act.) How, indeed, would a court know that the price charged by Uber is not the market price? It is for Ms. Papillon, as plaintiff, to prove it, it seems to me. I do not see how she could.

A word, in conclusion, on Juripop’s position in this story. This organization is not an ordinary law firm defending its clients’ cause, even one driven by the purest greed. It is, supposedly, a “legal clinic”, a “social economy enterprise”, whose mission is to “support people’s access to justice”. Yet Juripop is not placing itself on the side of justice by backing Ms. Papillon. For justice does not consist in disclaiming responsibility for the announced consequences of one’s own actions, as she seeks to do. And justice cannot be achieved in ignorance of economic laws. As Friedrich Hayek very wisely said in The Road to Serfdom, “[i]t may sound noble to say, ‘damn economics, let us build up a decent world’ ― but it is, in fact, merely irresponsible”. Quebeckers know this, as it happens: wanting both the butter and the money for the butter is not a demand for justice.

Silencing the Bullies

In my last post, I wrote about the decision of the Supreme Court of Nova Scotia in Crouch v. Snell, 2015 NSSC 340, which struck down that province’s Cyber-Safety Act, a law intended “to provide safer communities by creating administrative and court processes that can be used to address and prevent cyberbullying.” Justice McDougall held that the statute both infringed the freedom of expression and could lead to deprivations of liberty not in accordance with principles of fundamental justice, contrary to sections 2(b) and 7 of the Canadian Charter of Rights and Freedoms, and was not justified under section 1 of the Charter. As I indicated in the conclusion of my last post, I believe that this was the right decision. Here are some thoughts about why that is so, and also about some deficiencies, or unanswered questions, in Justice McDougall’s reasons.

Perhaps the most interesting question Justice McDougall raises is whether the limits the Cyber-Safety Act imposed on the freedom of expression are “prescribed by law” within the meaning of section 1 of the Charter. Justice McDougall holds that they are not, because to issue a “protection order” meant to stop a person from engaging in cyberbullying a justice of the peace or a judge must not only find that that person engaged in cyberbullying in the past, but also that “there are reasonable grounds to believe that [that person] will engage in cyberbullying of the subject in the future.” (Subs. 8(b)) Justice McDougall is concerned that there is no indication in the statute as to what those reasonable grounds might be, and that the procedure, especially the ex-parte procedure before a justice of the peace, will not yield sufficient evidence on the basis of which to decide whether the “reasonable grounds to believe” requirement is met.

I find this reasoning intriguing and perplexing at the same time. It seems to me that Justice McDougall’s real concern is not with the vagueness of the statute’s words ― as is usually the case when courts ask whether a limitation of Charter rights is “prescribed by law” ― but with the procedure the statute creates. The concept of “reasonable grounds to believe” already exists in criminal law without attracting censure for vagueness and, as Justice McDougall himself observes, judges are sometimes asked to determine whether there is a risk that an offender will re-offend in the future. But such determinations are made on the basis of substantial evidence submitted by both parties to an adversarial process. Here, by contrast, the decision must be made on the basis of (potentially flimsy) evidence submitted by one party alone. I agree that this is disturbing, and ought to be regarded as constitutionally problematic, but I’m not sure that “vagueness” is the appropriate name for this problem. Nor is it obvious that any other part of the Oakes test would be a better place to address the issue that Justice McDougall raises. Perhaps we need to recognize a procedural element to the “prescribed by law” prong of section 1, in keeping with Jeremy Waldron’s insight that the Rule of Law, and arguably the very concept of law, are crucially dependent on the existence of certain procedures through which the application of legal norms can be channelled and contested, as well as on formal requirements, such as publicity and intelligibility, that are better captured by the notion of vagueness.

Another question worth asking about Justice McDougall’s reasons is whether he is correct to find that the ex-parte process created by the Cyber-Safety Act is not rationally connected to the Act’s objectives, except in emergencies or in cases where it is impossible for a victim of cyber-bullying to identify the perpetrator. Courts have seldom found that a law was not rationally connected to its purposes ― it is usually a low bar. Again, I am sympathetic to Justice McDougall’s view that a procedure which gives no notice to a person whose writings ― no matter how troublesome ― are about to be censored is a serious problem. Still, I’m not sure that, problematic though it is, an ex parte procedure is an irrational response to the legislative concerns with timeliness and accessibility of remedies against cyberbullying which Justice McDougall acknowledged in his decision. It will be interesting to see whether appellate courts approach this issue in the same way as Justice McDougall did.

So much for the procedure created by the Cyber-Safety Act. Disturbing as that procedure is, the Act’s substance is, if anything, even more troubling from a constitutional standpoint. Somewhat curiously, Justice McDougall does not have all that much to say about the scope and effect of the Cyber-Safety Act, which he addresses under the headings of minimal impairment and balancing between the Act’s positive and negative effects. What he does say, however, is damning indeed: the definition of cyberbullying, in particular, he finds to be “a colossal failure,” [165] catching “many types of expression that go to the core of freedom of expression values.” [175] That is true, but the point might bear some elaboration.

Take another look at the statutory definition of cyberbullying. It

means any electronic communication through the use of technology including, without limiting the generality of the foregoing, computers, other electronic devices, social networks, text messaging, instant messaging, websites and electronic mail, typically repeated or with continuing effect, that is intended or ought reasonably [to] be expected to cause fear, intimidation, humiliation, distress or other damage or harm to another person’s health, emotional well-being, self-esteem or reputation, and includes assisting or encouraging such communication in any way. (Par. 3(1)(b); brackets apparently in the original.)

Think about it. Any communication using computers or cell phones that “ought reasonably to be expected to cause … damage to another person’s … emotional well-being” ― anything that a reasonable person ought to know will make anyone else, anyone at all (since the statute does not in any way restrict who the “other person” whose well-being mustn’t be harmed can be), upset or feel bad ― counts as cyberbullying and is liable to be censored. As Eugene Volokh points out in an important article (as well as a number of posts on the Volokh Conspiracy) that the defenders of anti-cyberbullying legislation would do well to read, sometimes telling people things that will have that effect on them is necessary to explain your own feelings or actions:

[i]f you want to explain to your friends why you’re depressed, or why you’ve broken up with someone, or why you’re moving out of town or taking another job, you might need to tell them about your husband’s cheating, your ex-boyfriend’s sexually transmitted disease, your ex-girlfriend’s impending bankruptcy, or even your mother’s dementia. (761-62)

Sometimes, indeed, you even want to make people feel bad, and with good reason:

speech remains valuable to public debate even when the speaker is motivated by hostility. Often much of the most useful criticism of a person comes from people who have good reason to wish that person ill—if you are mistreated by a politician, religious leader, businessperson, or lawyer, you might acquire both useful information about the person’s faults and resentment towards that person. (774)

And of course, quite apart from any contribution to the public debate, being able to tell why you are aggrieved at someone is important to self-expression. It is often said that people should not have to suffer in silence. But under the Cyber-Safety Act, they are likely to have to do so, since it may well be impossible to explain their emotions in ways that will not hurt the feelings or injure the reputation of the person they blame ― correctly or otherwise ― for their suffering.

Justice McDougall hints at these issues when he points to the absence of defences, such as truth, in the Cyber-Safety Act, and notes that it applies to private and public communications alike. However, I think that it is important to explain in more detail, and with examples, why the extremely broad definition of cyberbullying in this legislation is so problematic. Moreover, not even adding defences of truth and absence of ill-will would be enough to remedy the problem. The former is inapplicable to statements of opinion. The latter is insufficient for the reasons explained by prof. Volokh.

Beyond its (very real) unfairness and procedural defects, the fundamental problem with the Cyber-Safety Act is that it seeks to censor communications which the law has never regarded ― and, indeed, still does not regard ― as wrongs, whether civil or criminal. A statement need not be defamatory or otherwise tortious, much less amount to hate speech or be otherwise criminal, to fall within the definition of cyberbullying. The legislature, presumably, thought that this was not a problem so long as it was not imposing a penalty for the making of statements considered to be cyberbullying. Whether the requirements that can be imposed as part of a “protection order” issued pursuant to the Cyber-Safety Act ― which can include not only prospective and retroactive censorship, but also a ban on using certain devices or online services ― really are not penalties is, to my mind, questionable, but let’s put that to one side for now. Even if the legislature is right that “protection orders” can fairly be characterized as preventive rather than punitive in nature, what exactly gives it the right to prevent people from doing things that, on its own view, are not actually wrong? The legislature itself is acting like a bully, albeit a well-intentioned one. It’s a good thing that Justice McDougall silenced it.

Anti-Bullying Law Struck Down

Last week, the Supreme Court of Nova Scotia struck down the province’s recently-enacted anti-cyber-bullying legislation, the Cyber-Safety Act. In Crouch v. Snell, 2015 NSSC 340, Justice McDougall holds that the Act both infringed the freedom of expression protected by s. 2(b) of the Canadian Charter of Rights and Freedoms, and made possible deprivations of liberty inconsistent with the principles of fundamental justice, contrary to s. 7 of the Charter. In this post, I summarize Justice McDougall’s reasons. (At great length, I am afraid, partly because it is important to explain the somewhat complicated legislation at issue, and mostly because the opinion covers a lot of constitutional ground.) I will comment separately.

Although laws against cyber-bullying are often justified by the need to protect young persons (especially children) from attacks and harassment by their peers, the parties in Crouch were adults, former partners in a technology start-up who had had a falling out. Mr. Crouch alleged that “Mr. Snell began a ‘smear campaign’ against him on social media.” [22] Mr. Crouch eventually responded by applying for a “protection order” under the Cyber-Safety Act.

The Act, whose stated “purpose … is to provide safer communities by creating administrative and court processes that can be used to address and prevent cyberbullying,” (s. 2) makes it possible for persons who consider that they are being the victims of cyber-bullying (or for their parents and police officers, if they are minors) to apply for an order that can include prohibitions against its target communicating with or about the applicant, or using specified electronic services or devices. The Act defines cyberbullying as

any electronic communication through the use of technology including, without limiting the generality of the foregoing, computers, other electronic devices, social networks, text messaging, instant messaging, websites and electronic mail, typically repeated or with continuing effect, that is intended or ought reasonably [to] be expected to cause fear, intimidation, humiliation, distress or other damage or harm to another person’s health, emotional well-being, self-esteem or reputation, and includ[ing] assisting or encouraging such communication in any way. (Par. 3(1)(b))

While some earlier cases read a requirement of malice into this definition, Justice McDougall considers that it included not only actions that had a “culpable intent” but also “conduct where harm was not intended, but ought reasonably to have been expected.” [80]

The applications are made “without notice to the respondent.” (Subs. 5(1)) If “the justice determines, on a balance of probabilities, that … the respondent engaged in cyberbullying of the subject; and … there are reasonable grounds to believe that the respondent will engage in cyberbullying of the subject in the future,” (s. 8) he or she can issue a “protection order.” Once an order is granted by the justice of the peace, it must be served on its target. A copy is forwarded to the Supreme Court, where a judge must review the order and confirm it (with or without amendment) if he or she “is satisfied that there was sufficient evidence … to support the making of the order.” (Subs. 12(2)) If the judge is not so satisfied, he or she must “direct a hearing of the matter in whole or in part,” (Subs. 12(3)) at which point the target of the order as well as the applicant are notified and can be heard.

Mr. Crouch’s application resulted in a protection order being granted by a justice of the peace. Reviewing it, Justice McDougall finds that some of Mr. Crouch’s allegations were unsupported by any evidence; indeed, in applying for the protection order, Mr. Crouch misrepresented a perfectly innocent statement made by Mr. Snell as a threat by taking it out of the context in which it had been made. Nevertheless, there was enough evidence supporting Mr. Crouch’s complaint for Justice McDougall to confirm, in somewhat revised form, the protection order, which prohibited Mr. Snell “from directly or indirectly communicating with” or “about” Mr. Crouch, [23] and ordered him to remove any social media postings that referred to Mr. Crouch explicitly or “that might reasonably lead one to conclude that they refer to” him. [73] This confirmation was subject to a ruling on the Cyber-Safety Act’s constitutionality, which Mr. Snell challenged.

His first argument was that the Act infringed his freedom of expression. Remarkably, the government was not content to argue that the infringement was justified under s. 1 of the Charter, and actually claimed that there was no infringement at all, “because communications that come within the definition of ‘cyberbullying’ are, due to their malicious and hurtful nature, low-value communications that do not accord with the values sought to be protected under s. 2(b).” [101] Justice McDougall rejects this argument, since the Supreme Court has consistently held that “[t]he only type of expression that receives no Charter protection is violent expression.” [102] In finding that both the purpose and the effect of the Act infringed freedom of expression, Justice McDougall cites Justice Moir’s comments in Self v. Baha’i, 2015 NSSC 94, at par. 25 :

[a] neighbour who calls to warn that smoke is coming from your upstairs windows causes fear. A lawyer who sends a demand letter by fax or e-mail causes intimidation. I expect Bob Dylan caused humiliation to P.F. Sloan when he released “Positively 4th Street”, just as a local on-line newspaper causes humiliation when it reports that someone has been charged with a vile offence. Each is a cyberbully, according to the literal meaning of the definitions, no matter the good intentions of the neighbour, the just demand of the lawyer, or the truthfulness of Mr. Dylan or the newspaper.

(Self was the case where the judge read a requirement of malice into the definition of cyber-bullying. There had, however, been no constitutional challenge to the Cyber-Safety Act there. Incidentally, Self also arose from a business dispute.)

The more difficult issue, as usual in freedom of expression cases, is whether the infringement is a “reasonable limit[] prescribed by law that can be demonstrably justified in a free and democratic society,” as section 1 of the Charter requires. In the opinion of Justice McDougall, the Cyber-Safety Act fails not only the Oakes test for justifying restrictions on rights, but also the requirement that such restrictions be “prescribed by law.”

Mr. Snell argued that the definition of cyber-bullying in the Cyber-Safety Act was too vague to count as “prescribed by law.” Justice McDougall considers that the definition “is sufficiently clear to delineate a risk zone. It provides an intelligible standard” [129] for legal debate. However, in his view, the same cannot be said of the requirement in section 8 of the Act that there be “reasonable grounds to believe that the respondent will engage in cyberbullying of the subject in the future.” Justice McDougall finds that “[t]he Act provides no guidance on what kinds of evidence and considerations might be relevant here [and thus] no standard so as to avoid arbitrary decision-making.” [130] While risk of re-offending is assessed in criminal sentencing decisions, this is done on the basis of evidence, rather than on an ex-parte application that may include only limited evidence of past, and no indication of future, conduct. Here, “[t]he Legislature has given a plenary discretion to do whatever seems best in a wide set of circumstances,” which is likely to result in “arbitrary and discriminatory applications.” [137]

Although this should be enough to dispose of the case, Justice McDougall nevertheless goes on to put the Cyber-Safety Act to the Oakes test. He concludes

that the objectives of the Act—to create efficient and cost-effective administrative and court processes to address cyberbullying, in order to protect Nova Scotians from undue harm to their reputation and their mental well-being—is [sic] pressing and substantial. [147]

However, he finds that the ex-parte nature of the process created by the Cyber-Safety Act is not rationally connected to these objectives. While proceeding without notice to the respondent may be necessary when the applicant does not know who is cyber-bullying him or her, or in emergencies, the Act requires applications to be ex-parte in every case. It thus “does not specifically address a targeted mischief.” [158]

Nor is the Act, in Justice McDougall’s view, minimally impairing of the freedom of expression. Indeed, he deems “the Cyber-safety Act, and the definition of cyberbullying in particular, … a colossal failure” in that it “unnecessarily catches material that has little or nothing to do with the prevention of cyberbullying.” [165] It applies to “both private and public communications,” [165] provides no defences ― not even truth or absence of ill-will ― and does not require “proof of harm.” [165]

Finally, Justice McDougall is of the opinion that the positive effects of the Cyber-Safety Act ― of which there is no evidence but whose existence he seems willing to “presume[]” [173] ― do not outweigh the deleterious ones. Once again, the scope of the definition of cyber-bullying is the issue: “[i]t is clear that many types of expression that go to the core of freedom of expression values might be caught” [175] by the statute.

In addition to the argument based on freedom of expression, Mr. Snell raised the issue of s. 7 of the Charter, and Justice McDougall addresses it too. The Cyber-Safety Act engages the liberty interest because the penalties for not complying with a “protection order” can include imprisonment. In Justice McDougall’s view, this potential interference with liberty is not in accordance with the principles of fundamental justice ― quite a few of them, actually. The ex-parte nature of the process the Act sets up is arbitrary, since as Justice McDougall already found, it lacks a rational connection with its objective. The statutory definition of cyber-bullying is overbroad, for the same reason it is not minimally impairing of the freedom of expression. The “requirement that the respondent be deemed likely to engage in cyberbullying in the future is incredibly vague.” [197] Moreover, “the protection order procedure set out in the Cyber-safety Act is not procedurally fair,” due mostly to “the failure to provide a respondent whose identity is known or easily ascertainable with notice of and the opportunity to participate in the initial protection order hearing.” [203] Finally, Justice McDougall adopts Justice Wilson’s suggestion in R. v. Morgentaler, [1988] 1 S.C.R. 30, that a deprivation of a s. 7 right that is also an infringement of another Charter right is not in accordance with the principles of fundamental justice. The Cyber-Safety Act infringes the freedom of expression, which “weighs heavily against a finding that the impugned law accords with the principles of fundamental justice.” [204] As with the infringement of the freedom of expression, that of s. 7 is not justified under section 1 of the Charter.

As a result, Justice McDougall declares the Cyber-Safety Act unconstitutional. The statutory scheme is too dependent on the over-inclusive definition of cyber-bullying for alternatives such as reading in or severing some provisions to be workable. The declaration of unconstitutionality is to take effect immediately, because “[t]o temporarily suspend [it] would be to condone further infringements of Charter-protected rights and freedoms.” [220] Besides, the victims of cyber-bullying still “have the usual—albeit imperfect—civil and criminal avenues available to them.” [220]

I believe that this is the right outcome. However, Justice McDougall’s reasons are not altogether satisfactory. More on that soon.

Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and especially, of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” to do these sorts of things. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment, and towards “global consciousness” ― whatever that too may be. Google can demote websites that it deems to be promoters of “hate” in its search results, as indeed it already demotes those it considers to be copyright-infringers. Apple could block access to its App Store for news sources it considers biased, as indeed it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its Newsfeed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just like we can with spell-check. But I’m pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and other tech companies, which are at once the regulators of their users’ speech and the subjects of government regulation, and may well be tempted to act in the former role so as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we’d be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to be in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Safety Regulations and the Charter

I wrote earlier this week about the decision of the Court of Appeal for Ontario in R. v. Michaud, 2015 ONCA 585, which upheld the constitutionality of regulations requiring trucks to be equipped with a speed limiter that prevents them from going faster than 105 km/h. The Court found that the regulations could put some truck drivers in danger by leaving them unable to accelerate their way out of trouble, and thus infringed s. 7 of the Canadian Charter of Rights and Freedoms, but that they were justified under s. 1 of the Charter. This is a most unusual outcome ― I’m not sure that a s. 7 violation had ever before been upheld under s. 1 ― and the Court itself suggested that the s. 7 analytical framework set out by the Supreme Court in Canada (Attorney General) v. Bedford, 2013 SCC 72, [2013] 3 S.C.R. 1101 is not well-suited to cases where the constitutionality of “safety regulations” is at issue. In this post, I would like to comment on the role of s. 7 in this and similar cases, and the role of courts in applying the constitution in such circumstances.

* * *

The Court may well be right that the current s. 7 framework is not adequate to deal with “safety regulations” ― at least if it has interpreted that framework correctly. Referring to Bedford, the Court took the position that any negative effect on a person’s security is enough to engage the “security of the person” right protected by s. 7. But I’m not sure that this is really what the Supreme Court meant when it said that “[a]t this stage, the question is whether the impugned laws negatively impact or limit the applicants’ security of the person.” [Bedford 58] Is there no threshold, at least a more-than-de-minimis one, for a court to find a s. 7 infringement? Such thresholds exist in jurisprudence on other provisions of the Charter, for example on s. 2(a), where a “trivial or insubstantial interference” with religious freedom, one “that does not threaten actual religious beliefs or conduct,” does not engage the Charter. Admittedly, Bedford says nothing about such a threshold in s. 7, but then neither it nor the other s. 7 cases that come to mind involved situations where the interference with security interests was of a potentially trivial magnitude.

As the Court of Appeal suggests in Michaud, “safety regulations” are likely to create precisely this sort of interference with the security interests, or even the right to life, of people who engage in the regulated activity. The Court explains that it is always possible to say that a more stringent regulation would have prevented a few more injuries or even deaths, so one could argue that the increase in each person’s likelihood of being injured or dying as a result of a somewhat more lax rule is a s. 7 violation. The Court is concerned that acceptance of such arguments will trivialize s. 7, and I agree that this would indeed be disturbing.

But it seems to me that the best response to this problem is to say that a purely statistical increase in the odds of being injured should not count as sufficient to establish the violation of any given person’s rights. In Bedford itself, the courts were able to show how the prostitution-related provisions of the Criminal Code directly and substantially interfered with the security of sex-workers who had to comply with them. The negative impact on their safety was not just statistical; one did not need an actuarial table to see it ― though statistical evidence was used to show the extent of the problems beyond the stories of the claimants themselves.

The Court of Appeal suggests a different approach, which is to treat safety regulations differently from other (perhaps especially criminal) laws, and to take their beneficial effects into account at the s. 7 stage of the analysis, and not only at the s. 1 justification stage as is usually done. In my view, there are two problems with this solution. First, it is inconsistent with the Supreme Court’s longstanding aversion to introducing balancing into the substantive provisions of the Charter. This aversion is justified, not only by the pursuit of coherence, but also by the desirability of putting the onus of proving social benefits not on a rights claimant, but on the government.

The other reason I find the creation of a special category of “safety regulations” problematic is that its contours would be uncertain, and would generate unnecessary yet difficult debate. The rules requiring speed limiters in trucks under pain of relatively limited penalties are obvious safety regulations. But it seems like a safe bet that the government would try to bring other rules within the scope of that category if doing this made defending their constitutionality easier, including for example the prostitution provisions enacted, in response to Bedford, as the Protection of Communities and Exploited Persons Act. Of course, the parties challenging these laws would fight just as hard to show that such rules are not really about safety. The uncertainty and the costs of litigation would increase, while the benefits to be gained from this approach are not obvious.

* * *

Now, this whole issue of statistical increases in risk being treated as a violation of s. 7 of the Charter is actually irrelevant to Michaud. That’s not to say the Court should not have brought it up ― I think it did us a favour by flagging it, and we should take up its invitation to think about this problem. Still, the issue in that case is different: it’s not that a safety regulation allegedly does not go far enough, but that it allegedly goes too far. The two possibilities are, of course, two sides of the same coin; they are both possible consequences of the regulator’s preference for a bright-line rule over a standard. The Court is right to observe that there are good reasons to prefer rules to standards (some of the time anyway). And surely the Charter wasn’t supposed to eliminate bright-line rules from our legislation.

However, to speak of the speed limiter requirement as a “bright-line rule” is to miss what is really distinctive about it. Those who challenge the requirement aren’t seeking it be replaced by a standard. They are content with a bright-line speed limit ― provided that they are able to infringe it on occasion (and, one suspects, that they are not prosecuted for doing so). Unlike a speed limit enforced, sporadically and ex-post, by the police, which can be broken if need be, a speed limit enforced permanently and ex-ante by an electronic device cannot be broken at all. In other words, the issue is not simply one of rules versus standards; it’s one of rules whose nature as rules can on occasion be ignored (put another way, rules that can be treated as if they were standards) versus rules that stay rules.

This creates a difficulty for constitutional law. Can a court acknowledge that a rule sometimes needs to be broken? Can a court go even further than that, and say that a rule is constitutionally defective if it doesn’t allow a mechanism for being broken? To say that the legislator is entitled to choose rules over standards does not really answer these questions. As thoughtful and sophisticated as the Michaud opinion is, I don’t think that it really addresses this issue.

That’s too bad, because this problem will arise again, and ever more urgently, with the development of technology that takes the need, and the ability, to make decisions away from humans. Self-driving cars are, of course, the obvious example. As it happens, the New York Times published an interesting story yesterday about the difficulties that Google’s autonomous vehicles run into because they are “programmed to follow the letter of the law” ― and the drivers of other cars on the road are not. Google’s cars come to a halt at four-way stops ― and cannot move away, because the other cars never do, and the robots let them by. Google’s cars keep a safe distance behind the next vehicle on a highway ― and other cars get right into the gap. The former situation might be merely inconvenient, although in a big way. The latter is outright dangerous. What happens if regulators mandate that self-driving cars be programmed so as never to break the rules and it can be shown that this will increase the danger of some specific situations on the road? What happens, for that matter, in a tort claim against the manufacturer (or rather the programmer) of such a vehicle? Michaud gives us some clues for thinking about the former question, though I’m not sure it fully settles it.

* * *

Thinking about constitutional questions that challenges to safety regulations give rise to also means thinking about the courts’ role when these regulations are challenged. In Michaud, the Court took a strongly deferential position, drawing a parallel with administrative law, where courts are required to defer to decisions that are based on the exercise of expert judgment. It noted, however, that “situations, in which a legislature or regulator uses a safety regulation for an improper collateral purpose, or where the regulator makes a gross error, are imaginable.” [152] In these situations, the courts should step in.

I think this is exactly right. Courts must be alert to the possibility that rules that ostensibly aim at health and safety are actually enacted for less benign purposes, and in particular as a result of various public choice problems. Safety rules are attractive to those who want to limit competition precisely because they look so obviously well-intentioned and are difficult to criticize. That said, when ― as in Michaud itself ― there seems to be no dispute that the rule at issue is genuinely meant to pursue safety objectives, courts should indeed adopt a hands-off approach. Michaud illustrates the difficulties courts have in dealing with conflicting expert reports based on complex and uncertain science. And the Court is right to suggest that governments should be entitled to err on the side of caution if they so wish ― though, by the same token, I think they should also not be required to do so (and the Court does not say otherwise). This is fundamentally a policy choice, and the courts should not be interfering with it.

* * *

The questions that the Michaud case raises are many and complex. The Court of Appeal’s opinion is thoughtful and interesting, though as I explained above and in my previous post, I’m not sure that its approach to the existing constitutional framework and to the evidence is the correct one. But that opinion does not address all of these questions. Eventually ― though not necessarily in this case, even if there is an appeal ― the Supreme Court will need to step in and start answering them.