Anti-Bullying Law Struck Down

Last week, the Supreme Court of Nova Scotia struck down the province’s recently enacted anti-cyber-bullying legislation, the Cyber-Safety Act. In Crouch v. Snell, 2015 NSSC 340, Justice McDougall holds that the Act both infringed the freedom of expression protected by s. 2(b) of the Canadian Charter of Rights and Freedoms, and made possible deprivations of liberty inconsistent with the principles of fundamental justice, contrary to s. 7 of the Charter. In this post, I summarize Justice McDougall’s reasons. (At great length, I am afraid, partly because it is important to explain the somewhat complicated legislation at issue, and mostly because the opinion covers a lot of constitutional ground.) I will comment separately.

Although laws against cyber-bullying are often justified by the need to protect young persons (especially children) from attacks and harassment by their peers, the parties in Crouch were adults, former partners in a technology start-up who had had a falling out. Mr. Crouch alleged that “Mr. Snell began a ‘smear campaign’ against him on social media.” [22] Mr. Crouch eventually responded by applying for a “protection order” under the Cyber-Safety Act.

The Act, whose stated “purpose … is to provide safer communities by creating administrative and court processes that can be used to address and prevent cyberbullying,” (s. 2) makes it possible for persons who consider themselves the victims of cyber-bullying (or, if the victims are minors, for their parents or police officers) to apply for an order that can include prohibitions against its target communicating with or about the applicant, or using specified electronic services or devices. The Act defines cyberbullying as

any electronic communication through the use of technology including, without limiting the generality of the foregoing, computers, other electronic devices, social networks, text messaging, instant messaging, websites and electronic mail, typically repeated or with continuing effect, that is intended or ought reasonably [to] be expected to cause fear, intimidation, humiliation, distress or other damage or harm to another person’s health, emotional well-being, self-esteem or reputation, and includ[ing] assisting or encouraging such communication in any way. (Par. 3(1)(b))

While some earlier cases read a requirement of malice into this definition, Justice McDougall considers that it covers not only actions undertaken with a “culpable intent” but also “conduct where harm was not intended, but ought reasonably to have been expected.” [80]

Applications are made to a justice of the peace, “without notice to the respondent.” (Subs. 5(1)) If “the justice determines, on a balance of probabilities, that … the respondent engaged in cyberbullying of the subject; and … there are reasonable grounds to believe that the respondent will engage in cyberbullying of the subject in the future,” (s. 8) he or she can issue a “protection order.” Once an order is granted by the justice of the peace, it must be served on its target. A copy is forwarded to the Supreme Court, where a judge must review the order and confirm it (with or without amendment) if he or she “is satisfied that there was sufficient evidence … to support the making of the order.” (Subs. 12(2)) If the judge is not so satisfied, he or she must “direct a hearing of the matter in whole or in part,” (Subs. 12(3)) at which point the target of the order, as well as the applicant, are notified and can be heard.

Mr. Crouch’s application resulted in a protection order being granted by a justice of the peace. Reviewing it, Justice McDougall finds that some of Mr. Crouch’s allegations were unsupported by any evidence; indeed, in applying for the protection order, Mr. Crouch misrepresented a perfectly innocent statement made by Mr. Snell as a threat by taking it out of the context in which it had been made. Nevertheless, there was enough evidence supporting Mr. Crouch’s complaint for Justice McDougall to confirm, in somewhat revised form, the protection order, which prohibited Mr. Snell “from directly or indirectly communicating with” or “about” Mr. Crouch, [23] and ordered him to remove any social media postings that referred to Mr. Crouch explicitly or “that might reasonably lead one to conclude that they refer to” him. [73] This confirmation was, however, subject to a ruling on the constitutionality of the Cyber-Safety Act, which Mr. Snell challenged.

His first argument was that the Act infringed his freedom of expression. Remarkably, the government was not content to argue that the infringement was justified under s. 1 of the Charter, and actually claimed that there was no infringement at all, “because communications that come within the definition of ‘cyberbullying’ are, due to their malicious and hurtful nature, low-value communications that do not accord with the values sought to be protected under s. 2(b).” [101] Justice McDougall rejects this argument, since the Supreme Court has consistently held that “[t]he only type of expression that receives no Charter protection is violent expression.” [102] In finding that both the purpose and the effect of the Act infringed freedom of expression, Justice McDougall cites Justice Moir’s comments in Self v. Baha’i, 2015 NSSC 94, at par. 25:

[a] neighbour who calls to warn that smoke is coming from your upstairs windows causes fear. A lawyer who sends a demand letter by fax or e-mail causes intimidation. I expect Bob Dylan caused humiliation to P.F. Sloan when he released “Positively 4th Street”, just as a local on-line newspaper causes humiliation when it reports that someone has been charged with a vile offence. Each is a cyberbully, according to the literal meaning of the definitions, no matter the good intentions of the neighbour, the just demand of the lawyer, or the truthfulness of Mr. Dylan or the newspaper.

(Self was the case where the judge read a requirement of malice into the definition of cyber-bullying. There had, however, been no constitutional challenge to the Cyber-Safety Act there. Incidentally, Self also arose from a business dispute.)

The more difficult issue, as usual in freedom of expression cases, is whether the infringement is a “reasonable limit[] prescribed by law that can be demonstrably justified in a free and democratic society,” as section 1 of the Charter requires. In the opinion of Justice McDougall, the Cyber-Safety Act fails not only the Oakes test for justifying restrictions on rights, but also the requirement that such restrictions be “prescribed by law.”

Mr. Snell argued that the definition of cyber-bullying in the Cyber-Safety Act was too vague to count as “prescribed by law.” Justice McDougall considers that the definition “is sufficiently clear to delineate a risk zone. It provides an intelligible standard” [129] for legal debate. However, in his view, the same cannot be said of the requirement in section 8 of the Act that there be “reasonable grounds to believe that the respondent will engage in cyberbullying of the subject in the future.” Justice McDougall finds that “[t]he Act provides no guidance on what kinds of evidence and considerations might be relevant here [and thus] no standard so as to avoid arbitrary decision-making.” [130] While risk of re-offending is assessed in criminal sentencing decisions, this is done on the basis of evidence, rather than on an ex-parte application that may include only limited evidence of past, and no indication of future, conduct. Here, “[t]he Legislature has given a plenary discretion to do whatever seems best in a wide set of circumstances,” which is likely to result in “arbitrary and discriminatory applications.” [137]

Although this should be enough to dispose of the case, Justice McDougall nevertheless goes on to put the Cyber-Safety Act to the Oakes test. He concludes

that the objectives of the Act—to create efficient and cost-effective administrative and court processes to address cyberbullying, in order to protect Nova Scotians from undue harm to their reputation and their mental well-being—is [sic] pressing and substantial. [147]

However, he finds that the ex-parte nature of the process created by the Cyber-Safety Act is not rationally connected to these objectives. While proceeding without notice to the respondent may be necessary when the applicant does not know who is cyber-bullying him or her, or in emergencies, the Act requires applications to be ex-parte in every case. It thus “does not specifically address a targeted mischief.” [158]

Nor is the Act, in Justice McDougall’s view, minimally impairing of the freedom of expression. Indeed, he deems “the Cyber-safety Act, and the definition of cyberbullying in particular, … a colossal failure” in that it “unnecessarily catches material that has little or nothing to do with the prevention of cyberbullying.” [165] It applies to “both private and public communications,” [165] provides no defences ― not even truth or absence of ill-will ―, and does not require “proof of harm.” [165]

Finally, Justice McDougall is of the opinion that the positive effects of the Cyber-Safety Act ― of which there is no evidence but whose existence he seems willing to “presume[]” [173] ― do not outweigh the deleterious ones. Once again, the scope of the definition of cyber-bullying is the issue: “[i]t is clear that many types of expression that go to the core of freedom of expression values might be caught” [175] by the statute.

In addition to the argument based on freedom of expression, Mr. Snell raised the issue of s. 7 of the Charter, and Justice McDougall addresses it too. The Cyber-Safety Act engages the liberty interest because the penalties for not complying with a “protection order” can include imprisonment. In Justice McDougall’s view, this potential interference with liberty is not in accordance with the principles of fundamental justice ― quite a few of them, actually. The ex-parte nature of the process the Act sets up is arbitrary, since, as Justice McDougall has already found, it lacks a rational connection with its objective. The statutory definition of cyber-bullying is overbroad, for the same reason it is not minimally impairing of the freedom of expression. The “requirement that the respondent be deemed likely to engage in cyberbullying in the future is incredibly vague.” [197] Moreover, “the protection order procedure set out in the Cyber-safety Act is not procedurally fair,” due mostly to “the failure to provide a respondent whose identity is known or easily ascertainable with notice of and the opportunity to participate in the initial protection order hearing.” [203] Finally, Justice McDougall adopts Justice Wilson’s suggestion in R. v. Morgentaler, [1988] 1 S.C.R. 30, that a deprivation of a s. 7 right that is also an infringement of another Charter right is not in accordance with the principles of fundamental justice. The Cyber-Safety Act infringes the freedom of expression, which “weighs heavily against a finding that the impugned law accords with the principles of fundamental justice.” [204] As with the infringement of the freedom of expression, that of s. 7 is not justified under section 1 of the Charter.

As a result, Justice McDougall declares the Cyber-Safety Act unconstitutional. The statutory scheme is too dependent on the over-inclusive definition of cyber-bullying for alternatives such as reading in or severing some provisions to be workable. The declaration of unconstitutionality is to take effect immediately, because “[t]o temporarily suspend [it] would be to condone further infringements of Charter-protected rights and freedoms.” [220] Besides, the victims of cyber-bullying still “have the usual—albeit imperfect—civil and criminal avenues available to them.” [220]

I believe that this is the right outcome. However, Justice McDougall’s reasons are not altogether satisfactory. More on that soon.

Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and especially of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” to do these sorts of things. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment, and towards “global consciousness” ― whatever that too may be. Google can demote websites that it deems to be promoters of “hate” in its search results, as indeed it already demotes those it considers to be copyright-infringers. Apple could block news sources it considers biased from its App Store, as indeed it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its Newsfeed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just like we can with spell-check. But I’m pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and of other tech companies who are in a position of being, at once, the regulators of their users’ speech and subjects of government regulations, and may well be tempted to so act in the former role as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.
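To make the spell-check analogy concrete, here is a toy sketch of what such an overridable nudge might look like. Everything in it, from the function names to the crude keyword "classifier" and the confirmation flag, is a hypothetical placeholder of my own invention; no platform has, to my knowledge, published anything of the sort.

```python
# A deliberately naive sketch of an overridable "civility check" of the kind
# discussed above. The keyword list, function names, and confirmation flag are
# all hypothetical placeholders, not any real platform's API.

HOSTILE_MARKERS = {"idiot", "loser", "pathetic"}  # stand-in for a real classifier

def looks_hostile(text: str) -> bool:
    """Crude stand-in for whatever model a platform might actually use."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & HOSTILE_MARKERS)

def submit_post(text: str, user_confirms: bool = False) -> str:
    """Publish the post, nudging (but not forbidding) flagged content."""
    if looks_hostile(text) and not user_confirms:
        return "Warning: this may read as hostile. Confirm to post anyway."
    return f"Posted: {text}"

print(submit_post("You are a pathetic loser"))                      # nudged
print(submit_post("You are a pathetic loser", user_confirms=True))  # override honoured
```

The entire argument is contained in the user_confirms flag: keep it, and this is spell-check for incivility; remove it, and the nudge becomes a censor.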

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we’d be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to be in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Safety Regulations and the Charter

I wrote earlier this week about the decision of the Court of Appeal for Ontario in R. v. Michaud, 2015 ONCA 585, which upheld the constitutionality of regulations requiring trucks to be equipped with a speed limiter that prevents them from going faster than 105 km/h. The Court found that the regulations could put some truck drivers in danger by leaving them unable to accelerate their way out of trouble, and thus infringed s. 7 of the Canadian Charter of Rights and Freedoms, but that they were justified under s. 1 of the Charter. This is a most unusual outcome ― I’m not sure a s. 7 violation had ever before been upheld under s. 1 ― and the Court itself suggested that the s. 7 analytical framework set out by the Supreme Court in Canada (Attorney General) v. Bedford, 2013 SCC 72, [2013] 3 S.C.R. 1101 is not well-suited to cases where the constitutionality of “safety regulations” is at issue. In this post, I would like to comment on the role of s. 7 in this and similar cases, and the role of courts in applying the constitution in such circumstances.

* * *

The Court may well be right that the current s. 7 framework is not adequate to deal with “safety regulations” ― at least if it has interpreted that framework correctly. Referring to Bedford, the Court took the position that any negative effect on a person’s security is enough to engage the “security of the person” right protected by s. 7. But I’m not sure that this is really what the Supreme Court meant when it said that “[a]t this stage, the question is whether the impugned laws negatively impact or limit the applicants’ security of the person.” [Bedford 58] Is there no threshold, at least a more-than-de-minimis one, for a court to find a s. 7 infringement? Such thresholds exist in jurisprudence on other provisions of the Charter, for example on s. 2(a), where a “trivial or insubstantial interference” with religious freedom, one “that does not threaten actual religious beliefs or conduct,” does not engage the Charter. Admittedly, Bedford says nothing about such a threshold in s. 7, but then neither it nor the other s. 7 cases that come to mind involved situations where the interference with security interests was of a potentially trivial magnitude.

As the Court of Appeal suggests in Michaud, “safety regulations” are likely to create precisely this sort of interference with the security interests, or even the right to life, of people who engage in the regulated activity. The Court explains that it is always possible to say that a more stringent regulation would have prevented a few more injuries or even deaths, so one could argue that the increase in each person’s likelihood of being injured or dying as a result of a somewhat laxer rule is a s. 7 violation. The Court is concerned that acceptance of such arguments will trivialize s. 7, and I agree that this would indeed be disturbing.

But it seems to me that the best response to this problem is to say that a purely statistical increase in the odds of being injured should not count as sufficient to establish the violation of any given person’s rights. In Bedford itself, the courts were able to show how the prostitution-related provisions of the Criminal Code directly and substantially interfered with the security of sex-workers who had to comply with them. The negative impact on their safety was not just statistical; one did not need an actuarial table to see it ― though statistical evidence was used to show the extent of the problems beyond the stories of the claimants themselves.

The Court of Appeal suggests a different approach, which is to treat safety regulations differently from other (perhaps especially criminal) laws, and to take their beneficial effects into account at the s. 7 stage of the analysis, and not only at the s. 1 justification stage, as is usually done. In my view, there are two problems with this solution. First, it is inconsistent with the Supreme Court’s longstanding aversion to introducing balancing into the substantive provisions of the Charter. This aversion is justified not only by the pursuit of coherence, but also by the desirability of putting the onus of proving social benefits on the government.

The other reason I find the creation of a special category of “safety regulations” problematic is that its contours would be uncertain, and would generate unnecessary yet difficult debate. The rules requiring speed limiters in trucks under pain of relatively limited penalties are obvious safety regulations. But it seems like a safe bet that the government would try to bring other rules within the scope of that category if doing this made defending their constitutionality easier, including for example the prostitution provisions enacted, in response to Bedford, as the Protection of Communities and Exploited Persons Act. Of course, the parties challenging these laws would fight just as hard to show that such rules are not really about safety. The uncertainty and the costs of litigation would increase, while the benefits to be gained from this approach are not obvious.

* * *

Now, this whole issue of statistical increases in risk being treated as a violation of s. 7 of the Charter is actually irrelevant to Michaud. That’s not to say the Court should not have brought it up ― I think it did us a favour by flagging it, and we should take up its invitation to think about this problem. Still, the issue in that case is different: it’s not that a safety regulation allegedly does not go far enough, but that it allegedly goes too far. The two possibilities are, of course, two sides of the same coin; they are both possible consequences of the regulator’s preference for a bright-line rule over a standard. The Court is right to observe that there are good reasons to prefer rules to standards (some of the time, anyway). And surely the Charter wasn’t supposed to eliminate bright-line rules from our legislation.

However, to speak of the speed limiter requirement as a “bright-line rule” is to miss what is really distinctive about it. Those who challenge the requirement aren’t seeking to have it replaced by a standard. They are content with a bright-line speed limit ― provided that they are able to infringe it on occasion (and, one suspects, that they are not prosecuted for doing so). Unlike a speed limit enforced, sporadically and ex-post, by the police, which can be broken if need be, a speed limit enforced permanently and ex-ante by an electronic device cannot be broken at all. In other words, the issue is not simply one of rules versus standards; it’s one of rules whose nature as rules can on occasion be ignored (put another way, rules that can be treated as if they were standards) versus rules that stay rules.
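The distinction can be made concrete with a toy example. The sketch below, with invented names and a deliberately simplified model of a truck's controls, contrasts a limit enforced ex-post by the police with one enforced ex-ante by the device itself:

```python
# A stylized contrast between the two kinds of rules discussed above, with
# invented names and a deliberately simplified model of a truck's controls.

LIMIT_KMH = 105

def ex_post_speed(requested_kmh: float):
    """Police-enforced limit: the truck goes as fast as the driver asks;
    a violation is merely recorded, and a sanction may (or may not) follow."""
    violation = requested_kmh > LIMIT_KMH
    return requested_kmh, violation

def ex_ante_speed(requested_kmh: float):
    """Speed limiter: the request is clamped by the device itself, so no
    violation can ever occur, even when accelerating out of danger."""
    return min(requested_kmh, LIMIT_KMH), False

print(ex_post_speed(120))  # (120, True): the rule can be broken if need be
print(ex_ante_speed(120))  # (105, False): the rule stays a rule
```

In the first model the driver retains the option of treating the rule as a standard in an emergency; in the second, that option is engineered away.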

This creates a difficulty for constitutional law. Can a court acknowledge that a rule sometimes needs to be broken? Can a court go even further than that, and say that a rule is constitutionally defective if it doesn’t allow a mechanism for being broken? To say that the legislator is entitled to choose rules over standards does not really answer these questions. As thoughtful and sophisticated as the Michaud opinion is, I don’t think that it really addresses this issue.

That’s too bad, because this problem will arise again, and ever more urgently, with the development of technology that takes the need, and the ability, to make decisions away from humans. Self-driving cars are, of course, the obvious example. As it happens, the New York Times published an interesting story yesterday about the difficulties that Google’s autonomous vehicles run into because they are “programmed to follow the letter of the law” ― and the drivers of other cars on the road are not. Google’s cars come to a halt at four-way stops ― and cannot move away, because the other cars never do, and the robots let them by. Google’s cars keep a safe distance behind the next vehicle on a highway ― and other cars get right into the gap. The former situation might be merely inconvenient, although in a big way. The latter is outright dangerous. What happens if regulators mandate that self-driving cars be programmed so as never to break the rules, and it can be shown that this will increase the danger in some specific situations on the road? What happens, for that matter, in a tort claim against the manufacturer (or rather the programmer) of such a vehicle? Michaud gives us some clues for thinking about the former question, though I’m not sure it fully settles it.

* * *

Thinking about constitutional questions that challenges to safety regulations give rise to also means thinking about the courts’ role when these regulations are challenged. In Michaud, the Court took a strongly deferential position, drawing a parallel with administrative law, where courts are required to defer to decisions that are based on the exercise of expert judgment. It noted, however, that “situations, in which a legislature or regulator uses a safety regulation for an improper collateral purpose, or where the regulator makes a gross error, are imaginable.” [152] In these situations, the courts should step in.

I think this is exactly right. Courts must be alert to the possibility that rules that ostensibly aim at health and safety are actually enacted for less benign purposes, and in particular as a result of various public choice problems. Safety rules are attractive to those who want to limit competition precisely because they look so obviously well-intentioned and are difficult to criticize. That said, when ― as in Michaud itself ― there seems to be no dispute that the rule at issue is genuinely meant to pursue safety objectives, courts should indeed adopt a hands-off approach. Michaud illustrates the difficulties they have in dealing with conflicting expert reports based on complex and uncertain science. And the Court is right to suggest that governments should be entitled to err on the side of caution if they so wish ― though, by the same token, I think they should also not be required to do so (and the Court does not say otherwise). This is fundamentally a policy choice, and the courts should not be interfering with it.

* * *

The questions that the Michaud case raises are many and complex. The Court of Appeal’s opinion is thoughtful and interesting, though, as I explained above and in my previous post, I’m not sure that its approach to the existing constitutional framework and to the evidence is the correct one. But that opinion does not address all of these questions. Eventually ― though not necessarily in this case, even if there is an appeal ― the Supreme Court will need to step in and start answering them.

Law and Innovation, Again

In my July post for the National Magazine’s blog I wrote that the decision of Ontario’s Superior Court rejecting the attempt by the city of Toronto to stop Uber operating there without a “taxicab broker” license was a reminder of the fact that technological innovation often challenges the law not directly, but by enabling innovative business models. In a recent post for the Hoover Institution’s Defining Ideas, Richard Epstein offers a similar argument, and draws similar conclusions from it.

Prof. Epstein’s post, as it happens, was also prompted by litigation against Uber ― this time in California, where an administrative tribunal recently concluded that its drivers were “employees” and thus entitled to certain benefits. (It is worth noting that New York City’s authorities have since taken the contrary position.) Prof. Epstein points out that

There is no question that [“sharing economy”] platform systems require a contractual framework for a three-party relationship that is not found in the playbook of traditional industries, where there is a direct relationship between the party that supplies the goods and services and the party that requests them.

The law, says Prof. Epstein, has a choice in how to respond to the situation. It can let the market work out new forms of contractual relations, which might combine elements of pre-existing standard arrangements (such as the employment contract) if the parties want it. Alternatively, it can try to simply fit new commercial relationships into the pre-existing forms.

For Prof. Epstein, the choice is clear:

it is a hopeless task to apply traditional regulatory structures to modern arrangements, especially when they block the implementation of new business models. Indeed, it is necessary to go one step further: it makes no sense to apply these regulatory statutes to older businesses, too. Time after time, these statutes are drafted with some “typical” arrangement in mind, only for the drafters to discover that they must also try to apply the statutes to nonstandard transactions that do not fit within the mold.

No disagreement from me. Here’s what I had written last month:

[t]he law ― including the regulation of taxis ― is written with specific business models in mind. When the business models in question are no longer the only ones around, the legal rules based on the assumption that they are lose their efficacy.

We should, I said, resist “[t]he temptation to expand the scope of the existing regulations, to close the ‘loopholes’ opened up by innovation,” and take “the disruptions caused by innovation … as an opportunity to ask whether any of the arguments for the old rules … still apply.”

But if you didn’t want to take it from me then, you should take it from Prof. Epstein now.

The Uber Decision

Last week, Ontario’s Superior Court of Justice delivered a much-noticed judgment rejecting Toronto’s claims that Uber could not operate there without registering and obtaining a license as a taxicab or limousine broker. Needless to say, the ruling is of great practical importance to Uber’s users, both passengers and drivers, as well as to those who seek to regulate it out of existence. Legally, the decision, City of Toronto v Uber Canada Inc., 2015 ONSC 3572, is about a very narrow issue of statutory interpretation. Yet the recently appointed Justice Dunphy’s thorough and well-written opinion provides us with an opportunity to reflect on the importance of the Rule of Law and the processes of legal change.

The City of Toronto, like many others in Canada and elsewhere, has chosen to cartelize the transportation of persons by privately owned cars. All the cars used for that purpose are divided into the categories of “taxicabs” and “limousines.” The number of the former is fixed; the number of the latter is restricted indirectly, by imposing a variety of regulations on their owners and operators. In addition, the City requires “taxicab brokers” and “limousine service companies” to obtain licenses in order to operate within its limits. The City’s case against Uber was that it was acting as a “taxicab broker” or a “limousine service company” without having obtained the requisite license. It asked the Court for both a declaration and an injunction that would have ordered Uber to stop its operations in Toronto. Uber, for its part, claimed that its operations were not covered by the City’s by-laws.

Justice Dunphy begins by determining whether Uber cars might be “taxicabs” or “limousines” within the meaning of the applicable by-law, chapter 545 of the City of Toronto Municipal Code. The definition of a “taxicab” is limited to categories defined by the various types of permits issued by the City. Since Uber cars lack such permits, they do not fall within this definition, reasons Justice Dunphy, and must be “limousines,” which include all cars “used for hire for the conveyance of passengers in the City of Toronto” other than “taxicabs.” To say that unlicensed cars used for that purpose are still “taxicabs” “would make nonsense of the definition of ‘limousine’ in the same enactment” [57] and thus cannot be the correct interpretation.

Having concluded that Uber cars are “limousines,” Justice Dunphy asks himself whether Uber ― or, more precisely, any one of the three members of the Uber group of companies actually sued by the City ― acted as a “limousine service company.” The by-law defines such a company as a “person or entity which accepts calls in any manner for booking, arranging or providing limousine transportation.” Uber, Justice Dunphy holds, does not “accept calls,” and thus is not covered by the definition. In Justice Dunphy’s view “accepting” a call or any sort of request “requires the intervention of some element of human discretion or judgment in the process and cannot be applied to a merely passive, mechanical role of receiving and relaying electronic messages.” [78] Yet that is precisely what Uber does.

Having provided prospective passengers and drivers with software that allows them to connect, often well in advance of any specific trip being envisioned by either party, it relays passengers’ requests for a ride to the nearest car available. Unlike a traditional taxi broker or limousine company, it cannot reject the request (for example if there are no cars available) or undertake to fulfill it. It is the driver who receives the request who takes the decision. Uber no more “accepts” requests for rides than does a phone company whose networks are used to transmit traditional calls for cabs, or automated services that connect a prospective rider with a broker. In Justice Dunphy’s view, it “is very likely” that “the by-law was drafted and the word ‘accepts’ was selected in lieu of the more generic ‘receives'” precisely in order “to exclude such businesses from the scope of the regulation.” [70]
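The distinction between accepting a call and merely relaying one lends itself to a short illustration. The sketch below is a stylized model of my own devising, with invented names and data, not a description of Uber's actual software: the "broker" exercises judgment in deciding whether a booking is accepted at all, while the "platform" mechanically forwards the request to the nearest driver, who alone decides.

```python
# A stylized model of the two businesses Justice Dunphy distinguishes. All
# names and data are invented; this is not Uber's actual software.

import math

drivers = [{"id": "d1", "pos": (0.0, 1.0), "willing": False},
           {"id": "d2", "pos": (2.0, 2.0), "willing": True}]

def broker_accepts(fleet) -> bool:
    """Traditional broker: decides whether the booking is accepted at all,
    e.g. refusing it when no car is available."""
    return any(d["willing"] for d in fleet)

def platform_relay(request_pos, fleet):
    """Uber-style relay: mechanically forward the request to the nearest
    driver; whether to accept is that driver's decision, not the platform's."""
    nearest = min(fleet, key=lambda d: math.dist(d["pos"], request_pos))
    return nearest["id"], nearest["willing"]

print(broker_accepts(drivers))          # True: the broker "accepts the call"
print(platform_relay((0, 0), drivers))  # ('d1', False): relayed, but declined
```

On this model, the platform never refuses or undertakes anything; the only judgment exercised anywhere belongs to the driver.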

Justice Dunphy also considers the meaning of the word “calls,” used in the definition of a “limousine service company” ― but not in that of a “taxicab broker,” which, unlike the limousine company, can accept “requests.” This difference in wording, Justice Dunphy says, must be given effect, so that “calls” cannot be taken to mean “requests.” Besides, the word “requests” is a recent innovation in the definition of a “taxicab broker,” and the City could have amended the definition of “limousine service company” accordingly, but has not done so. Online requests handled by Uber are not “calls” in any normal sense of the word, and this is an additional reason for concluding that it is not a “limousine service company.”

Although it might seem like excessive legalistic pedantry to some, I find Justice Dunphy’s analysis persuasive. Needless to say, it only applies to the specific legislative framework before him. Had the relevant definitions been drafted differently, his conclusions would presumably have been different too. But given the by-laws that were actually before him, I think that Justice Dunphy was quite right to distinguish the passive or mechanical functions of receiving or transmitting a communication and the (at least somewhat) discretionary function of accepting an order, as well as to give effect to the distinction between “calls” and “requests” which the City itself has created.

As I said in the beginning, beyond the narrow point about the meaning of the specific words used by Toronto’s city council to regulate its taxi industry, there is a broader one about the Rule of Law. As Justice Dunphy points out, “[t]he goal of statutory interpretation is not to start with the desired outcome that the regulator seeks in light of new developments to see what means can be found to stretch the words used to accomplish the goal,” [69] which as he says is what he would have had to do in order to rule for the City in this case. The Rule of Law requires, among other things, that legal rules be public and relatively stable. It also requires the government to be bound by the existing legal rules. A legal system where the meaning of the rules can change because the government wants it to, even though it cannot be bothered to follow the procedures available for legal change, is not one where the Rule of Law prevails.

It is often said that insisting on this “formal” sort of Rule of Law is not enough, because requirements as to the publicity and clarity of legislation, and insistence on legal change following recognized procedures, do not do much to constrain government. Government can still enact whatever rules it wants, so long as it goes about it the right way. But if it really were so easy for government to change the rules while following the applicable procedures, would it really be fighting so hard to avoid having to do so? As Justice Dunphy recognizes,

[t]he City finds itself caught between the Scylla of the existing regulatory system, with its numerous vested interests characterized by controlled supply and price, and the Charybdis of thousands of consumer/voters who do not wish to see the competition genie forced back into the bottle now that they have acquired a taste for it. [9]

Changing the rules, in this context, is not as easy as those who denigrate the formal understandings of the Rule of Law would have us believe. And so it matters a great deal whether

the City’s regulations, crafted in a different era, with different technologies in mind [have] created a flexible regulatory firewall around the taxi industry sufficient to resist the Uber challenge, or … instead [have] created the equivalent of a regulatory Maginot Line behind which it has retreated, neither confronting nor embracing the challenges of the new world of internet-enabled mobile communications. [12]

Justice Dunphy’s conclusion, of course, is that the City’s regulations have done the latter, and Uber is thus free to pursue its (charm) offensive. In theory, the regulatory troops can still be withdrawn from the useless, antiquated defences and thrown into the battle to stop the invaders. In practice, it may well be too late by the time they can be mobilized.

Justice Dunphy understands this, no doubt. Although he insists, as most judges not named Richard Posner are wont to do, that “[q]uestions of what policy choices the City should make or how the regulatory environment ought to respond to mobile communications technology changes are political ones” [13] and not for him to resolve, his awareness of, and willingness to mention, the conflict between “vested interests” and the “competition genie” suggest that he knows that his decision will influence the choices that will end up being made. Indeed, Justice Dunphy’s attention to the details of Uber’s technology and business model, as well as his awareness of the broader context in which the case before him fits, not to mention his rhetorical flourishes, have something at least vaguely Posnerian about them. The decision he has delivered is not only an Uber decision, meaning a decision about Uber. It’s also an über-decision ― one that is superior to what one usually sees.

Plus ça change…

This is the fourth and last post in the series about my most recent article, “‘Third Parties’ and Democracy 2.0″, (2015) 60:2 McGill LJ 253. On Monday, I introduced the paper, which deals with the repercussions of political and technological changes on our framework for regulating the participation of persons other than parties and candidates in pre-electoral debate. On Tuesday, I discussed the political changes of the last 45 years, which have resulted in political parties more or less deserting the realm of policy debates, and leaving a void that can only be filled by those whom our electoral law considers to be “third parties” and relegates to the sidelines of pre-electoral debate. Yesterday, I discussed the effect of the technologies and business models of Web 2.0 ― a separation of spending and speech that has made it possible for third parties to participate in electoral campaigns without spending money, and thus without being subject to the limits imposed by our election laws.

Today, I consider the amendments I would like to see made to the Canada Elections Act and to similar legislation elsewhere, in light of the changes to the “facts on the ground” which such legislation covers. Perhaps counter-intuitively, my article argues that such amendments can actually be quite modest. I would prefer more substantial changes, to be sure, but they would require a different, more ambitious argument. While I have hinted at it in various posts here, I do not make it in the article. What I am concerned with there is, as I put it yesterday, keeping open the avenue for third-party communications created by Web 2.0.

To do so, the most important thing, as is often the case, is not so much to improve the current state of affairs as simply not to make it worse. There is a danger that the adherents of a conception of politics where pre-electoral debates are entirely dominated by political parties ― not least the parties themselves, but possibly also some electoral authorities ― will seek to restore the parties’ former privileged position by imposing limits on Web 2.0 communications by third parties not restricted by current rules. How serious this danger really is, I cannot tell. I am not aware of any real proposals to this effect, but then the impact of social media on electoral campaigns is only beginning to be felt. And there is at least a chance that politicians and bureaucrats will recognize the difficulty of regulating citizens’ expression on social media, the huge cost of attempting to enforce such regulations, the dangers of political abuse of the inevitably selective enforcement, and generally the huge amounts of censorship that would have to be imposed to achieve the desired effect.

Beyond this “do no harm” position, we can and should reform electoral laws in two ways, which recognize that in light of the political parties’ unwillingness to debate ideas, it is important to make it easier, not more difficult, for third parties to inject issues of policy into election campaigns. First, the existing limits on third-party expenses should be raised. There is plenty of room for doing so, even without calling into question the principle that their expenses should be limited to amounts substantially lower than those permitted to political parties. As I put it in the article,

[a]s the Supreme Court recognized long ago [in Reference Re Alberta Statutes – The Bank Taxation Act; The Credit of Alberta Regulation Act; and the Accurate News and Information Act, [1938] SCR 100 at 132-134], elections to Parliament are a national, not a local concern. It must be possible for Canadians to debate the issues they raise on a national and not only a local scale, regardless of the willingness of political parties to do so. (292)

And second, the rules on third-party communications need to be made technologically neutral. The Canada Elections Act, for a reason that I do not understand, treats online communications differently from more traditional ones, in that it exempts only online communications by individuals, and not those of organizations (corporations, trade unions, and the like), from its definition of electoral expenses. By contrast, for other forms of communication, notably those published in the traditional media, the messaging of individuals and that of entities is treated in exactly the same way, whether it is exempt from or included in the definition of (restricted) electoral expenses. The singling out of online communications for a more stringent rule should be repealed.

While my article is only concerned with federal law, I will say something here about Québec, because its Election Act suffers from the same problems as the federal legislation, but on a much greater scale. Its limit on third-party expenses is an absurdly low $300, which of course prevents any sort of effective communication other than through Web 2.0 means. (For instance, I have blogged here about the case of Yves Michaud, who published an ad criticizing some members of Québec’s National Assembly for voting to censor him once upon a time, and was fined by the province’s electoral authorities. Mr. Michaud may be an odious character, but why shouldn’t he have been allowed to make his case?) Besides, only individuals are allowed to make their views known as third parties. Corporations, unions, NGOs, and social movements are forced to shut up altogether.

The Election Act’s provisions on third-party participation are also not at all technologically neutral. This has, in the last two election campaigns, resulted in electoral authorities attempting to shut down expression by online “citizen media” ― a website in 2012 and a short documentary in 2014. In both cases, the authorities quickly reversed course, but ― as I argued here ― it was their initial determinations that such advocacy was not permitted by the law that were correct; their reversal was a deliberate misreading of the legislation, an attempt to mitigate the law’s harshness and obsolescence that was itself contrary to the Rule of Law. The statute urgently needs to be reformed.

To show the need for reform along those lines and, even more importantly, for avoiding pernicious reform in a (likely futile) attempt to restore political parties to a position of which Web 2.0 is depriving them ― and which they do not deserve ― was the ultimate aim of my article. But even if I have only succeeded in making you appreciate the importance of the changes ― in politics as well as in technology and business models ― that are shaping the factual background which electoral law regulates, I have already accomplished something.

Free Speech

This is the third post in the series about my most recent article, “‘Third Parties’ and Democracy 2.0″, (2015) 60:2 McGill LJ 253. On Monday, I introduced the paper, which deals with the repercussions of political and technological changes on our framework for regulating the participation of persons other than parties and candidates in pre-electoral debate. Yesterday, I discussed the political changes of the last 45 years, which have resulted in political parties more or less deserting the realm of policy debates, and leaving a void that can only be filled by those whom our electoral law considers to be “third parties” and relegates to the sidelines of pre-electoral debate.

Today, I take up the issue of technological change ― and especially the development of various “web 2.0” technologies and business models ― that has made political (as well as other) speech free not only in the legal, but also in the financial sense. I describe this change as the “separation of spending and speech.” I posted about it long ago, when I was writing the first draft of the article. But the issue is important enough to be worth re-emphasizing, and anyway only a few hardy souls were reading this blog at the time.

The idea is a simple one, but its implications are considerable. Until ten years ago, at most, the only way a message (political or not) could be made to reach substantial numbers of people was through the print or electronic mass media ― either as content a media organization itself chose to run, as part of a news item or an editorial, or as an op-ed, or as a paid advertisement. Unless the media took up your message of their own volition ― and they had limited space to do so, especially for messages transmitted in the form chosen by their authors (such as newspaper op-eds) ― you had to pay for its transmission, and pay a lot. The vast majority of individuals could not afford it ― when acting on their own, anyway; organizations, notably trade unions, are in a different position thanks to their ability to pool together the resources of large numbers of people.

Canadian election laws were written with this reality in mind. Those of them that regulate the participation of persons and entities other than candidates and political parties, a.k.a. “third parties,” address the various types of communications and treat them differently depending on whether the third party has to pay for the transmission of the communication. Communications taken up by the media ― news reports, interviews, or op-eds ― are exempted from the definition of “election expenses” and thus not regulated. Paid advertisement is counted as an expense and strictly limited.

The combination of statutory spending limits with the constraints that the technologies and business models of the traditional mass media imposed on the amount of third-party communications not covered by those limits served to circumscribe third-party participation in pre-electoral debates. Political parties, by contrast, operate under much more relaxed versions of these twin constraints. The spending limits to which they are subject are much higher than those imposed on third parties, and the media are more interested in giving them a voice ― even when, as I explained in yesterday’s post, the parties don’t really have anything interesting to say. Political parties could thus remain at the centre of the discussion.

Web 2.0 ― the websites that allow users to easily generate and communicate their own content, such as social networks, YouTube, and various blogging services ― changes things by removing one of the two constraints on the ability of third parties to communicate with voters. The spending limits are still in place, but it is no longer necessary to spend in order to speak. In Harper v. Canada (Attorney General), 2004 SCC 33, [2004] 1 S.C.R. 827, which upheld the federal restrictions on third party advertising, the dissent pointed out that these restrictions were so low as to prevent a third party from taking out advertisements in the national press, or in the electronic media. The majority responded by observing that most people could simply not afford to do so anyway. Both of these facts were and still are true. But now, thanks to the separation of spending and speech made possible by the technologies and business models of web 2.0, both may also be increasingly beside the point. Even a single person’s rant about a political party can easily be seen by hundreds of his or her “friends” on Facebook ― at no financial cost to him or her. Ten years ago, reaching the same audience would probably have cost a substantial sum of money, if it had been feasible at all. And of course the possibilities of “sharing” and hyperlinking increase the potential audience one may reach exponentially, at no additional expense ― which, again, is a dramatic departure from the pre-Web 2.0 days.
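A stylized back-of-the-envelope calculation shows what this separation means in practice. Suppose, purely for illustration, that a post is seen by 300 "friends" and that five per cent of viewers share it onward to similarly sized audiences (overlap between audiences is ignored, and all numbers are invented):

```python
# A back-of-the-envelope illustration of the separation of spending and
# speech. All numbers are invented, and overlap between audiences is ignored;
# the point is only that reach grows geometrically at zero marginal cost.

FRIENDS = 300      # assumed audience of the original post
SHARE_RATE = 0.05  # assumed fraction of viewers who share onward

def potential_reach(generations: int) -> int:
    """Total viewers after a given number of sharing 'generations'."""
    reach = FRIENDS
    sharers = FRIENDS * SHARE_RATE
    for _ in range(generations):
        new_viewers = sharers * FRIENDS
        reach += new_viewers
        sharers = new_viewers * SHARE_RATE
    return round(reach)

for g in range(4):
    print(g, potential_reach(g))  # 0: 300, 1: 4800, 2: 72300, 3: 1084800
```

On these invented numbers, three "generations" of sharing take the audience from 300 to over a million, a reach that, before Web 2.0, only a substantial advertising budget could buy.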

To be sure, the Web 2.0 means of communication have not yet entirely displaced the traditional media as a means of reaching large numbers of people. But they have added a crucially important avenue through which third parties can express themselves throughout an election campaign, and thus reduced the severity of the effects of the spending limits on their ability to do so. Conversely, they have deprived political parties of their near-monopoly on the political debate at election time ― which they were using to avoid policy discussion to the greatest extent possible. In my next post, the last in this series, I will argue that the law should keep this avenue open, and suggest some (relatively modest) reforms to ensure that it does so.

Online Gambling

Over at EconLog, David Henderson has an interesting post that allows me to come back to some themes I used to carp on quite a bit, but haven’t returned to in a while now. In a nutshell, it is the story of antiwar.com, a website that, naturally enough, illustrates its message with some graphic imagery. Google concluded that the images contravened its policies, and withdrew the ads it had placed on the website, causing the website to lose revenue on which it had relied. Apparently, Google does not want its ads to appear next to any picture that would not be “okay for a child in any region of the world to see,” which would disqualify many iconic pictures taken in wars past ― and not just wars, one might surmise.

Prof. Henderson points out that this is not “censorship,” since Google is a private firm acting in a purely commercial capacity here. But, he argues, this is still a “gamble” on Google’s part:

Google faces a tradeoff. On the one hand, there are probably many advertisers, possibly the vast majority, who don’t want their ads to appear alongside pictures of blood and gore, people being tortured, etc. So by being careful that web sites where ads appear do not have such pictures, Google gets more ad revenue than otherwise. On the other hand, Google is upsetting a lot of people who see it as dictating content. This will cause some people to shun Google. … [I]f the upset spreads, there could be a role for another competitor.

Perhaps so, although as I noted before, Google’s competitors ― such as Apple, with its iTunes store ― also seem to be choosing to use their own platforms to present sanitized versions of reality.

And as I also pointed out in the past, Google’s position with respect to freedom of expression is inherently conflicted. On the one hand, Google sees itself as engaged in the business of expression, arguing that its search algorithms reflect choices of an editorial nature that deserve constitutional protection. On the other, when it exercises control over its various platforms (whether the search engine itself or YouTube, the ad service, etc.), it can, and is frequently asked to, act as an agent for governments ― and not only democratic governments either ― who seek to censor expression they dislike. There is a danger that Google will choose to sacrifice some of its users’ freedom in order to protect its own by ingratiating itself with these governments. Furthermore, Google may be coming under pressure, not only from governments, but also from commercial partners it needs to keep on board ― or at bay ― and, possibly, from various “civil society” actors too, in exercising control over its platforms. The antiwar.com story is only one small part of this broader trend.

This is, or should be, well understood ― which makes me think that Google is not the only party in this story who took a gamble. Antiwar.com did too, as does anyone else who comes to rely on Google or other similar platforms, despite knowing the pressures, commercial and otherwise, that these platforms will come under. If anything, it is remarkable how successful this gamble usually turns out to be. Still, it is a bet, and will sometimes turn out badly.

I blogged last year about an argument by Ethan Zuckerman to the effect that the ad-based business model was the internet’s “original sin.” Mr. Zuckerman made his case from the perspective of the users, who must accept privacy losses resulting from tracking and profiling by advertisers in exchange for free access to ad-supported content. The antiwar.com story suggests that, for some content-producers at least, accepting the revenue and, as Prof. Henderson points out, the convenience that come with the current business model and its major players was also a Faustian bargain. And yet, as in the users’ case, it is not quite clear what alternative arrangement would be viable.

In the face of what some may well be tempted to interpret as a market failure, it seems reasonable to expect calls for regulation, despite what libertarian types such as Prof. Henderson or the antiwar.com people themselves may say. There will be, and indeed, as I noted in the post about Apple linked to above, there already are, people calling for the regulation of online platforms, in order to make their behaviour conform to the regulators’ ideas about freedom of expression. Yet we should not forget that, on the whole, the net contribution of Google and the rest of them to our ability to express ourselves and to find and access the thoughts of others has clearly been positive ― and certainly much more positive than that of governments. While attempts at making a good thing even better would be understandable, they too would be a gamble, and a risky one.

The Power of Google, Squared

I wrote, a while ago, about “the power of Google” and its role in the discussion surrounding the “right to be forgotten” ― a person’s right to force search engines to remove links to information about that person that is “inadequate, irrelevant or excessive,” whatever these things mean, even if factually true. Last week, the “right to be forgotten” was the subject of an excellent debate ― nuanced, informative, and with interesting arguments on both sides ― hosted by Intelligence Squared U.S. I encourage you to watch the whole thing, because there is really too much there for a blog post.

I will, however, sketch out what I think was the most persuasive argument deployed by the opponents of the “right to be forgotten” ― with whom, admittedly, I agreed before watching the debate, and still do. I will also say a few words about the alternative solutions they proposed to what they agreed is a real and serious problem ― the danger that the prominence, in search results, of a story about some stupid mistake or, worse, an unfounded allegation made about a person will come to mar his or her life forever, with no second chances possible.

Although the opponents of the “right to be forgotten,” as well as its proponents (I will refer to them simply as the opponents and the proponents, for brevity’s sake), made arguments sounding in high principle as well as more practical ones, the one on which the debate mostly focused, and which resonated most with me, concerned the institutional arrangements that are needed to implement the “right to be forgotten.” The way it works ― and the only way it can work, according to one of the opponents, Andrew McLaughlin (the CEO of Digg and a former Director of Public Policy for Google) ― is that the person who wants a link to information about him or her removed applies to the search engine, and the search engine decides, following a secretive process and applying criteria of which it alone is aware. If the request is denied, the person who made it can apply to privacy authorities or go to court to reverse the decision. If, however, the request is granted, nobody can challenge that decision. Indeed, if the European authorities had their way, nobody would even know that the decision had been made. (Telling the owner of the page to which a link is being deleted, as Google has been doing, more or less defeats the purpose of the “right to be forgotten.”)

According to the opponents, this has some very unfortunate consequences. For one thing, the search engines have an incentive to err on the side of granting deletion requests ― at the very least, this spares them the hassle of fighting appeals. One of the proponents, Chicago professor Eric Posner, suggested that market competition could check this tendency, but the opponents were skeptical that, even if users know that one search engine tends to delete more links than another, this would make any noticeable difference to its bottom line. Mostly, the proponents argued that we can rely on the meaning of the admittedly vague terms “inadequate, irrelevant or excessive” to be worked out over time, so that the decisions to delete a link or not become easier and less controversial. But another consequence of the way in which the “right to be forgotten” is implemented would actually prevent that, the opponents, especially Harvard professor Jonathan Zittrain, argued. Since nobody can challenge a decision to delete a link, the courts will have no opportunity to refine the understanding of the concepts involved in the “right to be forgotten.” The upshot is that, according to the opponents anyway, the search engines (which, these days, mostly means Google) end up with a great deal of unchecked discretionary power. This is, of course, ironic, because the proponents of the “right to be forgotten” emphasize concerns about “the power of Google” as one of the reasons to support it, as typically do others who agree with them.

If the opponents are right that the “right to be forgotten” cannot be implemented in a way that is transparent, fair to all the parties concerned, and at least reasonably objective, and that does not increase, instead of checking, “the power of Google,” what are the alternatives? The opponents offered at least three, each of them interesting in its own way. First, Mr. McLaughlin suggested that, instead of a “right to be forgotten,” people should have a right to provide a response, which search engines would have to display among their results. Second, we could have category-specific measures directed at some types of information particularly likely to be prejudicial to people, or of little public interest. (It is worth noting, for example, that in Canada at least, we already do this with criminal court decisions involving minors, which are anonymized; as are family law cases in Québec.) And third, Mr. McLaughlin insisted that, with the increased availability of all sorts of information about everyone, our social mores will need to change. We must become more willing to forgive, and to give people second chances.

This is perhaps optimistic. Then again, so is the proponents’ belief that a corporation can be made to weigh, impartially and conscientiously, considerations of the public interest and the right to “informational self-determination” (which is, apparently, the theoretical foundation of the “right to be forgotten”). And I have argued already that new social norms will in fact emerge as we get more familiar with the internet environment in which we live, and in which our digital shadows are permanently unstuck in time. In any case, what is certain is that these issues are not going to go away anytime soon. It is also clear that this Intelligence Squared debate is an excellent place to start, or to continue, thinking about them. Do watch it if you can.

Disrupting C-36

The Economist has published a lengthy and informative “briefing” on the ways in which the internet is changing prostitution ― often, although not always, for the benefit of sex workers. As it explains, the effects of new technologies on what is usually said to be the oldest profession are far-reaching, and mostly positive ― insofar as they make sex work safer than it used to be. If the federal government had been concerned with protecting sex workers, and if Parliament had truly “ha[d] grave concerns about … the risks of violence posed to those who engage in” prostitution, as it professed to in the preamble of the so-called Protection of Communities and Exploited Persons Act, S.C. 2014 c. 25, better known as Bill C-36, they would have considered the internet’s potential for benefiting sex workers.

But as the government’s and Parliament’s chief concern was apparently to make prostitution vanish by a sleight of criminal law’s heavy hand, its middle finger raised at the Supreme Court, they instead sought to drive sex workers off the internet. The new section 286.4 of the Criminal Code, created by C-36, criminalizes “[e]veryone who knowingly advertises an offer to provide sexual services for consideration,” although section 286.5 exempts those advertising “their own sexual services.” In other words, if a sex worker has her own website, that’s tolerated ― but if she uses some other service, or at least one geared specifically to sex workers and their potential customers, the provider of that service is acting illegally.

Meanwhile, according to the Economist, in the market for sex, as in so many others,

specialist websites and apps are allowing information to flow between buyer and seller, making it easier to strike mutually satisfactory deals. The sex trade is becoming easier to enter and safer to work in: prostitutes can warn each other about violent clients, and do background and health checks before taking a booking. Personal web pages allow them to advertise and arrange meetings online; their clients’ feedback on review sites helps others to proceed with confidence.

Above all, the ability to advertise, screen potential clients, and pre-arrange meetings online means that sex workers need not look for clients in the most dangerous environment for doing so ― on the street. Besides, “the internet is making it easier to work flexible hours and to forgo a middleman,” and indeed “it is independent sex workers for whom the internet makes the biggest difference.”

The internet is also making sex work safer. Yet the work of websites that “let [sex workers] vouch for clients they have seen, improving other women’s risk assessments,” or of those “where customers can pay for a background check to present to sex workers,” is probably criminalized under the new section 286.2(1) added to the Criminal Code by C-36, which applies to “[e]veryone who receives a financial or other material benefit, knowing that it is obtained by or derived directly or indirectly from the commission of an offence under subsection 286.1(1)” ― the “obtaining sexual services for consideration” offence. Forums where sex workers can provide each other with tips and support can be shut down if they are associated with, or part of, websites that advertise “sexual services.”

As the Economist points out, the added safety (both from violent clients and law enforcement), convenience, and discretion can attract more people into sex work. So trying to eliminate the online marketplace for sex makes sense if one’s aim is, as I put it here, “to drive people out of sex work by making it desperately miserable” ― but that’s a hypocritical approach, and not what C-36 purports to do.

In any case, criminalization complicates the work of websites that help sex workers and their clients, but does not stop it. They are active in the United States, despite prostitution being criminalized in almost every State ― though they pretend that their contents are fictional. They base their activities in more prostitution-friendly jurisdictions. A professor interviewed by the Economist points out that a ban on advertising sexual services in Ireland “has achieved almost nothing.” There seems to be little reason to believe that the ban in C-36, which has a large exemption for sex workers advertising themselves, would fare differently.

The Economist concludes that “[t]he internet has disrupted many industries. The oldest one is no exception.” Yet the government and Parliament have been oblivious to this trend, as they have been oblivious to most of the realities of sex work. One must hope that courts, when they hear the inevitable challenge to the constitutionality of C-36, will take note.