Making a Monster

A report on the future regulation of the internet proposes giving the CRTC overwhelming and unaccountable powers

The final report of the Broadcasting and Telecommunications Legislative Review Panel, grandly entitled Canada’s Communications Future: Time to Act (the “BTLR Report”), has already attracted its share of commentary, much of it, but by no means all, sharply critical. As Michael Geist has explained, the report articulates

a vision of a highly regulated Internet in which an expanded CRTC … would aggressively assert its jurisdictional power over Internet sites and services worldwide with the power to levy massive penalties for failure to comply with its regulatory edicts. 

The discussion has mostly focused on the wisdom of the BTLR Report’s 97 recommendations for regulating the way in which Canadians engage with the online world, and also on their impact on freedom of expression. But one aspect of the report ― indeed, not merely an aspect but a fundamental element of the report’s underlying philosophy ― has, I think, received less attention, although Professor Geist alludes to it with his reference to “an expanded CRTC”: the report’s commitment to administrative power. This is, perhaps, a less obvious issue, but we should not underestimate its significance. If followed, the report’s recommendations would not merely expand the CRTC, but make it into a bureaucratic behemoth. We must not let this happen.


The BTLR Report recommends multiple amendments to the legislation governing electronic communications in Canada that would tend to produce the “highly regulated internet” to which Professor Geist refers. Yet the striking thing is that most of the proposed changes do not describe the regulations that they call for with any precision. Instead, they say that the CRTC should be given vast powers to bring into being the report’s imagined brave new world.

The CRTC would be given new powers to make rules of general application. Most ominously, it would be given the ability to regulate “media content undertakings” ― that is, all manner of entities creating their own content, whether written, sound-based, or visual, as well as those providing platforms for the content created by others, everything from a humble podcast to giants like Netflix, Facebook, and YouTube. These “undertakings” would be required to register with the CRTC, which would be

enable[d] … to establish classes of registrants, to amend registrations, and impose requirements — whether through conditions of registration or through regulations — on registrants (Recommendation 57)

These requirements could, in particular, include “codes of conduct, including provisions with respect to resolution mechanisms, transparency, privacy, and accessibility”. (Recommendation 74) At the same time, the CRTC would be given

the power to exempt any media content undertaking or classes of media content undertakings from registration in instances in which — by virtue of its specialized content or format, revenues, or otherwise — regulation is neither necessary nor appropriate to achieve media content policy objectives. (Recommendation 58)

In other words, the CRTC would decide ― with virtually no guidance from legislation ― both what the rules for “media content undertakings” would be and who would in fact have to comply with them at all. In particular, it would be empowered to

impose discoverability obligations on all audio or audiovisual entertainment media content undertakings, as it deems appropriate, including … prominence obligations [and] the obligation to offer Canadian media content choices (Recommendation 62).

The CRTC could impose similar requirements on “media aggregation and media sharing undertakings” ― again “as appropriate” (Recommendation 73). The CRTC would also be directed to “intervene, if necessary … in order to respond quickly to changes in the communications services, improve transparency, and promote trust” in the face of technologies that “combine algorithms and artificial intelligence with Big Data” (Recommendation 93).

The CRTC would also be empowered, and indeed required, to regulate behaviour of individual market actors. It would be given the remit “to ensure that rates are just and reasonable” in “key electronic communications markets” (Recommendation 29). Indeed, in a rare instance of seeking to restrain rather than expand the CRTC’s discretion, the BTLR Report suggests that the ability of the CRTC to “forbear” from regulating the justness of rates should be eliminated (Recommendation 30). The CRTC would also be given the power to “regulate economic relationships between media content undertakings and content producers, including terms of trade” (Recommendation 61). In relation to CBC/Radio-Canada, the CRTC would be tasked with “overseeing all its content-related activities” (Recommendation 83).

But the report would not only have the CRTC make the law for the online world. It would also be given a substantial autonomous power of the purse. It would be given the power to designate “from an expanded range of market participants — all providers of electronic communications services — … required contributors to funds to ensure access to advanced telecommunications”. (Recommendation 25) Among the requirements the CRTC would be able to impose on those required to register … would be “the payment of registration fees” (Recommendation 57). It could, further, “impose spending requirements or levies on all media content undertakings, except those” mainly providing written news (Recommendation 61), “some or all” of which it could direct “to the production of news content” through “an independent, arm’s length CRTC-approved fund for the production of news, including local news on all platforms” (Recommendation 71).

The CRTC would acquire additional adjudicative powers too. For example, Recommendation 38 suggests that it should resolve disputes over the location of telecommunication infrastructure. More significantly, it would be both prosecutor and judge when “imposing penalties for any failure to comply with the terms and conditions of registration” imposed on “media content undertakings” (Recommendation 57); it would also be tasked with “resolv[ing] disputes” among those undertakings (Recommendation 61). Not that this adjudication would necessarily look like that done in the courts, since the BTLR Report would empower the CRTC “to issue ex parte decisions where the circumstances of the case justify it”. (Recommendation 75)

The prophet of the administrative state in Canada, John Willis, described administrative agencies as “governments in miniature”. One hesitates to describe the law-making, trade-regulating, money-grabbing CRTC envisioned by the BTLR Report as in any sense miniature, but it sure looks like a government unto itself, albeit a rather undemocratic one. In addition to the Commissioners who would exercise legislative, executive, and judicial powers, it would have a sort of representative body, the Public Interest Committee, “composed of not more than 25 individuals with a wide range of backgrounds, skills, and experience representing the diversity of public, civic, consumer, and small business interests, and including Indigenous Peoples”. (Recommendation 15) It’s not quite clear who would be appointing these people, but it certainly does not seem that, despite their supposed mandate to represent the public, they would be elected. Not to worry though: there would also be funding, out of fees collected by the CRTC, for “public interest interventions” (Recommendations 12 and 13), in case, I suppose, the Public Interest Committee doesn’t sufficiently intervene to represent the public interest. And, in addition to the prosecutorial and judicial functions of the Commissioners, there would be

an independent, industry-funded, communications consumer complaints office with the authority to investigate and resolve complaints from individual and small business retail customers of services covered by the respective Acts,

whose “mandate and structure” the CRTC would “create and approve” (Recommendation 96).

Meanwhile, outside control over this machinery would be reduced. The Commissioners, who are currently appointed to renewable five-year terms, would instead serve for seven years, with no possibility of renewal (Recommendation 4). A limited form of Parliamentary supervision, the laying of government “directions” to the CRTC before the Houses of Parliament, would be abolished in the interests of swift regulation (Recommendation 6). And, of course, given the vagueness of the legislative guidance to the CRTC and the breadth of its mandate, it is unlikely that the courts would intervene much to police its regulatory activities.

To sum up, the CRTC would be put in control, with very few restraints, of Canadians’ interaction with the online world, and with one another. Who can speak online and on what conditions ― the CRTC would have control over that. How much they have to pay for the privilege, and where the money goes ― the CRTC would have control over that. How disputes among them, and between them and the CRTC itself, are to be resolved ― the CRTC would have control over that too. The only “checks” on it would come from handpicked representatives of the “public interest” as the CRTC itself conceives it ― not from Parliament or the courts.


The empowerment of the CRTC proposed by the BTLR Report is, of course, no accident. It proceeds from a specific philosophy of government, which the Report describes quite forthrightly. According to its authors,

The role of government is to establish broad policies. The role of regulators is to implement those policies through specific rules and in a transparent and predictable fashion. Legislation is the key instrument through which government establishes these policies. It should provide sufficient guidance to assist the CRTC in the discharge of its duties, but sufficient flexibility for it to operate independently in deciding how to implement sector policy. To achieve this, legislative statements of policy should set out broadly framed objectives and should not be overly prescriptive. (46-47)

In other words, government ― Parliament is left out of the equation entirely, as if it has nothing to do with legislation ― should mostly leave the CRTC alone. Indeed, it is important to preserve “proper balance between the government’s role in policymaking and the regulator’s role in implementing those policies independent of government influence”. (47) And, judging by the amount of discretion ― to make law and dictate the behaviour of individual organizations, to levy fees and spend money, to identify, prosecute, and condemn alleged offenders and to adjudicate disputes ― the BTLR Report would vest in the CRTC, the “balance” is really all on the side of the regulator.

This is the philosophy the BTLR Report would impose on the 2020s and, perhaps, beyond. It ostensibly envisions “the CRTC’s shift toward a future-oriented, proactive, and data-driven style of regulation”. (44) But its ideology comes, not from the future, but from a distant and, as an article on “The Depravity of the 1930s and the Modern Administrative State” by Steven G. Calabresi and Gary Lawson about which I blogged here shows, detestable past. As Professors Calabresi and Lawson explain, President Franklin D. Roosevelt’s

administration and a compliant Congress created a vast array of new “expert” regulatory agencies, many of which followed the “independent” model by insulating the agency heads from at-will presidential removal, and many of which contained (and still contain) statutory authorizations to the agencies so vague as to be literally meaningless. … These agencies, controlled neither by the President nor by Congress, made life-altering decisions of both fact and law subject only to deferential judicial review. (829)

This is the governance model proposed by the BTLR Report. Its original backers

fundamentally did not believe that all men are created equal and should democratically govern themselves through representative institutions. They believed instead that there were “experts”—the modern descendants of Platonic philosopher kings, distinguished by their academic pedigrees rather than the metals in their souls—who should administer the administrative state as freely as possible from control by representative political institutions. (829)

(For more on the beliefs of 1930s pro-administrativists, see also this post by co-blogger Mark Mancini.) Judging by their proposals, the views of the authors of the BTLR Report are rooted in just this kind of thinking. They mistrust the free market as well as democratic institutions, and want fundamental decisions about what is, by their own account, an unbelievably important part of our lives to be made by officials deemed wiser than everyone else.

And if the philosophy behind the BTLR Report’s proposed future goes back a mere century, its institutional vision is considerably older still. In fact, at the risk of sounding a bit like Philip Hamburger (which, after all, isn’t a bad thing!) I would argue that it amounts to a counter-revolution against the 17th-century subjection of executive authority to law, and a reversal of the post-1689 constitutional settlement. To be sure, everything the BTLR Report proposes to do would be covered by the fig leaf of ― deliberately vague and unconstraining ― legislative authority. But in substance, the proposals amount to executive law-making contrary to the Case of Proclamations, executive dispensation from the law contrary to article 2 of the Bill of Rights 1688, executive adjudication contrary to the case of Prohibitions del Roy, and executive taxation contrary, this time, to article 4 of the Bill of Rights. James I and James II would be proud.


So when we hear that “this time it’s different” ― that the online world is like nothing we’ve seen before ― that its actors “pose a unique set of challenges for contemporary regulators”, as Paul Daly argues ― and that this justifies the sort of overwhelming regulatory response recommended by the BTLR Report, we need to be skeptical. For all that the issues raised by the modern world are ― now as a century ago! ― said to be quite unlike anything that came before, the solutions offered are the same old. More unfettered bureaucratic power is always said to do the trick. When all you have is a hammer…

More recently, a very different philosophy seemed, however briefly, to prevail in the online world. In the 1996 “Declaration of the Independence of Cyberspace”, John Perry Barlow proclaimed:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

The Declaration isn’t much more remembered than the term “cyberspace” itself, nowadays, and the weary giants whom Barlow was taunting have come after the cyber-libertarians like Pushkin’s Stone Guest. If the authors of the BTLR Report get their way, we would indeed be governed, to keep with 17th-century English political thought, by Leviathan himself.


NOTE: A petition to “the Government of Canada to Reject the recommendations regarding the legislation and regulation of free speech, free expression and the free press made by the” BTLR Report is open for signature at the House of Commons website. Please sign it!

Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and especially of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments  … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” to do these sorts of things. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment, and towards “global consciousness” ― whatever that too may be. Google can demote websites that it deems to be promoters of “hate” in its search results, as indeed it already demotes those it considers to be copyright-infringers. Apple could block access to its App Store to news sources it considers biased, as indeed it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its Newsfeed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just like we can with spell-check. But I’m pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and of other tech companies who are in a position of being, at once, the regulators of their users’ speech and subjects of government regulations, and may well be tempted to so act in the former role as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we’d be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to be in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Online Gambling

Over at the EconLog, David Henderson has an interesting post that allows me to come back to some themes I used to carp on quite a bit, but haven’t returned to in a while now. In a nutshell, it is the story of antiwar.com, a website that, naturally enough, illustrates its message with some graphic imagery. Google concluded that the images contravened its policies, and withdrew the ads it placed on the website, causing the website to lose revenue on which it had relied. Apparently, Google does not want its ads to appear next to any picture that would not be “okay for a child in any region of the world to see,” which would disqualify many iconic pictures taken in wars past ― and not just wars, one might surmise.

Prof. Henderson points out that this is not “censorship,” since Google is a private firm acting in a purely commercial capacity here. But, he argues, this is still a “gamble” on Google’s part:

Google faces a tradeoff. On the one hand, there are probably many advertisers, possibly the vast majority, who don’t want their ads to appear alongside pictures of blood and gore, people being tortured, etc. So by being careful that web sites where ads appear do not have such pictures, Google gets more ad revenue than otherwise. On the other hand, Google is upsetting a lot of people who see it as dictating content. This will cause some people to shun Google. … [I]f the upset spreads, there could be a role for another competitor.

Perhaps so, although as I noted before, Google’s competitors ― such as Apple, with its iTunes store ― also seem to be choosing to use their own platforms to present sanitized versions of reality.

And as I also pointed out in the past, Google’s position with respect to freedom of expression is inherently conflicted. On the one hand, Google sees itself as engaged in the business of expression, arguing that its search algorithms reflect choices of an editorial nature that deserve constitutional protection. On the other, when it exercises control over its various platforms (whether the search engine itself or YouTube, the ad service, etc.), it can, and is frequently asked to, act as an agent for governments ― and not only democratic governments either ― who seek to censor expression they dislike. There is a danger that Google will choose to sacrifice some of its users’ freedom in order to protect its own by ingratiating itself with these governments. Furthermore, Google may be coming under pressure, not only from governments, but also from commercial partners it needs to keep on board ― or at bay ― and, possibly, from various “civil society” actors too, in exercising control over its platforms. The antiwar.com story is only one small part of this broader trend.

This is, or should be, well understood ― which makes me think that Google is not the only party in this story who took a gamble. Antiwar.com did too, as does anyone else who comes to rely on Google or other similar platforms, despite knowing the pressures, commercial and otherwise, that these platforms will come under. If anything, it is remarkable how successful this gamble usually turns out to be. Still, it is a bet, and will sometimes turn out badly.

I blogged last year about an argument by Ethan Zuckerman to the effect that the ad-based business model was the internet’s “original sin.” Mr. Zuckerman made his case from the perspective of the users, who must accept privacy losses resulting from tracking and profiling by advertisers in exchange for free access to ad-supported content. The antiwar.com story suggests that, for some content-producers at least, accepting the revenue and, as prof. Henderson points out, the convenience that come with the current business model and its major players was also a Faustian bargain. And yet, as for users, it is not quite clear what alternative arrangement would be viable.

In the face of what some may well be tempted to interpret as a market failure, it seems reasonable to expect calls for regulation, despite what libertarian types such as prof. Henderson or the antiwar.com people themselves may say. There will be, and indeed, as I noted in the post about Apple linked to above, there already are, people calling for the regulation of online platforms, in order to make their behaviour conform to the regulators’ ideas about freedom of expression. Yet we should not forget that, on the whole, the net contribution of Google and the rest of them to our ability to express ourselves and to find and access the thoughts of others has clearly been positive ― and certainly much more positive than that of governments. While attempts at making a good thing even better would be understandable, they too would be a gamble, and a risky one.

The Power of Google, Squared

I wrote, a while ago, about “the power of Google” and its role in the discussion surrounding the “right to be forgotten” ― a person’s right to force search engines to remove links to information about that person that is “inadequate, irrelevant or excessive,” whatever these things mean, even if factually true. Last week, the “right to be forgotten” was the subject of an excellent debate ― nuanced, informative, and with interesting arguments on both sides ― hosted by Intelligence Squared U.S. I encourage you to watch the whole thing, because there is really too much there for a blog post.

I will, however, sketch out what I think was the most persuasive argument deployed by the opponents of the “right to be forgotten” ― with whom, admittedly, I agreed before watching the debate, and still do. I will also say a few words about the alternative solutions they proposed to what they agreed is a real and serious problem ― the danger that the prominence of a story about some stupid mistake or, worse, an unfounded allegation made about a person in search results come to mar his or her life forever, with no second chances possible.

Although the opponents of the “right to be forgotten,” as well as its proponents (I will refer to them as, simply, the opponents and the proponents, for brevity’s sake), made arguments sounding in high principle as well as more practical ones, the one on which the debate mostly focused, and which resonated most with me, concerned the institutional arrangements that are needed to implement the “right to be forgotten.” The way it works ― and the only way it can work, according to one of the opponents, Andrew McLaughlin (the CEO of Digg and a former Director of Public Policy for Google) ― is that the person who wants a link to information about him or her removed applies to the search engine, and the search engine decides, following a secretive process and applying criteria of which it alone is aware. If the request is denied, the person who made it can apply to privacy authorities or go to court to reverse the decision. If, however, the request is granted, nobody can challenge that decision. Indeed, if the European authorities had their way, nobody would even know that the decision had been made. (Telling the owner of the page to which a link is being deleted, as Google has been doing, more or less defeats the purpose of the “right to be forgotten.”)

According to the opponents, this has some very unfortunate consequences. For one thing, the search engines have an incentive to err on the side of granting deletion requests ― at the very least, this spares them the hassle of fighting appeals. One of the proponents, Chicago professor Eric Posner, suggested that market competition could check this tendency, but the opponents were skeptical that, even if users know that one search engine tends to delete more links than another, this would make any noticeable difference to its bottom line. Mostly, the proponents argued that we can rely on the meaning of the admittedly vague terms “inadequate, irrelevant or excessive” to be worked out over time, so that the decisions to delete a link or not become easier and less controversial. But another consequence of the way in which the “right to be forgotten” is implemented would actually prevent that, the opponents, especially Harvard professor Jonathan Zittrain, argued. Since nobody can challenge a decision to delete a link, the courts will have no opportunity to refine the understanding of the concepts involved in the “right to be forgotten.” The upshot is that, according to the opponents anyway, the search engines (which, these days, mostly means Google) end up with a great deal of unchecked discretionary power. This is, of course, ironic, because the proponents of the “right to be forgotten” emphasize concerns about “the power of Google” as one of the reasons to support it, as typically do others who agree with them.

If the opponents are right that the “right to be forgotten” cannot be implemented in a way that is transparent, fair to all the parties concerned, at least reasonably objective, and does not increase, instead of checking, “the power of Google”, what are the alternatives? The opponents offered at least three, each of them interesting in its own way. First, Mr. McLaughlin suggested that, instead of a “right to be forgotten,” people should have a right to provide a response, which search engines would have to display among their results. Second, we could have category-specific measures directed at some types of information particularly likely to be prejudicial to people, or of little public interest. (It is worth noting, for example, that in Canada at least, we already do this with criminal court decisions involving minors, which are anonymized; as are family law cases in Québec.) And third, Mr. McLaughlin insisted that, with the increased availability of all sorts of information about everyone, our social mores will need to change. We must become more willing to forgive, and to give people second chances.

This is perhaps optimistic. Then again, so is the proponents’ belief that a corporation can be made to weigh, impartially and conscientiously, considerations of the public interest and the right to “informational self-determination” (which is, apparently, the theoretical foundation of the “right to be forgotten”). And I have argued already that new social norms will in fact emerge as we get more familiar with the internet environment in which we live, and in which our digital shadows are permanently unstuck in time. In any case, what is certain is that these issues are not going to go away anytime soon. It is also clear that this Intelligence Squared debate is an excellent place to start, or to continue, thinking about them. Do watch it if you can.

Disrupting C-36

The Economist has published a lengthy and informative “briefing” on the ways in which the internet is changing prostitution ― often, although not always, for the benefit of sex workers. As it explains, the effects of new technologies on what is usually said to be the oldest profession are far-reaching, and mostly positive ― insofar as they make sex work safer than it used to be. If the federal government had been concerned with protecting sex workers, and if Parliament had truly “ha[d] grave concerns about … the risks of violence posed to those who engage in” prostitution, as it affected to be in the preamble of the so-called Protection of Communities and Exploited Persons Act, S.C. 2014 c. 25, better known as Bill C-36, they would have considered the internet’s potential for benefiting sex workers.

But as the government’s and Parliament’s chief concern was apparently to make prostitution vanish by a sleight of criminal law’s heavy hand, its middle finger raised at the Supreme Court, they instead sought to drive sex workers off the internet. The new section 286.4 of the Criminal Code, created by C-36, criminalizes “[e]veryone who knowingly advertises an offer to provide sexual services for consideration,” although section 286.5 exempts those advertising “their own sexual services.” In other words, if a sex worker has her own website, that’s tolerated ― but if she uses some other service, or at least one geared specifically to sex workers and their potential customers, the provider of that service is acting illegally.

Meanwhile, according to the Economist, in the market for sex, as in so many others,

specialist websites and apps are allowing information to flow between buyer and seller, making it easier to strike mutually satisfactory deals. The sex trade is becoming easier to enter and safer to work in: prostitutes can warn each other about violent clients, and do background and health checks before taking a booking. Personal web pages allow them to advertise and arrange meetings online; their clients’ feedback on review sites helps others to proceed with confidence.

Above all, the ability to advertise, screen potential clients, and pre-arrange meetings online means that sex workers need not look for clients in the most dangerous environment for doing so ― on the street. Besides, “the internet is making it easier to work flexible hours and to forgo a middleman,” and indeed “it is independent sex workers for whom the internet makes the biggest difference.”

The internet is also making sex work safer. Yet the work of websites that “let [sex workers] vouch for clients they have seen, improving other women’s risk assessments,” or “where customers can pay for a background check to present to sex workers” is probably criminalized under the new section 286.2(1) added to the Criminal Code by C-36, which applies to “[e]veryone who receives a financial or other material benefit, knowing that it is obtained by or derived directly or indirectly from the commission of an offence under subsection 286.1(1)” ― the “obtaining sexual services for consideration” offence. Forums where sex workers can provide each other with tips and support can be shut down if they are associated with or part of websites that advertise “sexual services.”

As the Economist points out, the added safety (both from violent clients and law enforcement), convenience, and discretion can attract more people into sex work. So trying to eliminate the online marketplace for sex makes sense if one’s aim is, as I put it here, “to drive people out of sex work by making it desperately miserable” ― but that’s a hypocritical approach, and not what C-36 purports to do.

In any case, criminalization complicates the work of websites that help sex workers and their clients, but does not stop it. They are active in the United States, despite prostitution being criminalized in almost every State ― though they pretend that their contents are fictional. They base their activities in more prostitution-friendly jurisdictions. A professor interviewed by the Economist points out that a ban on advertising sexual services in Ireland “has achieved almost nothing.” There seems to be little reason to believe that the ban in C-36, which has a large exemption for sex workers advertising themselves, would fare differently.

The Economist concludes that “[t]he internet has disrupted many industries. The oldest one is no exception.” Yet the government and Parliament have been oblivious to this trend, as they have been oblivious to most of the realities of sex work. One must hope that courts, when they hear the inevitable challenge to the constitutionality of C-36, will take note.

Felix Peccatum

There was an interesting piece in The Atlantic a couple of weeks ago, in which Ethan Zuckerman argued that we should, as the subtitle would have it, “ditch the [internet’s] ad-based business model and build a better web.” Accepting that internet content should be free to access, that online services should be free to use, and that the costs of hosting the contents and providing the services can be paid for by tying them to advertising was, Mr. Zuckerman says, “the original sin of the web.” It sounded like a good idea at the time, but turned out badly. It is time to repent, and to mend our ways. But is it?

Mr. Zuckerman argues that the ad-based business model created an “internet [that] spies at us at every twist and turn.” In order to persuade potential investors to support a nascent website, its creators must convince them that the ads on that site “will be worth more than everyone else’s ads.” And even if the ads are not actually worth very much, the potential for improvement is in itself something that can be marketed to investors. The way to make the ads on a website worth more than those on others ― say, on Facebook ― requires “target[ing] more and better than Facebook.” And that, in turn, “requires moving deeper into the world of surveillance,” to learn ever more information about the users, so as to make the targeting of ads to them ever more precise.

Over the years, the progressive creep of online tracking and surveillance has

 trained Internet users to expect that everything they say and do online will be aggregated into profiles (which they cannot review, challenge, or change) that shape both what ads and what content they see.

Despite occasional episodes of unease over what is going on, even outright manipulation by the providers of online services is not enough to turn their users off. As with private service providers, says Mr. Zuckerman, so with governments:

[u]sers have been so well trained to expect surveillance that even when widespread, clandestine government surveillance was revealed by a whistleblower, there has been little organized, public demand for reform and change.

Trust in government generally has never been lower, yet it seems that online, anything goes.

Mr. Zuckerman points out that the ad-based business model had ― and still has ― upsides too. When it took off, it was pretty much the only way “to offer people free webpage hosting and make money.” Initially at least, most people lacked the means ― the technical means, never mind financial resources ― to pay for online services. Offering them “free” ― that is to say, by relying on advertising instead of user fees to pay for them ― allowed people to start using them who would never have done so otherwise:

[t]he great benefit of an ad supported web is that it’s a web open to everyone. It supports free riders well, which has been key in opening the web to young people and those in the developing world. Ad support makes it very easy for users to “try before they buy.”

Indeed,

[i]n theory, an ad-supported system is more protective of privacy than a transactional one. Subscriptions or micropayments resolved via credit card create a strong link between online and real-world identity, while ads have traditionally been targeted to the content they appear with, not to the demo/psychographic identity of the user.

In practice, well, we know how that worked out.

Besides, says Mr. Zuckerman, not only did the ad-based internet do away with our privacy, it also produces “clickbait” that nobody really wants to read, is increasingly centralized, and breaks down into interest-based echo chambers.

The solution on which Mr. Zuckerman rests most of his hopes for a redemption of the web is a move from ad-based to subscription-based business models. He points out that Google already offers companies and universities the possibility of paying for its products in exchange for not offering their employees or students the ads that support its free Gmail service. And he is confident that “[u]sers will pay for services that they love,” even if a shift to subscription-based business models would also mean that users would simply abandon those for which they have no deep affection. This, in turn, would produce “more competition, less centralization and more competitive innovation.” A shift to subscription-based web services would require new means of payment ― something with lower transaction costs than credit-card systems or PayPal. Such technologies do not yet exist, or at least are not yet fully ready, but Mr. Zuckerman is hopeful that they will come along, and allow us to move away from the “fallen” ad-based internet.

But even if a return to the online garden of Eden ― which, much like the “real” one, never actually existed ― were technically possible, would it be desirable? Mr. Zuckerman acknowledges that whatever business model we turn to, “there are bound to be unintended consequences.” Unintended, perhaps, but not entirely unforeseeable. Even if transaction costs can be lowered, a subscription-based internet would be less accessible for many people, in particular those in the less well-off countries, the young, and the economically disadvantaged. Those who, in many ways, need it most. Besides, it seems doubtful to me that a subscription-based internet would generate more innovation than the current version. As Mr. Zuckerman points out, the ad-based model has the virtue of letting users try new services easily. It also means that abandoning a service does not mean throwing away the money paid to subscribe to it. It is thus friendlier to newcomers, and less favourable to incumbents, than a subscription-based model. (Just think of the number of new media sources that developed online in the last 15 years ― and compare it with, say, the number of new newspapers that appeared in the previous decades.)

The tracked, surveilled ad-based web has its downsides. But it lowered barriers to entry and allowed the emergence of new voices which, I strongly suspect, could not have been heard without it. (By way of anecdote, I had enough doubt about this blogging thing to begin with that I’m pretty sure I wouldn’t have started if I had to pay for it too. Alternatively, I don’t suppose anyone reading this now would have been willing to pay me!) If embracing ads was indeed the internet’s original sin, then I believe that it was, as Augustine suggested of the original original one, felix peccatum ― a fortunate one.

Searching Freedom

I have already blogged (here and here) about the debate on whether the output of search engines such as Google should be protected by constitutional guarantees of freedom of expression, summarizing arguments by Eugene Volokh and Josh Blackman. These arguments are no longer merely the stuff of academic debate. As both prof. Volokh and prof. Blackman report, the U.S. District Court for the Southern District of New York yesterday endorsed the position (which prof. Volokh and others defend) that search results are indeed entitled to First Amendment protection, in Zhang v. Baidu. Although I do not normally comment on American judicial decisions, this one is worth looking at, because it both gives us an idea of the issues that are likely to arise in Canada sooner rather than later, and can serve as a reminder that these issues will have to be approached somewhat differently from the way they are in the United States.

Zhang was a suit by a group of pro-democracy activists who were claiming that Baidu, a Chinese search engine, is acting illegally in excluding from the search results it displays in the United States results that have to do with the Chinese democracy movement and a number of topics such as the Tiananmen Square protests, including articles the plaintiffs themselves had written. The plaintiffs alleged that, in doing so, Baidu engages in censorship at the behest of the Chinese government. Legally, they claimed that Baidu conspired to violate and violated their civil rights under federal and state law.

Baidu moved to dismiss, arguing that the constitutional protection of freedom of speech applied to its search results, preventing the imposition of liability. Relying on jurisprudence protecting a speaker’s right to choose the contents of his message, and in particular not to convey a message it did not want to convey (whether a newspaper’s right not to print a reply from a candidate for public office whom it criticized or a parade organizers’ right not to allow the participation of a group they disagreed with), the Court agreed:

In light of those principles, there is a strong argument to be made that the First Amendment fully immunizes search-engine results from most, if not all, kinds of civil liability and government regulation. … The central purpose of a search engine is to retrieve relevant information from the vast universe of data on the Internet and to organize it in a way that would be most helpful to the searcher. In doing so, search engines inevitably make editorial judgments about what information (or kinds of information) to include in the results and how and where to display that information (for example, on the first page of the search results or later). (7)

The search engines’ “editorial judgments” are constitutionally protected, in the same way as the editorial judgments of newspapers, guidebook authors, or any other speakers who choose what message or information to convey.

Nor does the fact that search-engine results may be produced algorithmically matter for the analysis. After all, the algorithms themselves were written by human beings, (8)

says the Court, endorsing prof. Volokh’s (and others’) view of the matter.

The Court makes a couple of other points that are worth highlighting. One is that

search engine operators (at least in the United States and given today’s technology) lack the physical power to silence anyone’s voices, no matter what their alleged market shares may be, (12)

and that an internet user who fails to find relevant information with one search engine can easily turn to another one. (The matter, really, seems to be not so much “physical power” as monopoly.) Another is that the ads displayed by a search engine might be entitled to less protection than the actual search results, at least insofar as “commercial speech” is less protected than other sorts. Last but not least, the Court finds

no irony in holding that Baidu’s alleged decision to disfavor speech concerning democracy is itself protected by the democratic ideal of free speech. … [T]he First Amendment protects Baidu’s right to advocate for systems of government other than democracy (in China or elsewhere) just as surely as it protects Plaintiffs’ rights to advocate for democracy.

I find this largely persuasive. Still, we might want to ask some questions. For instance, the point about search engines not being monopolists, and users having alternative means of finding information is only true so long as the users know what it is they are looking for. If one doesn’t know that, say, there are other views about democracy in China than whatever the Communist Party line happens to be, one will not think that something is missing from Baidu’s search results, and one will not try using its competitors to find it. But, of course, the same could be said about partisan media, or other biased sources of information. For all the problems that these create, we still think that the problems that regulating them would cause would be even worse. Perhaps there is something special about the internet that makes this calculation inapplicable ― but, if so, the onus is on those who think so to prove it.

Quite apart from the constitutional issues, there is also the question ― which the Court does not address ― of whether the plaintiffs’ claims could have succeeded anyway. At first sight ― and admittedly I know little about American civil rights legislation ― they do not seem especially plausible. As I pointed out in a previous post on this topic, it is by no means clear that there is, whether under anti-discrimination law or otherwise, “some kind of baseline right to have Google [or another search engine] take notice of you”.

This brings me to the point I wanted to make about the differences between American and Canadian law in this context. As the Supreme Court of Canada held in RWDSU v. Dolphin Delivery, [1986] 2 S.C.R. 573, the Charter does not apply to purely private disputes resolved under common law rules (although its “values” are to be taken into account in the development of the common law). This is in contrast to the situation in the United States, where courts consider themselves bound by the First Amendment even when resolving disputes between private parties. If a case such as Zhang arose in Canada, and the plaintiffs formulated their claims in tort (rather than as violations of, say, the Canadian Human Rights Act), the defendant search engine would not have been able to invoke the Charter‘s guarantee of freedom of expression. This doesn’t mean that the outcome would, or should, be different ― but the route by which it could be reached would have to be.

Charter, Meet Google

Josh Blackman has just published a fascinating new essay, “What Happens if Data Is Speech?” in the University of Pennsylvania Journal of Constitutional Law Online, asking some important questions about how courts should treat ― and how we should think about ― attempts to regulate the (re)creation and arrangement of information by “algorithms parsing data” (25). For example, Google’s algorithms suggest search queries on the basis of our and other users’ past searches, and then sort the available links once we hit ‘enter’. Can Google be ordered to remove a potential query from the suggestions it displays, or a link from search results? Can it be asked to change the way in which it ranks these results? These and other questions will only become more pressing as these technologies become ever more important in our lives, and as the temptation to regulate them one way or another increases.

One issue that is a constant theme in the literature on this topic that prof. Blackman reviews is what, if any, is the significance of the fact that “with data, it is often difficult to find somebody with the traits of a typical speaker” (27). It thus becomes tempting to conclude that algorithms working with data can be regulated without regard for freedom of speech, since no person’s freedom is affected by such regulation. If at least some uses of data are, nevertheless, protected as free speech, there arises another issue which prof. Blackman highlights ― the potential for conflict between any such protection, and the protection of privacy rights, which takes the form of prohibitions on speaking against someone (in some way).

The focal point of these concerns, for now anyway, are search engines, and particularly Google. Prof. Blackman points out that, as Google becomes our gateway to more and more of the information we need, it acquires a great deal of power over what information we ever get to access. Not showing up high in Google’s search results becomes, in effect, a sentence of obscurity and irrelevance. And while Google will claim that it only seeks to make its output more relevant for users, the definition of “relevance” gives it the ability to pursue an agenda of its own, whether it is punishing those who, in its own view, are trying to game its ranking system, as prof. Blackman describes, or currying favour with regulators or commercial partners, or even implementing some kind of moral vision for what the internet should be like (I describe these possibilities here and here). All that, combined with what seems to some as the implausibility of algorithms as bearers of the right to freedom of speech, can make it tempting for legislators to regulate search engines. “But,” prof. Blackman asks, “what poses a greater threat to free speech ― the lack of regulations or the regulations themselves?” (31) Another way of looking at this problem is to ask whether the creators and users of websites should be protected by the state from, in effect, regulation by Google, or Google should be protected from regulation by the state (32).

The final parts of prof. Blackman’s essay address the question of what happens next, when ― probably in the near future ― algorithms become not only tools for accessing information but, increasingly, extensions of individual action and creativity. If the line between user and algorithm is blurred, regulating the latter means restricting the freedom of the former.

Prof. Blackman’s essay is a great illustration of the fact that the application of legal rules and principles to technologies which did not exist when they were developed can often be difficult, not least because these new technologies sometimes force us to confront the theoretical questions which we were previously able to ignore or at least to fudge in the practical development of legal doctrine. (I discussed one example of this problem, in the area of election law, here.) For instance, we have so far been able to dodge the question whether freedom of expression really serves the interests of the speaker or the listener, because for just about any expressive content there is at least one speaker and at least one listener. But when algorithms (re-)create information, this correspondence might no longer hold.

There are many other questions to think about. Is there some kind of baseline right to have Google take notice of you? Is the access to online information of such public importance that its providers, even private ones, effectively take on a public function, and maybe incur constitutional obligations in the process? How should we deal with the differences of philosophies and constitutional frameworks between countries?

This last question leads me to my final observation. So far as I can tell ― I have tried some searching, though one can always search more ― nothing at all has been written on these issues in Canada. Yet the contours of the protection of freedom of expression under the Canadian Charter of Rights and Freedoms are in some ways quite different from those under the First Amendment. When Canadian courts come to confront these issues ― when the Charter finally meets Google ― they might find some academic guidance helpful (says one conceited wannabe academic!). As things stand now, they will not find any.

New Ideas and Old

Time to emerge from my holiday hibernation. And it seems fitting to start off the new year with some reflections, or at least a re-hash of some reflections, on the subject of social, technological, and legal change. The immediate occasion for doing so is a column by Washington Post’s Robert Samuelson on the widespread outrage provoked by revelations of the NSA’s data-collecting activities.

Mr. Samuelson argues that these revelations are commonly “stripped of their social, technological and historical context.” The context in question is the fact that “millions upon millions of Americans have consciously and, probably in most cases, eagerly surrendered much of their privacy by embracing the Internet and social media.” For people who disclose all sorts of information about their lives to strangers and to the social media companies to complain about the government collecting some limited kinds of information about them, subject to legal constraints, is “hypocritical.” Besides, the NSA’s activities are also not nearly as intrusive as past government programmes for spying on citizens: during the Vietnam War, “the CIA investigated 300,000 anti-war critics.” However questionable the need for or effectiveness of specific NSA programmes, Mr. Samuelson adds, “[i]n a digitized world, spying must be digitized.” In short, our views on privacy need to take the context of 2014 into account.

Some of you may recall an early post of mine in which I discussed a paper by Chief Judge Alex Kozinski, of the US Court of Appeals for the 9th Circuit, arguing that privacy is pretty much dead, because courts treat as private the things that citizens expect to be private, and if citizens, through their online behaviour, demonstrate that they do not expect any information about them to be private, then the courts will act accordingly. Chief Judge Kozinski was worried by this possibility. Mr. Samuelson does not seem to be. Should we?

Mr. Samuelson is right to insist on context, both historical and social, before getting outraged. It is easy to forget that new technologies often do no more than give a new form to things which existed long before. As I suggested here, “[n]ew technologies seem not so much to create moral issues as to serve as a new canvass on which to apply our old concerns.” And there may well be something hypocritical in failing to care about disclosing all kinds of personal information to companies that (try to) make money out of it, yet being furious at governments using similar information to (try to) prevent terrorist attacks. What the NSA does is arguably not as big a deal as some of the outraged think. Yet that does not fully justify Mr. Samuelson’s unconcern. Both he and Chief Judge Kozinski forget that the end of privacy as we had known it need not, and arguably does not, mean the end of privacy tout court. Old norms about what is and what is not private are breaking down under the pressure of technological change. But that does not mean that new ones do not emerge.

In particular, the norm that seems to be replacing near-categorical prohibitions on using certain sorts of information is one that makes all sorts of personal information fair game subject to the consent of the person concerned. Attempts to prohibit email providers from “reading” the contents of our messages look silly considering the hundreds of millions of people who use Gmail knowing that Google does just that ― but the point is that they know what is going on. Similarly, people agree to share information on Facebook, so long as they know they are sharing it ― but they are unhappy when Facebook tries to expand the visibility of the things they shared without telling them. This example also hints at another important norm in the new privacy universe ― one of differentiated, rather than categorical, privacy. The fact that we agree to share information with some people or organizations does not mean that we are willing to share it with others.

Arguably, these norms aren’t exactly new. For instance, we always shared some things with our friends that we kept from our parents, and told parents things we wouldn’t admit to our friends. Even before Facebook, few things were private in the sense of nobody knowing about them. But new technologies make the choices to tell and not to tell more pervasive, more nuanced, and more explicit than they perhaps had to be before. They also make the relativity of privacy more apparent.

The problem with the NSA data collection, as others have said before, is arguably not so much its substance as the lack of consent and awareness of those affected. That, rather than the collection of personal information as such, is what contravenes the key norms of the new privacy paradigm. And to the extent that the outrage about the NSA’s activities is caused by this violation, it is not at all hypocritical.

I’m not sure there is much of a point to these ramblings. I’m still trying to write my way into the new year.

Scripta Volant Quoque

The Romans said ― or, more likely, wrote ― that while words fly away, writing remains. Russians say that what is written with the quill cannot be hacked away with an axe.  The idea of the permanence of the written word is very widespread. It is part of the law, too, whether in the rules on proving the existence of a contract or in those on defamation. But the internet is putting it under considerable pressure, from both ends. On the one hand, words that would once have been spoken and fleeting are now written and can be read years later. (I have discussed an example of the consequences this can have here.) On the other, online writing can be more ephemeral than the old-fashioned sort, as a paper by Raizel Liebler and June Liebert recently published in the Yale Journal of Law & Technology shows.

It is a study of the citations to websites in opinions of the U.S. Supreme Court, showing that a considerable part of the hyperlinks given as references in such citations no longer work. Judges are citing online materials ever more often (indeed, as I wrote here, they no longer rely on the submissions of parties but run their own searches to find such materials). In total, between 1996 and 2010, “114 majority opinions of the Supreme Court included links” (280). But, as websites are restructured or even taken offline altogether, links to them can “rot” ― they no longer lead to the page containing the information that used to be there, or indeed to anything at all. As a result, “[o]f the URLs used within the U.S. Supreme Court opinions [between 1996 and 2010] 29% … were invalid” (298).

That can cause serious problems to those―scholars, journalists, and citizens―who want to see for themselves the information that the Supreme Court has relied on in reaching or at least justifying its decisions. Of course, sometimes the information is still available, having only been moved to a different address and being still accessible by a simple search. But in other cases, it might be gone altogether. Sometimes, the information might be more or less tangential. But sometimes it might be central to the Court’s decision. In short, this matters.

I do not think that similar research has been done in Canada, so I have come up with a little anecdotal evidence of my own. It is not very encouraging. Our Supreme Court seems not to be as enthusiastic as its American counterpart about citing online sources ― so far as I can tell, it has done so in only 54 cases. (The earliest of these was Pushpanathan v. Canada (Minister of Citizenship and Immigration), [1998] 1 S.C.R. 982; it took another three years until the second R. v. Sharpe, [2001] 1 S.C.R. 45, 2001 SCC 2). But the “link rot” rate in its citations might be every bit as high, or even higher. Of the links in the five oldest cases to cite any, not a single one still works, though one (to a UN page, referenced in Pushpanathan) leads to an automatic re-direct, and so is still useful. The rest lead either to error messages or even to an offer to buy the domain on which the page linked to had once been posted (a page belonging to the BC Human Rights Commission ― which has since been abolished). Of course, it seems like a safe bet that a greater proportion of links in the more recent decisions work, but will they still work 10 years from now?

Lest this post be considered as a Luddite proclamation, I should point out that it is not as if the paper documents courts cite cannot become unavailable. Old books, government reports, or academic journals can be buried in libraries and archives, accessible only to the hardiest researchers―when not physically rotten or eaten by rats. On balance, citation to online references may well make sources more rather than less accessible. Still, it is not without its problems. The permanence of the written word can no longer be taken for granted.

H/T David Post