Charter, Meet Google

Josh Blackman has just published a fascinating new essay, “What Happens if Data Is Speech?” in the University of Pennsylvania Journal of Constitutional Law Online, asking some important questions about how courts should treat ― and how we should think about ― attempts to regulate the (re)creation and arrangement of information by “algorithms parsing data” (25). For example, Google’s algorithms suggest search queries on the basis of our and other users’ past searches, and then sort the available links once we hit ‘enter’. Can Google be ordered to remove a potential query from the suggestions it displays, or a link from search results? Can it be asked to change the way in which it ranks these results? These and other questions will only become more pressing as these technologies become ever more important in our lives, and as the temptation to regulate them one way or another increases.

One issue that is a constant theme in the literature on this topic that Prof. Blackman reviews is what significance, if any, attaches to the fact that “with data, it is often difficult to find somebody with the traits of a typical speaker” (27). It thus becomes tempting to conclude that algorithms working with data can be regulated without regard for freedom of speech, since no person’s freedom is affected by such regulation. If at least some uses of data are, nevertheless, protected as free speech, there arises another issue which Prof. Blackman highlights ― the potential for conflict between any such protection and the protection of privacy rights, which takes the form of prohibitions on speaking about someone (in some way).

The focal point of these concerns, for now anyway, is search engines, and particularly Google. Prof. Blackman points out that, as Google becomes our gateway to more and more of the information we need, it acquires a great deal of power over what information we ever get to access. Not showing up high in Google’s search results becomes, in effect, a sentence of obscurity and irrelevance. And while it will claim that it only seeks to make its output more relevant for users, the definition of “relevance” gives Google the ability to pursue an agenda of its own, whether it is punishing those who, in its own view, are trying to game its ranking system, as Prof. Blackman describes, or currying favour with regulators or commercial partners, or even implementing some kind of moral vision for what the internet should be like (I describe these possibilities here and here). All that, combined with what seems to some the implausibility of algorithms as bearers of the right to freedom of speech, can make it tempting for legislators to regulate search engines. “But,” Prof. Blackman asks, “what poses a greater threat to free speech ― the lack of regulations or the regulations themselves?” (31) Another way of looking at this problem is to ask whether the creators and users of websites should be protected by the state from, in effect, regulation by Google, or whether Google should be protected from regulation by the state (32).

The final parts of prof. Blackman’s essay address the question of what happens next, when ― probably in the near future ― algorithms become not only tools for accessing information but, increasingly, extensions of individual action and creativity. If the line between user and algorithm is blurred, regulating the latter means restricting the freedom of the former.

Prof. Blackman’s essay is a great illustration of the fact that the application of legal rules and principles to technologies which did not exist when those rules were developed can often be difficult, not least because these new technologies sometimes force us to confront the theoretical questions which we were previously able to ignore or at least to fudge in the practical development of legal doctrine. (I discussed one example of this problem, in the area of election law, here.) For instance, we have so far been able to dodge the question whether freedom of expression really serves the interests of the speaker or the listener, because for just about any expressive content there is at least one speaker and at least one listener. But when algorithms (re-)create information, this correspondence might no longer hold.

There are many other questions to think about. Is there some kind of baseline right to have Google take notice of you? Is the access to online information of such public importance that its providers, even private ones, effectively take on a public function, and maybe incur constitutional obligations in the process? How should we deal with the differences of philosophies and constitutional frameworks between countries?

This last question leads me to my final observation. So far as I can tell ― I have tried some searching, though one can always search more ― nothing at all has been written on these issues in Canada. Yet the contours of the protection of freedom of expression under the Canadian Charter of Rights and Freedoms are in some ways quite different from those under the First Amendment. When Canadian courts come to confront these issues ― when the Charter finally meets Google ― they might find some academic guidance helpful (says one conceited wannabe academic!). As things stand now, they will not find any.

New Ideas and Old

Time to emerge from my holiday hibernation. And it seems fitting to start off the new year with some reflections, or at least a re-hash of some reflections, on the subject of social, technological, and legal change. The immediate occasion for doing so is a column by the Washington Post’s Robert Samuelson on the widespread outrage provoked by revelations of the NSA’s data-collecting activities.

Mr. Samuelson argues that these revelations are commonly “stripped of their social, technological and historical context.” The context in question is the fact that “millions upon millions of Americans have consciously and, probably in most cases, eagerly surrendered much of their privacy by embracing the Internet and social media.” For people who disclose all sorts of information about their lives to strangers and to the social media companies to complain about the government collecting some limited kinds of information about them, subject to legal constraints, is “hypocritical.” Besides, the NSA’s activities are also not nearly as intrusive as past government programmes for spying on citizens: during the Vietnam War, “the CIA investigated 300,000 anti-war critics.” However questionable the need for or effectiveness of specific NSA programmes, Mr. Samuelson adds, “[i]n a digitized world, spying must be digitized.” In short, our views on privacy need to take the context of 2014 into account.

Some of you may recall an early post of mine in which I discussed a paper by Chief Judge Alex Kozinski, of the US Court of Appeals for the 9th Circuit, arguing that privacy is pretty much dead, because courts treat as private the things that citizens expect to be private, and if citizens, through their online behaviour, demonstrate that they do not expect any information about them to be private, then the courts will act accordingly. Chief Judge Kozinski was worried by this possibility. Mr. Samuelson does not seem to be. Should we?

Mr. Samuelson is right to insist on context, both historical and social, before getting outraged. It is easy to forget that new technologies often do no more than give a new form to things which existed long before. As I suggested here, “[n]ew technologies seem not so much to create moral issues as to serve as a new canvas on which to apply our old concerns.” And there may well be something hypocritical in failing to care about disclosing all kinds of personal information to companies that (try to) make money out of it, yet being furious at governments using similar information to (try to) prevent terrorist attacks. What the NSA does is arguably not as big a deal as some of the outraged think. Yet that does not fully justify Mr. Samuelson’s unconcern. Both he and Chief Judge Kozinski forget that the end of privacy as we had known it need not, and arguably does not, mean the end of privacy tout court. Old norms about what is and what is not private are breaking down under the pressure of technological change. But that does not mean that new ones do not emerge.

In particular, the norm that seems to be replacing near-categorical prohibitions on using certain sorts of information is one that makes all sorts of personal information fair game subject to the consent of the person concerned. Attempts to prohibit email providers from “reading” the contents of our messages look silly considering the hundreds of millions of people who use Gmail knowing that Google does just that ― but the point is that they know what is going on. Similarly, people agree to share information on Facebook, so long as they know they are sharing it ― but they are unhappy when Facebook tries to expand the visibility of the things they shared without telling them. This example also hints at another important norm in the new privacy universe ― one of differentiated, rather than categorical, privacy. The fact that we agree to share information with some people or organizations does not mean that we are willing to share it with others.

Arguably, these norms aren’t exactly new. For instance, we always shared some things with our friends that we kept from our parents, and told parents things we wouldn’t admit to our friends. Even before Facebook, few things were private in the sense of nobody knowing about them. But new technologies make the choices to tell and not to tell more pervasive, more nuanced, and more explicit than they perhaps had to be before. They also make the relativity of privacy more apparent.

The problem with the NSA data collection, as others have said before, is arguably not so much its substance as the lack of consent and awareness of those affected. That, rather than the collection of personal information as such, is what contravenes the key norms of the new privacy paradigm. And to the extent that the outrage about the NSA’s activities is caused by this violation, it is not all hypocritical.

I’m not sure there is much of a point to these ramblings. I’m still trying to write my way into the new year.

Scripta Volant Quoque

The Romans said ― or, more likely, wrote ― that while words fly away, writing remains. Russians say that what is written with the quill cannot be hacked away with an axe. The idea of the permanence of the written word is very widespread. It is part of the law, too, whether in the rules on proving the existence of a contract or in those on defamation. But the internet is putting it under considerable pressure, from both ends. On the one hand, words that would once have been spoken and fleeting are now written and can be read years later. (I have discussed an example of the consequences this can have here.) On the other, online writing can be more ephemeral than the old-fashioned sort, as a paper by Raizel Liebler and June Liebert recently published in the Yale Journal of Law & Technology shows.

It is a study of the citations to websites in opinions of the U.S. Supreme Court, showing that a considerable part of the hyperlinks given as references in such citations no longer work. Judges are citing online materials ever more often (indeed, as I wrote here, they no longer rely on the submissions of parties but run their own searches to find such materials). In total, between 1996 and 2010, “114 majority opinions of the Supreme Court included links” (280). But, as websites are restructured or even taken offline altogether, links to them can “rot” ― they no longer lead to the page containing the information that used to be there, or indeed to anything at all. As a result, “[o]f the URLs used within the U.S. Supreme Court opinions [between 1996 and 2010] 29% … were invalid” (298).

That can cause serious problems for those―scholars, journalists, and citizens―who want to see for themselves the information that the Supreme Court has relied on in reaching or at least justifying its decisions. Of course, sometimes the information is still available, having only been moved to a different address, and still accessible by a simple search. But in other cases, it might be gone altogether. Sometimes, the information might be more or less tangential. But sometimes it might be central to the Court’s decision. In short, this matters.

I do not think that similar research has been done in Canada, so I have come up with a little anecdotal evidence of my own. It is not very encouraging. Our Supreme Court seems not to be as enthusiastic as its American counterpart about citing online sources ― so far as I can tell, it has done so in only 54 cases. (The earliest of these was Pushpanathan v. Canada (Minister of Citizenship and Immigration), [1998] 1 S.C.R. 982; it took another three years until the second, R. v. Sharpe, [2001] 1 S.C.R. 45, 2001 SCC 2). But the “link rot” rate in its citations might be every bit as high or even worse. Of the links in the five oldest cases to cite any, not a single one still works, though one (to a UN page, referenced in Pushpanathan) leads to an automatic re-direct, and so is still useful. The rest lead either to error messages or even to an offer to buy the domain on which the page linked to had once been posted (a page belonging to the BC Human Rights Commission ― which has since been abolished). Of course, it seems like a safe bet that a greater proportion of links in the more recent decisions work, but will they still work 10 years from now?
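
This sort of anecdotal check is, incidentally, easy enough to automate. Here is a rough sketch of a script one could run over a list of cited URLs; the status labels and the overall approach are my own invention, and a serious study would need to handle servers that reject HEAD requests, "soft 404" pages, paywalls, and the like:

```python
import urllib.request
import urllib.error

def check_link(url, timeout=10):
    """Classify a cited URL as 'ok', 'redirected', 'error NNN' (an
    HTTP error such as a 404), or 'unreachable' (dead domain, DNS
    failure, timeout)."""
    req = urllib.request.Request(
        url, method="HEAD",
        headers={"User-Agent": "link-rot-check/0.1"})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            # urlopen follows redirects silently; comparing the final
            # URL to the cited one flags pages that have moved (like
            # the automatically re-directed Pushpanathan link)
            return "redirected" if resp.geturl() != url else "ok"
    except urllib.error.HTTPError as e:
        return "error {}".format(e.code)
    except (urllib.error.URLError, OSError):
        return "unreachable"

def rot_rate(urls):
    """Fraction of cited links that no longer resolve at all."""
    statuses = [check_link(u) for u in urls]
    dead = sum(1 for s in statuses if s not in ("ok", "redirected"))
    return dead / len(urls) if urls else 0.0
```

Running something like this over the links in all 54 Canadian cases, rather than just the five oldest, would turn my anecdote into an actual (if crude) rot rate, comparable to the 29% figure for the U.S. Supreme Court.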

Lest this post be considered a Luddite proclamation, I should point out that it is not as if the paper documents courts cite cannot become unavailable. Old books, government reports, or academic journals can be buried in libraries and archives, accessible only to the hardiest researchers―when not physically rotten or eaten by rats. On balance, citation to online references may well make sources more rather than less accessible. Still, it is not without its problems. The permanence of the written word can no longer be taken for granted.

H/T David Post

The Course of Human Events

David R. Johnson and David Post have published a fascinating essay, “Governing Online Spaces: Virtual Representation,” at the Volokh Conspiracy, arguing that Facebook ought to move towards becoming something like a representative democracy. While various attempts at regulating Facebook and other online services and communities from the outside are a frequent topic of discussion, including, for example, here and here, Mr. Johnson and Prof. Post raise a different, albeit related, issue: that of internal governance.

At present, Facebook’s relationship with its users is akin to that of a “benevolent dictator[],” or perhaps an enlightened absolute monarch, a sort of digital Frederick the Great, with his subjects. That relationship is governed by the Terms of Service (TOS) that users must accept in order to use Facebook. And the company reserves the right to change those Terms of Service at will. As the law now stands, it is entitled to do so. But, say Mr. Johnson and Prof. Post, this is wrong as a matter of principle. The principles of “self governance and self-determination” mean

that all users have a right to participate in the processes through which the rules by which they will be bound are made.  This principle is today widely accepted throughout the civilized world when applied to formal law-making processes, and we believe it applies with equal force to the new forms of TOS-based rule-making now emerging on the Net.

Market discipline―the threat of users leaving Facebook in favour of a competitor―is not enough, because the cost to the user of doing so is unusually high, due both to the users having “invested substantial amounts of time and effort in organizing their own experience at the site” and to network effects.

But attempts to have users provide input on Facebook’s Terms of Service have not been very successful. Most users simply cannot be bothered to engage in this sort of self-governance; others are ignorant or otherwise incompetent; but even the small portion of users who are willing and able to contribute something useful to Facebook’s governance comprises way too many people to engage in meaningful deliberation. Mr. Johnson and Prof. Post propose to get around these problems by setting up a system of representation. Instead of users engaging in governance directly, they would

be given the ability to grant a proxy to anyone who has volunteered to act on his/her behalf in policy discussions with Facebook management. These proxy grants could be made, revoked, or changed at any time, at the convenience of the user. Those seeking proxies would presumably announce their general views, proposals, platforms, and positions. Anyone receiving some minimum number of proxies would be entitled to participate in discussions with management — and their views would presumably carry more or less weight depending upon the number of users they could claim to represent.

This mechanism of virtual representation would, Mr. Johnson and Prof. Post argue, have several benefits. Those seeking and obtaining proxies―the representatives in a virtual democracy―would be people with the motivation and, one expects, the knowledge seriously to participate in Facebook’s governance. Representation sidelines extremists and gives power to the moderate voices and the silent majority ignored by direct democracy. At the same time, it gives Facebook the means of knowing how users feel about what it does and what it proposes to do differently in the future, which is handy for keeping them happy and avoiding having them rebel and desert to a competitor.

The proposal is not―“yet”―for a full-scale virtual democracy. Mr. Johnson and Prof. Post accept that Facebook will retain something like a monarchical veto over the demands of its users’ representatives. Still, it is pretty radical―and pretty compelling. By all means, read it in full.

As Mr. Johnson and Prof. Post recognize, “there are many unanswered questions.” Many of those concern the details of the virtual mixed constitution (to borrow a term from 18th-century political philosophy) that they are proposing, and the details of its implementation. But here’s another question, at which their discussion hints without quite reaching it.

Suppose Facebook reorganizes itself into a self-governing polity of some sort, whether with a mixed constitution or a truly democratic one. What effect would this have on its dealings with those who wish to govern it from the outside? Mr. Johnson and Prof. Post write that “Facebook’s compliance with the clearly expressed will of the online polity would also surely help to keep real-space regulators at bay.” But what if it doesn’t? Not all of those regulators, after all, care a whole lot for democracy, and even if they do, their democratic constituents are citizens of local polities, not of a global one. Could this global democratic polity fight back? Could its members

dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature’s God entitle them?

Mr. Johnson and Prof. Post allude to Alexander Hamilton and James Madison as their inspiration. But what about Thomas Jefferson?

iPrudes

There was an interesting story by Michael Posner in The Globe and Mail yesterday on Apple’s decision not to allow the sale of books and apps telling the story of Danish hippies on its commercial platforms, iBookstore and the App Store, because they contain some photographs featuring naked men and women. Apple says the pictures breach its policy against sexually explicit images. Mr. Posner accuses the company of hypocrisy, because it has not banned other books “filled with pictures of naked bodies [and] continues to sell apps for Playboy and Sports Illustrated, which feature partially naked women.” So does the author of the books, who points out that Apple’s founder, Steve Jobs, claimed to be a spiritual descendant of the hippie movement and to share some of its ideals, which he accuses Apple of betraying. The publishers, for their part, insist that the books are in no way pornographic or arousing, so that they do not breach Apple’s guidelines.

Be that as it may, the Danish authorities are not amused. Mr. Posner writes that

[l]ast week, Uffe Elbaek, the country’s culture minister, wrote to his European counterparts, and to European Union commissioners Neelie Kroes and Androulla Vassiliou, seeking to have the issue debated within the EU.

“This is a history book,” Elbaek said in an interview. “It documents how we behaved in those days. Is it fair that an American company without any real dialogue … can apply American moral standards to a product that only interests a Danish audience with vastly different moral standards?”

The minister worries that corporations “will decide how freedom of speech will be arbitrated and who is allowed artistic freedoms” and argues that “it’s important that we have these discussions at regional and national levels.” Mr. Posner too worries about freedom of speech. Indeed, he accuses Apple of “de facto censorship.”

This brings to mind several issues about which I have already blogged. One is the dual and ambiguous position of technology companies as speakers and censors, about which I have written in Google’s case. Apple might argue that a decision not to allow the sale of a book it deems offensive or otherwise unsuitable is a form of editorial judgment and, thus, protected speech, just as Google argues its decision to disfavour copyright-infringing websites in ranking its search results is. At the same time, as the provider of a platform through which others express themselves, Apple takes on a speech-regulating role; and the importance of this role is proportionate to that platform’s popularity.

But there is a crucial difference between Google removing content from, say, YouTube at the request of a government agency, and Apple removing content from its stores on its own, without any government involvement. In my view, it is not fair to refer to such decisions as censorship. A private company, at least so long as it is not a monopolist, has no power to prohibit speech. If a speaker is not allowed to use one private platform, he or she can turn to another. As Mr. Posner notes, the books Apple has banned from its stores are best-sellers in print. Their author is not exactly being silenced.

Besides, we accept that newspapers or publishers do not print everything that is submitted to them. The question, then, is whether there is a reason for holding technology companies to a different standard. Dominant market position or, a fortiori, monopoly might be one such reason. But I doubt that Apple actually has a dominant market position, even in the app market (considering Android’s popularity); it surely doesn’t have one in the book market. And I’m not sure I can think of anything else that would justify, even as a matter of morality, never mind law, saying that Apple (or Google, or whoever) has more onerous duties towards freedom of expression than traditional media companies, as Mr. Elbaek, the Danish minister, seems to think.

As always in the face of such disagreement, there also arises the question of who (if anyone) ought to be making the rules, and how―the question of the appropriate “mode of social ordering,” to use Lon Fuller’s phrase, about which I blogged here, here, and here. Mr. Elbaek seems to think that the rules regulating the ability of platforms such as Google’s or Apple’s to select and “censor” their contents should be set by national governments (by legislatures presumably, or maybe by executives through international treaties) or by supra-national bodies such as, presumably, the EU. (Note that he spoke of “discussions at regional and national levels”―not at the UN, which he probably knows is not too keen on certain kinds of offensive speech the Danes see nothing wrong with.) But it’s not clear that governments, at whatever level, should be making these rules. As I wrote in my earlier posts, legislation is often a clumsy tool for dealing with emerging technologies and new business models, because the latter develop faster than the former can adapt. And private ordering through the market might be enough to take care of the problem here, if there even is one. Apple is not a monopolist; it has competitors who might be willing to give the books which it does not like a platform, and profit from them. Authors and readers are free to use these competing platforms. Apple will remain a prude―hypocritical (as prudes often are) or not―if it thinks there is a profit to be made in prudishness, or it will convert to more liberal ways if that is more profitable.

The Future is Even Creepier

There is an interesting story in today’s New York Times that brings together a couple of my recent topics, the tracking of internet users by the websites they visit and the use of the data thus generated in advertising, about which I wrote here, and the use of target-specific outreach and advertising by President Obama’s re-election campaign, about which I wrote here. There are even, for good measure, overtones of human dignity there.

The story is about the way the data gathered when we use the internet, whether just browsing or searching for something in particular, is then used to throw those annoying targeted ads at us wherever we go. The data is collected by computers, of course; it is computer algorithms, too, that analyze it and use it to assign us to some fine-grained category (depending on our inferred interests and means); and it is still computers that sell the right to show us a display ad to companies that might be interested in the specific category of consumer each of us is deemed to belong to.
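
For concreteness, the pipeline just described can be caricatured in a few lines of code. Everything here ― the category rules, the advertiser names ― is invented for illustration; real ad-tech systems infer categories with statistical models and sell impressions through real-time auctions, but the shape of the process is the same:

```python
# Toy sketch: browsing history in, inferred interest category out,
# then matched to the advertisers "buying" that category.
from collections import Counter

CATEGORY_RULES = {                     # made-up inference rules
    "legal-blogs": ["scotusblog.com", "volokh.com"],
    "cameras": ["amazon.com/cameras", "dpreview.com"],
}

ADVERTISER_BIDS = {                    # made-up buyers per category
    "cameras": ["CameraCo", "LensShop"],
    "legal-blogs": ["CasebookPublisher"],
}

def infer_categories(visited_urls):
    """Assign the user to interest categories from visited sites."""
    counts = Counter()
    for url in visited_urls:
        for category, patterns in CATEGORY_RULES.items():
            if any(p in url for p in patterns):
                counts[category] += 1
    return [c for c, n in counts.items() if n >= 1]

def ads_for(visited_urls):
    """Advertisers entitled to show this user a display ad."""
    ads = []
    for category in infer_categories(visited_urls):
        ads.extend(ADVERTISER_BIDS.get(category, []))
    return ads
```

Note that nothing in this sketch needs the user's name or address ― only a browsing profile ― which is the point the trackers make when they insist that what they sell is anonymous.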

This is roughly similar, if I understand correctly, to what the Obama campaign did in studying the data it had collected about voters and using it to target each person specifically according to his or her likely interests and concerns, except that the field of application here is commerce rather than politics. And just as some people have doubts about the morality of that tactic in the political realm, there are those who are convinced that its application in the commercial one is immoral. The Times quotes a consumer-rights advocate as saying that “[o]nline consumers are being bought and sold like chattel [sic]. … It’s dehumanizing.” As with what the Obama campaign did, I’m not sure about that. I’m not convinced by the description of the process as selling people―it involves selling information about me, and the right to show me a message on which I remain free to act or not, not my personhood. I don’t feel dehumanized by those ads―just creeped out, which, I think, is a very human reaction, by the way (I doubt that cattle are creeped out by being sold).

Perhaps there is an echo here of the debate, in human dignity scholarship, over whether dignity and its violations are an objective matter, meaning that one’s dignity can be violated even though one doesn’t feel that it is, or a subjective one, meaning that one’s perception is determinative. (A classic example of this problem is the controversy over dwarf-tossing: the dwarf consents to being thrown around for sport and makes money out of it―but can the state prohibit the activity regardless, on the ground that it is a violation of his dignity even if he doesn’t think it is?)

I should note one possible difference between what is happening in the commercial advertising context and what the Obama campaign did. The companies that track internet users claim that those whom they track are not identified in any recognizable fashion. When they sell the right to show me ads to advertisers, they might describe me as something like “the guy who reads legal blogs and news websites a lot and has been looking at cell phones recently.” The Obama campaign, of course, was identifying people by name, address, etc., in order to reach out to them. So maybe the internet-ad people are less creepy than the politicians. But maybe not. The Times’ article suggests people are very skeptical about the actual anonymity of internet users tracked by advertisers, so the difference might be illusory.

As I said above and in my previous posts, even if this is not immoral and/or illegal, it is creepy. Perhaps “do not track” features of internet browsers will save us from the onslaught of creepiness. But advertisers are not only trying to fight them; they are also pointing out that their use might undermine the bargain at the foundation of the internet―in exchange for putting up with ads, we get to enjoy all sorts of great content (such as this blog, right?) for free. Perhaps we are now finding out that the bargain was a Faustian one. But it’s likely too late to get out of it.

To Track or Not to Track?

There was an interesting article in the New York Times this weekend about the brewing fight around “do not track” features of internet browsers (such as Firefox or Internet Explorer) that are meant to tell websites visited by the user who has enabled the features not to collect information about the user’s activity for the purposes of online advertising. Here’s a concrete example that makes sense of the jargon. A friend recently asked me to look at a camera she was considering buying, so I checked it out on Amazon. Thereafter, for days on end, I was being served with ads for this and similar cameras on any number of websites I visited. Amazon had recorded my visit, concluded (wrongly, as it happens) that I was considering buying the camera in question, transmitted the information to advertisers, and their algorithms targeted me for camera ads. I found the experience a bit creepy, and I’m not the only one. Hence the appearance of the “do not track” functionalities: if I had been using a browser with a “do not track” feature, this would presumably not have happened.
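
Mechanically, by the way, the “do not track” signal under the proposed standard is nothing more exotic than an extra HTTP header, “DNT: 1”, that the browser attaches to every request ― whether anyone on the receiving end honours it is entirely voluntary. A minimal illustration (the URL is just a placeholder):

```python
import urllib.request

# Build a request carrying the Do Not Track signal. "DNT: 1" asks the
# site (and any third-party ad networks it embeds) not to track the
# visit; nothing in the protocol forces them to comply.
req = urllib.request.Request(
    "http://example.com/",
    headers={"DNT": "1"},
)

print(req.get_header("Dnt"))  # urllib capitalizes header names; prints "1"
```

That voluntariness is precisely why the W3C effort described below matters: the header itself is trivial, and everything turns on agreeing what recipients must do when they see it.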

Advertisers, of course, are not happy about “do not track.” Tracking our online activities allows them to target very specific ads at us, ads for stuff we have some likelihood of being actually interested in. As the Times explains,

[t]he advent of Do Not Track threatens the barter system wherein consumers allow sites and third-party ad networks to collect information about their online activities in exchange for open access to maps, e-mail, games, music, social networks and whatnot. Marketers have been fighting to preserve this arrangement, saying that collecting consumer data powers effective advertising tailored to a user’s tastes. In turn, according to this argument, those tailored ads enable smaller sites to thrive and provide rich content.

The Times reports that advertisers have been fighting the attempts of an NGO called the W3C (for “World Wide Web Consortium”) to develop standards for “do not track” features. They have also publicly attacked Microsoft for its plans to make “do not track” a default (albeit changeable) setting on the next version of Internet Explorer. And members of the U.S. Senate are getting into the fight as well. Some are questioning the involvement of an agency of the US government, the Federal Trade Commission, with W3C’s efforts, while others seem to side against the advertisers.

The reason I am writing about this is that this may be another example of the development of new rules happening before our eyes, and it gives us another opportunity to reflect on the various mechanisms by which social and legal rules emerge and interact, as well as on the way our normative systems assimilate technological development. (Some of my previous posts on these topics are here, here, and here.)

W3C wants to develop rules―not legally binding rules of course, but a sort of social norm which it hopes will be widely adopted―regulating the use of “do not track” features. But as with any would-be rule-makers, a number of questions arise. The two big ones are ‘what legitimacy does it have?’ and ‘is it competent?’ As the Times reports, some advertisers are, in fact, raising the question of W3C’s competence, claiming the matter is “entirely outside their area of expertise.” This is self-serving of course. W3C asserts that it “bring[s] diverse stake-holders together, under a clear and effective consensus-based process,” but that’s self-serving too, not to mention wishy-washy. And of course a claim can be both self-serving and true.

If not W3C, who should be making rules about “do not track”? Surely not advertisers’ trade groups? What about legislatures? In theory, legislatures possess democratic legitimacy, and also have the resources to find out a great deal about social problems and the best ways to solve them. But in practice, it is not clear that they are really able and, especially, willing to put these resources to good use. Especially on a somewhat technical problem like this one, where the interests on one side (that of the advertisers) are concentrated while those on the other (the privacy of consumers) are diffuse, legislatures are vulnerable to capture by interest groups. But even quite apart from that problem, technology moves faster than the legislative process, so legislation is likely to come too late, and not to be adapted to the (rapidly evolving) needs of the internet universe. And as for legitimacy, given the global impact of the rules at issue, what is, actually, the legitimacy of the U.S. Congress―or, say, the European Parliament―as a rule-maker?

If legislatures do not act, there are still other possibilities. One is that the courts will somehow get involved. I’m not sure what form lawsuits related to “do not track” might take―what cause of action anyone involved might have against anyone else. Perhaps “do not track” users might sue websites that refuse to comply with their preferences. Perhaps websites will make the use of tracking a condition of visiting them, and sue those who try to avoid it. I’m not sure how that might work, but I am pretty confident that lawyers more creative than I will think of something, and force the courts to step in. But, as Lon Fuller argued, courts aren’t good at managing complex policy problems which concern the interests of multiple parties, not all of them involved in litigation. And as I wrote before, courts might be especially bad at dealing with emerging technologies.

A final possibility is that nobody makes any rules at all, and we just wait until some rules evolve because behaviours converge on them. F.A. Hayek would probably say that this is the way to go, and sometimes it is. As I hope my discussion of the severe limitations of various rule-making fora shows, making rules is a fraught enterprise, which is likely to go badly wrong due to lack of knowledge, if not capture by special interests. But sometimes it doesn’t make sense to wait for rules to grow―there are cases where having a rule is much more important than having a good rule (what side of the road to drive on is a classic example). The danger in the case of “do not track” might be an arms race between browser-makers striving to give users the ability to avoid targeted ads, or indeed any ads at all, and advertisers (and content providers) striving to throw them at users. Pace the president of the Federal Trade Commission, whom the Times quotes as being rather optimistic about this prospect, it might actually be a bad thing, if the “barter system” that sustains the internet as we know it is caught in the crossfire.

Once again, I have no answers, only questions. Indeed my knowledge of the internet is too rudimentary for me to have answers. But I think what I know of legal philosophy allows me to ask some important questions.

I apologize, however, for doing it at such length.