Platonic Guardians 2.0?

The New York Times has published an essay by Eric Schmidt, the Chairman of Google, about the role of the Internet, and especially of the exchange of ideas and information that the Internet enables, in both contributing to and addressing the challenges the world faces. The essay is thoroughly upbeat, concluding that it is “within [our] reach” to ensure that “the Web … is a safe and vibrant place, free from coercion and conformity.” Yet when reading Mr. Schmidt it is difficult not to worry that, as with students running riot on American college campuses, the quest for “safety” will lead to the silencing of ideas deemed inappropriate by a force that might be well-intentioned but is unaccountable and ultimately not particularly committed to freedom of expression.

To be sure, Mr. Schmidt talks the free speech talk. He cites John Perry Barlow’s “Declaration of the Independence of Cyberspace,” with its belief that the Web will be “a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.” He argues that

[i]n many ways, that promise has been realized. The Internet has created safe spaces for communities to connect, communicate, organize and mobilize, and it has helped many people to find their place and their voice. It has engendered new forms of free expression, and granted access to ideas that didn’t exist before.

Mr. Schmidt notes the role online communication has played in enabling democratic protest around the world, and wants to reject the claims of “[a]uthoritarian governments … that censorship is necessary for stability.”

But his response to these claims is not just a straightforward defence of the freedom of expression. “The people who use any technology are the ones who need to define its role in society,” Mr. Schmidt writes. “Technology doesn’t work on its own, after all. It’s just a tool. We are the ones who harness its power.” That’s fair enough, so far as it goes. Mr. Schmidt warns against “us[ing] the Internet exclusively to connect with like-minded people rather than seek out perspectives that we wouldn’t otherwise be exposed to,” and that is indeed very important. But then the argument gets ominous:

[I]t’s important we use [the Internet’s] connectivity to promote the values that bring out the best in people. … We need leaders to use the new power of technology to allow us to broaden our horizons as individuals, and in the process broaden the horizons of our society. It’s our responsibility to demonstrate that stability and free expression go hand in hand.

It’s not that I’m against the idea that one should act responsibly when exercising one’s freedom of expression (or that one should just act responsibly, period). But is the responsibility of a speaker always to foster “stability” ― whatever exactly that is? And to whom ought we “to demonstrate that stability and free expression go hand in hand”? To the authoritarians who want to censor the internet? Why exactly do we owe them a demonstration, and what sort of demonstration are they likely to consider convincing? Last but not least, who are the leaders who are going to make us “broaden our horizons”?

Mr. Schmidt has a list of more or less specific ideas about how to make the internet the “safe and vibrant place” he envisions, and they give us a hint about his answer to that last question:

We should make it ever easier to see the news from another country’s point of view, and understand the global consciousness free from filter or bias. We should build tools to help de-escalate tensions on social media — sort of like spell-checkers, but for hate and harassment. We should target social accounts for terrorist groups like the Islamic State, and remove videos before they spread, or help those countering terrorist messages to find their voice.

He speaks “of leadership from government, from citizens, from tech companies,” but it is not obvious how citizens or even governments ― whom Mr. Barlow taunted as the “weary giants of flesh and steel,” powerless and unwelcome in cyberspace ― can “build tools” of this sort. It is really the other sort of giants, the “tech companies” such as the one Mr. Schmidt runs, that have, or at least can create, the means to be our benevolent guardians, turning us away from hate and harassment and towards “global consciousness” ― whatever that too may be. Google can demote websites it deems to be promoters of “hate” in its search results, as it already demotes those it considers to be copyright-infringers. Apple could block news sources it considers biased from its App Store, as it has already blocked a Danish history book for featuring some nudity in its illustrations. Facebook could tinker with its News Feed algorithms to help people with a favoured peace-and-love perspective “find their voice,” as it already tinkers with them to “help [us] see more stories that interest [us].”

Of course, Mr. Schmidt’s intentions are benign, and in some ways even laudable. Perhaps some of the “tools” he imagines would even be nice to have. The world may (or may not) be a better place if Facebook and Twitter could ask us something like “hey, this really isn’t very nice, are you sure you actually want to post this stuff?” ― provided that we had the ability to disregard the advice of our algorithmic minders, just as we can with spell-check. But I am pretty skeptical about what might come out of an attempt to develop such tools. As I once pointed out here, being a benign censor is very hard ― heavy-handedness comes naturally in this business. And that’s before we even start thinking about the conflicts of interest inherent in the position of Google and other tech companies, which are at once the regulators of their users’ speech and the subjects of government regulation, and may well be tempted to act in the former role so as to avoid problems in the latter. And frankly, Mr. Schmidt’s apparent faith in “strong leaders” who will keep us free and make us safe and righteous is too Boromir-like for me to trust him.

As before, I have no idea what, if anything, needs to or could be done about these issues. Governments are unlikely to wish to intervene to stop the attempts of tech companies to play Platonic guardians 2.0. Even if they had the will, they would probably lack the ability to do so. And, as I said here, we would be making a very risky gamble by asking governments, whose records of flagrant contempt for freedom of expression are incomparably worse than those of Google and its fellows, to regulate them. Perhaps the solution has to lie in the creation of accountability mechanisms internal to the internet world, whether democratic (as David R. Johnson, David G. Post and Marc Rotenberg have suggested) or even akin to rights-based judicial review. In any case, I think that even if we don’t know how to, or cannot, stop the march of our algorithmic guardians, perhaps we can at least spell-check them, and tell them that they might be about to do something very regrettable.

Author: Leonid Sirota

Law nerd. I teach public law at the University of Reading, in the United Kingdom. I studied law at McGill, clerked at the Federal Court of Canada, and did graduate work at the NYU School of Law. I then taught in New Zealand before taking up my current position at Reading.
