Disinformation by Omission

Additional thoughts on the futility of regulatory responses to mis- and disinformation

In my last post, I wrote about the Canadian Forces’ attempts to manipulate public opinion, including by means of disinformation, and about the dangers of regulations ostensibly meant to counteract disinformation. I briefly return to the issue of disinformation to highlight an excellent, if frightening, essay by David French in his newsletter for The Dispatch.

Mr. French writes about the alarming levels of polarization and mutual loathing among political partisans in the United States. He argues that this results from a “combination of malice and misinformation”, which means “that voters hate or fear the opposing side in part because they have mistaken beliefs about their opponents. They think the divide is greater than it is.” Mr. French observes that many Americans are stuck in a vicious cycle:

Malice and disdain makes a person vulnerable to misinformation. Misinformation then builds more malice and disdain and enhances the commercial demand for, you guessed it, more misinformation. Rinse and repeat until entire media empires exist to supply that demand. 

And, crucially, Mr. French points out that misinformation does not just consist of “blunt, direct lying, which is rampant online”. It also includes “deception by omission (a news diet that consistently feeds a person with news only of the excesses of the other side) and by exaggeration and hyperbole”, which can be “in many ways more dangerous than outright lies”, because they cannot easily be countered with accurate information. (This is why the rhetorical practice of “nutpicking” ― pointing to the crazies on the opposite side, and claiming that they represent all those who might share something of their worldview ― is so effective. The nuts are real! They might even be somewhat prominent and influential, though not as much as the nutpicker suggests. Nutpicking isn’t lying. But it is deceptive, and destructive.) 

And yet, Mr. French cautions against regulatory responses to this crisis, serious though it is:

there is no policy fix for malice and misinformation. There is no five-point plan for national harmony. Popular policies … don’t unite us, and there are always differences and failures to help renew our rage. Instead, we are dealing with a spiritual and moral sickness. Malice and disdain are conditions of the soul. Misinformation and deception are sinful symptoms of fearful and/or hateful hearts. (Paragraph break removed)

I think this is tragically right, even though I do not share Mr. French’s deep Christian faith. Call it heart or mind instead of soul; speak of moral error, indeed of immorality, instead of sin; all this is secondary, to my mind. The point is that the fault is not in our laws, but in ourselves. And this is why, in my last post, I wrote that the government

cannot be trusted with educating citizens and funding media in a way that would solve the problems of the “environment that has created the disinformation crisis”. The solution must come from civil society, not from the state.

As I wrote long ago in the context of hate speech, the law ― at least so long as it remains relatively cabined and does not attempt comprehensive censorship ― cannot counteract the corrosive “messages … sent by sophisticated, intelligent people”, who are able to avoid crude hate propaganda, or outright lies. The hint, the understatement, the implication, the misdirection, the omission are their weapons, and the shield against them must be in our hearts and minds, not in the statute book.

We often think of regulation as a sort of magic wand that can do whatever we need, provided we utter the right sort of spell when wielding it. This is, of course, an illusion, and entertaining it only distracts us from working on the most difficult subject of all: our selves.

Disinformation and Dystopia

Whose disinformation efforts we should really fear ― and why we should also fear regulation to stop disinformation

Mis- and disinformation about matters of public concern is much in the news, and has been, on and off, for the last five years. First kindled by real and imagined interference in election campaigns, interest in the subject has flared up with the present plague. Yesterday’s developments, however, highlight the dangers of utterly wrongheaded responses to the issue ― responses that would lead to a consolidation of government power and its use to silence critics and divergent voices.


First, we get a hair-raising report by David Pugliese in the Ottawa Citizen about the Canadian Armed Forces’ strong interest in, and attempts at, engaging in information operations targeting Canadians over the course of 2020. Without, it must be stressed, political approval, and seemingly to the eventual consternation of Jonathan Vance, the then-Chief of Defence Staff, the Canadian Joint Operations Command sought to embark on a “campaign … for ‘shaping’ and ‘exploiting’ information” about the pandemic. In its view, “the information operations scheme was needed to head off civil disobedience … and to bolster government messages”. It also saw the whole business as a “learning opportunity” for what might become a “routine” part of its operations.

Nor is this all. At the same time, but separately, “Canadian Forces intelligence officers culled information from public social media accounts in Ontario”, including (but seemingly not limited to) from people associated with Black Lives Matter. This, supposedly, was “to ensure the success of Operation Laser, the Canadian Forces mission to help out in long-term care homes hit by COVID-19 and to aid in the distribution of vaccines in some northern communities”. A similar but also, apparently, unrelated effort involved the public affairs branch of the Canadian Forces, which wanted its “officers to use propaganda” peddled by “friendly defence analysts and retired generals” and indeed “information warfare and influence tactics”, “to change attitudes and behaviours of Canadians as well as to collect and analyze information from public social media accounts” and “to criticize on social media those who raised questions about military spending and accountability.”

And in yet another separate incident,

military information operations staff forged a letter from the Nova Scotia government warning about wolves on the loose in a particular region of the province. The letter was inadvertently distributed to residents, prompting panicked calls to Nova Scotia officials … [T]he reservists conducting the operation lacked formal training and policies governing the use of propaganda techniques were not well understood by the soldiers.

To be blunt, there seems to be a large constituency in various branches of the Canadian Forces for treating the citizens whom they are supposed to defend as enemies and targets in an information war. Granted, these people’s enthusiasm seems to outstrip their competence ― but we only know about the ones who got caught. We can only hope that there aren’t others who are better at what they do. And it’s not a happy place to be in, hoping that your country’s soldiers are incompetent. But here we are.


Also yesterday, as it happens, the CBA National Magazine published the first episode of a new podcast, Modern Law, in which its editor, Yves Faguy, interviewed Ève Gaumond, a researcher on AI and digital technologies, about various techniques of online persuasion, especially during election campaigns. These techniques include not only mis- and disinformation and “deep fakes”, but also advertising on social media, which need not be untruthful, though it may present other difficulties. Mr. Faguy’s questions focused on what (more) Canada, and perhaps other countries, should do about these things.

Ms. Gaumond’s views are somewhat nuanced. She acknowledges that “social media is not the main driver of disinformation and misinformation” ― traditional media still are ― and indeed that “we’re not facing a huge disinformation crisis” at all, at present. She points out that, in debates about mis- and disinformation, “the line between truth and falsehood is not so clearly defined”. And she repeatedly notes that there are constitutional limits to the regulation of speech ― for example, she suggests that a ban on microtargeting ads would be unconstitutional.

Ultimately, though, like many others who study these issues, Ms. Gaumond does call for more, and more intrusive, regulation. She claims, for instance, that “[i]f we are to go further to fight disinformation”, online advertising platforms should be forced not only to maintain a registry of the political ads they carry and of the amounts the advertisers spent, but also to record “[t]he number of times an ad has [been] viewed” and “the audience targeted by the ad”. This would, Ms. Gaumond hopes, deter “problematic” targeting. She also wants to make advertising platforms responsible for ensuring that no foreign advertising makes its way into Canadian elections, and tentatively endorses Michael Pal’s suggestion that spending limits for online advertising should be much lower than for more conventional, and more expensive, formats.

And although she doesn’t “think that we should tackle speech per se”, Ms. Gaumond muses that “[w]e should see how to regulate all platforms in a way that we can touch on all possible ways that disinformation is spread”. This means not only spending limits but also that “[y]ou cannot pay millions of dollars to microtarget … what you’re saying to people that believe the same thing as you do without oversight from other people, from Election[s] Canada”. And beyond that,

not only regulating social medias [sic], but also all of the environment that has created the disinformation crisis. That means education, funding and great journalism, the media ecosystem is one of the important components of why we’re not facing such a big disinformation crisis.


There are a few things to say about Ms. Gaumond’s proposals ― keeping in mind Mr. Pugliese’s report about the activities of the Canadian Forces. The overarching point is the one suggested by the juxtaposition of the two: while researchers and politicians fret about disinformation campaigns carried out by non-state and foreign actors, the state itself remains the most important source of spin, propaganda, and outright lies with which we have to contend. Unlike bots and Russian trolls, the state can easily dupe the opinion-forming segments of society, who are used to (mostly) believing it ― partly out of ideological sympathy, but partly, and it’s important to stress this, because the state is also an important source of necessary and true information, which such people rely on and relay.

This means that we should be extremely wary of granting the state any power to control the information we can transmit and receive. Its armed agents think nothing of manipulating us, including for the sake of propping up the government of the day. And it is no answer that we should grant these powers to independent, non-partisan bureaucracies. The Canadian Forces are also an independent, non-partisan bureaucracy of sorts. I’m pretty confident that they weren’t trying to manipulate opinion out of any special affection for the Liberal Party of Canada, say. They are just on the side of order and stability, and any civilian bureaucratic structure would be too. It would also be likely to be tempted to quash questions about its own budget and functioning, and to develop an unhealthy interest in people it regards as trouble-makers. Civilians might be more suspicious of right-wing groups than of BLM, but both have the same rights to free speech and to privacy.

Another thing to note is the confusion among the different issues clustered under the general heading of concerns about mis- and disinformation. Concerns about the targeting of advertising may be valid or not, but their validity often has little to do with the truthfulness of the ads at issue. Concerns about foreign influence may be magnified when it is being exercised through misleading and/or microtargeted ads, but they are not necessarily linked to the issues either of disinformation or of microtargeting. Spending limits, again, have little to do with disinformation. No doubt a knowledgeable researcher like Ms. Gaumond would be more careful about such distinctions in a paper than she sometimes is in the interview with Mr. Faguy. But can untutored policy-makers, let alone voters, keep track?

In light of all this, Ms. Gaumond’s suggestions, though sprinkled with well-intentioned caveats about “not saying ‘you cannot say that’”, should give us serious pause. Even increasing disclosure requirements is far from a straightforward proposition. As Ms. Gaumond notes, Google simply refused to carry political ads rather than set up the registry the government required. Facebook and Twitter might follow if they are forced to make disclosures that would reveal the functioning of their algorithms, which they may have good reasons for keeping out of their competitors’ sight. More fundamentally, the idea that all (political?) speech should at all times be tracked and monitored by the state does not strike me as healthy. Political debate is a fundamental right of citizens, not something we can only engage in on the government’s sufferance. We are not children, and the government ― including Elections Canada ― is not a parent who needs to know what we are getting up to online. Last but not least, because of the government’s track record of spin and deceit, it cannot be trusted with educating citizens and funding media in a way that would solve the problems of the “environment that has created the disinformation crisis”. The solution must come from civil society, not from the state.

Lastly, let me note that, in my view, Ms. Gaumond may be far too optimistic about the willingness of Canadian courts to uphold constitutional limits on government regulation of electoral speech. Their record on this issue is generally abysmal, and the Supreme Court’s reasoning in the leading case, Harper v Canada (Attorney General), 2004 SCC 33, [2004] 1 SCR 827, is itself misinformed and speculative. If government actors take the initiative on these matters, the courts will not save us.


The issue of mis- and disinformation is at least as much a moral panic as a genuine crisis. As Ms. Gaumond points out, the trouble is to a considerable extent with traditional media and political forces outside anyone’s easy control; as Mr. Pugliese’s reporting makes clear, we must fear our own government at least as much as any outside force. Yet fears of new technology ― not to mention fear-mongering by media and political actors whose self-interest suggests cutting social media down to size ― mean that all manner of new regulations are being proposed specifically for online political discussions. And the government, instead of being reined in, is likely to acquire significant new powers that will further erode the ability of citizens to be masters in their own public and private lives.