Letters to the editors

Vol. 6, No. 1 / May 2021

To the editors:

Adam Garfinkle has written an evocative essay that warns of the dangers posed by deepfakes. Made possible by advances in computing power, sophisticated deepfakes rely on generative adversarial networks (GANs). This form of machine learning pits two neural networks against one another in a process that creates new data whose statistics nearly match those of the original data on which the two networks are trained. With fake data resembling real data, one practical result is that any photograph or video can be altered in ways that leave no visible trace of manipulation, achieving a level of verisimilitude that can fool any human observer. Audio, too, can be faked using GANs. An innocuous, if humorous, example of a deepfake is a video clip of Nicolas Cage dancing in the Austrian countryside as Maria von Trapp from The Sound of Music. The technology underlying deepfakes can benefit society: GANs may help detect glaucoma and vision loss, as well as predict the effects of climate change. Unfortunately, many deepfakes serve more sinister purposes. Indeed, the overwhelming majority of deepfakes found online are pornographic. The potential for emotional harm and blackmail, especially against women, is significant.
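To make the adversarial dynamic concrete, here is a minimal sketch of a GAN training loop in PyTorch. The tiny fully connected networks, hyperparameters, and random stand-in data are illustrative assumptions on my part, not the architecture of any actual deepfake system, which would use far larger convolutional models trained on real images.

```python
# Minimal GAN training loop (PyTorch). Illustrative only: real deepfake
# systems use large convolutional networks and real image datasets.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

# Generator: maps random noise vectors to synthetic samples.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
# Discriminator: outputs a logit scoring a sample as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # Stand-in for a batch of real training data (e.g., image patches).
    return torch.randn(n, data_dim) * 0.5 + 1.0

for step in range(1000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))
    ones = torch.ones(real.size(0), 1)
    zeros = torch.zeros(real.size(0), 1)

    # Discriminator step: learn to separate real samples from fakes.
    d_loss = loss_fn(D(real), ones) + loss_fn(D(fake.detach()), zeros)
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: learn to make fakes the discriminator labels as real.
    g_loss = loss_fn(D(fake), ones)
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

The essential point is the competition: as the discriminator improves at spotting fakes, the generator is forced to produce samples whose statistics ever more closely match those of the training data.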

The implications for politics are obvious. Already, concern abounds with respect to disinformation and its corrosive effects on popular discourse. Disinformation poses a threat, so the standard understanding goes, because malign actors can deploy it to undermine faith in democratic institutions, to widen political polarization, and to stir passions that could lead to violence.1 A foreign disinformation campaign allegedly helped bring Donald Trump to the White House in 2016. Certainly, his insistence that widespread fraud occurred during the 2020 US presidential election paved the way for the shocking events that unfolded on January 6, 2021, at the US Capitol. In the past five years, many democracies have stepped up efforts to fight disinformation, whether through media literacy campaigns, increased scrutiny of social media, or institutional centers of excellence like those that the European Union and the North Atlantic Treaty Organization have set up in Helsinki and Riga, respectively. In response to public pressure, social media giants such as Facebook and Twitter have tried to implement, with varying degrees of effectiveness, new protocols aimed at suppressing the spread of disinformation.2

Lying and dishonesty are as old as politics itself. But, as Garfinkle cautions, a printed word, a radio broadcast, or a tweet might be much less persuasive than an artificially contrived video that looks and sounds real. As a particular form of disinformation, deepfakes could have greater power to incite violence—they could even convey unofficial declarations of war. Deepfakes could also unjustly destroy the careers of politicians through representations that feature them saying things they have never said, or doing things they have never done. Despite these risks, the existing legal regime in many countries, including the United States, is too loose to give regulators a handle on the problem. Yet closing these legal gaps could erode constitutional rights, provoking fresh concerns that governments are using the threat posed by deepfakes to abridge fundamental freedoms, such as free speech. Still, for Garfinkle, what makes deepfakes so potent a threat is not so much that constitutionally appropriate solutions are hard to design, but that deepfakes exploit how blurry the line between politics and entertainment has become. It is far easier to smear than to defend against a smear. Deepfakes thus have the ability to tickle the worst of human nature in ways not possible before.

Are these concerns valid? Do deepfakes have unprecedented potential to distort and to harm our politics even more than standard disinformation?

Garfinkle makes a compelling argument, but there are, thankfully, some reasons for not sharing his pessimism—at least, not fully. One reason turns on the susceptibility of citizens to political falsehoods. Citizens do not believe just anything. Nor do they accept whatever information is handed to them. Their susceptibility is asymmetrical: they believe some sources more than they do others. Partisan identification is partly responsible for this divide. In studies of American politics, political scientists have uncovered significant evidence that partisan identity shapes policy preferences, rather than the other way around.3 Indeed, the Trump era saw voters switch policy views simply on the basis of who was articulating them. Republicans became less attached to states’ rights and free trade under Trump’s presidential leadership, whereas Democrats became more attached to those values. Nor is this true only in the United States. In Canada, where partisanship is much less pronounced, attitudes split along partisan lines in 2019 over a series of genuine images that emerged showing Prime Minister Justin Trudeau wearing blackface as recently as 2001.4

The point here is that those who identify as Republican are probably much more likely than self-identified Democrats to accept the authenticity of a deepfake video that makes a Democratic politician look bad. The opposite would hold true of a deepfake involving a Republican politician. Notwithstanding its crudeness, the video of Nancy Pelosi appearing drunk, highlighted by Garfinkle, is an instructive example. Trump and his lawyer, Rudolph Giuliani, retweeted it, and some of their bona fide followers probably did think in all earnestness that Pelosi was intoxicated. But most Democrats—and, for that matter, self-described Independents—would have already discounted almost any communication by Trump as likely being deceitful. It may not be the case that seeing is believing, but that believing is seeing.

The level of sophistication that characterizes a deepfake may not matter as much as Garfinkle argues it does. With the passage of time, more and more people will come to know of deepfakes and what they can do, even without understanding exactly how GANs work, let alone what machine learning is. Readers of Garfinkle’s essay will have a heightened awareness of the threat that deepfakes pose. Simply being cognizant of the fact that deepfakes exist can diminish their power. Aware that such verisimilar images are now possible, internet users may become much more skeptical when they see footage that does not conform to their expectations. Perhaps this is why, to date at least, deepfakes have played a minimal role in disturbing the politics of Western democracies, especially considering all the opportunities that the 2020 US presidential election and Trump’s unfounded allegations of election fraud created.

Of course, some might argue that sowing doubt and discrediting public institutions are the main goals of any disinformation campaign. If members of society cannot agree on the basic truths that should undergird social and political institutions, then democracy will become more fragile and vulnerable to demagoguery. Alternatively, deepfakes may be used to reinforce existing views, thereby solidifying polarization. These claims are fair, but they may be overstated. Unremitting skepticism cannot be good for democracy, but neither can blind acceptance. Besides, cognitive dissonance—the psychological discomfort that leads individuals to reject new information contradicting their existing beliefs—is hardly unique to the current digital age. Finally, deepfakes may very well solidify partisanship, but even without them, politics is probably toxic enough already for such partisanship to continue to congeal.

As knowledge of deepfakes spreads, so will the tools for detecting them. Computer scientists have already made efforts to identify strategies and craft tools for discerning even the most realistic deepfakes. One team of information technology experts has found that GAN-generated images consistently contain severe artifacts when transformed into the frequency domain. These features in the underlying data can be leveraged to recognize accurately which images are deepfakes and which are not.5 Another team of researchers has developed tools that use standard facial recognition techniques to uncover inconsistencies in the data, which in turn reveal face-swap deepfakes.6 Although these efforts show promise, much more work needs to be done, not least because some deepfake creation technologies employ features that can fool certain deep neural network detectors.7 Yet the fact that this problem exists is not surprising. The history of military technology is replete with examples of an innovation that prompts a counter-innovation, which in turn provokes further responses that, together, form a recursive pattern. Naturally, such spy-versus-spy games play out in the information space because credible threats tend to excite efforts to counter them.8
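As a rough illustration of the frequency-analysis idea described above, the sketch below transforms images with a two-dimensional discrete cosine transform and trains a simple classifier on the resulting spectra, where the upsampling artifacts of GAN pipelines tend to concentrate. It assumes a SciPy and scikit-learn environment, and the random arrays are hypothetical stand-ins for real and GAN-generated images; this is not the actual pipeline of the cited papers.

```python
# Sketch of frequency-domain deepfake detection: classify images by their
# 2D DCT spectra. Data and classifier are placeholders, not the cited work.
import numpy as np
from scipy.fft import dctn
from sklearn.linear_model import LogisticRegression

def dct_features(img: np.ndarray) -> np.ndarray:
    """Return the log-scaled 2D DCT spectrum of a grayscale image."""
    spectrum = dctn(img.astype(np.float64), norm="ortho")
    return np.log1p(np.abs(spectrum)).ravel()

# Hypothetical dataset: 64x64 grayscale crops, label 0 = real, 1 = GAN-made.
rng = np.random.default_rng(0)
real_imgs = rng.random((200, 64, 64))        # stand-in for real photographs
fake_imgs = rng.random((200, 64, 64)) ** 2   # stand-in for GAN output

X = np.stack([dct_features(im) for im in np.concatenate([real_imgs, fake_imgs])])
y = np.concatenate([np.zeros(len(real_imgs)), np.ones(len(fake_imgs))])

# A simple linear classifier can suffice once the artifacts are made
# explicit in the frequency domain.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In practice, such detectors must be retrained as generation techniques evolve, which is precisely the innovation and counter-innovation pattern noted above.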

The best response to deepfakes would be to address the underlying motivations of those who employ these tools maliciously. Curiously, Garfinkle says little about the political ends to which deepfakes are the means. Sure, the purpose may be to excite the passions, to keep citizens “dim-sighted,” or to “influence moist robots,” as it were, but again, the question remains: to what exact end?9 There will always be those who, with varying degrees of effectiveness, use tools such as deepfakes to sow chaos for its own sake. Deepfaked pornography may be disturbingly gratifying for those with particular sexual fantasies or an intent to extort others. Scammers, too, can leverage deepfakes to dupe investors and markets in order to make a fraudulent buck.

But at the political level, where national governments might be contemplating whether to use these tools against one another, it is far from clear how deepfakes can be wielded effectively in international negotiations or crisis diplomacy. Garfinkle suggests certain scenarios that could come to pass in the near future—incitements to riot or declarations of new military capabilities—but even so, such deepfakes would still be subject to verification. For matters of high policy, intelligence agencies would not take at face value just any audiovisual file disseminated online. Moreover, using deceptive technologies to achieve strategic gains is risky because doing so often discloses key capabilities that can be detected and used to improve defenses. In international relations, the ease of deception does not always imply that attacking is easy.10 The attribution problems that can dull the coercive power of cyber weapons may well apply here too: how can a state modify its behavior if it does not know the demand made of it, let alone the organization making the demand?

None of this is to say that we should be complacent about deepfakes. The harm is very real for those women affected by deepfake pornography. Nevertheless, it is important to keep the problems created by GAN technology in perspective. Whether for the right or wrong reasons, people will not believe just anything they see. Governments will not credulously digest verisimilar images and audio and modify their behavior in a way that fits the goals of the disinforming organization. When these technologies first emerged, there may have been a window of opportunity for deepfakes to have made their greatest impact, but now, thanks to improved understanding of them, that window may not be as wide as it used to be.


  1. Many of the arguments about disinformation and deepfakes I make here are drawn from a longer piece of mine: “Disinformation in International Politics,” European Journal of International Security 4, no. 2 (2019): 227–48, doi:10.1017/eis.2019.6. 
  2. For a critical description of some of these anti-disinformation efforts, see Chris Tenove, “Protecting Democracy from Disinformation: Normative Threats and Policy Responses,” The International Journal of Press/Politics 25, no. 3 (2020): 522–27, doi:10.1177/1940161220918740. 
  3. Larry Bartels, “Uninformed Votes: Information Effects in Presidential Elections,” American Journal of Political Science 40, no. 1 (1996): 195, doi:10.2307/2111700. 
  4. Conservative partisans were far more likely than liberal partisans to address the blackface controversy on social media. See Taylor Owen, “Digital Democracy Project Research Memo #5: Fact-Checking, Blackface, and the Media,” Public Policy Memo, October 3, 2019.  
  5. Joel Frank et al., “Leveraging Frequency Analysis for Deep Fake Image Recognition,” International Conference on Machine Learning, PMLR (2020): 3,247–58. 
  6. Shruti Agarwal et al., “Detecting Deep-Fake Videos from Appearance and Behavior,” arXiv (2020), arXiv:2004.14491. 
  7. Thanh Thi Nguyen et al., “Deep Learning for Deepfakes Creation and Detection: A Survey,” arXiv (2019), arXiv:1909.11573, 7. This survey identifies about twenty prominent detection tools that are available as of 2020. 
  8. This is the basic theme in Geoffrey Parker, The Military Revolution: Military Innovation and the Rise of the West, 1500–1800 (Cambridge: Cambridge University Press, 1996). For a much more contemporary example of such dynamics, see Jon Lindsay, “Surviving the Quantum Cryptocalypse,” Strategic Studies Quarterly 14, no. 2 (2020): 49–73. 
  9. Quotes from Adam Garfinkle, “Disinformed,” Inference 5, no. 3 (2020). 
  10. Erik Gartzke and Jon Lindsay, “Weaving Tangled Webs: Offense, Defense, and Deception in Cyberspace,” Security Studies 24, no. 2 (2015): 316–48, doi:10.1080/09636412.2015.1038188. 

Alexander Lanoszka is an assistant professor in the Department of Political Science at the University of Waterloo.
