Letters to the editors

Vol. 6, No. 1 / May 2021

To the editors:

“If we expect no more from citizens than that they gratify their appetites,” Adam Garfinkle warns, “they will be enticed by spectacle and easily taken in by lies. Disinformation will thrive. As it is thriving now.” It is a justifiable concern. Yet the culprits for this plunge into delusion are not the deepfakes and generative adversarial networks (GANs) implicated in Garfinkle’s essay. The real culprit is more mundane and pernicious: the recommendation algorithms that curate the information surfaced by search engines and social media.

By mid-May of 2020, in the midst of the global pandemic, 28% of Americans believed Bill Gates was planning to use COVID-19 to implement a mandatory vaccine program with tracking microchips.1 Belief in this conspiracy is not unique to Americans. In our own global surveys, we find that across Central and South America, the Middle East, Northern Africa, the United States, and Western Europe, 20% of the public believes this bizarre claim.2 Machine-synthesized audio, images, and video—so-called deepfakes—were not part of the creation or spread of this conspiracy. Instead, the conspiracy spread through simple social media posts.

The far-reaching, far-right QAnon conspiracy claims, among other things, that a cabal of Satan-worshipping cannibalistic pedophiles and child sex-traffickers plotted against Donald Trump during his term as US President. A recent poll finds 37% of Americans are unsure whether QAnon is true or false, and 17% believe it to be true.3 Deepfakes were not part of the creation or spread of this conspiracy. This conspiracy was also created and spread through simple social media posts—with Trump’s tacit endorsement.4

The pair of videos depicting Speaker Nancy Pelosi looking inebriated during public appearances were not, as Garfinkle states, GAN-generated deepfakes.5 These low-tech videos were created by simply slowing the original footage by 25% to simulate slurred speech. No sophisticated technology was needed to convince millions that Pelosi was “blowed [sic] out of her mind,” as the accompanying social media posts claimed.
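
To see just how low-tech this manipulation is, consider the sketch below, which slows a clip to 75% of its original speed using the freely available ffmpeg tool. The filenames are placeholders, and the sketch illustrates the general technique rather than reconstructing how the actual videos were produced.

    import subprocess

    # Slow a clip to 75% speed: the kind of trivial edit described above.
    # Filenames are placeholders; illustrative only.
    SPEED = 0.75

    subprocess.run([
        "ffmpeg", "-i", "input.mp4",
        "-filter_complex",
        # setpts stretches the video timestamps; atempo slows the audio
        # while preserving pitch, so the speech drags rather than deepens.
        f"[0:v]setpts=PTS/{SPEED}[v];[0:a]atempo={SPEED}[a]",
        "-map", "[v]", "-map", "[a]",
        "output.mp4",
    ], check=True)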

Deepfakes and technologies designed to deceive are not the common thread that connects Bill Gates’s COVID microchips, QAnon’s Satan-worshipping cannibals, and the litany of conspiracies and outright lies polluting the internet. The common thread is the recommendation algorithms that aggressively promote the internet’s flotsam and jetsam onto news feeds and watch lists, plunging users into increasingly isolated echo chambers devoid of reality.

The classic thought exercise asks, “If a tree falls in a forest and no one is around to hear it, does it make a sound?” The modern equivalent should ask, “If information is on Facebook and an algorithm doesn’t promote it onto a news feed, does it exist?”

Every day, four petabytes—more than four million gigabytes—of data are uploaded to Facebook. But not all of this content is equal in the eyes of Facebook’s management. In 2009, the platform removed the ability for users to chronologically sort their news feed, turning over editorial control to algorithmic curation. Facebook’s own internal researchers found their algorithms “exploit the human brain’s attraction to divisiveness.” The researchers went on to conclude that, if left unchecked, the recommendation algorithms will promote “more and more divisive content in an effort to gain user attention and increase time on the platform.” A separate internal study found that 64% of people who joined an extremist group on Facebook did so because of the company’s recommendation. Facebook’s leadership has chosen to ignore these findings.6
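
The editorial shift amounts to a one-line change. In the sketch below, the posts and the engagement scores are invented placeholders; the point is only that the sort key moves from the user’s clock to the platform’s prediction of what will hold attention.

    from datetime import datetime

    # Hypothetical posts; "engagement" stands in for whatever a platform's
    # models predict will keep a user on the site.
    posts = [
        {"text": "school board minutes", "time": datetime(2021, 5, 1, 9), "engagement": 0.2},
        {"text": "outrage bait", "time": datetime(2021, 5, 1, 7), "engagement": 0.9},
    ]

    # Before 2009: the user's own ordering, newest first.
    chronological = sorted(posts, key=lambda p: p["time"], reverse=True)

    # After 2009: the platform's ordering, most "engaging" first.
    curated = sorted(posts, key=lambda p: p["engagement"], reverse=True)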

Every minute of every day, more than 500 hours of video footage are uploaded to YouTube. Whether any given video is widely seen depends largely on recommendation algorithms: 70% of the footage watched on YouTube is recommended by the company’s algorithms.7 By 2016, Twitter and Instagram had joined Facebook and YouTube in unleashing attention-grabbing recommendation algorithms to control what users read, see, hear, and—ultimately—believe.

Tech companies implemented recommendation algorithms because the algorithms proved remarkably successful at manipulating users. By maximizing clicks, likes, and shares, they also helped maximize profits. And they created vicious feedback loops. If a user searches for content about QAnon on these platforms, whether innocently or otherwise, the algorithms will recommend additional QAnon-related content. A few clicks here, a few clicks there, and the user will quickly be ushered down a rabbit hole from which escape proves difficult.8 This algorithmic amplification is the root cause of the unprecedented speed and reach with which misinformation spreads online.
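
This dynamic is easy to reproduce in miniature. The following toy simulation is not any platform’s actual system; the click model and every number in it are assumptions chosen for illustration. It shows only that a ranker greedily exploiting observed click-through rates, given nothing more than the premise that divisive content draws more clicks, will steadily drift toward recommending the most divisive items.

    import random

    # Toy model of the feedback loop: rank items by observed engagement,
    # where engagement itself rises with divisiveness. Illustrative only.
    random.seed(0)
    items = [{"divisiveness": d / 10, "clicks": 0, "views": 0} for d in range(1, 10)]

    def click_prob(item):
        # Assumed user behavior: more divisive content draws more clicks.
        return 0.1 + 0.5 * item["divisiveness"]

    def score(item):
        # The platform's objective: empirical click-through rate (CTR).
        return item["clicks"] / item["views"] if item["views"] else 1.0

    for _ in range(20_000):
        if random.random() < 0.1:      # occasionally explore at random
            feed = random.sample(items, 3)
        else:                          # otherwise exploit the highest CTRs
            feed = sorted(items, key=score, reverse=True)[:3]
        for item in feed:
            item["views"] += 1
            item["clicks"] += random.random() < click_prob(item)

    # The "optimized" feed has drifted to the most divisive content.
    print([item["divisiveness"] for item in sorted(items, key=score, reverse=True)[:3]])

Nothing in this loop refers to truth or falsity; accuracy is simply not a variable the objective can see.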

The true power of these algorithms was on display when Facebook tweaked its recommendations to favor authoritative news organizations in the days before the 2020 US presidential election.9 The changes worked exactly as intended, reducing election-related misinformation on the platform. Nonetheless, Facebook reversed course just days later when it found that the new algorithm also reduced the average time users spent on the platform and, in turn, ad revenue.

Garfinkle is justified in calling out the potential dangers of deepfakes, which have been used to commit fraud and have been weaponized against women in the form of non-consensual, sexually explicit material.10 But the more significant risk associated with deepfakes comes in the form of the liar’s dividend.11 The increasing ease with which images, audio, and video can be convincingly manipulated means that authentic recordings can be far more easily dismissed as deepfakes, regardless of their truthfulness. It is this excuse to selectively deny the truth, more than the prospect of being duped by any single fake, that is perhaps the most serious threat deepfakes pose.

The larger issue of misinformation can be laid squarely at the feet of the tech companies and the recommendation algorithms they have adopted to titillate and outrage users, so that users spend as many hours as possible clicking, liking, sharing, and tweeting.

The current crisis and the specter of an impending infocalypse12 were both foreseeable and preventable. This is maddening. As Facebook demonstrated, if only for a few days, recommendation algorithms can be adjusted to favor the authoritative, trustworthy, and civil. All that is required is the moral fortitude—or federally mandated legislation13—to recognize the devastating impact social media is having on individuals, societies, and democracies, and to put aside corporate indifference and unchecked greed in favor of even a modicum of decency and social responsibility.


  1. Linley Sanders, “The Difference between What Republicans and Democrats Believe to Be True about COVID-19,” YouGov, May 26, 2020. 
  2. Sophie Nightingale and Hany Farid, “Examining the Global Spread of COVID-19 Misinformation,” arXiv (2021), arXiv:2006.08830v2. 
  3. Mallory Newall, “More Than 1 in 3 Americans Believe a ‘Deep State’ Is Working to Undermine Trump,” Ipsos, December 30, 2020. 
  4. Brandon Carter, “Trump, Addressing Far-Right QAnon Conspiracy, Offers Praise for Its Followers,” NPR, August 19, 2020. 
  5. Saranac Hale Spencer, “Viral Video Manipulates Pelosi’s Words,” FactCheck.org, 2020. 
  6. Jeff Horwitz and Deepa Seetharaman, “Facebook Executives Shut Down Efforts to Make the Site Less Divisive,” Wall Street Journal, May 26, 2020. 
  7. Marc Faddoul, Guillaume Chaslot, and Hany Farid, “A Longitudinal Analysis of YouTube’s Promotion of Conspiracy Videos,” arXiv (2020), arXiv:2003.03318. 
  8. Julia Carrie Wong, “Down the Rabbit Hole: How QAnon Conspiracies Thrive on Facebook,” The Guardian, June 25, 2020. 
  9. Kevin Roose, “Facebook Reverses Postelection Algorithm Changes That Boosted News from Authoritative Sources,” The New York Times, December 16, 2020. 
  10. Lorenzo Franceschi-Bicchierai, “Listen to This Deepfake Audio Impersonating a CEO in Brazen Fraud Attempt,” Vice News, July 23, 2020; Mary Anne Franks and Ari Ezra Waldman, “Sex, Lies, and Videotape: Deep Fakes and Free Speech Delusions,” Maryland Law Review 78, no. 4 (2018): 892. 
  11. Robert Chesney and Danielle Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2019): 1753, doi:10.2139/ssrn.3213954. 
  12. The term infocalypse has been attributed to the technologist Aviv Ovadya. Charlie Warzel, “Believable: The Terrifying Future of Fake News,” BuzzFeed News, February 11, 2018. 
  13. Tom Malinowski and Anna G. Eshoo, “Reps. Malinowski and Eshoo Introduce Bill to Hold Tech Platforms Liable for Algorithmic Promotion of Extremism,” press release, October 20, 2020. 

Hany Farid is a Professor at the University of California, Berkeley, with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information.

