To the editors:
“If we expect no more from citizens than that they gratify their appetites,” Adam Garfinkle warns, “they will be enticed by spectacle and easily taken in by lies. Disinformation will thrive. As it is thriving now.” It is a justifiable concern. Yet the culprits behind this plunge into delusion are not the deepfakes and generative adversarial networks (GANs) implicated in Garfinkle’s essay. The real culprit is both more mundane and more pernicious: the recommendation algorithms that curate the information surfaced by search engines and social media.
By mid-May of 2020, in the midst of the global pandemic, 28% of Americans believed Bill Gates was planning to use COVID-19 to implement a mandatory vaccine program with tracking microchips.1 Belief in this conspiracy is not unique to Americans. In our own global surveys, we find that across Central and South America, the Middle East, Northern Africa, the United States, and Western Europe, 20% of the public believes this bizarre claim.2 Machine-synthesized audio, images, and video—so-called deepfakes—were not part of the creation or spread of this conspiracy. Instead, the conspiracy spread through simple social media posts.
The far-reaching, far-right QAnon conspiracy claims, among other things, that a cabal of Satan-worshipping cannibalistic pedophiles and child sex-traffickers plotted against Donald Trump during his term as US President. A recent poll finds 37% of Americans are unsure whether QAnon is true or false, and 17% believe it to be true.3 Deepfakes were not part of the creation or spread of this conspiracy. This conspiracy was also created and spread through simple social media posts—with Trump’s tacit endorsement.4
A pair of videos depicting Speaker Nancy Pelosi looking inebriated during public appearances were not, as Garfinkle states, GAN-generated deepfakes.5 The low-tech videos were created simply by slowing the original footage by 25% to simulate slurred speech. Sophisticated technology was not needed to convince millions that Pelosi was “blowed [sic] out of her mind,” as the accompanying social media posts claimed.
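To appreciate just how little sophistication this required, consider a minimal sketch of the same manipulation, driven from Python using the free ffmpeg tool. This is an illustration under stated assumptions, not the method actually used: it assumes ffmpeg is installed, and the file names are hypothetical placeholders.

```python
# A minimal sketch of the low-tech manipulation described above:
# re-encoding a clip so it plays at 75% of its original speed.
# Assumes the ffmpeg command-line tool is installed; file names
# are hypothetical placeholders.
import subprocess

def slow_video(src: str, dst: str, speed: float = 0.75) -> None:
    """Re-encode src so it plays back at `speed` times the original rate."""
    subprocess.run([
        "ffmpeg", "-i", src,
        # Stretch the video timestamps; a slower speed means a larger multiplier.
        "-filter:v", f"setpts={1 / speed:.4f}*PTS",
        # Slow the audio tempo without shifting pitch, which is what makes
        # speech sound slurred rather than merely deeper.
        "-filter:a", f"atempo={speed}",
        dst,
    ], check=True)

slow_video("pelosi_original.mp4", "pelosi_slowed.mp4")
```

One free tool and a dozen lines of code: that is roughly the entire technical apparatus needed to produce footage that millions mistook for the real thing.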
Deepfakes and technologies designed to deceive are not the common thread that connects Bill Gates’s COVID microchips, QAnon’s Satan-worshipping cannibals, and the litany of conspiracies and outright lies polluting the internet. The common thread is the recommendation algorithms that aggressively promote the internet’s flotsam and jetsam onto news feeds and watch lists, plunging users into increasingly isolated echo chambers devoid of reality.
The classic thought exercise asks, “If a tree falls in a forest and no one is around to hear it, does it make a sound?” The modern equivalent should ask, “If information is on Facebook and an algorithm doesn’t promote it onto a news feed, does it exist?”
Every day, four petabytes—more than four million gigabytes—of data are uploaded to Facebook. But not all of this content is equal in the eyes of Facebook’s management. In 2009, the platform removed the ability for users to chronologically sort their news feed, turning over editorial control to algorithmic curation. Facebook’s own internal researchers found their algorithms “exploit the human brain’s attraction to divisiveness.” The researchers went on to conclude that, if left unchecked, the recommendation algorithms will promote “more and more divisive content in an effort to gain user attention and increase time on the platform.” A separate internal study found that 64% of people who joined an extremist group on Facebook did so because of the company’s recommendation. Facebook’s leadership has chosen to ignore these findings.6
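The editorial shift from a chronological feed to engagement-driven curation is easy to picture in code. The sketch below is purely illustrative: the post fields, weights, and scoring function are my assumptions, not Facebook’s actual system.

```python
# Illustrative contrast between a chronological feed and an
# engagement-ranked feed. All fields and weights are invented;
# this is not Facebook's actual ranking code.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int           # seconds since epoch
    predicted_clicks: float  # a model's estimate of engagement if shown
    divisiveness: float      # 0.0 (anodyne) to 1.0 (maximally divisive)

def chronological_feed(posts: list[Post]) -> list[Post]:
    # The pre-2009 default: newest first, no editorial judgment.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    # Algorithmic curation: rank by expected attention. Because divisive
    # content reliably attracts clicks, an engagement objective ends up
    # promoting it, with no explicit intent to do so.
    def score(p: Post) -> float:
        return p.predicted_clicks * (1.0 + p.divisiveness)
    return sorted(posts, key=score, reverse=True)

feed = engagement_feed([
    Post("outlet", 1000, predicted_clicks=2.0, divisiveness=0.1),
    Post("provocateur", 900, predicted_clicks=2.0, divisiveness=0.9),
])
# The older, more divisive post now outranks the newer, calmer one.
```

Nothing in the second function names divisiveness as a goal; the promotion of divisive content falls out of the objective itself, which is precisely what Facebook’s internal researchers observed.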
Every minute of every day, more than 500 hours of video footage are uploaded to YouTube. Whether any given video is widely seen depends largely on recommendation algorithms: seventy percent of the footage watched on YouTube is recommended by the company’s algorithms.7 By 2016, Twitter and Instagram had joined Facebook and YouTube in unleashing attention-grabbing recommendation algorithms to control what users read, see, hear, and—ultimately—believe.
Tech companies embraced recommendation algorithms because they proved remarkably effective at manipulating users. By maximizing clicks, likes, and shares, recommendation algorithms also helped maximize profits. In doing so, they created vicious feedback loops. If a user searches for content about QAnon on these platforms, whether innocently or otherwise, the algorithms will recommend additional QAnon-related content. A few clicks here, a few clicks there, and the user is quickly ushered down a rabbit hole from which escape proves difficult.8 This algorithmic amplification is the root cause of the unprecedented speed and reach with which misinformation spreads online.
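The rabbit-hole dynamic can be captured in a toy simulation. Every value below is invented for illustration; the point is only the structure of the loop, in which each accepted recommendation sharpens the very profile that produces the next recommendation.

```python
# Toy model of the feedback loop: a user's inferred interest profile is
# updated after every recommendation, and conspiratorial content is
# assumed to draw more engagement. All numbers are invented.
ENGAGEMENT_PULL = {"mainstream": 1.0, "qanon": 1.5}  # assumed click bias

def recommend(interests: dict[str, float]) -> str:
    # Rank topics by inferred interest times expected engagement.
    return max(interests, key=lambda t: interests[t] * ENGAGEMENT_PULL[t])

def simulate(steps: int = 6) -> None:
    # A user who has searched for QAnon a few times, innocently or otherwise.
    interests = {"mainstream": 0.5, "qanon": 0.5}
    for step in range(steps):
        pick = recommend(interests)
        interests[pick] += 0.25  # each accepted recommendation reinforces itself
        total = sum(interests.values())
        interests = {t: v / total for t, v in interests.items()}
        print(f"step {step}: recommended {pick!r} -> profile {interests}")

simulate()
```

Within a handful of iterations the profile is dominated by the conspiratorial topic, and the mainstream content the user started with has all but vanished from view.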
The true power of these algorithms was on display when Facebook tweaked its recommendations to favor authoritative news organizations in the days before the 2020 US presidential election.9 The changes worked exactly as intended, reducing election-related misinformation on the platform. Nonetheless, Facebook reversed course just days later, after finding that the new algorithm also reduced the average time users spent on Facebook and, in turn, ad revenue.
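In the terms of the earlier sketch, the pre-election change amounts to one extra multiplier in the scoring function. The weights and domain names below are hypothetical; Facebook’s actual signal was reportedly an internal “news ecosystem quality” score, the details of which are not public.

```python
# Hypothetical version of the pre-election adjustment: the same
# engagement score, multiplied by a per-source authority weight.
# The weights and domain names are invented for illustration.
AUTHORITY = {"established-outlet.example": 1.0, "fringe-site.example": 0.2}

def adjusted_score(predicted_clicks: float, divisiveness: float, source: str) -> float:
    engagement = predicted_clicks * (1.0 + divisiveness)
    # Unknown sources receive a middling default weight.
    return engagement * AUTHORITY.get(source, 0.5)
```

That so small an adjustment measurably reduced misinformation, and was rolled back anyway, is the point: the dial exists, and it can be turned in either direction.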
Garfinkle is justified in calling out the potential dangers of deepfakes, which have been used to commit fraud and have been weaponized against women in the form of non-consensual, sexually explicit material.10 But the more significant risk associated with deepfakes comes in the form of the liar’s dividend.11 The increasing ease with which images, audio, and video can be convincingly manipulated means that genuine recordings can be far more easily dismissed as deepfakes. The most serious threat posed by deepfakes, then, may not be that we are duped by them, but that they furnish an excuse to selectively deny the truth.
The larger issue of misinformation can be laid squarely at the feet of the tech companies and the recommendation algorithms they have adopted to titillate and outrage users, keeping them clicking, liking, sharing, and tweeting for as many hours as possible.
The current crisis, and the specter of an impending infocalypse,12 were both foreseeable and preventable. This is maddening. As Facebook demonstrated, if only for a few days, recommendation algorithms can be adjusted to favor the authoritative, trustworthy, and civil. All that is required is the moral fortitude—or federally mandated legislation13—to recognize the devastating impact that social media is having on individuals, societies, and democracies, and to put aside corporate indifference and unchecked greed in favor of even a modicum of decency and social responsibility.