Letters to the editors

Vol. 7, No. 1 / April 2022

To the editors:

A man who never was, Major William Martin, duped German Military Intelligence into a false belief that helped the Allies a great deal in World War II.1 Producing this false belief was the objective of Operation Mincemeat, now venerated among intelligence professionals. Martin, along with the supporting narrative of misinformation in which he starred, was an ingenious deepfake. His body, a casualty of a supposed military plane crash, was staged so that it would be found floating near the southern coast of Spain with a briefcase containing fake British military plans chained to one wrist. What agents deserve credit for this ruse? A host of human ones working in the world of espionage. British Military Intelligence labored painstakingly to create the illusion and pull the wool over the eyes of its adversaries. And what is an example of a supporting piece of narrative they wisely included? One such piece was the assignment to Martin of a military rank consistent with having sensitive military documents on his person. To avoid an inconsistency that the Nazis would doubtless have perceived, the designers of the ruse wrote “Major” into the fiction, and specifically into the forged documents placed with the corpse.

Could a future AI, or an ensemble thereof, ever create a deepfake like this? No. The argument for this negative answer can be starkly stated: AI is incapable of literary creativity, yet such creativity is centrally required for conceiving, sculpting, and deploying the likes of Mincemeat.

As to why the first premise in this argument is true, the lead author, Selmer Bringsjord, spent a small fortune of time and money trying to show that in fact this premise is false—that AI is capable of literary creativity. This sustained effort was expended in the late 1990s in collaboration with David Ferrucci, the chief architect of the Jeopardy!-winning AI known as Watson.2 The results of this research convinced Bringsjord of the following: AIs such as the system born of this effort are only weakly creative, and thus capable of convincing casual observers that they are deeply creative. They ultimately execute no more than high-tech legerdemain, made possible by clever human engineers. The haversack ruse, in broad strokes, is a preexisting pattern,3 but a novel instantiation of it, such as Mincemeat, requires originating numerous plot twists, characters,4 and natural-language sentences of no small complexity. As Lady Lovelace would remind us, computers do only what we tell them to do; they originate nothing.5 Mincemeat was not a rehash of data from the past in the service of shallow prestidigitation; it was an original narrative, born in the brains of brilliant human spooks.

Since humans are certainly capable of narratological deception, against which the nations of NATO must defend themselves, a pressing question emerges. Could a future AI ever see through a deepfake like Mincemeat, and expose it as such? Here again the correct answer is no, or, at the very least, not without human guidance. In order to unmask a series of events as points in a concocted story, one must have the ability to originate stories that are candidates for what was concocted. We have already seen that such origination is beyond the reach of mere AIs. If Professor Smith knows that student Jones could well be lying when he offers his elaborate excuse as to why he missed the midterm, Smith’s best defense is to weigh the claimed narrative against alternatives she can herself generate, including ones that render Jones blameworthy.

These answers of ours clash directly and deeply with those given by Adam Garfinkle in his important and eloquent essay on why AI’s capacity for deepfake production is with us now, and worrisome. Garfinkle fears today’s data-driven machine learning. In particular, he finds frightening a specific type of machine learning harnessed for deepfake production: generative adversarial networks (GANs).6 But machine learning of any type, by definition, is incapable of originating new narrative. This is so because, by definition, machine learning merely cranks out input-output behavior that approximates functions the AI in question has been spoon-fed by humans. Hence the three-part Achilles’ heel of machine learning–based deepfake technology: it cannot, as a matter of mathematics, invent a fake narrative never seen before, and hence not to be found in any existing data; it cannot reason out the non-numerical socio-cognitive implications of a prospective deepfake were it released; and it cannot then preemptively prevent some of these implications from materializing. These unwanted implications are always inconsistencies, and sometimes outright contradictions. If student Jones says that he missed the midterm because he had an allergic reaction to penicillin and had to be rushed to the hospital to receive a shot, something or someone rushed him there. If Professor Smith asks how this worked, and Jones says that he drove himself to the emergency room, jumped out of his car, and dashed inside, this implies that Jones will have been captured on closed-circuit television in the emergency room. If he is not seen in that footage, inconsistency has materialized; the fake has been foiled.
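To make the inconsistency hunting concrete, here is a minimal sketch, in Python with SymPy’s propositional-logic module, of how the trap in the Jones example might be checked mechanically. The proposition names and the background implications are our own illustrative assumptions for this sketch; no fielded system is being described.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Implies, Not
from sympy.logic.inference import satisfiable

# Illustrative propositions (names are ours, for this sketch only):
#   drove      - Jones drove himself to the emergency room
#   entered_er - Jones entered the emergency room
#   on_cctv    - Jones appears on the emergency room's closed-circuit footage
drove, entered_er, on_cctv = symbols("drove entered_er on_cctv")

# The story Jones tells.
story = And(drove, entered_er)

# Background knowledge Professor Smith brings to bear.
background = And(
    Implies(drove, entered_er),    # driving oneself to the ER entails entering it
    Implies(entered_er, on_cctv),  # entering the ER entails appearing on its CCTV
)

# The trap: the footage is checked, and Jones is not on it.
observation = Not(on_cctv)

# A fake is foiled exactly when story, background, and observation
# cannot all be true together.
if satisfiable(And(story, background, observation)):
    print("The story survives this trap.")
else:
    print("Inconsistency found: the fake is foiled.")
```

Nothing in this toy check is creative; the creativity lies in conjecturing which propositions and implications are worth testing, which is precisely the human contribution we have been describing.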

Machine-learning systems cannot reason to novel hypothetical contradictions, and then, seeing these contradictions, adjust the candidate fake so as to avoid them, or produce additional concomitant deepfakes to preclude its victims from perceiving the contradictions. In short, machine learning cannot do what the human spies in Mincemeat did, and what human spies do throughout their careers: originate, analyze, and refine stories filled with all the cognitively layered, non-numerical, declarative nuances of human mental life.

Such cognitive layering can be seen clear as day in Mincemeat. The British spies believed that their elaborately planned and planted stimuli would cause certain false beliefs in Nazi minds about Allied beliefs. That’s three layers of belief—above the reach of nonhuman animals, and above the reach of humans aged five and younger.7 In some simulations three layers are within the grasp of logicist AI.8 But human espionage routinely reaches cognitive layering at depths far beyond any machine reasoning.

Consider the phenomenon of sleeper agents in espionage, and the work of those charged with exposing them. There can be little doubt that there are sleepers in some NATO countries—people who appear to be leading uneventful lives as law-abiding citizens with normal jobs, but are really secret agents of an adversary nation, in hibernation. Denote such a person as “the Sleeper.” The Sleeper, to start, is utterly confident that no one knows of his real role. But assume that some detective, the Detective, has just discovered that the Sleeper is in fact a sleeper. This entails that the Detective believes the Sleeper is a sleeper. Next, suppose that the Sleeper now becomes aware of the fact that the Detective is on to him; this entails that the Sleeper believes that the Detective believes that the Sleeper is a sleeper. Of course, the Detective can next discover that the Sleeper knows that the Detective is investigating him. This entails that the Detective believes the Sleeper believes the Detective believes the Sleeper is a sleeper.

We have reached three layers. Now the Detective’s boss can be apprised of the situation, perhaps because the Detective is requesting funds for cyber-defense in support of the investigation. That immediately implies that the Boss believes the Detective believes the Sleeper believes the Detective believes the Sleeper is a sleeper.

The reader can see where this is going. Once the Detective sees that his boss grasps the situation, we have our fifth level: the Detective believes the Boss believes the Detective believes the Sleeper believes the Detective believes the Sleeper is a sleeper.

There is no reason to stop here, since, for instance, the Sleeper could receive information about the Boss, courtesy of the fact that the office of the Boss is bugged by the Sleeper’s sponsor. We thus reach the sixth layer. Such layering looks jarring when iterated on the page, but even garden-variety human espionage requires that those who would sustain fakery, such as the Sleeper’s remaining undetected, cognize six-layer propositions, and more. For any AI, reasoning and planning over such layered declarative information are simply not possible, even in the foreseeable future.
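For readers who want the layers spelled out symbolically, here is one way to render them with doxastic belief operators. The notation is ours, a sketch in the spirit of the logicist formalisms cited in the endnotes, and we read the sixth layer as the Sleeper coming to believe the fifth-layer proposition via the bug.

```latex
% B_d, B_s, B_b: "the Detective believes", "the Sleeper believes", "the Boss believes";
% s: "the Sleeper is a sleeper". Notation is ours, for illustration only.
\begin{align*}
\text{Layer 1:}\quad & \mathbf{B}_d\, s \\
\text{Layer 2:}\quad & \mathbf{B}_s \mathbf{B}_d\, s \\
\text{Layer 3:}\quad & \mathbf{B}_d \mathbf{B}_s \mathbf{B}_d\, s \\
\text{Layer 4:}\quad & \mathbf{B}_b \mathbf{B}_d \mathbf{B}_s \mathbf{B}_d\, s \\
\text{Layer 5:}\quad & \mathbf{B}_d \mathbf{B}_b \mathbf{B}_d \mathbf{B}_s \mathbf{B}_d\, s \\
\text{Layer 6:}\quad & \mathbf{B}_s \mathbf{B}_d \mathbf{B}_b \mathbf{B}_d \mathbf{B}_s \mathbf{B}_d\, s
\end{align*}
```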

Garfinkle’s fear is nonetheless telling, for what is frightening is the use of machine learning by clever and creative humans who can augment such mere calculation with the creativity and logic of an in-the-field John le Carré. This dynamic is seen in miniature in the recent AI-enabled but human-deployed deepfake voicing of Anthony Bourdain in the documentary Roadrunner.9 Bourdain passed away in 2018. This deception points the way toward the much more serious scenarios Garfinkle sees coming. A GAN-generated deepfake video that shows a democratic leader casting aspersions upon someone venerated by the viewers of said video, combined with the Mincemeat-like arrangement of things in the world to align with this video—now that is indeed something to join Garfinkle in worrying about.10

But notice the nature of that danger: AI-mediated espionage, carried out by humans who are a lot smarter than the AI and who are in control, armed with the high-tech weaponry of machine intelligence. This would hardly impress Solomon, whose dictum that there is nothing new under the sun is confirmed by careful analysis of each and every allegedly novel technology or tactic. The chief danger is nothing more than what our adversaries have served up in the dynamic of war and intelligence since time immemorial.

Of course, just because the dynamic is an old, familiar one doesn’t mean it is not one to fear. It is always prudent to fear malicious humans armed with a new breed of weapons. Franklin D. Roosevelt was sufficiently afraid of Nazis with nuclear weapons to ensure that his own soldiers and spies were so armed first.

So what is the answer? Garfinkle writes that the antidote to deepfake production, in broad strokes, is seeing, wanting, and obtaining “the divine gold of wisdom.” That seems a bit ethereal. Here is a less obscure, practical plan, and a rather familiar one: rely on suitably trained, smart humans, armed with artificial agents whose powers, by virtue of their use not of data but of computational logic, include and far exceed those of machine learning. These people will fight for right in an AI arms race to detect deepfakes, neutralize them, and produce and deploy their own superior ones.

But how is the fight to be waged and won? Invariably, by uncovering the inconsistencies that are necessarily generated by every fake, however deep it may be. There is at least one outright inconsistency lurking beneath the surface of every single fake. Even fakes so deep that only a god could invent and perpetrate them entail inconsistency. This is a mathematical fact, an unassailable certainty. The easiest way to see it is to attend to the simplest kind of deception: a lie from agent A to agent B that P. Leaving aside niceties that philosophers can be left to debate, P must by definition conflict with something liar A knows: namely, not-P, or some set of propositions, S, that entails not-P. Even in simple lying, the antidote is to find and reveal the inconsistency. Usually the best way to do that is for the detective to intervene: to lay traps—experiments, really—and see whether what they reveal preserves consistency or exposes the contradiction.
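One minimal way to formalize the point, under the simplifying assumptions just granted, is as follows. Here K_A abbreviates what liar A knows, Γ the evidence the detective’s traps produce, and the notation is ours, offered only as a sketch.

```latex
% A lie that P requires the liar to know something that conflicts with P:
\[
  \mathrm{Lies}(A, B, P) \;\Rightarrow\;
    \mathbf{K}_A \neg P \;\vee\; \exists S \,\bigl( \mathbf{K}_A S \,\wedge\, S \vdash \neg P \bigr)
\]
% The fake is foiled when the trap evidence, together with P, is inconsistent:
\[
  \Gamma \cup \{P\} \vdash \bot
\]
```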

If intelligence officers and spies armed with AI and working on our behalf fail, then yes, dark is our future. Yet this is the calculus that has been operative from the dawn of conflict, and the age of AI will simply follow suit. Clubs, then knives, then swords and spears, then bows, then—momentously—crossbows, then guns, then missiles, then nukes—all these weapons and more were wielded best when accompanied by deception, and were best thwarted when counter-deception exposed inconsistency. Nothing now is new. AI, whether of the machine-learning variety Garfinkle fears or of the more powerful logic-based type, is simply another class of weapons. For espionage, it is perhaps now the most potent class there is.


  1. See Ben MacIntyre, Operation Mincemeat: How a Dead Man and a Bizarre Plan Fooled the Nazis and Assured an Allied Victory (New York: Crown, 2011). 
  2. Selmer Bringsjord and David Ferrucci, Artificial Intelligence and Literary Creativity: Inside the Mind of Brutus.1 (Mahwah: Lawrence Erlbaum, 2000). Ferrucci’s ideas regarding AI and narrative currently fuel a promising AI company: Elemental Cognition.
  3. For an example from World War I, see the section titled “The ‘Haversack Ruse’” in the article on Wikipedia, “Richard Meinertzhagen.” 
  4. The creation of three-dimensional characters is on its own an AI-paralyzing challenge, one met masterfully by great dramatists such as Henrik Ibsen. See Selmer Bringsjord and Alexander Bringsjord, “Synthetic Worlds and Characters, and the Future of Creative Writing,” in Working through Synthetic Worlds, ed. C. A. P. Smith, Kenneth Kisiel, and Jeffrey Morrison (Surrey: Ashgate, 2009), 235–55. An early draft preprint of the paper is now available online.
  5. This position is exactly what Alan Turing ascribed to Ada Lovelace, when considering her skepticism about the cognitive power of machines, as a counter to his exuberance. See Alan Turing, “Computing Machinery and Intelligence,” Mind 59, no. 236 (1950): 433–60, doi:10.1093/mind/LIX.236.433. For a test of whether a machine possesses the kind of creativity we claim is impossible for it to have, the Lovelace Test, see Selmer Bringsjord, Paul Bello, and David Ferrucci, “Creativity, the Turing Test, and the (Better) Lovelace Test,” Minds and Machines 11 (2001): 3–27, doi:10.1023/A:1011206622741. And for a podcast on this test featuring Selmer Bringsjord, see Robert Marks, “The Turing Test Is Dead. Long Live the Lovelace Test,” Mind Matters podcast, episode 76, April 2, 2020.
  6. Garfinkle is far from alone in his worry, once it is generalized to include non-logicist AI of any type. More than a few humans are worried about what we find impossible: the genuine composition by GPT-3 and its successors of robust, new disinformation. See, for instance, Center for Security and Emerging Technology, “Can AI Write Disinformation?,” online event at 4:00 p.m., September 16, 2021. 
  7. For a discussion, see Michael Tomasello, “How Children Come to Understand False Beliefs,” Proceedings of the National Academy of Sciences 115, no. 34 (2018): 8,491–98, doi:10.1073/pnas.1804761115.
  8. See, for example, Konstantine Arkoudas and Selmer Bringsjord, “Propositional Attitudes and Causation,” International Journal of Software and Informatics 3, no. 1 (2009): 47–65. 
  9. We recommend reading “The Ethics of a Deepfake Anthony Bourdain Voice” by Helen Rosner in The New Yorker, July 17, 2021.
  10. Fortunately, the deepfake videos we have recently seen of President Volodymyr Zelenskyy during the invasion of Ukraine, however otherwise disturbing and portentous they are, have not been skillfully contextualized with Mincemeat-level narrative. 

Selmer Bringsjord is professor of Computer Science and Cognitive Science at Rensselaer Polytechnic Institute.

Alexander Bringsjord is a Risk Advisory Manager at Deloitte.

Naveen Sundar Govindarajulu is a deep learning researcher, and Head of AI at Stealth Startup.

