Political Science / Book Review

Vol. 3, No. 4 / February 2018

Homo Deus: A Brief History of Tomorrow
by Yuval Noah Harari
Harper, 464 pp., USD$35.00.

Craignez, Seigneur, craignez que le ciel rigoureux ne vous haïsse assez pour exaucer vos vœux.
Jean Racine, Phèdre

Yuval Harari is a young Israeli historian. In his first book, Sapiens: A Brief History of Humankind, Harari surveyed the history of the human race; in his second, Homo Deus: A Brief History of Tomorrow, he has written an account of its future.1 Neither book is brief. Sapiens was a notable best seller. Harari may not expect lightning to strike twice; he would not be inconvenienced if it did. In Sapiens, Harari had no very great use for the monotheistic religions of mankind, nor for the agricultural practices that, he supposed, made them possible. He commended stone age cultures with the enthusiasm of a man not required to live in any of them. For all that, Sapiens was very much a work of Whig history, an account of successive revolutions, each prefiguring the next. In Homo Deus, Harari argues that human beings are shortly to be improved. Greatly so. For a start, better genes, better neural circuits, better biochemistry. Thereafter, a variety of implantable contraptions: chips, stents, or shunts. Finally, a full promotion to the pantheon: computer scientists, at last, inscribing intelligence in inorganic matter; the old-fashioned human body declining into desuetude, replaced by the filaments and files of an alien form of life.

Our Colonial Master

The West now commands a universal civilization, V. S. Naipaul once observed. Dense, elusive, and complex, its principles extend as far as its power to convey them. “Simple charms alone cannot be acquired from it; other, difficult things,” he writes, “come with it as well: ambition, endeavor, individuality.” It is a civilization that imposes its own stern constraints on those who live within it, “unacknowledged, but all the more profound.”2

These ideas are easily parodied. The Irish academic Mark Humphrys writes that in the “seventeenth and eighteenth centuries, the West invented science, democracy and capitalism.” It was not a minute too soon. “After 5,000 years of ignorance, superstition, tyranny, war, genocide, and poverty, the solutions to mankind’s fundamental problems were at last discovered.”3

If the solutions to superstition, tyranny, war, genocide, and poverty were discovered in the seventeenth and eighteenth centuries, they were, those solutions, well hidden in the twentieth.

The idea of a universal civilization—the very idea—is by no means new. In the second century CE, Aelius Aristides delivered a remarkable oration, a celebration of Roman greatness.

If one considers the vast extent of your empire he must be amazed that so small a fraction of it rules the world, but when he beholds the city and its spaciousness it is not astonishing that all the habitable world is ruled by such a capital … Your possessions equal the sun’s course … You do not rule within fixed boundaries, nor can anyone dictate the limits of your sway … Whatever any people produces can be found here, at all times and in abundance … Egypt, Sicily, and the civilized part of Africa are your farms; ships are continually coming and going.4

The concluding words of this oration have a poignant irony that time has not effaced: “The whole world prays,” Aristides said, “in unison that your empire may endure forever.”

A panegyric is the overflow in rhetoric of a system of belief. Our own is severely abstract, monotheistic in its single-mindedness, and fully accessible only to a scientific priesthood. “And we have removed from thee thy veil,” the Koran remarks in verse 50:22, “and thy sight today is piercing.” The veil removed, what is revealed is an edifice of great grandeur but incomplete aspect. It is in its form classical and austere. It appeals to a finite set of exact and fundamental theories. It is unique. What can be known must be known as a derivation from its theories.5 But if it is unique, it is also incomplete. General relativity and quantum mechanics are both true, but they are not true together. Physicists anticipate their incarnation in a single, finite, all-encompassing, and exact theory. The incarnation is the source of its greatness, the place from which power flows.

Past Imperfect

Naipaul wrote during the last decade of the twentieth century; he wrote to uphold the dignity of a universal civilization—that, and its difficulty. Writing almost twenty years later, Harari is concerned with documenting its dissolution. The liberal belief in individualism, Harari writes, is based on the assumption that every human being embodies a single, indivisible essence, something purely his own, free in its action, autonomous in its choices. Like words written on water, these ideas are destined to disappear. “Organisms are algorithms,” Harari writes, and human beings, “an assemblage of many different algorithms lacking a single inner voice or a single self.”6 These algorithms are a part of the great wheel of being discerned by the sciences; and, no matter his private devotions, it is to the sciences that Harari bends his public knee.

Harari is a Whig historian, but he is not a Whig optimist. He is proud of himself as a man prepared to see things as they really are. Whether they really are as he sees them is rather less clear. Homo Deus is a work of speculation. Standards are looser than they might otherwise be. This is inevitable. In writing about the near future, Harari is guessing, and in writing about the far future, when human beings have long promoted themselves into the inorganic world, he is guessing again. For all that, Homo Deus is intended as a work of history, and the speculations in which Harari is engaged follow a familiar logical pattern. They are like initial value problems in physics. The future that Harari discerns must be projected from the historical present. The historical present. The day before yesterday is not good enough. Something more spacious is needed, some sense of the times in which we live.

“During the second half of the twentieth century,” Harari writes, “[the] Law of the Jungle has finally been broken, if not rescinded.”7 By the law of the jungle, he means the state of civil society under conditions of war, famine, and disease. These are the conditions under which humanity has long lived and suffered. If they have not entirely disappeared, they are, at least, in abeyance.

Are they? Are they really? “Whereas in ancient agricultural societies,” Harari writes, “human violence caused about 15 percent of all deaths, during the twentieth century violence caused only 5 percent of deaths, and in the early twenty-first century it is responsible for about 1 percent of global mortality.”8 Stone age violence no longer commands anyone’s moral interest, or indignation, and, in any case, Harari’s assessment of prehistoric violence is, as Brian Ferguson observed, “utterly without empirical foundation.”9 Seventeen years into the twenty-first century, we remain bound to the twentieth, haunted by its horrors. Harari’s assertion that during the twentieth century violence caused only five percent of deaths worldwide is morally obtuse. The decline in violence is, most often, a statistical artifact of the growth in the world’s population. Roughly six million Poles of Poland’s pre-war population of thirty-five million died during the Second World War—seventeen percent, or roughly one in six. The world’s population in 1939 was 2.3 billion. A quarter of one percent of the world’s population perished in Poland. Which number better expresses the horror: seventeen percent or a quarter of one percent? Would the horror have been less had the population of South America been greater? Neither murder nor genocide has in the twentieth century been randomly distributed. The world’s population is an irrelevance.
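
The arithmetic behind the two figures, for the record, is nothing more than division:

\[
\frac{6{,}000{,}000}{35{,}000{,}000} \approx 17\ \text{percent}, \qquad
\frac{6{,}000{,}000}{2{,}300{,}000{,}000} \approx 0.26\ \text{percent}.
\]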

“In most countries today,” Harari writes, “overeating has become a far worse problem than famine.”10 There are today famines taking place or about to take place in northern Nigeria, Somalia, Yemen, and South Sudan. Some twenty million people, Secretary-General António Guterres of the United Nations observed, are at risk.11 They are not at risk because fat people are fat. They are at risk because they have no food.12 If Chinese peasants are becoming obese, it has not been widely reported. Of the most terrible famines in history, many took place in the twentieth century. Persia suffered famine between 1917 and 1919. Eight to nine million people died. A sixth of the population of Turkestan died of hunger between 1917 and 1921. Famine in Russia in 1921 caused five million deaths; from 1928 to 1930, famine in northern China, three million. Famine in the Ukraine between 1931 and 1934 caused five million deaths; famine in China in 1936, five million. One million people died of famine in Leningrad during its wartime blockade by the German army. In 1942 and 1943, famines in China and Bengal caused between three and five million deaths. Two and a half million Javanese died of hunger during the Japanese occupation. The little-known Soviet famine of 1947 caused roughly one and one half million deaths. Famine killed between fifteen and forty million Chinese between 1959 and 1961. One million people died of hunger during the Sahel drought between 1968 and 1972. Of hunger, note, and not obesity. The North Korean famine of 1996 was responsible for between three hundred thousand and 3.5 million deaths. No one knows its true extent—circumstances that, if they did not chill the blood, should have stayed the hand of historians writing about the disappearance of famine in the modern world. The Second Congo War, between 1998 and 2004, caused almost four million deaths from starvation. In 1998, three hundred thousand died in Somalia. They died because they had nothing to eat.13

The first half of the twentieth century was unparalleled in its violence. Violence declined in the second half of the twentieth century because European states were too exhausted, or too apprehensive, to repair again to warfare. If the second half of the century was less violent than the first, it was not peaceful. No one should take the decline as a sign of moral improvement. The Chinese communist revolution; the partition of India; the Great Leap Forward; the ignominious Cultural Revolution; the suppression of Tibet; the Korean wars; the wars of Indochinese succession; the Egypt–Yemen war; the Franco–Algerian war; the genocidal Pol Pot regime; the grotesque and sterile Iranian revolution; the Iran–Iraq war; ethnic cleansings in Rwanda, Burundi, and the former Yugoslavia; the farcical Russian and American invasions of Afghanistan; the American invasion of Iraq; and various massacres, sub-continental famines, squalid civil insurrections, bloodlettings, throat slittings, death squads, theological infamies, and suicide bombings taking place from Latin America to East Timor were as yet unaccommodated. I have made these points before.14

Natürlich hab’ ich wieder recht,
Der Mensch ist dumm, die Welt ist schlecht.

Derelict Doctrines

Harari’s view of what is coming, or to come, is much influenced by what he calls “the new human agenda.” Modern culture, he writes, “rejects [the] belief in a great cosmic plan. … Life has no script, no playwright, no director, no producer—and no meaning.”15 This is hardly a view that is distinctly modern. It is as old as history. If what is familiar in this view is old, what is modern is false. Harari’s sense that the life of man is less than might have been imagined, or expected, is the expression of his encounter with various derelict doctrines. “In recent decades,” Harari writes,

life scientists have demonstrated that emotions are not some mysterious spiritual phenomenon that is useful just for writing poetry and composing symphonies. Rather emotions are biochemical algorithms that are vital for the survival and reproduction of all mammals.16

Biologists have demonstrated no such thing. What the life scientists are doing is anyone’s guess. No one has ever supposed that emotions are useful just for writing poetry or composing symphonies. The concept of a biochemical algorithm occupies space without doing work. Some biochemical reactions may be described step by step, but this tells us nothing more than that some biochemical reactions may be described. Very many human emotions have nothing to do with survival or reproduction. There is peevishness, déjà vu, irritability, rapture, schadenfreude, frustration, sloth, aesthetic bliss, and that ineffable sense of melancholy incompleteness known in Portuguese as saudade.

Emotions are not algorithms if the concept of an algorithm is made precise, and the claim is pointless if it is not. An algorithm may be transferred from one machine to another, but an emotion or a sensation may not. You cannot feel my pain if I stub my toe; I cannot feel your jealousy if I steal your wife. On the contrary. That stubbed toe aside, I feel rather good about the whole business. Anger is not inevitably felt “as a sensation of heat and tension in the body.”17 A man may dissemble his anger, even from himself, and he may sustain a cold, vindictive sense of fury for years without ever feeling flushed or even particularly hot-blooded. I am myself like that—implacable. Anger may come and go and it makes little sense to inquire whether it is the same emotion coming and going. Emotions have nothing like the clear-cut identity that is characteristic of an algorithm. A great tribal chief may have a dozen squabbling wives, poor fool, and so a dozen conflicting obligations, but not a dozen competing angers. Like alimony, anger is burdensome but not countable. Emotions may be controlled, guided, provoked, nudged, cultivated, refined, molded, or shaped, but not so algorithms or machines. There is no algorithmic structure controlling how emotions are felt. How they are felt is a matter of how they are felt. An algorithm may exist without ever being run, but an emotion that is never felt is like an idea that is never thought. Thoughts are not detachable objects; neither are emotions.18

Metaphysical Cake Master

Homo Deus is not a work of philosophy, but its arguments turn often on philosophical or logical issues. Harari is persuaded that, no matter their convictions to the contrary, human beings are not free in their actions. As a debate in philosophy, freedom of the will is like Jarndyce v Jarndyce in the law: it stretches its corrupt and unwholesome hand out to ensnare whoever is tempted by its arguments. The debate has retained its chief features since antiquity, and no philosopher or scientist has made the slightest contribution to enlarging it. Harari belongs to the ages. “To the best of our scientific understanding,” he writes, “determinism and randomness have divided the entire cake between them.”19 If human actions are determined, they are not free, and if random, not interesting. Freedom of the will must be an illusion.20 Perhaps this is so. If freedom of the will is an illusion, it is both universal and inexpugnable. Every man is persuaded that something is within his power, and none that everything is beyond it. What explains the illusion? No less than in the paradoxes of perception, in which a wine glass reveals the sleek contours of a woman’s silhouette, some account is needed. The illusion goes too deep to be an accident. It is not random. On the contrary. Free will enters into every deliberation; it is the foundation on which every legal system is constructed; it controls every human exchange; it is the assumption that makes daily life coherent; and if Google, Facebook, Apple, and Microsoft are busy undermining consumer choice, they are busy only because, like the rest of us, they share in the illusion of free will and are concerned to make the most of it. To do without the illusion is to live like animals. Considerate la vostra semenza: fatti non foste a viver come bruti. An appeal to randomness is pointless. No deterministic account is remotely plausible. We are as little able to explain the illusion of free will as to explain free will itself. If the illusion is not a part of the cake, the cake is not all that there is; and if it is a part of the cake, determinism and randomness do not divide it.21

Court in Session

What Harari does not believe about free will, he does not believe about God, the soul, or the human mind; but if he is skeptical about some things, he is credulous about others, and so reaches a point of equilibrium between believing too little and believing too much. “Scientists,” he writes, “have subjected Homo sapiens to tens of thousands of bizarre experiments, and looked into every nook in our hearts and every cranny in our brains.”22 Je m’imagine cela. There is no soul. Ten thousand more experiments may well have been devoted to finding the details of the Smoot–Hawley Tariff in the common bile duct—and with a similar standard of success. If Harari is skeptical about freedom of the will or the human soul, in other respects he argues that the life of man is governed by a different imperative. Whatever is not forbidden by the laws of physics is possible. It is this dizzying sense of steadily expanding possibilities that allows Harari to accept with solemn credulity the promise that death is a soluble technological problem, or that, in time, human and machine intelligence, as Ray Kurzweil has predicted, will merge in the burst of a starlike singularity. Harari rejects a much older, darker view in which a life is bounded by irrefrangible limits—so far a man may go, but no further. Quite the contrary. So long as a scheme or suggestion is not physically impossible, Harari is content to accept as his own the epistemological maxim governing Silicon Valley—anything goes.23

The appellate court is now in session. The possibilities that Harari sees winking on the great manifold of being? What about them? Determinism is a doctrine with philosophical bite only if it has some modal force. If it amounts to no more than the observation that generally one thing follows another, it is of no interest. An object dropped from a great height must fall toward the center of the earth. It has no say in the scheme of things, and it cannot do otherwise. Historical laws that determine which possibilities are realized and which are not have the same force of command. This must happen; that is impossible.24 If anything goes, then we are left with no deterministic explanations why some things went, and if they went for no reason at all, what, then, is the purpose of this book?

Future Imperfect

Human beings, Harari believes, are about to lose their social and economic usefulness as well as their souls.25 Robots are coming, and, if not robots, then all-powerful algorithms. Having replaced chess champions and quiz show contestants, they are shortly to replace truck drivers, travel agents, accountants, lawyers, and doctors. Whether they are about to replace historians is a question that Harari wisely declines to discuss. What makes their forthcoming domination inevitable, Harari believes, is the discovery that consciousness may be separated from intelligence. Computers are no more conscious today than they were in 1950, but they are very much more intelligent, and in the near future they are certain to become even more intelligent. This is the information revolution that Harari has persuaded himself that he sees clearly. The Whig optimist now gives way to the Whig pessimist. The information revolution is likely to benefit the minority of those with the wit, or the money, to make use of it. “As algorithms push humans out of the job market,” Harari writes, “wealth and power might become concentrated in the hands of the tiny elite that owns the all-powerful algorithms, creating unprecedented social and political inequality.”26 Algorithms might even become entities under the law, like corporations or trusts, Facebook’s corporate algorithm showing Mark Zuckerberg the door in favor of itself.

Je pense, donc je commande.

I am as eager as the next man to see Facebook become Vishnu, but I do not expect to see it any time soon. It is by no means clear that computers are in 2017 any more intelligent than they were in 1950; it is, for that matter, by no means clear that the Sunway TaihuLight supercomputer is any more intelligent than the first Sumerian abacus. Both are incarnations of a Turing machine. The Sumerian abacus can do as much as a Turing machine, and the Sunway TaihuLight can do no more. Computers have become faster, to be sure, but an argument is required to show that by going faster, they are getting smarter.

An algorithm is a step-by-step affair, the residue in action of the antecedent concept of an effective calculation, a way of getting something done. In the 1930s, logicians precisely defined this old and informal idea: Kurt Gödel by means of the recursive functions; Alonzo Church, by the calculus of lambda conversion; and Alan Turing, by the Turing machine. The definitions coincided, leading Gödel to remark that the underlying concept was absolute. Effective calculability, Church conjectured, could be completely expressed by the properties of a Turing machine. Although it cannot be demonstrated, this conjecture may be something like a law of nature, a part of the edifice.
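
To make the coincidence concrete, here is a minimal sketch of my own, not anything in Homo Deus: the sum of two and three computed once in Church’s calculus of lambda conversion, by way of Church numerals, and once by primitive recursion of the kind Gödel codified. The two formalisms were defined independently; they agree on every input.

# Church numerals: the number n is the operation "apply f to x, n times."
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add_church = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    # Read a Church numeral off as an ordinary integer by counting applications.
    return n(lambda k: k + 1)(0)

def add_rec(m, n):
    # Addition by primitive recursion on the second argument.
    return m if n == 0 else add_rec(m, n - 1) + 1

two = succ(succ(zero))
three = succ(two)
print(to_int(add_church(two)(three)))  # 5
print(add_rec(2, 3))                   # 5

That formalisms so unrelated in appearance compute the same functions is what Gödel meant in calling the underlying concept absolute.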

These are ideas that, like do-it-yourself surgery, may easily go wrong. Stephen Wolfram offers an example. “[T]he workings of the human brain,” he writes, “or the evolution of weather systems can, in principle, compute the same things as a computer.”27 He refrains notably from endorsing the conclusion that the brain is a weather system. It is the next best thing: it is like a weather system. “Computation is therefore simply a question of translating inputs and outputs from one system to another.” There are two assumptions in Wolfram’s argument: that the human brain is nothing more than a computer, and that the human mind is nothing more than the human brain. Are these assumptions true? Harari has no idea; Wolfram does not say. And no one else knows.

It is hardly beyond dispute that the human brain is a computer, except on the level of generality under which the human brain is like a weather system. It is difficult even to depict the simplest computational scheme in neurological terms. One neuron fires, and then another. Still a third neuron fires twice, as if it were adding the results. This is mere dumb show. What is taking place on the neurological level lacks any coherent connection to addition, which is a recursive operation defined over the natural numbers. Michael Jordan offers a reasonable assessment:

But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like [emphasis added]. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.28

It is possible to embed the rules of recursive arithmetic in a computer, but how might embedding take place in the brain? If this question has no settled answer, then neither does the question of whether the brain is a computer.
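
The rules of recursive arithmetic in question are, for addition, nothing more than a pair of equations over the natural numbers,

\[
m + 0 = m, \qquad m + S(n) = S(m + n),
\]

where \(S\) is the successor function. A computer encodes the symbols and follows the equations; what it would mean for a population of firing neurons to do the same is the question left unanswered.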

There remains the thesis that the human mind is identical to the human brain. Gödel’s second incompleteness theorem demonstrated that no formal system adequate to the description of the natural numbers could prove its own consistency. The proof turns on Gödel’s ingenious redescription of consistency as a number-theoretic statement, a Diophantine equation. If the brain is a computer, it must be a formal system. Either it can demonstrate its own consistency, or it cannot. “So the following disjunctive conclusion is inevitable,” Gödel writes:

Either mathematics is incompletable in this sense, that its evident axioms can never be comprised in a finite rule, that is to say, the human mind (even within the realm of pure mathematics) infinitely surpasses the powers of any finite machine [emphasis added], or else there exist absolutely unsolvable Diophantine problems of the type specified.29

This argument was endorsed both by John Lucas and Roger Penrose. Is it valid? I do not know.

But neither does Harari. And neither, for that matter, does Wolfram.
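
For reference, the theorem on which the exchange turns can be put in a line. In my formulation, not Gödel’s or Harari’s: for any consistent, recursively axiomatizable theory \(T\) strong enough to describe the natural numbers,

\[
T \nvdash \mathrm{Con}(T),
\]

where \(\mathrm{Con}(T)\) is the arithmetic sentence asserting the consistency of \(T\).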

No discussion of these issues would be complete without some mention of consciousness. It is a topic on which it is possible to say anything without ever saying something. Zoltan Istvan is a transhumanist, a student of life extension and digital immortality. That the first has not been achieved and the second is incoherent has been no impediment to his scholarship. “We have no idea how consciousness works,” he remarks.30 This is true only to the extent that we have no reason to think that consciousness works. Like the lilies of the field, it toils not and neither does it spin. “But the brain is still a machine,” Istvan goes on to say, “so it’s a matter of tinkering with it until we work it out.” Istvan’s faith in tinkering is not markedly inferior to my own, but judging from his enthusiasm, his successes would seem more considerable. Whatever I tinker with falls apart. For his part, Harari is as baffled as everyone else. Consciousness? What is it doing there?

David Chalmers referred to consciousness as the hard problem. That the problem is hard has become a part of the gabble. Everyone says that it is so. I am as worried as the next man. But quite before accepting consciousness as a hard problem, it would be useful to know what makes it hard and why it is a problem. It is not easy to say—one reason, I suppose, that the problem is hard. If I am not under anesthesia, asleep, or dead, I must be conscious. I am a busy man. When else could I be conscious? Yet in considering the remains of the day, I can hardly be expected to remember all of it, so I am largely unable to say anything about the apparently peculiar nature of my consciousness on those occasions. When I do remember what I was doing, what I remember is chiefly what I was doing, and not anything especially about consciousness. At times, I am moved to comment on my consciousness, the more so when, with a murmured glug, I assure the dentist that I do not feel a thing, but then what is at issue is self-consciousness, a commentary on the real thing. Beyond observing that it is always hanging around, I have no idea what that real thing might be.

Mais je divague. If computers show no signs of consciousness, as Harari argues, this might suggest that, whatever else it might be, consciousness is not an algorithmic phenomenon—the perfect truth obviously. What then of Harari’s grand claim that “[e]very animal—including Homo sapiens—is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution”?31

Big Data, Big Deal

It was just yesterday that any number of nervously shuffling TED talkers, their microphones suggesting a cockroach emerging from the ear, would assure their audiences that Big Data was a Big Deal. Harari is with them, an advocate of Dataism, an apostle:

For scholars and intellectuals, [Dataism] promises to provide the scientific holy grail that has eluded us for centuries: a single overarching theory that unifies all the scientific disciplines from musicology through economics to biology. According to Dataism, Beethoven’s Fifth Symphony, a stock-exchange bubble, and the flu virus are just three patterns of dataflow that can be analyzed using the same basic concepts and tools. This idea is extremely attractive. It gives all scientists a common language, builds bridges over academic rifts, and easily exports insights across disciplinary borders.32

Like phrenology, Dataism is easy to uphold: look around! serving as a compelling adjuration in both cases. Fat-heads are generally thick. But if data are everywhere, so is Fox News, evidence that being everywhere and meaning something are not quite the same thing. A “single overarching theory that unifies all the scientific disciplines”? Yes, but what theory? The observation that there are a lot of data all over the place is not calculated to set the pulse racing. A physical theory embodying Dataism must, at the very least, embody both special relativity and quantum mechanics. Physicists would sooner give up Harari than give them up, and so would I. It must embody, as well, a rigorously discrete structure, its elements the natural and not the real numbers. It must abjure the old-fashioned but immensely powerful techniques of mathematical analysis; it must give up the continuum. All of this must go in favor of a physical scheme in which the physical universe is resolved into its discrete and computable elements. Some physicists have found this idea attractive. Wolfram is an example. He is mad for universal computation and the vision of physics that it implies. “I even have increasing evidence,” he writes,

that thinking in terms of simple programs will make it possible to construct a single truly fundamental theory of physics, from which space, time, quantum mechanics, and all the other known features of our universe will emerge.33

Wolfram’s scheme was rebutted by Scott Aaronson. Either it violates Lorentz symmetry, and so special relativity, Aaronson demonstrated, or it is not compatible with quantum mechanics.34

This is not a good augury, as my haruspex would say.

For all of Harari’s assurances that data are the real deal, these reflections of his already suggest that he has jumped the shark, another way of saying that he has missed the boat. It is Deep Learning that has now commanded everyone’s attention, a scheme of artificial intelligence that makes possible pleasantly obsequious digital assistants.

Siri, yo, Siri.
Yes, Master.

Deep Learning is neither very deep, nor does it involve much learning.35 The idea is more than fifty years old, and may be rolled back to Frank Rosenblatt’s work on perceptrons.36 The perceptron functioned as an artificial neuronal net, one neuron deep. What could it do? Marvin Minsky and Seymour Papert demonstrated that the correct answer was not very much.37 God tempered the wind to the shorn lamb. In the 1980s, a number of computer scientists demonstrated that by increasing the layers in a neural net, the thing could be trained by back propagation and convolution techniques to master a number of specific tasks. This was unquestionably an achievement, but in each case, the achievement was task specific. The great goal of artificial intelligence has always been to develop a general learning algorithm, one that, like a three-year-old child, could apply its intelligence across various domains. This has not been achieved. It is not even in sight. And no wonder. We have no theory that explains human or animal behavior. “The human mind,” Istvan has remarked (hi, Zoltan, hi), “is virtually unexplored.”38 Both chess and Go take place in confined spaces. The rules are plain; so, too, the goals of the game. After playing fifty million games of Go against itself, a computer easily defeated a human Go master. Whether it could have easily defeated fifty million Go masters playing against it is an interesting question. A kitten occupies a conceptual space bounded only by the limitations of its anatomy and its genetic endowment; but beyond that trite observation we can generally do no better in explaining its behavior than remarking that Fluffy here generally does what she wishes to do. No record of her frisking will ever be anything more than a record of her frisking. Theories lie at a different level of analysis. Without them, there is no hope of constructing a general learning algorithm.

And these we do not have.
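
A sketch of my own, not anything in Homo Deus, makes the Minsky–Papert point concrete: Rosenblatt’s learning rule, applied to a single neuron, settles the linearly separable AND in a handful of passes and never settles XOR.

def train_perceptron(samples, epochs=100, lr=1.0):
    # samples: list of ((x1, x2), target) pairs with targets in {0, 1}.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), t in samples:
            y = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            if y != t:
                errors += 1
                w[0] += lr * (t - y) * x1
                w[1] += lr * (t - y) * x2
                b += lr * (t - y)
        if errors == 0:
            return w, b, True    # every sample classified correctly
    return w, b, False           # no convergence within the epoch budget

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

for name, data in (("AND", AND), ("XOR", XOR)):
    w, b, converged = train_perceptron(data)
    print(name, "converged" if converged else "did not converge", w, b)

Adding hidden layers and training by backpropagation repairs this particular failure; it does not, for the reasons given above, yield a general learning algorithm.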

Like so much else in Homo Deus, Dataism serves chiefly to express Harari’s great gullibility, his willingness to believe what some scientists say without wondering whether what they say is true. Dataism is not the holy grail; it is not a coherent theory; it is not about to unify anything. But, then, death is not a technological problem, and the singularity is an infantile fantasy.

Men are not about to become like gods.

Harari has been misinformed.


  1. Yuval Harari, Sapiens: A Brief History of Humankind (New York: Harper, 2011). In his review, published in The Guardian of September 11, 2014, Galen Strawson remarked that the book was “overwhelmed by carelessness, exaggeration and sensationalism.” Fair enough. After a century in which serious historians regarded Big History as a big mistake, it is now again in fashion. See David Armitage, “What’s the Big Idea?,” TLS, September 20, 2012. For the first half of the twentieth century, the notoriety of Oswald Spengler’s Der Untergang des Abendlandes and Arnold Toynbee’s A Study of History persuaded serious historians not to go there or do that. All is now forgiven. David Christian, Maps of Time: An Introduction to Big History (Berkeley, CA: University of California Press, 2004) covers fourteen billion years, the overwhelming majority of no relevance to human history whatsoever. Ian Morris’s Why the West Rules—For Now (London: Profile Books Ltd, 2010) is Small Time Big Time: It is limited to the past ten thousand years. Sapiens is Big Time Small Time, Harari going back some two hundred and fifty thousand years.
  2. V.S. Naipaul, “Our Universal Civilization,” The New York Times, November 5, 1990. 
  3. Mark Humphrys, “The West – The Universal Civilization.” 
  4. See James Oliver, “The Ruling Power: A Study of the Roman Empire in the Second Century after Christ through the Roman Oration of Aelius Aristides,” Transactions of the American Philosophical Society 43, no. 4 (1953): 871–1003.
  5. This is not a doctrine apt to survive an encounter with itself. The claim that whatever can be known must be known as a derivation from the sciences cannot itself be derived from any of the sciences. 
  6. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 333. When Harari’s argument is stripped of its absurd and sentimental attachment to the idea of an algorithm, what remains is a re-statement of Hume’s unpersuasive argument about the personality and its persistence over time. “For my part, when I enter most intimately into what I call myself [emphasis added], I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception.” David Hume, A Treatise of Human Nature (Oxford: Clarendon Press, 1896), 1.4.6.3. From this it follows only that the self is not an object of perception. So? 
  7. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 14. 
  8. Ibid., 15. 
  9. Brian Ferguson, “Pinker’s List,” in Douglas Fry, ed., War, Peace, and Human Nature: The Convergence of Evolutionary and Cultural Views (Oxford: Oxford University Press, 2013), 116. Ferguson is discussing Steven Pinker, and not Yuval Harari, but the claims at issue are the same. 
  10. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 5. 
  11. See “Global Action Keeping Famine at Bay but Failing to Prevent Suffering, UN Chief Warns,” UN News Centre, September 21, 2017. 
  12. The cause of famine and its existence are two entirely separate questions. Amartya Sen has argued that famines are less about the availability of food and more about its distribution. Amartya Sen, Poverty and Famines: An Essay on Entitlement and Deprivation (Oxford: Oxford University Press, 1981). 
  13. Like almost all numbers pertaining to excess deaths in the twentieth century, including deaths in combat, these numbers, although very roughly correct, vary widely from source to source. No comparable margin of error would be tolerated in any serious science. See Stephen Devereux, “Famine in the Twentieth Century,” IDS Working Paper 105, Institute of Development Studies, January 1, 2000. 
  14. David Berlinski, “The Best of Times,” Inference: International Review of Science 1, no. 4 (2015). 
  15. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 201. 
  16. Ibid., 83. 
  17. Ibid., 36. 
  18. Harari is not mistaken in assigning immense intellectual importance to the advent of the algorithm in the twentieth century. His mistakes are local, not general. My own view is that the algorithm is one of two seminal concepts in the Western scientific tradition. The other is the calculus. I developed these ideas first in A Tour of the Calculus (New York: Pantheon, 1995), and second in The Advent of the Algorithm (San Diego: Harcourt, 2001). These ideas arise from quite different parts of the scientific experience, and it is not easy to see how they might ever be reconciled in one completely compelling mathematical or logical structure. If they are not in conflict, neither are they the best of friends. 
  19. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 285. 
  20. Harari does not for a moment believe that free will is an illusion. Neither does anyone else. “All the predictions that pepper this book,” he writes, “are no more than an attempt [emphasis added] to discuss present-day dilemmas, and an invitation to change [emphasis added] the future.” Ibid., 65. What would an attempt to discuss anything, or an invitation to change something, amount to in a world without freedom of the will? 
  21. It is not easy to define either determinism or randomness. The class of deterministic theories, if not well defined, is at least better defined than some notion of determinism defined überhaupt, as German metaphysicians say, but the various definitions do very little to advance any discussion of freedom of the will. Richard Montague’s essay, “Deterministic Theories,” remains to my mind the very best discussion. Richard Montague, Formal Philosophy: Selected Papers of Richard Montague, ed. Richmond Thomason (New Haven, CT: Yale University Press, 1974).
  22. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 102. 
  23. There is something inherently finite to life. It cannot be escaped; it should not be evaded. Nothing but grief will come of the attempt. The Cumaean Sybil, granted as many years as grains of sand in a heap, wished only to die. 
  24. If a proposition is necessary, its denial is impossible. If determinism does not really determine something, it remains entirely a flabby concept. 
  25. Ibid., 36. 
  26. Ibid., 327. 
  27. The quotation is from Wolfram MathWorld’s entry for “Principle of Computational Equivalence,” which references Stephen Wolfram, “The Principle of Computational Equivalence,” in A New Kind of Science (Champaign, IL: Wolfram Media, 2002), 5–6, 715–846. I am relying on a copy of the advanced and uncorrected version of the book: Stephen Wolfram, A New Kind of Science, advance uncorrected proofs (Champaign, IL: Wolfram Media, 2001), 844. I sent Wolfram one of my books in the expectation that he would praise it; he sent me his in the same expectation. Neither expectation was justified.
  28. Lee Gomes, “Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts,” IEEE Spectrum, October 20, 2014. 
  29. For citations and a very professional discussion of the argument, see Solomon Feferman, “Penrose’s Gödelian Argument.”
  30. Zoltan Istvan, quoted in Olivia Solon, “How Close Are We to a Black Mirror-Style Digital Afterlife?,” The Guardian, January 9, 2018.
  31. Yuval Harari, Homo Deus: A Brief History of Tomorrow (New York: Harper, 2017), 323. 
  32. Ibid., 372. In a number of especially florid passages, Harari refers to Dataism as something like a religion. He is careful throughout his book not to include himself in the congregation. 
  33. Stephen Wolfram, A New Kind of Science, advance uncorrected proofs (Champaign, IL: Wolfram Media, 2001), 4.
  34. Scott Aaronson, “Book Review: ‘A New Kind of Science’,” (2002), arXiv:quant-ph/0206089v2. 
  35. For an account of the limitations inherent in deep learning techniques, see Gary Marcus, “Deep Learning: A Critical Appraisal,” (2018), arXiv:1801.00631. 
  36. Frank Rosenblatt, Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms (Washington, DC: Spartan Books, 1962). 
  37. Marvin Minsky and Seymour Papert, Perceptrons: An Introduction to Computational Geometry (Cambridge, MA: MIT Press, 1972).
  38. Zoltan Istvan, quoted in Olivia Solon, “How Close Are We to a Black Mirror-Style Digital Afterlife?,” The Guardian, January 9, 2018. Good old Zoltan!

David Berlinski is an American writer.

