Letters to the editors

Vol. 4, No. 1 / May 2018

To the editors:

David Berlinski’s virtuoso demolition of Yuval Noah Harari’s Homo Deus was at once exciting and reassuring to read—exciting for the bravura display of intellect and knowledge; reassuring in its suggestion that the most dehumanising aspects of Harari’s theses are unlikely to come true. I agree that human beings are unlikely to be reduced to algorithms, or to sequences of identifiable, calculable decisions that can be predicted; I agree that human beings are unlikely to be boundlessly perfectible, either in the sense of being completed by definitions of them, or in the sense of becoming divinely long-lived or endowed with distinct, supernatural powers. (I write about the ancient world, and have other bones to pick with Harari about what divinity means, but another time…) I am reassured by arguments about consciousness, and cling to the idea that there is some ghost in the machine. Not that we are machines—surely not!—but at least we can remain unpredictable.

Berlinski’s response to Harari was much more sophisticated than the above makes it sound, and I have skewed my own response to it to provide some kind of consolation in the face of the dystopian vision that Harari presents—or at least appears to present if one takes just Berlinski’s view as a guide. There are times in Homo Deus when Harari is more measured and cautious. I think his book should be read, not as a sequence of vatic pronouncements, but as a range of thought experiments—admittedly chilling ones, and prophecies that may come true only if we are insouciant enough to let them. Berlinski is right to accuse Harari of guessing at times; but Harari is also clear that writing about the future is impossible, since there are so many variables in play, including the future of our genes, which could affect everything, and about which we can anticipate almost nothing.

Your reviewer also teases Harari, if teases is the word, for writing big histories. I am a little more sympathetic towards big histories than Berlinski is. While it is possible to sound glib when panning back or making tours of vast horizons, the author of Sapiens is cautious about that, too: as Berlinski says, in that book Harari looks at the history of the species as a sequence of revolutions, and they do not always go in the same direction. The big history approach can have some strengths, such as charting changes that are clearer now than they were when they were actually happening. I enjoyed Harari’s application of meme theory to the agrarian revolution of circa 10,000 BCE: it may have seemed like a giant leap for mankind, but imagine that you are wheat. As a species, you have conquered the world. Come on and harvest me! I will just spread further.

Admittedly that approach is not the most scholarly, and it looks all the less so because it relies on serious specialists to do the closer analysis of individual developments. And futurology is even less likely to be practical or useful, even if it extrapolates from a clear view of the past. It is true that Harari’s views of the past are sometimes convincing, and sometimes less so: my own view of antiquity, for example, is that emotions and feelings were much more valued and evaluated than Harari thinks they were, but I am more easily convinced about a humanist revolution in the Enlightenment than Berlinski is. Still, I think we may yet have cause to be grateful for Harari’s haruspicating. Berlinski’s review was posted on February 14 this year. A month later, Christopher Wylie exposed the way in which the firm Cambridge Analytica had exploited the vast data it harvested from Facebook in order to identify voters more likely to fall in with Trump’s way of thinking if pushed.

Now, that does not make Berlinski wrong. He is still wryly funny when he writes, “Algorithms might even become entities under the law, like corporations or trusts, Facebook’s corporate algorithm showing Mark Zuckerberg the door in favor of itself.” It is still not likely that algorithms will have the power of gods. They may not even take the role of wheat, and say, “Come on, harvest me! I’ll just change your world some more.” But Harari does conclude that we should be constantly vigilant. “What will happen to society,” he remarks, “politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?”

What is happening is, so far, in line with the reassuring view that consciousness—and, in Wylie’s case, conscience—will prevail. The whistle was blown. We may have Trump in office now, and Britain may be preparing to leave the European Union, but at least electoral commissions are investigating, the actual human beings about whom so much more is known want to do something about it, and if Mark Zuckerberg can sound sorry for long enough, he may yet outwit his algorithmic nemesis. Nonetheless, all this relies on a range of ifs, and Harari is right to be worried.

Tom Payne

David Berlinski replies:

To the extent that Tom Payne agrees with me, I agree with him. There remain a few snags in the otherwise smooth silk of our concordance. When Payne writes that he is disposed “to cling to the idea that there is some ghost in the machine,” it is with the rhetorical embarrassment of a man confessing an intellectual weakness. The embarrassment is misplaced. Something like Cartesian dualism is the default position of the human race. In The Concept of Mind, Gilbert Ryle attacked Descartes by means of a superbly inspired metaphor. The mind is a ghost in an otherwise robust machine, something impalpable and so unneeded. It is the metaphor that is defective, and not the idea it was intended to impugn. An animal is not a machine, and the mind is no more a ghost than the electromagnetic field is a phantom. There is never a shortage of neurologists or philosophers willing to dismiss the mind, but these people will believe anything, and often do.

Quoting Harari with what I assume is some alarm, Payne asks “[w]hat will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves?” I am not sure what it would mean for an algorithm to know anything. An abacus, after all, does not know that the sum of two and two is four. Like a hammer, it may be used for certain ends; but the ends remain human ends, and without human intervention and interpretation, both the hammer and the abacus are inert and play their role in the universe of things. Nor am I sure that a society in which algorithms are used to predict our behavior is more threatening than the one in which we live. Other people often know us better than we know ourselves. A sucker really is born every minute.

Somehow, life goes on.

On the subject of Big History, I am not sure whether Mr. Payne and I disagree all that much. He writes that he is “a little more sympathetic towards big histories” than I am. He could not be less sympathetic. I see no good reason for any historian to write about more than what he knows, and no historian knows enough to write history on the scale that Harari has attempted.


Tom Payne teaches English and Classics at Sherborne School. He was previously a literary critic at The Daily Telegraph.

David Berlinski is an American writer.

