Computer Science / Critical Essay

Vol. 1, No. 2 / March 2015

The Mind in Its Place

David Gelernter


The mind is its own place, John Milton observed. But where is this place, and what is its structure? Where is the map? I argue that our unwillingness to confront these simple questions has got us into deep trouble. Too many thinkers have let themselves be guided by an analogy instead of a map. The analogy in this case is an important part of modern intellectual history; it is over half a century old, and remains deeply influential and deeply misleading. My emphasis is on the analogy itself and its remarkable influence on science, philosophy, and popular culture.

The brain is not a computer. The mind is not its software.

Computability

During the 1930s, a number of major logicians, including Kurt Gödel, Alonzo Church, Alan Turing, and Emil Post, supplied precise definitions for the idea of an algorithm: an effective procedure, calculation, or computation. Until then, computation had been an informal idea, not a mathematical concept. But the world was increasingly enthralled by computation, as science and engineering drew closer together, and as calculating machines and electronics grew more refined.

Several precise definitions were proposed, and soon shown to be equivalent, despite their seeming dissimilarity. Gödel suggested, in consequence, that computation was a basic mathematical concept, and Church conjectured that anything that could be computed by any method could also be computed using the calculus of lambda conversion, his candidate for a precise definition of an algorithm.

Turing captured the essence of the age by defining an algorithm as a machine, rather than as a system of equations, as Church had done. Historians have called the 1930s the machine age for a reason: machines were growing steadily more powerful, ubiquitous, and important. The Turing machine, as it is universally called, was in step with the times in a way that the other proposed definitions were not. Strikingly simple, it incorporated the crucial distinction between hardware and software that was soon to prove so fundamental. The machine’s behavior was determined by a list of rules. The rules could be changed without changing the machine. Turing had his own version of Church’s thesis, henceforth the Church–Turing thesis: anything that can be computed can be computed by a Turing machine.
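
To make the distinction concrete, here is a minimal Turing-machine sketch in Python, under my own simplifications: the execution loop plays the part of the machine, and the rule table plays the part of the software. The rule table shown, a unary increment, is an invented example, not one of Turing's.

```python
# A minimal Turing machine. The while-loop below is the "machine";
# the rule table passed in is the "software". Rules map
# (state, symbol) to (new_state, new_symbol, move).

def run(rules, tape):
    tape = dict(enumerate(tape))          # sparse tape; blank cells read "_"
    state, head = "start", 0
    while state != "halt":
        symbol = tape.get(head, "_")
        state, tape[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1  # move the head one square
    return "".join(tape[i] for i in sorted(tape))

# An invented rule table: append a stroke to a unary number.
increment = {
    ("start", "1"): ("start", "1", "R"),  # scan right across the strokes
    ("start", "_"): ("halt",  "1", "R"),  # write one more stroke and halt
}

print(run(increment, "111"))              # -> "1111"
```

Changing the rule table changes the computation; the machine itself never changes. The Church–Turing thesis claims that some such machine can carry out any computation at all.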

This is a thesis and not a theorem. “I conjecture,” said Turing (in effect), “that what people have meant all these years by calculate or compute is captured by my machine. I’m betting that any computation whatever can be made to work on this machine.” In the intervening three quarters of a century, no one has found a counter-example. When electronic digital computers emerged, they were easily shown to be equivalent to Turing machines. They are merely faster, fancier Turing machines. It follows that when no Turing machine can solve a problem, no digital computer can solve it either.

There are certainly problems that no machine can solve. The most famous is the halting problem: given a program and its input, will the machine ever halt? It might not; the program might be buggy; it might get trapped repeating the same series of instructions endlessly, and run forever. We encounter such problems all the time. An application we are using, such as a word processor, stops responding, and we have to kill it using a special procedure, or reboot the computer. Turing proved that the halting problem is non-computable: it is impossible to write a program that accepts another program and its input, examines both, and tells us whether the other program halts. Such a program would be hugely valuable; writing software is one of the most error-prone activities known to man. Computability is important to my argument. That consciousness is non-computable means that software can never yield consciousness as an output or a result.
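
Turing's proof is a short diagonal argument. The sketch below assumes, for the sake of contradiction, a halts function that cannot in fact be written; the names halts and spoiler are my own.

```python
# Turing's diagonal argument in miniature. Suppose, for the sake of
# contradiction, that a perfect halting tester could be written:

def halts(program, data):
    """Hypothetical: True iff program(data) eventually halts.
    No such function can exist; we pretend it does for a moment."""
    ...

# Build a spoiler that runs a program on itself and does the
# opposite of whatever halts() predicts:
def spoiler(program):
    if halts(program, program):
        while True:                       # predicted to halt: loop forever
            pass
    return                                # predicted to loop: halt at once

# Does spoiler(spoiler) halt? If halts() says yes, spoiler loops
# forever; if it says no, spoiler halts immediately. Either way
# halts() is wrong, so no such program can be written.
```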

Computing Machinery and Intelligence

In 1950, Alan Turing published a paper in Mind called “Computing Machinery and Intelligence.”1 He wondered whether digital computers could think, be intelligent, or have minds. But none of these words, he pointed out, is clearly defined. In the 1930s, Turing provided a precise definition for computation; he now proposed not a definition of thought or mind, but a precise replacement for these fuzzy ideas in the form of what came to be called the Turing test.

The idea of thinking computers was in the air.

Turing’s test became famous. A tester chats, using text only, with two contestants he can’t see. One is human, the other an articulate computer. The tester tries to tell the two apart in a series of conversations. The human being is instructed to tell the truth, the computer to insist that it is the human being. The tester can ask any questions he likes. Turing specifically allowed questions about the contestants’ personal lives. He suggested questions about poetry. If the contestants are indistinguishable to the tester, the computer has passed the test.
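
A rough sketch of the protocol may help fix ideas. Everything below is my own simplification: the contestants and the judge are trivial stand-ins, though the two sample questions, and the human's answer about poetry, are taken from Turing's paper.

```python
# A minimal sketch of the imitation game: the tester exchanges text
# with two contestants it cannot see, labeled only "A" and "B",
# then guesses which one is the human being.
import random

def imitation_game(human, machine, judge, questions):
    contestants = {"A": human, "B": machine}
    if random.random() < 0.5:                       # hide who is who
        contestants = {"A": machine, "B": human}
    transcript = [(label, q, answer(q))
                  for q in questions
                  for label, answer in contestants.items()]
    guess = judge(transcript)                       # "A" or "B"
    return contestants[guess] is not human          # True: machine passed

questions = ["Please write me a sonnet on the subject of the Forth Bridge.",
             "Add 34957 to 70764."]
human = lambda q: "Count me out on this one. I never could write poetry."
machine = lambda q: "I am the human being; the other contestant is lying."
judge = lambda transcript: random.choice("AB")      # guessing at chance
print(imitation_game(human, machine, judge, questions))
```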

The paper accomplishes an important feat that is often overlooked: it defines intelligence as human-ness. To be intelligent is to be indistinguishable from a person. Turing dismissed non-human kinds of intelligence; he dismissed the idea that intelligence consists of deep expertise in some particular area. He refused to reduce intelligence to some complex skill or a focused body of technical knowledge. Instead, he saw it as a combination of common-sense knowledge and basic adult skill at conversation.

This definition of intelligence is the most important and lasting contribution of Turing's paper. But the paper also encouraged, though it neither stated nor endorsed, the claim that the mind relates to the brain as software relates to a computer. This analogy continues to reverberate. In 2015, the analogy is still widely taught and believed. It is tremendously influential for several reasons, one of them important in itself. The relationship between mind and body is one of the deep mysteries of science and philosophy. How can the purely mental possibly be connected to the purely physical? The computer analogy supplies a sort of answer. The mind runs on the brain as software runs on a computer.

Decades after the publication of Turing's paper, the analogy still feels invigorating to philosophers of mind, researchers, and students. It is easy to see why Turing's ideas have provoked enthusiasm and allegiance. The brain is a network with electrical signals running through it. So is a digital computer. The mind is brought to life by the brain; it directs the brain's activities, yet cannot exist apart from the brain. So the mind is like software. And beyond these points, there is the Church–Turing thesis itself, which tells us that any computation can be accomplished by a Turing machine. It follows that if the mind computes, we know just what it is like: it is like a Turing machine, and so like a digital computer.

It is an interesting aspect of some analogies that, once introduced, their plausibility only grows, and they take on a life of their own. Physical objects, such as a stone in motion or a planet in orbit, obey the physical laws that describe them. But this is not a good description of human beings. According to the philosopher Georges Rey, a stone or planet obeys physical laws, but only people ordinarily obey algorithms by following them. We might have thought people unique in this regard. But computers do this too. Given the representation of an algorithm in the form of a program, or software, computers follow or execute the representation step-by-step. The philosopher Jerry Fodor calls software a causal determinant of behavior. To explain the cause of a computer’s behavior, we can point to the software. The planets are different: “At no point in a causal account of their turnings does one advert to a structure which encodes Kepler’s laws and causes them to turn.”2 Planets obey; computers and people follow.

A Provisional Map

To understand the mind, we need a map. There is broad agreement among philosophers that propositional attitudes and emotions are basic, but no agreement on how they are related, or on what other types of mental states exist. In the third edition of his Philosophy of Mind, Jaegwon Kim lists the varieties of mental phenomena.3 They are (1) sensations, (2) propositional attitudes, (3) feelings and emotions, (4) a subset of propositional attitudes called volitions, and, finally, (5) traits of character.4 It is a strange list. Its elements are not disjoint, and we are given no idea which are fundamental.

Kim's approach is representative. Mental states are better understood as points on a spectrum, with propositional attitudes at one end and feelings at the other. A propositional attitude is an example of thinking-about, but not the only example. Deciding whether I can park my car in that space is also thinking-about. It assumes many propositional attitudes, but we don't necessarily think about any of them. It also includes visual processes and mental simulation, as when I imagine parking the car in that spot. I might think about how to park when I get to town tomorrow, a subtle planning task that includes imagining and weighing alternatives, and guessing probabilities. I might think about a garden, building, or painting; I might think about music. Propositional attitudes are an important but narrow subset of thinking-about.

The English language tells us that feelings are broader than emotions; a feeling can be a sensation, such as feeling cold or being tickled. Seeing and hearing present us with a series of sensations; we don't ordinarily call them feelings, but feeling cold and hearing a violin are instances of similar mental states. Philosophers sometimes discuss mood as yet another mental state; but moods, too, are a type of feeling.

The mind is dynamic. Our mental states typically vary over the course of a day. We tend to move from the top of the spectrum, when mental energy and focus are at their maximum, to the bottom, when we are asleep and dreaming. Often we oscillate.

We do our best thinking-about when we are most alert and wide-awake. As we approach our least awake and alert state, we are asleep and dreaming; and dreaming is essentially pure feeling. Dreaming means hallucinating; our minds are full to the brim with vivid sensations that demand our attention, and are sometimes accompanied by strong emotions. The dream self is strikingly different from the normal, waking self. It is passive and unreflective; in dreams we accept all sorts of absurdities. These facts have been well known for generations.

We can, up to a point, choose our own mental states. But if we make the wrong choice, we won’t make much progress. If we choose to attack a hard problem when mental energy and focus are low, or try to drift idly from one recollection to another when energy and focus are high, we will make halting, uneven progress at best. And we can never choose what we feel. The idea that we control our minds is an illusion. Sigmund Freud made this clear long ago.

There is much to be said about this spectrum; it is the mind's foundation, or one part of its foundation. It is important to this discussion because it makes clear why computers will never have minds, and why minds are not software.

States of Mind Are Not Computable

We are happy, angry, satisfied, or embarrassed because these feelings happen to us. We decide to think, but we do not decide to feel. Happiness is a state we experience. The feeling might be purely mental, or might be partly physical. A happiness-causing event might induce a bodily reaction that includes increased physical energy, a higher pulse or breathing rate, and, of course, the feeling of happiness. States of pure feeling are states of being.

Being for a man is not the same as being for a stone. Ice reacts to warmth by melting. Steel reacts to water by rusting. Those reactions depend on the physical nature of ice and steel, and the physical nature of warmth and water. Neither a digital computer nor its software can create melting. Nor can they create rusting. Melting and rusting are physical processes that depend on the characteristics of ice and steel, and warmth and water. By the same token, neither a computer nor its software can create feeling or being or subjective experience. These reactions are not computations. We don’t know how experience is created; Thomas Nagel might be right that we won’t know until the next scientific revolution.5

Software may be able to approximate the mind after we have understood it better. How closely is an open question. A robot might represent the value and trend of human feelings by means of internal meters: pity reads 18.9, but incredulity is 7.5 and rising, and nervousness is steady at 10. The robot’s actions are calibrated to these values. But the instability and unpredictability of human behavior in the presence of strong feeling is well known. And even if our robots do closely approximate human behavior, we have not built a mind using software.

We have built a zombie.
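
A minimal sketch of such a meter-driven robot, with invented names, values, thresholds, and calibrations, may make the point vivid: the program tracks numbers about feelings, but nothing in it feels anything.

```python
# A sketch of the "internal meters" picture: each feeling is a
# number with a trend, and the robot's actions are calibrated
# to the readings. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Meter:
    value: float
    trend: float = 0.0                    # change per tick; rising if > 0

class Robot:
    def __init__(self):
        self.meters = {
            "pity":        Meter(18.9),
            "incredulity": Meter(7.5, trend=0.4),
            "nervousness": Meter(10.0),   # steady
        }

    def tick(self):
        """Advance every meter by its trend."""
        for m in self.meters.values():
            m.value += m.trend

    def act(self):
        """Invented calibration: pick an action from the readings."""
        if self.meters["pity"].value > 15:
            return "offer help"
        if self.meters["incredulity"].trend > 0:
            return "ask a clarifying question"
        return "carry on"

robot = Robot()
robot.tick()
print(robot.act())                        # -> "offer help"
```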

Simple arguments, no? Why are most philosophers and psychologists likely to reject them? There is something tricky in the relationship between thinking and being. But, despite their corresponding roles, thinking and feeling are fundamentally different kinds of mental states. Thinking must be an intentional state; feeling cannot be. Feelings are states of being. They have no information content. They convey nothing but themselves. They have causes, but are about nothing. We have two parallel minds. Sometimes we make a decision because we think it best, sometimes because we feel it best. Ronald de Sousa is one of several philosophers who speak of a two-track mind.6 De Sousa pointed out its relationship to Freud's distinction between primary and secondary mental processes. Everyone sees this dichotomy; what must be added to form a true picture is the spectrum between the two poles.

It is easy to miss the truth about feeling or being if you are focused on thinking and intentionality. And most thinkers are focused on thinking and intentionality. Nothing is more unnatural than to ask philosophers and scientists to believe that the clear-thinking, rational mind is only a snapshot at the very top edge of a downward progression that happens every day and ends in hallucination, gusts of emotion, and intellectual passivity.

Philosophers and scientists are only human.

Mental States and Computer Software

Software can never create a mind. Computers are not like brains. But we might take this opportunity to clarify not only the mind but software. Computer software is just a performance on the computer, in roughly the sense that a sonata might be performed on a piano. Software is, broadly speaking, a particular way of working the computer. Many compositions can be played on the same piano, just as many programs can run on the same computer. The analogy between music and software, piano and computer, is rough and inexact. Yet it is far closer than the mind-software, brain-computer analogy.

Many different minds cannot run on one brain. But many different compositions can be performed on one piano, just as many different programs can run on one computer. One mind cannot run on many different brains. But one composition can be performed on many different pianos, just as one program can run on many different computers. Music can be written down completely and exactly, just as a program can. A mind cannot. Music is composed by a particular person, as is a program, but the mind is not. Music can be played on many different devices, such as an organ, electronic piano, or harpsichord, but the mind cannot be played at all.

The fundamental distinction between software and the mind is reflected in the portability, controllability, and fundamentally recursive nature of software. A digital computer is the embodiment, within well-understood constraints, of a formal logistic system. We can prove theorems about its basic capacities and its performance in solving different classes of problems. Software is itself just a digital computer, governed by these same theorems.

Software is fundamentally recursive in structure and operation. Here are some obvious definitions.

  1. (H) Hardware is a digital computer realized by electronics (or some other comparably capable technology).
  2. (S) Software is a digital computer realized by another digital computer.

H is the ground case of the recursion: you can always get a digital computer by building one out of electronics. The recursive clause S says that software can run on a digital computer made of hardware or software. A digital computer can always be made out of hardware or software. The only difference is that hardware can operate in isolation, by itself. Software must be embodied in hardware, or another piece of software, to work.

To start the ground case, construct a blueprint or engineering plan for a digital computer and embody the plan using electronics. Now plan a second digital computer, and use the hardware to realize or embody this second design. Now design a third computer, and run it directly on the hardware, or embody it using the software computer. Programs execute on top of other programs all the time.
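
A toy sketch may help, built on an invented two-instruction machine: a Python function stands in for H, the computer built out of electronics, and nested programs stand in for S, computers realized by other computers.

```python
# A sketch of the recursion. The Python function run() stands in
# for H, the hardware; a ("run", inner) instruction realizes a
# whole computer inside a program, which is the recursive clause S.

def run(program):
    acc = 0                               # a single accumulator register
    for op, *args in program:
        if op == "add":
            acc += args[0]
        elif op == "print":
            print(acc)
        elif op == "run":
            run(args[0])                  # a computer realized by a computer

inner = [("add", 2), ("add", 3), ("print",)]     # a program: prints 5
tower = [("run", [("run", inner)])]              # software on software
run(tower)                                       # still prints 5
```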

Software is inherently recursive. The mind lacks this mathematical property. An actor can play Hamlet, who can in turn recite Aeneas’s lines from a play about Troy. But it’s hard to go much farther.

Comparing the computer-software system to the brain-mind system is, in this sense, like comparing a geometry theorem to a butterfly. This basic mistake makes itself felt in many misalignments. Software running on one computer can be copied onto and run on many others. Each mind is stuck on its brain; it can't be run on anyone else's brain. No one has a clear idea of what constitutes brain software or brain hardware, or of how to separate them. We might as well speak of taking the colors from one garden and running them on some different garden's flowers. A given computer can run arbitrarily many programs. A brain can run its mind, no one else's. A program can be stopped and rerun from the start as many times as we like. A mind has no start and, if it can be stopped by anesthesia, cannot be rewound and rerun. When a program is stopped, it can be restarted at its very next instruction, so that not one action is skipped. A mind has no instructions, and so the idea of the mind skipping an instruction is meaningless; and after being stopped by unconsciousness, a mind cannot be restarted at the exact same place it had reached when it stopped.
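
Two of these claims about programs, rerunning from the start and restarting at the very next instruction, can be seen in miniature in any language. Here is an illustration of my own using a Python generator, which suspends and resumes at exactly the point it stopped.

```python
# A program can be rerun from the start, and a suspended run
# resumes at exactly its next step, skipping nothing.

def program():
    yield "step 1"
    yield "step 2"                        # execution can stop right here...
    yield "step 3"                        # ...and later resume here

p = program()
print(next(p))                            # step 1
print(next(p))                            # step 2; now suspend p
# ... any amount of time passes; p's exact state is preserved ...
print(next(p))                            # step 3: the very next instruction

q = program()                             # a fresh copy, rerun from the start
print(next(q))                            # step 1 again
```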

A computer with no software is easy to build. The fetch-execute cycle is built into the hardware. But a conscious brain with no mind makes no sense.
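
Here is a sketch of that bare cycle, with an invented memory layout and a single invented opcode: even with nothing loaded, the hardware fetches and executes.

```python
# The fetch-execute cycle is wired in, so the machine runs even
# with no software. Memory layout and the opcode are invented.

memory = [0] * 16                         # nothing loaded: all zeros
pc = 0                                    # program counter

while pc < len(memory):
    instruction = memory[pc]              # fetch
    pc += 1                               # advance the program counter
    if instruction == 0:                  # invented opcode 0: do nothing
        continue                          # execute
    # decoding for other opcodes would go here

print("the bare cycle ran with no program loaded")
```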

A computer can carry out the programmer’s exact intentions, in such a way that its subsequent state is predictable at every point. No human being can ever be made to carry out anyone else’s exact intentions in such a way that his whole state is predictable at any point.

None of these properties of a computer-software system depends on any particular computer or program. These properties are intrinsic to digital computers and software. Each one has a corresponding theorem we can prove.

Static and Active

Computers are fundamentally static machines. But human beings and their brains and minds change over many timescales. One can, of course, replace computer parts with new parts, plug in new peripherals and load new software. But the basic machine and its capabilities do not change. Right out of the box, it has the basic capabilities it will always have. Children right out of the box are very different from what they are at eight months. Human growth and development can last two decades. And the average sixty-year-old is very different from the average forty- or twenty-year-old. The mind changes year by year as we absorb life. Obviously human genetics has a large part in determining how we grow up. The point is that we do grow up. The processes and consequences of a growing and maturing body are central to who we are. The experience of being too small and incompetent to do grown-up things stays with us permanently. Growing up is central to human nature.

No human mind exists in isolation. Mental and physical actions are intertwined; and we won’t have anything like a human mind unless we have a human body, too. Where does physical stop and mental begin? Bodily sensations bring about mental states that cause those sensations to change and, in turn, the mental states to develop further. All such mental phenomena depend on something like a brain and something like a body, or an accurate reproduction of certain aspects of the body. However hard or easy the problem of building such a reproduction, computing has no wisdom to offer on the topic. On the construction of human-like bodies, computer science tells us nothing.

Conclusion

The analogies that I have discussed have hurt, not helped, our understanding of the mind. I and many others continue to pursue mind-like software. In doing so, it is crucial that we remember not how similar minds and software are, but how radically different. We can make imitation minds, probably pretty good ones. They will be enormously useful. Today, however, technology is leading science by the nose. I have found the spectrum theory, which is a theory about the mind without reference to computers, useful and reliable. But it is hardly the last word on the subject of a new, modern depth psychology. It is only the first.


  1. Alan Turing, “Computing Machinery and Intelligence,” Mind 59 (1950): 433–60. 
  2. Jerry Fodor, The Language of Thought (New York: Crowell, 1975), 74. 
  3. Jaegwon Kim, Philosophy of Mind (Boulder, CO: Westview Press, 2011), 14. 
  4. Jaegwon Kim, Philosophy of Mind (Boulder, CO: Westview Press, 2011), 17. 
  5. Thomas Nagel, Mind and Cosmos (Oxford: Oxford University Press, 2012). 
  6. Ronald de Sousa, The Rationality of Emotion (Cambridge, MA: MIT Press, 1990). 

David Gelernter is a Professor of Computer Science at Yale University.

