Linguistics / Book Review

Vol. 4, No. 2 / October 2018

On Concepts, Modules, and Language: Cognitive Science at Its Core
edited by Roberto de Almeida and Lila Gleitman
Oxford University Press, 328pp., $60.00.

The cognitive revolution that gripped linguistics, psychology, and philosophy in the 1950s and 1960s owes much to Noam Chomsky and the intellectual milieu then forming around him at MIT. Chomsky’s prominence is well earned. Jerry Fodor’s contributions may prove as enduring. In On Concepts, Modules, and Language, Roberto de Almeida and Lila Gleitman have brought together some of Fodor’s former colleagues and collaborators, including Chomsky, to discuss his work. Each contributor was asked to engage critically with one of two big topics close to Fodor’s heart: the architecture of the human mind, and the language of thought.

Fodor is well known among psychologists for his thesis that there is a tripartite division of the human mind, a position he first defended in The Modularity of Mind.1 First, there are the sense organs, which convert light and sound into signals to the nervous system. The study of these organs is best left to the psychophysicist. Then there are modules that further analyze the input. This, Fodor argues, is the domain of psychology. Visual perception and language comprehension are more or less modular: they operate independently of our overall knowledge. Fodor’s stance went against the dominant view of his time, according to which perception is thoroughly shaped by beliefs and expectations.

What Fodor had in mind can be explained in intuitive terms. Knowing the details of many visual illusions is no aid in escaping their power; knowing that many sentences of a language are nonsensical prevents no one from understanding them. The Müller-Lyer illusion fools a subject into believing that two lines are of different length when they are equally long. The visual system persists in seeing one line as longer than the other, despite assurances to the contrary. A sentence such as the banana bit the linguist is clearly nonsensical, and yet we are aware of what it means. Visual perception and language comprehension seem to operate according to their own programs. Mental modules such as these behave like machines, obdurately following a code that cannot be disrupted by beliefs or thoughts. This is not to say that what one sees or hears is completely independent of what one believes. Vision and language do interact with beliefs. The constant updating of belief, perhaps the hallmark of what it means to think, takes place in what Fodor calls the central systems, the third and last stage of his tripartite vision of the mind.

No one, Fodor argued, really knows how to study the central systems. The data is not delimited in any principled way. Any bit of information is potentially relevant to forming new beliefs about the world. Consider an example from the cognitive science literature.2 The current cost of coffee in Yemen may at first sight bear no relation to whether a child, Maria Chiara, will cry on a Sunday morning. But I might have shares invested in coffee and, given a drop in prices, I may well cry out in despair while reading the Sunday papers, upsetting baby Chiara. Given the right background, beliefs about the cost of coffee in Yemen can be relevant to beliefs about whether Maria Chiara is apt to cry on a Sunday morning. How do we decide what beliefs are relevant every time we update our beliefs? These concerns do not apply to modular processes. The relevant information is easily identifiable and appropriately constrained.3

De Almeida and Gleitman are aware that some psychologists are skeptical about Fodor’s modularity thesis. It is unsurprising, then, that eight of the volume’s twelve chapters are devoted to the modularity of language comprehension. These chapters argue that much of the available evidence, when correctly evaluated, is perfectly compatible with Fodor’s thesis. Fernanda Ferreira and James Nye, in particular, show that the language comprehension module can only access linguistic information to analyze the input it receives.4 There is some variety in the linguistic information used by the module, and there are open questions about how various bits of information interact in the interpretation of a sentence. Still, nonlinguistic information remains inaccessible. Language comprehension cannot draw on general knowledge stored in memory, and this encapsulation is just what Fodor’s notion of a module amounts to.

Thomas Bever and Merrill Garrett provide further detail.5 Bever points to an important feature of modular processes. Modules go beyond the physical information passed on to them by sense organs. The acoustic analysis of a given sentence offers a poor impression of what one experiences as its meaning. Modules carry out this enrichment by manipulating the information particular to each module. There is very little about visual perception in the book, but a similar conclusion is warranted there too.6

The case of language raises a nuanced complication for Fodor. As Chomsky points out in his contribution, language is not only an input system, driven by comprehension; it is also an output system.7 We are capable of producing a great deal of language, often as an interior monologue. And we can do this because we are in possession of a special type of knowledge regarding our own language. This is not information we are aware of or can consciously access; it is something we know only tacitly.

Any speaker is able to tell good sentences from bad ones. This is most striking in the ability to identify unnatural sentences in the speech of nonnative speakers. Still, there are many divergences among native speakers of a single language. The sentence John donated the library some books is fine for some English speakers, but not for others. This needs to be explained, and any answer will require a complex theory. We must rely on the linguists to tell us what language is really like, and how it is represented in our minds.

Is knowledge of language a part of Fodor’s central system, the modular input systems, or somewhere in between? Chomsky provides an answer in terms of a component of the central system dedicated to learning a first language. Operating in accordance with unique principles, this component is independent of language comprehension and production. Fodor argued that we have many innate beliefs about the sentences of our languages. This view has never caught on.

Fodor is identified with the hypothesis that there exists a language of thought. In this regard, philosophers refer to two separate but easily conflated ideas. The first involves a revival of the old doctrine that we think in a mental language, one that is not spoken, or public. Traceable to Aristotle, Boethius, and William of Ockham, the idea rests on the general observation that speakers of different languages can refer to the very same things. The French talk about un homme, the English, about a man, and the ancient Romans about homo. They all would have had the same idea in mind.8 The same logic applies to the sentences in which these words can appear: homo currit, un homme court, and a man is running describe the same event. This suggests that the world’s languages are intertranslatable: the language of thought would explain why. Behind the words of a language lie concepts, and behind the sentences, combinations of concepts. To have a belief or a thought is to have a particular combination of concepts in mind. To believe that a man is running is to have the mental concepts man and running, and the capacity to put them together. The language of thought is the common code in which concepts are represented. We all think in the same mental language. Fodor provided many details about what the language of thought must be like, and why it must be something other than a spoken language.9 He drew upon linguistic and psycholinguistic evidence; his remarkable ability to evaluate empirical data is often overlooked.10

The second idea closely linked to the language of thought is the claim that mental processes are computational. This is now the foundational principle of cognitive science. The Language of Thought revives another old idea,11 one traceable to Plato and René Descartes. The idea is deceptively simple: no one can mentally represent what cannot be mentally represented. In his contribution to this volume, Massimo Piattelli-Palmarini goes over some of the details of Fodor’s argument.12 Imagine that English did not have the word every. How would we communicate the thought that every man is mortal? We might enumerate a very long series of conjunctions: David is mortal and Mark is mortal and Michele is mortal and… This strategy does not capture the meaning of every man is mortal. As Fodor put it, the ellipsis signals defeat. Without the word every, English speakers would not be able to express the thought that every man is mortal. How could one learn the meaning of every man is mortal absent the thought that every man is mortal? Fodor argued that, for a given language L, it is impossible to learn a spoken or mental language that is more expressive than L.

What cannot be represented cannot be learned.13
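As the endnote to that remark observes, Fodor framed the argument in terms of formal logic rather than English. A minimal rendering of that version, with the predicate and constant names chosen here purely for illustration, makes the gap explicit:

```latex
% The thought to be expressed requires a universal quantifier:
\[ \forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr) \]

% A language lacking quantifiers can offer only a finite conjunction:
\[ \mathrm{Mortal}(d_1) \land \mathrm{Mortal}(d_2) \land \cdots \land \mathrm{Mortal}(d_n) \]

% No finite conjunction is equivalent to the universal claim: each one remains
% true in a situation containing a further, unlisted man who is not mortal.
% A learner whose hypotheses must be couched in the quantifier-free language
% cannot so much as formulate the stronger thought, let alone test or learn it.
```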

Fodor insisted upon this point throughout his career. An early target was Jean Piaget. According to Piaget, children go through a number of stages until they reach full cognitive maturity at around the age of sixteen. Each new stage is qualitatively more sophisticated than the previous one. Piaget never explained how children could attain a new stage using only the cognitive abilities of their current one.

Children cannot represent what they cannot represent.14

The issue arises again in James Hurford’s attempt to explain the evolution of language.15 Hurford models his account on Michael Tomasello’s theory of language acquisition.16 According to Tomasello, children’s language becomes more sophisticated as they use and reuse the pieces they learn. A similar process, Hurford suggests, is at work in the evolution of language. What is missing from both accounts is any theory of the intermediate steps, of how a system of limited expressive power could give rise to one more expressive than itself. Fodor’s argument is more often than not ignored. It is easy to see why. Fodor believed that we must come to the world with a great deal of innate and explicit information. This is the sticking point. There are carburetors and horses in the world, but it is very hard to imagine that the corresponding concepts are innately represented in the language of thought.

In much of his work, Fodor embraced the hypothesis-testing model of learning theory. Imagine a child learning the complex concept of a red triangle. The obvious hypothesis: an object is a red triangle if it is both red and a triangle. If this is the case, the child is not really learning anything new; the requisite concepts, red and triangle, must already be in place for the hypothesis to be formulated at all. Similar problems appear in the study of language acquisition. On bootstrapping theories of learning, children use semantic information to learn some syntactic aspects of their language.17 Children might recognize that certain events consist of agents and actions, and from this infer the relevant syntactic categories of a given sentence. Though such a story assumes a fair amount of innate knowledge, not all linguistic knowledge must be innate. At least some information is learned.18

Still, the gravamen of Fodor’s argument is the impossibility of putting together thoughts that are too complex for the tools at our disposal. The development of arithmetic in children is an example. Children in preschool make a transition, without instruction, from a crude to a more sophisticated system of counting. Asked to add 3 objects to 2 objects (3 + 2), children count the first set and then continue through the second. Asked next to add 3 objects to 4, they begin counting again from the very beginning. At some point, children see that they need not begin anew: they can take the first count as given and count on from there. But this presupposes that children entertain a more sophisticated form of counting than the one they employ. How can the transition from one method to the other take place? Where does this knowledge come from?
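The contrast can be made concrete with a small sketch of my own, an illustration rather than anything drawn from the developmental literature: the crude routine counts every object from one, while the sophisticated routine takes the first addend as given and counts on. The puzzle is how a child equipped only with the first routine could come to represent the second.

```python
def count_all(first, second):
    """Crude routine: count every object, starting again from one."""
    total = 0
    for _ in range(first):     # count the first set: 1, 2, 3...
        total += 1
    for _ in range(second):    # ...and keep counting through the second set
        total += 1
    return total


def count_on(first, second):
    """Sophisticated routine: take the first addend as given and count on."""
    total = first              # no need to begin anew
    for _ in range(second):    # ...4, 5
        total += 1
    return total


# Both routines agree on the answer; what the second presupposes, and the
# first does not contain, is the knowledge that the result of counting a set
# can stand in for the count itself.
assert count_all(3, 2) == count_on(3, 2) == 5
```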

Susan Carey recently proposed a bootstrapping model that purports to explain this transition.19 She has put together an intricate proposal, one that appeals to innate knowledge about agents and goals. Learning to recite a series of symbols by rote can help children make the relevant transition. Her theory assumes what it is intended to demonstrate. Learning to count a number series can only work if children already know that whatever comes after the last number in the series is also a number. It is unclear that this crucial fact is bootstrapped from anywhere within Carey’s theory.

The fact that such worries often arise is a testament to the significance of Fodor’s argument, and to his standing in the history of cognitive science.


  1. Jerry Fodor, The Modularity of Mind (Cambridge, MA: MIT Press, 1983). 
  2. I have adapted my own example from Richard Samuels, “Classical Computationalism and the Many Problems of Cognitive Relevance,” Studies in History and Philosophy of Science Part A 41 (2010): 280–93, where the possible origin of the example is traced. 
  3. It is because of this property of modules that Fodor thought cognitive psychologists could make a living. Having a well-organized set of data is another way of saying that such data are amenable to scientific research. My own description is a simplification and I am ignoring many issues, including the question of how many modules there actually are, something Fodor was especially sensitive to. 
  4. Fernanda Ferreira and James Nye, “The Modularity of Sentence Processing Reconsidered,” in On Concepts, Modules, and Language: Cognitive Science at Its Core, ed. Roberto de Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 63–86. 
  5. Merrill Garrett, “Exploring the Limits of Modularity,” in On Concepts, Modules, and Language: Cognitive Science at Its Core, ed. Roberto de Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 41–62; Thomas Bever, “The Unity of Consciousness and the Consciousness of Unity,” in On Concepts, Modules, and Language: Cognitive Science at Its Core, ed. Roberto de Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 87–112. It is worth mentioning that Fodor wrote a seminal book on the psychology of language in the 1970s with Bever and Garrett: Jerry Fodor, Thomas Bever, and Merrill Garrett, The Psychology of Language (London: McGraw-Hill, 1974). 
  6. See, for instance, Zenon Pylyshyn, Seeing and Visualizing: It’s Not What You Think (Cambridge, MA: MIT Press, 2003). 
  7. Noam Chomsky, “Two Notions of Modularity,” in On Concepts, Modules, and Language: Cognitive Science at Its Core, ed. Roberto de Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 25–40. 
  8. Claude Panaccio, Mental Language: From Plato to William of Ockham (New York: Fordham University Press, 2017). 
  9. Jerry Fodor, The Language of Thought (Cambridge, MA: Harvard University Press, 1975); Jerry Fodor, Concepts: Where Cognitive Science Went Wrong (Oxford: Oxford University Press, 1998); Jerry Fodor, LOT 2: The Language of Thought Revisited (Oxford: Oxford University Press, 2008). 
  10. My own contribution to this volume constitutes an update of some of Fodor’s claims. I argue that spoken languages are unlikely to be the vehicles in which we think, because they are too inexplicit and often convoluted, and that we are not, properly speaking, thinking when we engage in interior monologue. David Lobina and José García-Albea, “On Language and Thought: A Question of Formats,” in On Concepts, Modules, and Language: Cognitive Science at Its Core, ed. Roberto de Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 249–74. 
  11. Jerry Fodor, The Language of Thought (Cambridge, MA: Harvard University Press, 1975). 
  12. Massimo Piattelli-Palmarini, “Fodor and the Innateness of All (Basic) Concepts,” in On Concepts, Modules, and Language: Cognitive Science at Its Core, ed. Roberto de Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 211–38. 
  13. Jerry Fodor, “Fixation of Belief and Concept Acquisition,” in Language and Learning: The Debate Between Jean Piaget and Noam Chomsky, ed. Massimo Piattelli-Palmarini (London: Routledge, 1979), 142–61. The argument really only works with formal logic as the example, which is how Fodor himself framed it, but I think my linguistic description is more intuitive and suffices for the purposes at hand. 
  14. Jerry Fodor, “Fixation of Belief and Concept Acquisition,” in Language and Learning: The Debate Between Jean Piaget and Noam Chomsky, ed. Massimo Piattelli-Palmarini (London: Routledge, 1979), 142–61. 
  15. James Hurford, The Origin of Grammar (Oxford: Oxford University Press, 2012). 
  16. This is possibly on the understanding that ontogeny recapitulates phylogeny, Ernst Haeckel’s famous phrase, which is itself problematic. Michael Tomasello, Constructing a Language: A Usage-Based Theory of Language Acquisition (Cambridge, MA: Harvard University Press, 2003). 
  17. Steven Pinker, Language Learnability and Language Development (Cambridge, MA: Harvard University Press, 1984). 
  18. The use of the word “bootstrapping” is apparently related to the expression “to pull oneself up by one’s bootstraps.” Yet it is used with no apparent hint of irony, given that the expression surely refers to an impossible task. Fodor must have loved this. 
  19. Susan Carey, “Bootstrapping and the Origin of Concepts,” Daedalus 133 (2004): 59–68. 

David Lobina is a philosopher at the University of Barcelona.

