Linguistics / Book Review

Vol. 4, No. 3 / March 2019

Language in Our Brain: The Origins of a Uniquely Human Capacity
by Angela Friederici
MIT Press, 304 pp., $45.00.

Angela Friederici is the director of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig. An internationally renowned neuropsychologist, she is known as well for her expertise in linguistics. Language in Our Brain thus offers an insider’s account of the play between language and the neurosciences. In his endorsement, David Poeppel describes her book as a “masterful summary of decades of work on the neurobiological foundations of language.” He goes on to remark that it “develops a comprehensive account of how this most complex of human computational functions is organized, providing a detailed and lucid perspective on the neuroscience of language.”1

Language in Our Brain is, in short, an obviously important book.

Drums and Symbols

After studying Alexis St. Martin’s stomach through a gastric fistula in the 1820s, US Army surgeon William Beaumont determined that digestion is more chemical than mechanical. Muscular contractions of the stomach mashed his patient’s food, but it was the stomach’s gastric acid that dissolved it. Beaumont is today known as the father of gastric physiology, if only because two centuries ago, physicians had no idea how digestion worked.

At the beginning of the twentieth century, Santiago Ramón y Cajal advanced the daring hypothesis that neurons form a discrete lattice. If neurons are discrete, synaptic transmission follows inevitably. One neuron can do little, after all, and if many neurons can do more, they must be in touch with one another. This is precisely the conclusion that Cajal drew.

In 1906, Cajal shared the Nobel Prize with Camillo Golgi, who argued that the brain comprises a single continuous but reticulated network. Cajal was correct and Golgi wrong. Or so it seems today.

In the mid-1950s, Noam Chomsky solidified theoretical syntax. Thinking in general, and language in particular, he argued, boils down to symbolic manipulation—a perspective known as the computational theory of mind. It is a point of view deeply indebted to the work undertaken by the great logicians of the 1930s: Kurt Gödel, Alonzo Church, Alan Turing, and Emil Post.

For all the sophistication of their ideas, surprisingly simple questions remain. Which part of our brain carries information forward in time? No one knows. For that matter, no one knows what a symbol is, or where symbolic interactions take place. The formal structures of linguistics and neurophysiology are disjoint, a point emphasized by Poeppel and David Embick in a widely cited study.2 There is an incommensurability between theories of the brain, TB, and theories of the mind, TM. This is the sort of granularity issue that concerned Poeppel and Embick. TM deals with formal devices and how they interact, while TB deals with waves of different frequencies and amplitudes, and how they overlap in time sequences across brain regions.

In the absence of a common vocabulary and conceptual space, TM and TB are, at best, conceptual strangers. Are they elementarily equivalent, or is one an extension of the other? Do they share a common model? Is there a mapping between them such that one is interpretable in the other? Language in Our Brain does not explicitly ask such questions. It is worth considering whether they have answers, and if not, whether they are correctly posed.

Friederici takes linguistics seriously, and this is all to the good. Few neuropsychologists have studied how sentences break down into phrases, or how words carry meanings, or why speech is more than just sound. No one has distinguished one thought from another by dissecting brains. Neuroimaging tells us only when some areas of the brain light up selectively. Brain wave frequencies may suggest that different kinds of thinking are occurring, but a suggestion is not an inference—even if there is a connection between certain areas of the brain and seeing, hearing, or processing words. Connections of this sort are not nothing, of course, but neither are they very much. Is this because techniques have not yet been developed to target individual neurons? Or is it because thinking is more subtle than previously imagined?

We may not figure this out within our lifetimes.

There have historically been many theories that rival the computational theory of mind. In his famous review of B. F. Skinner’s Verbal Behavior, Chomsky demonstrated that no such theory could explain the fact that human language is compositional, representational, and recursive.3 It is within the space marked by the computational theory of mind that these properties receive an explanation. Progress is slow. Language in Our Brain talks of information or representations, but the corresponding entries are not in the glossary or index. The book says little about them. When Friederici writes about the “fast computation of the phonological representation,” an obvious inferential lapse is involved.4 Some considerable distance remains between the observation that the brain is doing something and the claim that it is manipulating various linguistic representations. Friederici notes the lapse. “How information content is encoded and decoded,” she remarks, “in the sending and receiving brain areas is still an open issue—not only with respect to language, but also with respect to the neurophysiology of information processing in general.”5

At the Limits of Neuroimaging

Any discussion about language and the brain must be focused on human language, and throughout her book Friederici assumes that something like the minimalist program is its underlying theory. Minimalism is a streamlined version of generative grammar, and it is precisely because of this theoretical streamlining that finding syntax within the brain is even possible. Neuroimaging techniques depict the brain as it digests information. This is material that Friederici handles expertly. The techniques she describes yield a variety of markers, typically signaling latency in milliseconds and the polarity of the electrophysiological signal. There has been some progress in determining precisely where what is taking place takes place. Overall oscillation packages in brain waves can also be studied through some of the same techniques. Still more recent techniques allow the analysis of neurotransmitters, or even of single neurons. The results suggest the existence of neural pathways connecting brain regions, or representational networks.

“[E]ven during task-dependent functional magnetic resonance imaging,” Friederici acknowledges, “only about 20 percent of the activation is explained by the specific task whereas about 80 percent of the low frequency fluctuation is unrelated.”6 A lot is going on at any given time within a given brain, and experimenters must ingeniously subtract what is irrelevant to whatever task is being observed. This is familiar enough from daily life. We do many things at once. With present technology, there is no way to determine what each neuron is doing at any given moment, or whether neuronal teams are firing together to perform a given task. Below a millisecond, sensing techniques yield noise.

Cognitive scientists cannot say how the mass or energy of the brain is related to the information it carries. Everyone expects that more activity in a given area means more information processing. No one has a clue whether it is more information or more articulated information, or more interconnected information, or whether, for that matter, the increased neuro-connectivity signifies something else entirely. Friederici remarks:

The picture that can be drawn from the studies reviewed here is neuroanatomically quite concise with respect to sensory processes and those cognitive processes that follow fixed rules, namely, the syntactic processes. Semantic processes that involve associative aspects are less narrowly localized.7

And then there are event-related potential effects, or stereotyped electrophysiological responses to a stimulus:

Acoustic processes and processes of speech sound categorization take place around 100 ms (N100). Initial phrase structure building takes place between 120 and 250 ms (ELAN), while the processing of semantic, thematic, and syntactic relations is performed between 300 and 500 ms (LAN, N400). Integration of different information types takes place around 600 ms (P600). These processes mainly involve the left hemisphere.8

Markers like N100, N400, or P600 signal whether the electrophysiological reading is positive (P) or negative (N); the number indicates the approximate latency in milliseconds after stimulus onset. No one knows what such polarities entail. It is the functions of brain areas and timeframes, Friederici assumes, that determine whether something is early or late, anterior or posterior, lateral or bilateral. If the perception of a signal presupposes some sensory modality, the modality must swing into action before computation begins. Language in Our Brain is written in the expectation, or the hope, that a division of labor into phonetics, morphology, syntax, semantics, and pragmatics more or less corresponds to the tasks the brain executes in aggregating representations from more elementary bits.


Merge is the essential operation of Chomsky’s minimalism, because it is the simplest way of putting linguistic items together. “Merge,” Friederici assumes, “has a well-defined localization in the human brain.”9

Localized? Localized where?

“[I]n the most ventral anterior portion of the BA 44.”10

The data, Friederici writes, citing Poeppel’s work, “suggest that neural activation reflects the mental construction of hierarchical linguistic structures.”11 But hierarchical linguistic structures are one thing, and Merge is quite another. It is by being merged that the and ship yield the ship. To go beyond that to sail the ship involves the merger of sail and the ship. In what order do these two mergers take place? The merger of the ship must take place before that particular ship sets sail because the ship is a phrase, and there is no word to which sail could have merged within this sentence. Sail the is not a phrase of English.
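The order of the two mergers can be made concrete in a few lines of code. What follows is a toy sketch of my own, treating Merge as unordered binary set formation; the representation is an expository assumption, not a claim about Friederici’s formalism or the brain’s data structures:

```python
# A toy illustration of Merge as a binary, bottom-up operation.
# Nested frozensets stand in for syntactic objects; this encoding
# is an expository assumption, not anyone's actual implementation.

def merge(a, b):
    """Combine two syntactic objects into a new, unordered one."""
    return frozenset([a, b])

# "the ship" must be built before "sail" can combine with it:
the_ship = merge("the", "ship")          # first merger
sail_the_ship = merge("sail", the_ship)  # second merger

# "sail" cannot merge with "the" alone: sail the is not a phrase.
```

The point of the nesting is that the ship must exist as a single object before sail can combine with it, which is exactly the logical order the text describes.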

There is a difference between the temporal order of events in neurophysiology and the logical order of events in syntax. It is obvious that, in the phrase sail the ship, we first pronounce sail and next the, with ship coming last. What could it mean to say that I first merged the and ship, and then merged them with sail? When I have heard or read sail the ship I have encountered sail the… first, and at that point I cannot know whether what is next is going to be ship, boat, or even skies.


    1a. The man sailed the ship.
    1b. [S [NP The man ]NP [VP sailed [NP the ship ]NP ]VP ]S

Sentence 1a has a subject, the noun phrase the man, and a predicate, the verb phrase sailed the ship. There is a logical order in which a sentence like this is assembled, in terms of what grammarians call thematic relations. Friederici is sensitive to the apparatus of modern syntax. Thematic relations, she writes, express “the relation between the function denoted by a noun and the meaning of the action expressed by the verb.”12 It follows that the relationship between sailed and what (the) ship denotes is logically prior to that between what (the) man denotes and the rest of the sentence. In 1b, the ship is merged first, but what is said first is the man. The speech sequence (as perceived) and the syntactic sequence (as generated) are at odds.

Generative grammar addresses this sort of orthogonality by separating competence from performance. Competence reveals that in 1a Merge works from the bottom up, following the brackets in 1b. That what is first encountered in speech is the man is a fact of performance, a matter of parsing. This poses a serious puzzle. Hearing or reading a sentence is an affair from before to after. It is not bottom up. Parsing even something as simple as 1a is a gambit. After the phrase the man has been parsed, it is held in a memory buffer in order to allow the mental parser to concentrate on what comes next, so as to establish thematic integration. In 1a, that happens to be sailed the ship, but consider:

    2. The man
       a. sailed a balloon.
       b. sailed a kite.
       c. sailed a space-probe.

These are all sailings, but rather different actions are asserted for the subject, which is assigned dissimilar thematic roles depending on information that is only accessed upon parsing the direct object of each sentence. Neuroimaging cannot possibly determine whether theta relations are at work in such an elaborate parsing, or whether considerations of memory and attention are paramount instead. How would one decide that whatever is going on at BA 44 is Merge, as opposed to, for instance, the processed phrase being assigned to an active memory buffer? Merge involves systematic and phrasally complex combinatorial information, which is why language recognizers routinely invoke such notions as a memory stack. As far as I can see, present-day observational technology cannot tease these different components of syntax apart, and it therefore seems premature to claim that the observables localize Merge.
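To see why a memory stack is so hard to distinguish from Merge in practice, consider a minimal shift-reduce recognizer. The grammar and lexicon below are toy assumptions of mine, chosen only to cover sentence 1a:

```python
# Minimal shift-reduce sketch of parsing "the man sailed the ship".
# The toy grammar and lexicon are expository assumptions.
RULES = {
    ("Det", "N"): "NP",   # the man, the ship
    ("V", "NP"): "VP",    # sailed [the ship]
    ("NP", "VP"): "S",    # [the man] [sailed the ship]
}
LEXICON = {"the": "Det", "man": "N", "sailed": "V", "ship": "N"}

def parse(words):
    stack = []  # parsed material waits here, as in a memory buffer
    for w in words:
        stack.append(LEXICON[w])             # shift: left to right
        while len(stack) >= 2 and tuple(stack[-2:]) in RULES:
            rhs = tuple(stack[-2:])          # reduce: bottom-up combination
            stack[-2:] = [RULES[rhs]]
    return stack

print(parse("the man sailed the ship".split()))  # ['S']
```

The reduce steps are bottom-up, Merge-like combinations, yet what makes them possible is the stack holding the man in place while sailed the ship is processed. From the outside, activity reflecting either component would look much the same.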

The Functional Language Network

There is evidence, Friederici suggests, that different neuronal networks support early and late syntactic processes. These networks are bound together by fiber tracts. There is also a language network at the molecular level. “Information flows,” Friederici writes, “from the inferior frontal gyrus back to the posterior temporal cortex via the dorsal pathway.”13 This is, of course, inferential: no one has seen information flowing, if only because no one has ever seen information. But brain events cohere at different levels into a pattern, which is consistent with what can be surmised from brain deficits and injuries. A functional language network, if more abstract than the digestive system, is no less real.

The question is how the thing works; indeed, the question of what the functional language network might be doing should, in my view, be subordinated to the distinction between competence and performance. What the mind must know and what the brain must process are very generally orthogonal. Consider the feat involved in recognizing a word’s syntactic category, distinguishing transform from transformation and either of those from transformational. The grammatical morphemes -tion and -al come at the tail of the word. We process words from their onset, trans first, then form, and finally the suffixes. So what does the mind actually do as it encounters each of these, in that sequential order?

Faced with such considerations, Morris Halle and Kenneth Stevens pioneered the concept of analysis by synthesis in 1962.14 Thomas Bever and Poeppel remind us how this “heuristic model emphasizes a balance of bottom-up and knowledge-driven, top-down, predictive steps in speech perception and language comprehension.”15 In their view, a model integrating the orthogonality of narrow competence-driven computations and broad performative strategies is computationally tractable and biopsychologically plausible. In processing transformationalize, say, the model may make a first-pass prediction upon parsing transform that needs to be adjusted upon processing -tion, then -al, then -ize.
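The predict-and-revise cycle can be caricatured in a few lines. The suffix table below is a hypothetical toy of my own, standing in for whatever knowledge the comprehender actually deploys:

```python
# Schematic predict-and-revise loop in the spirit of analysis by
# synthesis. The suffix-to-category table is a toy assumption.
SUFFIX_OUTPUT = {"tion": "noun", "al": "adjective", "ize": "verb"}

def analyze(stem_category, suffixes):
    """Track the running guess at the whole word's category."""
    hypothesis = stem_category        # first-pass prediction from the stem
    history = [hypothesis]
    for s in suffixes:                # suffixes arrive left to right
        hypothesis = SUFFIX_OUTPUT[s] # each one forces a revision
        history.append(hypothesis)
    return history

# transform -> transformation -> transformational -> transformationalize
print(analyze("verb", ["tion", "al", "ize"]))
# ['verb', 'noun', 'adjective', 'verb']
```

Each incoming morpheme overturns the running hypothesis about the whole word’s category, which is the balance of bottom-up evidence and top-down prediction that Bever and Poeppel describe.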

A functional language network is, no doubt, playing some kind of role in such processes, but whether the activity that imaging techniques reveal when our brain entertains these symbolic dependencies involves the grammarian’s Merge, or something else entirely, no one really knows. Language in Our Brain begins by quoting Paul Flechsig: “[I]t is rather unlikely that psychology, on its own, will arrive at the real, lawful characterization of the structure of the mind, as long as it neglects the anatomy of the organ of the mind.”16 I am left wondering whether neurobiology, if it seeks a lawful understanding of the human mind, should not take just as seriously the central results of cognitive psychology, including the competence/performance divide.


  1. MIT Press, “Language in Our Brain: The Origins of a Uniquely Human Capacity.” 
  2. David Poeppel and David Embick, “Defining the Relation between Linguistics and Neuroscience,” in Twenty-first Century Psycholinguistics: Four Cornerstones, ed. Anne Cutler (Mahwah: Lawrence Erlbaum, 2005), 103–20. 
  3. Noam Chomsky, “A Review of B. F. Skinner’s Verbal Behavior,” Language 35, no. 1 (1959): 26–58. 
  4. Angela Friederici, Language in Our Brain: The Origins of a Uniquely Human Capacity (Cambridge, MA: MIT Press, 2017), 20. 
  5. Ibid., 121. 
  6. Ibid., 127. 
  7. Ibid., 82. 
  8. Ibid. 
  9. Ibid., 4. 
  10. Ibid., 42. 
  11. Ibid., 53. 
  12. Ibid., 237. 
  13. Ibid., 129. 
  14. Morris Halle and Kenneth Stevens, “Speech Recognition: A Model and a Program for Research,” IRE Transactions on Information Theory 8 (1962): 155–59. 
  15. Thomas Bever and David Poeppel, “Analysis by Synthesis: A (Re-)Emerging Program of Research for Language and Vision,” Biolinguistics 4, no. 2–3 (2010): 174. 
  16. Angela Friederici, Language in Our Brain: The Origins of a Uniquely Human Capacity (Cambridge, MA: MIT Press, 2017), v. 

Juan Uriagereka is a linguist at the University of Maryland.


