Linguistics / Review Essay

Vol. 5, No. 3 / September 2020

Speech comes naturally to human beings. We are tuned to speech in utero, we swiftly learn to voice our thoughts and needs, and we do so seamlessly by recruiting specialized oral and brain mechanisms that have likely evolved in our species alone.

Speech and language are so tightly linked that we often think them one and the same. Noam Chomsky, however, has always attributed linguistic competence to abstract and universal rules.1 But his is now a minority voice. A growing literature in the brain and cognitive sciences has sought to anchor language in speech itself. Friedemann Pulvermüller and his colleagues have shown that when syllables like ba are heard, the lip motor area in the brain lights up; with ta, it is the tongue motor area.2 On this view, speech holds the key to the human capacity for language, to its internal structure and evolutionary origins. The command of English is a sensory and motor feat. The utterance of a sentence like dogs bark reflects a speaker’s exquisite control over his lips and tongue, something akin to his ability to tap his fingers, chew, or dance.

Nearly seventy million deaf people use a manual language. Sign languages demonstrably differ from nonlinguistic gestures, and these languages are not mutually intelligible, nor are they patent to nonsigners. Yet sign languages come to human beings naturally and spontaneously. The psychologist Susan Goldin-Meadow has shown that deaf children raised in hearing families, with no exposure to sign language, generate home signs on their own, complete with rudimentary rules to which their mothers are not privy. When home-signers gather together, a new language is born. The spontaneous birth of sign languages has been meticulously documented in Nicaragua,3 but many other cases are well known.4

Human brains support language in two different formats, and these systems recruit shared and linked brain mechanisms.5 These links are evident in the effect of early experience with sign language on subsequent linguistic competence. It is well known that children who are deprived of language in early development do not fully catch up when they encounter language later in life. Deaf children raised in hearing families are at risk of lacking early access to language. While early linguistic experience is critical, its format, whether spoken or signed, matters less. Research by Rachel Mayberry has shown that early linguistic experience with sign language facilitates the later acquisition of English; in fact, the benefit from early exposure to sign language was comparable to the benefit from early exposure to another spoken language.6

This striking result is open to multiple explanations. One possibility is that early access to language provides the child with social, emotional, and cognitive advantages that are not specific to language, and it is these nonlinguistic skills that facilitate later language learning. But just possibly, linguistic principles themselves transfer across modalities. An early exposure to sign language helps because some of its rules are relevant to the later acquisition of English. Language, on this account, is neither speech nor sign, but an abstract algebraic system that can emerge in either modality.

Results from my lab support this possibility.7 My colleagues and I found that people apply the rules of their spoken language to signs, and they do so spontaneously—despite having no previous experience with sign language. In these experiments, we gauged the responses of speakers to signs in American Sign Language (ASL). Our participants were sign language naive—none commanded a sign language. If language were solely a sensory and motor affair, then one would expect naive participants to treat these visual displays in a nonlinguistic fashion, akin to pantomime or dance. Knowledge of language should be irrelevant.

What we found, instead, was that speakers’ responses to signs depended on linguistic structure. First, people shifted their responses to the same sign depending on the implied level of linguistic analysis—phonology or morphology. And even more remarkably, the responses of these naive speakers to signs further depended on the structure of their spoken language. Thus, English speakers showed one pattern of response; Hebrew speakers, the opposite. And critically, these differences were lawfully predicted by the distinct morphological structure of English and Hebrew.

In these experiments, people with no command of a sign language were presented with two types of novel signs in ASL.8 One sign exhibited doubling, that is, the same syllable repeated twice, denoted XX; another sign had two different syllables, denoted XY. Responses to these two types of signs, XX and XY, were evaluated under two conditions. In the phonological condition, the signs were presented as names for a single object, so doubling had no special significance; the signs were bare phonological forms. In the morphological condition, the change from X to XX indicated a systematic change in meaning, either plurality or diminution; here, doubling implicitly signaled a morphological operation. The task was identical across conditions: participants were simply asked to choose which sign made a better name, XX or XY. We administered these experiments to speakers of English and Hebrew.

In so doing, we sought to address two questions. First, do responses to signs vary across phonological and morphological conditions? Second, do the responses of naive speakers to signs depend on the structure of English or Hebrew?

We reasoned that, if speakers treat signs strictly as visual displays of a motor activity, then responses to signs should depend only on the demands the signs place on the visual and motor systems: linguistic factors should be irrelevant. And since the stimuli themselves did not change across conditions, responses should be invariant across the two linguistic levels and across speakers’ linguistic experience.

Our results, however, showed that linguistic factors strongly affected the responses of naive speakers to signs. When doubling had no meaning and repetition was strictly phonological, speakers of both languages exhibited a doubling aversion, as they reliably preferred XY to XX. But when doubling indicated a systematic change in meaning and signaled a morphological operation, speakers showed a reliable preference for XX.

Responses to the same stimulus thus reflect the distinct representations the mind projects onto it. Linguistic theory offers a simple explanation for this shift. Doubling exhibits structural ambiguity: it is amenable to two distinct parses, much like an ambiguous figure in vision. The phonological parse is ill-formed because it violates a putatively universal grammatical constraint on adjacent identical elements. The morphological parse of doubling, in contrast, is better formed. The linguistic level of analysis matters even when the stimulus modality is unfamiliar.

Responses also depended on the morphology of participants’ spoken language. English speakers preferred XX when doubling indicated plurality; Hebrew speakers, when doubling indicated diminution. These preferences are in line with the different morphologies of the two languages; Hebrew, for instance, uses reduplication to form diminutives, as in kelev, ‘dog,’ versus klavlav, ‘puppy.’ The preferences are also in agreement with the responses of English and Hebrew speakers to novel spoken words.

Subsequent research has extended this investigation to Mandarin Chinese and Malayalam,9 and the previous conclusions have been upheld: the responses of these speakers to signs were predicted by the morphology of their spoken language. Speakers of Mandarin, which famously lacks productive plural morphology, showed no doubling preference when XX signs indicated plurals, whereas speakers of Malayalam, with rich plural morphology, did.

Taken as a whole, these results show that the responses of naive speakers to signs depend on linguistic factors, including the linguistic level of analysis and the speakers’ linguistic experience. They are the first to show that naive speakers spontaneously treat signs as linguistic entities and project onto them grammatical rules of their native spoken language. If knowledge of language can project from one stimulus modality to another, then the relevant principles cannot be aural or oral, visual or manual. Knowledge of language includes rules that are algebraic and abstract.

These discoveries shed light on human nature and solve a number of linguistic mysteries. They explain why human communities can spontaneously generate language in either format, why signed and spoken languages share some of their structure and engage common brain mechanisms,10 and why early experience with sign language facilitates the subsequent acquisition of spoken language.11

The amodal nature of language also underscores the significance of sign languages: their complexity, their expressive power, and the many advantages they confer on their users, advantages comparable to those conferred by spoken languages. Finally, language is at the core of reading—a cultural technology that recycles the core cognitive and brain mechanisms of language.12 Reading acquisition presents particular challenges to the deaf, as reading requires that learners become aware of the link between spelling and the sound structure of language—e.g., seed and cent share their initial phoneme—and this skill is exceedingly difficult for deaf individuals to attain.13 Finding that some of the rules governing linguistic patterns are amodal is encouraging, because it could offer a bridge to reading for deaf learners.

Speaking is a human instinct. People know how to talk in more or less the sense that spiders know how to spin webs.14 But unlike spiders, we spin our linguistic webs from multiple raw materials. Speech is the default linguistic channel in hearing communities, but language and its channel are not one and the same.


  1. Noam Chomsky, Syntactic Structures (The Hague: Mouton, 1957), 116; Noam Chomsky, Language and Mind (New York: Harcourt, Brace & World, 1968). 
  2. Friedemann Pulvermüller et al., “Motor Cortex Maps Articulatory Features of Speech Sounds,” Proceedings of the National Academy of Sciences 103, no. 20 (2006): 7,865–70, doi:10.1073/pnas.0509989103. 
  3. Ann Senghas, Sotaro Kita, and Asli Özyürek, “Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua,” Science 305, no. 5,691 (2004): 1,779–82, doi:10.1126/science.1100199. 
  4. Wendy Sandler et al., “The Emergence of Grammar: Systematic Structure in a New Language,” Proceedings of the National Academy of Sciences 102, no. 7 (2005): 2,661–65, doi:10.1073/pnas.0405448102. 
  5. Laura Ann Petitto et al., “Speech-Like Cerebral Activity in Profoundly Deaf People Processing Signed Languages: Implications for the Neural Basis of Human Language,” Proceedings of the National Academy of Sciences 97, no. 25 (2000): 13,961–66, doi:10.1073/pnas.97.25.13961; Karen Emmorey et al., “How Sensory-Motor Systems Impact the Neural Organization for Language: Direct Contrasts between Spoken and Signed Language,” Frontiers in Psychology 5 (2014), doi:10.3389/fpsyg.2014.00484; Mairéad MacSweeney et al., “Phonological Processing in Deaf Signers and the Impact of Age of First Language Acquisition,” NeuroImage 40, no. 3 (2008): 1,369–79, doi:10.1016/j.neuroimage.2007.12.047. 
  6. Rachel Mayberry, Elizabeth Lock, and Hena Kazmi, “Linguistic Ability and Early Language Exposure,” Nature 417, no. 6,884 (2002): 38, doi:10.1038/417038a. 
  7. Iris Berent et al., “The Double Identity of Linguistic Doubling,” Proceedings of the National Academy of Sciences 113, no. 48 (2016): 13,702–707, doi:10.1073/pnas.1613749113. 
  8. Berent et al., “Double Identity.” 
  9. Iris Berent et al., “Knowledge of Language Transfers from Speech to Sign: Evidence from Doubling,” Cognitive Science 44, no. 1 (2020), doi:10.1111/cogs.12809. 
  10. Petitto et al., “Speech-Like Cerebral Activity”; Emmorey et al., “Direct Contrasts between Spoken and Signed Language”; MacSweeney et al., “Phonological Processing in Deaf Signers.” 
  11. Mayberry, Lock, and Kazmi, “Linguistic Ability.” 
  12. Stanislas Dehaene, Reading in the Brain: The Science and Evolution of a Human Invention (New York: Viking, 2009). 
  13. Loes Wauters, Wim Van Bon, and Agnes Tellings, “Reading Comprehension of Dutch Deaf Children,” Reading and Writing 19, no. 1 (2006): 49–76, doi:10.1007/s11145-004-5894-0; Fiona Kyle and Margaret Harris, “Predictors of Reading Development in Deaf Children: A 3-Year Longitudinal Study,” Journal of Experimental Child Psychology 107, no. 3 (2010): 229–43, doi:10.1016/j.jecp.2010.04.011; Richard Conrad, The Deaf Schoolchild: Language and Cognitive Function (London: Harper and Row, 1979); Carol Traxler, “The Stanford Achievement Test, 9th Edition: National Norming and Performance Standards for Deaf and Hard-of-Hearing Students,” Journal of Deaf Studies and Deaf Education 5, no. 4 (2000): 337–48, doi:10.1093/deafed/5.4.337; T. E. Allen, “Patterns of Academic Achievement among Hearing Impaired Students: 1974–1983,” in Deaf Children in America, eds. Arthur Schildroth and Michael Karchmer (San Diego: College-Hill Press, 1986): 161–206. 
  14. Steven Pinker, The Language Instinct (New York: Morrow, 1994). 

Iris Berent is Professor of Psychology at Northeastern University in Boston.

