Letters to the editors

Vol. 3, No. 3 / November 2017

To the editors:

Two aspects of the contemporary literature on the evolution of language (and cognition) are particularly noteworthy—and vexing. One is the clear paucity of available evidence; the other is the striking deductive style of many proposals. We are certainly not wanting in strong and detailed accounts either way.1

Remarkably, many of the proposals on offer are presented as well-ordered stage-by-stage affairs, where a given stage in evolution gives way to a subsequent stage in which linguistic (and often cognitive) abilities have undergone a clear and significant qualitative jump. That is to say, the abilities of the new stage are typically more sophisticated than those of the previous stage.

James Hurford has put forward a proposal along these lines. Hurford has modeled his account of how language emerged in our species on Michael Tomasello’s theory of how language is learned by our species, possibly on the understanding that ontogeny recapitulates phylogeny.2 According to Tomasello, language acquisition is a constructionist phenomenon; children’s knowledge of language becomes more sophisticated as they use and reuse what they learn. And so it is for the evolution of language, from the rather simple systems of the first language users to our modern abilities.3

Such a progression is typical of gradualist theories of evolution. This is because such approaches tend to be based on a number of assumptions regarding the sort of problems our ancestors faced, the type of solutions available to them, and the general commonality of human and nonhuman abilities. The result is a situation in which many properties of cognition are deduced from what the theory of evolution mandates should have taken place.

So-called saltationist accounts of the sort Cedric Boeckx has recently criticized are not so prone to tidy stage-by-stage stories. This is because these approaches typically postulate some abrupt event in evolutionary history that changes an organism dramatically. Saltationist accounts are thus not quite deductive in nature and are better understood as “inferences to the best explanation.”

The theoretical outlook of the saltationist really is different. One starts with a comprehensive study of an aspect of cognition and only then attempts to provide an account of how it evolved. For the saltationist, evolutionary accounts are posterior to proper characterizations of cognition; evolutionary considerations have but a limited effect on the study of cognition itself.

The conflict in outlook between gradualist and saltationist accounts raises interesting questions, some of which run counter to common practice in comparative and evolutionary studies, and this deserves some discussion.

Of particular interest is the Darwinian principle that animal species differ from each other in degree rather than in kind, a principle that applies to cognitive traits as much as it does to physical ones. If this is so, “there must be,” as Boeckx puts it, “a path from them [nonhuman animal species] to us” in evolutionary history.

Boeckx has trumpeted this very point in various publications, including his review of Robert Berwick and Noam Chomsky’s Why Only Us. And he has often done so by quoting a brief paper by Frans de Waal and Pier Francesco Ferrari where these scholars argue “that the basic building blocks of cognition might be shared across a wide range of species.”4

We might well be wary of the words “might” and “range” from that quote, but there is no denying the centrality of animal cognition for such an approach. Boeckx clearly outlines what the repercussions might be in the case of language:

It may be the case that no component of language is unique, or unique to humans. What is unique might simply be the particular way in which non-linguistic, evolutionary old structures, observable in other species, is assembled.

Accordingly, the study of cognition should start by decomposing what prima facie look like complex human abilities into “basic building blocks.” Once this is done, these blocks are likely to turn out to be evolutionarily ancient and thus part and parcel of the cognition of some nonhuman species. As such, there might not be many, or any, species-unique traits, only particular assemblies of common properties.

The approach sounds reasonable enough, but its underlying logic is a hotly contested topic. Darwin might well have been wrong.

As a case in point, Derek Penn and colleagues have argued that there is a large discontinuity between human and nonhuman minds. This appears to be irreducible to the unique human possession of natural language, the one factor often regarded as the differentiating feature. The differences between animal and human cognition are rather widespread according to Penn et al., the human mind remaining “distinctively human even in the absence of… language.”5

There are many details worth discussing in Penn et al. and in the many responses the article has received, but this is not the place to do so. What is certainly apposite here is the question of whether the cognitive abilities we can study across a range of species are specific (to some species or domains) or general (in all species and domains). This issue requires a nuanced approach not too dissimilar to what Boeckx has in mind, but of a somewhat different character.

Cognitive abilities can be decomposed into two broad elements to begin with: the representations of a given domain and the architecture that uses such representations (the operations and capacities). The distinction is seldom noted in the literature, and it certainly was unknown to Darwin.

The psychologist Randy Gallistel, noted for his work on animal cognition, is an exception. Gallistel has shown that the architecture usually postulated to account for most animal abilities is a rather domain-general one: a read-and-write Turing Machine-like mechanism that seems to underlie many human and nonhuman abilities.6

The architecture is general and thus probably evolutionarily ancient. But combined with the representations of different domains, the result appears to be a constellation of rather specific phenomena. The specificity comes in the sort of representations that the read/write mechanism manipulates in each species (and for each problem), and thus in the sort of computations that the mechanism carries out in each case. The representations and computations of the foraging bee are rather different from those of the African ant, even if the operating architecture remains roughly the same.
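To make the distinction concrete, the following sketch, in Python, illustrates the logic: a single, domain-general read/write memory paired with different, species-specific representations and computations. It is an illustration only, not a model of either species nor of Gallistel’s own formalism, and the function names and the particular computations are hypothetical.

    import math

    class ReadWriteMemory:
        """Domain-general architecture: symbols can be written to and read back from addresses."""
        def __init__(self):
            self._store = {}

        def write(self, address, value):
            self._store[address] = value

        def read(self, address, default=None):
            return self._store.get(address, default)

    def bee_update_home_vector(memory, distance, heading):
        """Bee-style computation (illustrative): dead reckoning over a Cartesian vector; heading in radians."""
        x, y = memory.read("home_vector", (0.0, 0.0))
        memory.write("home_vector", (x + distance * math.cos(heading),
                                     y + distance * math.sin(heading)))

    def ant_update_odometer(memory, steps):
        """Ant-style computation (illustrative): an odometer over step counts."""
        memory.write("steps_from_nest", memory.read("steps_from_nest", 0) + steps)

    # The same read/write machinery serves both species, but the representations it
    # manipulates, and hence the computations defined over them, are different: the
    # ant's odometer cannot consume the bee's vector, nor combine representations
    # the way the bee does.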

This factor is somewhat lost in discussions of the basic building blocks that “might be shared across a wide range of species,” and yet it would appear to be a rather crucial point: different computations imply different primitives, and hence different building blocks. This is most evident in the outputs different species produce for different problems (very often one problem per species), since these appear to be incommensurable with each other. The African ant does not understand some of the representations the foraging bee uses, and it certainly cannot put them together in the same way that bees can.

This gap is only magnified when we compare human and nonhuman abilities. Jerry Fodor and Zenon Pylyshyn famously declared the abilities of nonhuman species “pretty generally systematic,” but they certainly did not mean that they were systematic in the human sense.7 Human cognition is systematic in various, ever more complex ways, as Robert Hadley has chronicled, and I do not know of any study that claims that animal cognition exhibits any of the skills at the top end of Hadley’s ranking.8

Indeed, what Fodor and Pylyshyn actually claimed is that if you train an animal to pick up a green thing instead of a red thing, you can also train it to pick up a red thing instead of a green thing. All this presumes is an ability to entertain both aRb and bRa, where a and b are variables (representations) and R is a relation (computation) between these variables; this is a sort of systematicity that would rank low in Hadley’s hierarchy.

Moreover, it is not at all obvious how a decomposition of systematicity into basic building blocks would account for the abilities of both human and nonhuman species. It really seems that we must simply postulate different variables and relations (representations and computations) in each case.

I do not mean to suggest that the bottom-up approach of de Waal and Ferrari should be discarded. I am only arguing that this stance can hardly function as an initial assumption in the study of cognition, as apparently intended. It is also only fair to add that there is very little in the literature that even approximates the ideal that such an approach envisions, especially when we distinguish between domain-general and domain-specific properties.

The case of language is pretty decisive in this respect; representations of natural language are completely unattested in other species, and so are most of the formal properties that derive from combining these representations with mechanisms and properties that are perhaps domain-general.9 Some of the domain-general features may well be evolutionarily ancient and present in the cognition of nonhuman species, though this is far from an established fact, but the assumption that all features of language might be so is not supported at all. A language-ready brain would presumably be a species-specific brain; would this not follow from the observation that different brains result in different types of cognition? Darwin may have just been wrong in degree.

This is not to deny the importance of animal cognition to the study of language evolution, but Boeckx’s insistence that we make cognitive “properties as basic as possible” in order to find their ancient origins is unnecessarily dogmatic, and hardly justifiable. Should we not aim to discover what the basic building blocks are, rather than starting with the assumption that we should make cognitive properties as basic as possible? Surely this is all a matter of discovery, not stipulation.

What Boeckx calls the Cartesian program—roughly, the study of cognition through the study of language—would appear to be just what we need.10 This sort of approach has yielded lots of information on the representations and computations human cognition makes use of, and it still does. The same cannot be said of animal cognition.

A rather relevant piece of evidence is the analysis of what sort of lexical items are allowed in natural language, as there seem to be some constraints on which concepts can be lexicalized. Languages certainly afford a great deal of flexibility when it comes to inventing new words, but some verb forms appear not to be possible.

The linguist Guglielmo Cinque offers the following sample of non-existent verbs, with the intended meaning in parentheses (readers can come up with their own examples):11

  • He has climbend the tree (“he has worryingly (for the speaker) climbed the tree”).
  • He fightaf/runaf (“he is afraid of fighting/running”).
  • He didish it (“he did it shamelessly”).
  • I sayam you are wrong (“I am sympathetic in saying you are wrong”).

There has been some discussion in the literature as to why this is the case. The proposed reasons range from the metaphysical to constraints internal to linguistic structure, but the consensus is that such verb forms are nonetheless impossible.12

The meanings of these verbs are not hard to grasp; the corresponding concepts are perfectly entertainable. It is just that these concepts cannot be lexicalized as single verbs in any language. These data thus yield important information on the primitives that cognition allows for, including the sort of things we can entertain and think about, without the need to employ any linguistic vehicle.

In the words of Cinque, what seems to be happening here is that “of all the concepts and distinctions that populate our system of thought only a fragment receives a grammatical encoding in the languages of the world.”13

This issue has also been investigated experimentally. Tim Hunter and colleagues report a study on the learnability of determiners that do not exist but can be entertained.14 Their study focused on an unattested determiner the authors call fost, the opposite of most and thus meaning “less than half.” Interestingly, a determiner with such a meaning could in fact exist: it exhibits the same semantic properties as most, and there are no internal or metaphysical reasons barring its existence.15 Fost should therefore be learnable, and in a carefully designed experiment Hunter et al. show that both young children and adults can learn and appropriately use it. Fost may not be a primitive of language, but it is a possible human concept.
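To make the relevant truth conditions explicit, here is a minimal sketch of the standard generalized-quantifier rendering of most and of a hypothetical fost. It is an illustration of the meanings at stake, not Hunter et al.’s own formulation.

    def most(A, B):
        """'Most As are Bs': more than half of the As are also Bs."""
        A, B = set(A), set(B)
        return len(A & B) > len(A - B)

    def fost(A, B):
        """Hypothetical 'fost': fewer than half of the As are Bs, the mirror image of most."""
        A, B = set(A), set(B)
        return len(A & B) < len(A - B)

    # Of five circles, two are blue: "most circles are blue" is false, "fost circles are blue" is true.
    circles, blue = {1, 2, 3, 4, 5}, {1, 2}
    assert not most(circles, blue) and fost(circles, blue)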

This is evidence pertaining to some of the primitive representations of human cognition, but the same type of analysis can be applied to the study of the combinatory rules, or computations, underlying linguistic and non-linguistic capacities. Consider the following sentence, a well-known example from the semantics literature:16

  • Coffee grows in Africa.

What thought does this sentence express? Working this out is a three-stage process. We start with the syntactic analysis of the sentence; this is common practice, as semanticists typically run the composition of meaning off the syntactic structure of sentences. Once the meaning has been worked out (the second stage), we can analyze the structure of the thought conveyed by the sentence (the third stage), something that can be carried out in terms of the Aristotelian distinction between subjects (or arguments) and predicates.17

Interestingly, or so the story goes, the thought underlying “coffee grows in Africa” would be composed of the predicate, the property of being in Africa, and the argument, the growing of coffee. This is because what the sentence asserts is that the growing of coffee takes place in Africa, not the mundane fact that coffee grows. This is in clear contrast to the syntactic structure of the sentence, where coffee would be the subject and grows in Africa the predicate. Thus, predicates and arguments (or subjects), the representations, manifest themselves rather differently in linguistic and non-linguistic domains, and this would be mirrored in the computations of each domain.
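A rough schematic of the contrast, simplifying Seuren’s treatment, may help:

  • Syntactic structure: [subject coffee] [predicate grows in Africa].
  • Thought structure: [predicate BEING IN AFRICA] applied to [argument THE GROWING OF COFFEE].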

There is neurolinguistic evidence of the relevant computations, in fact. In a number of magnetoencephalography (MEG) studies, Liina Pylkkänen and colleagues have analyzed the areas of brain activation associated with two types of combinatory rules (computations): linguistic (either syntactic or semantic) and conceptual (or non-linguistic).18 They did so by employing simple noun-noun combinations that varied along a specificity metric, as in tomato soup vs. vegetable soup, where tomato is more specific than vegetable.

With this distinction in mind, and considering that the left anterior temporal lobe (LATL) is sensitive to combinatory effects that are conceptual rather than linguistic, Pylkkänen et al. were able to track the times at which different combinatory effects arise in the MEG record. The results show that form-based (syntactic) effects arise at 100–200 milliseconds (ms), LATL effects within a 200–300 ms window, and lexical-semantic effects at around 300–400 ms. LATL effects, in particular, are largest when the first item in noun-noun combinations is specific rather than general, and this is a conceptual phenomenon, as syntactic and semantic combinations would apply regardless of the specificity of the nouns (and, moreover, these rules do not activate the LATL).

Put together, the evidence points to primitives and computations that are sui generis to human cognition. What is more, there appears to be no way to decompose these properties into more basic building blocks, let alone properties that are present, in any shape or form, in other nonhuman species. Human concepts, after all, are for the most part atomic and thus bottom-level properties.19

I certainly do not deny that there are interesting analogies between human and nonhuman cognition—how could there not be? The apparent fact that the underlying computational architecture may be rather similar across many species is clearly significant (at least in some respects and for some problems).20 But this should not be taken as vindication of the deductive outlook on the evolution of language, and we certainly should not conclude from this that the strategy applies to every aspect of cognition.

In particular, we should not try to deduce the ontogeny or phylogeny or etiology of a cognitive phenomenon from any number of assumptions on what evolution must be like. On the contrary, no evolutionary account of cognition can be anything other than an inference to the best explanation. The opposite view is often an overarching (and overreaching) approach that is not a little dogmatic. Monkey may sometimes see, but monkey quite often not do.21

David Lobina

Cedric Boeckx replies:

David Lobina takes issue with a central claim presented in my review of Why Only Us. Namely, that in order to gain insight into the evolutionary trajectory of complex human traits like language we ought to decompose “what prima facie look like complex human abilities into ‘basic building blocks’,” so as to be able to apply the comparative method that is inherent to the Darwinian perspective. I will limit myself to a few comments.

Let me begin with the essential one: if Lobina takes issue with the divide-and-conquer (DC) approach to the evolution of language/cognition, it behooves him to at the very least hint at an alternative that could be as productive as the DC approach has, I think, proven and promises to be. I couldn’t tell from his comments whether Lobina thinks the evolutionary issue is worth bothering with. In fact, Lobina seems to have a very different view of the state of the art from what readers of my essay, and the references it contained, could gather. He begins his commentary by saying that the contemporary literature on the evolution of language (and cognition) is characterized by a “clear paucity of available evidence.” Since he provides so few references, and in this case he cites none, it was hard for me to tell, but the claim that “we know so little” is an old saw. I refer interested readers to the collection of articles edited by W. Tecumseh Fitch cited in my essay for references.22

Elsewhere in his commentary, Lobina writes:

It is also only fair to add that there is very little in the literature that even approximates the ideal that such an approach envisions, especially when we distinguish between domain-general and domain-specific properties.

Again, I am not sure which literature he has in mind, but my own appraisal of the current literature on the evolution of language is quite different. Consider the material discussed by Helen Shen on vocal learning—a “domain-specific”/specialized trait where the DC approach has made substantial progress.23 Lobina also doesn’t seem to appreciate the progress made on animal cognition: “This sort of approach has yielded lots of information on the representations and computations human cognition makes use of, and it still does. The same cannot be said of animal cognition.” There are numerous examples—Susan Carey’s The Origin of Concepts, James Hurford’s The Origins of Meaning, Marc Hauser’s Wild Minds, and Charles Gallistel’s The Organization of Learning are on the shelf right above my computer as I write this—that show it is high time we stop saying that animal cognition is something we know little about.24 Lobina writes that “there are interesting analogies between human and nonhuman cognition,” but we have more than this now: we have (deep) homologies.

More evidence that Lobina’s characterization of the literature is not always accurate, or up-to-date, concerns his use of the old gradualist/saltationist distinction. He should know that in the domain of cognition it is often hard to use these labels to characterize specific proposals. This point has been made very clearly in the domain of language by Brady Clark, and I have stressed it again in my own work.25 Lobina also seems not to realize that gradualist scenarios don’t necessarily endorse a scala naturae view of cognitive phylogenies of the sort he sketches:

many of the proposals on offer are presented as well-ordered stage-by-stage affairs, where a given stage in evolution gives way to a subsequent stage in which linguistic (and often cognitive) abilities have undergone a clear and significant qualitative jump. That is, the abilities of the new stage are typically more sophisticated than those of the previous stage.

I have addressed the inadequacy of this view in the context of the evolution of cognition as part of a paper with Constantina Theofanopoulou, and I won’t belabor this point here.26 Suffice it to say that complex traits, viewed as mosaics, rarely have linear phylogenetic trajectories.

In general, Lobina likes to remain at the highest and loftiest level of cognitive description, what David Marr dubbed the computational level.27 Lobina likes to insist that “there is a large discontinuity between human and nonhuman minds”—note the characteristic absence of the notion brain here and elsewhere in his commentary. “Cognitive abilities,” he writes, “can be decomposed into two broad elements to begin with: the representations of a given domain and the architecture that uses such representations (the operations and capacities).” Lobina, if I understand correctly, stresses the discontinuity on the representational front. Here I think linguists have a lesson to teach other fields. Linguists are very familiar with representational issues, but they are rarely satisfied with taking them as primitives, and indeed try to reduce their primitive character as much as possible. These representations have to be assembled, and we ought to ask what the mechanisms for that assembling are—we need to decompose these representations too. As Samuel Epstein and T. Daniel Seely wrote: “if you have not derived it (the representation), you have not understood it.”28

The clearest example of progress on the issues of representations and computations comes from work by Andrea Martin and Leonidas Doumas, who showed in exquisite mechanistic detail how the brain could have repurposed an available neurobiological mechanism when hierarchical linguistic representations became an efficient solution to a computational problem posed to the organism.29 I do not see how a stance like Lobina’s could lead to equivalent progress. Even atomic representations have to be scrutinized. As Richard Feynman remarked, there is a lot of room at the bottom. Again, the absence of any hint at an articulated view on Lobina’s part is glaring.

He is right about one thing:

This is not to deny the importance of animal cognition to the study of language evolution, but Boeckx’s insistence that we make cognitive ‘properties as basic as possible’ in order to find their ancient origins is unnecessarily dogmatic.

This is correct. One can’t be dogmatic. In particular, one can’t be dogmatic about how much one would like to understand, and how many questions one is willing to ask. One can’t force anyone to be interested in evolutionary questions and in finding ways to test evolutionary scenarios. One can’t force anyone to turn the mystery of language evolution into a problem. But then it is important to be explicit about whether or not one is, paraphrasing Hume, attempting to restore the origins of language “to that obscurity, in which they ever did and ever will remain.”


  1. For instance, James Hurford, The Origins of Grammar (Oxford, England: Oxford University Press, 2012); Shigeru Miyagawa, Robert Berwick, and Kazuo Okanoya, “The Emergence of Hierarchical Structure in Human Language,” Frontiers in Psychology 4 (2013): 1–6; Robert Berwick and Noam Chomsky, Why Only Us: Language and Evolution (Cambridge, MA: MIT Press, 2016). 
  2. James Hurford, The Origins of Grammar (Oxford, England: Oxford University Press, 2012). The phrase “ontogeny recapitulates phylogeny” is due to Ernst Haeckel. 
  3. Unfortunately, Tomasello has no explanation for how children do this, and this doesn’t bode well for Hurford’s own account. See, for some details, David Lobina, “Review of The Origins of Grammar, by James R. Hurford,” Disputatio 5 (2013): 375–81. The underlying problem here—viz., how can a less sophisticated stage give rise to a more sophisticated one—is rather underappreciated in the cognitive sciences, but it is applicable to many topics: language acquisition, language evolution, concept acquisition, learning in general, etc. It is most clearly articulated in Jerry Fodor, The Language of Thought (Cambridge, MA: Harvard University Press, 1975), where the point is made that you cannot acquire a more powerful representational system than the one you already have. As far as I can see, the logic of Fodor’s argument still applies and hasn’t been superseded by any theory or model since he first brought attention to it; it is typically simply circumvented. For what it’s worth, I suspect that the allure of providing organised, stage-by-stage evolutionary stories for such complex phenomena as language evolution may well be a bias of cognitive scientists similar in kind to the heuristics and biases described in Amos Tversky and Daniel Kahneman, “Judgment under Uncertainty: Heuristics and Biases,” Science 185 (1974): 1,124–31. Naturally, such neatly ordered explanations may well make sense to a thinker (and to book reviewers), but it doesn’t follow from this that things panned out thus.  
  4. Frans de Waal and Pier Francesco Ferrari, “Towards a Bottom-Up Perspective on Animal and Human Cognition,” Trends in Cognitive Sciences 14, no. 5 (2010): 201, doi:10.1016/j.tics.2010.03.003. 
  5. Derek Penn et al., “Darwin’s Mistake: Explaining the Discontinuity Between Human and Nonhuman Minds,” Behavioral and Brain Sciences 31 (2008): 121.  
  6. Randy Gallistel, “The Nature of Learning and the Functional Architecture of the Brain,” in Psychological Science Around the World, ed. Q. Jing (Sussex, UK: Psychology Press, 2006): 63–71. Gallistel focuses on learning phenomena for the most part, but his models generalise to cognitive processes tout court.  
  7. Jerry Fodor and Zenon Pylyshyn, “Connectionism and Cognitive Architecture: A Critical Analysis,” Cognition 28 (1988): 3–71. 
  8. Robert Hadley, “On the Proper Treatment of Semantic Systematicity,” Minds and Machines 14 (2004): 145–72. 
  9. Boeckx is a bit coy about the nature of language’s primitives, the so-called lexical items (bundles of features, in fact), especially considering that they might be a major stumbling block for his approach; see footnote 21 in his review of Why Only Us. He has actually argued that these primitives may be skeletal and general at their core and thus not specific to humans, but his argument is not very persuasive; see Cedric Boeckx, Elementary Syntactic Structures (Cambridge: Cambridge University Press, 2015), for details.  
  10. See footnote 28 in his review of Why Only Us:
    I take this to be the limit of a certain “Cartesian” program, one that has sought to extract properties of the mind from the detailed study of linguistic expressions. This has worked well with the only species that produces such expressions, but as an empirical basis it is too narrow.
     
  11. Guglielmo Cinque, “Cognition, Universal Grammar, and Typological Generalizations,” Lingua 130 (2013): 50–65. 
  12. Jerry Fodor and Ernest Lepore, The Compositionality Papers (Oxford: Oxford University Press, 2002); John Collins, “Impossible Words Again: Or Why Beds Break but Not Make,” Mind and Language 26 (2011): 234–60. 
  13. Guglielmo Cinque, “Cognition, Universal Grammar, and Typological Generalizations,” Lingua 130 (2013): 50. 
  14. Tim Hunter et al., “Restrictions on the Meaning of Determiners: Typological Generalisations and Learnability,” Proceedings of Semantics and Linguistic Theory (SALT) 19 (2011): 223–38.  
  15. It seems that fost doesn’t exist for pragmatic reasons, but we need not be detained by this. 
  16. Pieter Seuren, Language in Cognition: Language from Within, vol. 1 (Oxford: Oxford University Press, 2009). I base what I say in the next few paragraphs on Seuren’s account, with one or two changes.  
  17. This is a more or less standard way to analyse the structure of a sentence’s proposition, which some scholars would in addition call a conceptual representation. I do: David Lobina and José García-Albea, “On Language and Thought: A Question of Format,” in On Concepts, Modules, and Language: Cognitive Science at its Core, eds. Roberto De Almeida and Lila Gleitman (Oxford: Oxford University Press, 2017), 249–74.  
  18. Linmin Zhang and Liina Pylkkänen, “The Interplay of Composition and Concept Specificity in the Left Anterior Temporal Lobe: An MEG Study,” NeuroImage 111 (2015): 228–40. 
  19. The locus classicus here is Jerry Fodor, Concepts: Where Cognitive Science Went Wrong (Oxford: Oxford University Press, 1998), but others have made much the same argument. See the Introduction to Eric Margolis and Stephen Laurence, eds., Concepts (Cambridge, MA: MIT Press, 1999), for some discussion. 
  20. Though here too there are pronounced differences, especially in terms of memory and/or attention capacity. In fact, I haven’t described the functional architecture of human cognition in any detail, preferring to focus on Gallistel’s description of the learning abilities of animal species for the sake of the argument. A more comprehensive account would have to include a lot more stuff; Zenon Pylyshyn, Computation and Cognition (Cambridge, MA: MIT Press, 1984) covers some of the relevant ground. 
  21. I am confident we need not worry about the domestication of wolves. Thanks to Mark Brenchley for some inspiration. 
  22. W. Tecumseh Fitch, ed., “Special Issue: Empirical Approaches to the Study of Language Evolution,” Psychonomic Bulletin & Review (2017). 
  23. Helen Shen, “News Feature: Singing in the brain,” Proceedings of the National Academy of Sciences of the United States of America 114, no. 36 (2017): 9,490–93. 
  24. See Susan Carey, The Origin of Concepts (Oxford: Oxford University Press, 2009); James Hurford, The Origins of Meaning (Oxford: Oxford University Press, 2007); Marc Hauser, Wild Minds: What Animals Really Think (New York: Henry Holt & Company, 2000); Charles Gallistel, The Organization of Learning (Cambridge, MA: MIT Press, 1990). 
  25. See Brady Clark, “Syntactic Theory and the Evolution of Syntax,” Biolinguistics 7 (2013): 169–97; Cedric Boeckx, “Biolinguistics: Facts, Fiction, and Forecast,” Biolinguistics 7 (2013): 316–28. 
  26. Constantina Theofanopoulou and Cedric Boeckx, “Cognitive Phylogenies, the Darwinian Logic of Descent, and the Inadequacy of Cladistic Thinking,” Frontiers in Cell and Developmental Biology 3 (2015). 
  27. David Marr, Vision: A Computational Investigation into the Human Representation and Processing of Visual Information (New York: W. H. Freeman and Company, 1982). 
  28. Samuel Epstein and T. Daniel Seely, Derivations in Minimalism (Cambridge: Cambridge University Press, 2006). 
  29. Andrea Martin and Leonidas Doumas, “A Mechanism for the Cortical Computation of Hierarchical Linguistic Structure,” PLOS Biology 15, no. 3 (2017): e2000663. See also the more recent Leonidas Doumas, Guillermo Puebla, and Andrea Martin, “How We Learn Things We Don’t Know Already: A Theory of Learning Structured Representations From Experience” (2017), bioRxiv, doi:10.1101/198804. 

David Lobina is a philosopher at the University of Barcelona.

Cedric Boeckx is a biolinguist at the Catalan Institute for Advanced Studies.

