Review of Twenty-First Century Psycholinguistics

Date: Mon, 20 Feb 2006 14:26:51 +0100
From: Oren Sadeh-Leicht
Subject: 21st Century Psycholinguistics: Four Cornerstones

EDITOR: Cutler, Anne
TITLE: Twenty-First Century Psycholinguistics
SUBTITLE: Four Cornerstones
PUBLISHER: Lawrence Erlbaum Associates
YEAR: 2005
Oren Sadeh-Leicht, Utrecht Institute of Linguistics, OTS
SYNOPSIS
The book is a collection of articles on four major problem areas in psycholinguistics: Psychology and Linguistics, Biology and Behavior, Production and Comprehension, and Model and Experiment.
The book begins with a very brief review (by Cutler, Klein and Levinson) of what psycholinguistics is, and discusses in very general terms how these four problems have developed over the past five decades of psycholinguistic research.
The following review is organized in the same manner as the book. A short summary of what is claimed in each article is given, followed by my assessment, where appropriate.
Psychology and Linguistics
The purpose of ''Cognitive mechanisms and syntactic theory'' by Boland is to show that there exist psycholinguistic data that formal linguists may consider to be in support of their theories, namely data bearing on the fundamental distinction between arguments and adjuncts. A short theoretical introduction about the relation between formal linguistics and the parser is provided, in which the position of weak transparency between parsing mechanisms and formal linguistics is advocated. The author proceeds to assume that adjuncts are attached during parsing by a different attachment rule: there is no lexical head that specifies the attachment of adjuncts. She thus suggests the Argument Structure Hypothesis: lexical frequency effects are predicted for arguments, but not for adjuncts. This is relevant to formal linguistics: arguments will be processed differently from adjuncts.
Boland continues by discussing the ongoing debate on this issue: whether lexical frequency or rather plausibility can explain argument/adjunct effects. She presents three experiments using the passive listening paradigm (not explained, but I gather that people listen to sentences while manipulated pictures are presented, and their eye movements are monitored). She found that subjects' visual attention was drawn to likely referents of the verb's arguments: when a verb is introduced into discourse, its arguments, but not its adjuncts, are implied before they are explicitly mentioned. This suggests that arguments were syntactically analyzed via a lexicalized mechanism. She concludes that only arguments are represented in the lexical entries of their heads. Although potentially useful to formal syntactic theory, the paper ends on a somewhat pessimistic note: psycholinguistic data will always play a secondary role in formal linguistic theory: 'This is as it should be, under the assumptions of weak transparency.' (p. 40). Why should this be so? It is unclear in which cases weak transparency does or does not bear on formal theory, which works against the goal of the paper. It would be far more revealing to assume that the link is strong, as this allows for clearer predictions.
In ''Getting sound structures in mind'', Fikkert focuses on showing that children's production forms support the claims that children (1) build up abstract phonological representations of words, and (2) make generalizations over their own productive lexicon, resulting in phonological constraints that are part of children's developing phonological system. After a short introduction of basic concepts, the paper reports a study of place of articulation patterns in early production. By and large, children start from a harmonic stage, in which all sounds of the word share the same place of articulation features, determined by the stressed vowel of the word. Subsequent stages lead to better segmentation, until the representation reaches the adult level.
It was also found that if a segment has no place of articulation specification in the lexicon, it will be realized as coronal, and that children only attempt target words whose place of articulation features they can produce, meaning that only words that can be produced correctly are attempted.
It is not explained what ''target words'' are, what a ''fully segmentized'' adult representation is, or what exactly was coded. It was therefore difficult to understand whether the conclusions Fikkert draws follow from the database coding.
Further, Fikkert claims that a child's production lexicon (does this distinction exist?) may contain, for example, a certain number of occurrences of the sequence labial-vowel-coronal, and that the child ''will generalize over this productive lexicon and derive a rule or constraint stating that labial consonants are at the left edge of the word'' (p. 49). It is not clear, and in fact highly controversial, how overgeneralization, a matter of frequency, may or may not directly lead to the formation of a rule or constraint. Fikkert shows that once this rule is in place, words like kip (chicken, in Dutch) are produced unfaithfully, as pip, by consonant harmony, as the data indeed show. But then it is not clear why a rule of consonant harmony was not acquired instead of the suggested rule, if any rule was acquired at all. Nor is it clear whether the unfaithful productions are not just a matter of pure coincidence in children's production.
Fikkert does discuss alternative explanations that may account for the data. She writes that frequency accounts, for example, do not fare well, since harmonic words are of very low frequency and yet are produced early. But it still does not follow that children learned the rule she suggests, if they learned any rule at all. She adds that optimality theory also fails to explain the patterns found, and argues that only an account that assumes both initially underspecified, developing representations and a developing grammar (consisting of emerging constraints) can capture the data. Because of these conceptual issues, however, it is hard for me to see how the claim is supported, and it is not clear to me why a child would overgeneralize, or on what basis this overgeneralization would proceed; this is a matter for observation.
A short discussion of infant perception and early word recognition shows that vowel contrasts become language specific at about 6 months of age, and that, in both perception and production, the acquisition of vowels precedes the acquisition of consonants. At 9 months, children are sensitive to phonotactically possible and impossible strings of segments in their native language. By assuming underspecified lexical representations, Fikkert can account for the gradual and systematic changes encountered in child production studies, and for differences in the discrimination of sounds. On this basis she argues that acquisition is important for linguists and psycholinguists, but in the end states that ''it is still a long but interesting way to test the psychological reality of linguistic theories'' (p. 54).
The paper by Haverkort, ''Linguistic representation and language use in aphasia'', advances the argument that aphasics have the knowledge of their native language available, but cannot make quick use of it, in both production and perception. First, it is shown that aphasics demonstrate the same priming effects as normals on a cross-modal lexical decision task in which a certain word does not conform to the preceding syntactic context. This effect is obtained only when the time between hearing the last word of the syntactic context and being presented with the word for which a lexical decision is to be made is stretched to 1100 ms. Second, it is shown that simplifications of syntactic structure, i.e. the omission of functional categories, are directed by the grammatical representation of the language. Patients use simpler representations because these impose less burden on working memory. Thus representations are either ''pruned'', as in the ''Tree Pruning Hypothesis'' (Friedmann, 2002), to meet processing limitations, or a verb cannot move up too far. For instance, the distribution of patients' errors suggests that tense dominates agreement in the representational tree. Therefore no pure errors of agreement are found: since information always becomes available from the top down, agreement and tense errors will always occur together, as the data show.
Based on this, the author claims that ''there is thus interdependence between linguistic representations and psycholinguistic processes'' (p. 67). But that makes it unclear why the author claimed at the beginning that ''This paper argues that a clear distinction should be made between the representation of linguistic knowledge and the use that is made of such a knowledge representation in processes of language comprehension and production'' (p. 57). If linguistic representations depend on psycholinguistic processes and vice versa, then there cannot be a ''clear distinction'' between representation and its use. The author seems to have conflated difficulty of mental processes with complexity of representation, reminding me of the Derivational Theory of Complexity, an assumption that turned out to be incorrect (although see Phillips, 1996). If the author does support such an equivalence, then it goes against what is stated at the beginning. Simply put, the evidence the author supplies goes against his initial declaration, but seems to fit his final conclusion: that linguistic knowledge describes mental processes, as Chomsky pointed out long ago:
''...that the speaker-hearer has internalized a rule system involving the principles of locality and opacity and that judgment and performance are guided by mental computation involving these internally-represented rules and principles'' (Chomsky, 1980, p. 130).
But that is hardly new. For the ''clear distinction'' to gain more credibility, it should be shown that normals with low working memory, comparable to that of aphasics, produce the same patterns of errors as aphasics do, but such evidence is not provided.
In ''Data mining at the intersection of psychology and linguistics'' by Baayen, it is shown how the combination of linguistic and psychological resources can be a rich source of data for studying the lexicon and lexical processing. New methodological possibilities for data mining are presented through an examination of databases compiled by Balota et al., CELEX, the BNC and WordNet. The predictive potential of certain variables is studied for three behavioural measures: visual lexical decision latencies and word naming latencies in ms, and subjective familiarity ratings on a seven-point scale. From a statistical analysis of the various databases, it appears that, for visual lexical decision latencies and word naming latencies, word frequency is a semantic measure rather than a measure of form-related lexical properties. That is, word frequency correlates more tightly with measures of word meaning than with measures of word form: word frequency is primarily a measure of conceptual familiarity. This distributional observation supports the hypothesis that the word frequency effect has a strong post-access component, and argues against the idea that frequency effects arise primarily or exclusively at the access level.
Subjective familiarity ratings are found to be a dependent variable in their own right, much like eye fixation durations. This leads to the conclusion that ratings should not be used as a substitute for corpus-based frequency counts.
In his final conclusion, the author suggests reintroducing word frequency into psychology as a reliable predictor of behavioral measures. The author further argues that factorial designs should be used only as a last resort, since they require (among other problems) matching on all other potentially relevant variables: they should be used only when no fine-grained numerical information is available.
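To give a concrete sense of the kind of regression-based analysis Baayen advocates, here is a minimal sketch. The data and variable names (log_frequency, length, n_synsets) are invented for illustration; the sketch only shows how one might ask whether frequency patterns with a meaning-related measure rather than a form-related one, not how Baayen actually ran his analyses.

    # Illustrative sketch with invented data; requires numpy.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    log_frequency = rng.normal(0.0, 1.0, n)                            # word frequency (log scale)
    length = rng.normal(0.0, 1.0, n)                                   # a form-related predictor
    n_synsets = 0.7 * log_frequency + 0.7 * rng.normal(0.0, 1.0, n)    # a meaning-related predictor
    rt = 600 - 20 * n_synsets - 5 * length + rng.normal(0.0, 15.0, n)  # simulated decision latencies (ms)

    # Ordinary least squares: rt ~ intercept + log_frequency + length + n_synsets
    X = np.column_stack([np.ones(n), log_frequency, length, n_synsets])
    beta, _, _, _ = np.linalg.lstsq(X, rt, rcond=None)
    print(dict(zip(["intercept", "log_frequency", "length", "n_synsets"], beta.round(2))))

    # If frequency is primarily a conceptual measure, it should correlate more
    # strongly with the meaning-related predictor than with the form-related one.
    print(round(np.corrcoef(log_frequency, n_synsets)[0, 1], 2),
          round(np.corrcoef(log_frequency, length)[0, 1], 2))

The point of such a sketch is only that regression over large lexical databases lets one weigh many numerical predictors at once, which is precisely why Baayen prefers it to factorial designs with matched items.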
The paper ''Establishing and using routines during dialogue: Implications for psychology and linguistics'' by Pickering and Garrod (P&G) discusses the relation between dialogue processing and the nature of the mental lexicon.
They describe an experiment in which two interlocutors have to explain to each other where they are in a maze. One interlocutor makes repeated use of an expression, and this expression is ultimately adopted by the other interlocutor. P&G give an example in which one interlocutor invented the term ''right-indicator'' (a right-hand protrusion on a maze) to be used as a reference point. The term was ''routinized'' by being stored in the mental lexicon for that conversation alone. Therefore, they claim, the lexicon must be constantly and dynamically updated. Logically, then, there is no strict division between acquisition and adult usage, which provides support for Jackendoff's (2002) conception of the mental lexicon.
This approach to dialogue is called the interactive-alignment account. It argues that a conversation is successful to the extent that interlocutors end up with aligned situation models: ''They come to understand the relevant aspect of the world in the same way'' (p. 87). Interactive alignment involves the priming of particular levels of representation and the links between those levels. Producing or comprehending any utterance leads to activation of those representations, but their activation gradually decays. However, when interactive alignment leads to sufficiently strong activation of the links between the levels, routinization occurs. Routinization is the laying down of new memory traces associated with a particular expression. For example, ''right indicator'' is routinized by the activation of right and indicator, plus the specific meaning that right indicator has in this particular context. This leads to the activation of the phonological representation and the syntactic representation (as in Jackendoff, 2002), together with the activation of the specific meaning (right-hand protrusion on a maze). The links among phonology, syntax and semantics are activated, and this increases the likelihood that the interlocutors are going to use right indicator with that specific meaning.
Several caveats should have been explained. For instance, how does interlocutor B know to interpret interlocutor A's utterance right indicator as referring to a right-hand protrusion on a maze? That is hardly evident. P&G suggest that it is done by ''normal processes of meaning decomposition, corresponding to the compositional processes that A has used in production'' (p. 94). But it is not known whether two interlocutors use the same compositional or decompositional processes, and what these normal processes of meaning decomposition are should have been clarified. P&G suggest the following account. When activation is strong enough, a new lexical entry is constructed by indexing the phonological/syntactic representation of right indicator not to the meaning representation ''pointer to the right'', but to the intended meaning ''right-hand protrusion on the maze''. They note that ''clearly, we cannot specify exactly what makes activation strong enough for routinization to occur, but assume that it depends on at least the frequency of use of the expression with that meaning by both speakers'' (p. 95).
By parity of argument, one could claim that the expression ''couch potato'' may refer to ''right protrusion on a maze'' if its phonological/syntactic representation is sufficiently activated (but what exactly drives this routinization is unknown) and indexed with the meaning ''right protrusion on a maze''. In fact, an infinite number of expressions may refer to ''right protrusion on a maze'' under this account, which thereby reduces to an uninteresting one.
It seems to me that the account P&G offer of routinization is a version of behaviorism in which only the terms have been recast: simply replace activation by positive reinforcement, meaning representation by stimulus, and routinization by response. The explanatory power of the model is inadequate, as it suffers from the same well-known set of problems as behaviorist models (see Chomsky, 1959).
Poeppel and Embick's paper ''Defining the relation between linguistics and neuroscience'' is an interesting reflection on the link between the rapidly developing field of neuroscience and linguistics. They focus on the question of whether current brain/language research provides an example of interdisciplinary cross-fertilization or cross-sterilization, i.e. whether either discipline can learn something in an explanatorily significant way from the other. There are two problems: (1) the Granularity Mismatch Problem (GMP): linguistic and neuroscientific studies of language operate with objects of different granularity; in particular, linguistic computations involve fine-grained distinctions and explicit computational operations, whereas neuroscientific approaches to language operate in terms of broader conceptual distinctions; (2) the Ontological Incommensurability Problem (OIP): the units of linguistic computation and the units of neurological computation are incommensurable (there is no solid connection linking these computations). The authors suggest a solution for both problems: ''spelling out the ontologies and processes in computational terms that are at the appropriate level of abstraction (i.e. can be performed by specific neuronal populations) such that explicit interdisciplinary linking hypotheses can be formulated'' (p. 106). The authors then briefly describe the standard research program in cognitive neuroscience, and ask whether progress in this program can be made, specifically by imaging Broca's area. They point out that neurocognitive research on Broca's area is not done at the proper level of granularity, and that what appear to be results are actually problems, in the sense that the expectation that syntax should correspond to a single cortical region is unrealistic (but that is the way research is being done). Syntax consists of various computations, and cortical areas involve many (presumably) differentiated processes; it is thus difficult to see a one-to-one correspondence. ERP studies are discussed as a possible solution to this problem, but are then discarded for the same reason of granularity mismatch.
The next point the authors put forward is the idea that some notion of grammar must be computed in the brain in real time, in contrast to current generative syntax, which holds that the computations proposed in syntactic analyses need not be performed in real time. The authors insist that in order for progress to be made on problems (1) and (2), we must restrict our attention to (abstract) computations that are actually performed by the human brain. Several ways to proceed are sketched in general terms. In syntax, for instance, the authors suggest studying how linearization occurs, arguing that ''the hierarchical representations motivated by syntactic theory must have a linear order imposed on them'' (p. 116). This is controversial: syntactic representations may be hierarchical, but linearization is perhaps a constraint of the articulatory systems; linearization would thus be outside of syntax.
The discussion of the problems is illuminating and interesting, but I found the solutions sketchy. The authors point out that the link between linguistics and neuroscience is computations, which should be executed by assemblies of neurons. But it is still unclear to me what types of computations can or cannot be executed by neurons. It is a plain truism that language is computed by neurons. And although theories of grammar suggest computations that need not be executed by the brain as formulated, it is unclear how the study of neuronal activity could lead to their abandonment. There is nothing in neuronal activity that labels it as a computation a neuron can or cannot perform; indeed, there is nothing yet that labels it as this or that cognitive computation.
Biology and Behavior
In ''Genetic specificity of linguistic heritability'', Stromswold reports linguistic studies of twins, in an effort to better understand heritability: the proportion of phenotypic variance that is due to genetic variance. She compared concordance rates for developmental language disorders (SLI, dyslexia) in twins. The logic is that if the concordance rate for a language disorder is significantly higher for monozygotic twins than for dizygotic twins, this suggests that genes play a role in language disorders. She also found that normal monozygotic twins perform more similarly to one another than dizygotic twins do, which suggests that heritable factors play a substantial role in the linguistic abilities of normal people. She notes that the analyses reported were univariate, not multivariate, which imposes certain limitations: they do not allow one to know whether the heritable factors that affect language are specific to language.
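As a rough illustration of the arithmetic behind such twin comparisons (my own numbers, not Stromswold's): in the classical twin design, heritability can be approximated by Falconer's formula, h^2 = 2(r_MZ - r_DZ), where r_MZ and r_DZ are the within-pair correlations for monozygotic and dizygotic twins. If, say, r_MZ = 0.80 and r_DZ = 0.50 on some language measure, then h^2 = 2(0.80 - 0.50) = 0.60, i.e. about 60% of the phenotypic variance is attributed to genetic variance, with a shared-environment component of c^2 = r_MZ - h^2 = 0.20 and a residual of e^2 = 1 - r_MZ = 0.20.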
The paper mostly points toward future research. The impression is of a field in its infancy; speculative or pessimistic notes recur throughout the paper: ''unfortunately...one must have data from a very large number of twins'' (p. 125), ''can be estimated'' (p. 129), ''one could investigate'' (p. 129), ''we can perform'' (p. 135), ''the ... hypothesis could explain'' (p. 136).
The paper goes on to briefly discuss the role of the environment, including the prenatal environment, and how these can be teased apart. It then turns to molecular studies of genes for language (FOXP2) and the DNA loci of written and spoken language impairments. This is essentially a list of correlations between language impairments and the gene loci that correspond to them. It is pointed out that there are at least 9 distinct loci linked to dyslexia, and a dozen loci linked to spoken language impairments, which indicates that different genotypes can cause broadly defined phenotypes such as written and spoken language impairments.
Scott aims to delineate the neural systems underlying speech perception in ''The neurology of speech perception''. She compares the organization of the primate auditory cortex with that of the human auditory cortex. It is argued that, as in the primate visual system, there is hierarchically organized processing of auditory information, and that there are two distinct pathways: one responsible for what is being heard, and one for the location of sound in space. In humans there are also two pathways: one from sound to meaning, and another from sound to articulation. These pathways appear to share anatomical and functional similarities.
Hagoort's paper ''Broca's complex as the unification space for language'' outlines how a neurobiological account of language contributes to understanding the workings of Broca's area. It is argued that Broca's area is not limited to BA 44 and 45 but extends to adjacent areas as well, such as BA 46 and BA 47; Broca's area is therefore a complex in the prefrontal cortex. Several other points are made about Broca's area: that it is not language specific, that it subserves only a very specific function in language, and that within it there are functionally defined sub-regions. Hagoort further pursues an account of how Broca's area unifies lexical information with overall sentential representations, perhaps in an attempt to solve the binding problem in linguistic neurocognition. He relies mostly on Vosse and Kempen's (2000) model.
In ''Dissecting the language organ: A new look at the role of Broca's area in language processing'' by Thompson-Schill, there is a brief review of the hypothesized roles of Broca's area as the area for articulation, syntax, selection or verbal working memory. The discussion focuses on data from brain lesions. The paper shows that the actual function of Broca's area remains undecided, although the author suggests that Broca's area is involved in selecting information among competing sources of information.
Morgan's ''Biology and Behavior: Insights from the acquisition of sign language'' discusses insights from the acquisition of language in a modality other than speech: sign language. After a sketchy introduction to British Sign Language, several interesting questions that have arisen in relation to spoken language acquisition are raised. The first is how children's attempts at producing language are altered when the input is not sound. It is argued that the limitations shown by signing children are the same as those of hearing children: ''There are underlying similarities between what children do with signs and words in the beginning of language acquisition'' (p. 197). This points to a strong biological component active in these processes. The second topic is the development of grammar. Again, developmental parity between deaf and hearing language acquisition is found. The third topic is specific language impairment. The interest is in finding out how specific language impairment is manifested, if at all, in sign language: is it the same as or different from SLI in spoken language? Problems with this type of research are discussed, along with the results of preliminary tests. The claim is that studies of developmental sign language impairment will show that the general role of auditory processing in SLI is overstated.
Production and Comprehension
In ''Maximal input and feedback in production and comprehension'', Vigliocco and Hartsuiker argue for a maximalist view of sentence production. They claim to present evidence favoring maximal input and bidirectional flow of information between the assumed levels of integration, just as in comprehension. The different levels of integration are described, such as the message, functional and positional levels. The directionality of information flow between these levels is then discussed, relying mostly on studies of spontaneous errors. The ensuing topic concerns the benefit of feedback between the different levels.
This paper was extremely unclear to me, perhaps because of my lack of background knowledge. Nonetheless, there should have been a clearer discussion of the debate and its premises at the beginning of the paper. I found it difficult to understand the theory or the motivation behind the various ''illustrative examples'' (p. 211). For instance, I did not understand how ''maximal'' and ''minimal'' input is measured or defined.
In ''Spoken-word recognition and production: regular but not inseparable bedfellows'', McQueen argues that speech production and comprehension should not be studied in isolation. He examines two specific properties of speech decoding and encoding, discussing questions such as whether speech decoding/encoding is serial or cascaded. It is argued, from work on prevoicing in Dutch for instance, that there are limits on the kind of segmental information that is passed to the lexical level in decoding: only information useful for lexical distinctions influences lexical processing. It is argued that phonological encoding is distinct from yet tightly linked to phonological decoding.
Schiller discusses whether there is cross-talk between the production and comprehension systems in ''Verbal self-monitoring''. Internal monitoring is brought up as an example of this cross-talk. Levelt's ''perceptual loop theory of self-monitoring'' is presented, various predictions of the model are tested, and the results of four experiments are reported. It is shown that onset complexity and morphological complexity do not play a role in monitoring, whereas phonological representations (syllable boundaries or metrical stress) show strong effects. These results favour a sequential, multi-tiered phonological assignment process.
In ''The production and comprehension of resumptive pronouns in relative clause 'island' contexts'', Ferreira and Swets argue that people appear to relax the grammatical rules governing long-distance dependencies in production. It appears that sentences violating the Subjacency constraint on wh-movement are produced in English with high frequency. To compensate for the violation, so it appears, a resumptive pronoun is inserted in the gap position of the moved element to ameliorate it. This in turn may answer the question of the incrementality of the human sentence parser, which seems to interest the authors more than the link between comprehension and production. If the parser is incremental, it should ''know'' about the island and the gap/resumptive pronoun close to the point of the gap; if the parser is clause-based, an effect of the gap/resumptive pronoun would be seen later than the gap. Two elicitation experiments for islands plus resumptive pronouns are described, as well as a grammaticality judgment task (in which comprehenders found island violations ungrammatical).
In one of the elicitation experiments, it is shown that the earliest words were longer in the island+resumptive pronoun condition than in a grammatical control (although the control is arguably odd: ''this is a donkey that doesn't know where it lives'' - is it to be expected that donkeys know where they live at all?), suggesting that the production system uses a great deal of look-ahead. On this basis it is argued that ''the production system appears to be unaware of a grammatical constraint to which the comprehension system is quite clearly sensitive, suggesting a production - comprehension asymmetry'' (p. 268). Furthermore, the difficulty of producing island violations shows up early, suggesting incrementality in production.
In ''On the relationship between perception and production in L2 categories'', Sebastián-Gallés and Baus add evidence for the coherence of phonetic representations in native speakers and the lack of robustness of these representations in non-native speakers. A group of L1 Catalan, Catalan-Spanish bilinguals and a group of L1 Spanish, Spanish-Catalan bilinguals were tested in three perceptual tasks (categorical perception, a gating task, and lexical decision) and in a production task (picture naming). The comparison focused on vowels found in Catalan but not in Spanish, which thus differ in phonetic representation across the languages. It was found that ''the relationship between perception and production is a complex one'' (p. 291), and that the percentage of Spanish-dominant participants who scored within the range of Catalan native speakers on all perception tasks was low (except for categorization).
Other than that, it was difficult for me to glean the paper's insight with respect to the purpose set out at the beginning. A clear conclusion about the relationship between production and perception should have been stated.
Emmorey's paper, titled ''Signing for viewing: Some relations between production and comprehension of Sign Language'', is a very insightful and interesting contribution to the emerging research field of sign language. The paper sets out to discuss how visual perception and manual production interact at the level of phonology (expressed by signing contrasts, such as ''whispering'' versus normal signing), elaborating mainly on the monitoring of manual articulation.
An interesting discussion deals with the pairing of visual perception and manual articulation in the brain by mirror neurons - a necessary link for studying the relation between production and perception. Evidence is brought to show that visual feedback from one's own signing occurs in peripheral vision, that signers do not look at their hands while signing, and that they also do not track the hands of their interlocutors. Therefore, signers monitor their own internal representations of signs. It is claimed that speakers perhaps do the same. Further research here is indeed a promising avenue for studying the link between perception and production and how these processes are monitored.
Model and Experiment
In ''From Popper to Lakatos: A case for cumulative computational modelling'', Roelofs discusses the common problem that modelling in psycholinguistics is not cumulative, in the sense that one does not build on earlier modelling results. Two approaches are described: the ''toothbrush approach'', where a model is built for the given data only, and the ''skeet shooting'' approach, where the aim of the experimenter is to collect data that disconfirm models. The article further discusses when it is appropriate to reject a model. It seems that the article was a response to a particular criticism of the author's own program WEAVER++, designed to simulate lemma retrieval in various perceptual tasks. The criticism argued for the rejection of WEAVER++, but the author replies that it can be saved by adding a new assumption. The rest of the paper reads like a complaint that rejection should not be brought up before new assumptions can be added to modify the model. The paper goes on to describe the successful ''academic career'' of the program in modelling lemma retrieval in the human brain, arguing for cumulativeness as its validation.
In the humorously written paper ''How do computational models help us develop better theories'', Norris discusses the benefits of building computational models for developing better theories. The difference between a computational model and a theory is discussed. It is argued that computational models should be constructed even if one is not inclined to do so, because they help in establishing whether something is missing in the theory or whether a certain mechanism in the theory does not work. However, it is not made clear how a failure should be attributed to limitations of the computational model, the programmer, the particular programming language used, or indeed the theory. It is simply taken as self-evident that, in principle, a failure of a computational model equals a failure of the theory. Given that the two are different, how can a failure in the model be so consequential for the theory?
The paper further gives examples of models without theories and theories without models. A case study is provided (Shortlist: a model of word recognition in continuous speech). It is shown that the assumptions behind the original Shortlist model and those behind its computational implementation were different, and that none of those assumptions belong to the underlying theory. Thus it is argued that one should ask how the model relates to the theory. If the model does what it was designed to do, one needs to ask why that is so: ''Simulations from the model will convince you (and maybe even your critics) that the theory makes the right predictions, but it is only by thinking about the model that you will be able to explain why things work the way they do''. Note, though, that ''being able to explain why things work'' is a theory in itself, and its relation to the theory implemented may not be obvious.
In ''Tools for learning about computational models'', Pitt and Navarro discuss problems in comparing models. Two quantitative tools are introduced: (1) minimum description length and (2) landscaping. The former essentially tests how well a model designed for one set of data generalizes to another set of data, rather than relying on goodness of fit (choosing the model that best fits the data). The latter is a method that allows one to assess the sources of complexity in given models, gaining insight into their inner workings - that is, into how to distinguish between models. It is further briefly explained how complex relationships between models and data can be described. The paper is meant for people with high proficiency in statistics, at least in its second part.
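For readers unfamiliar with the idea, a common simplified formulation of the minimum description length criterion (my own gloss, not necessarily the formulation Pitt and Navarro use) is DL(model) = -ln L(data | best-fitting parameters) + complexity(model), where the complexity term grows with the number of free parameters and their functional flexibility (in the crudest approximation, (k/2) ln n for k parameters and n data points). The model with the smallest total description length is preferred: it is rewarded for fitting the data but penalized for the flexibility that would let it fit almost any data, which is why the criterion tracks generalizability rather than raw goodness of fit.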
In ''Rational models of comprehension: Addressing the performance paradox'', Crocker points to the problem that models account only for their own experimental findings, and not for more general performance. Limitations of models are identified (limited scope, model equivalence, measure specificity, weak linking hypothesis). A rational model is suggested, in which the most plausible hypothesis is selected first, and the advantages of such a model are discussed.
The last paper, ''Computation and cognition: Four distinctions and their implications'' by Fitch, discusses the very difficult question of how the brain computes the mind. Key computational distinctions are given, the most important of which seems to be analog vs. digital. This distinction is compared with the electrical/chemical activity of neurons, which receive analog information but emit digital information. How neurons might compute hierarchical structure is also illustrated in general terms. Implications for cognition and language are discussed.
EVALUATION
The book provides reasonable information about the state of the art in psycholinguistics in general terms. Although it tackles the four cornerstones of psycholinguistics, I think a more elaborate review of these four cornerstones was warranted to elucidate the problems being faced. The problem is intensified in the first section, Psychology and Linguistics: most writers advocate a weak link between psychology and linguistics, and no discussion is offered of other approaches, which advocate a strong link. Most papers emphasize the estrangement between psychology and linguistics rather than how the two can be reconciled.
Each paper gives its own interpretation of the relevant cornerstone, resulting in the very problem the book is trying to address: comparison and reconciliation between the various approaches is extremely difficult. A given researcher may take his or her own view as the working hypothesis, creating sub-approaches within sub-approaches, built to solve a certain problem and that problem only.
REFERENCES
Chomsky, N. 1959. Review of B. F. Skinner's Verbal Behavior. Language 35:26-58.
Chomsky, N. 1980. Rules and Representations. New York: Columbia University Press.
Friedmann, N. 2002. Question Production in Agrammatism: The Tree Pruning Hypothesis. Brain and Language 80:160-187.
Jackendoff, R. 2002. Foundations of Language: Brain, Meaning, Grammar, Evolution. Oxford: Oxford University Press.
Phillips, C. 1996. Order and Structure. PhD dissertation, Massachusetts Institute of Technology, Cambridge.
Sadeh Leicht, O. 2003. Sporadic Occurrence of the Garden Path Effect. In Yearbook 2003, eds. W. Heeren, D. Papangeli and E. Vlachou, 59-68. Utrecht Institute of Linguistics OTS.
Vosse, T. and Kempen, G. A. M. 2000. Syntactic structure assembly in human parsing: A computational model based on competitive inhibition and lexicalist grammar. Cognition 75:105-143.
ABOUT THE REVIEWER
Oren Sadeh-Leicht is a Ph.D. student in Psycholinguistics at the Utrecht Institute of Linguistics OTS, The Netherlands. He is studying the relation between parsing performance and grammatical competence. His M.A. thesis was entitled "Parsing Optional Garden Path Sentences in Hebrew" (cf. a summary of this work in Sadeh Leicht, 2003). He is more generally interested in parsers, the evolution of language, and neurolinguistics.
Versions:
Format: Hardback
ISBN: 0805852085
ISBN-13: N/A
Pages: 424
Prices: U.S. $125.00