LINGUIST List 28.2019
Mon May 01 2017
Review: Cog Sci; Comp Ling: van Trijp (2016)
Editor for this issue: Clare Harshey
The evolution of case grammar
Book announced at http://linguistlist.org/issues/27/27-2838.html
AUTHOR: Remi van Trijp
TITLE: The evolution of case grammar
SERIES TITLE: Computational Models of Language Evolution
PUBLISHER: Language Science Press
REVIEWER: Daniel Walter, Emory University
Reviews Editor: Helen Aristar-Dry
In The Evolution of Case Grammar, Remi van Trijp puts forth a computationally plausible program that outlines the evolution of case grammar in a communicative network of artificial agents. The book is organized into five chapters. The first two, “Case and artificial language evolution” and “Processing case and argument structure”, provide necessary background information for the experiments outlined in the following two chapters, “Baseline experiments” and “Multi-level selection and language systematicity”. The final chapter, “Impact on artificial language evolution and linguistic theory”, explains the relevance of the experimental findings to other computational models of language evolution, as well as to linguistics more generally.
In the first chapter, “Case and artificial language evolution”, van Trijp poses two important questions regarding case and language evolution:
1. Why do some languages evolve a case system?
2. How can a population self-organize a case system?
According to the author, neither of these questions has been answered by traditional linguistics, and the lack of raw historical data makes it unlikely that an answer can be found by exploring natural languages. Along this line of reasoning, the author argues for agent-based modeling, or the creation of a virtual environment with artificial agents that can operate at a pace which dramatically shrinks the timeline needed to observe evolutionary change. The author then turns to an overview of case. The author indicates that there are two primary functions of case within a language: to indicate event structure and to package information structure. He also argues that there are four stages of case marking: no marking, specific marking, semantic roles, and syntactic roles. In the third section of the first chapter, the author turns from case to modeling language evolution. He presents three possible models. The first is a model of genetic evolution, which focuses mainly on selective pressures and fitness. In this model, parent-agents pass on language genomes to their offspring-agents. It is at the stage of transmission that variation takes place, in the form of mutations. The second type of model presented is an Iterated Learning Model (ILM), where language is transmitted from one generation to the next culturally, rather than being genetically endowed. Unlike the previous model, there is no reliance on functional pressures. This model is highly representative of a generative approach to language acquisition. The third model “views the task of building and negotiating a communication system as a kind of problem-solving process” (p. 12) and reflects cognitive-functional and usage-based theories of language learning. The author emphasizes that these models are not necessarily mutually exclusive but submits that his research agenda most closely follows the third model.
After describing the models, the author describes the do’s and don’ts of this type of work. In the final section, van Trijp provides an overview of previous work, including the emergence of adaptive lexicons and the basis of “Fluid Construction Grammar” (FCG). The author adopts FCG as the primary theory of language for this book, characterizing it as a standard construction grammar (Croft, 2005) with an added emphasis on the fluidity of language emergence across linguistic units. The author expands upon FCG further in the second chapter.
In the second chapter, “Processing case and argument structure”, van Trijp begins with an appeal to processing models that incorporate both parsing and production, specifically FCG. FCG is designed to be bidirectional. That is to say, an FCG-system is built to be capable of both processing an utterance it receives and producing a meaningful utterance it is asked to convey. The underlying representational system is similar to connectionist models, where the connections between nodes representing different features of the language are updated based on probabilities built from input and experience. One of the primary concepts to understand is the coupled feature structure. In FCG, all linguistic knowledge is represented in a coupled feature structure with a semantic pole and a syntactic pole. Within this system, the primary functions of the program are to ‘unify’ and ‘merge’. These two operations are meant to satisfy the conditions for parsing and production, in which syntactic and semantic features unify and merge to produce new coupled feature structures. In addition to ‘unify’ and ‘merge’, FCG also requires a special operator, called the ‘J-operator’, which allows for the specification of hierarchical relations between units. In the final section of the chapter, the author provides an example of how the FCG system would parse the phrase ‘jack sweep dust off-floor’ and produce the phrase ‘jack sweep floor’. In this section, it is necessary to understand how the different nodes are bound together. To put it simply, the system already has access to a list of vocabulary and a list of possible semantic constructions. It is the job of the system to merge the vocabulary items with the semantic constructions and to check that the stored meanings and forms match those produced before a new node is created. Each construction is endowed with valences for the possible number of lexical items with which it can merge.
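For readers unfamiliar with unification-based formalisms, the ‘unify’ and ‘merge’ operations can be illustrated with a toy sketch in which feature structures are flat Python dictionaries. This is an invented simplification for illustration, not the actual FCG machinery (which operates over coupled, hierarchical semantic and syntactic poles):

```python
# Toy sketch of FCG-style 'unify' and 'merge' over flat feature
# structures, modeled here as plain dicts. All feature names and
# values are invented examples.

def unify(fs1, fs2):
    """Return True if the two feature structures are compatible,
    i.e. every feature they share carries the same value."""
    return all(fs2[k] == v for k, v in fs1.items() if k in fs2)

def merge(fs1, fs2):
    """If the structures unify, combine them into a new structure
    containing the features of both; otherwise signal failure."""
    if not unify(fs1, fs2):
        return None
    combined = dict(fs1)
    combined.update(fs2)
    return combined

# A lexical entry and a construction slot that agree on 'sem-cat':
lexical = {"string": "jack", "sem-cat": "animate"}
slot = {"sem-cat": "animate", "role": "agent"}
print(merge(lexical, slot))
# A slot demanding an inanimate filler fails to unify:
print(merge(lexical, {"sem-cat": "inanimate"}))  # prints None
```

Here ‘jack’ merges with an agent slot because their one shared feature agrees, while a slot demanding an inanimate filler fails to unify and the merge is rejected.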
The major hurdle that this type of system faces, which will be the motivation for case marking, is that there is no syntax inherent in the constructions. The order of the lexical items is not relevant for the system to comprehend an utterance, since lexical items match with the construction roles. Therefore ‘jack sweep floor’ and ‘jack floor sweep’ and ‘floor jack sweep’ are all perfectly intelligible to the system. It is simply a reliance on the order in the input that makes one more conventionalized. Finally, as lexical entries are used within certain constructions, they gain a specific confidence score which helps conventionalize the links between a particular lexical entry and a semantic construction.
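The order-insensitivity described above, and the confidence scores that conventionalize links between lexical items and construction roles, can be sketched roughly as follows. The categories, step size, and scoring scheme are invented stand-ins, not the book’s actual values:

```python
import itertools

# Toy sketch: a construction's roles match lexical items by semantic
# fit, not by word order, so any permutation of the utterance parses
# to the same meaning. Categories below are invented examples.
SEM_CAT = {"jack": "agent", "sweep": "action", "floor": "patient"}

def parse(utterance):
    """Map each word to a construction role by category, ignoring order."""
    return {SEM_CAT[word]: word for word in utterance.split()}

# All orderings of the same words yield the same role assignment:
readings = {tuple(sorted(parse(" ".join(p)).items()))
            for p in itertools.permutations(["jack", "sweep", "floor"])}
assert len(readings) == 1

# Confidence scores conventionalize lexicon-construction links:
confidence = {}
def reinforce(word, role, success, step=0.1):
    """Nudge the link score up after a successful game, down otherwise."""
    score = confidence.get((word, role), 0.5)
    score = min(1.0, score + step) if success else max(0.0, score - step)
    confidence[(word, role)] = score
```

The assertion makes the point from the review concrete: ‘jack sweep floor’, ‘jack floor sweep’, and ‘floor jack sweep’ all come out identically, so only usage frequency, tracked by the confidence scores, can conventionalize one order.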
In the third chapter, “Baseline experiments”, van Trijp begins to describe the experiments he conducted using an FCG-based system, which show how a population of agents can evolve grammar over time. The input to the system is based on a real-world puppet theater in which physical agents (puppets) interact with objects and with other puppets; these scenes provide the situations that the computer agents need to communicate about and comprehend. In addition to the physical environment, the agents themselves are endowed with particular abilities, such as social and cooperative behavior and a predefined lexicon, but no grammar. In the first baseline experiment, the agents have only a diagnostic feature which allows them to link lexical items to constructions, but not to mark the relationships between words. This experiment was run twice: first with only two agents, and then with a population of 10 agents. Results are discussed in terms of communicative success, i.e. whether the ‘hearer’ agent signals agreement or disagreement with the scenario to which both gave attention, and cognitive effort, i.e. the number of inferences the ‘hearer’ needs to make in order to successfully understand the ‘speaker’. The results of the first experiment show that after 10 series of 500 language games, the average communicative success reached approximately 70% and the maximum cognitive effort (on a scale from 0 to 1) was 0.6. This indicates that while the communicators were successful more often than not, a large amount of cognitive effort was needed and much room for improvement remained. In the second baseline experiment, agents are endowed with a repair strategy that allows them to create role-specific marking in order to optimize communication. This innovation and expansion is proceduralized through a re-entrance process. The experiment is run with populations of 2, 5, and 10 agents.
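The two measures can be mimicked with a simple running tally over a series of simulated games (an invented toy harness, not the book’s actual experimental setup):

```python
import random

def run_games(n_games, play_game):
    """Track average communicative success (share of games in which the
    hearer signals agreement) and average cognitive effort (inferences
    the hearer needs, normalized to 0..1) over a series of games."""
    successes, efforts = 0, []
    for _ in range(n_games):
        success, effort = play_game()
        successes += success
        efforts.append(effort)
    return successes / n_games, sum(efforts) / n_games

# Invented stand-in for a single game between two agents, tuned to
# echo the baseline-1 numbers reported in the review:
def toy_game():
    success = random.random() < 0.7      # roughly 70% communicative success
    effort = random.uniform(0.3, 0.6)    # substantial effort even when successful
    return success, effort
```

Running `run_games(5000, toy_game)` would yield averages near the 70% success and high-effort pattern the review describes for the first baseline.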
Unlike the results from baseline experiment 1, in which communicative success and cognitive effort remained steady throughout the 500 language games, the results of this experiment show change over time in a pattern closely resembling a power function. In all three population scenarios, communicative success reaches 100%, and cognitive effort drops to 0.0. The time required for this to occur is longer the more agents are involved. In this experiment, the author also tracks the number of unique markers needed to achieve this success. Competition between forms for marking roles in particular contexts also occurs, but since these markers are created and developed locally, tied to a particular context and lexical item, the total number of role markers needed is relatively high compared to the number of generalizable contexts. In the final baseline experiment, the author notes that if the agents were to continue on with larger populations and more contexts, the inventory size needed would explode. To solve this problem, the agents are given the ability of analogical reasoning, for which the author provides the following algorithm:

1. Given a target participant role i, find a source role j for which a case marker already exists;
2. Elaborate the mapping between the two;
3. Keep the mapping if it is good;
4. If multiple analogies are possible, choose the best one (based on entrenchment and category size); and
5. Build the necessary constructions and make the necessary changes to existing items.

In the final experiment, there are four different trials. In trials 4a and 4b, the only difference is the size of the population, 2 and 10 respectively. In both of these experiments, the populations succeed in achieving 100% communicative success and 0.0 cognitive effort, but the number of markers during the innovation and learning stage fails to align within the population and decrease over time.
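The analogy step can be sketched as a best-candidate search over existing markers. The scoring function below is an invented placeholder that combines the two factors the author names, entrenchment and category size; it is not the book’s actual measure:

```python
# Toy sketch of the analogical-reasoning step: given a target
# participant role, search the existing case markers for the best
# source role to extend. Data shapes and scoring are invented.

def best_analogy(target, markers, similarity, min_fit=0.5):
    """Return the source role whose marker best extends to the target
    role, or None if no mapping is good enough (i.e. innovate instead)."""
    best, best_score = None, 0.0
    for source, marker in markers.items():
        fit = similarity(target, source)   # quality of the mapping
        if fit <= min_fit:
            continue                       # 'keep the mapping if it is good'
        # Invented scoring: mapping quality weighted by entrenchment
        # and by the size of the marker's existing category of roles.
        score = fit * marker["entrenchment"] * len(marker["roles"])
        if score > best_score:
            best, best_score = source, score
    return best  # None signals: build a brand-new marker instead
```

A returned source role would then trigger step 5 of the algorithm, extending that marker’s constructions to cover the target role; a `None` result corresponds to innovating a fresh marker.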
In set-up 3c, the agents are also given the ability to track co-occurrences. This time the agents begin to align after the innovation and learning phase, and the total number of markers needed for comprehension decreases over the course of the language games. The author notes, however, that the gain in inventory optimization is minimal, since the agents end up with an average of 25 markers for 30 total participant roles. In set-up 3d, the agents rely solely on the token frequency of successful interactions instead of the co-occurrence links and confidence scores. The results indicate persistent variation during the phase in which the agents should be aligning to a common set of markers. Overall, while communicative success and cognitive effort can be optimized, the resulting system does not mirror that of natural languages: there is no optimization of the total number of markers needed and no generalization of markers across constructions. The author argues for a dynamic representation of categories and word meanings and for the ability of agents to make one-to-many or even many-to-many mappings of markers to roles, which he believes can be achieved through coordination and pattern formation.
In Chapter Four, ‘Multi-level selection and language systematicity’, van Trijp presents an argument for the need for a dynamic, multi-level selection process to overcome the loss of systematicity that many artificial languages suffer once smaller patterns begin to combine with larger ones. The chapter begins with an overview of pattern formation in natural languages, including an in-depth look at negation in French. The material here dives into whether this evolution in French is the result of reanalysis or of pattern formation. The author falls on the side of pattern formation as the more likely culprit for this language change and argues that computational models can show their merit in this arena by demonstrating the consequences of each hypothesis. The author then moves to a description of what pattern formation would look like computationally, which is then employed in the following experiments. In experiment 1, the setup is the same as in experiment 2c, in that there is individual selection without analogy, but the agents are endowed with a new diagnostic and repair strategy for pattern formation. With this new strategy, agents create and converge on one construction for each possible meaning, not one marker. The results show that the acquisition of constructions happens quickly, but alignment of constructions takes a relatively long time. This is calculated as a meaning-form coherence score. In this experiment, the population almost reaches 100% for this score, but one or two forms are still undecided, even after 16,000 language games. The author argues that the agents are incapable of constructing a systematic language because they treat constructions as independent linguistic items.
The importance for case markers, as indicated by the author, is that “this results in some case markers losing the competition for marking a certain participant role on the level of a single-argument constructions but still becoming the most successful one as part of a larger pattern” (p. 125). The author then continues by discussing the lack of systematicity in other work, such as exemplar-based simulations, probabilistic grammars, and Iterated Learning Models (ILMs). In experiment 2, the author tries to overcome the systematicity problem by providing the agents with a multi-level alignment strategy, still without analogy. In this multi-level strategy, when a game is successful, the hearer not only increases the score of the applied constructions, but also those of the related constructions, while punishing competing constructions through lateral inhibition. The results of three different strategies (top-down, bottom-up, and multi-level) show that with the multi-level selection strategy, all ten agents share the same form-meaning mapping preferences after only 7,000 language games. In addition, the agents agree upon 30 case markers for the 51 possible constructions. The final experiment builds on the previous one by adding the ability for analogy to the multi-level selection strategy. The results are a comparison of five set-ups: individual, top-down, and bottom-up selection, multi-level selection with the previous lateral inhibition mechanism, and finally multi-level selection without lateral inhibition, but with an algorithm for memory decay instead. To begin, the author compares the first four. The results indicate that multi-level selection is the fastest to achieve 100% communicative success and 0.0 cognitive effort, as well as the only one to achieve a fully systematic language. However, the number of specific case markers is still over 20, and only five were generalized to multiple roles.
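The multi-level update described here can be sketched as follows; the step sizes and the notion of which constructions count as ‘related’ are assumptions made for illustration, not taken from the book:

```python
def multilevel_align(scores, applied, related, competitors,
                     reward=0.1, inhibit=0.2):
    """After a successful game: reward the applied construction and the
    larger patterns it participates in, and laterally inhibit competing
    constructions for the same meaning. Step sizes are invented."""
    for cxn in [applied] + related:
        scores[cxn] = min(1.0, scores.get(cxn, 0.5) + reward)
    for cxn in competitors:
        scores[cxn] = max(0.0, scores.get(cxn, 0.5) - inhibit)
    return scores
```

Rewarding the related patterns alongside the applied construction is what lets a marker that loses on the single-argument level still win as part of a larger pattern, the effect quoted above.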
The author then sums up the results of the final experimental setup, in which the memory-decay strategy is paired with an alignment strategy focused on re-using existing markers, as long as no conflicting participant role has already been created. The results show that this method ends with a final count of three participant roles generalized across constructions, a successful optimization of the meaning space and of the total number of participant markers the system needs to address.
The final chapter, “Impact on artificial language evolution and linguistic theory”, includes a comparison of this approach to others, with a particular focus on the Iterated Learning Model. The author focuses primarily on issues of systematicity and on the ILM’s need to incorporate innate language skills in order to succeed. In contrast, van Trijp’s model reduces the need for an innate language acquisition device. He also makes distinct comparisons between this model and other argument and construction grammar models, such as Berkeley Construction Grammar (Kay & Fillmore, 1999) and Sign-Based Construction Grammar (Fillmore et al., unpublished). The author compares his model’s ability to explain ditransitive constructions in English to that of these other construction grammar theories. The author concludes with a summary of this research agenda’s contributions to linguistics and further directions of study, and with an emphasis on continuing to look at both frequency and function in linguistic evolution.
Van Trijp provides an in-depth perspective on language evolution, which Christiansen and Kirby (2003) dubbed the hardest problem in science, and on the ways in which computational linguistics can contribute to our knowledge of it. Most importantly, his focus is on testing the mechanisms which make language evolution possible, in contrast to other fields of study that assume some type of language acquisition device or black box, in which the operations and actual processes that transform language over time are not within the scope of study. Understanding language mechanisms and how they operate over time, across cognitive space, and with one another is crucial to creating a model of language that can account for processing and production.
Van Trijp’s clear discussion of previous work and his detailed account of his methods are essential to the book’s accessibility for those who are not working directly in computational models of language. While some basic knowledge of coding would be beneficial for someone interested in this work, especially when looking at how particular algorithms are proceduralized into code, it is not absolutely necessary. Also, for those who are interested in computational work, van Trijp’s description of case and its function within natural languages allows for an easy understanding of the relevance of this work to all of linguistics.
Another important insight of this work is its ability to account for the development of case over time while still achieving comprehension accuracy and decreasing the cognitive load on the individual. Some may argue that the scoring system for cognitive load would need a more fine-grained approach to be more life-like, but for this work it succeeds in at least capturing the phenomenon in a computationally meaningful way. The later experiments in this book build on the earlier ones, which took into account neither systematicity nor lexicon-size optimization. In these later experiments, the author shows how a more detailed, multi-level alignment strategy, along with analogical reasoning, was able to address both systematicity and lexicon optimization. The results provide a clear and convincing picture of the types of cognitive mechanisms necessary to achieve case systems akin to those of natural languages.
As for audience, this book is intended for researchers and scholars in computational linguistics, but a more general audience of scholars in linguistics more broadly would also benefit from the insights resulting from this type of experimental approach to language evolution. Any researcher in linguistics would benefit from learning more about how computational methods can be used to test theories about underlying and invisible mechanisms of language learning, processing, production, and evolution. In addition, the detailed manner in which the coding is described and laid out provides something of a roadmap for those who are looking to conduct similar experiments.
In sum, van Trijp’s book does an excellent job of providing access to a complex research area. His efforts in describing not only what he did but also his justifications allow readers a deeper understanding of the assumptions researchers in computational linguistics need to make. Van Trijp weaves together a long-studied area of linguistics with a novel approach using computational models and discusses the relevance of his results not only for computational linguistics, but for linguistics more generally.
Christiansen, M. & Kirby, S. (2003). Language evolution: The hardest problem in science? In Morten Christiansen & Simon Kirby (eds.), Language evolution, 1-15. Oxford: Oxford University Press.
Croft, W. (2005). Logical and typological arguments for radical construction grammar. In Jan-Ola Östman & Mirjam Fried (eds.), Construction grammars: Cognitive grounding and theoretical extensions, 273-314. Amsterdam: John Benjamins.
Fillmore, C., Kay, P., Michaelis, L., & Sag, I. (unpublished). Construction grammar. Unpublished manuscript. Chapter 7 available online at http://lingo.stanford.edu/sag/SBCG/7.pdf
Kay, P. & Fillmore, C. (1999). Grammatical constructions and linguistic generalizations: The what’s X doing Y? Construction. Language 75, 1-33.
ABOUT THE REVIEWER
Dr. Walter is a Visiting Assistant Professor of German and English at Emory University's Oxford Campus, where he teaches German, English composition, and linguistics. His research interests include the acquisition of second language case marking, German as a foreign language, and the interplay between language learning mechanisms in foreign language acquisition. His publications focus primarily on case marking in German by adult foreign language learners, but he is also interested in understanding the role of entrenchment in the acquisition of case markers and the effect of developing concepts to overcome challenges to case acquisition in various languages. He is especially interested in understanding the relationship between case forms and their acquisition in cross-linguistic comparisons.
Page Updated: 01-May-2017