LINGUIST List 15.1605

Thu May 20 2004

Diss: Neuroling: Koutsomitopoulou: 'A neural...'

Editor for this issue: Tomoko Okuno <tomoko@linguistlist.org>


Directory

  • e_koutso, New Dissertation Abstract added

    Message 1: New Dissertation Abstract added

    Date: Thu, 20 May 2004 06:34:59 -0400 (EDT)
    From: e_koutso <e_koutso@yahoo.com>
    Subject: New Dissertation Abstract added




    Institution: Georgetown University
    Program: Department of Linguistics
    Dissertation Status: Completed
    Degree Date: 2004

    Author: Eleni Koutsomitopoulou

    Dissertation Title: A neural network model for the representation of Natural Language

    Linguistic Field: Applied Linguistics; Computational Linguistics; Linguistic Theories; Psycholinguistics; Semantics; Text/Corpus Linguistics; Neurolinguistics; Cognitive Science

    Dissertation Director 1: Donald Loritz
    Dissertation Director 2: George V. Wilson
    Dissertation Director 3: Solomon S.J. Sara
    Dissertation Director 4: Allan G. Alderman

    Dissertation Abstract:

    Current research in natural language processing demonstrates the importance of analyzing syntactic relationships, such as word order, topicalization, passivization, dative movement, particle movement, and pronominalization, as dynamic resonant patterns of neuronal activation (Loritz, 1999). Following this line of research, this study demonstrates the importance of also analyzing conceptual relationships, such as polysemy, homonymy, ambiguity, metaphor, neologism, and coreference, as dynamic resonant patterns represented in terms of neuronal activation. This view has implications for the representation of natural language: formal representation methods, by contrast, abstract away from the actual properties of real-time natural language input, and rule-based systems have limited representational power.

    Since natural language is a human neurocognitive phenomenon, we presume that it can best be represented in a neural network model. This study focuses on a neural network simulation, the Cognitive Linguistic Adaptive Resonant Network (CLAR-NET), which models online, real-time associations among concepts. The CLAR-NET model is a simulated Adaptive Resonance Theory (ART; Grossberg 1972 et seq.) model. Through a series of experiments, I address particular linguistic problems such as homonymy, neologism, polysemy, metaphor, constructional polysemy, contextual coreference, subject-object control, event-structure metaphor, and negation. The aim of this study is to infer natural-language-specific mappings of concepts in the human neurocognitive system on the basis of known facts and observations provided within the realms of conceptual metaphor theory (CMT) and adaptive grammar (AG; Loritz 1999), theories of linguistic analysis, and known variables drawn from the brain and cognitive sciences, as well as previous neural network systems built for similar purposes. Additionally, this study investigates the extent to which these linguistic phenomena can be plausibly analyzed and accounted for within an ART-like neural network model.
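    The ART matching cycle the abstract builds on can be sketched in a few lines: an input either resonates with a stored category (its match exceeds a vigilance threshold) or triggers a mismatch reset. This is a minimal illustration of the general ART mechanism, not the CLAR-NET code; the function name, prototypes, and vigilance value are illustrative assumptions.

```python
def art_match(input_vec, prototypes, vigilance=0.7):
    """Return the index of the first resonating category, or None (reset)."""
    norm = sum(input_vec)
    for j, proto in enumerate(prototypes):
        # Fuzzy-AND overlap between input and stored category prototype
        overlap = sum(min(i, p) for i, p in zip(input_vec, proto))
        if norm and overlap / norm >= vigilance:
            return j  # resonance: category j accepts (and would learn) the input
    return None       # mismatch reset: no stored category resonates

protos = [[1, 1, 0, 0], [0, 0, 1, 1]]
print(art_match([1, 0, 0, 0], protos))  # resonates with category 0
```

    Raising the vigilance parameter makes categories more selective, forcing finer-grained concept clusters; lowering it yields broader generalization.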

    My basic hypothesis is that the association among concepts is primarily an expression of domain-general cognitive mechanisms that depend on continuous learning of both previously presented linguistic input and everyday, direct experiential (i.e. sensory-physical) behaviors represented in natural language as "common knowledge" (or "common sense"). According to this hypothesis, complex conceptual representations are not actually associated with pre-postulated feature structures, but with time-sensitive dynamic patterns of activation. These patterns can reinforce previous learning and/or create new "place-holders" in the conceptual system for future value binding.

    This line of investigation holds implications for language learning, neurolinguistics, metaphor theory, information retrieval, knowledge engineering, case-based reasoning, knowledge-based machine translation systems, and related ontologies.

    This study finds that although STM effects in ART-like networks are significant, LTM computation usually yields better semantic discrimination. It is suggested that the internal structure of lexical frames corresponding to clusters of congenial associations (in fact, neuronal subnetworks) is maintained as long as it resonates with new input patterns or is encoded in long-term memory traces. Different degrees of similarity to (or deviation from) previously acquired knowledge clusters are computed as activation levels of the corresponding neuronal nodes and may be measured via differential equations of neuronal activity.
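    The "differential equations of neuronal activity" in ART-style models are typically of Grossberg's shunting form, dx/dt = -A*x + (B - x)*E - x*I, where E and I are excitatory and inhibitory inputs. The sketch below integrates one node to equilibrium; the parameter values are illustrative assumptions, not figures from the dissertation.

```python
def shunting_activation(excite, inhibit, A=1.0, B=1.0, dt=0.01, steps=1000):
    """Euler-integrate one node's shunting activation from rest to equilibrium."""
    x = 0.0
    for _ in range(steps):
        # dx/dt = -A*x + (B - x)*excite - x*inhibit  (passive decay,
        # saturating excitation, shunting inhibition)
        x += dt * (-A * x + (B - x) * excite - x * inhibit)
    return x

# Activation stays bounded by B and settles at B*E / (A + E + I):
print(round(shunting_activation(excite=2.0, inhibit=0.5), 3))  # -> 0.571
```

    The equilibrium value B*E/(A+E+I) is what makes activation levels usable as graded similarity scores: stronger excitatory support from a knowledge cluster yields a proportionally higher, but always bounded, node activation.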

    The overall conclusion is that ART-like networks can model interesting linguistic phenomena in a neurocognitively plausible way.