LINGUIST List 17.1639
Wed May 31 2006
Diss: Applied Ling/Socioling: Bishop: 'Bimodal Bilingualism in Hear...'
Editor for this issue: Meredith Valant
<meredith@linguistlist.org>
Directory
1. Michele Bishop, Bimodal Bilingualism in Hearing, Native Signers of American Sign Language
Message 1: Bimodal Bilingualism in Hearing, Native Signers of American Sign Language
Date: 31-May-2006
From: Michele Bishop <mishbish@mac.com>
Subject: Bimodal Bilingualism in Hearing, Native Signers of American Sign Language
Institution: Gallaudet University
Program: Department of Linguistics and Translation
Dissertation Status: Completed
Degree Date: 2006
Author: Michele Bishop
Dissertation Title: Bimodal Bilingualism in Hearing, Native Signers of American Sign Language
Linguistic Field(s):
Applied Linguistics
Sociolinguistics
Subject Language(s): American Sign Language (ase)
Dissertation Director:
Karen Emmorey
Kendall A King
Ceil Lucas
Paul Preston
Beppie Van Den Bogaerde
Dissertation Abstract:
This dissertation describes the features of bimodal bilingualism in naturalistic discourse among hearing, native users of American Sign Language (ASL) and addresses three main questions:
1. What are the features of code-blending in bimodal communication?
2. What are the sociolinguistic/pragmatic features of bimodal communication?
3. Which model for determining a base language in mixed utterances is best able to account for code-blends?
This study aims to provide a thorough description of the bimodal linguistic phenomenon known as code-blending, or simultaneous signed and spoken utterances, by analyzing naturalistic discourse among hearing, native signers of ASL, specifically discussing topics about childhood, language and identity. Adult, native bimodal bilinguals represent the only group of bilinguals with the potential to produce two typologically distinct, native languages simultaneously. Linguistic research on spoken language bilinguals has been driven by the attempt to determine a base or matrix language in sequential mixed utterances. However, the unique feature of mixed simultaneous utterances has not figured into the discussion to any great degree. The issue of which theoretical model (i.e. Myers-Scotton's Matrix Language Frame, 1993a; Bogaerde and Baker, in press) is best able to account for code-blending is explored using data from both a pilot project done with Italian bimodal bilinguals (Bishop & Hicks, forthcoming) and the current data analyzed in this study.
It is argued that Myers-Scotton's MLF model (1993a) for determining a base language in certain bimodal utterances is limited. An alternative model by Bogaerde and Baker (in press) illustrates a greater capability to account for all types of bimodal utterances, making this model a more viable approach. The application of both models (Bogaerde and Baker's and Myers-Scotton's) in this dissertation suggests that a model that incorporates meaning along with grammar would be able to account for the data that do not fit a pure grammar model; in other words, meaning is essential to language analysis. A Cognitive Grammar framework views meaning as critical to language analysis and claims that linguistic units in any grammar are form-meaning pairings (Langacker 1991). This dissertation relies to a great extent on the synthesis of both Cognitive Grammar (Langacker 1991) and Mental Space Theory (Fauconnier 1994) by Liddell in his latest book Grammar, Gesture and Meaning in American Sign Language (2003).
This dissertation suggests that the MLF model cannot be universally applied to all bilingual utterances, as has been claimed, and instead turns to the model proposed by Bogaerde and Baker (in press) for determining a base language in bimodal utterances. The application of their model, originally used for utterances by Deaf mothers and their Deaf and hearing children, is expanded and adapted to accommodate adult bimodal utterances. New categories of code-blends are presented together with a discussion of the impact of ASL on spoken discourse, such as the use of surrogate and token blends, listing, buoys, and depicting verbs (Liddell 2003). It is suggested that co-speech gesture studies and gesture studies in general are more productive than spoken language code-switching studies in providing insights into code-blending phenomena.