LINGUIST List 21.287

Sun Jan 17 2010

Confs: Lang Acquisition, Psycholing, Comp Ling/USA

Editor for this issue: Amy Brunett <brunett@linguistlist.org>


LINGUIST is pleased to announce the launch of an exciting new feature: Easy Abstracts! Easy Abs is a free abstract submission and review facility designed to help conference organizers and reviewers accept and process abstracts online. Just go to: http://www.linguistlist.org/confcustom, and begin your conference customization process today! With Easy Abstracts, submission and review will be as easy as 1-2-3!
Directory
        1.    Lise Menn, American Association for the Advancement of Science

Message 1: American Association for the Advancement of Science
Date: 15-Jan-2010
From: Lise Menn <lise.menn@colorado.edu>
Subject: American Association for the Advancement of Science

American Association for the Advancement of Science
Short Title: AAAS


Date: 18-Feb-2010 - 22-Feb-2010
Location: San Diego, CA, USA
Contact: Lise Menn
Contact Email: lise.menn@colorado.edu
Meeting URL: http://www.aaas.org/meetings/

Linguistic Field(s): Computational Linguistics; Language Acquisition; Psycholinguistics

Meeting Description:

AAAS is the largest general meeting of the sciences, devoted to presenting research to the general public and to scientists across discipline boundaries. It is also the largest general forum for linguists, and this year Section Z: Linguistics and Language Sciences is sponsoring or co-sponsoring three major symposia: on 2/19, Language Processing for Science and Society; on 2/20, Music-Language Interactions in the Brain; and on 2/21, Language Learning in Deaf Children: Integrating Research on Speech, Gesture, and Sign. Come be part of the public face of linguistics, and learn about related research areas!

Language Processing for Science and Society
Friday 2/19, 3:30-5:00
Organizer: Annie Zaenen
http://aaas.confex.com/aaas/2010/webprogram/Session1689.html

Spoken language was the first and remains the most pervasive communication and information technology. The invention of writing long ago extended our ability to communicate with those distant from us in time and space, and the recent arrival of computers provides new ways to access and share information encoded in speech or writing. The result is that we are confronted daily with more information than we can focus on or assimilate. Keyword search has provided a simple and surprisingly successful way to tap this fire hose of information, but it is a blunt instrument. Researchers are now developing algorithms that enable computers to home in on the information that users really want to find. This symposium illustrates these developments from three perspectives. The first contribution concentrates on developments with broad application to society as a whole. The other two focus on developments that, at least for the moment, are more relevant to the scientific community: how language technology can extract information from the scientific literature, and how computers can be made not only to read texts but also to reason about them.

Speakers:
Patti Price, PPRICE Speech and Language Technology Consulting
The Growing Impact of Speech Technology on Society

Edward Briscoe, University of Cambridge
Making the World's Scientific Information (More) Organized, Accessible, and Usable

Christopher Manning, Stanford University
Getting Computers to Understand What They Read

Music-Language Interactions in the Brain: From the Brainstem to Broca’s Area
Saturday 2/20, 3:30-5:00
Organizer: Aniruddh D. Patel

The past decade has seen an explosion of research on music and the brain. It is clear that music engages much of the brain and coordinates a wide range of cognitive processes. This naturally raises the question of how music cognition relates to other complex cognitive abilities. Language is an obvious candidate, since, like music, it relies on interpreting complex acoustic sequences that unfold in time. Whether music and language cognition share basic brain mechanisms has only recently begun to be studied empirically, and an exciting picture is emerging: there are more connections between the domains than dominant theories of musical and linguistic cognition would predict. Furthermore, these connections have real-world implications for the study and treatment of disorders of speech and language. This symposium explores music-language relations from three perspectives that combine behavioral and brain imaging methods: how speech is encoded by brainstem auditory structures; how “melodic intonation therapy” helps patients with non-fluent aphasia recover some of their spoken language fluency; and the relation between musical and linguistic syntactic processing.

Speakers:
Nina Kraus, Northwestern University
Cognitive-Sensory Interaction in the Neural Encoding of Music and Speech

Gottfried Schlaug, Harvard Medical School
Singing to Speaking: Observations in Healthy Singers and Patients with Broca's Aphasia

Aniruddh D. Patel, Neurosciences Institute
Music, Language, and Grammatical Processing


Language Learning in Deaf Children: Integrating Research on Speech, Gesture, and Sign
Sunday 2/21, 3:30-5:00
Organizer: Jenny Saffran; Discussant: Rachel Mayberry
http://aaas.confex.com/aaas/2010/webprogram/Session1348.html

How do infants who cannot hear learn language? The two dominant approaches to this question are typically considered in isolation. On the one hand, infants exposed to signed languages such as American Sign Language readily learn a native sign language via the visual modality. On the other hand, infants exposed to spoken languages such as English are often provided with devices such as cochlear implants, which facilitate the acquisition of a native spoken language. Yet there has been remarkably little cross-talk between investigators focused on these two modes of language learning. This symposium considers the nature of the language input to deaf infants and young children from both the visual and auditory perspectives. Drawing on studies of deaf infants with cochlear implants, it asks how these infants learn new spoken words and recognize familiar ones, given the degraded acoustic signal the implants provide. The types of visual input provided to deaf infants in the form of gesture across cultures will also be considered, along with the implications of early spoken and signed language input for later language acquisition. By bringing together new findings on the nature of the input to language learning in deaf infants and young children, the symposium will suggest important dimensions of experience that lead to successful language outcomes.

Speakers:
Derek Houston, Indiana University School of Medicine
Word Learning in Deaf Children with Cochlear Implants

Tina Grieco-Calub, Northern Illinois University
Processing of Spoken Words by 2-Year-Old Children Who Use Cochlear Implants

Marie Coppola, University of Chicago
Multi-Modal Input to Language Learning: Gesture and Speech to Children Across Cultures

Linguistics and Language Science Section Business Meeting
Friday, February 19, 2010: 7:45 PM-10:00 PM
Warner Center (San Diego Marriott Hotel & Marina)
Organizer: Lise Menn, U. of Colorado, Secretary, Section Z

For other symposia of interest, go to
http://aaas.confex.com/aaas/2010/webprogram/Symposium7.html
Cognitive Function and Development

