LINGUIST List 14.3517

Thu Dec 18 2003

Confs: Phonetics/Boston, MA USA

Editor for this issue: Marie Klopfenstein <marie@linguistlist.org>


Please keep conference announcements as short as you can; LINGUIST will not post conference announcements that, in our opinion, are excessively long. To post to LINGUIST, use our convenient web form at http://linguistlist.org/LL/posttolinguist.html.

Directory

  1. lavoie, Tutorial on Acoustic Analysis

Message 1: Tutorial on Acoustic Analysis

Date: Tue, 16 Dec 2003 16:34:09 -0500 (EST)
From: lavoie <lavoie@fas.harvard.edu>
Subject: Tutorial on Acoustic Analysis


Tutorial on Acoustic Analysis 

Date: 08-Jan-2004 - 08-Jan-2004
Location: Boston, Massachusetts, United States of America
Contact: Lisa Lavoie
Contact Email: lavoie@fas.harvard.edu 

Linguistic Sub-field: Phonetics

Meeting Description:

A Tutorial on Acoustic Analysis will be part of the LSA Annual Meeting
in January. It will take place starting at 1 pm on Thursday, January
8. The first half of the tutorial will be an introduction or a
refresher and the second half will consist of case studies applying
acoustic analysis to research questions. Contact Lisa Lavoie
(lavoie@fas.harvard.edu) with questions or for more information.

TUTORIAL: ACOUSTIC ANALYSIS
Part of the Linguistic Society of America Annual Meeting

Thursday, January 8, 2004
Back Bay D Ballroom
1:00 to 7:00 pm

Organizers:	
Lisa Lavoie (Harvard) 
Ioana Chitoran (Dartmouth)

Presenters: 	
Ioana Chitoran (Dartmouth)
John Kingston (UMass Amherst)
Lisa Lavoie (Harvard)
Ian Maddieson (UC Berkeley)
Sharon Manuel (MIT)
Joyce McDonough (Rochester) 
Stefanie Shattuck-Hufnagel (MIT)
Janet Slifka (MIT)
Lisa Zsiga (Georgetown)

In recent years, research agendas in theoretical phonology and
experimental phonetics have increasingly converged. As a result, it
has become important for linguists working in different areas of the
field to incorporate aspects of acoustic phonetics into their
research. This tutorial will focus on ways in which variation in the
acoustic signal can be interpreted to obtain articulatory
information. Our goal is to illustrate how much, and what kind of,
information can be derived relatively easily from the acoustic
signal, and to help participants learn what to expect and what not to
expect. Acoustic analysis is a highly accessible means of phonetic
data analysis that can be used on its own or to help determine what
to study using more data-intensive or articulatory techniques.

The first part of the workshop consists of basic training in acoustics
and interpreting spectrograms, appropriate either as a refresher or
for those who are new to this kind of work. The second part consists
of case studies that will be of interest both to novices and to those
already skilled in acoustic analysis. The case studies will include
generous amounts of data, as well as discussion of how it was
analyzed.

PART 1		1:00 to 4:15
INTRODUCTION TO ACOUSTIC ANALYSIS AND INTERPRETING SPECTROGRAMS

Sharon Manuel (MIT) 
*Introduction to Acoustics* 
This introduction covers the anatomy of the vocal tract and the
source-filter model of acoustics. It will examine the acoustic
patterns resulting from various types of articulations. Topics include
acoustics of periodic laryngeal sources, aperiodic laryngeal and
supralaryngeal sources, formants, and acoustic patterns for various
consonant classes.
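The source-filter model described above can be sketched in a few lines of code: a periodic laryngeal source approximated by an impulse train, and each formant modeled as a second-order resonator. This is a minimal, hypothetical numpy-only illustration; the signal and parameter values are the author's assumptions, not part of the tutorial materials.

```python
import numpy as np

def resonator(x, f, bw, fs):
    """Filter x through a single second-order (one-formant) resonator."""
    r = np.exp(-np.pi * bw / fs)           # pole radius from bandwidth
    theta = 2.0 * np.pi * f / fs           # pole angle from center frequency
    a1, a2 = 2.0 * r * np.cos(theta), -r * r
    b0 = 1.0 - a1 - a2                     # normalize for unity gain at DC
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = b0 * x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

fs = 8000                       # sampling rate in Hz
f0 = 120                        # fundamental: rate of the glottal pulses
source = np.zeros(fs // 2)      # half a second of signal
source[:: fs // f0] = 1.0       # periodic laryngeal source: impulse train

# Filter: a cascade of two formant resonators, roughly [a]-like F1 and F2
vowel = resonator(resonator(source, 700, 130, fs), 1200, 120, fs)
```

The output spectrum keeps the harmonics of the 120 Hz source, with energy concentrated near the two resonator frequencies, which is exactly the source-filter separation the presentation describes.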

Sharon Manuel, Stefanie Shattuck-Hufnagel, Janet Slifka (MIT)
*Interpreting Spectrograms I*
Presenters will work through several relatively straightforward
spectrograms of words and/or phrases, to give participants experience
in interpreting the acoustic patterns of utterances. Some of the
utterances will be produced by both male and female speakers to
illustrate the range of normal variation. Several of the utterances
will illustrate where to take measurements for specific segments and
segment sequences, such as the boundary between an aspirated stop and
the following vowel or what kinds of measures adequately characterize
a diphthong.
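Readers who want to generate practice spectrograms of their own can compute a wideband magnitude spectrogram with a short-time FFT in a few lines. This is a bare numpy sketch with illustrative window and hop sizes, not the toolchain the presenters use.

```python
import numpy as np

def spectrogram(x, fs, win_ms=5.0, hop_ms=1.0):
    """Wideband magnitude spectrogram via a short-time FFT.

    A short (~5 ms) analysis window smears individual harmonics together,
    which is what makes formant bands visible on a wideband display.
    """
    win = int(fs * win_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    w = np.hamming(win)
    frames = np.stack([x[i:i + win] * w
                       for i in range(0, len(x) - win, hop)])
    S = np.abs(np.fft.rfft(frames, axis=1))        # shape: (time, frequency)
    freqs = np.fft.rfftfreq(win, 1.0 / fs)
    times = np.arange(len(frames)) * hop / fs
    return S, freqs, times

# Quick sanity check on a pure 1 kHz tone: every frame should peak near 1 kHz
fs = 8000
t = np.arange(0, 0.2, 1.0 / fs)
S, freqs, times = spectrogram(np.sin(2 * np.pi * 1000 * t), fs)
```

Plotting `S` (e.g. as a grayscale image with time on the x-axis) reproduces the kind of display worked through in this session.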

Lisa Lavoie (Harvard), Janet Slifka (MIT), Lisa Zsiga (Georgetown)
*Interpreting Spectrograms II*
Presenters will work through somewhat less straightforward
spectrograms with two goals in mind: (1) to model different approaches
to interpretation, i.e., strongly linguistic vs. more
engineering-based, and (2) to illustrate casual speech phenomena, such
as assimilation, palatalization, and apparent deletion.

PART 2		4:30 to 7:00 
APPLICATIONS OF ACOUSTIC ANALYSIS

In this portion, presenters will illustrate how they have used
acoustic analysis to answer theoretical questions and/or determine
what kind of articulation produced the observed acoustic
patterns. Presenters will address where, how, and why they made the
measurements that they did. They will also indicate which speech
analysis software they used and whether it was adequate to the
task. Each case study will last approximately 20 minutes, and will
occur in the order listed below.

Ian Maddieson (UC Berkeley)
*Deciding what is a formant*
Formant frequencies are often estimated from visual inspection of
spectrograms or by using FFT or LPC algorithms to calculate an output
spectrum or an estimate of the vocal-tract transfer function. All
methods have problems and may yield either more or fewer formants than
the best model for the sound being analyzed. This section will
discuss ways to recognize over- or underestimates of the number of
formants present, and how analysis parameters can be adjusted to
improve the results.
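On the LPC side of this, the predictor order is the analysis parameter that most directly controls how many candidate "formants" the analysis can report: raising it on a signal with few real spectral peaks invites spurious ones. The sketch below (autocorrelation-method LPC in plain numpy; the signal and parameter choices are illustrative, not the presenter's) makes the point on a two-peak test signal.

```python
import numpy as np

def lpc_coefficients(frame, order):
    """Autocorrelation-method LPC: solve the Yule-Walker normal equations."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))    # A(z) = 1 - sum_k a_k z^-k

def formants_from_lpc(frame, order, fs):
    """Candidate formant frequencies from the root angles of A(z)."""
    roots = np.roots(lpc_coefficients(frame, order))
    roots = [rt for rt in roots if np.imag(rt) > 0]   # one per conjugate pair
    freqs = sorted(np.angle(rt) * fs / (2.0 * np.pi) for rt in roots)
    return [f for f in freqs if f > 90.0]             # discard near-DC roots

# Test signal with exactly two spectral peaks (500 Hz and 1500 Hz), plus a
# trace of noise to keep the normal equations well conditioned
fs = 8000
t = np.arange(0, 0.05, 1.0 / fs)
x = np.sin(2 * np.pi * 500 * t) + 0.7 * np.sin(2 * np.pi * 1500 * t)
x = (x + 1e-3 * np.random.default_rng(0).standard_normal(len(t))) * np.hamming(len(t))

low = formants_from_lpc(x, 6, fs)    # low order: candidates near 500 and 1500 Hz
high = formants_from_lpc(x, 20, fs)  # high order: extra, spurious candidates
```

Comparing `low` and `high` shows the overestimation problem directly: the higher-order analysis reports additional "formants" that correspond to nothing in the signal.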

Joyce McDonough (Rochester)
*Differentiating fricatives based on their acoustic energy in Navajo* 
This presentation will examine the evidence that a spectral analysis
can bring to bear on the nature of phonemic contrasts in languages
that are without extensive phonetic documentation. Several problems
arise in a description of the Athabaskan fricative patterns: voicing
is reported as contextual, the distinction between fricative and
approximant is apparently only weakly functional, the languages
exhibit strident consonant harmony, and the back versus the front
fricatives differ in the amount of variability they reportedly
exhibit. The spectral patterns of Navajo fricatives (alveolar,
alveopalatal, lateral, and velar) will be examined for what they can
tell us about these patterns and their parameters of contrast.
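One simple quantitative handle on such fricative distinctions (a generic measure, not necessarily the one used in this presentation) is the spectral centroid: the amplitude-weighted mean frequency of the frication noise. The sketch below uses synthetic band-limited noise as stand-ins for a higher-frequency alveolar and a lower-frequency alveopalatal; no real Navajo tokens are involved.

```python
import numpy as np

def spectral_centroid(frame, fs):
    """Amplitude-weighted mean frequency of one noise frame."""
    spec = np.abs(np.fft.rfft(frame * np.hamming(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)
    return float(np.sum(freqs * spec) / np.sum(spec))

# Synthetic stand-ins: band-limited noise, not real fricative tokens
fs = 16000
rng = np.random.default_rng(1)
noise_spec = np.fft.rfft(rng.standard_normal(4096))
freqs = np.fft.rfftfreq(4096, 1.0 / fs)
s_like = np.fft.irfft(noise_spec * (freqs > 4000))                      # alveolar-ish
sh_like = np.fft.irfft(noise_spec * ((freqs > 1500) & (freqs < 4000)))  # alveopalatal-ish
```

A higher centroid for the alveolar-like noise than for the alveopalatal-like noise is the kind of spectral difference such an analysis can quantify even without extensive phonetic documentation of the language.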

Sharon Manuel, Stefanie Shattuck-Hufnagel, Janet Slifka (MIT) 
*Finding feature cues when segments appear to be absent in English*
In spontaneous speech, segments sometimes appear to be deleted, but in
fact leave clues as to their featural makeup in adjacent regions. This
presentation will illustrate a variety of cases where a separate
segment cannot be identified, but feature cues to that segment are
recoverable. Cases of this type include realizations of "in those"
where the interdental fricative assimilates to the nasal, but which
are not exactly the same as "in nose", and realizations of "support"
where schwa seems to delete from the first syllable, but which are
not identical to "sport".

Lisa Zsiga (Georgetown) and Ioana Chitoran (Dartmouth)
*Gestural overlap in English, Russian, and Georgian*
The degree of overlap between articulatory gestures is best
ascertained from movement-tracking data. The acoustic signal is a
less precise measure of coproduction because the landmarks in the
movement trajectories of each gesture - movement onset, target
achievement, target release - cannot be directly inferred from
it. This section will discuss how acoustic data can still be
interpreted effectively in spite of these limitations, to extract as
much information as possible about the degree of overlap between
consonantal gestures, and between consonantal and vocalic gestures.

Ian Maddieson (UC Berkeley)
*Sequencing of events in complex sounds* 
In addition to showing the acoustic pattern of complex sounds,
spectrograms can also frequently provide important insights into the
nature and sequencing of articulatory events which are involved. This
section will demonstrate some of what can and cannot be learned from
looking at spectrograms of various classes of complex sounds including
partially nasal, glottalized, and doubly-articulated consonants.

John Kingston (UMass Amherst)
*The acoustics of voice quality, secondary articulation, and
nasalization contrasts: Global rather than local spectral changes*
Many articulatory differences change the spectrum locally,
e.g. retracting the tongue blade from the alveolar ridge toward the
palate raises the second formant's frequency in adjacent vowels.
However, many other articulatory differences change the spectrum
globally, and are thus more difficult to diagnose. Three examples
(each involving a different articulator) will be discussed: (1) lax vs
tense and creaky vs breathy voice quality (medial compression of the
vocal folds), (2) uvularization vs pharyngealization as secondary
articulations on consonants (retraction of the tongue dorsum or root),
and (3) vowel nasalization (soft palate lowering).

Wrap up