
Conference Information



Full Title: DGfS Workshop: Interface Issues of Gestures and Verbal Semantics and Pragmatics

      
Location: Potsdam, Germany
Dates: 12-Mar-2013 - 15-Mar-2013
Contact: Cornelia Ebert
Meeting Description: Interface Issues of Gestures and Verbal Semantics and Pragmatics
Workshop at the 35th meeting of the German Linguistic Society (DGfS), Potsdam, Germany
March 12-15, 2013

As is widely accepted, gestures - in particular speech-accompanying iconic ones - can express semantic content. Speech and gestures are said to work together to convey a single thought (McNeill 1992, Kendon 2004), and the semantic content of gestures is intertwined with that of the speech signal. Gestures can be co-expressive (displaying the same semantic content as the speech signal) or complementary (expressing additional information).

An intriguing question that is not yet settled is how gesture meaning and speech meaning interact. Gestures are often interpretable only in combination with the accompanying speech signal (Kopp et al. 2004, Lascarides & Stone 2006). The strong interaction of the gesture channel and the speech channel has thus been taken to indicate that information from the two channels is mapped onto a single logical representation (e.g. in Rieser 2008, Kopp et al. 2004). It is, however, evident that the pieces of information from the different channels are of a different nature and that gesture information seems to be backgrounded in some sense or other (cf. Giorgolo & Needham 2011).

Furthermore, it is well known that gestures are also temporally aligned with the speech signal in a specific way, i.e. the stroke of the gesture coincides with the main accent of the gesture-accompanying sentence (McNeill 1992). This has been interpreted as an indication that gestures (beats in particular) can take over information-structural tasks and serve to mark focus domains (Ebert, Evert & Wilmes 2011).
Linguistic Subfield: Pragmatics; Semantics
LL Issue: 23.3141

