Full Title: DGfS Workshop: Interface Issues of Gestures and Verbal Semantics and Pragmatics
Dates: 12-Mar-2013 - 15-Mar-2013
Interface Issues of Gestures and Verbal Semantics and Pragmatics
Workshop at the 35th meeting of the German Linguistic Society (DGfS), Potsdam, Germany
March 12-15, 2013
As is widely accepted, gestures - in particular speech-accompanying iconic ones - can express semantic content. Speech and gesture are said to work together to convey a single thought (McNeill 1992, Kendon 2004), and the semantic content of gestures is intertwined with that of the speech signal. Gestures can be co-expressive (displaying the same semantic content as the speech signal) or complementary (expressing additional information).
An intriguing question that is not yet settled is how gesture meaning and speech meaning interact. Gestures are often interpretable only in combination with the accompanying speech signal (Kopp et al. 2004, Lascarides & Stone 2006). The strong interaction of the gesture channel and the speech channel has thus been taken to suggest that information from the two channels is mapped onto a single logical representation (e.g. in Rieser 2008, Kopp et al. 2004). It is, however, evident that the pieces of information from the different channels are of a different nature and that gesture information seems to be backgrounded in some sense or other (cf. Giorgolo & Needham 2011).
Furthermore, it is well known that gestures are also temporally aligned with the speech signal in a systematic way: the stroke of the gesture coincides with the main accent of the gesture-accompanying sentence (McNeill 1992). This fact has been interpreted as an indication that gestures (beats in particular) can take over information-structural tasks and serve to mark focus domains (Ebert, Evert & Wilmes 2011).
Linguistic Subfield: Pragmatics; Semantics