LINGUIST List 23.680

Thu Feb 09 2012

Calls: Text/Corpus Linguistics/Turkey

Editor for this issue: Alison Zaharee <alison@linguistlist.org>

LINGUIST is pleased to announce the launch of an exciting new feature: Easy Abstracts! Easy Abs is a free abstract submission and review facility designed to help conference organizers and reviewers accept and process abstracts online. Just go to: http://www.linguistlist.org/confcustom, and begin your conference customization process today! With Easy Abstracts, submission and review will be as easy as 1-2-3!
        1. Jens Edlund, LREC Workshop: Multimodal Corpora

Message 1: LREC Workshop: Multimodal Corpora
Date: 08-Feb-2012
From: Jens Edlund <edlund@speech.kth.se>
Subject: LREC Workshop: Multimodal Corpora

Full Title: LREC Workshop: Multimodal Corpora
Short Title: MultimodalCorpora

Date: 22-May-2012 - 22-May-2012
Location: Istanbul, Turkey
Contact Person: Jens Edlund
Web Site: http://www.multimodal-corpora.org/

Linguistic Field(s): Text/Corpus Linguistics

Call Deadline: 24-Feb-2012

Meeting Description:

LREC 2012 Workshop
Multimodal Corpora: How Should Multimodal Corpora Deal with the Situation?
22 May 2012
Istanbul, Turkey

Currently, the creation of a multimodal corpus involves the recording, annotation and analysis of a selection of many possible communication modalities, such as speech, hand gesture, facial expression, and body posture. At the same time, a growing number of research areas are moving from focused single-modality research to full-fledged multimodality research, and multimodal corpora are becoming a core research asset that provides an opportunity for interdisciplinary exchange of ideas, concepts and data. The growing interest in multimodal communication and multimodal corpora is evidenced by European Networks of Excellence and integrated projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet; by the success of recent conferences and workshops dedicated to multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, Nordic Symposium on Multimodal Communication, Embodied Language Processing); and by the creation of the Journal of Multimodal User Interfaces. Together, these testify to the general need for data on multimodal behaviours.

In 2012, the 8th Workshop on Multimodal Corpora will again be collocated with LREC. This year, LREC has selected Speech and Multimodal Resources as its special topic, underscoring the significance of the workshop's general scope. The fact that the main conference's special topic largely covers the broad scope of the workshop gives us a unique opportunity to step outside the usual boundaries and look further into the future.

The workshop follows similar events held at LREC 2000, 2002, 2004, 2006, 2008 and 2010, and at ICMI 2011. All workshops are documented at http://www.multimodal-corpora.org and are complemented by a special issue of the Journal of Language Resources and Evaluation published in 2008 and a state-of-the-art book published by Springer in 2009.

The workshop will consist of a morning session and an afternoon session. There will be time for collective discussions.

Call for Papers:

As always, we aim for a wide cross-section of the field, with contributions ranging from collection efforts, coding, validation and analysis methods to tools and applications of multimodal corpora. This year, however, we also want to look ahead and emphasize that a growing segment of research views spoken language as situated action, in which linguistic and non-linguistic actions are intertwined with the dynamic conditions of the situation and the place in which those actions occur. In spite of this, most corpora capture little more than the linguistic and meta-linguistic actions themselves, and contain little or no information about the situation in which they take place.

For this reason, we encourage contributions that raise the question of what should be added to future multimodal corpora, with possibilities ranging from simple dynamic information such as background noise, room temperature, light conditions and room dimensions to more complex models of room contents, external events, scents, or cognitive load modelling including physiological data such as breathing or pulse. We hope that, with your help, the workshop will serve to examine the way language is conceived in corpus creation and to spark a discussion of its boundaries and of how these should be accounted for in annotation and interpretation.

The LREC 2012 workshop on multimodal corpora will feature a special session on the collection, annotation and analysis of corpora of situated interaction.

Other topics to be addressed include, but are not limited to:

- Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar interaction, human-robot interaction, etc.) and descriptions of existing multimodal resources
- Relations between modalities in natural (human) interaction and in human-computer interaction
- Multimodal interaction in specific scenarios, e.g. group interaction in meetings
- Coding schemes for the annotation of multimodal corpora
- Evaluation and validation of multimodal annotations
- Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
- Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
- Collaborative coding
- Metadata descriptions of multimodal corpora
- Automatic annotation, based e.g. on motion capture or image processing, and the integration with manual annotations
- Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either in input (Virtual Reality, motion capture, etc.) or in output (virtual characters)
- Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)
- Machine learning applied to multimodal data
- Multimodal dialogue modelling

Important Dates:

Deadline for paper submission (complete paper): 24 February 2012
Notification of acceptance: 19 March 2012
Final version of accepted paper: 26 March 2012
Final program and proceedings: 20 April 2012
Workshop: 22 May 2012


The workshop will consist primarily of paper presentations and discussion/working sessions. Submissions should be 4 pages long, must be in English, and follow the submission guidelines at:


Submissions should be made at:


Demonstrations of multimodal corpora and related tools are encouraged as well (a demonstration outline of 2 pages can be submitted).

LREC2012 Map of Language Resources, Technologies and Evaluation:

When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that either have been used for the work described in the paper or are a new result of the research (as a contribution to building the LREC2012 Map).

Organizing Committee:

Jens Edlund, KTH Royal Institute of Technology, Sweden
Dirk Heylen, University of Twente, The Netherlands
Patrizia Paggio, University of Copenhagen, Denmark/University of Malta, Malta


Page Updated: 09-Feb-2012

Supported in part by the National Science Foundation
While the LINGUIST List makes every effort to ensure the linguistic relevance of sites listed on its pages, it cannot vouch for their contents.