

LINGUIST List 23.2627

Wed Jun 06 2012

Calls: Computational Linguistics/USA

Editor for this issue: Alison Zaharee <alison@linguistlist.org>


Date: 06-Jun-2012
From: Francesca Bonin <boninf@tcd.ie>
Subject: Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction

Full Title: Workshop on Multimodal Analyses Enabling Artificial Agents in Human-Machine Interaction
Short Title: MA3

Date: 15-Sep-2012 - 15-Sep-2012
Location: Santa Cruz, California, USA
Contact Person: Francesca Bonin
Meeting Email: boninf@tcd.ie
Web Site: http://fastnet.netsoc.ie/ma3_2012/Welcome.html

Linguistic Field(s): Computational Linguistics

Call Deadline: 02-Jul-2012

Meeting Description:

International Workshop on Multimodal Analyses for Human-Machine Interaction

The workshop will be held on September 15 at the 12th International Conference on Intelligent Virtual Agents, University of California, Santa Cruz (IVA 2012).

Workshop website: http://fastnet.netsoc.ie/ma3_2012/Welcome.html
Conference website: http://iva2012.soe.ucsc.edu/

Exploring how human beings interact with the world and with each other is of central importance for creating what is commonly called intelligent behaviour in Human-Machine Interaction. There is growing interest in the scientific community in new techniques for achieving more natural, human-like interaction with machines. Within the framework of intelligent virtual agents, it is important to make room for the socially intelligent virtual agent.

An appropriate system, whether a virtual agent or a companion-like device, has to analyse multiple sources of information in order to interact with humans in a proper and socially aware way. These modalities include, in general, speech and its content, mimicry, gesture, and biological signals; in particular, the prosodic and paralinguistic features of speech, facial mimicry, and heart rate.

The aim of this workshop is to discuss techniques for gathering and analysing information from multimodal signals, combining methods of analysis in order to understand the correlations between different sources of information (gesture, speech, motion, etc.), as well as how the environment and context influence participants' reactions.
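To make the intended kind of analysis concrete, here is a minimal sketch (not part of the call) of correlating two modalities frame by frame. The feature streams, the synthetic data, and the choice of Pearson correlation are all illustrative assumptions, not the workshop's prescribed method:

# Minimal sketch: correlating two hypothetical aligned per-frame feature
# streams, e.g. pitch (a prosodic feature) and gesture velocity. All data
# here are synthetic and for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_frames = 100

# Hypothetical features for one dialogue, one value per video frame.
pitch_hz = 120 + 30 * rng.random(n_frames)                              # prosodic stream
gesture_vel = 0.5 * (pitch_hz - 120) / 30 + 0.2 * rng.random(n_frames)  # gesture stream

# Pearson correlation between the two modalities.
r = np.corrcoef(pitch_hz, gesture_vel)[0, 1]
print(f"cross-modal correlation: r = {r:.2f}")

In practice one would of course use real annotated streams from a multimodal corpus, and might also examine time-lagged correlations, since gestures often precede or follow the speech they accompany.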

Agents and smart devices are already more than pure systems: they are becoming companions and a kind of friend. Human-Machine Interaction will thus no longer be fixed on solving a task; it will be more human-like, including elements of chat and spontaneous, unscripted gestures. The workshop will discuss how to combine these different signals in order to foster more human-like interactions with virtual agents.

Call for Papers:

Workshop topics include, but are not limited to:

Multimodal recognition and identification of user behaviour
Awareness of the user's/agent's situation based on multimodal data
Analysis of Human-Machine conversations
Comparison of Human-Human vs. Human-Machine conversations
Strategies of multimodal Human-Machine Interaction
Analyses of speech and prosody with respect to paralinguistic features
Mimicry and gesture analyses
Using multimodal data sets for Human-Machine Interaction
Annotation and processing of multimodal data sets
Developments and combinations of analysis methods
Applications of agents in Human-Machine Interaction

Author Instructions:

Prospective authors are invited to submit full papers (12 pages) or short papers (6 pages) in Springer Lecture Notes in Computer Science (LNCS) format.

To submit your paper, please go to https://www.softconf.com/c/iva2012/.

All submissions should be anonymous.

Submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Submissions will be judged on correctness, originality, technical strength, significance, relevance to the conference, and interest to the attendees.

Important Dates:

Submission deadline: July 2, 2012
Notification of acceptance: August 1, 2012
Camera-ready deadline: August 31, 2012
Workshop date: September 15, 2012

Organising Committee:

Ronald Böck, Cognitive Systems Group, Otto von Guericke University, Magdeburg, Germany
Francesca Bonin, PhD candidate, Trinity College, Dublin, Ireland
Nick Campbell, Stokes Professor, Speech and Communication Lab, Trinity College, Dublin, Ireland

Program Committee:

Christian Becker-Asano, U Freiburg, Germany
Laurence Devillers, U Paris-Sorbonne 4, France
Jens Edlund, KTH Stockholm, Sweden
Dirk Heylen, U Twente, Netherlands
Costanza Navarretta, U Copenhagen, Denmark
Catherine Pelachaud, TELECOM ParisTech, France
Dietmar Rösner, U Magdeburg, Germany
Stefan Scherer, USC, USA
Björn Schuller, TU Munich, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Thora Tenbrink, U Bremen, Germany
Jürgen Trouvain, U Saarland, Germany
Khiet Truong, U Twente, Netherlands
Michel Valstar, U Nottingham, UK
Andreas Wendemuth, U Magdeburg, Germany
Yorick Wilks, IHMC, USA

Contacts and Further Information:

boninf@scss.tcd.ie
ronald.boeck@ovgu.de

Workshop website: http://fastnet.netsoc.ie/ma3_2012/Welcome.html





