LINGUIST List 32.750

Mon Mar 01 2021

Calls: Comp Ling/Netherlands

Editor for this issue: Lauren Perkins <>

Date: 28-Feb-2021
From: James Pustejovsky <>
Subject: Beyond Language: Multimodal Semantic Representations (MMSR 2021)

Full Title: Beyond Language: Multimodal Semantic Representations (MMSR 2021)
Short Title: MMSR 2021

Date: 14-Jun-2021 - 18-Jun-2021
Location: Groningen, Netherlands
Contact Person: James Pustejovsky
Meeting Email:
Web Site:

Linguistic Field(s): Computational Linguistics

Call Deadline: 19-Mar-2021

Meeting Description:

The workshop is a one-day event; the exact date will be determined within 14-18 June 2021.

The demand for more sophisticated natural human-computer and human-robot interactions is rapidly increasing as users become more accustomed to conversation-like interactions with AI and NLP systems. Such interactions require not only the robust recognition and generation of expressions through multiple modalities (language, gesture, vision, action, etc.), but also the encoding of situated meaning.

This workshop intends to bring together researchers who aim to capture elements of multimodal interaction such as language, gesture, gaze, and facial expression with formal semantic representations. We provide a space for both theoretical and practical discussion of how linguistic co-modalities support, inform, and align with the “meaning” found in the linguistic signal alone.

Call for Papers:

We solicit papers on multimodal semantic representation, including but not limited to the following topics:
- Examination and interpretation of co-gestural speech and co-speech gesture;
- Semantic frameworks for individual linguistic co-modalities (e.g. gaze, facial expression);
- Formal representation of situated conversation and embodiment;
- Design and annotation of multimodal meaning representation (including extensions of existing semantic frameworks);
- Challenges in cross-lingual or cross-cultural multimodal representation;
- Challenges in semantic parsing of multimodal representation;
- Challenges in aligning co-modalities in formal representation and/or NLP;
- Discussion of criteria for evaluation of multimodal semantics;
- Position papers on meaning, language, and multimodality;
- Simulated agents that embody multimodal representations of common ground.

Submission Information:
Two types of submissions are solicited: long papers and short papers. Long papers should describe original research and must not exceed 8 pages, excluding references. Short papers (typically system or project descriptions, or reports on ongoing research) must not exceed 4 pages, excluding references. Both types will be published in the workshop proceedings and in the ACL Anthology. Accepted papers will be allowed one additional page in the camera-ready version.

We strongly encourage students to submit to the workshop and will consider a student session depending on the number of submissions.

Papers should be formatted using the IWCS/ACL style files, available at:

Papers should be submitted in PDF format via the Softconf system at the following link:

Page Updated: 01-Mar-2021