LINGUIST List 30.4973

Tue Dec 31 2019

Calls: Computational Linguistics, Lexicography, Semantics/France

Editor for this issue: Everett Green <everett@linguistlist.org>



Date: 19-Dec-2019
From: Itziar Gonzalez-Dios <itziar.gonzalezd@ehu.eus>
Subject: Multimodal Wordnets Workshop LREC 2020

Full Title: Multimodal Wordnets Workshop LREC 2020

Date: 11-May-2020 - 11-May-2020
Location: Marseille, France
Contact Person: Itziar Gonzalez-Dios
Meeting Email: itziar.gonzalezd@ehu.eus
Web Site: http://hitz.eus/multimodalwordnets2020/

Linguistic Field(s): Computational Linguistics; Lexicography; Semantics

Call Deadline: 14-Feb-2020

Meeting Description:

Nowadays, data can be found in many different formats, and multimodal approaches are gaining attention. Many research areas are moving from a single modality to full-fledged multimodal research, e.g. multimodal corpora and multimodal lexicons. For instance, efforts are being made to integrate images, sign languages, sounds, etc. into existing wordnets. As the exchange of information among modalities can be crucial for lexical databases, we want to address this interdisciplinary research area in the first Workshop on “Multimodal Wordnets”, co-located with LREC 2020 and organized by the Global Wordnet Association.

Call for Papers:

Multimodal Wordnets Workshop LREC 2020

Venue: Palais du Pharo, Marseille, France; LREC 2020
Website: http://hitz.eus/multimodalwordnets2020/
Submission page: https://www.softconf.com/lrec2020/MMW2020/
Hashtag: #mmwn20


Topics of Interest:

The workshop aims to study the interaction and cross-fertilization between wordnets and existing multimodal resources. We invite submissions with original contributions addressing, but not limited to, the topics listed below.

- What are the benefits/drawbacks of multimodal wordnets?  How can wordnets help in the transmission and characterization of multimedia information?

- To what extent is it possible to create wordnets in other modalities?

- Which new multimodal initiatives and projects are being carried out involving distinct modalities (written, spoken, audio-visual, signs, pictograms, emojis, geographical and spatio-temporal data...)  and knowledge representations (wordnets, lexicons, ontologies, terminologies, dictionaries, corpora, wikipedias, distributional representations, cultural artifacts, books...)?

- What are, or could be, the practical applications of multimodal wordnets? How can we exploit existing multimodal wordnets, such as Visual Genome, ConceptNet, ImageNet, Imagact, etc.? Possible directions include sense disambiguation on corpora and images, spatial role labeling, multimodal knowledge acquisition, commonsense reasoning and inference, distributed concept representation, and the integration of distributional (corpus-based) and knowledge-based embeddings.

- Which approaches are being developed to create these multimodal resources?  How can they be best represented?

- How can existing resources be mapped automatically? How can we deal with similarity and relatedness across modalities? How can we deal with specificity? Images, sounds, smells, touch, and video are all infinitely specific, but words are not.

- What is the added value of wordnet hierarchies to other modalities? What is the role of multimodal wordnets? What is the expected format of the resources? Which standards should be adopted or developed? How can algorithms be fed with multimodal wordnets, and how can their output feed back into the wordnets?

- Which ethical policies should be followed? (See, for instance, https://www.excavating.ai/)


Submission Details:

Submissions will fall into one of the following categories (page limits exclude references):
Long papers: 8 pages max; 30-minute presentation
Short papers: 5 pages max; 15-minute presentation
Poster presentations

Papers must be compliant with the stylesheet adopted for the main conference Proceedings: https://lrec2020.lrec-conf.org/en/submission2020/authors-kit/

Follow this link (https://www.softconf.com/lrec2020/MMW2020/) in START Conference Manager to submit your paper.

Identify, Describe and Share your LRs!



