LINGUIST List 20.1995

Wed May 27 2009

Calls: Applied Linguistics, Computational Linguistics/China

Editor for this issue: Elyssa Winzeler <elyssalinguistlist.org>

LINGUIST is pleased to announce the launch of an exciting new feature: Easy Abstracts! Easy Abs is a free abstract submission and review facility designed to help conference organizers and reviewers accept and process abstracts online. Just go to: http://www.linguistlist.org/confcustom, and begin your conference customization process today! With Easy Abstracts, submission and review will be as easy as 1-2-3!
        1.    Franciska de Jong, Searching Spontaneous Conversational Speech 2009

Message 1: Searching Spontaneous Conversational Speech 2009
Date: 26-May-2009
From: Franciska de Jong <fdejongewi.utwente.nl>
Subject: Searching Spontaneous Conversational Speech 2009

Full Title: Searching Spontaneous Conversational Speech 2009
Short Title: SSCS 2009

Date: 23-Oct-2009 - 23-Oct-2009
Location: Beijing, China
Contact Person: Martha Larson
Web Site: http://ict.ewi.tudelft.nl/SSCS2009/

Linguistic Field(s): Applied Linguistics; Computational Linguistics

Call Deadline: 15-Jun-2009

Meeting Description:

SSCS 2009 is a workshop held in conjunction with ACM Multimedia 2009.

Multimedia content often contains spoken audio as a key component. Although
speech is generally acknowledged as the quintessential carrier of semantic
information, spoken audio remains underexploited by multimedia retrieval
systems. In particular, the potential of speech technology to improve
information access has not yet been successfully extended beyond multimedia
content containing scripted speech, such as broadcast news. The SSCS 2009
workshop is dedicated to fostering search research based on speech technology
as it expands into spoken content domains involving non-scripted, less highly
conventionalized, conversational speech characterized by wide variability of
speaking styles and recording conditions. Such domains include podcasts, video
diaries, lifelogs, meetings, call center recordings, social video networks, Web
TV, conversational broadcast, lectures, discussions, debates, interviews and
cultural heritage archives. This year we are setting a particular focus on the
user and the use of speech techniques and technology in real-life multimedia
access systems and have chosen the theme 'Speech technology in the multimedia
access framework.'

The development of robust, scalable, affordable approaches for accessing
multimedia collections with a spoken component requires the sustained
collaboration of researchers in the areas of speech recognition, audio
processing, multimedia analysis and information retrieval. Motivated by the aim
of providing a forum where these disciplines can engage in productive
interaction and exchange, Searching Spontaneous Conversational Speech (SSCS)
workshops were held in conjunction with SIGIR 2007 in Amsterdam and with SIGIR
2008 in Singapore. The SSCS workshop series continues with SSCS 2009 held in
conjunction with ACM Multimedia 2009 in Beijing.

SSCS 2009 Organizing Committee:
Martha Larson, Delft University of Technology, The Netherlands
Franciska de Jong, University of Twente, The Netherlands
Joachim Köhler, Fraunhofer IAIS, Germany
Roeland Ordelman, Sound & Vision and University of Twente, The Netherlands
Wessel Kraaij, TNO and Radboud University, The Netherlands

Call for Papers

The SSCS 2009 workshop will focus on addressing the research challenges for the
field of Spoken Document Retrieval that were identified during SSCS 2008:
Integration, Interface/Interaction, Scale/Scope, and Community.

We welcome contributions on a range of trans-disciplinary issues related to
these research challenges, including:

-Information retrieval techniques based on speech analysis (e.g., applied to
speech recognition lattices)
-Search effectiveness (e.g., evidence combination, query/document expansion)
-Self-improving systems (e.g., unsupervised adaptation, recursive metadata refinement)
-Exploitation of audio analysis (e.g., speaker emotional state, speaker characteristics, speaking style)
-Integration of higher-level semantics, including cross-modal concept detection
-Combination of indexing features from video, text and speech

-Surrogates for representation or browsing of spoken content
-Intelligent playback: exploiting semantics in the media player
-Relevance intervals: determining the boundaries of query-related media segments
-Cross-media linking and link visualization deploying speech transcripts

-Large-scale speech indexing approaches (e.g., collection size, search speed)
-Dealing with collections containing multiple languages
-Affordable, lightweight solutions for small collections, i.e., for the long tail

-Stakeholder participation in design and realization of real world applications
-Exploiting user contributions (e.g., tags, ratings, comments, corrections, usage information, community structure)

Contributions will be accepted for oral presentations (8-10 pages), poster
presentations (2 pages), demonstration descriptions (2 pages) and position
papers for selection of panel members (2 pages). Further information, including
submission guidelines, is available on the workshop website:
http://ict.ewi.tudelft.nl/SSCS2009/
Important Dates:
Monday, June 15, 2009 (Extended) Submission Deadline
Saturday, July 10, 2009 Author Notification
Friday, July 17, 2009 Camera Ready Deadline
Friday, October 23, 2009 Workshop in Beijing
