LINGUIST List 21.2424

Tue Jun 01 2010

Calls: Applied Ling/Italy

Editor for this issue: Di Wdzenczny <di@linguistlist.org>


LINGUIST is pleased to announce the launch of an exciting new feature: Easy Abstracts! Easy Abs is a free abstract submission and review facility designed to help conference organizers and reviewers accept and process abstracts online. Just go to: http://www.linguistlist.org/confcustom, and begin your conference customization process today! With Easy Abstracts, submission and review will be as easy as 1-2-3!
Directory
        1.    Franciska de Jong, Searching Spontaneous Conversational Speech

Message 1: Searching Spontaneous Conversational Speech
Date: 01-Jun-2010
From: Franciska de Jong <fdejong@ewi.utwente.nl>
Subject: Searching Spontaneous Conversational Speech

Full Title: Searching Spontaneous Conversational Speech
Short Title: SSCS 2010

Date: 29-Oct-2010 - 29-Oct-2010
Location: Florence, Italy
Contact Person: Martha Larson
Meeting Email: m.a.larson@tudelft.nl
Web Site: http://www.searchingspeech.org/

Linguistic Field(s): Applied Linguistics

Other Specialty: Media Linguistics

Call Deadline: 14-Jun-2010

Meeting Description:

SSCS2010 is organized in conjunction with ACM Multimedia 2010.

The SSCS 2010 workshop is devoted to the presentation and discussion of
recent research on spoken content retrieval and on multimedia search that
makes use of automatic speech recognition technology.

Spoken audio is a valuable source of semantic information, and speech
analysis techniques, such as speech recognition, hold great potential to
improve information retrieval and multimedia search. Nonetheless, speech
technology remains underexploited by multimedia systems, in particular by
those providing access to multimedia content containing spoken audio.
Early success in the area of broadcast news retrieval has yet to be
extended to application scenarios in which the spoken audio is unscripted,
unplanned and highly variable with respect to speaker and style
characteristics. The SSCS 2010 workshop is concerned with a wide variety
of challenging spoken audio domains, including: lectures, meetings,
interviews, debates, conversational broadcast (e.g., talk shows), podcasts,
call center recordings, cultural heritage archives, social video on the Web
and spoken natural language queries. As speech steadily moves closer to
rivaling text as a medium for the access and storage of information, the
need for technologies that can effectively make use of spontaneous
conversational speech to support search becomes more pressing.

In order to move the use of speech and spoken content in retrieval
applications and multimedia systems beyond the current state of the art,
sustained collaboration of researchers in the areas of speech recognition,
audio processing, multimedia analysis and information retrieval is
necessary. Motivated by the aim of providing a forum where these
disciplines can engage in productive interaction and exchange, Searching
Spontaneous Conversational Speech (SSCS) workshops were held in
conjunction with SIGIR 2007, SIGIR 2008 and ACM Multimedia 2009. The
SSCS workshop series continues at ACM Multimedia 2010 with a focus on
research that strives to move retrieval systems beyond conventional queries
and beyond the indexing techniques used in traditional mono-modal settings
or text-based applications.

Call For Papers

We welcome contributions on a range of trans-disciplinary research issues
related to these research challenges, including:

- Information Retrieval techniques in the speech domain (e.g., applied to
speech recognition lattices)
- Multimodal search techniques exploiting speech transcripts
(audio/speech/video fusion techniques including re-ranking)
- Search effectiveness (e.g., evidence combination, query/document
expansion)
- Exploitation of audio analysis (e.g., speaker's emotional state, speaker
characteristics, speaking style)
- Integration of higher level semantics, including topic segmentation and
cross-modal concept detection
- Spoken natural language queries, Spoken Web
- Large-scale speech indexing approaches (e.g., collection size, search
speed)
- Multilingual settings (e.g., multilingual collections, cross-language access)
- Advanced interfaces for results display and playback of multimedia with a
speech track
- Exploiting user contributed information, including tags, rating and user
community structure
- Affordable, light-weight solutions for small collections, i.e., for the long
tail
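To make the core task behind several of these topics concrete, here is a toy
sketch (not from the call; all recording names, timestamps and words are
invented for illustration) of the simplest form of spoken content retrieval:
an inverted index built over time-stamped ASR transcript output, so that a
query term returns playback entry points into the audio.

```python
from collections import defaultdict

# Hypothetical ASR output: lists of (start_time_in_seconds, word) pairs,
# one list per recording. Real systems would index lattices or n-best
# hypotheses rather than a single 1-best transcript.
transcripts = {
    "meeting_01": [(0.0, "welcome"), (1.2, "to"), (1.5, "the"), (1.8, "workshop")],
    "podcast_07": [(3.0, "speech"), (3.6, "retrieval"), (4.1, "workshop")],
}

# Inverted index: word -> list of (recording_id, timestamp) postings.
index = defaultdict(list)
for rec_id, words in transcripts.items():
    for ts, word in words:
        index[word].append((rec_id, ts))

def search(term):
    """Return (recording, timestamp) playback entry points for a query term."""
    return index.get(term, [])

print(search("workshop"))   # -> [('meeting_01', 1.8), ('podcast_07', 4.1)]
print(search("debate"))     # -> []
```

The timestamps are what distinguish this from ordinary text retrieval: a hit
can jump the user directly to the relevant point in the audio rather than to
the start of a document.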

Contributions for oral presentations (short papers of 4 pages or long papers
of 6 pages) and demonstration papers (4 pages) will be accepted. The
submission deadline is 14 June 2010 (extended deadline). For further
information see the website: http://www.searchingspeech.org/

At this time, we are also pre-announcing a special issue of ACM
Transactions on Information Systems on the topic of searching spontaneous
conversational speech. The special issue is based on the SSCS workshop
series, but will involve a separate call for papers. We will especially
encourage the authors of the best papers from SSCS 2010 to submit to the
special issue call.

SSCS 2010 Organizers
Martha Larson, Delft University of Technology, Netherlands
Roeland Ordelman, Sound & Vision and University of Twente, Netherlands
Florian Metze, Carnegie Mellon University, USA
Franciska de Jong, University of Twente, Netherlands
Wessel Kraaij, TNO and Radboud University, Netherlands



Page Updated: 01-Jun-2010

Supported in part by the National Science Foundation