LINGUIST List 12.874

Wed Mar 28 2001

Calls: Language/Dialogue Systems, Collocation

Editor for this issue: Jody Huellmantel <>

As a matter of policy, LINGUIST discourages the use of abbreviations or acronyms in conference announcements unless they are explained in the text.


  1. Priscilla Rasmussen, Evaluation for Language & Dialogue Systems (ACL-2001)
  2. Priscilla Rasmussen, Workshop on Collocation (ACL-2001)

Message 1: Evaluation for Language & Dialogue Systems (ACL-2001)

Date: Tue, 27 Mar 2001 16:04:07 EST
From: Priscilla Rasmussen <>
Subject: Evaluation for Language & Dialogue Systems (ACL-2001)

Call for Papers

Workshop on Evaluation for Language and Dialogue Systems
Toulouse, France
July 6-7, 2001


The aim of this two day workshop is to identify and to synthesize current
needs for language-technology evaluation.

The first day of the workshop will focus on one of the most challenging
current issues in language engineering: the evaluation of dialogue systems
and models. The second day will extend the discussion to address the problem
of evaluation in language engineering more broadly and on a more theoretical level.

The space of possible dialogues is enormous, even for limited domains like
travel information servers. The generalization of evaluation methodologies
across different application domains and languages is an open problem.
Review of published evaluations of dialogue models and systems suggests that
usability techniques are the standard method: dialogue-based systems are
often evaluated in terms of standard, objective usability metrics, such as
task-completion time and number of user actions. Researchers have proposed
and debated theory-based methods for modifying and testing the underlying
dialogue model, and more precise, empirical methods for evaluating the
effectiveness of dialogue models have been suggested, but usability testing
remains the most widely used method of evaluation. For task-based
interaction, typical measures of effectiveness are time-to-completion and
task outcome, but evaluation should focus on user satisfaction rather than
on arbitrary effectiveness measurements. Indeed, the problems faced in
current approaches to measuring the effectiveness of dialogue models and
systems include:

 o Direct measures are unhelpful because efficient performance
 on the nominal task may not represent the most effective
 interaction.
 o Indirect measures usually rely on judgment and are vulnerable
 to weak relationships between the inputs and outputs.
 o Subjective measures are unreliable and domain-specific.

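The objective usability metrics named above (task-completion time and
number of user actions) are straightforward to compute once a dialogue is
logged. The sketch below uses an invented log format purely for
illustration; it is not drawn from any system discussed in this call:

```python
from datetime import datetime

# Hypothetical dialogue log: (timestamp, actor, utterance) triples.
# The format is invented for illustration only.
log = [
    ("2001-07-06 10:00:00", "user",   "I need a flight to Toulouse"),
    ("2001-07-06 10:00:05", "system", "On what date?"),
    ("2001-07-06 10:00:12", "user",   "July 6th"),
    ("2001-07-06 10:00:20", "system", "Booked: Paris-Toulouse, July 6th."),
]

def usability_metrics(log):
    """Return (task-completion time in seconds, number of user actions)."""
    fmt = "%Y-%m-%d %H:%M:%S"
    start = datetime.strptime(log[0][0], fmt)
    end = datetime.strptime(log[-1][0], fmt)
    completion_seconds = (end - start).total_seconds()
    user_actions = sum(1 for _, actor, _ in log if actor == "user")
    return completion_seconds, user_actions

print(usability_metrics(log))  # → (20.0, 2)
```

The ease of computing such numbers is precisely the point of the list
above: they are cheap to obtain, yet may say little about the quality of
the underlying dialogue model.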
For its first day, the workshop organizers solicit papers on these issues,
with particular emphasis on methods that go beyond usability testing to
address the underlying dialogue model. Representative questions to be
addressed include:

 o How do we deal with the combinatorial explosion
 of dialogue states?
 o How can satisfaction be measured with respect to
 underlying dialogue models?
 o Are there useful direct measures of dialogue properties
 that do not depend on task efficiency?
 o What is the role of agent-based simulation in
 evaluation of dialogue models?

Of course, the problems faced in evaluating dialogue and system models are
found in other domains of language engineering, even for non-interactive
processes such as part-of-speech tagging, parsing, semantic disambiguation,
information extraction, speech transcription, and audio document indexing. So
the issue of evaluation can be viewed at a more generic level, raising
fundamental, theoretical questions such as:

 o What are the interest and benefits of evaluation
 for language engineering?
 o Do we really need these specific methodologies,
 since a form of evaluation should always be present
 in any scientific investigation?
 o If evaluation is needed in language engineering, is
 it the case for all domains?
 o What form should it take? Technology evaluation
 (task-oriented, in a laboratory environment) or
 field/user evaluation (complete systems in real-life
 conditions)?

We have seen that the evaluation of dialogue models is still an unsolved
problem, but for domains where metrics already exist, are they
satisfactory and sufficient? How can we take into account, or abstract
away from, the subjective factor introduced by human operators in the
process? Do similarity measures and standards offer appropriate answers to
this problem? Most efforts focus on evaluating processes, but what about
the evaluation of language resources?

For the second day, the workshop organizers solicit papers on these
issues, with the intent to address the problem of evaluation both from a
broader perspective (including novel application domains for evaluation,
new metrics for known tasks, and resource evaluation) and from a more
theoretical point of view (including a formal theory of evaluation and the
infrastructural needs of language engineering).

NOTE: People who would like to submit a paper on lexical semantic
disambiguation evaluation should consider the parallel workshop, on July
5-6, for the closure of the SENSEVAL-2 evaluation campaign.

- -----------------------------------------------------------


The organization of each of the two days of the workshop will reflect the
workshop's two main themes. Each day will begin with a session of
presentations of selected papers and follow with panel discussions to
synthesize and develop possible methodologies from additional selected
workshop papers.


The workshop seeks participation from people involved or interested in the
problem of evaluation in language processing and the research and industrial
communities that study and implement dialogue models for natural-language
interaction systems.

The first part of the workshop will specifically draw on the
natural-language interaction community, such as the one developing at the
confluence of SIGdial and SIGCHI, which will find in this workshop an
atmosphere more flavored by computational-linguistics issues (see, for
example, the First SIGdial Workshop on Discourse and Dialogue).

The second part of the workshop is intended to provide a forum for a
broader audience, more in the spirit of the one that attended the
LREC'2000 Satellite Workshop on Evaluation, offering an opportunity to
people involved in language-engineering evaluation (e.g., the CLASS
audience) in the context of national or transnational projects or
programs, both in Europe and abroad.

- -----------------------------------------------------------


Paper submissions should follow the two-column format of ACL proceedings and
should not exceed eight (8) pages, including references. We strongly
recommend the use of ACL LaTeX style files or Microsoft Word Style files
tailored for this year's conference. They are available from the ACL-2001
program committee Web site at

Papers should be submitted electronically, as a LaTeX, Word or PDF file,
to either:

Patrick Paroubek,
Karen Ward,

- -----------------------------------------------------------


Deadline for workshop paper submissions: April 6, 2001
Deadline for notification of workshop paper acceptance: April 27, 2001
Deadline for camera-ready workshop papers: May 16, 2001
Workshop date: July 6-7, 2001

- -----------------------------------------------------------


David G. Novick, UTEP

Joseph Mariani, Limsi - CNRS

Candy Kamm, AT&T Labs

Patrick Paroubek, Limsi - CNRS

Nils Dahlbäck, Linköping University

Frankie James, NASA Ames Research Center

Karen Ward, UTEP,

- -----------------------------------------------------------


David G. Novick
Joseph Mariani
Candy Kamm
Patrick Paroubek
Nils Dahlbäck
Frankie James
Karen Ward
Christian Jacquemin
Niels Ole Bernsen
Stephane Chaudiron
Khalid Choukri
Martin Rajman
Robert Gaizauskas
Donna Harman
Lynette Hirschman (tentative)
David Pallett (tentative)
Carol Peters (tentative)
Jose Pardo (tentative)
Herman Steeneken (tentative)
Oliviero Stock (tentative)
Saïd Tazi
Hans Uszkoreit (tentative)

- -----------------------------------------------------------


 ACL 2001

We also anticipate co-sponsorship from SIGdial.

- -----------------------------------------------------------


Additional information on the workshop, including accepted papers and the
workshop schedule, will be made available as needed at

Message 2: Workshop on Collocation (ACL-2001)

Date: Tue, 27 Mar 2001 16:08:06 EST
From: Priscilla Rasmussen <>
Subject: Workshop on Collocation (ACL-2001)

*** First Call for Papers***

WORKSHOP ON COLLOCATION: Computational Extraction, Analysis and Exploitation

ACL'2001 Conference
Toulouse, France
July 7th, 2001

We invite papers on topics relating to the theme of collocation, and
more particularly its computational extraction, analysis and
exploitation. This workshop follows the French ATALA workshop on
collocation, which took place in Paris, France, in January 2001, and
seeks to go further by exploring the wider perspective of
computational linguistics.

The term "collocation" was introduced in the nineteen thirties by
J. R. Firth, founder member of the British Contextualist school, to
characterise certain linguistic phenomena of cooccurrence that stem
principally from the linguistic competence of native speakers (Firth
1957). By its very nature collocation remains a relatively fuzzy
concept; as a consequence, traditional grammarians and semanticists
have tended to ignore it, with the exception of some lexical
semanticists such as Cruse (1986). The study of collocation is above
all a practical one, aimed at assisting language learners and
translators in their tasks.

Essentially idiomatic in nature, collocation defies rigid
formalisation, which explains the divide between those seeking a
descriptive, contextualised view of linguistic phenomena and those who
seek formalised applications for translation, lexicography or
computational purposes. This has led to a variety of approaches based
around a general core meaning for the term.
For several years, NLP has been concerned with collocation largely
through the following fields:

 Formalisation through specialised formalisms for different NLP
 tasks: dictionary formalisms such as lexical functions; HPSG, LFG,
 TAG and other formalisms for analysis or generation.

 Extraction from monolingual or bilingual texts or dictionaries
 using either raw statistics or statistics combined with
 linguistic information such as part of speech or grammar.

 Exploitation through specific NLP systems dedicated to
 second-language learning or translation, or for NLP tasks such as
 information retrieval or thematic structuring.
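As a concrete illustration of the "raw statistics" route to extraction
mentioned above, a minimal sketch might score adjacent word pairs by
pointwise mutual information (PMI). The toy corpus and the frequency
cutoff below are invented for illustration only:

```python
import math
from collections import Counter

# Toy corpus; in practice this would be a large monolingual text.
corpus = ("strong tea is strong and strong tea pleases "
          "powerful computers run strong tea rooms").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
n = len(corpus)

def pmi(w1, w2):
    """Pointwise mutual information of an adjacent pair, from raw counts."""
    p_pair = bigrams[(w1, w2)] / (n - 1)
    return math.log2(p_pair / ((unigrams[w1] / n) * (unigrams[w2] / n)))

# A frequency cutoff guards against PMI's well-known bias toward rare
# pairs; only pairs seen at least twice are kept as candidates.
candidates = {pair: pmi(*pair) for pair, c in bigrams.items() if c >= 2}
print(candidates)  # only ("strong", "tea") survives the cutoff
```

Combining such scores with part-of-speech filters would correspond to the
"statistics combined with linguistic information" route also mentioned
above.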

This workshop aims to gauge the extent to which the role of
collocation as a phenomenon in applied linguistics is now being taken
into account in formal linguistics and NLP. It addresses the following
topics (among others):

 Formal description of collocation through existing or dedicated
 specialised formalisms

 New methods for the identification of collocations, including
 statistical approaches as well as more linguistically oriented
 ones.
 NLP systems dedicated to collocation. 

 Exploitation of collocations for other NLP tasks through
 monolingual or multilingual environments.

This workshop addresses researchers in all fields of theoretical and
applied computational linguistics, most particularly those working in
automatic and machine-assisted translation, dictionary building and
computer-assisted language teaching, as well as those concerned with
information retrieval and text mining.


 Béatrice Daille, IRIN - University of Nantes, France - 
 Geoffrey Williams CRELLIC - University of Bretagne-Sud, France - 


 Jeremy Clear, Honorary Research Fellow, University of Birmingham 
 Pernilla Danielsson, TELRI 
 Chris Gledhill, University of St Andrews 
 Sylvain Kahane, LaTTiCe/TALaNa 
 Marie-Claude L'Homme, University of Montreal 
 Julia Pajzs, Hungarian Academy of Sciences 
 Antoinette Renouf, University of Liverpool 
 Alain Polguère, OLST - University of Montreal 
 Laurent Romary, LORIA 
 Dan Tufis, Romanian Academy - RACAI 
 Jean Véronis, University of Provence 
 Leo Wanner, University of Stuttgart 


Workshop paper submissions: April 8, 2001
Notification of acceptance: April 30, 2001
Deadline for camera-ready papers: May 13, 2001

Workshop date: July 7th, 2001


Submissions must be in English, no more than 8 pages long, and in the
two-column format prescribed by ACL'2001. Please see the conference
guidelines for details; however, please include the authors' names rather
than a paper ID, since reviewing will not be blind. Submissions should be
sent electronically, in Word, PDF, or PostScript format (only), no later
than April 8, 2001, to:
Béatrice Daille