LINGUIST List 13.179

Tue Jan 22 2002

Calls: Article on 'Inspiration', Computational Ling

Editor for this issue: Renee Galvis <>

As a matter of policy, LINGUIST discourages the use of abbreviations or acronyms in conference announcements unless they are explained in the text.


  1. Edwin Rutsch, Calls: Article - Inspiration in your language
  2. John Carroll, LREC2002 Workshop on Parsing Evaluation - 2nd CFP

Message 1: Calls: Article - Inspiration in your language

Date: Mon, 21 Jan 2002 22:53:49 -0500
From: Edwin Rutsch <edwin@HUMANITYQUEST.COM>
Subject: Calls: Article - Inspiration in your language

Call for an Article - Inspiration in your language of expertise.
--------------------------------------------------------------

I am looking for native-speaking language experts to contribute a
short 300-word article on the meaning of the word for "Inspiration"
in their language of expertise. This is for inclusion in an art book I
am writing entitled "The Spirit of Inspiration". I would like to have
at least 50 different languages represented in the book.

The article would cover the following aspects of the nature of
inspiration: the definition, the etymology, how inspiration is expressed
in the arts, and the author's personal insights. Please feel free to
forward this call to anyone you think may be interested.

I have created a detailed submission guideline with a sample article,
which you can view at this URL:

See the existing languages and contributors list at:

If you are interested in contributing an article send me an email at:

Thank You


Edwin Rutsch
(510) 528-9895

Message 2: LREC2002 Workshop on Parsing Evaluation - 2nd CFP

Date: Tue, 22 Jan 2002 09:37:45 +0000
From: John Carroll <>
Subject: LREC2002 Workshop on Parsing Evaluation - 2nd CFP

 Call for Papers

 -- Towards Improved Evaluation Measures for Parsing Systems --

 LREC 2002 Workshop
 2nd June
 Las Palmas, Canary Islands, Spain


The PARSEVAL metrics for evaluating the accuracy of parsing systems
have underpinned recent advances in stochastic parsing with grammars
learned from treebanks (most prominently the Penn Treebank of
English). However, a new generation of parsing systems is emerging
based on different underlying frameworks and covering other
languages. PARSEVAL is not appropriate for many of these approaches:
the NLP community therefore needs to come together and agree on a new
set of parser evaluation standards.


In line with increasing interest in fine-grained syntactic and
semantic representations, stochastic parsing is currently being
applied to several high level syntactic frameworks, such as
unification-based grammars, tree-adjoining grammars and combinatory
categorial grammars. A variety of different types of training data are
being used, including dependency annotations, phrase structure trees,
and unlabelled text. Other researchers are building parsing systems
using shallower frameworks, based for example on finite-state
transducers. Many of these novel parsing approaches are using
alternative evaluation measures -- based on dependencies, valencies,
or exact or selective category match -- since the PARSEVAL measures
(of bracketing match with respect to atomic-labelled phrase structure
trees) cannot be applied, or are uninformative.
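To make the contrast concrete, the PARSEVAL bracketing measures mentioned above can be sketched as labelled bracketing precision, recall and F1 over span sets. This is an illustrative sketch, not part of the CFP; the representation of a parse as a set of (label, start, end) tuples is an assumption, and real scorers also handle duplicate brackets and punctuation conventions.

```python
# Hedged sketch of PARSEVAL-style labelled bracketing evaluation.
# A parse is assumed to be a collection of (label, start, end) spans.

def parseval_scores(gold_brackets, test_brackets):
    """Return (precision, recall, f1) over labelled brackets."""
    gold = set(gold_brackets)
    test = set(test_brackets)
    matched = len(gold & test)  # brackets identical in both label and span
    precision = matched / len(test) if test else 0.0
    recall = matched / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy example over a 3-word sentence: one span has the wrong label,
# so it counts against both precision and recall.
gold = [("S", 0, 3), ("NP", 0, 2), ("VP", 2, 3)]
test = [("S", 0, 3), ("NP", 0, 2), ("NP", 2, 3)]
p, r, f = parseval_scores(gold, test)
```

The sketch also shows why such measures are uninformative for frameworks whose analyses are not atomic-labelled phrase structure trees: there is simply no span set to compare.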

The field is therefore confronted with a lack of common evaluation
metrics, and also of appropriate gold standard evaluation corpora in
languages other than English. We need a new and uniform scheme for
parser evaluation that covers both shallow and deep grammars, and
allows for comparison and benchmarking across different syntactic
frameworks and different language types.

A previous LREC-hosted workshop on parser evaluation in 1998 brought together a
number of researchers advocating parser evaluation based on
dependencies or grammatical relations as a viable alternative to the
PARSEVAL measures.

The aim of this workshop is to start an initiative by bringing
together four relevant parties:

 - researchers in symbolic and stochastic parsing
 - builders of annotated corpora
 - representatives from different syntactic frameworks 
 - groups with interests in and proposals for parser evaluation

The workshop will provide a forum for discussion with the aim of
defining a new parser evaluation metric; we also intend the workshop
to kick off a sustained collaborative effort into building or deriving
sufficiently large evaluation corpora, and possibly training corpora
appropriate to the new metric. To maintain the momentum of this
initiative, we will work towards setting up a parsing competition based
on the new standard evaluation corpora and metric.


The workshop organisers invite papers focussing on:

 - benchmarking the accuracy of individual parsing systems
 - parser evaluation
 - design of annotation schemes covering different languages and
 grammar frameworks
 - creation of high-quality evaluation corpora

Papers on the following topics will be particularly welcome: 

 - descriptions of experiments using alternative evaluation measures
 with existing (stochastic or symbolic) parsers, focussing on
 comparison and discussion of qualitative differences

 - methods for creation of evaluation (or training) corpora, allowing
 flexible adaptation to a new evaluation standard based on
 dependencies or grammatical relations

 - comparisons of existing or possible new schemes for dependency-based
 evaluation (differences, similarities, problems)
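For illustration, the dependency-based evaluation advocated above is often reduced to attachment accuracy: the proportion of words whose assigned head (and, in the labelled case, grammatical relation) matches the gold annotation. This sketch is not from the CFP; the per-word (head_index, relation) encoding is an assumed representation.

```python
# Hedged sketch of dependency-based evaluation as attachment accuracy.
# Each parse is assumed to be a list of (head_index, relation) pairs,
# one entry per word, with head index 0 denoting the root.

def attachment_scores(gold, test):
    """Return (unlabelled, labelled) attachment accuracy."""
    assert len(gold) == len(test), "parses must cover the same sentence"
    unlabelled = sum(g[0] == t[0] for g, t in zip(gold, test))
    labelled = sum(g == t for g, t in zip(gold, test))
    n = len(gold)
    return unlabelled / n, labelled / n

# Toy example over a 4-word sentence: one word gets the right head
# but the wrong relation, so only the labelled score is penalised.
gold = [(2, "det"), (3, "subj"), (0, "root"), (3, "obj")]
test = [(2, "det"), (3, "obj"), (0, "root"), (3, "obj")]
uas, las = attachment_scores(gold, test)
```

Because head-dependent pairs can be read off phrase structure trees, unification-based analyses, and shallow parses alike, a measure of this shape is framework-neutral in a way the bracketing measures are not.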


The one-day workshop will consist of (30-minute) paper presentations,
a panel session, and an extended open session at which important
results of the workshop will be summarised and discussed.

As a follow-up, we hope to arrange a half-day meeting outside the
workshop format to discuss concrete action plans, create working
groups, and plan future collaboration.


Organisers:

John Carroll University of Sussex, UK
Anette Frank DFKI GmbH, Saarbruecken, Germany
Dekang Lin University of Alberta, Canada
Detlef Prescher DFKI GmbH, Saarbruecken, Germany
Hans Uszkoreit DFKI GmbH and Saarland University, Saarbruecken, Germany


Programme committee:

Salah Ait-Mokhtar XRCE Grenoble
Gosse Bouma Rijksuniversiteit Groningen 
Thorsten Brants Xerox PARC
Ted Briscoe University of Cambridge
John Carroll University of Sussex
Jean-Pierre Chanod XRCE Grenoble
Michael Collins AT&T Labs-Research
Anette Frank DFKI Saarbruecken
Josef van Genabith Dublin City University
Gregory Grefenstette Clairvoyance, Pittsburgh
Julia Hockenmaier University of Edinburgh
Dekang Lin University of Alberta
Chris Manning Stanford University
Detlef Prescher DFKI Saarbruecken
Khalil Sima'an University of Amsterdam
Hans Uszkoreit DFKI Saarbruecken and Saarland University


Abstracts for workshop contributions should not exceed two A4 pages
(excluding references). An additional title page should state: the
title; author(s); affiliation(s); and contact author's e-mail address,
as well as postal address, telephone and fax numbers.

Submission is by email, preferably in Postscript or PDF format, to:

to arrive by 1st February 2002. Abstracts will be reviewed by at least
three members of the programme committee.

Formatting instructions for the final full version of papers will be
sent to authors after notification of acceptance.


Important dates:

 1 February 2002 deadline for receipt of abstracts
 22 February 2002 notification of acceptance
 12 April 2002 camera-ready final version for workshop proceedings

 2 June 2002 workshop 


The workshop will take place on 2nd June, following the main LREC 2002
Conference, in the Palacio de Congreso de Canarias, Las Palmas, Canary
Islands, Spain.


The registration fee for the workshop is:

 If you are also attending LREC: 90 EURO
 If you are not attending LREC: 140 EURO

All attendees will receive a copy of the workshop proceedings.
