LINGUIST List 11.1562

Sun Jul 16 2000

Calls: MT Evaluation, Comp Ling/Text Summarization

Editor for this issue: Jody Huellmantel <jody@linguistlist.org>


As a matter of policy, LINGUIST discourages the use of abbreviations or acronyms in conference announcements unless they are explained in the text.

Directory

  1. Reeder, Florence M., Machine Translation Evaluation Workshop
  2. Dragomir Radev, Special issue of Computational Linguistics on Text Summarization

Message 1: Machine Translation Evaluation Workshop

Date: Fri, 14 Jul 2000 15:17:30 -0400
From: Reeder, Florence M. <freeder@mitre.org>
Subject: Machine Translation Evaluation Workshop


Machine Translation Evaluation Workshop in conjunction with AMTA-2000
October 10, 2000
Mision del Sol, Cuernavaca, Mexico

MT EVALUATION WORKSHOP: Hands-On Evaluation

Motivation: Arguments about the evaluation of machine translation (MT)
are even more plentiful than systems to accomplish MT. In an effort to
drive the evaluation process to the next level and to unify past work,
this workshop will focus on a challenge and a framework in which the
challenge can fit. The challenge is Hands-On Evaluation. The framework
is being developed by the ISLE MT Evaluation effort. Both will be
discussed throughout the workshop.

The first part of the workshop will focus on real-world evaluation and
is aimed at both developers and users. To facilitate common ground for
discussion, participants may, if they wish, be given a) sample online
data to be translated; b) a minimal task to accomplish with this data;
and c) existing tools for processing this data. With these three items,
participants are expected to address issues in the evaluation of
machine translation. A domain of particular interest is evaluation of
Arabic data, Arabic tools, and filtering and text mining applications,
although participants are required only to evaluate using real-world
data and actual systems. This common setting should yield insights into
the evaluation process and useful metrics for driving development. We
also hope that the common challenge will motivate interesting
discussion. As part of this, we expect to set up a web page to host
previous work in evaluation; the URL will be released when the page is
ready.

The second part of the workshop will concentrate on the ISLE MT 
Evaluation effort, funded by NSF and the EU, to create a general 
framework of characteristics in terms of which MT evaluations, past and 
future, can be described and classified. The framework, whose 
antecedents are the JEIDA and EAGLES reports, consists of two taxonomies 
of increasingly specific features, with associated measures and pointers 
to systems. The discussion will review the current state of the 
classification effort, critique the framework in light of the hands-on 
evaluation performed earlier in the workshop, and suggest additional 
systems and measures to be considered. 
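
As a rough, purely illustrative sketch of how such a framework might be
represented in code, the Python fragment below models increasingly
specific features as a tree, with measures and system pointers attached
to the nodes. All feature names and measures here are invented for
illustration; they are not the actual ISLE taxonomy.

# Illustrative sketch only: a tree of increasingly specific evaluation
# features, with associated measures and pointers to systems. The
# feature names and measures below are invented, not the ISLE schema.

class Feature:
    def __init__(self, name, measures=None, systems=None, children=None):
        self.name = name                  # e.g. "fluency"
        self.measures = measures or []    # how the feature is scored
        self.systems = systems or []      # pointers to evaluated systems
        self.children = children or []    # more specific sub-features

    def walk(self, depth=0):
        """Print the taxonomy, most general features first."""
        print("  " * depth + self.name,
              "| measures:", ", ".join(self.measures) or "-")
        for child in self.children:
            child.walk(depth + 1)

# A two-level fragment: general characteristics refined into
# sub-features with example measures.
taxonomy = Feature("translation quality", children=[
    Feature("fluency", children=[
        Feature("grammaticality", measures=["errors per sentence"]),
        Feature("style", measures=["rater judgment, 1-5 scale"]),
    ]),
    Feature("adequacy", children=[
        Feature("content preservation", measures=["% of source facts kept"]),
    ]),
])

taxonomy.walk()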

Questions and Issues: The questions and issues to be addressed are
diverse. The constants are the situation: the available systems, the
available data, and the sample tasks. This should, we hope, eliminate
some of the variables of evaluation. All papers on evaluation and
evaluation issues will be considered, but preference will be given to
papers following the framework. The following questions suggest
possible evaluation threads within the framework.
	- What kinds of metrics are useful for users versus system
	developers?
	- What kinds of tools automate the evaluation process? (A
	hypothetical sketch follows this list.)
	- What kinds of tasks are suited to which evaluation schemes?
	- How can we use the evaluation process to speed or improve the 
	MT development process?
	- What impact does the use of real-world data have?
	- How can we evaluate MT when MT is a small part of the data 
	flow?
	- How independent is MT of the subsequent processing? That is,
	cleaning up the data improves performance, but does it improve
	it enough, and how do we quantify that?
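
To make the tools question above concrete, here is a minimal,
hypothetical Python sketch of one way part of the evaluation process
could be automated: scoring a system translation by simple word overlap
with a reference translation. It illustrates the idea only; it is not a
measure proposed by the workshop, and the example sentences are made up.

# Hypothetical sketch: compare a system translation against a single
# reference translation by word overlap. Illustrative only; real
# evaluations rely on far richer measures.

def overlap_scores(system_output, reference):
    sys_words = set(system_output.lower().split())
    ref_words = set(reference.lower().split())
    common = sys_words & ref_words
    precision = len(common) / len(sys_words) if sys_words else 0.0
    recall = len(common) / len(ref_words) if ref_words else 0.0
    return precision, recall

# Made-up example sentences for demonstration.
p, r = overlap_scores("the treaty was signed yesterday",
                      "the treaty was signed on Thursday")
print("precision=%.2f recall=%.2f" % (p, r))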

While we encourage papers on Arabic MT evaluation, we will also
consider papers on related issues, such as real-world data evaluation
or work related to the ISLE effort.

Submission Guidelines:
	- Intention to participate: Participants wishing to receive
	either data or systems may need to sign a non-disclosure
	agreement. Alternatively, participants are encouraged to bring
	their own tools and data, subject to the constraints listed
	above.
	- Submission for review: papers of no more than 6000 words are 
	expected. 
	- Submission for publication: A template will be provided for
	accepted papers. 

Important Dates:
	Intention to participate: 28th July 2000
	Release of data / software: 9th August 2000
	Draft submission: 1st September 2000
	Notification of acceptance: 15th September 2000
	Final Papers Due: 29th September 2000
	Workshop: 10th October 2000 

Contact Points & Organizers:
Florence Reeder (MITRE Corporation) freeder@mitre.org
Eduard Hovy (ISI/USC) hovy@isi.edu

Main conference site:
http://www.isi.edu/natural-language/conferences/amta2000 

Venue site:
http://www.misiondelsol.com.mx 

Reviewing Committee:
Michelle Vanni (Georgetown University / Department of Defense)
Keith Miller (MITRE Corporation)
Jack Benoit (MITRE Corporation)
Maghi King (ISSCO, University of Geneva)
Carol Van Ess-Dykema (Department of Defense)
John White (Litton/PRC)

Message 2: Special issue of Computational Linguistics on Text Summarization

Date: Sat, 15 Jul 2000 23:17:28 -0400 (EDT)
From: Dragomir Radev <radev@perun.si.umich.edu>
Subject: Special issue of Computational Linguistics on Text Summarization

			 CALL FOR PAPERS


			 SPECIAL ISSUE
				 of
		 COMPUTATIONAL LINGUISTICS
				 on
			 TEXT SUMMARIZATION


====================================================================
Guest Editors:

Dragomir Radev
radev@umich.edu
University of Michigan

Kathy McKeown
kathy@cs.columbia.edu
Columbia University

Eduard Hovy
hovy@isi.edu
University of Southern California/Information Sciences Institute
=====================================================================


Text summarization is one of the more complex challenges of natural
language processing. Its goal is to convey the content of one or more
documents in condensed form, tailored to the information needs of the
user.

Current research in text summarization spans statistical and
knowledge-rich approaches, involving sentence and/or phrase extraction
and text generation. Most current systems first identify the most
salient information in the input material and then synthesize that
information while trying to preserve the essence of the original text.
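
As a toy illustration of the extraction step just described, the Python
sketch below ranks sentences by the document frequency of their words
and keeps the top-scoring ones in their original order. It is a
minimal, hypothetical example, not a description of any particular
system.

# Toy extractive summarizer: rank sentences by the frequency of their
# words in the document and keep the top k, preserving document order.
# Illustrative only.

import re
from collections import Counter

def summarize(text, k=2):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence):
        toks = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in toks) / (len(toks) or 1)

    # Keep the k highest-scoring sentences, in their original order.
    top = sorted(sorted(sentences, key=score, reverse=True)[:k],
                 key=sentences.index)
    return " ".join(top)

# Made-up miniature document for demonstration.
doc = ("Summarization condenses documents. Systems extract salient "
       "sentences. Salient sentences repeat frequent content words. "
       "The weather was pleasant.")
print(summarize(doc, k=2))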

Interest in text summarization has risen over the last 10 years due to
the advent of the Internet and the unprecedented availability of
on-line textual data. In recent years, the "three Ms"
(multilinguality, multidocument, multimedia) have motivated some
exciting research projects. Most recently in the USA, the TIDES
program has funded several projects that involve multidocument and
eventually multilingual summarization.

Reflecting these events, researchers have organized several meetings
over the past few years. The most recent such meeting, at the
ANLP/NAACL conference in Seattle, attracted 100 participants (from
academia, industrial labs, startups, and government) from the US,
Canada, the UK, Germany, France, Korea, Israel, Japan, Sweden,
Singapore, Spain, Hong Kong, and Belgium. Research in summarization is
also active in a dozen more countries in Europe and the rest of the
world.

The first collection of papers related to document summarization
appeared in 1995 in a special issue of Information Processing and
Management, edited by Karen Sparck Jones and Brigitte
Endres-Niggemeyer. A compendium of papers on text summarization,
edited by Inderjeet Mani and Mark Maybury, appeared in 1998. Most of
these papers date from an ACL/EACL workshop in 1997 and earlier.
There has been no general collection of papers since that time.
Therefore we believe the time is ripe for Computational Linguistics to
present an overview of the state of the art in text summarization.


		 SAMPLE TOPICS OF INTEREST

Linguistic and statistics-based techniques for topic identification
Linguistic and statistics-based techniques for summary generation
Studies of human summarization 
Evaluating summaries and summarization systems 
Multidocument summarization, including reconciliation of inconsistencies 
Multilingual summarization
Summarization metadata: determining and expressing trustworthiness and recency 
Types and classes of summaries


				NOTES

Papers should not simply describe an existing system. Of primary
interest are the theoretical basis for the summarization process,
summary evaluation, and the typology of summaries; the particular
implementation of a set of word- and phrase-weighting techniques is of
secondary concern.


			 SCHEDULE

Call for papers issued: June 23, 2000
Papers due: December 15, 2000
Notifications to authors: March 15, 2001


			 SUBMISSION PROCESS

Electronic submission is preferred, but hard copy will also be
accepted. No attachments are to be submitted under any circumstances.
If sending hard copy, submit six copies. All submissions should be
sent to the journal editor (Julia Hirschberg) following the
instructions at http://www.aclweb.org/cl/ .

In addition to following the procedure described there, authors should
send the abstract of their paper electronically to the three guest
editors: <radev@perun.si.umich.edu>, <kathy@cs.columbia.edu>,
<hovy@ISI.EDU>.

Note that for this special issue two types of papers will be accepted: 
long papers (more than 20 pages) and short papers (less than 20
pages). Both types of papers will be reviewed according to the same
criteria. We would ideally like to have papers of both types in the
printed journal.

Questions about the submission process should be addressed to
radev@umich.edu

Each submitted paper will be refereed by two experts appointed by the
permanent editorial board of CL and by two more reviewers selected by
the guest editors.