LINGUIST List 18.2516

Tue Aug 28 2007

Confs: Computational Linguistics/Denmark

Editor for this issue: Jeremy Taylor <jeremy@linguistlist.org>


To post to LINGUIST, use our convenient web form at http://linguistlist.org/LL/posttolinguist.html.
Directory
        1.    Helene Mazo, Automatic Procedures in MT Evaluation at MT Summit XI


Message 1: Automatic Procedures in MT Evaluation at MT Summit XI
Date: 28-Aug-2007
From: Helene Mazo <mazo@elda.org>
Subject: Automatic Procedures in MT Evaluation at MT Summit XI

Automatic Procedures in MT Evaluation at MT Summit XI

Date: 11-Sep-2007 - 11-Sep-2007
Location: Copenhagen, Denmark
Contact: Gregor Thurmair
Contact Email: g.thurmair@linguatec.de
Meeting URL:
http://mtsummitcph.ku.dk/workshops/mts-automatic_procedures_in_mt_evaluation.doc

Linguistic Field(s): Computational Linguistics

Meeting Description:

This workshop, held during MT Summit XI in Copenhagen on 11 September 2007,
focuses on automatic evaluation procedures in MT: BLEU / NIST, d-score,
x-score, edit distance, and other such tools.
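
For concreteness, the following is a minimal Python sketch of two such scores
for a single hypothesis/reference pair: word-level edit distance and a smoothed
sentence-level BLEU with brevity penalty. The function names and the smoothing
constant are illustrative assumptions, not the workshop's tooling; real
evaluations run over whole test corpora with reference implementations.

from collections import Counter
from math import exp, log

def edit_distance(hyp, ref):
    """Word-level Levenshtein distance between two token lists."""
    d = list(range(len(ref) + 1))
    for i, h in enumerate(hyp, 1):
        prev, d[0] = d[0], i
        for j, r in enumerate(ref, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (h != r))  # substitution
    return d[-1]

def bleu(hyp, ref, max_n=4):
    """Sentence-level BLEU: modified n-gram precisions, geometric mean, brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        hyp_ngrams = Counter(tuple(hyp[i:i + n]) for i in range(len(hyp) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        overlap = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # floor avoids log(0) in this sketch
    bp = min(1.0, exp(1 - len(ref) / len(hyp)))        # brevity penalty
    return bp * exp(sum(log(p) for p in precisions) / max_n)

hyp = "the cat sat on mat".split()
ref = "the cat sat on the mat".split()
print(edit_distance(hyp, ref))    # 1 (one insertion needed)
print(round(bleu(hyp, ref), 3))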

The questions to be discussed are:

- What do the scores really measure? Are they biased towards
specific MT technologies? (validity)

- What kind of initial effort do they require (e.g. pre-translating the
test corpus)? (economy)

- What kind of implicit assumptions do they make?

- What kind of resources do they need (e.g. third-party
grammars)? (economy, feasibility)

- What kind of diagnostic support can they give? (where to
improve the system)

- What kind of evaluation criteria (related to the FEMTI
framework) do they support (adequacy, fluency, ...)?

The objective of the workshop is to learn from recent evaluation
activities, to create a better understanding of the strengths and
limitations of the respective approaches, and to move closer to a common
methodology for MT output evaluation.

Draft programme

9.00 Welcome and introduction

9.20 The place of automatic evaluation metrics in external quality models
for machine translation
Andrei Popescu-Belis, University of Geneva

10.00 Evaluating Evaluation --- Lessons from the WMT'07 Shared Task
Philipp Koehn, University of Edinburgh

10.30 Coffee break
11.00 Investigating Why BLEU Penalizes Non-Statistical Systems
Eduard Hovy, University of Southern California

11.30 Edit distance as an evaluation metric
Christopher Cieri, Linguistic Data Consortium (TBC)

12.00 Experience and conclusions from the CESTA evaluation project
Olivier Hamon, ELDA

12.30 Lunch
13.30 Automatic Evaluation in MT system production
Gregor Thurmair, Linguatec

14.00 Sensitivity of performance-based and proximity-based models for MT evaluation
Bogdan Babych, University of Leeds

14.30 Automatic and human evaluations of MT in the framework of
speech-to-speech communication
Khalid Choukri, ELDA

15.00 Coffee break
15.30 Discussion and conclusions
17.00 Close

