LINGUIST List 27.924

Mon Feb 22 2016

Calls: Computational Ling, Translation/Germany

Editor for this issue: Ashley Parker

Date: 22-Feb-2016
From: Rajen Chatterjee
Subject: Shared Task: Automatic Post-Editing

Full Title: Shared Task: Automatic Post-Editing
Short Title: APE

Date: 11-Aug-2016 - 12-Aug-2016
Location: Berlin, Germany
Contact Person: Matteo Negri
Meeting Email:
Web Site:

Linguistic Field(s): Computational Linguistics; Translation

Call Deadline: 24-Apr-2016

Meeting Description:

The second round of the APE shared task follows the first pilot round organised in 2015. The aim is to examine automatic methods for correcting errors produced by an unknown machine translation (MT) system. This has to be done by exploiting knowledge acquired from human post-edits, which are provided as training material.

This year the task focuses on the Information Technology (IT) domain, in which English source sentences have been translated into German by an unknown MT system and then manually post-edited by professional translators. At the training stage, the collected human post-edits have to be used to learn correction rules for the APE systems; at the test stage, they will be used for system evaluation with automatic metrics (TER and BLEU).

Call for Participation:

Second Automatic Post-Editing (APE) Shared Task at the First Conference on Machine Translation (WMT16)


The aim of this task is to improve MT output in black-box scenarios, in which the MT system is used "as is" and cannot be modified. From the application point of view, APE components would make it possible to:

Cope with systematic errors of an MT system whose decoding process is not accessible
Provide professional translators with improved MT output quality to reduce (human) post-editing effort
Adapt the output of a general-purpose system to the lexicon/style requested in a specific application domain

Data and Evaluation:

Training, development and test data consist of English-German triplets (source, target and post-edit) belonging to the IT domain. Training and development respectively contain 12,000 and 1,000 triplets (available soon), while the test set contains 2,000 instances. All data is provided by the EU project QT21.
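The triplet structure described above can be sketched as follows. This is purely illustrative: the sentences below are invented examples, not drawn from the actual task data, and the call does not specify the distribution file format.

```python
from typing import NamedTuple


class Triplet(NamedTuple):
    """One training/test instance of the APE shared task."""
    source: str     # English source sentence
    target: str     # raw German output of the unknown MT system
    post_edit: str  # human post-edit of the target, by a professional translator


# Invented example for illustration only:
example = Triplet(
    source="Click the Save button.",
    target="Klicken Sie die Schaltfläche Speichern.",
    post_edit="Klicken Sie auf die Schaltfläche Speichern.",
)
```

At training time an APE system would learn correction patterns from (target, post_edit) pairs, optionally conditioned on the source; at test time only source and target are given.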

Systems' performance will be evaluated with respect to their capability to reduce the distance between an automatic translation and its human-revised version. This distance will be measured in terms of TER, computed between automatic and human post-edits in case-sensitive mode. BLEU will also be taken into consideration as a secondary evaluation metric.
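As a rough illustration of the primary metric, the sketch below computes a simplified word-level TER: edit distance (insertions, deletions, substitutions) divided by reference length, case-sensitive as in the task. Note that full TER, as used in the official evaluation, additionally counts block shifts of phrases as single edits; this sketch omits shifts.

```python
def simple_ter(hyp: str, ref: str) -> float:
    """Simplified TER: word-level edit distance / reference length.

    Omits the block-shift operation of full TER; comparison is
    case-sensitive, matching the shared task's evaluation mode.
    """
    h, r = hyp.split(), ref.split()
    m, n = len(h), len(r)
    # d[i][j] = edit distance between h[:i] and r[:j]
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if h[i - 1] == r[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[m][n] / n if n else 0.0


# An automatic post-edit missing one word ("auf") against the human post-edit:
score = simple_ter(
    "Klicken Sie die Schaltfläche Speichern .",
    "Klicken Sie auf die Schaltfläche Speichern .",
)
# One insertion over a 7-token reference: 1/7 ≈ 0.143 (lower is better)
```

An APE system "wins" to the extent that its outputs score a lower TER against the human post-edits than the raw MT outputs do.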

To gain further insight into final output quality, a subset of the outputs of the submitted systems will also be manually evaluated.

Important Dates:

Release of training data: February 22, 2016
Test set distributed: April 18, 2016
Submission deadline: April 24, 2016
Paper submission deadline: May 8, 2016
Manual evaluation: May 2016
Notification of acceptance: June 5, 2016
Camera-ready deadline: June 22, 2016

For any information or questions on the task, please send an email to: wmt-ape at . To stay up to date on the APE task, you can also join the wmt-ape group: !forum/wmt-ape


Rajen Chatterjee (Fondazione Bruno Kessler)
Matteo Negri (Fondazione Bruno Kessler)
Raphael Rubino (Saarland University)
Marco Turchi (Fondazione Bruno Kessler)
Marcos Zampieri (Saarland University)

Page Updated: 22-Feb-2016