LINGUIST List 21.911

Tue Feb 23 2010

FYI: SemEval-2010 Task 12: PETE

Editor for this issue: Elyssa Winzeler <elyssa@linguistlist.org>


Directory
        1.    Deniz Yuret, SemEval-2010 Task 12: PETE

Message 1: SemEval-2010 Task 12: PETE
Date: 22-Feb-2010
From: Deniz Yuret <dyuret@ku.edu.tr>
Subject: SemEval-2010 Task 12: PETE

SemEval-2010 Shared Task 12
Parser Evaluation using Textual Entailments (PETE)
(http://pete.yuret.com)

The purpose of this e-mail is to encourage participation in the shared task
'Parser Evaluation using Textual Entailments' at the 5th International
Workshop on Semantic Evaluations, SemEval-2010
(http://semeval2.fbk.eu/semeval2.php), co-located with ACL-2010, July 15-16,
Uppsala.

This shared task should be of interest to researchers working on
* parsing
* semantic relation extraction
* recognizing textual entailments

Description:

Parser Evaluation using Textual Entailments (PETE) is a shared task in the
SemEval-2010 evaluation exercises on semantic evaluation. The task involves
recognizing textual entailments (RTE) that depend solely on syntactic
information:

* The man with the hat was tired.
* The man was tired. (yes)
* The hat was tired. (no)
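The contrast above can be illustrated with a toy sketch. The dependency triples and the `entails` helper below are hypothetical illustrations, not the task's actual submission format or evaluation protocol: the point is that a parser which attaches "with the hat" to "man" licenses the first entailment but not the second.

```python
# Hypothetical sketch: deciding PETE-style entailments from a dependency
# parse of "The man with the hat was tired."
# Relation labels (nsubj, prep, pobj) follow common dependency conventions.

def entails(dependencies, subject, predicate):
    """Return 'yes' if the parse links `subject` to `predicate`
    as its syntactic subject, 'no' otherwise."""
    return "yes" if ("nsubj", predicate, subject) in dependencies else "no"

# (relation, head, dependent) triples for the premise sentence
parse = {
    ("nsubj", "tired", "man"),   # "man" is the subject of "tired"
    ("prep",  "man",   "with"),  # the PP attaches to "man", not "tired"
    ("pobj",  "with",  "hat"),
}

print(entails(parse, "man", "tired"))  # -> yes
print(entails(parse, "hat", "tired"))  # -> no
```

A parser that mis-attaches the prepositional phrase would produce a parse from which the wrong entailment follows, which is exactly the kind of semantically relevant error the task is designed to surface.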

Our goals in introducing this task are:

* To focus parser evaluation on semantically relevant phenomena.
* To introduce a parser evaluation scheme that is formalism independent.
* To introduce a targeted textual entailment task focused on a single
linguistic competence.
* To be able to collect high quality evaluation data from untrained annotators.

The following criteria were used when constructing the entailments:

* They should be decidable using only syntactic inference.
* They should be easy to decide by untrained annotators.
* They should be challenging for state-of-the-art parsers.

You can find more details about the entailment generation process at the
task website (http://pete.yuret.com). The trial dataset can be downloaded
from the task website or the SemEval website (http://semeval2.fbk.eu).
There will be no training data. The evaluation will be similar to the past
RTE tasks. There is a Google group
(http://groups.google.com/group/semeval-pete) for task related messages.

Instructions:

* Join the mailing list (http://groups.google.com/group/semeval-pete).
* Register on the SemEval website (http://semeval2.fbk.eu).
* Download the trial data from the task website (http://pete.yuret.com) or
the SemEval website (http://semeval2.fbk.eu).
* Download the task guide from the task website (http://pete.yuret.com/guide).
* Download the test data from the SemEval website (http://semeval2.fbk.eu).
* Upload results to the SemEval website (http://semeval2.fbk.eu).

Important Dates:

* February 19 - Development data available.
* March 26 - Test data available.
* April 2 - End of the submission period.
* April 17 - Submission of description papers.
* May 6 - Notification of acceptance.
* July 15-16 - Workshop at ACL 2010, Uppsala.

Contact:
* Deniz Yuret <dyuret@ku.edu.tr>



Linguistic Field(s): Computational Linguistics