LINGUIST List 21.1132

Mon Mar 08 2010

Calls: Computational Ling, Psycholing/Sweden

Editor for this issue: Kate Wu <kate@linguistlist.org>


Directory
        1.    Preslav Nakov, ACL 2010 Workshops: SemEval-2010

Message 1: ACL 2010 Workshops: SemEval-2010
Date: 06-Mar-2010
From: Preslav Nakov <preslavn@gmail.com>
Subject: ACL 2010 Workshops: SemEval-2010

Full Title: ACL 2010 Workshops: SemEval-2010

Date: 15-Jul-2010 - 16-Jul-2010
Location: Uppsala, Sweden
Contact Person: David Traum
Meeting Email: workshops@acl2010.org
Web Site: http://acl2010.org/workshops.html

Linguistic Field(s): Computational Linguistics; Psycholinguistics

Call Deadline: 02-Apr-2010

Meeting Description:

SemEval-2010: International Workshop on Semantic Evaluations
July 15-16, 2010

A workshop to be held in conjunction with the Association for Computational
Linguistics meeting in Uppsala, Sweden
http://semeval2.fbk.eu/semeval2.php

Second Call for Papers

SemEval-2010 Shared Task #8:
Multi-Way Classification of Semantic Relations
Between Pairs of Nominals
http://docs.google.com/View?docid=dfvxd49s_36c28v9pmw

Training data available

This shared task should be of interest to researchers working on
- semantic relation extraction
- information extraction
- lexical semantics

Background
Recently, the NLP community has shown a renewed interest in deeper semantic
analysis, including automatic recognition of semantic relations between pairs of
words. This is an important task with many potential applications in Information
Retrieval, Information Extraction, Text Summarization, Machine Translation,
Question Answering, Paraphrasing, Recognizing Textual Entailment, Thesaurus
Construction, Semantic Network Construction, Word Sense Disambiguation, and
Language Modelling.

Despite this interest, progress has been slow due to the incompatibility of the
different classification schemes proposed and used, which has made it difficult
to compare the various classification algorithms. Most of the datasets used so
far provided no context for the target relation, relying on the unrealistic
assumption that semantic relations are largely context-independent. A notable
exception is SemEval-2007 Task 4: Classification of Semantic Relations between
Nominals, which for the first time provided a standard benchmark dataset for
seven semantic relations in context. However, in order to avoid the challenge
of defining a single unified standard classification scheme, this dataset
treated each semantic relation separately, as a single two-class (positive vs.
negative) classification task, rather than as multi-way classification. While
some subsequent publications tried to use the dataset in a multi-way setup, it
was not designed to be used in that manner.

We believe that a freely available standard benchmark dataset for multi-way
semantic relation classification in context is much needed for the overall
advancement of the field. Our primary objective has therefore been the
challenging task of preparing and releasing such a dataset to the research
community. We have also set up a common evaluation task that will enable
researchers to compare their algorithms.

The Task
Task: Given a sentence and two annotated nominals, choose the most suitable
relation from the following inventory of nine relations:

- Relation 1 (Cause-Effect)
- Relation 2 (Instrument-Agency)
- Relation 3 (Product-Producer)
- Relation 4 (Content-Container)
- Relation 5 (Entity-Origin)
- Relation 6 (Entity-Destination)
- Relation 7 (Component-Whole)
- Relation 8 (Member-Collection)
- Relation 9 (Message-Topic)

It is also possible to choose Other if none of the nine relations appears to be
suitable.
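
To make the expected input and output concrete, here is a minimal sketch of
the task interface in Python. The names (Instance, classify) and the structure
are illustrative assumptions, not part of the official task materials.

    from dataclasses import dataclass

    @dataclass
    class Instance:
        sentence: str  # the full sentence providing the context
        e1: str        # the first annotated nominal
        e2: str        # the second annotated nominal

    def classify(instance: Instance) -> str:
        """Return one of the nine relations, or "Other" when none is
        suitable. This placeholder always abstains; a real system
        would put its model here."""
        return "Other"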

Example: The best choice for the following sentence would be Component-Whole(e1,e2):

"The macadamia nuts in the cake also make it necessary to have
a very sharp knife to cut through the cake neatly."

Note that in the above sentence, Component-Whole(e1,e2) holds, but
Component-Whole(e2,e1) does not, i.e., we have Other(e2,e1). Thus, the task
requires determining both the relation and the order of e1 and e2 as its
arguments.
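
Since a system must predict both the relation and the argument order, the
effective label space contains each of the nine relations in two directions
plus the undirected Other. The following sketch spells this out; the label
spellings follow the notation above, and treating Other as undirected is our
reading of the call, not an official specification.

    # Illustrative enumeration of the directed label space.
    RELATIONS = [
        "Cause-Effect", "Instrument-Agency", "Product-Producer",
        "Content-Container", "Entity-Origin", "Entity-Destination",
        "Component-Whole", "Member-Collection", "Message-Topic",
    ]

    LABELS = (
        [f"{r}(e1,e2)" for r in RELATIONS]
        + [f"{r}(e2,e1)" for r in RELATIONS]
        + ["Other"]
    )
    assert len(LABELS) == 19  # 9 relations x 2 directions + Other

    # The macadamia-nut sentence above is labelled
    # Component-Whole(e1,e2): e1 (the nuts) is a component of e2
    # (the cake), while the reverse does not hold, so (e2,e1) falls
    # under Other.
    assert "Component-Whole(e1,e2)" in LABELS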

Datasets
- Training Dataset: The training dataset consists of a total of 8,000 examples.

- Test Dataset: The test dataset consists of 2,717 examples; it will be
released on March 18, 2010.

License: All data are released under the Creative Commons Attribution 3.0
Unported license.
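
The call does not describe the official scoring script, so the sketch below is
only an illustration of how gold and predicted labels on the test set might be
compared: a macro-averaged F1 over all labels except Other. The function name
and the decision to exclude Other from the average are our assumptions, not
the task's specification.

    from collections import Counter

    def macro_f1(gold, pred, ignore="Other"):
        """Illustrative macro-averaged F1 over relation labels; `gold`
        and `pred` are equal-length lists of label strings."""
        tp, fp, fn = Counter(), Counter(), Counter()
        labels = set()
        for g, p in zip(gold, pred):
            if g == p:
                tp[g] += 1
            else:
                fp[p] += 1
                fn[g] += 1
            labels.update((g, p))
        labels.discard(ignore)  # our assumption: Other is not scored
        f1s = []
        for lab in labels:
            prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
            rec = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
            f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
        return sum(f1s) / len(f1s) if f1s else 0.0

    # Two of three hypothetical test items correct -> macro F1 = 1/3.
    print(macro_f1(
        ["Component-Whole(e1,e2)", "Cause-Effect(e1,e2)", "Other"],
        ["Component-Whole(e1,e2)", "Cause-Effect(e2,e1)", "Other"],
    ))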

Time Schedule
- Trial data released: August 30, 2009
- Training data released: March 5, 2010
- Test data release: March 18, 2010
- Result submission deadline: 7 days after downloading the test data, but
no later than April 2, 2010

- Organizers send the test results: April 10, 2010
- Submission of description papers: April 17, 2010
- Notification of acceptance: May 6, 2010
- SemEval-2010 workshop (at ACL): July 15-16, 2010

Task Organizers
Iris Hendrickx, University of Lisbon and University of Antwerp
Su Nam Kim, University of Melbourne
Zornitsa Kozareva, University of Southern California, Information Sciences Institute
Preslav Nakov, National University of Singapore
Diarmuid Ó Séaghdha, University of Cambridge
Sebastian Padó, Stuttgart University
Marco Pennacchiotti, Saarland University and Yahoo! Research
Lorenza Romano, FBK-irst, Italy
Stan Szpakowicz, University of Ottawa

Useful Links
Interested in participating in the shared task? Please join the following Google
group:
http://groups.google.com.sg/group/semeval-2010-multi-way-classification-of-semantic-relations?hl=en

Task #8 website: http://docs.google.com/View?docid=dfvxd49s_36c28v9pmw

SemEval 2010 website: http://semeval2.fbk.eu/semeval2.php

