LINGUIST List 18.3084

Sun Oct 21 2007

Calls: Semantics/Syntax/Comp Ling/Natural Language Engineering (Jrnl)

Editor for this issue: Fatemeh Abdollahi <fatemeh@linguistlist.org>

        1.    Bill Dolan, Natural Language Engineering

Message 1: Natural Language Engineering
Date: 21-Oct-2007
From: Bill Dolan <billdol@microsoft.com>
Subject: Natural Language Engineering

Full Title: Natural Language Engineering

Linguistic Field(s): Computational Linguistics; Lexicography; Semantics; Syntax

Call Deadline: 15-Nov-2007

Journal of Natural Language Engineering

Special Issue on Textual Entailment

Second Call For Papers

Submissions due by November 15, 2007

The goal of identifying textual entailment - whether one piece of text can
be plausibly inferred from another - has emerged in recent years as a
generic core problem in Natural Language Understanding. For instance, in
order to answer the question 'Who killed Kennedy?', a QA system may need
to recognize that 'Oswald killed Kennedy' can be inferred from 'the
assassination of Kennedy by Oswald'.
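
The Kennedy example above illustrates why surface matching is not enough. As a minimal sketch (not part of this call, and with hypothetical thresholds), the simplest lexical-coverage baseline used in early RTE systems checks what fraction of the hypothesis's content words appear in the text:

```python
# Sketch of a lexical-coverage entailment baseline: hypothesize entailment
# when enough of the hypothesis's content words also occur in the text.
# The stopword list and the 0.8 threshold are illustrative assumptions.

STOPWORDS = {"the", "a", "an", "of", "by", "to", "in"}

def content_words(sentence: str) -> set[str]:
    """Lowercase, strip simple punctuation, drop stopwords."""
    words = sentence.lower().replace(".", "").replace(",", "").split()
    return {w for w in words if w not in STOPWORDS}

def lexical_entailment(text: str, hypothesis: str, threshold: float = 0.8) -> bool:
    """True iff the fraction of hypothesis content words covered by the
    text meets the threshold."""
    t, h = content_words(text), content_words(hypothesis)
    if not h:
        return True
    return len(h & t) / len(h) >= threshold

text = "the assassination of Kennedy by Oswald"
hypothesis = "Oswald killed Kennedy"
print(lexical_entailment(text, hypothesis))  # False: 'killed' never appears in the text
```

On this pair the baseline fails (coverage is 2/3, since "killed" does not occur in the text), which is exactly the gap that deeper inference methods, of the kind solicited below, aim to close.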

Work in this area has been largely driven by the PASCAL Recognizing Textual
Entailment (RTE) challenges, a series of annual competitive meetings
(http://www.pascal-network.org/Challenges/RTE3). This work exhibits strong
ties to some earlier lines of research, particularly automatic acquisition
of paraphrases and lexical semantic relationships, and unsupervised
inference in applications such as
question answering, information extraction and summarization. It has also
opened the way to newer lines of research on more involved inference
methods, on knowledge representations needed to support this natural
language understanding challenge, and on the use of learning methods in this
context. RTE has fostered an active and growing community of researchers
focused on the problem of applied entailment. The special issue of JNLE
will provide an opportunity to showcase some of the most important work in
this emerging area.

Articles for this special issue are invited on all aspects of textual
entailment, aiming at a broader scope than exhibited within the RTE
challenges. Topics include, but are not limited to:

* Representation levels, such as
- Lexical, n-gram, and substring overlap
- Linguistic annotations (POS tags, syntactic structure, semantic representations)
* Utilizing background knowledge, e.g. inference rules, paraphrase
templates, lexical relations
* Knowledge acquisition methods
- From corpora/Web, including acquiring entailment/paraphrasing corpora
- From semantic resources like FrameNet, PropBank, VerbNet, NOMLEX/NOMBANK
* Inference mechanisms, such as
- Similarity/subsumption metrics
- Tree-based distances and transformations
- Machine learning
- Logical inference using theorem provers
* The impact of entailment capabilities on applications
* Evaluation methods
* Data analysis
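
Several of the topics above, inference rules and paraphrase templates in particular, can be illustrated with a toy rule application. This is a sketch with hypothetical rules in the spirit of NOMLEX/NOMBANK-derived templates, not a method endorsed by the call:

```python
import re

# Hypothetical paraphrase rules mapping a nominalized construction
# "the <noun> of Y by X" to the verbal form "X <verb> Y".
RULES = [
    ("assassination", "killed"),
    ("acquisition", "acquired"),
]

def apply_rules(text: str) -> str:
    """Rewrite each matching nominalization into its verbal paraphrase."""
    for noun, verb in RULES:
        pattern = rf"the {noun} of (\w+) by (\w+)"
        text = re.sub(pattern, rf"\2 {verb} \1", text)
    return text

print(apply_rules("the assassination of Kennedy by Oswald"))
# prints "Oswald killed Kennedy"
```

Chaining such rewrites with an overlap or subsumption metric is one way the paraphrase-acquisition and inference-mechanism topics above connect in practice.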

Submission information:
Please consult the journal web site for instructions for contributors
(uk.cambridge.org/journals/nle/). Submissions should be sent by email to
JNLE_TE@cs.uiuc.edu (instead of the email address mentioned in the
instructions file). The message subject line should begin with "JNLE TE
submission:".
Submissions are due by November 15, 2007.

Guest Editors:
Ido Dagan (Bar Ilan University, Israel)
Bill Dolan (Microsoft Research, USA)
Bernardo Magnini (FBK-irst, Italy)
Dan Roth (UIUC, USA)
