LINGUIST List 11.2564

Tue Nov 28 2000

Calls: Arizona Working Paper, Natural Lang Engineering

Editor for this issue: Jody Huellmantel

As a matter of policy, LINGUIST discourages the use of abbreviations or acronyms in conference announcements unless they are explained in the text.


  1. Tania S. Zamuner, University of Arizona Working Papers in Linguistics
  2. Beverly Nunan, Journal of Natural Language Engineering

Message 1: University of Arizona Working Papers in Linguistics

Date: Mon, 27 Nov 2000 14:23:01 -0700
From: Tania S. Zamuner
Subject: University of Arizona Working Papers in Linguistics

Coyote Papers: University of Arizona Working Papers in Linguistics
Volume 12: Language in Cognitive Science
Submission Deadline: Midnight, January 15, 2001

The University of Arizona Linguistics Circle invites you to submit working
papers on psycholinguistics and computational linguistics for a new
electronic volume of the *Coyote Papers* to be published in Spring 2001.
Submitting authors may be graduate students or faculty members. Submissions
are limited to a maximum of one individual and one joint paper per author.
Papers should be no longer than 10 single-spaced pages (excluding
references, figures, and appendices) and should strictly adhere to the
formatting guidelines, which can be obtained at: . In addition,
please include a 150-word abstract. Papers that do not adhere to the
formatting guidelines will not be considered. Please provide three hard
copies. Submissions via e-mail will also be accepted as long as they are
received by the deadline. Submissions via fax will not be accepted.
Notification of acceptance and reviews will be given in mid-February.
Final revisions will be due in mid-March.

This will be an electronic volume, available online in PDF format at no
charge; a printed version will be available at an extra cost.
Questions should be addressed to the editors by email.
Editors, Coyote Papers 12: Language in Cognitive Science:
Rachel L. Hayes
William D. Lewis
Erin L. O'Bryan
Tania S. Zamuner

Message 2: Journal of Natural Language Engineering

Date: Mon, 27 Nov 2000 16:52:46 -0500 (EST)
From: Beverly Nunan
Subject: Journal of Natural Language Engineering

Below is a "Call for Papers" for a special issue on question answering
for the Journal of Natural Language Engineering. If you have any
questions, please direct them to Dr. Lynette Hirschman, 781-271-7789.
Thank you.

Call for Papers: Journal of Natural Language Engineering, Special Issue
on Question Answering
Guest editors:
Lynette Hirschman (MITRE)
Robert Gaizauskas (University of Sheffield)

As users struggle to navigate the wealth of on-line information now
available, the need for automated question answering systems becomes
more urgent: specifically, for systems that would allow a user to ask a
question in everyday language and get the answer quickly, with back-up
material available on demand. Question answering has become, over the
past several years, a major focus of research activity. This Call for
Papers solicits submissions that discuss the performance, the
requirements, the uses, and the challenges of question answering systems.

Question answering systems provide a rich research area. To answer a
question, a system must analyze the question, perhaps in the context of
some ongoing interaction; it must find one or more answers by consulting
on-line resources; and it must present the answer to the user in some
appropriate form, perhaps associated with justification or supporting
material.
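The three-stage pipeline just described (analyze the question, find answers by consulting resources, present the answer with back-up material) can be sketched in miniature. Everything below, including the toy corpus and all function names, is illustrative only and not taken from any system mentioned in this call:

```python
# Hypothetical sketch of the three stages: question analysis,
# answer retrieval, and answer presentation. The "corpus" is a toy
# stand-in for real on-line resources.

CORPUS = [
    "TREC is the Text Retrieval Conference.",
    "The workshop was held at Johns Hopkins.",
]

def analyze_question(question):
    # Stage 1: reduce the question to content words (a crude stand-in
    # for real linguistic analysis of question type and focus).
    stopwords = {"what", "is", "the", "a", "an", "of", "who", "where"}
    return [w.strip("?.").lower() for w in question.split()
            if w.strip("?.").lower() not in stopwords]

def retrieve_answers(keywords, corpus):
    # Stage 2: score each document by keyword overlap, keep the matches,
    # and rank them best-first.
    scored = []
    for doc in corpus:
        words = {w.strip(".,").lower() for w in doc.split()}
        overlap = sum(1 for k in keywords if k in words)
        if overlap:
            scored.append((overlap, doc))
    return [doc for _, doc in sorted(scored, reverse=True)]

def present_answer(candidates):
    # Stage 3: return the best candidate, with the remaining candidates
    # as back-up material "available on demand".
    if not candidates:
        return "No answer found.", []
    return candidates[0], candidates[1:]

answer, support = present_answer(
    retrieve_answers(analyze_question("What is TREC?"), CORPUS))
print(answer)  # -> TREC is the Text Retrieval Conference.
```

A real system would replace each stage with far richer machinery (parsing, information retrieval over large corpora, answer justification), but the division of labor is the same.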

Several conferences and workshops have focused on aspects of the
question answering research area. For the past two years, the Text
Retrieval Conference (TREC) has sponsored a
question-answering track which has evaluated systems that answer factual
questions based on finding answer strings in the TREC corpus, using both
information retrieval and natural language processing techniques. A
focus on reading comprehension provides a different approach to question
answering, evaluating systems' ability to answer questions about a
specific reading passage. These kinds of tests are used to evaluate
students' comprehension, providing a basis for comparing system
performance to human performance. This was the subject of a Johns
Hopkins Summer Workshop.

Both of these research areas have had to address a number of difficult
questions:
- How can question answering systems be evaluated? Do we have to have
human graders, or can we find automated ways of grading short answer
tests that approximate human graders closely enough?
- How should questions and answers be classified? Should classifications
be based on linguistic features of questions and answers? On the types
and sources of knowledge used to derive answers? On the types of
processing required to derive answers?
- What makes a question hard? Can we define linguistic features that
help to predict question difficulty?
- Can we identify different classes of users of question answering
systems, and if so, what are their different requirements?
- What makes an answer good? Should answers be short? Long? What about
sentence extracts compared to generated text? What about summaries?
- What is the best way to present answers to a user? How much context
and justification is appropriate? How much drill-down needs to be
supported?
- Do question answering systems need to build models of users' knowledge
states to generate appropriate answers? How can this process be managed?
- What are reasonable expectations for question answering systems:
providing factual answers found literally in texts, providing factual
answers inferred from texts, providing summaries of multiple sources,
providing analysis?
- How does the performance of systems compare to the performance of
people? Can such systems complement people? Teach people? Replace
people?
- Is it possible to create domain-independent question answering
systems, or is it critical to restrict the domain of such a system to a
specific topic area? What are the trade-offs in terms of performance?
- Can a question answering system use spoken input? Can it retrieve
information from spoken "documents" such as news stories or interviews?
What are the performance penalties when dealing with the additional
uncertainty that characterizes speech or OCR?
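The first question above, whether automated grading of short-answer tests can approximate human graders, is often approached with simple word-overlap measures. The sketch below is a hypothetical illustration of that idea (the scoring function and example strings are mine, not from any evaluation mentioned in this call): it scores a system answer by how many of the answer key's words it recalls.

```python
def recall_score(system_answer, answer_key):
    # Fraction of the answer key's words that appear in the system
    # answer: a crude automated stand-in for a human grader.
    key_words = {w.strip(".,?").lower() for w in answer_key.split()}
    sys_words = {w.strip(".,?").lower() for w in system_answer.split()}
    if not key_words:
        return 0.0
    return len(key_words & sys_words) / len(key_words)

print(recall_score("The conference was in Bedford, Massachusetts.",
                   "Bedford, Massachusetts"))  # -> 1.0
```

Such measures are cheap and reproducible, but how closely they track human judgments of answer quality is exactly the open question posed above.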

We invite submission of papers addressing any of these questions, or
other issues related to the creation, evaluation, or deployment of
question answering systems. We also encourage submissions that address
infrastructure issues, such as tools for building question answering
systems, for collecting corpora, or for annotating collections.

Submission Information

Submit full papers of no more than 25 pages (exclusive of references),
twelve-point, double-spaced, with one-inch margins, before the initial
submission deadline. Submissions not conforming to these guidelines will
not be reviewed.

Email submission is preferred, and should be directed to the special
issue editors. The subject line
should read: JNLE QA Submission. Preferred email submission formats are:
Word, PostScript, PDF, or plain text (for papers without complex
figures, etc).

If email submission is not possible, then five copies of the paper
should be mailed to:

Dr. Lynette Hirschman
The MITRE Corporation 3K-157
202 Burlington Rd.
Bedford, MA 01730

Phone: 781-271-7789
Fax: 781-271-2352

Mailed submissions must arrive on or before the deadline for submission.

Submission Dates

 * Submissions are due on February 26, 2001.
 * Notification of acceptance will be given by April 23, 2001.
 * Camera-ready copy is due July 2, 2001.
 * Publication: Fall-Winter 2001.