LINGUIST List 19.1480

Sun May 04 2008

Calls: Computational Ling/UK; Applied Ling, Computational Ling/Belgium

Editor for this issue: F. Okki Kurniawan <okki@linguistlist.org>


As a matter of policy, LINGUIST discourages the use of abbreviations or acronyms in conference announcements unless they are explained in the text. To post to LINGUIST, use our convenient web form at http://linguistlist.org/LL/posttolinguist.html.
Directory
        1.    Sabine Schulte im Walde, Coling 2008 Workshop on Human Judgements in CL
        2.    Sebastian Blohm, ECML PKDD High-level Information Extraction Workshop


Message 1: Coling 2008 Workshop on Human Judgements in CL
Date: 03-May-2008
From: Sabine Schulte im Walde <schulte@ims.uni-stuttgart.de>
Subject: Coling 2008 Workshop on Human Judgements in CL

Full Title: Coling 2008 Workshop on Human Judgements in CL
Short Title: hjcl

Date: 23-Aug-2008 - 23-Aug-2008
Location: Manchester, United Kingdom
Contact Person: Sabine Schulte im Walde
Meeting Email: schulte@ims.uni-stuttgart.de
Web Site: http://workshops.inf.ed.ac.uk/hjcl/

Linguistic Field(s): Computational Linguistics

Call Deadline: 10-May-2008

Meeting Description:

Coling 2008 workshop on human judgements in Computational Linguistics

Manchester, UK
23 August 2008

http://workshops.inf.ed.ac.uk/hjcl/

Deadline Extension

New deadline for submission: 10 May 2008

Workshop Description:
Human judgements play a key role in the development and the assessment of
linguistic resources and methods in Computational Linguistics. They are commonly
used in the creation of lexical resources and corpus annotation, and also in the
evaluation of automatic approaches to linguistic tasks. Furthermore,
systematically collected human judgements provide clues for research on the
linguistic issues that underlie the judgement task, offering insights
complementary to introspective analysis or to evidence gathered from corpora.

We invite papers about experiments that collect human judgements for
Computational Linguistic purposes, with a particular focus on linguistic tasks
that are controversial from a theoretical point of view (e.g., some coding tasks
having to do with semantics or pragmatics). Such experimental tasks are usually
difficult to design and interpret, and they typically result in mediocre
inter-rater reliability. We seek both broad methodological papers discussing
these issues and specific case studies.

Topics of interest include, but are not limited to:

Experimental design:
- Which types of experiments support the collection of human judgements? Can any
general guidelines be defined? Is there a reason to prefer lab-based experiments
over web-based experiments, or vice versa?
- Which experimental methodologies support controversial tasks? For instance,
does underspecification help? What is the role of ambiguity and polysemy in
these tasks?
- What is the appropriate level of granularity for the category labels?
- What kind of participants should be recruited (e.g., expert vs. non-expert),
how does this choice depend on the type of experiment, and how should the
experimental design vary accordingly?
- How much and which kind of information (examples, context, etc.) should be
provided to the experiment participants? When does information turn into a bias?
- Is it possible to design experiments that are useful for both computational
linguistics and psycholinguistics? What do the two research areas have in
common? What are the differences?

Analysis and interpretation of experimental data:
- How important is inter-annotator agreement in human judgement collection
experiments? How is it best measured for complex tasks? (An illustrative
sketch follows this list.)
- What other quantitative tools are useful for analysing human judgement
collection experiments?
- What qualitative methods are useful for analysing human judgement collection
experiments? Which questions should be asked? Is it possible to formulate
general guidelines?
- How is the analysis similar to psycholinguistic analysis? How is it different?
- How do results from all of the methods above affect the development of
annotation instructions and procedures?
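
To make the agreement questions concrete, here is a minimal, purely
illustrative Python sketch (not part of the call) computing Cohen's kappa, a
common chance-corrected agreement coefficient for two annotators. The coding
task, labels, and data below are all invented.

    # Minimal sketch: Cohen's kappa for two annotators on a nominal
    # coding task. All labels and judgements below are invented.
    from collections import Counter

    def cohen_kappa(labels_a, labels_b):
        n = len(labels_a)
        # Observed agreement: proportion of items labelled identically.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Agreement expected by chance, derived from each annotator's
        # own label distribution.
        dist_a, dist_b = Counter(labels_a), Counter(labels_b)
        p_e = sum(dist_a[c] * dist_b[c] for c in dist_a) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical judgements on ten items ('lit' = literal,
    # 'fig' = figurative):
    a = ['lit', 'lit', 'fig', 'fig', 'lit', 'fig', 'lit', 'lit', 'fig', 'lit']
    b = ['lit', 'fig', 'fig', 'fig', 'lit', 'fig', 'lit', 'lit', 'lit', 'lit']
    print(cohen_kappa(a, b))  # approx. 0.57

Kappa as sketched here covers only two annotators and nominal categories; for
more complex tasks (multiple annotators, ordinal or set-valued labels),
weighted or alpha-style coefficients are generally more appropriate.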

Application of experiment insights:
- How do the experimental data fit into the general resource-creation process?
- How should the set of labels and the annotation criteria or guidelines be
modified in light of the experimental results? How can circularity be avoided
in this process?
- How can the data be used to refine or modify existing theoretical proposals?
- More generally, under what conditions can the obtained judgements be applied
to research questions?

Organisers:
Ron Artstein, University of Southern California
Gemma Boleda, Universitat Politècnica de Catalunya
Frank Keller, University of Edinburgh
Sabine Schulte im Walde, Universität Stuttgart

Keynote Speaker:
Martha Palmer, University of Colorado

Programme Committee:
Toni Badia, Universitat Pompeu Fabra
Marco Baroni, University of Trento
Beata Beigman Klebanov, Northwestern University
André Blessing, Universität Stuttgart
Chris Brew, Ohio State University
Kevin Cohen, University of Colorado Health Sciences Center
Barbara Di Eugenio, University of Illinois at Chicago
Katrin Erk, University of Texas at Austin
Stefan Evert, University of Osnabrück
Afsaneh Fazly, University of Toronto
Alex Fraser, Universität Stuttgart
Jesus Gimenez, Universitat Politècnica de Catalunya
Roxana Girju, University of Illinois at Urbana-Champaign
Ed Hovy, University of Southern California
Nancy Ide, Vassar College
Adam Kilgarriff, University of Brighton
Alexander Koller, University of Edinburgh
Anna Korhonen, University of Cambridge
Mirella Lapata, University of Edinburgh
Diana McCarthy, University of Sussex
Alissa Melinger, University of Dundee
Paola Merlo, University of Geneva
Sebastian Padó, Stanford University
Martha Palmer, University of Colorado
Rebecca Passonneau, Columbia University
Massimo Poesio, University of Trento
Sameer Pradhan, BBN Technologies
Horacio Rodriguez, Universitat Politècnica de Catalunya
Bettina Schrader, Universität Potsdam
Suzanne Stevenson, University of Toronto

Submission:
Deadline for the receipt of papers is extended to 10 May 2008, 23:59
UTC. Submit your paper via the submissions web page:
http://workshops.inf.ed.ac.uk/hjcl/submission.html

Submissions should be anonymous. Please submit PDF files only, at most 8 pages
long (including data, tables, figures, and references). We recommend following
the Coling 2008 style guidelines. Include a one-paragraph abstract of the
entire work (about 200 words). Accepted papers will appear in an on-line
proceedings volume.

Important Dates:
Paper submission deadline (extended): 10 May 2008
Notification of acceptance: 10 June 2008
Camera-ready copy due: 1 July 2008
Workshop date: 23 August 2008

Message 2: ECML PKDD High-level Information Extraction Workshop
Date: 02-May-2008
From: Sebastian Blohm <blohm@aifb.uni-karlsruhe.de>
Subject: ECML PKDD High-level Information Extraction Workshop


Full Title: ECML PKDD High-level Information Extraction Workshop
Short Title: HLIE08

Date: 19-Sep-2008 - 19-Sep-2008
Location: Antwerp, Belgium
Contact Person: Sebastian Blohm
Meeting Email: blohm@aifb.uni-karlsruhe.de
Web Site: http://www-ai.cs.tu-dortmund.de/HLIE08/index.html

Linguistic Field(s): Applied Linguistics; Computational Linguistics

Call Deadline: 16-Jun-2008

Meeting Description:

We aim to bring together an interdisciplinary group of researchers working on
high-level information extraction. The goal of this workshop is to explore and
structure the state of the art, to develop high-level IE models further with
regard to real-world applications, and to identify future challenges and
applications. We intend to cover a broad range of methods, including
pipelined/hybrid approaches and structured prediction models.

Call for Papers

Information extraction (IE) techniques aim at extracting structured information
from unstructured data sources. IE methods are successful at addressing
naturally arising learning tasks in which the data is generally structured,
highly correlated, and frequently preserves multi-way dependencies within and
between recurrent structures.

By now, ''low-level'' tasks such as named entity recognition are well
understood. However, solving complex IE tasks - like relation and event
extraction - remains a challenge.

In recent years, significant contributions to high-level IE in relevant fields
have led to applications that have matured beyond the proof-of-concept stage.
However, which strategy (e.g., pipeline, structured, or hybrid) is beneficial
for which problems is not yet well understood, from either a theoretical or a
practical point of view.
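
To make the pipeline strategy concrete, here is a minimal, purely hypothetical
Python sketch (not part of the call): a toy entity tagger feeds an independent
toy relation extractor, with no feedback between the stages. All names,
patterns, and the ''employed-by'' relation are invented for illustration.

    # Minimal sketch of a *pipelined* IE strategy: stage 1 tags
    # entities, stage 2 classifies relations over entity pairs.
    # A structured approach would predict both layers jointly.
    import re

    def tag_entities(text):
        # Toy stand-in for a named entity recogniser: maximal
        # sequences of capitalised tokens count as entities.
        return [m.group() for m in
                re.finditer(r'[A-Z][a-z]+(?: [A-Z][a-z]+)*', text)]

    def extract_relations(text, entities):
        # Toy stand-in for a relation extractor: one hand-written
        # pattern for the invented ''employed-by'' relation.
        found = []
        for e1 in entities:
            for e2 in entities:
                pattern = re.escape(e1) + r' works (?:at|for) ' + re.escape(e2)
                if e1 != e2 and re.search(pattern, text):
                    found.append(('employed-by', e1, e2))
        return found

    text = 'Anna Schmidt works for Acme Research in Antwerp.'
    entities = tag_entities(text)             # stage 1
    print(extract_relations(text, entities))  # stage 2

Any entity missed in stage 1 is unrecoverable in stage 2; this error
propagation is precisely the weakness that structured and hybrid strategies
aim to address.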

We are interested in the following topics:

- Algorithms:
What are the differences between pipelined and structured methods? Are there
hybrid methods that combine the best of both worlds? Are there novel algorithms
and techniques for solving high-level IE tasks or subproblems thereof?

- Theoretical results:
Are there convergence/generalization bounds for high-level IE techniques? Is
there a characterization of problems for which a direct solution always exists?
How can high-level IE methods be evaluated?

- Pre- and post-processing techniques:
Which high-level IE applications benefit from pre-/post-processing? Can
pre-/post-processing be harmful? Are these techniques independent of the
underlying IE methods? How can pre- and post-processing techniques be evaluated?

- Applications:
What are novel applications involving high-level IE? Are there equivalent
problems in related areas that can be solved with existing methods?
