LINGUIST List 20.2001

Wed May 27 2009

Calls: Computational Linguistics/Natural Language Engineering (Jrnl)

Editor for this issue: Fatemeh Abdollahi <fatemeh@linguistlist.org>


Directory
        1.    Marco Pennacchiotti, Natural Language Engineering

Message 1: Natural Language Engineering
Date: 25-May-2009
From: Marco Pennacchiotti <pennac@yahoo-inc.com>
Subject: Natural Language Engineering

Full Title: Natural Language Engineering


Linguistic Field(s): Computational Linguistics

Call Deadline: 30-Jun-2009

In recent decades, vector space models (VSMs) have received growing
attention in different fields of Artificial Intelligence, ranging from
natural language processing (NLP) and cognitive science to vision
analysis and applications in the humanities. The basic idea of VSMs is
to represent entities as vectors in a geometric space, so that their
similarity can be measured by distance metrics in that space. VSMs have
been shown to successfully model and solve a variety of problems, such
as metaphor detection and analysis, priming, discourse analysis, and
information retrieval.
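
As a minimal illustration of the vector-space idea sketched above, the
following Python snippet compares hypothetical word vectors by cosine
similarity (the words, dimensions, and counts are invented purely for
illustration and are not taken from the call):

    import numpy as np

    # Hypothetical 4-dimensional context-count vectors for three words.
    # Dimensions and counts are invented for illustration only.
    vectors = {
        "car":    np.array([12.0, 3.0, 0.0, 7.0]),
        "truck":  np.array([10.0, 4.0, 1.0, 6.0]),
        "banana": np.array([0.0, 1.0, 9.0, 0.0]),
    }

    def cosine(u, v):
        """Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

    print(cosine(vectors["car"], vectors["truck"]))   # high: similar contexts
    print(cosine(vectors["car"], vectors["banana"]))  # low: dissimilar contexts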

In computational linguistics, the Distributional Hypothesis leverages
the notion of VSMs to model the semantics of words and other linguistic
entities. The hypothesis was developed independently in different works
and has since been applied in a variety of settings. Its core claim is
that 'a word is defined by the company it keeps', i.e. by the set of
linguistic contexts in which it appears.
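
A minimal sketch of how the Distributional Hypothesis is typically
operationalized: each word is represented by counts of the words that
occur within a fixed window around it, so that words used in similar
contexts end up with similar vectors. The toy corpus and window size
below are assumptions made purely for illustration:

    from collections import defaultdict

    # Toy corpus; in practice this would be a large text collection.
    corpus = [
        "the cat sat on the mat",
        "the dog sat on the rug",
        "the cat chased the dog",
    ]
    window = 2  # context = up to 2 words to the left and right

    # word -> context word -> co-occurrence count
    cooc = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        tokens = sentence.split()
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    cooc[word][tokens[j]] += 1

    # Words appearing in similar contexts ("cat" and "dog") end up with
    # similar rows, which is the Distributional Hypothesis at work.
    print(dict(cooc["cat"]))
    print(dict(cooc["dog"]))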

Despite the growing popularity of distributional approaches, the
existing literature raises issues on many important aspects that still
need to be addressed. Examples include: the need for in-depth
comparative analyses of the semantic properties captured by different
types of distributional models; the application of new geometric
approaches, such as quantum logic operators or tensor decomposition;
the study of the interaction between distributional approaches and
supervised machine learning, such as the adoption of kernel methods
based on distributional information; and the application of
distributional techniques in real-world applications and in other
fields.
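
One of the directions mentioned above, kernel methods built on
distributional information, can be sketched as follows: a Gram matrix
of cosine similarities between distributional vectors is passed to a
standard support vector machine as a precomputed kernel. The vectors,
labels, and classification task below are hypothetical placeholders,
not part of the call:

    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical distributional vectors for six words (rows) and one
    # binary label per word (e.g. animal vs. artifact); invented data.
    X = np.array([
        [5.0, 1.0, 0.0],  # cat
        [4.0, 2.0, 0.0],  # dog
        [6.0, 0.0, 1.0],  # horse
        [0.0, 5.0, 4.0],  # chair
        [1.0, 4.0, 5.0],  # table
        [0.0, 6.0, 3.0],  # desk
    ])
    y = np.array([1, 1, 1, 0, 0, 0])

    # Gram matrix of cosine similarities used as a precomputed kernel.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    K = Xn @ Xn.T

    clf = SVC(kernel="precomputed").fit(K, y)
    print(clf.predict(K))  # predictions on the training words; a real
                           # setup would evaluate on held-out data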

Topics

The goal of the special issue is to offer a common journal venue in
which to gather and summarize the state of the art on distributional
techniques applied to lexical semantics, a cornerstone of computational
linguistics research. A further aim is to propose a systematic and
harmonized view of the work carried out independently by different
researchers in recent years, which has sometimes resulted in diverging
and somewhat inconsistent uses of terminology and axiomatization.
Another goal is to raise awareness in the computational linguistics
community of cutting-edge studies on geometric models, machine learning
applications, and experiences in different scientific fields.

In particular, the special issue focuses on the following areas of
interest, building on the topics proposed for the GEMS workshop (EACL
2009, Athens, http://art.uniroma2.it/gems):

- Comparative analysis of different distributional spaces
(document-based, word-based, syntax-based, and others) and their
parameters (dimension, corpus size, etc.)
- Eigenvector methods (e.g. Singular Value and Tucker Decomposition;
see the SVD sketch after this list)
- Higher-order tensors and quantum logic extensions
- Feature engineering in machine learning models
- Computational complexity and evaluation issues
- Graph-based models over semantic spaces
- Logic and inference in semantic spaces
- Cognitive theories of semantic space models
- Applications in the humanities and social sciences
- Application of distributional approaches in:
  - Word sense disambiguation and discrimination
  - Selectional preference induction
  - Acquisition of lexicons and linguistic patterns
  - Conceptual clustering
  - Kernel methods for NLP (e.g. relation extraction and textual
    entailment)
  - Quantitative extensions of Formal Concept Analysis
  - Modeling of linguistic and ontological knowledge
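
As a concrete illustration of the eigenvector methods listed above,
the following sketch reduces a hypothetical word-by-context count
matrix with a truncated SVD, the core operation behind Latent Semantic
Analysis; the matrix contents and the number of retained dimensions are
assumptions made for illustration:

    import numpy as np

    # Hypothetical word-by-context count matrix (4 words x 5 contexts).
    M = np.array([
        [3.0, 0.0, 1.0, 0.0, 2.0],
        [2.0, 1.0, 0.0, 0.0, 3.0],
        [0.0, 4.0, 0.0, 3.0, 0.0],
        [0.0, 3.0, 1.0, 4.0, 0.0],
    ])

    # Full SVD, then keep only the k largest singular values/vectors.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    k = 2
    word_vectors = U[:, :k] * s[:k]  # low-dimensional word representations

    # Rows of word_vectors are dense "latent" vectors; nearby rows
    # correspond to words with similar context distributions in M.
    print(word_vectors)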

For more information please see: http://art.uniroma2.it/jnle

Call Deadline: 30-Jun-2009

