LINGUIST List 15.3087
Mon Nov 01 2004
Diss: Comp Ling: Neumann: 'A Uniform...'
Editor for this issue: Takako Matsui <tako@linguistlist.org>
To post to LINGUIST, use our convenient web form at
http://linguistlist.org/LL/posttolinguist.html.
Directory
1. Guenter Neumann, A Uniform Computational Model for Natural Language Parsing and Generation
Message 1: A Uniform Computational Model for Natural Language Parsing and Generation
Date: 31-Oct-2004
From: Guenter Neumann <neumann@dfki.de>
Subject: A Uniform Computational Model for Natural Language Parsing and Generation
Institution: Saarland University
Program: Department of Computational Linguistics and Phonetics
Dissertation Status: Completed
Degree Date: 1994
Author: Guenter Neumann
Dissertation Title: A Uniform Computational Model for Natural Language Parsing
and Generation
Dissertation URL:
http://www.dfki.de/~neumann/publications/diss/diss.html
Linguistic Field(s): Computational Linguistics
Dissertation Director(s):
Hans Uszkoreit
Wolfgang Wahlster
Dissertation Abstract:
We present a new model of natural language processing in which natural
language parsing and generation are strongly interleaved tasks. Interleaving
parsing and generation is important if we assume that natural language
understanding and production are not performed only in isolation but can also
work together, enabling subsentential interactions in text revision or dialog
systems.
The core of the model is a new uniform agenda-driven tabular algorithm, called
UTA. Although uniformly defined, UTA configures itself dynamically for either
parsing or generation, because it is fully driven by the structure of the
actual input: a string for parsing, a semantic expression for generation.
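The abstract does not spell out the algorithm itself, so the following minimal
sketch is only one plausible reading of an agenda-driven tabular loop that
serves both directions by dispatching on the input's type; the grammar
interface (initial_items, combine, is_complete) is entirely hypothetical:

    from collections import deque

    def uta(inp, grammar):
        # The direction is fixed solely by the shape of the input: a
        # string selects parsing; a semantic expression selects generation.
        mode = "parse" if isinstance(inp, str) else "generate"
        agenda = deque(grammar.initial_items(inp, mode))  # seed items
        table = set()  # the tabular component: a chart of proven items
        while agenda:
            item = agenda.popleft()
            if item in table:       # tabulation: process each item once
                continue
            table.add(item)
            # Combination rules are direction-neutral; they derive new
            # items from the current item and the items already tabled.
            agenda.extend(grammar.combine(item, table))
        return [it for it in table if grammar.is_complete(it, inp, mode)]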
Efficient interleaving of parsing and generation is obtained through item
sharing between the two tasks. Under this novel processing strategy, items
(i.e., partial results) computed in one direction automatically become
available in the other direction as well.
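To make the strategy concrete, here is a toy illustration, not the thesis's
implementation: both directions read from and write to one shared table whose
keys are direction-neutral, so a partial result derived while parsing is
immediately reusable during generation (and vice versa):

    class SharedChart:
        """One table serving both directions; keys pair a category with
        a semantic value, so neither task owns an entry."""
        def __init__(self):
            self._items = {}

        def add(self, key, item):
            # Store each partial result once; later calls with the same
            # key get back the stored item (memoization).
            return self._items.setdefault(key, item)

        def lookup(self, key):
            return self._items.get(key)

    chart = SharedChart()
    # The parser records an analysis of the string "saw" ...
    chart.add(("V", "see(past)"), {"string": "saw", "sem": "see(past)"})
    # ... and the generator, asked to realize see(past) as a verb,
    # retrieves the ready-made string instead of recomputing it.
    assert chart.lookup(("V", "see(past)"))["string"] == "saw"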
The advantage of UTA in combination with the item sharing method is that we
can extend the use of memoization techniques even to this interleaved setting.
To demonstrate UTA's utility for developing high-level performance methods, we
present a new algorithm for incremental self-monitoring during natural
language production.
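The abstract gives no details of the monitoring algorithm, so the loop below
only illustrates the general shape incremental self-monitoring can take
(generate, parse the output back, revise while it is ambiguous); every name
and the worked attachment-ambiguity example are invented for illustration:

    def monitored_generation(sem, generate, parse, revise, max_rounds=3):
        """Generate a string for sem, parse it back, and revise as long
        as the string has more than one reading."""
        utterance = generate(sem)
        for _ in range(max_rounds):
            readings = parse(utterance)
            if len(readings) == 1:    # unambiguous: monitoring succeeds
                return utterance
            utterance = revise(utterance, readings)
        return utterance

    # Toy instantiation: "old men and women" has two bracketings, while
    # the revised "women and old men" has only one.
    realizations = {"and(old(men),women)": "old men and women"}
    analyses = {"old men and women": ["[old [men and women]]",
                                      "[[old men] and women]"],
                "women and old men": ["[women and [old men]]"]}
    result = monitored_generation(
        "and(old(men),women)",
        generate=lambda s: realizations[s],
        parse=lambda u: analyses[u],
        revise=lambda u, r: "women and old men")
    assert result == "women and old men"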