Academic Paper

Title: A cross-corpus study of subjectivity identification using unsupervised learning
Author: Dong Wang
Institution: University of Texas at Dallas
Author: Yang Liu
Institution: University of Texas at Dallas
Linguistic Field: Computational Linguistics; Text/Corpus Linguistics
Abstract: In this study, we investigate the use of unsupervised generative learning methods for subjectivity detection across different domains. We create an initial training set using simple lexicon information and then evaluate two iterative learning methods with a base naive Bayes classifier to learn from unannotated data. The first method is self-training, which adds instances with high-confidence predictions to the training set in each iteration. The second is a calibrated EM (expectation-maximization) method, in which we calibrate the posterior probabilities from EM so that the class distribution is similar to that in the real data. We evaluate both approaches on three different domains: movie data, news resources, and meeting dialogues, and find that in some cases the unsupervised learning methods can achieve performance close to the fully supervised setup. We perform a thorough analysis to examine factors such as the self-labeling accuracy of the initial training set, the accuracy of the examples added in self-training, and the size of the initial training set in the different methods. Our experiments and analysis show inherent differences across the domains and identify the factors that explain the models' behavior.
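The abstract outlines a self-training loop around a naive Bayes classifier seeded from lexicon-based labels. The following is a minimal sketch of that general setup, not the authors' code: the lexicon seeding rule, the corpora, the confidence threshold, and the iteration count are all illustrative assumptions.

```python
# Minimal self-training sketch for subjectivity detection, loosely following the
# setup described in the abstract. All specifics (seeding rule, threshold,
# iteration count) are illustrative assumptions, not the paper's actual settings.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def seed_labels(sentences, subjective_lexicon):
    """Assumed seeding rule: >= 2 lexicon hits -> subjective (1),
    0 hits -> objective (0), anything else stays unlabeled (-1)."""
    labels = []
    for s in sentences:
        hits = sum(1 for w in s.lower().split() if w in subjective_lexicon)
        labels.append(1 if hits >= 2 else (0 if hits == 0 else -1))
    return np.array(labels)

def self_train(sentences, labels, iterations=5, confidence=0.9):
    """Iteratively add high-confidence predictions on unlabeled data to the training set."""
    vec = CountVectorizer()
    X = vec.fit_transform(sentences)
    labels = labels.copy()
    clf = MultinomialNB()
    for _ in range(iterations):
        train_mask = labels != -1
        clf.fit(X[train_mask], labels[train_mask])
        unlabeled = np.where(labels == -1)[0]
        if len(unlabeled) == 0:
            break
        probs = clf.predict_proba(X[unlabeled])
        confident = probs.max(axis=1) >= confidence
        # Hard-label only the confident examples and fold them into the training set.
        labels[unlabeled[confident]] = clf.classes_[probs[confident].argmax(axis=1)]
    return clf, vec

# The paper's second method (calibrated EM) would instead keep soft posteriors for
# the unlabeled data and rescale them each iteration so that the implied class
# distribution matches the real-data prior, rather than hard-labeling only the
# confident examples as above.
```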


This article appears in Natural Language Engineering, Vol. 18, Issue 3.
