FYI: Benchmark for Open Relation Extraction
We wish to announce the public release of the Open Relation Extraction (ORE) benchmark used for the experiments reported in the paper:
Effectiveness and Efficiency of Open Relation Extraction, by Filipe Mesquita, Jordan Schmidek and Denilson Barbosa, appearing at the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP).
ORE is the task of recognizing relationships between two or more entities in text without requiring any relation-specific training data. ORE has gained prominence over traditional relation extraction methods, especially on the Web, because of the intrinsic difficulty of training an individual extractor for every relation of interest.
To the best of our knowledge, our benchmark is the first of its kind to provide reusable gold-standard annotations. The benchmark includes over 15,000 annotations (of which 13,000 were produced automatically by matching facts in a knowledge base), covering both binary and n-ary relations. We also provide the extractions produced by 8 state-of-the-art ORE methods, along with evaluation scripts that compute the precision and recall of a given set of extractions.
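As a rough illustration of what such an evaluation computes (this is a hedged sketch, not the benchmark's actual scripts), precision and recall can be calculated by comparing a set of extracted triples against the gold-standard annotations; the triples below are hypothetical examples, not data from the benchmark:

```python
def precision_recall(extracted, gold):
    """Compute precision and recall of extracted facts against a gold standard.

    Both arguments are collections of hashable items, e.g.
    (argument1, relation, argument2) triples.
    """
    extracted, gold = set(extracted), set(gold)
    true_positives = len(extracted & gold)  # facts that match the gold standard
    precision = true_positives / len(extracted) if extracted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical gold annotations and system output (illustrative only):
gold = {
    ("Obama", "was born in", "Hawaii"),
    ("Mesquita", "works at", "University of Alberta"),
}
extracted = {
    ("Obama", "was born in", "Hawaii"),
    ("Obama", "visited", "Chicago"),
}
p, r = precision_recall(extracted, gold)  # p = 0.5, r = 0.5
```

Here one of the two extractions matches the gold standard (precision 0.5), and one of the two gold facts is recovered (recall 0.5).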
For more information and access to the benchmark itself, please visit the following URL:
Computing Science Department
University of Alberta