1 Centre for Language Technology, Faculty of Humanities, Københavns Universitet
2 Department of Computer Science, Faculty of Science, Københavns Universitet
In linguistic annotation projects, we typically develop annotation guidelines to maximize inter-annotator agreement and learnability. In this position paper, however, we question whether we should limit disagreements between annotators at all, rather than embrace them. We present an empirical analysis of part-of-speech annotated data sets suggesting that certain disagreements are systematic across domains and languages, pointing to underlying ambiguity rather than random error. Moreover, a quantitative analysis of disagreements reveals that the majority of them stem from linguistically debatable cases rather than from actual annotation errors. Specifically, we show that even in the absence of annotation guidelines, only 2% of annotator choices are linguistically unmotivated.
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (volume 2: Short Papers), 2014, p. 507-511