Paper 122 (Research track)

Automatic Generation of Benchmarks for Entity Recognition and Linking

Author(s): Axel-Cyrille Ngonga Ngomo, Michael Röder, Diego Moussallem, Ricardo Usbeck, René Speck

Abstract: Benchmarks are central to the improvement of named entity recognition and entity linking solutions. However, recent works have shown that manually created benchmarks often contain mistakes. We hence investigate the automatic generation of benchmarks for named entity recognition and linking from Linked Data as a complement to manually created benchmarks. The main advantage of automatically constructed benchmarks is that they can be readily generated at any time and are cost-effective, while being guaranteed to be free of annotation errors. Moreover, generators for resource-poor languages can foster the development of tools for such languages. We compare the performance of 11 tools on benchmarks generated using our approach with their performance on 16 benchmarks that were created manually. In addition, we perform a large-scale runtime evaluation of entity recognition and linking solutions for the first time in the literature. Moreover, we present results achieved with the Portuguese version of our approach using four different tools. Overall, our results suggest that our automatic benchmark generation approach can create varied benchmarks with characteristics similar to those of existing benchmarks. Our experimental results are available at http://faturl.com/bengalexp

Keywords: Benchmarking; Named Entity Recognition and Linking; Scalable Benchmarking
