Paper 161 (Research track)

Knowledge Guided Attention and Inference for Describing Images Containing Unseen Objects

Author(s): Aditya Mogadala, Umanga Bista, Lexing Xie, Achim Rettinger

Abstract: Images on the Web encapsulate diverse knowledge about varied abstract concepts. They cannot be sufficiently described with models learned from image-caption pairs that mention only a small number of visual object categories. In contrast, large-scale knowledge graphs contain many more concepts that can be detected by image recognition models. Hence, to assist description generation for images that contain visual objects unseen in image-caption pairs, we propose a two-step process that leverages large-scale knowledge graphs. In the first step, a multi-entity recognition model is built to annotate images with concepts not mentioned in any caption. In the second step, those annotations are leveraged as external semantic attention and constrained inference in the image description generation model. Evaluations show that our models outperform most prior work on out-of-domain MSCOCO image description generation and also scale better to broad domains with more unseen objects.
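
To make the second step concrete, below is a minimal sketch of what "external semantic attention" over knowledge-graph concepts could look like in a recurrent caption decoder. This is an illustration under stated assumptions, not the paper's actual architecture: the class name `SemanticAttentionDecoder`, the layer sizes, and the input `concept_emb` (embeddings of KG concepts predicted for the image by the first-step multi-entity recognition model) are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SemanticAttentionDecoder(nn.Module):
    """Illustrative caption decoder attending over external concept embeddings.

    `concept_emb` stands in for embeddings of knowledge-graph concepts
    detected in the image; all names and dimensions here are assumptions.
    """

    def __init__(self, vocab_size, emb_dim=256, hid_dim=512, concept_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTMCell(emb_dim + concept_dim, hid_dim)
        # additive (Bahdanau-style) attention over the concept vectors
        self.att_h = nn.Linear(hid_dim, concept_dim)
        self.att_c = nn.Linear(concept_dim, concept_dim)
        self.att_v = nn.Linear(concept_dim, 1)
        self.out = nn.Linear(hid_dim, vocab_size)

    def step(self, word_ids, state, concept_emb):
        # word_ids: (batch,); concept_emb: (batch, n_concepts, concept_dim)
        h, c = state
        # score each external concept against the current decoder state
        scores = self.att_v(torch.tanh(
            self.att_h(h).unsqueeze(1) + self.att_c(concept_emb)
        )).squeeze(-1)                                        # (batch, n_concepts)
        alpha = F.softmax(scores, dim=-1)
        # weighted concept context vector fed into the LSTM alongside the word
        context = (alpha.unsqueeze(-1) * concept_emb).sum(1)  # (batch, concept_dim)
        x = torch.cat([self.embed(word_ids), context], dim=-1)
        h, c = self.lstm(x, (h, c))
        return self.out(h), (h, c)
```

The complementary "constrained inference" mechanism mentioned in the abstract would act at decoding time, e.g. by steering beam search toward hypotheses that mention the recognized concept labels; it is omitted here for brevity.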

Keywords: Knowledge Base Semantic Attention; Caption Generation for Novel Visual Objects; Visual Entity Linking
