Knowledge Graph-Based Image Classification Refinement

Bibliographic Details
Main Authors: Dehai Zhang, Menglong Cui, Yun Yang, Po Yang, Cheng Xie, Di Liu, Beibei Yu, Zhibo Chen
Format: Article
Language: English
Published: IEEE 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8698455/
Description
Summary: Biologically inspired ideas are important in image processing. Not only does more than 80% of the information humans receive come from the visual system, but the human visual system also provides fast, accurate, and efficient image-processing capability. In current image classification tasks, convolutional neural networks (CNNs) focus on processing pixels and often ignore semantic relationships and the mechanisms of the human brain. With the development of image analysis and processing techniques, the information contained in images is becoming increasingly complex. Humans learn the characteristics of objects and the relationships between them in order to classify images; this ability is a significant characteristic that sets humans apart from modern learning-based computer vision algorithms. Our main concerns are how to make full use of the semantic relationships between categories and how to apply knowledge of biological vision to image classification. With this in mind, we propose the concept of the image knowledge graph (IKG), which incorporates semantic association and scene association to fully consider the relations between objects (both external and internal). We take full advantage of the reasoning model of the knowledge graph, which is closer to the biological visual information-processing model. We conduct extensive experiments on a large-scale image dataset (ImageNet), demonstrating the effectiveness of our approach. Furthermore, our method participated in the ILSVRC 2017 challenge and obtained new state-of-the-art results on ImageNet (82.43%).
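The abstract gives no implementation details, but the general idea it describes (refining a CNN's per-category scores by reasoning over semantic relations between categories) can be illustrated with a minimal sketch. Everything below is an assumption made for illustration, not the authors' IKG method: the function name refine_scores, the one-step score-propagation rule, and the toy relation matrix.

```python
import numpy as np

# Hypothetical sketch only: the paper's actual IKG construction and
# reasoning model are not described in this record. This shows the
# general pattern of refining CNN class scores with a graph of
# semantic relations between categories.

def refine_scores(cnn_probs, relation_matrix, alpha=0.8):
    """Blend CNN softmax probabilities with evidence propagated over
    a category-relation graph (a single smoothing step).

    cnn_probs:       shape (C,), softmax output of the CNN
    relation_matrix: shape (C, C), non-negative relation weights,
                     each row normalized to sum to 1
    alpha:           weight kept on the CNN's own prediction
    """
    propagated = relation_matrix @ cnn_probs       # support from related classes
    refined = alpha * cnn_probs + (1 - alpha) * propagated
    return refined / refined.sum()                 # renormalize to a distribution

# Toy example with three classes: "dog" and "wolf" are semantically
# related, while "car" relates only to itself.
probs = np.array([0.55, 0.40, 0.05])
rel = np.array([[0.0, 1.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 0.0, 1.0]])
print(refine_scores(probs, rel))  # related classes reinforce each other
```

Under these assumptions, semantically related categories pull each other's scores closer, which is one plausible way graph reasoning could complement pixel-level CNN evidence.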
ISSN:2169-3536