Robust Visual Relationship Detection towards Sparse Images in Internet-of-Things
Visual relationships can capture essential information in images, such as the interactions between pairs of objects. Such relationships have become a prominent component of the knowledge within sparse image data collected by multimedia sensing devices, and they can encode both latent information and potentially private content. However, due to the high combinatorial complexity of modeling all potential relation triplets, previous studies on visual relationship detection have used mixed visual and semantic features separately for each object, which is inadequate for the sparse data in IoT systems. This paper therefore proposes a new deep learning model for visual relationship detection, a novel attempt to integrate computational intelligence (CI) methods with IoT. The model imports a knowledge graph and adopts features for both entities and the connections among them as extra information. It maps the visual features extracted from images into the knowledge-based embedding vector space, so as to benefit from information in the background knowledge domain and alleviate the impact of data sparsity. This is the first time that visual features are projected and combined with prior knowledge for visual relationship detection. Moreover, the complexity of the network is reduced by avoiding the learning of redundant features from images. Finally, we show the superiority of our model by evaluating it on two datasets.
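The abstract's core idea — projecting visual features into a knowledge-graph embedding space and scoring relation triplets there — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the single linear projection, the TransE-style translational scoring, and all dimensions and sizes are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed sizes (illustrative, not from the paper).
D_VIS, D_EMB = 512, 100      # visual feature dim, KG embedding dim
N_ENT, N_REL = 1000, 50      # number of entities and relation types

# Stand-ins for pretrained knowledge-graph embeddings. In a
# TransE-style space, a plausible triplet (s, p, o) satisfies
#   e_s + r_p ≈ e_o.
entity_emb = rng.normal(size=(N_ENT, D_EMB))
relation_emb = rng.normal(size=(N_REL, D_EMB))

# Learned map from visual space into the KG embedding space; the
# paper maps visual features into the knowledge-based embedding
# space, and a single matrix W is the simplest such map.
W = rng.normal(size=(D_EMB, D_VIS)) * 0.01

def project(visual_feat):
    """Project a CNN visual feature into the KG embedding space."""
    return W @ visual_feat

def classify_entity(visual_feat):
    """Nearest entity embedding to the projected visual feature."""
    e = project(visual_feat)
    return int(np.argmin(np.linalg.norm(entity_emb - e, axis=1)))

def score_triplet(subj_feat, obj_feat, rel_id):
    """Lower score = more plausible (subject, predicate, object)."""
    s, o = project(subj_feat), project(obj_feat)
    return float(np.linalg.norm(s + relation_emb[rel_id] - o))

def predict_relation(subj_feat, obj_feat):
    """Rank all relation types for a detected object pair."""
    scores = [score_triplet(subj_feat, obj_feat, r) for r in range(N_REL)]
    return int(np.argmin(scores))

# Usage: visual features of two detected objects -> best predicate.
subj = rng.normal(size=D_VIS)
obj = rng.normal(size=D_VIS)
best_rel = predict_relation(subj, obj)
```

Because the relation is predicted by a translation in the embedding space rather than by a classifier over all triplet combinations, rare or unseen subject–predicate–object combinations can still be scored, which is the claimed benefit under data sparsity.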
Main Authors: | Yang He, Guiduo Duan, Guangchun Luo, Xin Liu |
Format: | Article |
Language: | English |
Published: | Hindawi-Wiley, 2021-01-01 |
Series: | Wireless Communications and Mobile Computing |
Online Access: | http://dx.doi.org/10.1155/2021/6383646 |
Author Affiliations: | Yang He, Guiduo Duan (School of Computer Science and Engineering); Guangchun Luo (Trusted Cloud Computing and Big Data Key Laboratory of Sichuan Province); Xin Liu (School of Information and Software Engineering) |
Collection: | DOAJ |
ISSN: | 1530-8677 |