Automated Sustainable Multi-Object Segmentation and Recognition via Modified Sampling Consensus and Kernel Sliding Perceptron
Object recognition in depth images is a challenging and persistent task in machine vision, robotics, and sustainable automation. Object recognition is a core component of multimedia technologies such as video surveillance, human–computer interaction, robotic navigation, drone targeting...
Main Authors: Adnan Ahmed Rafique, Ahmad Jalal, Kibum Kim
Format: Article
Language: English
Published: MDPI AG, 2020-11-01
Series: Symmetry
Subjects: kernel sliding perceptron; modified maximum likelihood estimation sampling consensus; multi-object recognition; sustainable object recognition
Online Access: https://www.mdpi.com/2073-8994/12/11/1928
id: doaj-654de008029e41c4b28f8544bdf77c0a
record_format: Article
doi: 10.3390/sym12111928 (Symmetry, vol. 12, issue 11, article 1928)
affiliations: Adnan Ahmed Rafique and Ahmad Jalal (Department of Computer Science and Engineering, Air University, E-9, Islamabad 44000, Pakistan); Kibum Kim (Department of Human-Computer Interaction, Hanyang University, Ansan 15588, Korea)
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Adnan Ahmed Rafique; Ahmad Jalal; Kibum Kim
publisher: MDPI AG
series: Symmetry
issn: 2073-8994
publishDate: 2020-11-01
description: Object recognition in depth images is a challenging and persistent task in machine vision, robotics, and sustainable automation. Object recognition is a core component of multimedia technologies such as video surveillance, human–computer interaction, robotic navigation, drone targeting, tourist guidance, and medical diagnostics. Moreover, the symmetry that exists in real-world objects plays a significant role in the perception and recognition of objects by both humans and machines. With advances in depth sensor technology, numerous researchers have recently proposed RGB-D object recognition techniques. In this paper, we introduce a sustainable object recognition framework that remains consistent under changes in the environment and can recognize and analyze RGB-D objects in complex indoor scenarios. First, after acquiring a depth image, the point cloud and depth maps are extracted to obtain planes. Then, the plane fitting model and the proposed modified maximum likelihood estimation sampling consensus (MMLESAC) are applied as a segmentation process. Next, depth kernel descriptors (DKDES) are computed over the segmented objects, separately for single-object and multiple-object scenarios. These DKDES are then passed to isometric mapping (IsoMap) for feature-space reduction. Finally, the reduced feature vector is forwarded to a kernel sliding perceptron (KSP) for object recognition. Three datasets are used in four different experiments, employing a cross-validation scheme to validate the proposed model. The experimental results over the RGB-D object, RGB-D scene, and NYUDv1 datasets show overall accuracies of 92.2%, 88.5%, and 90.5%, respectively. These results outperform existing state-of-the-art methods and verify the suitability of the method.
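The segmentation stage described above fits planes to the depth-derived point cloud with the proposed MMLESAC. The record does not give the modified likelihood scoring itself, so the sketch below only illustrates the generic sampling-consensus loop such methods build on: random three-point plane hypotheses scored by inlier count, in plain NumPy. Function names, thresholds, and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_plane_consensus(points, n_iters=500, inlier_thresh=0.01, rng=None):
    """Generic sampling-consensus plane fit (RANSAC-style sketch).

    points : (N, 3) array of 3D points extracted from a depth map.
    Returns (plane, inlier_mask) where plane = (normal, d) and
    normal . p + d = 0. This is NOT the paper's MMLESAC; it only
    shows the hypothesize-and-score loop that consensus methods share.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Hypothesize: a plane through 3 randomly sampled points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p0
        # Score: count points within a distance threshold of the plane.
        dist = np.abs(points @ normal + d)
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Usage sketch: peel off the dominant plane (e.g., a tabletop) and keep the
# remaining points as candidate objects for the descriptor stage.
if __name__ == "__main__":
    pts = np.random.rand(2000, 3)                          # stand-in point cloud
    pts[:1500, 2] = 0.5 + 0.002 * np.random.randn(1500)    # synthetic planar patch
    plane, mask = fit_plane_consensus(pts)
    print("plane inliers:", mask.sum(), "remaining points:", (~mask).sum())
```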
topic: kernel sliding perceptron; modified maximum likelihood estimation sampling consensus; multi-object recognition; sustainable object recognition
url: https://www.mdpi.com/2073-8994/12/11/1928
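The later stages of the pipeline in the abstract reduce the depth kernel descriptors (DKDES) with IsoMap and classify the reduced vectors with a kernel sliding perceptron (KSP). The record does not give the KSP update rule, so the sketch below pairs scikit-learn's Isomap with an ordinary multi-class kernel perceptron as a stand-in classifier; the descriptor dimensionality, kernel choice, and class count are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.manifold import Isomap
from sklearn.metrics.pairwise import rbf_kernel

class KernelPerceptron:
    """Plain one-vs-rest kernel perceptron, used here only as a stand-in
    for the paper's kernel sliding perceptron (KSP)."""

    def __init__(self, gamma=0.5, epochs=10):
        self.gamma, self.epochs = gamma, epochs

    def fit(self, X, y):
        self.X_, self.classes_ = X, np.unique(y)
        K = rbf_kernel(X, X, gamma=self.gamma)
        # One vector of dual coefficients per class.
        self.alpha_ = np.zeros((len(self.classes_), len(X)))
        Y = np.where(y[:, None] == self.classes_[None, :], 1.0, -1.0)
        for _ in range(self.epochs):
            for i in range(len(X)):
                scores = self.alpha_ @ K[i]
                for c in range(len(self.classes_)):
                    if Y[i, c] * scores[c] <= 0:      # misclassified: update
                        self.alpha_[c, i] += Y[i, c]
        return self

    def predict(self, X):
        K = rbf_kernel(X, self.X_, gamma=self.gamma)
        return self.classes_[np.argmax(self.alpha_ @ K.T, axis=0)]

# Usage sketch: 200-D descriptors (assumed size) -> 30-D IsoMap embedding -> classifier.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(300, 200))       # stand-in for DKDES features
labels = rng.integers(0, 5, size=300)           # 5 hypothetical object classes
embedded = Isomap(n_components=30, n_neighbors=10).fit_transform(descriptors)
clf = KernelPerceptron().fit(embedded, labels)
print("train accuracy:", (clf.predict(embedded) == labels).mean())
```

The IsoMap step stands in for the feature-space reduction the abstract describes; in the paper the reduced vectors would come from real depth kernel descriptors rather than random data.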