3D object recognition with a linear time‐varying system of overlay layers
Main Authors:
Format: Article
Language: English
Published: Wiley, 2021-08-01
Series: IET Computer Vision
Online Access: https://doi.org/10.1049/cvi2.12029
Summary: Object recognition is a challenging task in computer vision with numerous applications. The challenge lies in selecting appropriately robust features at a tolerable computational cost. Feature learning attempts to solve the feature extraction problem through a learning process that uses varied samples of the objects. This research proposes a two‐stage optimization framework to identify the structure of a first‐order linear non‐homogeneous difference equation, a linear time‐varying system of overlay layers (LtvoL) that constructs an image. The first stage determines a finite set of impulses, called overlay layers, by applying a genetic algorithm. The second stage derives the coefficients of the corresponding difference equation via L2 regularization. Test images are classified by a novel process designed exclusively for this model. Experiments on the Washington RGB‐D dataset and ETH‐80 show promising results, comparable to those of state‐of‐the‐art methods for RGB‐D‐based object recognition.
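The summary describes the second stage only at a high level. As a rough, hypothetical illustration (not the paper's actual formulation), the sketch below reads that step as ridge regression: each overlay layer found in stage one is flattened into a column of a design matrix, and the combination coefficients are obtained by L2‐regularized least squares. The function name `fit_overlay_coefficients` and the regularization weight `lam` are assumptions for illustration.

```python
import numpy as np

def fit_overlay_coefficients(layers, image, lam=1e-2):
    """Hypothetical sketch of the second stage: an L2-regularized
    (ridge) fit of the coefficients that combine a fixed set of
    overlay layers into the target image.

    layers : list of 2-D arrays (the impulses found in stage one)
    image  : 2-D array to reconstruct
    lam    : assumed regularization weight (not from the paper)
    """
    # Flatten each overlay layer into one column of the design matrix.
    Phi = np.stack([layer.ravel() for layer in layers], axis=1)
    y = image.ravel()
    # Closed-form ridge solution: (Phi^T Phi + lam * I) w = Phi^T y.
    A = Phi.T @ Phi + lam * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ y)
```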
ISSN: 1751-9632, 1751-9640