Guided Random Projection: A Lightweight Feature Representation for Image Classification
Modern neural networks, e.g., Deep Neural Networks (DNNs), have recently gained increasing attention for visible image classification tasks. Their success stems mainly from their capability to learn a complex feature mapping of the inputs (i.e., a feature representation) that captures the manifold structure of the images relevant to the task. Despite their popularity, these techniques are costly to train with back-propagation (BP)-based iteration rules. Here, we advocate a lightweight feature representation framework termed Guided Random Projection (GRP), which is closely related to classical random neural networks and randomization-based kernel machines. Specifically, we present an efficient optimization method that explicitly learns the distribution of the random hidden weights, instead of relying on time-consuming fine-tuning or task-independent randomization configurations. We also report the detailed mechanisms of GRP using subspace theories. Experiments on visible image classification benchmarks show that the proposed method achieves a reasonable accuracy improvement (more than 2%) at moderate training cost (on the order of seconds) compared with other randomization methods.
| Main Authors: | Shichao Zhou (Key Laboratory of the Ministry of Education for Optoelectronic Measurement Technology and Instrument, Beijing Information Science and Technology University (BISTU), Beijing, China); Junbo Wang (Beijing Institute of Electronic System Engineering, Beijing, China); Wenzheng Wang (School of Electronics Engineering and Computer Science, Peking University, Beijing, China; ORCID: https://orcid.org/0000-0002-0278-6751); Linbo Tang (School of Information and Electronics, Beijing Institute of Technology, Beijing, China); Baojun Zhao (School of Information and Electronics, Beijing Institute of Technology, Beijing, China) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2021-01-01 |
| Series: | IEEE Access, vol. 9, pp. 129110-129118 |
| ISSN: | 2169-3536 |
| DOI: | 10.1109/ACCESS.2021.3112552 |
| Subjects: | Image classification; guided random projection; feature representation; neural network |
| Online Access: | https://ieeexplore.ieee.org/document/9536711/ |
| Collection: | DOAJ |
| Record ID: | doaj-1c968a602dc04409a39049f12fee99df |
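The record above is purely bibliographic and contains no code. As a rough, hedged illustration of the family the abstract places GRP in (classical random neural networks and randomization-based kernel machines), the sketch below builds a fixed random-projection feature map with a closed-form linear readout. It is not the paper's GRP algorithm, and every name and parameter in it (`random_feature_map`, `fit_linear_readout`, `weight_std`, the toy data) is an illustrative assumption rather than anything taken from the article.

```python
# Minimal sketch of a randomization-based feature representation:
# a fixed random hidden layer followed by a closed-form linear readout.
# GRP, per the abstract, would additionally *learn* the distribution of the
# random hidden weights; here the Gaussian scale `weight_std` is just a
# hand-set stand-in for that learned distribution.
import numpy as np


def random_feature_map(X, W, b):
    """Project inputs through fixed random weights and a nonlinearity."""
    return np.tanh(X @ W + b)


def fit_linear_readout(H, Y, reg=1e-3):
    """Ridge-regression readout trained in closed form (no back-propagation)."""
    d = H.shape[1]
    return np.linalg.solve(H.T @ H + reg * np.eye(d), H.T @ Y)


rng = np.random.default_rng(0)

# Toy stand-in data: 200 "images" flattened to 64-dim vectors, 3 classes.
n, in_dim, hid_dim, n_classes = 200, 64, 256, 3
X = rng.standard_normal((n, in_dim))
y = rng.integers(0, n_classes, size=n)
Y = np.eye(n_classes)[y]                    # one-hot targets

# Random hidden weights; a GRP-style method would tune this distribution
# to the task instead of fixing its scale by hand.
weight_std = 1.0 / np.sqrt(in_dim)
W = rng.normal(0.0, weight_std, size=(in_dim, hid_dim))
b = rng.normal(0.0, 0.1, size=hid_dim)

H = random_feature_map(X, W, b)             # lightweight feature representation
beta = fit_linear_readout(H, Y)             # single least-squares solve
pred = np.argmax(H @ beta, axis=1)
print("training accuracy:", float((pred == y).mean()))
```

The closed-form readout (a single regularized least-squares solve) is what keeps training at the seconds level the abstract refers to; the guided variant described in the article replaces the hand-set weight distribution with one optimized for the task.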