Multiactivation Pooling Method in Convolutional Neural Networks for Image Recognition

Convolutional neural networks (CNNs) have become increasingly popular and are now widely used as feature extractors in image processing, big data processing, fog computing, and related fields. CNNs usually consist of several basic units, such as convolutional, pooling, and activation units. Conventional pooling in CNNs refers to 2×2 max-pooling and average-pooling, applied after convolutional or ReLU layers. In this paper, we propose a Multiactivation Pooling (MAP) method that makes CNNs more accurate on classification tasks without increasing depth or the number of trainable parameters. We add more convolutional layers before each pooling layer and expand the pooling region to 4×4, 8×8, 16×16, or even larger. During this large-scale subsampling, we pick the top-k activations, sum them, and constrain the result with a hyperparameter σ. We take VGG, ALL-CNN, and DenseNets as baseline models and evaluate the proposed MAP method on the benchmark datasets CIFAR-10, CIFAR-100, SVHN, and ImageNet. The classification results are competitive.
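
The abstract above outlines the core MAP operation: enlarge the pooling window, keep the top-k activations inside it, sum them, and constrain the sum with the hyperparameter σ. The following is a minimal NumPy sketch of that idea for a single-channel feature map. The function name is invented for illustration, and applying the σ constraint as a simple division is an assumption; the paper's exact formulation may differ.

    # Sketch of Multiactivation Pooling (MAP) on one 2-D feature map.
    # Assumption: the "constrain by sigma" step is modeled as dividing
    # the top-k sum by sigma; this is illustrative, not the paper's spec.
    import numpy as np

    def multiactivation_pool(feature_map, pool_size=4, k=3, sigma=3.0):
        """Pool with non-overlapping pool_size x pool_size windows.

        feature_map : 2-D array (H, W), H and W divisible by pool_size
        pool_size   : side of the square pooling region (4, 8, 16, ...)
        k           : number of largest activations kept per window
        sigma       : hyperparameter constraining the summed activations
        """
        h, w = feature_map.shape
        out_h, out_w = h // pool_size, w // pool_size
        pooled = np.empty((out_h, out_w), dtype=feature_map.dtype)
        for i in range(out_h):
            for j in range(out_w):
                window = feature_map[i * pool_size:(i + 1) * pool_size,
                                     j * pool_size:(j + 1) * pool_size].ravel()
                topk = np.sort(window)[-k:]        # k largest activations
                pooled[i, j] = topk.sum() / sigma  # assumed constraint form
        return pooled

    # Example: a 16x16 map pooled with 4x4 windows gives a 4x4 output.
    fmap = np.random.rand(16, 16).astype(np.float32)
    print(multiactivation_pool(fmap, pool_size=4, k=3, sigma=3.0).shape)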

Bibliographic Details
Main Authors: Qi Zhao, Shuchang Lyu, Boxue Zhang, Wenquan Feng
Format: Article
Language: English
Published: Hindawi-Wiley, 2018-01-01
Series: Wireless Communications and Mobile Computing
Online Access: http://dx.doi.org/10.1155/2018/8196906
Record ID: doaj-b7472e32a3364343b0adb4e7c92a43f3
Author Affiliation: School of Electronics and Information Engineering, Beihang University, Beijing 100191, China (all four authors)
Collection: DOAJ
ISSN: 1530-8669, 1530-8677