Geometric Regularization of Local Activations for Knowledge Transfer in Convolutional Neural Networks
In this work, we propose a mechanism for knowledge transfer between Convolutional Neural Networks via the geometric regularization of local features produced by the activations of convolutional layers. We formulate appropriate loss functions that drive a “student” model to adapt such that its local features exhibit geometric characteristics similar to those of an “instructor” model at corresponding layers. The investigated functions, inspired by manifold-to-manifold distance measures, are designed to compare the neighborhood information inside the feature space of the involved activations without any restriction on the features’ dimensionality, thus enabling knowledge transfer between different architectures. Experimental evidence demonstrates that the proposed technique is effective in different settings, including knowledge transfer to smaller models, transfer between different deep architectures, and harnessing knowledge from external data, producing models with higher accuracy than typical training. Furthermore, the results indicate that the presented method can work synergistically with methods such as knowledge distillation, further increasing the accuracy of the trained models. Finally, experiments on training with limited data show that a combined regularization scheme can achieve, with 50% of the data, the same generalization as non-regularized training in the CIFAR-10 classification task.
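The core idea described in the abstract — matching the neighborhood geometry of local activations rather than the activations themselves — can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: the function names, the Gaussian affinity kernel, and the mean-squared comparison are illustrative assumptions. Because the comparison operates on N×N affinity matrices, the student and instructor feature dimensionalities need not match.

```python
import numpy as np

def affinity_matrix(feats, sigma=1.0):
    """Pairwise Gaussian affinities between local feature vectors.

    feats: (N, D) array of N local feature vectors (e.g. one per
    spatial location of a convolutional activation map), D channels.
    The result is (N, N) and independent of D, which is what makes
    the student/instructor comparison dimension-agnostic.
    """
    sq = np.sum(feats ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    d2 = np.maximum(d2, 0.0)  # guard against tiny negative values
    return np.exp(-d2 / (2.0 * sigma ** 2))

def geometry_loss(student_feats, instructor_feats, sigma=1.0):
    """Mean squared difference between the two affinity matrices."""
    a_student = affinity_matrix(student_feats, sigma)
    a_instructor = affinity_matrix(instructor_feats, sigma)
    return float(np.mean((a_student - a_instructor) ** 2))
```

Note that the loss vanishes whenever the two feature sets share the same pairwise distances (e.g. under a rotation of the feature space), even though the raw activations differ, so only the geometric structure is transferred.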
Main Authors: | Ilias Theodorakopoulos, Foteini Fotopoulou, George Economou |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-08-01 |
Series: | Information |
Subjects: | manifold regularization; knowledge transfer; knowledge distillation; deep learning with limited data |
Online Access: | https://www.mdpi.com/2078-2489/12/8/333 |
id |
doaj-e6e2a79899164d4987883f6fc14027a1 |
record_format |
Article |
spelling |
Ilias Theodorakopoulos (Department of Physics, University of Patras, 26504 Rion, Greece), Foteini Fotopoulou (Department of Computer Engineering & Informatics, University of Patras, 26504 Rion, Greece), George Economou (Department of Physics, University of Patras, 26504 Rion, Greece). Geometric Regularization of Local Activations for Knowledge Transfer in Convolutional Neural Networks. Information, vol. 12, no. 8, article 333, MDPI AG, 2021-08-01. ISSN 2078-2489. DOI: 10.3390/info12080333. https://www.mdpi.com/2078-2489/12/8/333 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Ilias Theodorakopoulos, Foteini Fotopoulou, George Economou |
title |
Geometric Regularization of Local Activations for Knowledge Transfer in Convolutional Neural Networks |
publisher |
MDPI AG |
series |
Information |
issn |
2078-2489 |
publishDate |
2021-08-01 |
description |
In this work, we propose a mechanism for knowledge transfer between Convolutional Neural Networks via the geometric regularization of local features produced by the activations of convolutional layers. We formulate appropriate loss functions that drive a “student” model to adapt such that its local features exhibit geometric characteristics similar to those of an “instructor” model at corresponding layers. The investigated functions, inspired by manifold-to-manifold distance measures, are designed to compare the neighborhood information inside the feature space of the involved activations without any restriction on the features’ dimensionality, thus enabling knowledge transfer between different architectures. Experimental evidence demonstrates that the proposed technique is effective in different settings, including knowledge transfer to smaller models, transfer between different deep architectures, and harnessing knowledge from external data, producing models with higher accuracy than typical training. Furthermore, the results indicate that the presented method can work synergistically with methods such as knowledge distillation, further increasing the accuracy of the trained models. Finally, experiments on training with limited data show that a combined regularization scheme can achieve, with 50% of the data, the same generalization as non-regularized training in the CIFAR-10 classification task. |
topic |
manifold regularization; knowledge transfer; knowledge distillation; deep learning with limited data |
url |
https://www.mdpi.com/2078-2489/12/8/333 |