Biological batch normalisation: How intrinsic plasticity improves learning in deep neural networks.

In this work, we present a local intrinsic rule that we developed, dubbed IP, inspired by the Infomax rule. Like Infomax, this rule works by controlling the gain and bias of a neuron to regulate its firing rate. We discuss the biological plausibility of the IP rule and compare it to batch normalisation. We demonstrate that the IP rule improves learning in deep networks and provides networks with considerable robustness to increases in synaptic learning rates. We also sample the error gradients during learning and show that the IP rule substantially increases the size of the gradients over the course of learning, which suggests that the IP rule solves the vanishing gradient problem. A supplementary analysis derives the equilibrium solutions to which the neuronal gain and bias converge under our IP rule. A further analysis demonstrates that, on a fixed input distribution, the IP rule yields neuronal information potential similar to that of Infomax. We also show that batch normalisation improves information potential, suggesting that this may be a cause of the efficacy of batch normalisation, an open problem at the time of this writing.
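
The abstract describes the mechanism only at a high level, so the sketch below is purely illustrative: batch normalisation is written in its usual textbook form, while ip_like_step is a hypothetical placeholder for "a local rule that adjusts each neuron's own gain and bias toward target output statistics". It is not the IP update derived in the paper; the function names, targets, and learning rate eta are assumptions made here for illustration.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Standard batch normalisation: standardise each unit over the mini-batch
    (axis 0), then apply a learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def ip_like_step(a, gain, bias, target_mean=0.0, target_var=1.0, eta=1e-3):
    """Hypothetical intrinsic-plasticity-style step (NOT the paper's IP rule):
    each neuron nudges its own gain and bias so that its output mean and
    variance drift toward fixed targets, using only locally available signals."""
    y = gain * a + bias
    bias = bias - eta * (y.mean(axis=0) - target_mean)        # recentre output
    gain = gain - eta * (y.var(axis=0) - target_var) * gain   # rescale output
    return gain, bias

# Both mechanisms standardise pre-activations; batch normalisation does it with
# explicit mini-batch statistics, the IP-style rule through slow adaptation of
# per-neuron gain and bias parameters.
rng = np.random.default_rng(0)
a = rng.normal(loc=2.0, scale=3.0, size=(64, 10))   # pre-activations, batch of 64
print(batch_norm(a, gamma=np.ones(10), beta=np.zeros(10)).std(axis=0)[:3])
gain, bias = np.ones(10), np.zeros(10)
for _ in range(2000):
    gain, bias = ip_like_step(a, gain, bias)
print((gain * a + bias).std(axis=0)[:3])   # output std drifts toward 1
```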

Bibliographic Details
Main Authors: Nolan Peter Shaw, Tyler Jackson, Jeff Orchard
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2020-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0238454