Estimating Information Flow in Deep Neural Networks

Bibliographic Details
Main Authors: Goldfeld, Ziv (Author), van den Berg, Ewout (Author), Greenewald, Kristjan (Author), Melnyk, Igor (Author), Nguyen, Nam (Author), Kingsbury, Brian (Author), Polyanskiy, Yury (Author)
Other Authors: MIT-IBM Watson AI Lab (Contributor)
Format: Article
Language:English
Published: 2021-11-05T14:29:09Z.
Description
Summary: Copyright © 2019 ASME. We study the estimation of the mutual information I(X; Tℓ) between the input X to a deep neural network (DNN) and the output vector Tℓ of its ℓth hidden layer (an "internal representation"). Focusing on feedforward networks with fixed weights and noisy internal representations, we develop a rigorous framework for accurate estimation of I(X; Tℓ). By relating I(X; Tℓ) to information transmission over additive white Gaussian noise channels, we reveal that compression, i.e., reduction in I(X; Tℓ) over the course of training, is driven by progressive geometric clustering of the representations of samples from the same class. Experimental results verify this connection. Finally, we shift focus to purely deterministic DNNs, where I(X; Tℓ) is provably vacuous, and show that these models nevertheless cluster inputs belonging to the same class. The binning-based approximation of I(X; Tℓ) employed in past works to measure compression is identified as a measure of clustering, clarifying that those experiments were in fact tracking the same clustering phenomenon. Leveraging the clustering perspective, we provide new evidence that compression and generalization may not be causally related, and discuss directions for future research.
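The binning-based approximation discussed in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name, the equal-width per-coordinate binning scheme, and the bin count are assumptions. The key point the abstract makes is visible in the code: when the network is deterministic and inputs are distinct, H(T | X) = 0, so the binned quantity reduces to the entropy of the discretized representations, which shrinks as same-class samples cluster into fewer bins.

```python
import numpy as np

def binned_mutual_information(x_labels, t, n_bins=30):
    """Approximate I(X; T) by discretizing each coordinate of the
    hidden representation T into equal-width bins, in the spirit of
    earlier information-plane experiments.

    x_labels : 1-D array identifying each (distinct) input sample
    t        : (n_samples, dim) array of hidden-layer outputs
    """
    n = len(x_labels)
    # Equal-width bins spanning the observed range of T.
    edges = np.linspace(t.min(), t.max(), n_bins + 1)
    binned = np.digitize(t, edges[1:-1])  # bin index per coordinate
    # Treat each binned row as one discrete symbol.
    _, t_ids = np.unique(binned, axis=0, return_inverse=True)

    # H(T_binned)
    _, counts_t = np.unique(t_ids, return_counts=True)
    p_t = counts_t / n
    h_t = -np.sum(p_t * np.log2(p_t))

    # H(T_binned | X): zero when T is a deterministic function of X,
    # so the binned "mutual information" then equals H(T_binned) and
    # measures how spread out (un-clustered) the representations are.
    per_input = {}
    for x, tid in zip(x_labels, t_ids):
        per_input.setdefault(x, []).append(tid)
    h_t_given_x = 0.0
    for tids in per_input.values():
        _, c = np.unique(tids, return_counts=True)
        p = c / len(tids)
        h_t_given_x += (len(tids) / n) * -np.sum(p * np.log2(p))

    return h_t - h_t_given_x

# Spread-out representations: each sample lands in its own bin,
# so the estimate is H(T) = log2(4) = 2 bits.
x = np.array([0, 1, 2, 3])
t_spread = np.array([[0.0], [1.0], [2.0], [3.0]])
mi_spread = binned_mutual_information(x, t_spread, n_bins=4)

# Fully clustered representations: every sample falls in one bin,
# so the estimate collapses to 0 bits.
t_clustered = np.zeros((4, 1))
mi_clustered = binned_mutual_information(x, t_clustered, n_bins=4)
```

This toy run illustrates the paper's point: the binned quantity tracks geometric clustering of representations rather than a true (here vacuous) mutual information.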