Voxel-to-voxel predictive models reveal unexpected structure in unexplained variance
Main Authors: Maggie Mae Mell, Ghislain St-Yves, Thomas Naselaris (corresponding author); Department of Neuroscience, Medical University of South Carolina, Charleston, SC, USA
Format: Article
Language: English
Published: Elsevier, 2021-09-01
Series: NeuroImage, Vol. 238, Art. 118266
ISSN: 1095-9572
Online Access: http://www.sciencedirect.com/science/article/pii/S1053811921005425
Description:

Encoding models based on deep convolutional neural networks (DCNNs) predict BOLD responses to natural scenes in the human visual system more accurately than many other currently available models. However, DCNN-based encoding models fail to predict a significant amount of variance in the activity of most voxels in all visual areas. This failure could reflect limitations in the data (e.g., a noise ceiling), or could reflect limitations of the DCNN as a model of computation in the brain. Understanding the source and structure of the unexplained variance could therefore provide helpful clues for improving models of brain computation. Here, we characterize the structure of the variance that DCNN-based encoding models cannot explain. Using a publicly available dataset of BOLD responses to natural scenes, we determined whether the source of unexplained variance was shared across voxels, individual brains, retinotopic locations, and hierarchically distant visual brain areas. We answered these questions using voxel-to-voxel (vox2vox) models that predict activity in a target voxel given activity in a population of source voxels. We found that simple linear vox2vox models increased within-subject prediction accuracy over DCNN-based models for any pair of source/target visual areas, clearly demonstrating that the source of unexplained variance is widely shared within and across visual brain areas. However, vox2vox models were not more accurate than DCNN-based encoding models when source and target voxels came from different brains, demonstrating that the source of unexplained variance was not shared across brains. Importantly, control analyses demonstrated that the source of unexplained variance was not encoded in the mean activity of source voxels, or in the activity of voxels in white matter. Interestingly, the weights of vox2vox models revealed preferential connection of target voxel activity to source voxels with adjacent receptive fields, even when source and target voxels were in different functional brain areas. Finally, we found that the prediction accuracy of the vox2vox models decayed with hierarchical distance between the source and target voxels, but showed detailed patterns of dependence on hierarchical relationships that we did not observe in DCNNs. Given these results, we argue that the structured variance unexplained by DCNN-based encoding models is unlikely to be entirely caused by non-neural artifacts (e.g., spatially correlated measurement noise) or a failure of DCNNs to approximate the features encoded in brain activity; rather, our results point to a need for brain models that provide both mechanistic and computational explanations for structured ongoing activity in the brain.

Keywords: fMRI, encoding models, deep neural networks, functional connectivity
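The abstract describes simple linear vox2vox models that predict a target voxel's activity from the activity of a population of source voxels, with accuracy evaluated on held-out trials. The sketch below illustrates that idea on synthetic data, assuming a ridge-regularized linear regression fit with scikit-learn; the array names, shapes, and regularization choice are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' code): a linear vox2vox model that predicts
# a target voxel's responses from a population of source voxels, with ridge
# regularization and held-out prediction accuracy reported as Pearson r.
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(0)

# Hypothetical data: trial-by-voxel responses for a source area (e.g., V1)
# and a single target voxel in another area (e.g., V2). Shapes are illustrative.
n_trials, n_source_voxels = 1000, 200
X = rng.standard_normal((n_trials, n_source_voxels))                # source voxel activity
y = X[:, :10].mean(axis=1) + 0.5 * rng.standard_normal(n_trials)    # target voxel activity

# Split trials into a training set and a held-out validation set.
n_train = 800
X_train, X_val = X[:n_train], X[n_train:]
y_train, y_val = y[:n_train], y[n_train:]

# Fit the regularized linear vox2vox model; the ridge penalty is selected by
# cross-validation over a small grid of candidate values.
model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, y_train)

# Prediction accuracy on held-out trials: correlation between predicted and
# measured target-voxel responses.
y_pred = model.predict(X_val)
accuracy = np.corrcoef(y_pred, y_val)[0, 1]
print(f"held-out prediction accuracy (r) = {accuracy:.3f}")
```

In the study this kind of accuracy would be compared against that of a DCNN-based encoding model for the same target voxel, and the fitted weights could be inspected to ask which source voxels (e.g., those with adjacent receptive fields) contribute most to the prediction.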