Shared component analysis

This paper proposes Shared Component Analysis (SCA) as an alternative to Principal Component Analysis (PCA) for the purpose of dimensionality reduction of neuroimaging data. The trend towards larger numbers of recording sensors, pixels or voxels leads to richer data, with finer spatial resolution, but it also inflates the cost of storage and computation and the risk of overfitting. PCA can be used to select a subset of orthogonal components that explain a large fraction of variance in the data. This implicitly equates variance with relevance, and for neuroimaging data such as electroencephalography (EEG) or magnetoencephalography (MEG) that assumption may be inappropriate if (latent) sources of interest are weak relative to competing sources. SCA instead assumes that components that contribute to observable signals on multiple sensors are of likely interest, as may be the case for deep sources within the brain as a result of current spread. In SCA, steps of normalization and PCA are applied iteratively, linearly transforming the data such that components more widely shared across channels appear first in the component series. The paper explains the motivation, defines the algorithm, evaluates the outcome, and sketches a wider strategy for dimensionality reduction of which this algorithm is an example. SCA is intended as a plug-in replacement for PCA for the purpose of dimensionality reduction.


Bibliographic Details
Main Author: Alain de Cheveigné
Format: Article
Language: English
Published: Elsevier, 2021-02-01
Series: NeuroImage, Vol. 226, Article 117614 (ISSN 1095-9572)
Online Access: http://www.sciencedirect.com/science/article/pii/S1053811920310995
Author Affiliations: Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, France; Département d’Etudes Cognitives, Ecole Normale Supérieure PSL, France; UCL Ear Institute, United Kingdom
Correspondence: LSP, DEC, ENS, 29 rue d’Ulm, 75230, Paris, France