MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis

Interpretability has emerged as a crucial aspect of building trust in machine learning systems, aimed at providing insights into the working of complex neural networks that are otherwise opaque to a user. A plethora of existing solutions address various aspects of interpretability, ranging from identifying prototypical samples in a dataset to explaining image predictions or explaining misclassifications. While these diverse techniques address seemingly different aspects of interpretability, we hypothesize that a large family of interpretability tasks are variants of the same central problem: identifying relative change in a model's prediction. This paper introduces MARGIN, a simple yet general approach to address a large set of interpretability tasks. MARGIN exploits ideas rooted in graph signal analysis to determine influential nodes in a graph, which are defined as those nodes that maximally describe a function defined on the graph. By carefully defining task-specific graphs and functions, we demonstrate that MARGIN outperforms existing approaches in a number of disparate interpretability challenges.
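
To make the abstract's recipe concrete (build a task-specific graph, define a function on its nodes, and rank nodes by how strongly they describe that function), here is a minimal illustrative sketch in Python. It is a hedged reading of the description, not the paper's exact algorithm: it builds a k-nearest-neighbour graph over sample features, treats a task-specific per-sample quantity f as a graph signal, and uses the graph Laplacian as a high-pass filter so that nodes where f varies most sharply against their neighbourhood score highest. The helper name margin_influence, the Gaussian edge weights, and the median-distance bandwidth are all illustrative assumptions.

    import numpy as np
    from scipy.spatial.distance import cdist

    def margin_influence(X, f, k=10):
        """Rank samples by how strongly they describe a graph signal.

        X : (n, d) array of sample features (e.g., latent embeddings).
        f : (n,) array, a task-specific function on the nodes
            (e.g., per-sample prediction confidence).
        k : number of neighbours used to build the graph.
        """
        n = X.shape[0]
        D = cdist(X, X)                          # pairwise distances
        sigma = np.median(D)                     # bandwidth for edge weights
        W = np.zeros((n, n))
        for i in range(n):
            nbrs = np.argsort(D[i])[1:k + 1]     # k nearest, skipping self
            W[i, nbrs] = np.exp(-D[i, nbrs] ** 2 / (2 * sigma ** 2))
        W = np.maximum(W, W.T)                   # symmetrize the adjacency
        L = np.diag(W.sum(axis=1)) - W           # combinatorial graph Laplacian
        # The Laplacian acts as a high-pass filter: (L f)_i is large where f
        # at node i deviates sharply from its neighbourhood average, i.e.,
        # where the node is most "influential" in describing the function.
        return np.abs(L @ f)

As one plausible instantiation, setting X to a classifier's penultimate-layer embeddings and f to its per-sample confidence would surface samples whose confidence disagrees with that of similar samples, a possible route to the misclassification and adversarial-attack use cases the abstract mentions.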

Bibliographic Details
Main Authors: Rushil Anirudh, Jayaraman J. Thiagarajan, Rahul Sridhar, Peer-Timo Bremer
Author Affiliations: Center for Applied Scientific Computing (CASC), Lawrence Livermore National Laboratory, Livermore, CA, United States (Anirudh, Thiagarajan, Bremer); Walmart Labs, California, CA, United States (Sridhar)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-05-01
Series: Frontiers in Big Data
ISSN: 2624-909X
DOI: 10.3389/fdata.2021.589417
Subjects: graph signal processing; interpretability; influence sampling; adversarial attacks; machine learning
Online Access: https://www.frontiersin.org/articles/10.3389/fdata.2021.589417/full