A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks

There are various pixel-based interpretation methods, such as saliency maps, gradient×input, DeepLIFT, and integrated-gradient-n. However, it is difficult to compare their performance because the comparison involves human cognitive processes. We propose a metric that quantifies the distance from the importance scores produced by an interpretation method to human intuition. We create a new dataset by adding a simple, small image, called a stamp, to the original images. We then compute the importance scores that deep neural networks assign when classifying the stamped and regular images. Ideally, a pixel-based interpretation should single out the stamp pixels. Previous approaches to comparing interpretation methods are useful only when the importance scores share the same scale. In contrast, we standardize the importance scores and define a measure of their distance to the ideal scores. Our proposed metric quantitatively measures how close the interpretation methods are to human intuition.
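As a rough illustration of the idea described in the abstract — standardize each method's importance scores so their scales are comparable, then measure the distance to an ideal score map that highlights only the stamp pixels — here is a minimal sketch. The function name, z-score standardization, and Euclidean distance are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def distance_to_ideal(importance, stamp_mask):
    """Standardize pixel importance scores and measure their distance to an
    idealized score map that is high exactly on the stamp pixels.

    Assumption: z-score standardization and a size-normalized Euclidean
    distance; the paper's actual measure may differ."""
    scores = importance.astype(float).ravel()
    scores = (scores - scores.mean()) / scores.std()   # remove scale differences
    ideal = stamp_mask.astype(float).ravel()           # 1 on stamp, 0 elsewhere
    ideal = (ideal - ideal.mean()) / ideal.std()
    # Per-pixel Euclidean distance between the method's scores and the ideal.
    return float(np.linalg.norm(scores - ideal) / np.sqrt(scores.size))

# Example: a 4x4 image with a 2x2 "stamp" in the top-left corner.
np.random.seed(0)
mask = np.zeros((4, 4))
mask[:2, :2] = 1
perfect = distance_to_ideal(mask, mask)               # marks exactly the stamp
noisy = distance_to_ideal(np.random.rand(4, 4), mask) # random importance scores
```

A method whose (standardized) scores coincide with the ideal map gets distance 0, so lower distances indicate closer agreement with human intuition about where the stamp is.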


Bibliographic Details
Main Authors: Jay Hoon Jung, Youngmin Kwon
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Interpretation of neural networks; explanation of neural networks; causality of neural networks
Online Access: https://ieeexplore.ieee.org/document/9268152/
id doaj-7100d74279db404eaf30caac040432d2
record_format Article
spelling doaj-7100d74279db404eaf30caac040432d2
last_indexed 2021-03-30T04:45:53Z
language eng
publisher IEEE
series IEEE Access
issn 2169-3536
publishDate 2020-01-01
volume 8
pages 221433-221441
doi 10.1109/ACCESS.2020.3040349
ieee_document 9268152
title A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
author Jay Hoon Jung (https://orcid.org/0000-0002-9495-0693), Computer Science Department, The State University of New York at Korea, Incheon, South Korea
author Youngmin Kwon (https://orcid.org/0000-0002-5853-5943), Computer Science Department, The State University of New York at Korea, Incheon, South Korea
description There are various pixel-based interpretation methods, such as saliency maps, gradient×input, DeepLIFT, and integrated-gradient-n. However, it is difficult to compare their performance because the comparison involves human cognitive processes. We propose a metric that quantifies the distance from the importance scores produced by an interpretation method to human intuition. We create a new dataset by adding a simple, small image, called a stamp, to the original images. We then compute the importance scores that deep neural networks assign when classifying the stamped and regular images. Ideally, a pixel-based interpretation should single out the stamp pixels. Previous approaches to comparing interpretation methods are useful only when the importance scores share the same scale. In contrast, we standardize the importance scores and define a measure of their distance to the ideal scores. Our proposed metric quantitatively measures how close the interpretation methods are to human intuition.
url https://ieeexplore.ieee.org/document/9268152/
topic Interpretation of neural networks; explanation of neural networks; causality of neural networks
collection DOAJ
language English
format Article
sources DOAJ
author Jay Hoon Jung
Youngmin Kwon
spellingShingle Jay Hoon Jung
Youngmin Kwon
A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
IEEE Access
Interpretation of neural networks
explanation of neural networks
causality of neural networks
author_facet Jay Hoon Jung
Youngmin Kwon
author_sort Jay Hoon Jung
title A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
title_short A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
title_full A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
title_fullStr A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
title_full_unstemmed A Metric to Compare Pixel-Wise Interpretation Methods for Neural Networks
title_sort metric to compare pixel-wise interpretation methods for neural networks
publisher IEEE
series IEEE Access
issn 2169-3536
publishDate 2020-01-01
description There are various pixel-based interpretation methods, such as saliency maps, gradient×input, DeepLIFT, and integrated-gradient-n. However, it is difficult to compare their performance because the comparison involves human cognitive processes. We propose a metric that quantifies the distance from the importance scores produced by an interpretation method to human intuition. We create a new dataset by adding a simple, small image, called a stamp, to the original images. We then compute the importance scores that deep neural networks assign when classifying the stamped and regular images. Ideally, a pixel-based interpretation should single out the stamp pixels. Previous approaches to comparing interpretation methods are useful only when the importance scores share the same scale. In contrast, we standardize the importance scores and define a measure of their distance to the ideal scores. Our proposed metric quantitatively measures how close the interpretation methods are to human intuition.
topic Interpretation of neural networks
explanation of neural networks
causality of neural networks
url https://ieeexplore.ieee.org/document/9268152/
work_keys_str_mv AT jayhoonjung ametrictocomparepixelwiseinterpretationmethodsforneuralnetworks
AT youngminkwon ametrictocomparepixelwiseinterpretationmethodsforneuralnetworks
AT jayhoonjung metrictocomparepixelwiseinterpretationmethodsforneuralnetworks
AT youngminkwon metrictocomparepixelwiseinterpretationmethodsforneuralnetworks
_version_ 1724181227574919168