Counterfactual Explanation of Brain Activity Classifiers Using Image-To-Image Transfer by Generative Adversarial Network
Deep neural networks (DNNs) can accurately decode task-related information from brain activations. However, because of the non-linearity of DNNs, it is generally difficult to explain how and why they assign certain behavioral tasks to given brain activations, either correctly or incorrectly. One of...
| Main Authors: | Chikazoe, J. (Author), Jimura, K. (Author), Matsui, T. (Author), Pham, T.Q. (Author), Taki, M. (Author) |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Frontiers Media S.A., 2022 |
| Online Access: | View Fulltext in Publisher |
Similar Items
- Neural Image Compression and Explanation
  by: Xiang Li, et al.
  Published: (2020-01-01)
- A Survey of Contrastive and Counterfactual Explanation Generation Methods for Explainable Artificial Intelligence
  by: Ilia Stepin, et al.
  Published: (2021-01-01)
- A Framework and Benchmarking Study for Counterfactual Generating Methods on Tabular Data
  by: Raphael Mazzine Barbosa de Oliveira, et al.
  Published: (2021-08-01)
- In Search of Trustworthy and Transparent Intelligent Systems With Human-Like Cognitive and Reasoning Capabilities
  by: Nikhil R. Pal
  Published: (2020-06-01)
- Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research
  by: Yi-han Sheu, et al.
  Published: (2020-10-01)