Human Understandable Interpretation of Deep Neural Networks Decisions Using Generative Models
Deep Neural Networks have long been considered black-box systems, and their lack of interpretability is a concern when they are applied in safety-critical systems. In this work, a novel approach to interpreting the decisions of DNNs is proposed. The approach depends on exploiting generative models and the interpre...
| Main Author: | Alabdallah, Abdallah |
| --- | --- |
| Format: | Others |
| Language: | English |
| Published: | Högskolan i Halmstad, Halmstad Embedded and Intelligent Systems Research (EIS), 2019 |
| Online Access: | http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-41035 |
Similar Items

- Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research
  by: Yi-han Sheu, et al.
  Published: (2020-10-01)
- Decontextualized learning for interpretable hierarchical representations of visual patterns
  by: Robert Ian Etheredge, et al.
  Published: (2021-02-01)
- Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks: An exploration into explainable AI and potential applications within cancer detection
  by: Hammarström, Tobias
  Published: (2020)
- New unified insights on deep learning in radiological and pathological images: Beyond quantitative performances to qualitative interpretation
  by: Yoichi Hayashi
  Published: (2020-01-01)
- Identification of Abnormal Movements in Infants: A Deep Neural Network for Body Part-Based Prediction of Cerebral Palsy
  by: Dimitrios Sakkos, et al.
  Published: (2021-01-01)