Neural Image Compression and Explanation
Explaining the predictions of deep neural networks (DNNs) and semantic image compression are two active research areas of deep learning with numerous applications in decision-critical systems, such as surveillance cameras, drones and self-driving cars, where interpretable decision is critical an...
| Main Authors: | Xiang Li, Shihao Ji |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2020-01-01 |
| Series: | IEEE Access |
| Online Access: | https://ieeexplore.ieee.org/document/9273007/ |
Similar Items
- Local Interpretable Model-Agnostic Explanations for Classification of Lymph Node Metastases
  by: Iam Palatnik de Sousa, et al.
  Published: (2019-07-01)
- Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research
  by: Yi-han Sheu, et al.
  Published: (2020-10-01)
- In Search of Trustworthy and Transparent Intelligent Systems With Human-Like Cognitive and Reasoning Capabilities
  by: Nikhil R. Pal
  Published: (2020-06-01)
- Explainable Deep Learning Models in Medical Image Analysis
  by: Amitojdeep Singh, et al.
  Published: (2020-06-01)
- Towards Explainable Decision-making Strategies of Deep Convolutional Neural Networks: An exploration into explainable AI and potential applications within cancer detection
  by: Hammarström, Tobias
  Published: (2020)