Towards Interpretable Deep Learning: A Feature Selection Framework for Prognostics and Health Management Using Deep Neural Networks
In the last five years, the inclusion of deep learning algorithms in prognostics and health management (PHM) has improved performance in diagnostics, prognostics, and anomaly detection. However, the lack of interpretability of these models leads to resistance to their deployment. Dee...
Main Authors: Joaquín Figueroa Barraza, Enrique López Droguett, Marcelo Ramos Martins
Format: Article
Language: English
Published: MDPI AG, 2021-09-01
Series: Sensors
Online Access: https://www.mdpi.com/1424-8220/21/17/5888
Similar Items
- Deep Neural Network Feature Selection Approaches for Data-Driven Prognostic Model of Aircraft Engines
  by: Phattara Khumprom, et al.
  Published: (2020-09-01)
- Illuminating the Black Box: Interpreting Deep Neural Network Models for Psychiatric Research
  by: Yi-han Sheu, et al.
  Published: (2020-10-01)
- A Sustainable Deep Learning Framework for Object Recognition Using Multi-Layers Deep Features Fusion and Selection
  by: Muhammad Rashid, et al.
  Published: (2020-06-01)
- A Review on Explainability in Multimodal Deep Neural Nets
  by: Gargi Joshi, et al.
  Published: (2021-01-01)
- Analysis of Features Selected by a Deep Learning Model for Differential Treatment Selection in Depression
  by: Joseph Mehltretter, et al.
  Published: (2020-01-01)