Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging
Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned, and why it makes a specific decision) is the next important challenge that dee...
| Main Authors: | Antoine Pirovano, Hippolyte Heuberger, Sylvain Berlemont, Saïd Ladjal, Isabelle Bloch |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG (2021-02-01) |
| Series: | Machine Learning and Knowledge Extraction |
| Online Access: | https://www.mdpi.com/2504-4990/3/1/12 |
Similar Items

- An Interpretable Deep Learning Model for Automatic Sound Classification
  by: Pablo Zinemanas, et al.
  Published: (2021-04-01)
- Virtual microscopy using whole-slide imaging as an enabler for teledermatopathology: A paired consultant validation study
  by: Ayman Al Habeeb, et al.
  Published: (2012-01-01)
- Machine Learning Interpretability: A Survey on Methods and Metrics
  by: Diogo V. Carvalho, et al.
  Published: (2019-07-01)
- Deep learning-based fully automated differential diagnosis of eyelid basal cell and sebaceous carcinoma using whole slide images
  by: Chen, X., et al.
  Published: (2022)
- Image microarrays (IMA): Digital pathology's missing tool
  by: Jason Hipp, et al.
  Published: (2011-01-01)