Automatic Feature Selection for Improved Interpretability on Whole Slide Imaging
Main Authors: Antoine Pirovano, Hippolyte Heuberger, Sylvain Berlemont, Saïd Ladjal, Isabelle Bloch
Affiliations: Keen Eye, 75012 Paris, France (Pirovano, Heuberger, Berlemont); LTCI, Télécom Paris, Institut Polytechnique de Paris, 91120 Palaiseau, France (Ladjal, Bloch)
Format: Article
Language: English
Published: MDPI AG, 2021-02-01
Series: Machine Learning and Knowledge Extraction, Vol. 3, No. 1, pp. 243–262
ISSN: 2504-4990
DOI: 10.3390/make3010012
Subjects: histopathology; WSI classification; explainability; interpretability; heat-maps
Online Access: https://www.mdpi.com/2504-4990/3/1/12
Description:
Deep learning methods are widely used in medical applications to assist medical doctors in their daily routine. While performance reaches expert level, interpretability (highlighting how and what a trained model learned, and why it makes a specific decision) is the next important challenge that deep learning methods need to address before they can be fully integrated into the medical field. In this paper, we address the question of interpretability in the context of whole slide image (WSI) classification by formalizing the design of WSI classification architectures, and we propose a piece-wise interpretability approach relying on gradient-based methods, feature visualization, and the multiple-instance learning context. After training two WSI classification architectures on the Camelyon-16 WSI dataset, highlighting the discriminative features learned, and validating our approach with pathologists, we propose a novel way of computing slide-level interpretability heat-maps, based on the extracted features, that improves tile-level classification performance. We measure the improvement using the tile-level AUC, which we call the Localization AUC, and show an improvement of more than 0.2. We also validate our results with a RemOve And Retrain (ROAR) measure. Then, after studying the impact of the number of features used for heat-map computation, we propose a corrective approach, relying on the activation colocalization of selected features, that improves the performance and stability of our proposed method.
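To make the pipeline described above more concrete, the following minimal Python sketch illustrates the general idea of building a slide-level heat-map from a subset of selected features and scoring it with a tile-level ("Localization") AUC: features are ranked by an importance score (a stand-in for the gradient-based scores used in the paper), the top-k features are selected, and the per-tile mean activation of those features is used as the heat-map. All function names, array shapes, and the synthetic data are illustrative assumptions; this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): heat-map from selected features
# and tile-level ("Localization") AUC, assuming per-tile embeddings and a
# per-feature importance score are already available.

import numpy as np
from sklearn.metrics import roc_auc_score


def select_top_features(importance, k):
    """Indices of the k features with the largest importance scores."""
    return np.argsort(importance)[::-1][:k]


def heatmap_from_features(tile_embeddings, selected):
    """Per-tile score = mean activation of the selected features."""
    return tile_embeddings[:, selected].mean(axis=1)


def localization_auc(tile_scores, tile_labels):
    """Tile-level ROC AUC of the heat-map against tile annotations."""
    return roc_auc_score(tile_labels, tile_scores)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_tiles, n_features = 500, 128              # hypothetical sizes
    tile_labels = rng.integers(0, 2, n_tiles)   # toy labels: 1 = tumour tile

    # Toy embeddings in which a few features correlate with the tile label.
    embeddings = rng.normal(size=(n_tiles, n_features))
    embeddings[:, :5] += 2.0 * tile_labels[:, None]

    # Stand-in for gradient-based feature importance (here: label correlation).
    importance = np.abs(np.corrcoef(embeddings.T, tile_labels)[-1, :-1])

    selected = select_top_features(importance, k=5)
    scores = heatmap_from_features(embeddings, selected)
    print(f"Localization AUC: {localization_auc(scores, tile_labels):.3f}")
```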