How much off-the-shelf knowledge is transferable from natural images to pathology images?
Deep learning has achieved great success in natural image classification. To overcome data scarcity in computational pathology, recent studies exploit transfer learning to reuse knowledge gained from natural images in pathology image analysis, aiming to build effective pathology image diagnosis models. Since the transferability of knowledge depends heavily on the similarity of the original and target tasks, the significant differences in image content and statistics between pathology images and natural images raise several questions: How much knowledge is transferable? Is the transferred information contributed equally by the pre-trained layers? If not, is there a sweet spot in transfer learning that balances the transferred model's complexity and performance? To answer these questions, this paper proposes a framework to quantify the knowledge gained from a particular layer, conducts an empirical investigation of pathology-image-centered transfer learning, and reports several interesting observations. In particular, compared to the performance baseline obtained by a random-weight model, the transferability of off-the-shelf representations from deep layers depends heavily on the specific pathology image set, whereas the general representations generated by early layers do convey transferred knowledge across various image classification applications. The trade-off between transferable performance and the transferred model's complexity observed in this study encourages further investigation of specific metrics and tools to quantify the effectiveness of transfer learning in the future.
Main Authors: Xingyu Li, Konstantinos N Plataniotis
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2020-01-01
Series: PLoS ONE
ISSN: 1932-6203
Online Access: https://doi.org/10.1371/journal.pone.0240530
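The abstract describes comparing off-the-shelf (frozen) representations taken from individual pre-trained layers against a random-weight baseline on pathology image data. Below is a minimal sketch of that idea, assuming PyTorch/torchvision (>= 0.13), a ResNet-18 backbone, and a linear probe as the downstream classifier; these are illustrative choices, not necessarily the architecture, datasets, or knowledge-gain metric used in the paper.

```python
# Illustrative sketch (not the paper's exact framework): extract frozen features
# at a chosen layer from (a) an ImageNet-pretrained network and (b) a randomly
# initialised copy, then train a linear probe on each feature set. The accuracy
# gap is a rough proxy for the knowledge that layer transfers to pathology data.
import torch
import torch.nn as nn
from torchvision import models

def feature_extractor(pretrained: bool, cut: str = "layer1") -> nn.Module:
    """Return a ResNet-18 truncated after block `cut`, frozen as an off-the-shelf extractor."""
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    backbone = models.resnet18(weights=weights)
    layers = []
    for name, module in backbone.named_children():
        layers.append(module)
        if name == cut:              # "layer1" ~ early/general block, "layer4" ~ deep block
            break
    net = nn.Sequential(*layers, nn.AdaptiveAvgPool2d(1), nn.Flatten())
    for p in net.parameters():       # off-the-shelf: weights stay fixed, only a probe is trained on top
        p.requires_grad = False
    return net.eval()

@torch.no_grad()
def extract_features(net: nn.Module, loader):
    """Run a (hypothetical) pathology image DataLoader through the frozen extractor."""
    feats, labels = [], []
    for x, y in loader:
        feats.append(net(x))
        labels.append(y)
    return torch.cat(feats), torch.cat(labels)

# Pre-trained vs. random-weight baseline at the same cut point:
# pretrained_net = feature_extractor(pretrained=True,  cut="layer1")
# random_net     = feature_extractor(pretrained=False, cut="layer1")
```

Training the same linear probe (e.g., a single `nn.Linear` layer) on features from both extractors and comparing test accuracy on the same pathology split gives a per-layer estimate of how much off-the-shelf knowledge transfers relative to the random-weight baseline.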