Learning what and where to segment: A new perspective on medical image few-shot segmentation
Main Authors:
Format: Article
Language: English
Published: Elsevier B.V., 2023
Online Access: View Fulltext in Publisher · View in Scopus
Summary: Traditional medical image segmentation methods based on deep learning require experts to provide extensive manual delineations for model training. Few-shot learning aims to reduce the dependence on the scale of training data but usually shows poor generalizability to the new target. The trained model tends to favor the training classes rather than being absolutely class-agnostic. In this work, we propose a novel two-branch segmentation network based on unique medical prior knowledge to alleviate the above problem. Specifically, we explicitly introduce a spatial branch to provide the spatial information of the target. In addition, we build a segmentation branch based on the classical encoder–decoder structure in supervised learning and integrate prototype similarity and spatial information as prior knowledge. To achieve effective information integration, we propose an attention-based fusion module (AF) that enables the content interaction of decoder features and prior knowledge. Experiments on an echocardiography dataset and an abdominal MRI dataset show that the proposed model achieves substantial improvements over state-of-the-art methods. Moreover, some results are comparable to those of the fully supervised model. The source code is available at github.com/warmestwind/RAPNet. © 2023 The Author(s)
ISSN: 1361-8415
DOI: 10.1016/j.media.2023.102834
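The summary mentions an attention-based fusion (AF) module that lets decoder features interact with prior knowledge (prototype similarity and spatial information), but this record does not detail its design. As a rough illustration of the general idea only, here is a minimal NumPy sketch of fusing decoder features with prior-knowledge maps via scaled dot-product attention; the function name `attention_fusion`, the projections `w_q`/`w_k`, and the residual-fusion choice are all assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(decoder_feat, prior, w_q, w_k):
    """Hypothetical attention-based fusion of decoder features and prior maps.

    decoder_feat: (N, C) decoder features at N spatial positions
    prior:        (N, C) prior-knowledge features (e.g. prototype
                  similarity + spatial map, flattened), assumed here
    w_q, w_k:     (C, D) learned query/key projections (assumed)
    """
    q = decoder_feat @ w_q                       # queries from decoder features
    k = prior @ w_k                              # keys from prior knowledge
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (N, N) weights
    # Residual fusion: each position aggregates prior content it attends to.
    return decoder_feat + attn @ prior           # (N, C)

# Toy usage with random tensors.
rng = np.random.default_rng(0)
feat = rng.standard_normal((16, 8))
prior = rng.standard_normal((16, 8))
w_q = rng.standard_normal((8, 4))
w_k = rng.standard_normal((8, 4))
fused = attention_fusion(feat, prior, w_q, w_k)
print(fused.shape)  # (16, 8)
```

In a real network the projections would be learned and the fusion would operate on 2-D feature maps inside the decoder; this flattened sketch only shows how attention weights can gate the contribution of prior knowledge at each position.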