Summary: | Melanoma malignancy recognition is a challenging task because of intra-class similarity, natural and clinical artefacts, variation in skin contrast, and the high visual similarity between normal and melanoma-affected skin. To overcome these problems, we propose a novel solution, a “region-extreme convolutional neural network”, for recognizing melanoma malignancy as malignant or benign. Recent work on melanoma malignancy recognition has employed either traditional machine learning techniques based on various handcrafted features or convolutional neural networks (CNNs). However, these models can be trained efficiently only if they localize the melanoma-affected region and learn high-level feature representations from the melanoma lesion to predict malignancy. In this paper, we incorporate this observation and propose a novel “region-extreme convolutional neural network” for melanoma malignancy recognition. The proposed network refines dermoscopy images to eliminate natural and clinical artefacts, localizes the melanoma-affected region, and defines a precise boundary around the melanoma lesion. The delineated lesion is used to generate deep feature maps for model learning with an extreme learning machine (ELM) classifier. The proposed model is evaluated on two challenge datasets (ISIC-2016 and ISIC-2017) and outperforms the ISIC challenge winners, achieving 85% melanoma malignancy recognition on ISIC-2016 and 93% on ISIC-2017, and precisely segmenting the melanoma lesion with an average Jaccard index of 0.93 and a Dice score of 0.94. The region-extreme convolutional neural network has several advantages: it eliminates clinical and natural artefacts from dermoscopic images, precisely localizes and segments the melanoma lesion, and improves melanoma malignancy recognition through feedforward model learning. It achieves a significant performance improvement over existing methods, which makes it adaptable for solving complex medical image analysis problems.
|
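The Jaccard index and Dice score quoted above are the standard overlap metrics between a predicted lesion mask and the ground-truth mask. Below is a minimal sketch of how they are commonly computed, assuming binary NumPy masks; the function names are illustrative and not taken from the paper's own implementation.

```python
import numpy as np

def jaccard_index(pred: np.ndarray, truth: np.ndarray) -> float:
    """Jaccard index (intersection over union) between two binary lesion masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient: twice the intersection divided by the total mask area."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2 * intersection / total if total else 1.0

# Example: two 3x3 masks differing by one pixel.
pred = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 0]])
truth = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])
print(jaccard_index(pred, truth))  # 0.75
print(dice_score(pred, truth))     # ~0.857
```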