Summary: | Computer-aided diagnosis (CAD) is a widely used technique for detecting and diagnosing diseases such as tumors, cancers, and edemas. Several critical retinal diseases, such as diabetic retinopathy (DR), hypertensive retinopathy (HR), macular degeneration, and retinitis pigmentosa (RP), are mainly analyzed through observation of fundus images. Raw fundus images are often of insufficient quality to reveal minor changes directly. To detect and analyze minor changes in the retinal vasculature, or to apply advanced disease-detection algorithms, the fundus image must be enhanced enough to visibly present vessel tortuosity. The performance of deep learning models for diagnosing these critical diseases depends heavily on accurate image segmentation. Retinal-vessel segmentation in particular is highly challenging due to low vessel contrast, varying vessel widths, branching, and vessel crossings. Many retinal-vessel segmentation methods apply contrast enhancement as a pre-processing step, which can introduce noise into an image and affect vessel detection. Recently, numerous studies have applied Contrast Limited Adaptive Histogram Equalization (CLAHE) for contrast enhancement, but with default values for the contextual region and clip limit. In this study, our aim is to improve the performance of both supervised and unsupervised machine learning models for retinal-vessel segmentation by applying modified particle swarm optimization (MPSO) to tune the CLAHE parameters, with a specific focus on optimizing the clip limit and contextual regions. We then assessed the optimized version of CLAHE using standard evaluation metrics, and used the contrast-enhanced images produced by MPSO-based CLAHE to demonstrate its real impact on the performance of a deep learning model for semantic segmentation of retinal images.
The results demonstrate a positive impact on the sensitivity of supervised machine learning models, which is particularly important. By applying the proposed approach to the enhanced retinal images of the publicly available {DRIVE and STARE} databases, we achieved a sensitivity, specificity, and accuracy of {0.8315 and 0.8433}, {0.9750 and 0.9760}, and {0.9620 and 0.9645}, respectively.
|
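The parameter-tuning idea in the summary above can be sketched with a minimal particle swarm optimizer searching over two CLAHE-like parameters (clip limit and contextual-region size). This is an illustrative sketch, not the paper's MPSO: the fitness function is a placeholder stand-in for an image-quality objective, and all names, bounds, and coefficients here are assumptions.

```python
import random

def pso(fitness, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box-constrained parameters.

    bounds: one (low, high) pair per dimension, e.g. CLAHE clip limit and
    contextual-region size (illustrative; the paper's MPSO variant differs).
    Returns the best position found and its fitness value (minimized).
    """
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # each particle's best position
    pbest_val = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Inertia + cognitive pull (own best) + social pull (swarm best)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to its bounds
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = fitness(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# Toy fitness: pretend the best settings are clip limit 3.0 and an 8x8
# contextual region (purely illustrative values, not from the paper).
best, best_val = pso(lambda p: (p[0] - 3.0) ** 2 + (p[1] - 8.0) ** 2,
                     bounds=[(1.0, 10.0), (2.0, 16.0)])
```

In the actual study, the fitness function would score a CLAHE-enhanced fundus image (e.g. by a contrast or entropy metric), so each particle evaluation involves applying CLAHE with the candidate clip limit and contextual-region size.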