Summary: | Master's === National Chiao Tung University === Institute of Electronics === 107 === Diabetic retinopathy (DR) is the primary cause of blindness in the working-age population of the developed world. Diagnosing the disease relies heavily on imaging studies, a time-consuming, manual process performed by trained clinicians. Improving the accuracy and speed of detection can have a significant impact on population health through early diagnosis and intervention. Beyond prevention, tracking the treatment effect in patients with diabetic retinopathy is another crucial issue in personalized healthcare. Motivated by this, we propose a recognition framework based on deep convolutional neural networks. Our recognition system predicts not only the severity level of DR but also the location of symptoms at the pixel level. By combining DR severity levels with the segmented DR symptoms, our system predicts the severity of DR more accurately and could potentially provide an additional measurement for monitoring the progression or regression of retinopathy under therapeutic intervention. For the classification of DR severity levels, the proposed lightweight network, DRNet-cla-v1, improves classification performance in two respects: (1) without any fine-tuning, DRNet-cla-v1, combined with seven other boosting methods, achieves 0.961 and 0.967 AUROC on the Messidor dataset for referable and non-referable screening, outperforming the state of the art (0.921 and 0.957); (2) compared with CKML Net, VNXK, and Zoom-in-Net, DRNet-cla-v1 is more memory efficient, with at least 5.23× fewer total parameters, and has a lower computation cost, with at least 1.24× fewer total FLOPs. For the segmentation of DR symptoms, the proposed network, DRNet-seg-v1, achieves an average AUPRC of 0.6894 on the IDRiD test set, outperforming the state of the art (0.6693). Finally, we use a linear SVM to fuse features extracted from DRNet-cla-v1 and DRNet-seg-v1, achieving an average accuracy of 0.7281 on the IDRiD test set, which also outperforms the state of the art (0.6311).
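
The final fusion step can be illustrated with a minimal sketch. It assumes per-image feature vectors have already been extracted from DRNet-cla-v1 and DRNet-seg-v1; the abstract does not specify feature dimensions, preprocessing, or SVM hyperparameters, so everything below (array shapes, scaling, default regularization) is an illustrative assumption rather than the thesis's exact implementation.

```python
# Sketch of the late-fusion step: per-image features from the classification
# network (DRNet-cla-v1) and the segmentation network (DRNet-seg-v1) are
# concatenated and fed to a linear SVM that predicts the DR severity level.
# Feature arrays are hypothetical placeholders (shapes and scaling assumed).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def fuse_and_classify(cla_train, seg_train, y_train, cla_test, seg_test):
    """Concatenate features from the two networks per image, then train a
    linear SVM on the fused vectors and predict severity levels for the
    test images."""
    X_train = np.concatenate([cla_train, seg_train], axis=1)  # fuse features
    X_test = np.concatenate([cla_test, seg_test], axis=1)
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_train, y_train)
    return clf.predict(X_test)
```

A late-fusion design like this keeps the two networks decoupled: each can be trained and updated independently, and only the lightweight SVM needs retraining when either feature extractor changes.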