LEADER |
01749 am a22002533u 4500 |
001 |
110565 |
042 |
|
|
|a dc
|
100 |
1 |
0 |
|a Leifman, George
|e author
|
710 |
2 |
|
|a Massachusetts Institute of Technology. Media Laboratory
|e contributor
|
710 |
2 |
|
|a Program in Media Arts and Sciences
|q (Massachusetts Institute of Technology)
|e contributor
|
700 |
1 |
0 |
|a Leifman, George
|e contributor
|
700 |
1 |
0 |
|a Swedish, Tristan
|e contributor
|
700 |
1 |
0 |
|a Roesch, Karin
|e contributor
|
700 |
1 |
0 |
|a Raskar, Ramesh
|e contributor
|
700 |
1 |
0 |
|a Swedish, Tristan
|e author
|
700 |
1 |
0 |
|a Roesch, Karin
|e author
|
700 |
1 |
0 |
|a Raskar, Ramesh
|e author
|
245 |
0 |
0 |
|a Leveraging the crowd for annotation of retinal images
|
260 |
|
|
|b Institute of Electrical and Electronics Engineers (IEEE),
|c 2017-07-07T20:35:57Z.
|
856 |
|
|
|z Get fulltext
|u http://hdl.handle.net/1721.1/110565
|
520 |
|
|
|a Medical data presents a number of challenges: it tends to be unstructured, noisy, and protected. To train algorithms to understand medical images, doctors can label the condition associated with a particular image, but obtaining enough labels can be difficult. We propose an annotation approach that starts with a small pool of expert-annotated images and uses them to rate the performance of crowd-sourced annotations. In this paper we demonstrate how to apply our approach to the annotation of large-scale datasets of retinal images. We introduce a novel data validation procedure designed to cope with noisy ground-truth data and inconsistent input from both experts and crowd-workers.
|
546 |
|
|
|a en_US
|
655 |
7 |
|
|a Article
|
773 |
|
|
|t 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)
|