CRF-Based Model for Instrument Detection and Pose Estimation in Retinal Microsurgery

Detection of the instrument tip in retinal microsurgery videos is extremely challenging due to rapid motion, illumination changes, cluttered background, and the deformable shape of the instrument. For the same reasons, frequent tracking failures add the overhead of reinitializing the tracker. In this work, a new method is proposed to localize not only the instrument center point but also its tips and orientation, without the need for manual reinitialization. Our approach models the instrument as a Conditional Random Field (CRF) in which each part of the instrument is detected separately. The relations between these parts are modeled to capture the translation, rotation, and scale changes of the instrument. Tracking is performed via separate detection of the instrument parts and evaluation of confidence through the modeled dependence functions. In case of low-confidence feedback, an automatic recovery process is performed. The algorithm is evaluated on in vivo ophthalmic surgery datasets, and its performance is comparable to state-of-the-art methods, with the advantage that no manual reinitialization is needed.
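
To make the pipeline described in the abstract concrete, the following minimal Python sketch illustrates a generic part-based CRF score (unary detection terms plus pairwise geometric terms) with confidence-gated recovery. It is an illustration only: the part set, potentials, confidence measure, threshold, and recovery rule are assumptions made for the sketch, not the authors' published formulation, and the detectors are stand-ins that return dummy candidates.

# Illustrative sketch only: a generic part-based CRF score with confidence-gated
# recovery. The part names, potentials, and thresholds are assumptions; this is
# NOT the authors' published model, and the detector is a dummy stand-in.
import math
import random

PARTS = ["left_tip", "right_tip", "center"]                 # assumed part set
EDGES = [("left_tip", "center"), ("right_tip", "center")]   # modeled part relations

def detect_part(frame, part, roi=None):
    """Stand-in per-part detector: returns (x, y, unary_score) candidates."""
    random.seed(hash((part, roi)) & 0xFFFF)
    return [(random.uniform(0, 640), random.uniform(0, 480), random.random())
            for _ in range(5)]

def pairwise(p, q, expected_dist=40.0, sigma=15.0):
    """Dependence function on relative geometry (a stand-in for the modeled
    translation/rotation/scale relations between parts)."""
    d = math.hypot(p[0] - q[0], p[1] - q[1])
    return math.exp(-((d - expected_dist) ** 2) / (2.0 * sigma ** 2))

def best_configuration(frame, roi=None):
    """Score all candidate part combinations (fine for tiny candidate sets)."""
    cands = {part: detect_part(frame, part, roi) for part in PARTS}
    best, best_score = None, -1.0
    for l in cands["left_tip"]:
        for r in cands["right_tip"]:
            for c in cands["center"]:
                unary = l[2] + r[2] + c[2]                  # per-part detection scores
                pair = pairwise(l, c) + pairwise(r, c)      # dependence functions
                score = unary + pair
                if score > best_score:
                    best = {"left_tip": l, "right_tip": r, "center": c}
                    best_score = score
    # Crude confidence: best score normalized by the number of model terms.
    return best, best_score / (len(PARTS) + len(EDGES))

def track(frames, conf_threshold=0.5):
    """Detect parts frame by frame; on low confidence, recover by re-detecting
    on the full frame instead of the region around the last pose."""
    roi = None
    for frame in frames:
        pose, conf = best_configuration(frame, roi)
        if conf < conf_threshold:                           # arbitrary threshold
            pose, conf = best_configuration(frame, roi=None)
        roi = pose["center"][:2]                            # search near last center
        yield pose, conf

if __name__ == "__main__":
    for pose, conf in track(frames=range(3)):
        print(round(conf, 2),
              {k: (round(v[0]), round(v[1])) for k, v in pose.items()})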

Bibliographic Details
Main Authors: Mohamed Alsheakhali (Technische Universität München, Munich, Germany), Abouzar Eslami (Carl Zeiss Meditec AG, Munich, Germany), Hessam Roodaki (Technische Universität München, Munich, Germany), Nassir Navab (Technische Universität München, Munich, Germany)
Format: Article
Language: English
Published: Hindawi Limited, 2016-01-01
Series: Computational and Mathematical Methods in Medicine
ISSN: 1748-670X, 1748-6718
Collection: DOAJ
Online Access: http://dx.doi.org/10.1155/2016/1067509