Summary: Diffuse optical tomography (DOT) is an emerging modality that reconstructs the optical properties of a highly scattering medium from measured boundary data. One way to solve DOT and recover the quantities of interest is the inverse problem approach, which requires choosing an optimization algorithm for the iterative approximation of the solution. By the no-free-lunch principle, however, no single optimizer can be expected to dominate across all problems. This paper compares the behavior of three gradient descent-based optimizers on the DOT inverse problem by running randomized simulations and analyzing the generated data, in order to shed light on any significant differences in performance, if they exist at all, among these optimizers in the specific context of DOT. The major practical problem when selecting an optimization algorithm for a production DOT system is ensuring that the algorithm converges reliably to the true solution, runs reasonably fast, and reconstructs images of high quality, i.e., with good localization of the inclusions and good agreement with the true image. In this work, we used carefully designed randomized simulations to tackle the practical problem of choosing the right optimizer with the right parameters for practical DOT applications, and derived statistical results concerning convergence rate, speed, and quality of image reconstruction. The statistical analysis of the generated data and the main results for convergence rate, reconstruction speed, and reconstruction quality across the three optimization algorithms are presented in the paper at hand.
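To make the iterative inverse-problem setup concrete, the following is a minimal sketch of gradient descent on a least-squares data misfit, the generic template that the compared optimizers specialize. It uses a hypothetical linearized forward operator `J` (a random matrix standing in for a Jacobian of a diffusion-equation forward model), a synthetic true parameter vector `mu_true`, and measurements `y`; all names and dimensions here are illustrative assumptions, not the paper's actual solver or data.

```python
import numpy as np

# Hypothetical linearized forward model: boundary data y = J @ mu.
# In a real DOT solver, J would come from a diffusion-equation forward
# model; a random matrix is used here purely for illustration.
rng = np.random.default_rng(0)
n_meas, n_vox = 64, 256
J = rng.standard_normal((n_meas, n_vox))
mu_true = np.zeros(n_vox)
mu_true[100:110] = 1.0            # a small synthetic inclusion
y = J @ mu_true                   # noiseless boundary measurements

def misfit(mu):
    """Least-squares data misfit 0.5 * ||J mu - y||^2."""
    r = J @ mu - y
    return 0.5 * (r @ r)

def gradient(mu):
    """Gradient of the misfit: J^T (J mu - y)."""
    return J.T @ (J @ mu - y)

# Plain gradient descent; momentum or Adam-style optimizers would only
# change how the update step is formed from this gradient.
mu = np.zeros(n_vox)
lr = 1.0 / np.linalg.norm(J, 2) ** 2   # step size 1/L from the Lipschitz constant
for k in range(500):
    mu -= lr * gradient(mu)

print(f"final misfit: {misfit(mu):.3e}")
```

The step size `1/L`, with `L` the squared spectral norm of `J`, guarantees monotone descent for this quadratic objective; the optimizers compared in the paper differ precisely in how they adapt such steps.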