Summary: We present a framework for collaborative localization in groups of micro aerial vehicles (MAVs) that use vision-based sensing. Each vehicle is assumed to be equipped with a monocular camera and to be capable of communicating with the others. The approach is a decentralized algorithm, implemented in a distributed fashion, in which individual and relative pose estimation techniques are combined so that the group can localize against the surrounding environment. The MAVs initially detect and match salient features across their views to create a sparse reconstruction of the observed environment, which acts as a global map. Once a map is available, each MAV individually performs feature detection and tracking, with a robust outlier rejection step, to estimate its own pose in six degrees of freedom. When needed, one or more MAVs can compute the pose of another MAV from relative measurements by exploiting multiple-view geometry. These relative measurements are then fused with the individual measurements in a consistent fashion, yielding more accurate pose estimates. We present results of the algorithm on image data from both simulated and real MAV flights, and discuss how collaborative localization improves pose estimation accuracy.
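The fusion of individual and relative pose estimates can be illustrated with a covariance-weighted (information-form) combination of two independent Gaussian estimates, a standard way to merge such measurements consistently. This is a minimal sketch under that assumption, not the paper's exact fusion scheme; all function and variable names are hypothetical, and a full implementation would fuse rotations on SE(3) rather than averaging Euler angles.

    import numpy as np

    def fuse_pose_estimates(x_own, P_own, x_rel, P_rel):
        """Fuse two independent Gaussian estimates of the same MAV's pose.

        x_own, P_own: the MAV's own 6-DoF pose estimate and covariance.
        x_rel, P_rel: a pose estimate of the same MAV computed by a teammate
                      from relative measurements, with its covariance.
        Returns the information-form fused estimate, whose covariance is
        never larger than either input's.
        """
        info_own = np.linalg.inv(P_own)   # convert covariances to
        info_rel = np.linalg.inv(P_rel)   # information matrices
        P_fused = np.linalg.inv(info_own + info_rel)
        x_fused = P_fused @ (info_own @ x_own + info_rel @ x_rel)
        return x_fused, P_fused

    # Toy usage: pose as [x, y, z, roll, pitch, yaw] (values illustrative).
    x_own = np.array([1.0, 2.0, 3.0, 0.01, 0.02, 0.10])
    x_rel = np.array([1.1, 1.9, 3.05, 0.00, 0.03, 0.12])
    P_own = np.diag([0.04, 0.04, 0.09, 0.01, 0.01, 0.02])
    P_rel = np.diag([0.09, 0.09, 0.04, 0.02, 0.02, 0.01])
    x_fused, P_fused = fuse_pose_estimates(x_own, P_own, x_rel, P_rel)

The weighting means a teammate's relative measurement pulls the fused estimate toward itself only in the dimensions where it is more certain, which is the intuition behind the accuracy gains reported above.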