Summary: We present a novel multiview training framework and convolutional neural network (CNN) architecture for combining information from multiple overlapping satellite images and noisy training labels derived from OpenStreetMap (OSM) to semantically label buildings and roads across large geographic regions (100 km<inline-formula><tex-math notation="LaTeX">$^2$</tex-math></inline-formula>). Our approach to multiview semantic segmentation yields a 4%–7% improvement in the per-class Intersection over Union (IoU) scores compared to traditional approaches that use the views independently of one another. A unique (and perhaps surprising) property of our system is that the modifications added to the tail end of the CNN for learning from the multiview data can be discarded at inference time with only a small penalty in overall performance. This implies that the benefits of training with multiple views are absorbed by all the layers of the network. Additionally, our approach adds only a small overhead in GPU-memory consumption even when training with as many as 32 views per scene. The system we present is automated end-to-end, which facilitates comparing classifiers trained directly on true orthophotos vis-à-vis first training them on the off-nadir images and subsequently translating the predicted labels to geographical coordinates. <italic>With no human supervision</italic>, our IoU scores for the buildings and roads classes are 0.8 and 0.64, respectively, which are better than those of state-of-the-art approaches that use OSM labels and that are not completely automated.
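The per-class IoU scores quoted above can be made concrete with a minimal sketch. The function below is a generic illustration of the metric, not the authors' evaluation code; the pixel lists and class labels are hypothetical.

```python
def per_class_iou(pred, gt, cls):
    """Intersection over Union for one class, given flat per-pixel label lists."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    # Undefined when the class appears in neither prediction nor ground truth.
    return inter / union if union else float("nan")

# Toy example: 0 = background, 1 = building, 2 = road.
pred = [1, 1, 0, 0, 2, 2]
gt   = [1, 0, 0, 0, 2, 2]
print(per_class_iou(pred, gt, 1))  # one true positive, union of two pixels -> 0.5
print(per_class_iou(pred, gt, 2))  # perfect overlap -> 1.0
```

A building IoU of 0.8, as reported in the abstract, thus means the intersection of predicted and OSM-derived building pixels covers 80% of their union.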