Co-Training for Deep Object Detection: Comparing Single-Modal and Multi-Modal Approaches
Top-performing computer vision models are powered by convolutional neural networks (CNNs). Training an accurate CNN depends heavily on both the raw sensor data and the associated ground truth (GT). Collecting such GT is usually done through human labeling, which is time-consuming and does not scale as desired. This data-labeling bottleneck may be intensified by domain shifts among image sensors, which could force per-sensor data labeling. In this paper, we focus on the use of co-training, a semi-supervised learning (SSL) method, for obtaining self-labeled object bounding boxes (BBs), i.e., the GT needed to train deep object detectors. In particular, we assess the effectiveness of multi-modal co-training, which relies on two different views of an image: appearance (RGB) and estimated depth (D). We also compare appearance-based single-modal co-training with its multi-modal counterpart. Our results suggest that in a standard SSL setting (no domain shift, few human-labeled data) and under a virtual-to-real domain shift (many virtual-world labeled data, no human-labeled data), multi-modal co-training outperforms single-modal co-training. In the latter case, when GAN-based domain translation is applied, the two co-training modalities perform on par, at least when using an off-the-shelf depth estimation model not specifically trained on the translated images.
Main Authors: | Jose L. Gómez, Gabriel Villalonga, Antonio M. López |
---|---|
Affiliation: | Computer Vision Center (CVC), Universitat Autònoma de Barcelona (UAB), 08193 Bellaterra, Spain |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2021-05-01 |
Series: | Sensors (ISSN 1424-8220) |
Subjects: | co-training; multi-modality; vision-based object detection; ADAS; self-driving |
Online Access: | https://www.mdpi.com/1424-8220/21/9/3185 |
DOI: | 10.3390/s21093185 |
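The abstract describes the co-training mechanism only at a high level. As a reading aid, here is a minimal, runnable structural sketch of what a two-view (RGB + estimated depth) co-training loop for self-labeling bounding boxes could look like. Everything in it is an illustrative assumption rather than the authors' implementation: `StubDetector` stands in for a real deep detector, and the confidence threshold `tau`, the fixed number of cycles, and the cross-view exchange policy are all placeholders.

```python
# Illustrative sketch of two-view (RGB + depth) co-training for self-labeling
# object detections. Stub detectors make the loop runnable; a real setup would
# fine-tune deep detectors instead.
from dataclasses import dataclass
from typing import Dict, List, Tuple
import random

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in pixels

@dataclass
class Detection:
    box: Box
    score: float  # detector confidence in [0, 1]

class StubDetector:
    """Placeholder for a deep detector trained on one view; train() and
    predict() are stubs so the loop below actually executes."""
    def __init__(self, view: str) -> None:
        self.view = view
        self.pseudo_gt: Dict[int, List[Detection]] = {}

    def train(self, samples: Dict[int, List[Detection]]) -> None:
        # A real detector would be fine-tuned here on the (pseudo-)labels.
        self.pseudo_gt.update(samples)

    def predict(self, image_id: int) -> List[Detection]:
        # Deterministic fake confidence, standing in for real inference.
        rng = random.Random(hash((self.view, image_id, len(self.pseudo_gt))))
        return [Detection(box=(0.0, 0.0, 10.0, 10.0), score=rng.random())]

def co_train(labeled: Dict[int, List[Detection]],
             unlabeled: List[int],
             cycles: int = 5,
             tau: float = 0.8) -> Tuple[StubDetector, StubDetector]:
    """Each cycle, every view self-labels the unlabeled pool; detections
    scoring above tau are handed to the *other* view as pseudo-ground-truth,
    which is the co-training exchange."""
    det_rgb, det_depth = StubDetector("rgb"), StubDetector("depth")
    det_rgb.train(labeled)
    det_depth.train(labeled)
    pool = set(unlabeled)
    for _ in range(cycles):
        for_depth: Dict[int, List[Detection]] = {}
        for_rgb: Dict[int, List[Detection]] = {}
        for img in sorted(pool):
            rgb_hits = [d for d in det_rgb.predict(img) if d.score >= tau]
            depth_hits = [d for d in det_depth.predict(img) if d.score >= tau]
            if rgb_hits:
                for_depth[img] = rgb_hits    # RGB labels teach the depth view
            if depth_hits:
                for_rgb[img] = depth_hits    # depth labels teach the RGB view
        pool -= set(for_depth) | set(for_rgb)  # self-labeled images leave the pool
        det_rgb.train(for_rgb)
        det_depth.train(for_depth)
    return det_rgb, det_depth

if __name__ == "__main__":
    seed = {i: [Detection((0, 0, 5, 5), 1.0)] for i in range(10)}  # tiny labeled set
    rgb, depth = co_train(seed, unlabeled=list(range(10, 100)))
    print(f"images self-labeled for the RGB view: {len(rgb.pseudo_gt) - len(seed)}")
```

The defining co-training move is the exchange inside `co_train`: confident detections from one view become training targets for the other, so the two views can correct each other instead of reinforcing a single model's mistakes, as plain single-model self-training would.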