Exploring big volume sensor data with Vroom

Bibliographic Details
Main Authors: Moll, Oscar (Author), Zalewski, Aaron (Author), Pillai, Sudeep (Author), Madden, Sam (Author), Stonebraker, Michael (Author), Gadepally, Vijay (Author)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language:English
Published: VLDB Endowment, 2021-11-08.
Description
Summary:© 2017 VLDB. State-of-the-art sensors within a single autonomous vehicle (AV) can produce video and LIDAR data at rates greater than 30 GB/hour. Unsurprisingly, even small AV research teams can accumulate tens of terabytes of sensor data from multiple trips and multiple vehicles. AV practitioners would like to extract information about specific locations or specific situations for further study, but are often unable to do so. Queries over AV sensor data differ from generic analytics or spatial queries because they demand reasoning about fields of view as well as heavy computation to extract features from scenes. In this article and demo, we present Vroom, a system for ad hoc queries over AV sensor databases. Vroom combines domain-specific properties of AV datasets with selective indexing and multi-query optimization to address the challenges posed by AV sensor data.
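
The minimal Python sketch below (not Vroom's API; names such as CameraFrame and sees are hypothetical) illustrates the kind of field-of-view reasoning the summary refers to: filtering logged camera frames down to those that could have observed a query location, so that expensive feature extraction runs only on a small candidate set.

import math
from dataclasses import dataclass

@dataclass
class CameraFrame:
    """One logged camera sample: vehicle pose plus sensor parameters (hypothetical schema)."""
    x: float          # vehicle easting (meters)
    y: float          # vehicle northing (meters)
    heading: float    # camera optical-axis direction, radians, CCW from +x
    fov: float        # horizontal field of view, radians
    max_range: float  # distance beyond which detections are unreliable (meters)

def sees(frame: CameraFrame, px: float, py: float) -> bool:
    """Return True if point (px, py) lies inside this frame's field of view."""
    dx, dy = px - frame.x, py - frame.y
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > frame.max_range:
        return False
    # Angle between the camera's optical axis and the ray to the query point,
    # wrapped into [-pi, pi] before taking the absolute value.
    bearing = math.atan2(dy, dx)
    off_axis = abs((bearing - frame.heading + math.pi) % (2 * math.pi) - math.pi)
    return off_axis <= frame.fov / 2

# Example: keep only frames that could have observed a given intersection.
frames = [
    CameraFrame(0.0, 0.0, 0.0, math.radians(90), 50.0),       # facing +x
    CameraFrame(0.0, 0.0, math.pi, math.radians(90), 50.0),   # facing -x
]
intersection = (20.0, 5.0)
candidates = [f for f in frames if sees(f, *intersection)]
print(len(candidates))  # 1: only the first frame faces the intersection

In a real deployment this geometric predicate would be evaluated against an index over vehicle poses rather than a Python list, but the sketch shows why such queries need more than a standard spatial containment test: visibility depends on heading and sensor geometry, not just proximity.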