Bandwidth Modeling of Silicon Retinas for Next Generation Visual Sensor Networks
Silicon retinas, also known as Dynamic Vision Sensors (DVS) or event-based visual sensors, have shown great advantages in terms of low power consumption, low bandwidth, wide dynamic range and very high temporal resolution. Owing to these advantages over conventional vision sensors, DVS devices are gaining more and more attention in applications such as drone surveillance, robotics and high-speed motion photography. The output of such sensors is a sequence of events rather than a series of frames as in classical cameras. Estimating the data rate of the resulting event stream is needed for the appropriate design of transmission systems involving such sensors. In this work, we propose to consider information about the scene content and sensor speed to support such estimation, and we identify suitable metrics to quantify the complexity of the scene for this purpose. According to the results of this study, the event rate shows an exponential relationship with the scene-complexity metric and a linear relationship with the speed of the sensor. Based on these results, we propose a two-parameter model for the dependency of the event rate on scene complexity and sensor speed. The model achieves a prediction accuracy of approximately 88.4% for the outdoor environment and an overall prediction accuracy of approximately 84%.
Main Authors: | Nabeel Khan, Maria G. Martini |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2019-04-01 |
Series: | Sensors |
Subjects: | neuromorphic engineering; dynamic and active-pixel vision sensor; scene complexity; neuromorphic event rate; gradient approximation; scene texture; Sobel; Roberts; Prewitt |
Online Access: | https://www.mdpi.com/1424-8220/19/8/1751 |
doi |
10.3390/s19081751 |
volume |
19 |
issue |
8 |
article_number |
1751 |
author_affiliation |
Wireless and Multimedia Networking Research Group, Faculty of Science, Engineering and Computing, Kingston University, Penrhyn Rd, Kingston upon Thames KT1 2EE, UK (both authors) |
language |
English |
format |
Article |
author |
Nabeel Khan; Maria G. Martini |
title |
Bandwidth Modeling of Silicon Retinas for Next Generation Visual Sensor Networks |
publisher |
MDPI AG |
series |
Sensors |
issn |
1424-8220 |
publishDate |
2019-04-01 |
description |
Silicon retinas, also known as Dynamic Vision Sensors (DVS) or event-based visual sensors, have shown great advantages in terms of low power consumption, low bandwidth, wide dynamic range and very high temporal resolution. Owing to these advantages over conventional vision sensors, DVS devices are gaining more and more attention in applications such as drone surveillance, robotics and high-speed motion photography. The output of such sensors is a sequence of events rather than a series of frames as in classical cameras. Estimating the data rate of the resulting event stream is needed for the appropriate design of transmission systems involving such sensors. In this work, we propose to consider information about the scene content and sensor speed to support such estimation, and we identify suitable metrics to quantify the complexity of the scene for this purpose. According to the results of this study, the event rate shows an exponential relationship with the scene-complexity metric and a linear relationship with the speed of the sensor. Based on these results, we propose a two-parameter model for the dependency of the event rate on scene complexity and sensor speed. The model achieves a prediction accuracy of approximately 88.4% for the outdoor environment and an overall prediction accuracy of approximately 84%. |
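The description reports an exponential dependence of the event rate on the scene-complexity metric and a linear dependence on sensor speed, captured by a two-parameter model. The paper's exact formulation is not reproduced in this record; the sketch below assumes one plausible form, ER(C, v) = a · v · exp(b · C), and shows how the two parameters could be fitted with SciPy's curve_fit. The data values and the helper name event_rate_model are illustrative only.

```python
# Hypothetical sketch: fitting a two-parameter event-rate model of the form
#   ER(C, v) = a * v * exp(b * C)
# where C is a scene-complexity metric, v is the sensor speed, and (a, b)
# are the two free parameters. The functional form is an assumption made
# for illustration; the paper's exact formulation may differ.
import numpy as np
from scipy.optimize import curve_fit

def event_rate_model(X, a, b):
    """Event rate as a function of (complexity, speed) with parameters a, b."""
    complexity, speed = X
    return a * speed * np.exp(b * complexity)

# Toy measurements: scene complexity, sensor speed (e.g. m/s), observed event rate (events/s).
complexity = np.array([0.10, 0.10, 0.25, 0.25, 0.40, 0.40])
speed      = np.array([0.5,  1.0,  0.5,  1.0,  0.5,  1.0])
event_rate = np.array([2.1e4, 4.0e4, 5.2e4, 1.1e5, 1.4e5, 2.9e5])

# Fit the two parameters to the observations.
(a_fit, b_fit), _ = curve_fit(event_rate_model, (complexity, speed), event_rate,
                              p0=(1e4, 5.0))
print(f"a = {a_fit:.3g}, b = {b_fit:.3g}")

# Predict the event rate for a new scene/speed pair.
predicted = event_rate_model((np.array([0.3]), np.array([0.8])), a_fit, b_fit)
print(f"predicted event rate: {predicted[0]:.3g} events/s")
```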
topic |
neuromorphic engineering; dynamic and active-pixel vision sensor; scene complexity; neuromorphic event rate; gradient approximation; scene texture; Sobel; Roberts; Prewitt |
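The subject terms list gradient-approximation operators (Sobel, Roberts, Prewitt) as the basis for quantifying scene texture. As an illustration only, the sketch below computes one plausible complexity score: the mean Sobel gradient magnitude of a grayscale snapshot of the scene. The aggregation (a simple mean) and the helper name scene_complexity are assumptions; the paper may define the metric differently, and Roberts or Prewitt kernels could be substituted.

```python
# Hypothetical sketch: quantifying scene complexity with a gradient operator.
# Here complexity is taken to be the mean Sobel gradient magnitude of a
# grayscale image of the scene; the exact aggregation used in the paper may differ.
import numpy as np
from scipy.ndimage import sobel

def scene_complexity(gray_image: np.ndarray) -> float:
    """Mean gradient magnitude of a grayscale image with values in [0, 1]."""
    gx = sobel(gray_image, axis=1)   # horizontal gradient approximation
    gy = sobel(gray_image, axis=0)   # vertical gradient approximation
    magnitude = np.hypot(gx, gy)
    return float(magnitude.mean())

# Toy usage: a flat scene scores near zero, a textured scene scores higher.
flat     = np.full((64, 64), 0.5)
textured = np.random.default_rng(0).random((64, 64))
print(scene_complexity(flat), scene_complexity(textured))
```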
url |
https://www.mdpi.com/1424-8220/19/8/1751 |