AMMDAS: Multi-Modular Generative Masks Processing Architecture With Adaptive Wide Field-of-View Modeling Strategy
Transportation systems are indispensable to daily life, so any assistance module that speeds up traffic flow while improving the reliability of the processes involved is a clear benefit. This paper introduces AMMDAS, a novel, cost-effective, and highly responsive post-active driving assistance system (the "Adaptive-Mask-Modelling Driving Assistance System") with an intuitive wide field-of-view modeling architecture.
Main Authors: Venkata Subbaiah Desanamukula, Premith Kumar Chilukuri, Pushkal Padala, Preethi Padala, Prasad Reddy Pvgd
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Access
Subjects: Adaptive field of view modeling; automotive applications; driving assistance systems; lane detection and analysis; object detection and tracking; spatial auto-correlation
Online Access: https://ieeexplore.ieee.org/document/9239270/
id: doaj-3afa813e8dac4aa49227ac88e1d28f9b
record_format: Article
spelling:
Record ID: doaj-3afa813e8dac4aa49227ac88e1d28f9b (indexed 2021-03-30T03:53:56Z)
Language: English
Publisher / Series: IEEE, IEEE Access (ISSN 2169-3536)
Published: 2020-01-01, vol. 8, pp. 198748-198778
DOI: 10.1109/ACCESS.2020.3033537 (IEEE Xplore document 9239270)
Title: AMMDAS: Multi-Modular Generative Masks Processing Architecture With Adaptive Wide Field-of-View Modeling Strategy
Authors and affiliations:
- Venkata Subbaiah Desanamukula (https://orcid.org/0000-0002-5974-4069), CS&SE, Andhra University College of Engineering (A), Visakhapatnam, India
- Premith Kumar Chilukuri (https://orcid.org/0000-0002-9392-7264), CS&SE, Andhra University College of Engineering (A), Visakhapatnam, India
- Pushkal Padala, CSE, The National Institute of Engineering, Mysuru, India
- Preethi Padala (https://orcid.org/0000-0003-1380-0966), CSE, National Institute of Technology Karnataka, Mangaluru, India
- Prasad Reddy Pvgd, CS&SE, Andhra University College of Engineering (A), Visakhapatnam, India
Online Access: https://ieeexplore.ieee.org/document/9239270/
Keywords: Adaptive field of view modeling; automotive applications; driving assistance systems; lane detection and analysis; object detection and tracking; spatial auto-correlation
collection: DOAJ
language: English
format: Article
sources: DOAJ
author: Venkata Subbaiah Desanamukula; Premith Kumar Chilukuri; Pushkal Padala; Preethi Padala; Prasad Reddy Pvgd
title: AMMDAS: Multi-Modular Generative Masks Processing Architecture With Adaptive Wide Field-of-View Modeling Strategy
publisher: IEEE
series: IEEE Access
issn: 2169-3536
publishDate: 2020-01-01
description:
Transportation systems are indispensable to daily life, so any assistance module that speeds up traffic flow while improving the reliability of the processes involved is a clear benefit. This paper introduces a novel, cost-effective, and highly responsive post-active driving assistance system, the "Adaptive-Mask-Modelling Driving Assistance System" (AMMDAS), with an intuitive wide field-of-view modeling architecture. The proposed system takes a vision-based approach: it processes a panoramic front view (stitched from temporally synchronous left and right stereo camera feeds) and a simple monocular rear view to generate robust and reliable proximity triggers along with correlated navigation suggestions. The system generates robust object masks and adaptive field-of-view masks using an FRCNN+ResNet-101-FPN network and the proposed DSED neural network; these masks are then processed and jointly analyzed at the respective stages to trigger proximity alerts and frame reliable navigation suggestions. The proposed DSED network is an encoder-decoder convolutional neural network that estimates lane-offset parameters, which drive the adaptive modeling of the field-of-view range (157°-210°) during live inference. The proposed stages, deep neural networks, and implemented algorithms and modules are state-of-the-art and achieved outstanding performance, with minimal loss values (L_{p,t}, L_δ, L_Total) during benchmarking analysis on our custom-built, KITTI, MS-COCO, Pascal-VOC, and Make-3D datasets. The proposed assistance system is tested on our custom-built and multiple public datasets to establish its reliability and robustness under a variety of wild conditions, input traffic scenarios, and locations.
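The description above says the DSED encoder-decoder estimates lane-offset parameters that adaptively set the field-of-view range between 157° and 210° during live inference. Purely as an illustration of that idea, the sketch below maps a normalized lane-offset value to a FoV angle and builds a wedge-shaped FoV mask over a stitched panorama; the linear mapping, the 360° panorama assumption, and all function names are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch (not the authors' code) of a lane-offset-driven adaptive FoV mask.
# Assumptions: the lane-offset parameter is normalized to [0, 1], the mapping to the
# 157-210 degree range is linear, and the stitched panorama spans 360 degrees.

import numpy as np

FOV_MIN_DEG = 157.0  # lower bound of the adaptive FoV range (from the abstract)
FOV_MAX_DEG = 210.0  # upper bound of the adaptive FoV range (from the abstract)


def fov_from_lane_offset(delta: float) -> float:
    """Map a normalized lane-offset parameter delta in [0, 1] to a FoV angle in degrees.

    A linear interpolation is assumed here purely for illustration.
    """
    delta = float(np.clip(delta, 0.0, 1.0))
    return FOV_MIN_DEG + delta * (FOV_MAX_DEG - FOV_MIN_DEG)


def fov_mask(height: int, width: int, fov_deg: float) -> np.ndarray:
    """Build a binary wedge mask centred on the ego direction of a panoramic frame.

    Columns whose horizontal viewing angle falls inside +/- fov_deg / 2 around the
    panorama centre are kept (1); the rest are suppressed (0).
    """
    # Horizontal angle of each column, in degrees, relative to the panorama centre.
    col_angles = (np.arange(width) / width) * 360.0 - 180.0
    keep = np.abs(col_angles) <= fov_deg / 2.0
    return np.repeat(keep[np.newaxis, :], height, axis=0).astype(np.uint8)


if __name__ == "__main__":
    # Example: a larger lane offset widens the FoV toward the 210-degree limit.
    fov = fov_from_lane_offset(0.7)   # about 194 degrees under the linear assumption
    mask = fov_mask(256, 1024, fov)
    print(f"FoV: {fov:.1f} deg, active columns: {mask[0].sum()} / 1024")
```

Under this sketch, a larger lane offset widens the usable portion of the panorama toward the 210° limit, which matches the abstract's notion of adapting the field of view at inference time rather than using a fixed crop.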
topic: Adaptive field of view modeling; automotive applications; driving assistance systems; lane detection and analysis; object detection and tracking; spatial auto-correlation
url: https://ieeexplore.ieee.org/document/9239270/