Summary: In step with rapid advancements in computer vision, vehicle classification shows considerable potential to reshape intelligent transportation systems. Over the past two decades, image processing and pattern recognition-based vehicle classification systems have been used to improve the effectiveness of automated highway toll collection and traffic monitoring systems. However, these methods are trained on limited handcrafted features extracted from small datasets, which do not reflect real-world road traffic conditions. Deep learning-based classification systems have been proposed to address these shortcomings of traditional methods. However, convolutional neural networks require large volumes of data covering noise, weather, and illumination variations to ensure robustness in real-time applications. Moreover, no generalized dataset is available to validate the efficacy of vehicle classification systems. To overcome these issues, we propose a convolutional neural network-based vehicle classification system that improves robustness in real-time applications. We present a vehicle dataset comprising 10,000 images, categorized into six common vehicle classes and captured under adverse illumination conditions, to support robust real-time vehicle classification. Initially, pretrained AlexNet, GoogleNet, Inception-v3, VGG, and ResNet are fine-tuned on the self-constructed vehicle dataset to evaluate their performance in terms of accuracy and convergence. Based on its superior performance, the ResNet architecture is further improved by adding a new classification block to the network. To ensure generalization, we also fine-tune the network on the public VeRi dataset, which contains 50,000 images categorized into six vehicle classes. Finally, a comparative study between the proposed and existing vehicle classification methods is carried out to evaluate the effectiveness of the proposed system. Our proposed system achieves 99.68% accuracy, 99.65% precision, and a 99.56% F1-score on the self-constructed dataset.
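To make the fine-tuning step concrete, the sketch below shows one plausible way to replace a pretrained ResNet's classification head for six vehicle classes, assuming PyTorch and torchvision. The ResNet-50 variant, the hidden-layer width, the dropout rate, and the optimizer settings are illustrative assumptions, not the paper's documented configuration.

```python
# Minimal sketch (not the authors' exact code) of fine-tuning a pretrained
# ResNet with a new classification block for six vehicle classes.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # six vehicle classes; the exact class names are assumed

# Load an ImageNet-pretrained ResNet-50 backbone (variant assumed).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the final fully connected layer with a new classification block.
# The hidden layer size and dropout rate are illustrative choices.
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 512),
    nn.ReLU(inplace=True),
    nn.Dropout(p=0.5),
    nn.Linear(512, NUM_CLASSES),
)

# Fine-tune the whole network with a small learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """Run one optimization step on a batch of vehicle images."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same loop applies unchanged when the backbone is swapped for AlexNet, GoogleNet, Inception-v3, or VGG; only the attribute holding the final classifier layer differs between torchvision models.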