Performance of active contour models in train rolling stock part segmentation on high-speed video data

Bibliographic Details
Main Authors: Ch. Raghava Prasad, P.V.V. Kishore
Format: Article
Language: English
Published: Taylor & Francis Group 2017-01-01
Series: Cogent Engineering
Subjects:
Online Access: http://dx.doi.org/10.1080/23311916.2017.1279367
Description
Summary: Rolling stock examination is performed to identify defects while a train is moving at speeds below 30 km/h. In this study, the process was automated using computer vision models. Parts on a moving train were segmented using four types of active contour level set models: Chan–Vese (CV), CV-based morphological differential gradient (CV-MDG), CV with shape priors (CV-SP), and CV with shape invariance (CV-SI). The CV-SI model adjusts the contour according to the scale, rotation, and location of the shape prior object in the rolling stock frame. Train rolling stock video data were captured at 240 fps using a sports action camera with a 52° wide-angle lens. The level set models yielded better segmentation results than traditional segmentation methods. Segmentation quality for the four proposed algorithms was measured with the structural similarity index measure and the peak signal-to-noise ratio (in dB). A total of 10 parts were extracted from a bogie using the proposed models and compared against ground truth models to test the performance of the methods. The train had 15 passenger cars with 30 bogies. Furthermore, the models were tested under various lighting conditions for five trains. The CV with shape invariance model yielded the most efficient segmentations, both qualitatively and quantitatively.
ISSN: 2331-1916
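
As an illustration of the segmentation and evaluation pipeline described in the summary, the sketch below applies a morphological Chan–Vese level set (the scikit-image implementation, not the authors' CV-MDG, CV-SP, or CV-SI shape-prior variants) to a single frame and scores the result against a ground truth mask with SSIM and PSNR. The file names, iteration count, and smoothing value are assumptions for the example, not details taken from the paper.

```python
# Minimal sketch: Chan–Vese level set segmentation of one rolling stock frame,
# scored against a ground truth mask with SSIM and PSNR (as in the paper's evaluation).
# Uses scikit-image's morphological Chan–Vese approximation of the classical CV model.
# "frame_0001.png" and "ground_truth_0001.png" are hypothetical file names.

from skimage import img_as_float, io
from skimage.color import rgb2gray
from skimage.segmentation import morphological_chan_vese
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Load one video frame and its manually prepared ground truth mask (assumed binary).
frame = img_as_float(rgb2gray(io.imread("frame_0001.png")))
ground_truth = io.imread("ground_truth_0001.png", as_gray=True) > 0.5

# Evolve the level set for a fixed number of iterations; the iteration and
# smoothing values are illustrative, not tuned to the paper's 240 fps data.
segmentation = morphological_chan_vese(
    frame, 200, init_level_set="checkerboard", smoothing=3
).astype(bool)

# Score the segmented part against the ground truth.
seg_f = segmentation.astype(float)
gt_f = ground_truth.astype(float)
ssim = structural_similarity(seg_f, gt_f, data_range=1.0)
psnr = peak_signal_noise_ratio(gt_f, seg_f, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.2f} dB")
```

In this sketch the binary segmentation mask is compared directly with the binary ground truth; whether the paper computes SSIM and PSNR on masks or on the extracted part images is not stated in the summary.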