Region Proposal Based Object Detectors Integrated With an Extended Kalman Filter for a Robust Detect-Tracking Algorithm

Bibliographic Details
Main Author: Khajo, Gabriel
Format: Others
Language:English
Published: Karlstads universitet, Fakulteten för hälsa, natur- och teknikvetenskap (from 2013) 2019
Online Access:http://urn.kb.se/resolve?urn=urn:nbn:se:kau:diva-72698
Description
Summary: In this thesis we present a detect-tracking algorithm (see figure 3.1) that combines the detection robustness of static region proposal based object detectors, such as the faster region convolutional neural network (R-CNN) and the region-based fully convolutional networks (R-FCN) model, with the tracking prediction strength of extended Kalman filters, by using what we have called a translating and non-rigid user-input region of interest (RoI-) mapping. This so-called RoI-mapping maps a region containing the object one is interested in tracking to a featureless three-channel image. The detection part of our proposed algorithm is then performed on the image that includes only the RoI features (see figure 3.2). After the detection step, our model re-maps the RoI features to the original frame and translates the RoI to the center of the prediction. If no detection occurs, our proposed model integrates a temporal dependence through a Kalman filter as a predictor; this filter is continuously corrected when detections do occur.

To train the region proposal based object detectors that we integrate into our detect-tracking model, we used TensorFlow®'s Object Detection API with random search hyperparameter tuning, where we fine-tuned all models from TensorFlow® Slim base network classification checkpoints. The trained region proposal based object detectors used the Inception V2 base network for both the faster R-CNN model and the R-FCN model, while the Inception V3 base network was applied only to the faster R-CNN model. This was done to compare the two base networks and their corresponding effects on the detection models. In addition to the deep learning part of this thesis, for the implementation of our detect-tracking model, such as the extended Kalman filter, we used Python and OpenCV®.

The results show that, with a stationary camera reference frame, our proposed detect-tracking algorithm, combined with region proposal based object detectors on images of size 414 × 740 × 3, can detect and track a small object, such as a tennis ball, in real time as it moves along a horizontal trajectory with an average velocity of v ≈ 50 km/h at a distance of d = 25 m, with a combined detect-tracking frequency of about 13 to 14 Hz. The largest measured state error between the actual state and the predicted state from the Kalman filter, at the aforementioned horizontal velocity, was at most 10-15 pixels (see table 5.1), while in frames where many detections occur this error has been shown to be much smaller (3-5 pixels). Additionally, our combined detect-tracking model has been shown to handle obstacles and cases where two learnable features overlap, thanks to the integrated extended Kalman filter. Lastly, our detect-tracking model was also applied to a set of infra-red images, where the goal was to detect and track a truck moving along a semi-horizontal path. Our results show that a faster R-CNN Inception V2 model was able to extract features from a sequence of infra-red frames, and that our proposed RoI-mapping method worked relatively well at detecting only a single truck in a short test sequence (see figure 5.22).
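The Python/OpenCV sketch below illustrates the general idea of the RoI-mapping and the predict/correct loop described in the summary. It is only an illustrative outline under stated assumptions, not the thesis implementation: run_detector, map_roi, make_kalman and track are hypothetical names, the detector call is a placeholder for a trained faster R-CNN or R-FCN model exported with the TensorFlow Object Detection API, and the linear constant-velocity cv2.KalmanFilter stands in for the extended Kalman filter used in the thesis.

```python
import numpy as np
import cv2

def run_detector(image):
    """Hypothetical stand-in for a trained faster R-CNN / R-FCN model.
    Should return the centre (cx, cy) of the best detection, or None."""
    raise NotImplementedError

def map_roi(frame, roi):
    """Copy the user-supplied region of interest onto a featureless
    (all-zero) three-channel image of the same size as the frame."""
    x, y, w, h = roi
    blank = np.zeros_like(frame)
    blank[y:y + h, x:x + w] = frame[y:y + h, x:x + w]
    return blank

def make_kalman():
    """Constant-velocity pixel-space filter (state: x, y, vx, vy).
    cv2.KalmanFilter is linear; the thesis uses an extended Kalman
    filter, so this is only an illustrative simplification."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], dtype=np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], dtype=np.float32)
    kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf

def track(frames, roi):
    """Detect on the RoI-mapped image, correct the filter when a
    detection occurs, otherwise fall back on its prediction, and
    translate the RoI to the centre of the current estimate."""
    kf = make_kalman()
    x, y, w, h = roi
    for frame in frames:
        pred = kf.predict()                      # temporal prediction
        detection = run_detector(map_roi(frame, (x, y, w, h)))
        if detection is not None:
            cx, cy = detection
            kf.correct(np.array([[cx], [cy]], dtype=np.float32))
        else:
            cx, cy = float(pred[0]), float(pred[1])
        # re-centre the (translating, non-rigid) RoI on the estimate
        x, y = int(cx - w / 2), int(cy - h / 2)
        yield (x, y, w, h)
```

In this sketch the RoI both masks the detector's input and follows the filter's estimate from frame to frame, which mirrors the summary's description of detections correcting the filter and predictions bridging frames without detections.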