Vision-Based Observation Models for Lower Limb 3D Tracking with a Moving Platform

Tracking and understanding human gait is an important step towards improving elderly mobility and safety. This thesis presents a vision-based tracking system that estimates the 3D pose of a wheeled walker user's lower limbs using cameras mounted on the moving walker. The tracker estimates 3D poses from images of the lower limbs in the coronal plane in a dynamic, uncontrolled environment. It employs a probabilistic approach based on particle filtering with three different camera setups: a monocular RGB camera, binocular RGB cameras, and a depth camera. For the RGB cameras, observation likelihoods are designed to compare the colors and gradients of each frame with manually extracted initial templates. Two strategies are also investigated for handling appearance changes of the tracking target: increasing the number of templates and using alternative color representations. For the depth camera, two observation likelihoods are developed: the first works directly in 3D space, while the second works in the projected image space. Experiments are conducted to evaluate the performance of the tracking system with different users for all three camera setups. The trackers with the RGB cameras produce results with higher error than the depth camera, and the strategies for handling appearance change generally improve tracking accuracy. The tracker with the depth sensor, by contrast, successfully tracks the 3D poses of users over entire video sequences and is robust to unfavorable conditions such as partial occlusion, missing observations, and a deformable tracking target.
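
The record gives no implementation detail beyond the abstract, but the approach it describes, a particle filter whose observation likelihood scores each pose hypothesis against a manually extracted color template, can be sketched generically. The Python sketch below shows one predict/weight/resample cycle with a color-histogram likelihood; the 2D image-position state, patch size, motion noise, and the beta weighting constant are illustrative assumptions, not details taken from the thesis.

import numpy as np

rng = np.random.default_rng(0)

def colour_histogram(patch, bins=8):
    # Per-channel colour histogram, normalised to sum to 1.
    hists = [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0]
             for c in range(patch.shape[-1])]
    h = np.concatenate(hists).astype(float)
    return h / (h.sum() + 1e-9)

def observation_likelihood(frame, xy, template_hist, patch=16, beta=20.0):
    # Score a pose hypothesis by comparing the patch around the hypothesised
    # image position with the template via the Bhattacharyya coefficient.
    h, w = frame.shape[:2]
    x0 = int(np.clip(xy[0], 0, w - patch))
    y0 = int(np.clip(xy[1], 0, h - patch))
    cand = colour_histogram(frame[y0:y0 + patch, x0:x0 + patch])
    similarity = np.sum(np.sqrt(cand * template_hist))   # in [0, 1]
    return np.exp(-beta * (1.0 - similarity))

def particle_filter_step(particles, frame, template_hist, motion_std=3.0):
    # One predict / weight / resample cycle of a bootstrap particle filter.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    weights = np.array([observation_likelihood(frame, p, template_hist)
                        for p in particles])
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage on a synthetic frame; a real tracker would use the walker-mounted video.
frame = rng.integers(0, 255, size=(120, 160, 3)).astype(np.uint8)
template_hist = colour_histogram(frame[40:56, 60:76])    # "manually extracted" template
particles = np.tile([68.0, 48.0], (200, 1)) + rng.normal(0, 5, (200, 2))
particles, weights = particle_filter_step(particles, frame, template_hist)
print("estimated 2D position:", particles.mean(axis=0))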


Bibliographic Details
Main Author: Hu, Richard Zhi Ling
Language: en
Published: 2011
Subjects: HMM; hidden Markov model; observation model; color; gradient; 3D tracking; Kinect; RGB camera; color space; lighting change; particle filter; kinesiology; Computer Science
Online Access: http://hdl.handle.net/10012/6110
id ndltd-LACETR-oai-collectionscanada.gc.ca-OWTU.10012-6110
record_format oai_dc
record_datestamp 2013-10-04T04:10:45Z
date_issued 2011-08-23
type Thesis or Dissertation
school School of Computer Science
degree Master of Mathematics
program Computer Science
collection NDLTD
language en
sources NDLTD
topic HMM
hidden markov model
observation model
color
gradient
3D tracking
Kinect
RGB camera
color space
lighting change
particle filter
kinesiology
Computer Science
description Tracking and understanding human gait is an important step towards improving elderly mobility and safety. This thesis presents a vision-based tracking system that estimates the 3D pose of a wheeled walker user's lower limbs using cameras mounted on the moving walker. The tracker estimates 3D poses from images of the lower limbs in the coronal plane in a dynamic, uncontrolled environment. It employs a probabilistic approach based on particle filtering with three different camera setups: a monocular RGB camera, binocular RGB cameras, and a depth camera. For the RGB cameras, observation likelihoods are designed to compare the colors and gradients of each frame with manually extracted initial templates. Two strategies are also investigated for handling appearance changes of the tracking target: increasing the number of templates and using alternative color representations. For the depth camera, two observation likelihoods are developed: the first works directly in 3D space, while the second works in the projected image space. Experiments are conducted to evaluate the performance of the tracking system with different users for all three camera setups. The trackers with the RGB cameras produce results with higher error than the depth camera, and the strategies for handling appearance change generally improve tracking accuracy. The tracker with the depth sensor, by contrast, successfully tracks the 3D poses of users over entire video sequences and is robust to unfavorable conditions such as partial occlusion, missing observations, and a deformable tracking target.
author Hu, Richard Zhi Ling
title Vision-Based Observation Models for Lower Limb 3D Tracking with a Moving Platform
publishDate 2011
url http://hdl.handle.net/10012/6110
_version_ 1716600684603768832