Local and global models for articulated motion analysis
Vision is likely the most important of the senses employed by humans in understanding their environment, but computer systems are still sorely lacking in this respect. The number of potential applications for visually capable computer systems is huge; this thesis focuses on the field of motion capture, in particular dealing with the problems encountered when analysing the motion of articulated or jointed targets, such as people. Joint articulation greatly increases the complexity of a target object, and increases the incidence of self-occlusion (one body part obscuring another). These problems are compounded in typical outdoor scenes by the clutter and noise generated by other objects. This thesis presents a model-based approach to automated extraction of walking people from video data, under indoor and outdoor capture conditions. Local and global modelling strategies are employed in an iterative process, similar to the Generalised Expectation-Maximisation algorithm. Prior knowledge of human shape, gait motion and self-occlusion is used to guide this extraction process. The extracted shape and motion information is applied to construct a gait signature, sufficient for recognition purposes. Results are presented demonstrating the success of this approach on the Southampton Gait Database, comprising 4820 sequences from 115 subjects. A recognition rate of 98.6% is achieved on clean indoor data, comparing favourably with other published approaches. This recognition rate is reduced to 87.1% under the more difficult outdoor capture conditions. Additional analyses are presented examining the discriminative potential of model features. It is shown that the majority of discriminative potential is contained within body shape features and gait frequency, although motion dynamics also make a significant contribution.
Main Author: | Wagg, David Kenneth |
---|---|
Published: | University of Southampton, 2006 |
Subjects: | 006.37 |
Format: | Electronic Thesis or Dissertation |
Online Access: | https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.435722 |
| https://eprints.soton.ac.uk/263222/ |
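The abstract describes alternating local and global modelling in an iterative process akin to Generalised Expectation-Maximisation: a global gait model predicts the pose, local evidence refines it, and the global model is re-estimated. The following is a minimal illustrative sketch of that style of alternation on synthetic data; the sinusoidal joint-angle model, the weighting scheme and all names are assumptions for illustration, not the thesis's actual models or implementation.

```python
"""Toy local/global alternation in the spirit of Generalised EM.

Hypothetical sketch: a single joint angle following a sinusoidal gait
cycle stands in for the full articulated body model used in the thesis.
"""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": a hip angle over one walking sequence,
# measured with noise at each frame (stand-in for image measurements).
n_frames = 60
t = np.arange(n_frames)
true_freq = 0.1                      # gait frequency (cycles per frame), assumed known here
true_amp, true_phase = 0.4, 0.3
true_angle = true_amp * np.sin(2 * np.pi * true_freq * t + true_phase)
observed = true_angle + rng.normal(scale=0.15, size=n_frames)

# Global model: angle(t) = amp * sin(2*pi*freq*t + phase), crude initial guess.
amp, phase = 1.0, 0.0

for _ in range(10):
    # Local step: per-frame angle estimates, pulled toward the global
    # prediction (prior knowledge of gait) but adjusted by local evidence.
    predicted = amp * np.sin(2 * np.pi * true_freq * t + phase)
    weight = 0.5                     # relative trust in local evidence vs. global prior
    local_angles = weight * observed + (1 - weight) * predicted

    # Global step: re-fit amplitude and phase to the locally refined
    # angles via linear least squares on sin/cos components.
    basis = np.column_stack([np.sin(2 * np.pi * true_freq * t),
                             np.cos(2 * np.pi * true_freq * t)])
    a, b = np.linalg.lstsq(basis, local_angles, rcond=None)[0]
    amp, phase = np.hypot(a, b), np.arctan2(b, a)

print(f"estimated amp={amp:.2f}, phase={phase:.2f} "
      f"(true amp={true_amp:.2f}, phase={true_phase:.2f})")
```

The thesis applies this kind of alternation to full body shape and joint motion with self-occlusion handling, and builds a gait signature from the extracted shape and motion parameters; the sketch above omits all of that and only conveys the local-refine / global-refit structure.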