Human Part Segmentation in Depth Images with Annotated Part Positions
We present a method of segmenting human parts in depth images, when provided the image positions of the body parts. The goal is to facilitate per-pixel labelling of large datasets of human images, which are used for training and testing algorithms for pose estimation and automatic segmentation. A common technique in image segmentation is to represent an image as a two-dimensional grid graph, with one node for each pixel and edges between neighbouring pixels. We introduce a graph with distinct layers of nodes to model occlusion of the body by the arms. Once the graph is constructed, the annotated part positions are used as seeds for a standard interactive segmentation algorithm. Our method is evaluated on two public datasets containing depth images of humans from a frontal view. It produces a mean per-class accuracy of 93.55% on the first dataset, compared to 87.91% (random forest and graph cuts) and 90.31% (random forest and Markov random field). It also achieves a per-class accuracy of 90.60% on the second dataset. Future work can experiment with various methods for creating the graph layers to accurately model occlusion.
Main Authors: | Andrew Hynes, Stephen Czarnuch |
---|---|
Format: | Article |
Language: | English |
Published: | MDPI AG, 2018-06-01 |
Series: | Sensors |
Subjects: | human parts; interactive image segmentation; occlusion; grid graph |
Online Access: | http://www.mdpi.com/1424-8220/18/6/1900 |
id |
doaj-4e8dc768cbfc4ebd8e0d6b5845222307 |
---|---|
record_format |
Article |
doi |
10.3390/s18061900 |
affiliation |
Department of Electrical and Computer Engineering, Memorial University, St. John’s, NL A1B 3X5, Canada |
author |
Andrew Hynes, Stephen Czarnuch |
collection |
DOAJ |
issn |
1424-8220 |
topic |
human parts; interactive image segmentation; occlusion; grid graph |
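The abstract describes representing a depth image as a two-dimensional grid graph (one node per pixel, edges between neighbouring pixels) and growing part labels from annotated seed positions. The record does not specify which interactive segmentation algorithm the authors use, and the occlusion-layer construction is not reproduced here; as an illustrative stand-in only, the sketch below labels pixels by multi-source Dijkstra over a 4-connected grid, with edge weights equal to the absolute depth difference between neighbours. The function name `segment_from_seeds` and the toy depth values are hypothetical.

```python
import heapq

def segment_from_seeds(depth, seeds):
    """Label every pixel of a depth image by multi-source Dijkstra over
    the 4-connected pixel grid. Edge weights are the absolute depth
    difference between neighbouring pixels, so labels flow cheaply
    within smooth surfaces and expensively across depth discontinuities.

    depth: list of lists of floats (one depth value per pixel)
    seeds: dict mapping (row, col) -> part label
    Returns a grid of labels the same shape as `depth`.
    """
    rows, cols = len(depth), len(depth[0])
    dist = [[float("inf")] * cols for _ in range(rows)]
    label = [[None] * cols for _ in range(rows)]
    heap = []
    for (r, c), lab in seeds.items():
        dist[r][c] = 0.0
        label[r][c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        d, r, c, lab = heapq.heappop(heap)
        if d > dist[r][c]:
            continue  # stale heap entry; a cheaper path was found already
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # 4-neighbours
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + abs(depth[nr][nc] - depth[r][c])
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    label[nr][nc] = lab
                    heapq.heappush(heap, (nd, nr, nc, lab))
    return label

# Toy example: left half of the frame is a near surface (depth 2.0),
# right half is far background (depth 5.0); one seed in each region.
depth = [[2.0, 2.0, 5.0, 5.0] for _ in range(4)]
labels = segment_from_seeds(depth, {(0, 0): "body", (0, 3): "background"})
```

In this example every depth-2.0 pixel is reachable from the "body" seed at zero cost, while crossing the 2.0 → 5.0 discontinuity costs 3.0, so the two labels stop cleanly at the depth edge. The paper's layered graph would additionally duplicate nodes where the arms occlude the torso, so that an arm and the body behind it can receive different labels at the same pixel; that extension is not modelled in this sketch.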