3D gesture input in the future living room

Master's === Feng Chia University === Graduate Institute of Industrial Engineering and Systems Management === 100 === Emerging digital and smart technologies are constantly being developed to meet human needs and make life more comfortable, convenient, and fun. Gesture input in a 3D environment is one such technology and has been actively investigated...

Full description

Bibliographic Details
Main Authors: Jung-Chen Liang, 梁榕真
Other Authors: Kuo-Hao Tang
Format: Others
Language: zh-TW
Published: 2012
Online Access: http://ndltd.ncl.edu.tw/handle/85481223428606961234
id ndltd-TW-100FCU05031076
record_format oai_dc
spelling ndltd-TW-100FCU050310762015-10-13T21:27:33Z http://ndltd.ncl.edu.tw/handle/85481223428606961234 3D gesture input in the future living room 三維空間姿態輸入於未來客廳之應用 Jung-Chen Liang 梁榕真 Master's Feng Chia University Graduate Institute of Industrial Engineering and Systems Management 100 Emerging digital and smart technologies are constantly being developed to meet human needs and make life more comfortable, convenient, and fun. Gesture input in a 3D environment is one such technology and has been actively investigated. This study measures positioning accuracy and movement-speed discriminability using a motion capture apparatus, and explores the potential of using the results to build interactive systems. The experimental results show that, for the 3D positioning task, between-subject errors are large; thus, a single set of positioning parameters cannot fit all users. This implies that a pointing-based interactive system requires a calibration process. After standardizing across all participants, accuracy is clearly higher when the pointing positions are near the lower-right corner of the screen. Furthermore, when depth in space was used as a pointing cue without visual feedback, the error rate exceeded 30%, indicating that visual feedback is required for such applications. For movement-speed discriminability, the results reveal that both rightward and upward sliding movements have an error rate of zero when discriminating two levels of speed, and an error rate below 3% when discriminating three levels. When the number of speed levels increases to four or five, the error rates exceed 10%. Based on these results, it is suggested that the number of speed levels be limited to three or fewer when movement speed is used as an input dimension, and that personal calibration be performed. Kuo-Hao Tang 唐國豪 2012 degree thesis ; thesis 113 zh-TW
collection NDLTD
language zh-TW
format Others
sources NDLTD
description Master's === Feng Chia University === Graduate Institute of Industrial Engineering and Systems Management === 100 === Emerging digital and smart technologies are constantly being developed to meet human needs and make life more comfortable, convenient, and fun. Gesture input in a 3D environment is one such technology and has been actively investigated. This study measures positioning accuracy and movement-speed discriminability using a motion capture apparatus, and explores the potential of using the results to build interactive systems. The experimental results show that, for the 3D positioning task, between-subject errors are large; thus, a single set of positioning parameters cannot fit all users. This implies that a pointing-based interactive system requires a calibration process. After standardizing across all participants, accuracy is clearly higher when the pointing positions are near the lower-right corner of the screen. Furthermore, when depth in space was used as a pointing cue without visual feedback, the error rate exceeded 30%, indicating that visual feedback is required for such applications. For movement-speed discriminability, the results reveal that both rightward and upward sliding movements have an error rate of zero when discriminating two levels of speed, and an error rate below 3% when discriminating three levels. When the number of speed levels increases to four or five, the error rates exceed 10%. Based on these results, it is suggested that the number of speed levels be limited to three or fewer when movement speed is used as an input dimension, and that personal calibration be performed.
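The thesis does not publish an implementation, so the following Python snippet is only an illustrative sketch of how its two recommendations (per-user calibration and at most three speed levels) might be applied in a gesture-input system; all function names, data, and threshold choices below are assumptions, not part of the original work.

# Sketch under assumptions: per-user calibration of sliding-speed thresholds,
# then classification of a measured speed into at most three levels.
from statistics import quantiles
from typing import List

def calibrate_speed_thresholds(sample_speeds: List[float]) -> List[float]:
    # Derive two per-user cut points that split the user's own calibration
    # gestures into three speed levels (slow / medium / fast).
    if len(sample_speeds) < 3:
        raise ValueError("need several calibration samples per user")
    return quantiles(sorted(sample_speeds), n=3)  # two inner tertile cut points

def classify_speed(speed: float, thresholds: List[float]) -> int:
    # Map a measured sliding speed to level 0, 1, or 2 using the user's cut points.
    low, high = thresholds
    if speed < low:
        return 0  # slow
    if speed < high:
        return 1  # medium
    return 2      # fast

if __name__ == "__main__":
    # Hypothetical calibration speeds (e.g. cm/s) recorded for one user.
    calibration = [12.0, 15.5, 14.2, 30.1, 28.7, 33.4, 55.0, 60.2, 58.8]
    cuts = calibrate_speed_thresholds(calibration)
    print(classify_speed(20.0, cuts))  # level depends on this user's calibration

Limiting the classifier to three levels reflects the thesis finding that error rates exceed 10% at four or five levels, while calibrating thresholds per user reflects the large between-subject differences it reports.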
author2 Kuo-Hao Tang
author_facet Kuo-Hao Tang
Jung-Chen Liang
梁榕真
author Jung-Chen Liang
梁榕真
spellingShingle Jung-Chen Liang
梁榕真
3D gesture input in the future living room
author_sort Jung-Chen Liang
title 3D gesture input in the future living room
title_short 3D gesture input in the future living room
title_full 3D gesture input in the future living room
title_fullStr 3D gesture input in the future living room
title_full_unstemmed 3D gesture input in the future living room
title_sort 3d gesture input in the future living room
publishDate 2012
url http://ndltd.ncl.edu.tw/handle/85481223428606961234
work_keys_str_mv AT jungchenliang 3dgestureinputinthefuturelivingroom
AT liángróngzhēn 3dgestureinputinthefuturelivingroom
AT jungchenliang sānwéikōngjiānzītàishūrùyúwèiláikètīngzhīyīngyòng
AT liángróngzhēn sānwéikōngjiānzītàishūrùyúwèiláikètīngzhīyīngyòng
_version_ 1718063039272452096