Kinect-Based Somatosensory Interface for Presentation


Bibliographic Details
Main Author: Shou-Jen Wu (吳守仁)
Other Authors: Shu-Yuan Chen
Format: Others
Language: zh-TW
Online Access: http://ndltd.ncl.edu.tw/handle/32353526013908871032
Description
Summary: Master's thesis === Yuan Ze University === Department of Computer Science and Engineering === 102 (2013) ===

Abstract: With advances in computer technology, new forms of human-computer interaction have been developed. The shift from the traditional keyboard and mouse to the iPhone introduced by Apple popularized touch panels and touchscreens, and the Wii launched by Nintendo initiated the era of the somatosensory interface. Recently, three-dimensional human-computer interaction based on computer vision has attracted a great deal of attention, since it allows users to control or manipulate devices in a more natural manner through intentional movements of the arms, hands, and fingers. The main purpose of this thesis is to build a Kinect-based somatosensory interface that makes presentation control more convenient and user-friendly. The proposed interface runs on the Windows 7 x64 platform and uses a Kinect sensor, rather than the traditional mouse and keyboard, as the main operational tool. A gesture recognition approach is proposed to manipulate projection contents online, including selecting and opening files, zooming the screen, and scrolling slides, by detecting and recognizing the speaker's gestures from the skeleton joints provided by the Kinect sensor. Experimental results demonstrate the promise of the proposed method.

Keywords: Kinect sensor, human-computer interaction, somatosensory interface, action recognition, gesture recognition, skeleton joint
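
The record does not describe the thesis's recognition algorithm in detail, but the idea of driving slide navigation from skeleton joints can be illustrated with a minimal sketch. The Python below is a hypothetical example, not the author's method: it watches the wrist position relative to the shoulder over a short window of frames (as a Kinect skeleton stream would supply them) and reports a swipe once the hand has travelled far enough sideways. The class name, the thresholds, and the assumed 30 fps frame rate are all illustrative assumptions.

    from collections import deque

    SWIPE_MIN_TRAVEL = 0.35     # metres of lateral wrist travel (assumed threshold)
    SWIPE_WINDOW_FRAMES = 15    # ~0.5 s of skeleton frames at 30 fps (assumed)

    class SwipeDetector:
        """Detect left/right hand swipes from per-frame joint positions."""

        def __init__(self):
            self.history = deque(maxlen=SWIPE_WINDOW_FRAMES)

        def update(self, wrist_x, shoulder_x):
            # Track the wrist relative to the shoulder so that the speaker
            # walking across the stage is not mistaken for a gesture.
            self.history.append(wrist_x - shoulder_x)
            if len(self.history) < self.history.maxlen:
                return None
            travel = self.history[-1] - self.history[0]
            if travel > SWIPE_MIN_TRAVEL:
                self.history.clear()
                return "next_slide"       # e.g. forwarded as a Right-arrow key event
            if travel < -SWIPE_MIN_TRAVEL:
                self.history.clear()
                return "previous_slide"
            return None

    # Usage with synthetic frames: the wrist moves 0.05 m to the right each frame.
    detector = SwipeDetector()
    for frame in range(SWIPE_WINDOW_FRAMES):
        action = detector.update(wrist_x=0.20 + 0.05 * frame, shoulder_x=0.20)
    print(action)   # -> "next_slide" once enough lateral travel has accumulated

A real implementation would read these coordinates from the Kinect SDK's skeleton stream and inject the resulting actions into the presentation software as key events; the other operations the abstract mentions (opening files, zooming, scrolling) would need their own gesture classifiers in the same spirit.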