Applying Traveler Models to Spatial Navigation User Interfaces in Spherical-Panoramic Virtual Reality

Bibliographic Details
Main Authors: Hadziq Fabroyir, 哈明飛
Other Authors: Wei-Chung Teng
Format: Others
Language: en_US
Published: 2018
Online Access: http://ndltd.ncl.edu.tw/handle/66vfv9
Description
Summary: Doctoral dissertation === National Taiwan University of Science and Technology === Department of Computer Science and Information Engineering === 106 ===

There are times when people face navigation problems in the real world, and there will be times when the same people face similar problems in virtual reality (VR). As VR gains popularity and approaches mass-market adoption, it is important to research ways to present spatial navigation user interfaces (UIs) in VR that benefit all types of users, not only experts but also novice users across genders. The research can begin from a real-world point of view, because the way users navigate the real world can be carried over into VR navigation. In this research, traveler models were proposed as an interaction paradigm to enhance the user experience in VR, especially in spherical-panoramic touring systems. The models employed the metaphor of travelers on the street, navigating their surroundings while holding a paper map in their hands. Based on this metaphor, the models emphasized three important characteristics: (1) two separate displays (i.e., allocentric and egocentric views), (2) immersion in the egocentric view, and (3) interaction techniques based on user motions in the real world.

The models were then used to build three prototypes as proofs of concept. To accommodate separate allocentric and egocentric views, prototype 1 used dual projector displays and a skeletal tracking sensor, prototype 2 used a curved display and a multitouch tablet, and prototype 3 used a head-mounted display (HMD) and one of two handheld controllers: a multitouch tablet or a gamepad. The usability of the UIs in all three prototypes was evaluated through a series of experiments. The results showed that the proposed prototypes provided better spatial cognition and user experiences than their legacy system counterparts.

User performance and preferences were investigated further in prototypes 2 and 3. The investigation of prototype 2 compared pointing and gestural UIs (e.g., mouse and multitouch device) for spatial navigation in desktop VR systems, while the investigation of prototype 3 compared finger gestures on multitouch and tangible UIs (e.g., multitouch device and gamepad thumbsticks) for spatial navigation in HMD VR systems. In summary, users preferred the gestural UIs and performed spatial navigation better with them, especially when the UIs were tangible.

Spatial behaviors were also observed and analyzed, especially for prototype 3. The results showed that users preferred egocentric techniques for orienting and moving within VR, and that they performed tasks faster and made fewer errors when using gamepad thumbsticks, which manifest egocentric navigation. Workload measurements with the NASA-TLX and a brain-computer interface showed the gestures on the tangible UI (e.g., gamepad thumbsticks) to be superior to the gestures on the multitouch device. Finally, the relationships among spatial behaviors, gender, video gaming experience, and user interfaces in VR navigation were examined: female users tended to navigate VR allocentrically, while male users tended to navigate VR egocentrically, especially when using a tangible UI such as gamepad thumbsticks.
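
The distinction the abstract draws between egocentric and allocentric navigation can be illustrated with a minimal sketch. The code below is not taken from the dissertation's prototypes; it assumes a simplified 2D traveler state (map position plus heading) and hypothetical function names, and only shows how the same controller input can be interpreted egocentrically (relative to the current view direction, as with gamepad thumbsticks) or allocentrically (in fixed map axes, as when panning a paper-map-style overview).

import math
from dataclasses import dataclass

@dataclass
class TravelerState:
    x: float = 0.0        # map position, east (arbitrary units)
    y: float = 0.0        # map position, north
    heading: float = 0.0  # view direction in radians, 0 = facing north, clockwise positive

def move_egocentric(state, stick_x, stick_y, step=1.0):
    # Thumbstick-style input: "forward" follows wherever the traveler is facing.
    state.x += (stick_x * math.cos(state.heading) + stick_y * math.sin(state.heading)) * step
    state.y += (-stick_x * math.sin(state.heading) + stick_y * math.cos(state.heading)) * step

def move_allocentric(state, stick_x, stick_y, step=1.0):
    # Map-panning-style input: interpreted in fixed world axes, ignoring the heading.
    state.x += stick_x * step
    state.y += stick_y * step

def turn(state, yaw_delta):
    # Rotates the egocentric (panoramic) view; an allocentric map view would stay north-up.
    state.heading = (state.heading + yaw_delta) % (2 * math.pi)

if __name__ == "__main__":
    s = TravelerState()
    turn(s, math.pi / 2)           # face east
    move_egocentric(s, 0.0, 1.0)   # push the stick forward: moves east, because the traveler faces east
    move_allocentric(s, 0.0, 1.0)  # pan "up" on the map: moves north, regardless of heading
    print(s.x, s.y)                # approximately (1.0, 1.0)

In this toy model, the egocentric mapping corresponds to the behavior the abstract attributes to gamepad thumbsticks, while the allocentric mapping corresponds to moving a marker on the paper-map-like overview.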