Summary: This thesis investigates design options for a touchless freehand gesture interface for photo-taking, specifically its possibilities for applied feedback. It was assumed that pure sound, or a combination of sound and verbal responses, would sufficiently support the interaction. The research thereby challenged the primarily visual feedback concept of modern photo cameras. Based on an extended literature review of gestural interfaces, auditory interfaces, photo-taking strategies, and applied gesture-to-sound solutions, design requirements were derived and an interactive prototype was constructed. The vision-based hand recognition was implemented with CamSpace and run on consumer equipment. The outcome of the photo-taking interaction was simulated with "Wizard of Oz" techniques. Afterwards, the wearable setup was tested in two formative usability sessions, of which the second was split into two groups receiving either sound-only or combined sound and verbal feedback. These studies yielded qualitative results regarding understanding, learnability, cognitive load, and awareness of the sonic feedback cues, as well as preferences for certain gestural interactions. Despite a low rate of successful photo results, the sound design proved distinctive and intuitive for most users. The expected difference between the groups' levels of trust in the test system could not be verified. Furthermore, there was no clear indication of the assumed need for continuous sound feedback during interaction. The feedback design did not prove to sufficiently support the intended interaction, but it represents a promising approach to non-visual feedback design and provided insights into a freehand gestural design process. The thesis concludes with suggestions for a second iteration of the design process and describes an advanced version of the experimental setup.