A Novel Mixed Reality Interface for Effective and Efficient Human Robot Interaction with Unique Mobility Platforms

Bibliographic Details
Other Authors: Kopinsky, Ryan J. (author)
Format: Others
Language: English
Published: Florida State University
Subjects: Autonomous Robots, Dismounted Military, Human Robot Interaction, Legged Robots, Mixed Reality, Virtual Reality
Online Access: http://purl.flvc.org/fsu/fd/FSU_SUMMER2017_Kopinsky_fsu_0071E_14062
Description
Summary: Autonomous robots are increasingly working alongside humans in a variety of environments. While fully autonomous robots with little human-robot interaction suffice for simple applications in controlled environments, mission-critical applications in unstructured and uncertain environments require stronger collaboration between human and robot. One such instance is a dismounted military operation in which one or more autonomous robots act as part of a team of soldiers. The performance of the human-robot team depends largely on the interaction between human and robot, and more specifically on the communication interfaces between the two. Furthermore, because of the complex and unstructured environments in which dismounted military missions take place, robots need a diverse skill set; a variety of sensors, robot platform types (e.g., wheeled vs. legged), and other capabilities are therefore required. The goal of this research was to understand how robot platform type and the visual complexity of the human-robot interface, in particular a Mixed Reality interface, affect cooperative human-robot teaming in dismounted military operations. More specifically, the research objectives were to understand how robot platform type (wheeled vs. legged) impacts the human's perception of robot capability and performance, and to assess how the visual complexity of a Mixed Reality interface affects accuracy and response time for an information reporting task and a signal detection task. The results of this study revealed that increased visual complexity of the Mixed Reality-based human-robot interface improved response time and accuracy for an information reporting task and resulted in a more usable interface. Furthermore, the results indicated that response time and accuracy for a signal detection task did not differ between the high and low visual complexity modes of the human-robot interface, which was likely due to a low task load. Users of the interface in high visual complexity mode reported lower perceived workload and better perceived performance than users of the interface in low visual complexity mode. Moreover, the findings of this study demonstrated that the unique appearance of a biologically inspired legged robot was not enough to produce a difference in perceived performance and trust compared to a more traditional-looking wheeled robot. Therefore, there was no basis to conclude that the unique appearance of the legged robot led the user to anthropomorphize the legged robot more than the wheeled robot. Additionally, free-response feedback from users revealed that Mixed Reality-based head-mounted displays have the potential to overcome the shortcomings of Augmented Reality-based head-mounted displays and offer a suitable alternative to hand-held displays in dismounted military operations. Finally, this study demonstrated that an increase in the visual complexity of a Mixed Reality-based human-robot interface improves the effectiveness of human-robot interaction, and ultimately human-robot team performance, as long as the additional complexity supports the tasks of the human.

A Dissertation submitted to the Department of Mechanical Engineering in partial fulfillment of the requirements for the degree of Doctor of Philosophy.
Summer Semester 2017. July 18, 2017.
Includes bibliographical references.
Committee: Emmanuel G. Collins, Professor Directing Dissertation; Rodney G. Roberts, University Representative; Jonathan E. Clark, Committee Member; William S. Oates, Committee Member; Daniel J. Barber, Committee Member.