Hands-Free Human–Robot Interaction Using Multimodal Gestures and Deep Learning in Wearable Mixed Reality

This study proposes a novel hands-free interaction method for human–robot interaction (HRI) in mixed reality (MR) environments, combining multimodal gestures, such as eye gaze and head gestures, with deep learning. Since human operators often hold objects while performing tasks, there are many constrained si...


Bibliographic Details
Main Authors: Kyeong-Beom Park, Sung Ho Choi, Jae Yeol Lee, Yalda Ghasemi, Mustafa Mohammed, Heejin Jeong
Format: Article
Language: English
Published: IEEE, 2021-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9395580/