User Modelling Using Multimodal Information for Personalised Dressing Assistance

Bibliographic Details
Main Authors: Yixing Gao, Hyung Jin Chang, Yiannis Demiris
Format: Article
Language: English
Published: IEEE 2020-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/9024050/
Description
Summary: Assistive robots in home environments are steadily increasing in popularity. Due to significant variability in human behaviour, physical characteristics, and individual preferences, personalising assistance poses a challenging problem. In this paper, we focus on an assistive dressing task that involves physical contact with a human's upper body, in which the goal is to improve the comfort level of the individual. Two aspects are considered significant in improving a user's comfort level: adopting more natural postures and exerting less effort. However, a dressing path that satisfies both criteria may not be found in a single attempt. Therefore, we propose a user modelling method that combines vision and force data, enabling the robot to search for an optimised dressing path for each user and to improve as the human-robot interaction progresses. In user studies with 31 subjects, we compare the proposed method against two single-modality state-of-the-art user modelling methods designed for personalised assistive dressing. Experimental results show that the proposed method provides personalised assistance that results in more natural postures and less effort for human users.
ISSN: 2169-3536
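
The summary above describes the approach only at a high level. Purely as an illustrative assumption of how vision and force data might be fused into a per-user objective, the toy Python sketch below scores candidate dressing paths by a weighted sum of a vision-derived posture-naturalness cost and a force-derived effort cost, and adapts the user-specific weights after each trial. The class name DressingUserModel, the linear cost combination, and the update rule are all hypothetical and are not taken from the article.

import numpy as np

# Toy per-user model (hypothetical): combine a vision-based posture cost and
# a force-based effort cost into one score, then adapt the user-specific
# weights after each dressing trial.
class DressingUserModel:
    def __init__(self, w_posture=0.5, w_effort=0.5, lr=0.1):
        self.w = np.array([w_posture, w_effort], dtype=float)
        self.lr = lr  # step size for weight adaptation

    def path_cost(self, posture_cost, effort_cost):
        # Weighted sum of the two modality costs for one candidate path.
        return float(self.w @ np.array([posture_cost, effort_cost]))

    def select_path(self, candidates):
        # Pick the (posture_cost, effort_cost) pair with the lowest score.
        return min(candidates, key=lambda c: self.path_cost(*c))

    def update(self, discomfort_posture, discomfort_effort):
        # Shift weight toward whichever modality caused more discomfort
        # (both inputs in [0, 1]), then renormalise so the weights sum to 1.
        self.w += self.lr * np.array([discomfort_posture, discomfort_effort])
        self.w /= self.w.sum()

# Example: choose among three candidate paths, then adapt after feedback.
model = DressingUserModel()
candidates = [(0.2, 0.8), (0.5, 0.4), (0.7, 0.1)]
best = model.select_path(candidates)
model.update(discomfort_posture=0.3, discomfort_effort=0.7)

A linear weighted cost is only one plausible fusion scheme; the method evaluated in the article may use a quite different formulation, so this sketch should be read as an aid to intuition rather than a reimplementation.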