User Modelling Using Multimodal Information for Personalised Dressing Assistance
Assistive robots in home environments are steadily increasing in popularity. Due to significant variabilities in human behaviour, as well as physical characteristics and individual preferences, personalising assistance poses a challenging problem. In this paper, we focus on an assistive dressing task that involves physical contact with a human's upper body, in which the goal is to improve the comfort level of the individual. Two aspects are considered to be significant in improving a user's comfort level: having more natural postures and exerting less effort. However, a dressing path that fulfils these two criteria may not be found at one time. Therefore, we propose a user modelling method that combines vision and force data to enable the robot to search for an optimised dressing path for each user and improve as the human-robot interaction progresses. We compare the proposed method against two single-modality state-of-the-art user modelling methods designed for personalised assistive dressing by user studies (31 subjects). Experimental results show that the proposed method provides personalised assistance that results in more natural postures and less effort for human users.
Main Authors: | Yixing Gao (ORCID: 0000-0003-4475-2792), Hyung Jin Chang, Yiannis Demiris (ORCID: 0000-0003-4917-3343) |
---|---|
Affiliation: | Department of Electrical and Electronic Engineering, Personal Robotics Laboratory, Imperial College London, London, U.K. |
Format: | Article |
Language: | English |
Published: | IEEE, 2020-01-01 |
Series: | IEEE Access |
Volume / Pages: | Vol. 8, pp. 45700-45714 |
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2020.2978207 |
Subjects: | Multimodal user modelling; assistive dressing; vision and force fusion; human-robot interaction |
Online Access: | https://ieeexplore.ieee.org/document/9024050/ |
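The abstract outlines an online, multimodal user-modelling loop: fuse a vision-based posture signal with a force-based effort signal into a per-user comfort score for candidate dressing paths, and refine the chosen path over repeated interactions. As a rough illustration only (this record does not detail the authors' actual algorithm), the Python sketch below implements one plausible version of that loop; the class, parameters, path names, and sensor values are all hypothetical assumptions.

```python
# Illustrative sketch only: not the authors' published algorithm. Assumes the
# robot can execute a candidate dressing path and then read back a vision-based
# posture-naturalness score and a force-based effort score, both in [0, 1].
# All names and values below are hypothetical.
import random


class DressingPathModel:
    """Per-user comfort model fusing vision (posture) and force (effort)."""

    def __init__(self, paths, w_posture=0.5, w_effort=0.5, lr=0.3):
        self.scores = {p: 0.0 for p in paths}  # higher = more comfortable
        self.w_posture = w_posture
        self.w_effort = w_effort
        self.lr = lr

    def fused_comfort(self, posture_naturalness, effort):
        # Lower effort is better, so it enters with a negative weight.
        return self.w_posture * posture_naturalness - self.w_effort * effort

    def update(self, path, posture_naturalness, effort):
        # Exponential moving average: the model keeps improving as the
        # human-robot interaction progresses.
        target = self.fused_comfort(posture_naturalness, effort)
        self.scores[path] += self.lr * (target - self.scores[path])

    def choose_path(self, explore=0.1):
        # Epsilon-greedy search: a path satisfying both criteria may not be
        # found in a single attempt, so occasionally try an alternative.
        if random.random() < explore:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)


# Hypothetical usage over repeated dressing interactions with one user.
model = DressingPathModel(paths=["over_shoulder", "along_arm", "front_first"])
for _ in range(10):
    path = model.choose_path()
    # ...execute `path`, then read the sensors; these numbers are made up...
    model.update(path, posture_naturalness=random.random(), effort=random.random())
print("Preferred path so far:", model.choose_path(explore=0.0))
```

The epsilon-greedy step reflects the abstract's observation that a path satisfying both criteria "may not be found at one time": occasional exploration lets the model keep improving across interactions rather than locking onto the first acceptable path.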