Multimodal Patient Satisfaction Recognition for Smart Healthcare
The inclusion of multimodal inputs improves the accuracy and dependability of smart healthcare systems. A user satisfaction monitoring system that uses multimodal inputs composed of users' facial images and speech is proposed in this paper. The smart healthcare system sends the multimodal inputs to the cloud, where they are processed and classified as fully satisfied, partly satisfied, or unsatisfied; the results are then sent to various stakeholders in the smart healthcare environment. Multiple image and speech features are extracted during cloud processing: directional derivatives and the Weber local descriptor are used for speech and image features, respectively. The features are then combined to form a multimodal signal, which is fed to a support vector machine (SVM) classifier. The proposed system achieves 93% accuracy in satisfaction detection.
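The fusion-and-classification step described in the abstract can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the Weber local descriptor and directional-derivative feature extractors are replaced by random placeholder vectors, and scikit-learn's `SVC` stands in for the paper's SVM classifier.

```python
# Sketch of early fusion of two modalities followed by SVM classification.
# Placeholder features are used; real inputs would be WLD image features
# and directional-derivative speech features, as in the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples = 300
image_feats = rng.normal(size=(n_samples, 64))   # stand-in for WLD image features
speech_feats = rng.normal(size=(n_samples, 32))  # stand-in for speech features

# Early fusion: concatenate both modalities into one multimodal feature vector
X = np.hstack([image_feats, speech_feats])
# Three satisfaction classes: 0 = unsatisfied, 1 = partly, 2 = fully satisfied
y = rng.integers(0, 3, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```

With random placeholder features and labels the reported accuracy is near chance; the point of the sketch is the concatenation-then-classify structure, not the score.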
Main Author: | Abdulhameed Alelaiwi |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2019-01-01 |
Series: | IEEE Access |
Subjects: | Healthcare; local texture pattern; patient monitoring |
Online Access: | https://ieeexplore.ieee.org/document/8913430/ |
id |
doaj-9420532575ba48cf899e0d95b8a15e69 |
---|---|
record_format |
Article |
author |
Abdulhameed Alelaiwi (https://orcid.org/0000-0001-5459-6194), Chair of Smart Technologies, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia |
doi |
10.1109/ACCESS.2019.2956083 |
collection |
DOAJ |
language |
English |
format |
Article |
sources |
DOAJ |
author |
Abdulhameed Alelaiwi |
title |
Multimodal Patient Satisfaction Recognition for Smart Healthcare |
publisher |
IEEE |
series |
IEEE Access |
issn |
2169-3536 |
publishDate |
2019-01-01 |
description |
The inclusion of multimodal inputs improves the accuracy and dependability of smart healthcare systems. A user satisfaction monitoring system that uses multimodal inputs composed of users' facial images and speech is proposed in this paper. The smart healthcare system sends the multimodal inputs to the cloud, where they are processed and classified as fully satisfied, partly satisfied, or unsatisfied; the results are then sent to various stakeholders in the smart healthcare environment. Multiple image and speech features are extracted during cloud processing: directional derivatives and the Weber local descriptor are used for speech and image features, respectively. The features are then combined to form a multimodal signal, which is fed to a support vector machine (SVM) classifier. The proposed system achieves 93% accuracy in satisfaction detection. |
topic |
Healthcare; local texture pattern; patient monitoring |
url |
https://ieeexplore.ieee.org/document/8913430/ |