Multimodal Patient Satisfaction Recognition for Smart Healthcare


Bibliographic Details
Main Author: Abdulhameed Alelaiwi
Format: Article
Language: English
Published: IEEE, 2019-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/8913430/
Description
Summary: The inclusion of multimodal inputs improves the accuracy and dependability of smart healthcare systems. A user satisfaction monitoring system that uses multimodal inputs composed of users' facial images and speech is proposed in this paper. This smart healthcare system sends the multimodal inputs to the cloud, where they are processed and classified as fully satisfied, partly satisfied, or unsatisfied; the results are then sent to various stakeholders in the smart healthcare environment. Multiple image and speech features are extracted during cloud processing. Directional derivatives and a Weber local descriptor are used for speech and image features, respectively. The features are then combined to form a multimodal signal, which is supplied to a support vector machine (SVM) classifier. The proposed system achieves 93% accuracy for satisfaction detection.
ISSN: 2169-3536
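The pipeline the summary describes (per-modality features, early fusion into one multimodal vector, SVM classification into three satisfaction classes) can be sketched as below. This is a minimal illustration, not the paper's implementation: the random arrays stand in for the actual Weber-local-descriptor image features and directional-derivative speech features, whose extraction is not shown here, and the feature dimensions and labels are invented for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the paper's features: in the real system these
# would be Weber-local-descriptor features from face images and
# directional-derivative features from speech, computed in the cloud.
n_samples = 300
image_feats = rng.normal(size=(n_samples, 64))   # assumed dimensionality
speech_feats = rng.normal(size=(n_samples, 32))  # assumed dimensionality

# Early fusion: concatenate the two modalities into one multimodal vector.
fused = np.concatenate([image_feats, speech_feats], axis=1)

# Three satisfaction classes: 0 = unsatisfied, 1 = partly, 2 = fully satisfied.
labels = rng.integers(0, 3, size=n_samples)

# Train an SVM on the fused features and predict on held-out samples.
X_train, X_test, y_train, y_test = train_test_split(
    fused, labels, random_state=0
)
clf = SVC(kernel="rbf").fit(X_train, y_train)
preds = clf.predict(X_test)
```

With random stand-in features the accuracy is meaningless; the sketch only shows the fusion-then-classify structure, where each prediction is one of the three satisfaction classes.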