Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study
Background: Artificial intelligence (AI) is increasingly being used in healthcare, where AI-based chatbot systems can act as automated conversational agents capable of promoting health, providing education, and potentially prompting behaviour change. Exploring the motivation to use health chatbots is required to predict uptake; however, few studies to date have explored their acceptability. This research aimed to explore participants' willingness to engage with AI-led health chatbots.

Methods: The study incorporated semi-structured interviews (N = 29), which informed the development of an online survey (N = 216) advertised via social media. Interviews were recorded, transcribed verbatim and analysed thematically. The 24-item survey explored demographic and attitudinal variables, including acceptability and perceived utility. The quantitative data were analysed using binary regressions, each with a single categorical predictor.

Results: Three broad themes were identified: 'Understanding of chatbots', 'AI hesitancy' and 'Motivations for health chatbots', outlining concerns about accuracy, cyber-security, and the inability of AI-led services to empathise. The survey showed moderate acceptability (67%). Acceptability was negatively associated with poorer perceived IT skills (OR = 0.32 [95% CI: 0.13–0.78]) and a dislike of talking to computers (OR = 0.77 [95% CI: 0.60–0.99]), and positively associated with perceived utility (OR = 5.10 [95% CI: 3.08–8.43]), positive attitude (OR = 2.71 [95% CI: 1.77–4.16]) and perceived trustworthiness (OR = 1.92 [95% CI: 1.13–3.25]).

Conclusion: Most internet users would be receptive to using health chatbots, although hesitancy regarding this technology is likely to compromise engagement. Intervention designers focusing on AI-led health chatbots need to employ user-centred and theory-based approaches that address patients' concerns and optimise user experience in order to achieve the best uptake and utilisation. Patients' perspectives, motivation and capabilities need to be taken into account when developing and assessing the effectiveness of health chatbots.
Main Authors: | Tom Nadarzynski, Oliver Miles, Aimee Cowie, Damien Ridge
---|---
Format: | Article
Language: | English
Published: | SAGE Publishing, 2019-08-01
Series: | Digital Health
ISSN: | 2055-2076
Online Access: | https://doi.org/10.1177/2055207619871808
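For readers unfamiliar with how odds ratios and 95% confidence intervals like those in the Results are typically derived, the sketch below runs a univariable binary logistic regression on simulated data. It is a minimal illustration under assumptions: the simulated dataset, the variable names (`perceived_utility`, `acceptable`) and the use of `statsmodels` are hypothetical and do not reproduce the authors' actual analysis code or data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated example: does perceived utility (yes/no) predict acceptability (yes/no)?
rng = np.random.default_rng(42)
n = 216  # same sample size as the survey, data itself is made up

perceived_utility = rng.integers(0, 2, size=n)              # 0 = no, 1 = yes
log_odds = -0.5 + 1.6 * perceived_utility                   # assumed true effect
acceptable = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))   # 0 = not acceptable, 1 = acceptable

# Univariable binary logistic regression (single categorical predictor)
X = sm.add_constant(pd.DataFrame({"perceived_utility": perceived_utility}))
fit = sm.Logit(acceptable, X).fit(disp=0)

# Exponentiating the coefficient and its confidence bounds gives OR [95% CI]
summary = np.exp(fit.conf_int())
summary.columns = ["2.5%", "97.5%"]
summary["OR"] = np.exp(fit.params)
print(summary.loc["perceived_utility"])
```

In a design like the one described in the Methods, each attitudinal predictor would be fitted in its own model of this form, yielding one OR and 95% CI per predictor.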