Assessment of adoption, usability, and trustability of conversational agents in the diagnosis, treatment, and therapy of individuals with mental illness
Main Author: Vaidyam, Aditya Nrusimha
Other Authors: Flynn, David
Language: en_US
Published: 2019
Format: Thesis/Dissertation
Subjects: Mental health; Chatbot; Conversational agent; Depression; Medical informatics; Psychiatry
Online Access: https://hdl.handle.net/2144/36733
Rights: Attribution-NonCommercial-ShareAlike 4.0 International (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Description:
INTRODUCTION: Conversational agents are of great interest in the field of mental health and are frequently presented in the news as a solution to the shortage of clinicians relative to patients. Until very recently, however, little research had been conducted in patients with mental health conditions; most studies enrolled only healthy controls. Little is known about whether those with mental health conditions would want to use conversational agents, or how comfortable they might feel hearing results from a chatbot that they would normally hear from a clinician.
OBJECTIVES: We asked patients with mental health conditions to have a chatbot read a results document to them and then report on the experience. To our knowledge, this is one of the earliest studies to consider actual patient perspectives on conversational agents for mental health, and its findings should inform whether this avenue of research is worth pursuing. Our specific aims were, first and foremost, to determine the usability of such conversational agent tools; second, to determine their likely adoption among individuals with mental health disorders; and third, to determine whether users would develop a sense of trust in the artificial agent.
METHODS: We designed and implemented a conversational agent specific to mental health tracking, along with a supporting scale to measure its efficacy in three selected domains: Adoption, Usability, and Trust. These domains were chosen to mirror the phases of a patient's interaction with a conversational agent and were adapted for simplicity of measurement. Patients were briefly introduced to the technology and shown a demonstration of our conversational agent before using it themselves and then completing the survey built on the supporting scale.
RESULTS: In the Adoption domain (mean 3.27, SD 0.99), subjects typically felt less than content with adoption but believed the conversational agent could become commonplace without complicated technical hurdles. In the Usability domain (mean 3.40, SD 0.93), subjects tended to feel more content with the agent's usability. In the Trust domain (mean 2.65, SD 0.95), subjects felt least content with trusting the conversational agent.
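For readers unfamiliar with how such domain statistics are derived, the sketch below shows one plausible computation, assuming a 1-to-5 Likert scale with item responses pooled per domain. The item groupings and response values here are invented for illustration only; they are not the study's data, and only the means and standard deviations reported above come from the study.

    # Hypothetical sketch: per-domain mean and sample standard deviation
    # from pooled 1-5 Likert responses. All response values are invented.
    from statistics import mean, stdev

    # Each domain maps to its pooled item responses across subjects.
    responses = {
        "Adoption":  [4, 3, 2, 4, 3, 4, 2, 4],
        "Usability": [4, 3, 4, 2, 4, 3, 4, 3],
        "Trust":     [3, 2, 3, 1, 3, 4, 2, 3],
    }

    for domain, scores in responses.items():
        print(f"{domain}: mean={mean(scores):.2f}, sd={stdev(scores):.2f}")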
CONCLUSIONS: Although conversational agents are now readily accessible and relatively easy to use, a gap remains to be crossed before patients are willing to trust a conversational agent over speaking directly with a clinician in mental health settings. With increased attention, clinic adoption, and patient experience, however, conversational agents could be readily adopted for simple or routine tasks and for requesting information that would otherwise require time, cost, and effort to acquire. The field is still young, and with advances in digital technologies and artificial intelligence, capturing the essence of natural-language conversation could transform this currently simple tool with limited use cases into a powerful one for the digital clinician.