Evaluating the quality of voice assistants’ responses to consumer health questions about vaccines: an exploratory comparison of Alexa, Google Assistant and Siri

Objective: To assess the quality and accuracy of the voice assistants (VAs) Amazon Alexa, Siri and Google Assistant in answering consumer health questions about vaccine safety and use.

Methods: Responses of each VA to 54 questions related to vaccination were scored using a rubric designed to assess the accuracy of each answer provided through audio output and the quality of the source supporting each answer.

Results: Out of a total of 6 possible points, Siri averaged 5.16 points, Google Assistant averaged 5.10 points and Alexa averaged 0.98 points. Google Assistant and Siri understood voice queries accurately and provided users with links to authoritative sources about vaccination. Alexa understood fewer voice queries and did not draw answers from the same sources that were used by Google Assistant and Siri.

Conclusions: Those involved in patient education should be aware of the high variability of results between VAs. Developers and health technology experts should also push for greater usability and transparency about information partnerships as the health information delivery capabilities of these devices expand in the future.
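The Methods and Results above report per-assistant averages over 54 questions, each scored on a 6-point rubric. Purely as an illustration of that averaging arithmetic (not the authors' code, rubric, or data; the scores below are made-up placeholders), a minimal Python sketch:

```python
# Illustrative sketch only: assumes each of the 54 questions received a
# 0-6 rubric score per voice assistant, as described in the Methods.

def average_rubric_score(scores: list[float], max_points: int = 6) -> float:
    """Mean rubric score across all scored questions (each 0..max_points)."""
    if not scores:
        raise ValueError("no scores provided")
    if any(s < 0 or s > max_points for s in scores):
        raise ValueError("score outside rubric range")
    return sum(scores) / len(scores)

# Hypothetical usage with placeholder values (the study used 54 scores per VA):
siri_scores = [6, 5, 6, 4]
print(round(average_rubric_score(siri_scores), 2))  # e.g. 5.25
```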


Bibliographic Details
Main Authors: Emily Couvillon Alagha, Rachel Renee Helbing
Format: Article
Language: English
Published: BMJ Publishing Group, 2019-05-01
Series: BMJ Health & Care Informatics
ISSN: 2632-1009
DOI: 10.1136/bmjhci-2019-100075
Online Access: https://informatics.bmj.com/content/26/1/e100075.full
Source: DOAJ