Evaluating the quality of voice assistants’ responses to consumer health questions about vaccines: an exploratory comparison of Alexa, Google Assistant and Siri


Bibliographic Details
Main Authors: Emily Couvillon Alagha, Rachel Renee Helbing
Format: Article
Language: English
Published: BMJ Publishing Group 2019-05-01
Series: BMJ Health & Care Informatics
Online Access: https://informatics.bmj.com/content/26/1/e100075.full
Description
Summary:
Objective: To assess the quality and accuracy of the voice assistants (VAs) Amazon Alexa, Siri and Google Assistant in answering consumer health questions about vaccine safety and use.
Methods: Responses of each VA to 54 questions related to vaccination were scored using a rubric designed to assess the accuracy of each answer provided through audio output and the quality of the source supporting each answer.
Results: Out of a total of 6 possible points, Siri averaged 5.16 points, Google Assistant averaged 5.10 points and Alexa averaged 0.98 points. Google Assistant and Siri understood voice queries accurately and provided users with links to authoritative sources about vaccination. Alexa understood fewer voice queries and did not draw answers from the same sources that were used by Google Assistant and Siri.
Conclusions: Those involved in patient education should be aware of the high variability of results between VAs. Developers and health technology experts should also push for greater usability and transparency about information partnerships as the health information delivery capabilities of these devices expand in the future.
ISSN: 2632-1009