Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation

Bibliographic Details
Main Author: Cullen, Ralph H.
Published: Georgia Institute of Technology 2012
Subjects: Human factors; Multiple-task environments; Automation; Automation reliability; Human information processing; Attention
Online Access:http://hdl.handle.net/1853/42721
id ndltd-GATECH-oai-smartech.gatech.edu-1853-42721
record_format oai_dc
collection NDLTD
sources NDLTD
topic Human factors
Multiple-task environments
Automation
Automation reliability
Human information processing
Attention
spellingShingle Human factors
Multiple-task environments
Automation
Automation reliability
Human information processing
Attention
Cullen, Ralph H.
Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
description Multiple-task environments are pervasive in a variety of workplaces; many jobs require that several concurrent, time-sensitive tasks be done in one task space. One concern in these environments is attention allocation: to perform well, the operator must know when and where to look. Otherwise, he or she will not be aware of the status of each task or be able to complete them all. To aid these jobs, automation has been developed to support attention allocation: auditory and visual alerts draw attention to where the system determines it is needed. However, imperfect automation may complicate this aid by introducing misses and false alarms to which the operator must also attend. Researchers studying these environments, and automation's role within them, have focused on a variety of topics, including different types of automation (alerts, decision-aid systems, etc.), levels of reliability (0-100% reliable), what automation supports (from attention allocation to situation awareness to performance), and how automation affects multiple-task environments (from two tasks to many). Because attention had not been directly studied in relation to imperfect automation reliability in multiple-task environments, I analyzed the effects of different levels of automation reliability on visual attention allocation and how removal of that automation changed those effects.

To study this, I helped develop the Simultaneous Task Environment Platform (STEP), a program for studying and testing participants' behavior in multiple-task environments. STEP enabled me to vary the frequency and criticality (number of points gained or lost) of the different tasks, disambiguating how the automation was affecting the participants. In the study, participants were trained on all four tasks of the STEP system, had the automation explained to them, and were then asked to gain as many points per trial as possible. There were three between-subjects conditions: a system in which ~70% of the automated alerts were reliable, one in which ~90% of the alerts were reliable, and one in which participants received no automated aid at all. The automation was designed to support visual attention allocation. Participants interacted with the system and automation for twenty-four trials, divided into six blocks over two days, at which point they transferred to a system with no automation at all. To understand exactly how participants interacted with the system, I measured the number of times they accessed each task (attention allocation, which also served as a measure of workload) and the number of points they scored (task performance). Mixed ANOVAs for these two measures, as well as for a derived measure of efficiency (points scored per window opened), were conducted crossing automation condition with block (to measure how participants changed with experience) and task (to measure how tasks' attributes affected the way they were acted upon).

Overall, the automation provided a benefit in terms of reduced workload and improved task performance: participants in the automated conditions opened fewer windows and performed better, which also meant higher efficiency in those conditions. Experience affected the conditions differentially. Those in the no-automation condition increased their score but also the number of windows they opened, so their efficiency stayed the same. The 70%-reliable condition was similar, with a minor point increase and no significant decrease in windows opened, resulting in no significant efficiency gain. The 90%-reliable condition gained little in score but opened fewer windows by the end of the experiment, becoming more efficient. The frequency and criticality of tasks affected both the windows opened and the points scored across conditions: participants in the two automated conditions opened fewer windows and scored relatively more points on tasks that were worth many points but did not appear often. This increased their efficiency on those tasks, but also caused them to suffer more when the automation was taken away. In the transfer trials, participants in the automated conditions experienced both a workload increase and a performance decrease, centered on the two high-criticality/low-frequency tasks; the other two tasks showed only small or no change between normal and transfer trials.

These results show that automation at different levels of reliability affects the behavior of the system's operator differentially, depending on the attributes of the tasks the operator must oversee. Tasks that happen often and matter only in aggregate are not aided by automation as much as tasks that happen rarely and are critical every time they appear. When automation fails, however, the tasks that were aided the most suffer the most, whereas those that received little aid do not suffer as much. Designers of automated systems should consider the types of tasks to be automated and their attributes, as well as the effects of increasing or decreasing the automation's reliability, when designing automation to support system operators.
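To make the analysis described above concrete, the sketch below shows one way the derived efficiency measure could be computed and a simplified mixed-design ANOVA run on trial data of this shape. It is illustrative only: the thesis does not state what analysis software was used, and the file name, column names, and the use of pandas and the pingouin library are assumptions. The sketch also collapses over the task factor, whereas the thesis crosses automation condition with both block and task.

# Illustrative sketch, not the thesis's actual analysis code.
# Assumes hypothetical long-format data with one row per participant x block x task:
#   subject   - participant ID
#   condition - between-subjects factor: "none", "70%", or "90%" reliable automation
#   block     - within-subjects factor: 1..6 (experience)
#   task      - within-subjects factor: one of the four STEP tasks
#   points    - points scored (task performance)
#   windows   - number of task windows opened (attention allocation / workload)
import pandas as pd
import pingouin as pg

df = pd.read_csv("step_trials.csv")  # hypothetical file name

# Derived efficiency measure described in the abstract: points scored per window opened.
df["efficiency"] = df["points"] / df["windows"]

# Aggregate over tasks so each subject contributes one row per block.
agg = (df.groupby(["subject", "condition", "block"], as_index=False)
         .agg(points=("points", "sum"), windows=("windows", "sum")))
agg["efficiency"] = agg["points"] / agg["windows"]

# Simplified mixed ANOVA: automation condition (between-subjects) x block (within-subjects).
aov = pg.mixed_anova(data=agg, dv="efficiency",
                     within="block", subject="subject",
                     between="condition")
print(aov)

Under these assumptions, analogous calls with points or windows as the dependent variable would correspond to the other two measures reported in the abstract.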
author Cullen, Ralph H.
author_facet Cullen, Ralph H.
author_sort Cullen, Ralph H.
title Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
title_short Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
title_full Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
title_fullStr Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
title_full_unstemmed Human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
title_sort human performance in a multiple-task environment: effects of automation reliability on visual attention allocation
publisher Georgia Institute of Technology
publishDate 2012
url http://hdl.handle.net/1853/42721
work_keys_str_mv AT cullenralphh humanperformanceinamultipletaskenvironmenteffectsofautomationreliabilityonvisualattentionallocation
_version_ 1716475609017745408