Enhancing Trust in Autonomous Systems without Verifying Software
Main Author: | Stamenkovich, Joseph Allan |
Other Authors: | Electrical and Computer Engineering; Patterson, Cameron D.; Saad, Walid; Huang, Bert |
Format: | Others |
Published: | Virginia Tech, 2019 |
Subjects: | Autonomy; Runtime Verification; FPGA; Field Programmable Gate Array; Monitor; Formal Methods; UAS; UAV; Security; Linear Temporal Logic; LTL; High-Level Synthesis; HLS; model checking; drone; malware; assurance; robotics; firmware; hardware |
Online Access: | http://hdl.handle.net/10919/89950 |
id |
ndltd-VTETD-oai-vtechworks.lib.vt.edu-10919-89950 |
record_format |
oai_dc |
spelling |
ndltd-VTETD-oai-vtechworks.lib.vt.edu-10919-89950 2020-09-29T05:43:20Z
Enhancing Trust in Autonomous Systems without Verifying Software
Stamenkovich, Joseph Allan; Electrical and Computer Engineering; Patterson, Cameron D.; Saad, Walid; Huang, Bert
Master of Science. 2019-06-13T08:00:38Z 2019-06-13T08:00:38Z 2019-06-12 Thesis vt_gsexam:20441 http://hdl.handle.net/10919/89950 In Copyright http://rightsstatements.org/vocab/InC/1.0/ ETD application/pdf Virginia Tech |
collection |
NDLTD |
format |
Others |
sources |
NDLTD |
topic |
Autonomy; Runtime Verification; FPGA; Field Programmable Gate Array; Monitor; Formal Methods; UAS; UAV; Security; Linear Temporal Logic; LTL; High-Level Synthesis; HLS; monitor; model checking; drone; malware; assurance; robotics; firmware; hardware |
description |
The complexity of the software behind autonomous systems is rapidly growing, as is the range of applications they are put to. It is not unusual for the lines of code to reach the millions, which adds to the verification challenge. The machine learning algorithms involved are often "black boxes" whose precise workings are not known even to the developers applying them, and whose behavior is undefined when an untrained scenario is encountered. With so much code, the possibility of bugs or malicious code is considerable. An approach is developed to monitor, and if necessary override, the behavior of autonomous systems independently of the software controlling them. Application-isolated safety monitors are implemented in configurable hardware to ensure that the behavior of an autonomous system is limited to what is intended. The sensor inputs may be shared with the software, but the output from the monitors is engaged only when the system violates its prescribed behavior. For each specific rule the system is expected to follow, a monitor processes the relevant sensor information. The behavior is defined in linear temporal logic (LTL), and the associated monitors are implemented in a field-programmable gate array (FPGA). An off-the-shelf drone is used to demonstrate the effectiveness of the monitors without any physical modifications to the drone. Upon detection of a violation, appropriate corrective actions are persistently enforced on the autonomous system.

=== Master of Science ===

Autonomous systems are surprisingly vulnerable, not just to malicious hackers but to design errors and oversights. The lines of code required can quickly climb into the millions, and the artificial decision algorithms can be inscrutable and fully dependent on the information they are trained on. These factors make the verification of the core software running our autonomous cars, drones, and other systems prohibitively difficult by traditional means. Independent safety monitors are implemented to provide internal oversight for these autonomous systems. A semi-automatic design process efficiently creates error-free monitors from the safety rules a drone must follow. These monitors remain separate and isolated from the software typically controlling the system, but use the same sensor information. They are embedded in the circuitry and act as small, task-specific processors, each watching to make sure a particular rule is not violated; when a rule is violated, they take control of the system and force corrective behavior. The monitors are added to a consumer off-the-shelf (COTS) drone to demonstrate their effectiveness. For every rule monitored, an override is triggered when that rule is violated. As with any electronic component, their effectiveness depends on reliable sensor information, and on the completeness of the rules from which the monitors are generated. |
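The scheme described above, one latching monitor per LTL safety rule with a persistent override on violation, can be sketched in software. This is an illustrative sketch only: the thesis implements such monitors in FPGA logic via high-level synthesis, not as software, and the names `SafetyMonitor` and `MAX_ALT` and the altitude rule itself are assumptions chosen for illustration, not taken from the source.

```python
# Sketch of a runtime monitor for an LTL invariant of the form
# G(altitude <= MAX_ALT): "the altitude never exceeds MAX_ALT".
# The thesis synthesizes monitors like this into FPGA circuitry;
# here the same behavior is modeled in Python for clarity.

MAX_ALT = 120.0  # metres; an assumed geofence ceiling, not from the source


class SafetyMonitor:
    """Checks one safety rule against each new sensor sample.

    Once a violation is observed it latches, so the override is
    persistently enforced, mirroring the behavior the abstract describes.
    """

    def __init__(self):
        self.violated = False

    def check(self, altitude: float) -> bool:
        # A monitor for G(p) needs only the current sample: the property
        # fails as soon as one sample falsifies p, and stays failed.
        if altitude > MAX_ALT:
            self.violated = True
        # Return False once tripped: the override stays engaged.
        return not self.violated


m = SafetyMonitor()
print(m.check(50.0))   # within the envelope: no override
print(m.check(130.0))  # violation detected: override engaged
print(m.check(60.0))   # back in the envelope, but the violation latches
```

In the hardware version, one such monitor exists per rule, each fed the relevant sensor stream, so adding a rule adds an independent circuit rather than more code in the control software.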
author2 |
Electrical and Computer Engineering |
author |
Stamenkovich, Joseph Allan |
title |
Enhancing Trust in Autonomous Systems without Verifying Software |
publisher |
Virginia Tech |
publishDate |
2019 |
url |
http://hdl.handle.net/10919/89950 |