Resilient visual perception for multiagent systems

There has been increasing interest in visual sensors and vision-based solutions for single- and multi-robot systems. Vision-based sensors, e.g., traditional RGB cameras, provide rich semantic information and accurate directional measurements at a relatively low cost; however, such sensors have two major drawbacks: they do not generally provide reliable depth estimates, and they typically have a limited field of view. These limitations considerably increase the complexity of controlling multiagent systems. This thesis studies some of the underlying problems in vision-based multiagent control and mapping.

The first contribution of this thesis is a method for restoring bearing rigidity in non-rigid networks of robots. We introduce means to determine which bearing measurements can improve bearing rigidity in non-rigid graphs and provide a greedy algorithm that restores rigidity in 2D with a minimum number of added edges.

The second part focuses on the formation control problem using only bearing measurements. We address the control problem for consensus and formation control through non-smooth Lyapunov functions and differential inclusions. We provide a stability analysis for undirected graphs and investigate the derived controllers for directed graphs. We also introduce a new notion of bearing persistence for purely bearing-based control in directed graphs.

The third part is concerned with the bearing-only visual homing problem under a limited field of view. In essence, this problem is a special case of the formation control problem in which a single moving agent has fixed neighbors. We introduce a navigational vector field, composed of two orthogonal vector fields, that converges to the goal position without violating the field-of-view constraints. Our method does not require the landmarks' locations and is robust to loss of landmark tracking.

The last part of the dissertation considers outlier detection in pose graphs for Structure from Motion (SfM) and Simultaneous Localization and Mapping (SLAM). We propose a method for detecting incorrect orientation measurements before pose graph optimization by checking their geometric consistency in cycles. We use Expectation-Maximization to fine-tune the parameters of the noise distribution, and we propose a new approximate graph-inference procedure specifically designed to exploit evidence on cycles, with better performance than standard approaches. Together, these contributions help multi-robot systems overcome the limitations of visual sensors in collaborative tasks such as navigation and mapping.
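The thesis's first part restores bearing rigidity by adding a minimum number of edges. As background, here is a minimal sketch of the standard infinitesimal bearing-rigidity rank test in 2D together with a naive greedy completion; the helper names (`bearing_rigidity_matrix`, `greedy_restore_rigidity`) are illustrative, and the rank criterion d·n − d − 1 (translations and uniform scaling as the trivial motions) follows the common formulation in the bearing-rigidity literature, not necessarily the thesis's exact construction:

```python
import numpy as np
from itertools import combinations

def bearing_rigidity_matrix(points, edges):
    """Stack one d x (n*d) block row per edge.

    For edge (i, j) with bearing b = (x_j - x_i)/||x_j - x_i||, the bearing
    rate is P (v_j - v_i)/dist with P = I - b b^T (rank d-1 projector)."""
    n, d = points.shape
    R = np.zeros((len(edges) * d, n * d))
    for k, (i, j) in enumerate(edges):
        diff = points[j] - points[i]
        dist = np.linalg.norm(diff)
        b = diff / dist
        P = np.eye(d) - np.outer(b, b)          # removes motion along the bearing
        R[k*d:(k+1)*d, i*d:(i+1)*d] = -P / dist
        R[k*d:(k+1)*d, j*d:(j+1)*d] = P / dist
    return R

def is_bearing_rigid(points, edges):
    # Infinitesimally bearing rigid iff rank = d*n - d - 1
    # (only translations and uniform scaling preserve all bearings).
    n, d = points.shape
    return np.linalg.matrix_rank(bearing_rigidity_matrix(points, edges)) == d*n - d - 1

def greedy_restore_rigidity(points, edges):
    """Greedily add the candidate edge that most increases the matrix rank."""
    edges = list(edges)
    while not is_bearing_rigid(points, edges):
        candidates = [e for e in combinations(range(len(points)), 2)
                      if e not in edges and (e[1], e[0]) not in edges]
        best = max(candidates, key=lambda e: np.linalg.matrix_rank(
            bearing_rigidity_matrix(points, edges + [e])))
        edges.append(best)
    return edges
```

The greedy step here re-evaluates the rank for every candidate edge, which is expensive and far from an optimized procedure; it only illustrates the rank-based selection idea behind restoring rigidity with few added edges.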

Bibliographic Details
Main Author: Karimian, Arman (ORCID: 0000-0002-9293-0533)
Other Authors: Tron, Roberto
Format: Thesis/Dissertation
Language: English (en_US)
Published: 2021
Subjects: Robotics; Computer vision; Field of view constraints; Multiagent control; SLAM; Structure from motion; Visual homing
Online Access: https://hdl.handle.net/2144/42591
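The visual homing part of the thesis steers a single agent to a goal using only bearings to fixed landmarks. For context, here is a minimal sketch of a classic bearing-only homing law (move along the sum of differences between current and goal bearings); this deliberately ignores the thesis's field-of-view constraints and orthogonal vector-field construction, and the function names are illustrative:

```python
import numpy as np

def bearing(p, landmark):
    """Unit vector from position p toward a landmark."""
    d = landmark - p
    return d / np.linalg.norm(d)

def homing_step(p, landmarks, goal_bearings, gain=0.5):
    """One Euler step along the summed bearing discrepancies.

    goal_bearings are the bearings observed at the goal position; the
    agent needs no landmark coordinates at run time, only bearings."""
    u = sum(bearing(p, L) - g for L, g in zip(landmarks, goal_bearings))
    return p + gain * u
```

Since the gradient of the distance to a landmark is the negative bearing, this law is gradient descent on a convex potential (sum of landmark distances plus a linear term), so it converges for non-collinear landmarks; respecting a limited field of view along the way is precisely the harder problem the thesis addresses.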
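The outlier-detection part of the thesis flags orientation measurements whose composition around graph cycles is geometrically inconsistent. Here is a minimal planar sketch of that consistency check (2D rotations as angles, with measurements assumed already oriented along each cycle's traversal); `suspect_edges` and its simple voting scheme are illustrative only, not the thesis's Expectation-Maximization-based inference:

```python
import numpy as np

def cycle_residual(cycle_measurements):
    """Relative-orientation angles (radians) around a closed cycle must
    compose to the identity; return the residual wrapped to (-pi, pi]."""
    total = sum(cycle_measurements)
    return (total + np.pi) % (2 * np.pi) - np.pi

def consistent(cycle_measurements, tol=0.1):
    """A cycle is geometrically consistent if its residual is near zero."""
    return abs(cycle_residual(cycle_measurements)) < tol

def suspect_edges(cycles, measurements, tol=0.1):
    """Naive voting: edges appearing in many inconsistent cycles are suspects.

    cycles: lists of edge ids; measurements: edge id -> angle (radians)."""
    votes = {}
    for cyc in cycles:
        if not consistent([measurements[e] for e in cyc], tol):
            for e in cyc:
                votes[e] = votes.get(e, 0) + 1
    return votes
```

A single bad measurement corrupts every cycle it lies on while leaving disjoint cycles consistent, which is why evidence accumulated over cycles localizes the outlier before pose graph optimization is ever run.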