Average-consensus in a two-time-scale Markov system


Bibliographic Details
Main Author: Topley, Kevin James
Language: English
Published: University of British Columbia 2014
Online Access:http://hdl.handle.net/2429/51262
id ndltd-UBC-oai-circle.library.ubc.ca-2429-51262
record_format oai_dc
spelling ndltd-UBC-oai-circle.library.ubc.ca-2429-51262 2018-01-05T17:27:51Z Average-consensus in a two-time-scale Markov system / Topley, Kevin James. Applied Science, Faculty of; Electrical and Computer Engineering, Department of; Graduate. Dates: 2014-12-02T15:35:21Z (accessioned), 2014 (issued), 2015-02 (graduation). Type: Text, Thesis/Dissertation. URI: http://hdl.handle.net/2429/51262. Language: eng. License: Attribution-NonCommercial-NoDerivs 2.5 Canada, http://creativecommons.org/licenses/by-nc-nd/2.5/ca/. Publisher: University of British Columbia.
collection NDLTD
language English
sources NDLTD
description In a spatially distributed network of sensors or mobile agents, it is often required to compute the average of the local data collected by each member of the group. Obtaining this average is, for example, sufficient to conduct robust statistical inference, identify the group's center of mass and direction of motion, or evenly assign a set of divisible tasks among processors. Due to the spatial distribution of the network, energy limitations and geographic barriers may render a data fusion center infeasible or highly inefficient for averaging the local data. The problem of distributively computing the network average, known as the average-consensus problem, has thus received significant attention in the signal processing and control research communities. Efforts in this direction propose and study distributed algorithms that allow every agent in the network to compute the global average via communication with only a subset of fellow agents. This thesis presents a framework in which to analyze distributed algorithms for both dynamic and static consensus formation. For dynamic consensus, we consider a two-time-scale Markov system wherein each sensor node observes the state of a local Markov chain. Assuming each Markov chain has a stationary distribution with a slowly switching regime, we show that a local stochastic approximation algorithm, in conjunction with linear distributed averaging, implies that each node's estimate converges weakly to the current average of all stationary distributions. Each node can thus track the average of all stationary distributions, provided the regime switching is sufficiently slow. We then consider static consensus formation when the inter-node communication pattern is a priori unknown and signals are subject to arbitrarily long time-delays.
Applied Science, Faculty of; Electrical and Computer Engineering, Department of; Graduate
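The dynamic-consensus scheme sketched in the abstract, a local stochastic-approximation update combined with linear distributed averaging, can be illustrated as follows. Everything below (network size, chain parameters, the doubly stochastic weight matrix `W`, the step size `eps`) is an illustrative assumption, not taken from the thesis; regime switching is omitted, so each local chain has a fixed stationary distribution and the nodes simply track the average of those distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- illustrative setup (all values are assumptions, not from the thesis) ---
n_nodes = 4        # sensors, each observing its own 2-state Markov chain
n_iters = 20000
eps = 0.01         # stochastic-approximation step size

# Each node's local Markov chain: 2x2 transition matrices with distinct
# stationary distributions pi_k = (q, p) / (p + q).
params = [(0.4, 0.1), (0.3, 0.1), (0.45, 0.05), (0.35, 0.15)]
P = [np.array([[1 - p, p], [q, 1 - q]]) for p, q in params]
pi_true = [np.array([q, p]) / (p + q) for p, q in params]
target = np.mean(pi_true, axis=0)   # average of all stationary distributions

# Doubly stochastic consensus weights on a ring (Metropolis-style);
# row sums and column sums are 1, so averaging preserves the network mean.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

state = np.zeros(n_nodes, dtype=int)   # current state of each local chain
est = np.full((n_nodes, 2), 0.5)       # each node's running estimate

for _ in range(n_iters):
    # Local stochastic-approximation step: move each node's estimate toward
    # the one-hot observation of its own chain's current state.
    for k in range(n_nodes):
        state[k] = rng.choice(2, p=P[k][state[k]])
        obs = np.eye(2)[state[k]]
        est[k] += eps * (obs - est[k])
    # Linear distributed averaging: one gossip round mixes the estimates.
    est = W @ est

print("target:", target)
print("node estimates:\n", est)
```

Because `W` is doubly stochastic and the gossip graph is connected, the averaging step drives all node estimates together while preserving their mean, so every node's estimate settles near the average of the local stationary distributions rather than its own chain's distribution.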
author Topley, Kevin James
spellingShingle Topley, Kevin James
Average-consensus in a two-time-scale Markov system
author_facet Topley, Kevin James
author_sort Topley, Kevin James
title Average-consensus in a two-time-scale Markov system
title_short Average-consensus in a two-time-scale Markov system
title_full Average-consensus in a two-time-scale Markov system
title_fullStr Average-consensus in a two-time-scale Markov system
title_full_unstemmed Average-consensus in a two-time-scale Markov system
title_sort average-consensus in a two-time-scale markov system
publisher University of British Columbia
publishDate 2014
url http://hdl.handle.net/2429/51262
work_keys_str_mv AT topleykevinjames averageconsensusinatwotimescalemarkovsystem
_version_ 1718584534293807104