Distributed Optimization Algorithms for Networked Systems
Main Author: Chatzipanagiotis, Nikolaos
Other Authors: Zavlanos, Michael M.
Format: Dissertation
Published: 2015
Subjects: Mechanical engineering; Operations research; Mathematics; Distributed optimization; Networked control systems; Optimization algorithms; Wireless communications
Online Access: http://hdl.handle.net/10161/11304
id: ndltd-DUKE-oai-dukespace.lib.duke.edu-10161-11304
record_format: oai_dc
collection: NDLTD
sources: NDLTD
topic: Mechanical engineering; Operations research; Mathematics; Distributed optimization; Networked control systems; Optimization algorithms; Wireless communications
description:

Distributed optimization methods allow us to decompose an optimization problem into smaller, more manageable subproblems that are solved in parallel. For this reason, they are widely used to solve large-scale problems arising in areas as diverse as wireless communications, optimal control, machine learning, artificial intelligence, computational biology, finance, and statistics, to name a few. Moreover, distributed algorithms avoid the cost and fragility associated with centralized coordination, and they provide better privacy for the autonomous decision makers. These are desirable properties, especially in applications involving networked robotics, communication or sensor networks, and power distribution systems.

In this thesis we propose the Accelerated Distributed Augmented Lagrangians (ADAL) algorithm, a novel decomposition method for convex optimization problems with a certain separability structure. The method is based on the augmented Lagrangian framework and addresses problems that involve multiple agents optimizing a separable convex objective function subject to convex local constraints and linear coupling constraints. We establish the convergence of ADAL and also show that it has a worst-case O(1/k) convergence rate, where k denotes the number of iterations.

Moreover, we show that ADAL converges to a local minimum of the problem for cases with non-convex objective functions. This is the first published work that formally establishes the convergence of a distributed augmented Lagrangian method for non-convex optimization problems. An alternative way to select the stepsizes used in the algorithm is also discussed. These two contributions are independent of each other, meaning that convergence of the non-convex ADAL method can still be shown using the stepsizes from the convex case and, similarly, convergence of the convex ADAL method can be shown using the stepsizes proposed in the non-convex proof.

Furthermore, we consider cases where the distributed algorithm needs to operate in the presence of uncertainty and noise, and we show that the generated sequences of primal and dual variables converge to their respective optimal sets almost surely. In particular, we are concerned with scenarios where: (i) the local computation steps are inexact or are performed in the presence of uncertainty, and (ii) the message exchanges between agents are corrupted by noise. In this case, the proposed scheme can be classified as a distributed stochastic approximation method. Compared to the existing literature in this area, our work is the first that utilizes the augmented Lagrangian framework. Moreover, the method allows us to solve a richer class of problems than existing distributed stochastic approximation methods, which consider only consensus constraints.
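To make the problem class above concrete, the following is a minimal sketch of the separable structure the description refers to: each agent i holds a local variable x_i, a local convex cost f_i, and a local convex set X_i, and the agents interact only through a linear coupling constraint. The notation (A_i, b, multipliers λ, penalty ρ) is illustrative and is not taken from the dissertation itself.

```latex
% Separable convex problem with local constraint sets and a linear coupling
% constraint (illustrative notation), and its augmented Lagrangian, whose
% quadratic penalty couples the agents and is what a distributed augmented
% Lagrangian method must decompose at every iteration.
\begin{align*}
  \min_{x_1,\dots,x_N}\; & \sum_{i=1}^{N} f_i(x_i)
  \quad \text{s.t.}\quad \sum_{i=1}^{N} A_i x_i = b,
  \qquad x_i \in X_i,\; i = 1,\dots,N, \\
  \Lambda_\rho(x,\lambda) \;=\; & \sum_{i=1}^{N} f_i(x_i)
   \;+\; \lambda^{\top}\!\Big(\sum_{i=1}^{N} A_i x_i - b\Big)
   \;+\; \frac{\rho}{2}\,\Big\|\sum_{i=1}^{N} A_i x_i - b\Big\|^{2}.
\end{align*}
```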
Extensive numerical experiments have been carried out to validate the novelty and effectiveness of the proposed method in all the areas of the aforementioned theoretical contributions. We examine problems in convex, non-convex, and stochastic settings where uncertainties and noise affect the execution of the algorithm. For the convex cases, we present applications of ADAL to certain popular network optimization problems, as well as to a two-stage stochastic optimization problem. The simulation results suggest that the proposed method outperforms the state-of-the-art distributed augmented Lagrangian methods known in the literature. For the non-convex cases, we perform simulations on certain simple non-convex problems to establish that ADAL indeed converges to non-trivial local solutions of the problems; in comparison, a straightforward implementation of the other distributed augmented Lagrangian methods on the same problems does not lead to convergence. For the stochastic setting, we present simulation results of ADAL applied to network optimization problems and examine the effect that noise and uncertainties have on the convergence behavior of the method.

As an extended and more involved application, we also consider the problem of cooperative relay beamforming in wireless communications systems. Specifically, we study the scenario of a multi-cluster network, in which each cluster contains multiple single-antenna source-destination pairs that communicate simultaneously over the same channel. The communications are supported by cooperating amplify-and-forward relays, which perform beamforming. Since the emerging problem is non-convex, we propose an approximate convex reformulation. Based on ADAL, we also discuss two different ways to obtain a distributed solution that allows for autonomous computation of the optimal beamforming decisions by each cluster, while taking into account intra- and inter-cluster interference effects.

Our goal in this thesis is to advance the state of the art in distributed optimization by proposing methods that combine fast convergence, wide applicability, ease of implementation, and low computational complexity, and that are robust to delays, uncertainty in the problem parameters, noise corruption in the message exchanges, and inexact computations.
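As a rough illustration of the kind of relaxed augmented-Lagrangian iteration such a decomposition method performs, the sketch below solves a toy resource-sharing instance of the problem form shown earlier: each agent minimizes its local augmented Lagrangian with the other agents' variables frozen, the primal iterates are relaxed by a stepsize tau, and the dual variable is updated along the coupling residual. The toy data, the closed-form local step, and the choices tau = 1/N and rho = 1 are illustrative assumptions, not the dissertation's exact ADAL updates or stepsize conditions.

```python
# Illustrative sketch (not the dissertation's exact ADAL updates): a relaxed
# distributed augmented-Lagrangian iteration for the toy problem
#     minimize    sum_i 0.5 * ||x_i - c_i||^2
#     subject to  sum_i x_i = b,   lo <= x_i <= hi  (elementwise),
# i.e. a separable quadratic cost, box-shaped local sets, and one linear
# coupling constraint shared by all agents.
import numpy as np

rng = np.random.default_rng(0)
N, n = 5, 3                      # number of agents, dimension of each x_i
c = rng.normal(size=(N, n))      # local cost centers (hypothetical data)
b = np.ones(n)                   # right-hand side of the coupling constraint
lo, hi = -1.0, 1.0               # local box constraints
rho = 1.0                        # augmented-Lagrangian penalty (assumed)
tau = 1.0 / N                    # relaxation stepsize (illustrative choice)

x = np.zeros((N, n))             # primal iterates, one row per agent
lam = np.zeros(n)                # dual variable of the coupling constraint

for k in range(300):
    # Each agent minimizes its local augmented Lagrangian with the other
    # agents' variables frozen at their current values.  For this quadratic
    # cost the local Hessian is (1 + rho) * I, so clipping the unconstrained
    # minimizer onto the box gives the exact local solution.
    x_hat = np.empty_like(x)
    total = x.sum(axis=0)
    for i in range(N):
        others = total - x[i]                       # sum_{j != i} x_j (frozen)
        unconstrained = (c[i] - lam - rho * (others - b)) / (1.0 + rho)
        x_hat[i] = np.clip(unconstrained, lo, hi)

    # Relaxed primal update, then a dual ascent step on the coupling residual.
    x = x + tau * (x_hat - x)
    residual = x.sum(axis=0) - b
    lam = lam + rho * tau * residual

print("coupling residual:", np.linalg.norm(x.sum(axis=0) - b))
```

In a genuinely distributed deployment the inner loop would run at each agent, with only the partial coupling terms and the dual variable exchanged over the network; the stochastic setting described above would perturb exactly those exchanged quantities and the local computations.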
author2: Zavlanos, Michael M.
author: Chatzipanagiotis, Nikolaos
title: Distributed Optimization Algorithms for Networked Systems
publishDate: 2015
url: http://hdl.handle.net/10161/11304
_version_: 1718160402513133568