Learning Over Multitask Graphs—Part II: Performance Analysis

Part I of this paper formulated a multitask optimization problem where agents in the network have individual objectives to meet, or individual parameter vectors to estimate, subject to a smoothness condition over the graph. A diffusion strategy was devised that responds to streaming data and employs stochastic approximations in place of actual gradient vectors, which are generally unavailable. The approach relied on minimizing a global cost consisting of the aggregate sum of individual costs regularized by a term that promotes smoothness. We examined the first-order, second-order, and fourth-order stability of the multitask learning algorithm; the results identified conditions on the step-size parameter, the regularization strength, and the data characteristics that ensure stability. This Part II examines the steady-state performance of the strategy. The results reveal explicitly the influence of the network topology and the regularization strength on network performance and provide insights into the design of effective multitask strategies for distributed inference over networks.
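The regularized global cost described in the abstract can be written out as a worked equation. The following is a reconstruction from the abstract's wording only; the symbols (eta for the regularization strength, J_k and w_k for agent k's cost and parameter vector, N_k for its neighborhood) are our notation and may differ from the paper's:

\min_{\{w_k\}} \; \sum_{k=1}^{N} J_k(w_k) \; + \; \frac{\eta}{2} \sum_{k=1}^{N} \sum_{\ell \in \mathcal{N}_k} c_{k\ell} \, \| w_k - w_\ell \|^2

The quadratic penalty (equivalently (eta/2) w^T (L \otimes I_M) w for a graph Laplacian L) is small when neighboring task vectors are close, which is what "promotes smoothness over the graph" means here.

Below is a minimal runnable sketch of a diffusion strategy of this kind, assuming least-mean-squares individual costs, a combination matrix of the assumed form A = I - mu*eta*L, and synthetic streaming data. All of these choices are illustrative assumptions for the reader's orientation, not the paper's exact algorithm or notation.

```python
import numpy as np

# Hypothetical sketch: diffusion LMS over a multitask graph with
# graph-Laplacian regularization. mu, eta, L, and the combination
# matrix A = I - mu*eta*L are assumptions, not the paper's exact setup.

rng = np.random.default_rng(0)

N, M = 10, 5          # number of agents, parameter dimension
mu, eta = 0.01, 1.0   # step size, regularization strength

# Ring graph; combinatorial Laplacian L = D - C
C = np.zeros((N, N))
for k in range(N):
    C[k, (k + 1) % N] = C[(k + 1) % N, k] = 1.0
L = np.diag(C.sum(axis=1)) - C

# Smooth tasks: a common component plus small per-agent deviations
w_common = rng.standard_normal(M)
W_true = w_common + 0.1 * rng.standard_normal((N, M))

W = np.zeros((N, M))          # current estimates, one row per agent
A = np.eye(N) - mu * eta * L  # assumed combination matrix

for i in range(5000):
    # Adaptation: each agent uses an instantaneous (stochastic) gradient
    # of its mean-square-error cost from one streaming sample (u_k, d_k).
    U = rng.standard_normal((N, M))
    d = np.einsum('km,km->k', U, W_true) + 0.1 * rng.standard_normal(N)
    err = d - np.einsum('km,km->k', U, W)
    Psi = W + mu * err[:, None] * U   # psi_k = w_k + mu*u_k*(d_k - u_k^T w_k)

    # Combination: smoothness-promoting averaging driven by the Laplacian.
    W = A @ Psi

# Network mean-square deviation at steady state
msd = np.mean(np.sum((W - W_true) ** 2, axis=1))
print(f"steady-state MSD ~ {msd:.2e}")
```

In this sketch, a smaller mu lowers the steady-state error at the cost of slower adaptation, while a larger eta pulls the per-agent estimates toward one another; these are the topology and regularization-strength trade-offs whose steady-state expressions Part II derives.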

Bibliographic Details
Main Authors: Roula Nassif, Stefan Vlaski, Cedric Richard, Ali H. Sayed
Format: Article
Language: English
Published: IEEE, 2020-01-01
Series: IEEE Open Journal of Signal Processing
ISSN: 2644-1322
Volume / Pages: vol. 1, pp. 46-63
DOI: 10.1109/OJSP.2020.2989031
Subjects: Multitask distributed inference; diffusion strategy; smoothness prior; graph Laplacian regularization; gradient noise; steady-state performance
Online Access: https://ieeexplore.ieee.org/document/9075192/
Source Collection: DOAJ (record doaj-f8824f76f61a4132b8ab62fa83f4145d)
Author Details:
Roula Nassif (ORCID: https://orcid.org/0000-0001-9663-8559)
Stefan Vlaski (ORCID: https://orcid.org/0000-0002-0616-3076)
Cedric Richard (ORCID: https://orcid.org/0000-0003-2890-141X)
Ali H. Sayed (ORCID: https://orcid.org/0000-0002-5125-5519)
Affiliations (as listed in the record): Institute of Electrical Engineering, EPFL, Lausanne, Switzerland; Institute of Electrical Engineering, EPFL, Lausanne, Switzerland; Université de Nice Sophia-Antipolis, Nice, France; Université de Nice Sophia-Antipolis, Nice, France