A conditional gradient algorithm for distributed online optimization in networks

Abstract This paper addresses a network of computing nodes aiming to solve an online convex optimisation problem in a distributed manner, that is, by means of local estimation and communication, without any central coordinator. An online distributed algorithm based on the conditional gradient method is developed, which effectively tackles the high time complexity of distributed online optimisation. The global objective function is decomposed into the sum of local objective functions, and the nodes collectively minimise the sum of local time-varying objective functions while the communication pattern among nodes is captured by a connected undirected graph. By adding a regularisation term to the local objective function of each node, the proposed algorithm constructs a new time-varying objective function. The algorithm also uses a local linear optimisation oracle in place of the projection operation, so that its regret bound can be effectively improved. By introducing the nominal regret and the global regret, the convergence properties of the proposed algorithm are theoretically analysed. It is shown that, if the objective function of each agent is strongly convex and smooth, both types of regret grow sublinearly with order O(log T), where T is the time horizon. Numerical experiments also demonstrate the advantages of the proposed algorithm over existing distributed optimisation algorithms.
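The recipe summarised in the abstract (neighbour averaging over a connected undirected graph, a regularised local surrogate, and a linear optimisation oracle in place of a projection) can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the paper's exact update rule: the mixing matrix `W`, the form of the regularised surrogate gradient, the l1-ball oracle, and the step sizes `eta` and `gamma` are all choices made for the example.

```python
# Minimal sketch of one synchronous round of a distributed online
# conditional gradient (Frank-Wolfe) method. Illustrative only; the
# surrogate, oracle and step sizes are assumptions, not the paper's choices.
import numpy as np

def l1_ball_oracle(gradient, radius=1.0):
    """Linear optimisation oracle over the l1 ball:
    argmin_{||s||_1 <= radius} <gradient, s> (a signed vertex)."""
    s = np.zeros_like(gradient)
    i = np.argmax(np.abs(gradient))
    s[i] = -radius * np.sign(gradient[i])
    return s

def distributed_cg_round(x, grads, W, eta=1.0, gamma=0.1):
    """One round for all n nodes.
    x     : (n, dim) array, row i is node i's current decision
    grads : (n, dim) array, gradient of node i's current local loss at x[i]
    W     : (n, n) doubly stochastic mixing matrix of the undirected graph
    eta   : weight of the (assumed) regularisation pulling nodes toward consensus
    gamma : conditional gradient step size in (0, 1]
    """
    x_mix = W @ x                                        # communicate and average with neighbours
    surrogate_grad = grads + eta * (x - x_mix)           # gradient of a regularised surrogate (assumed form)
    s = np.vstack([l1_ball_oracle(g) for g in surrogate_grad])  # one linear-oracle call per node
    return x_mix + gamma * (s - x_mix)                   # step toward the oracle point; no projection

# Usage: 4 nodes on a ring, new quadratic local losses revealed each round.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, dim, T = 4, 5, 200
    W = 0.5 * np.eye(n) + 0.25 * (np.roll(np.eye(n), 1, axis=0) + np.roll(np.eye(n), -1, axis=0))
    x = np.zeros((n, dim))
    for t in range(1, T + 1):
        targets = rng.normal(size=(n, dim))              # node i observes f_{i,t}(x) = 0.5 * ||x - target_i||^2
        grads = x - targets                              # its gradient at the current decision
        x = distributed_cg_round(x, grads, W, eta=1.0, gamma=2.0 / (t + 2))
```

What the example mirrors from the abstract is the per-round pattern: each node only averages with its neighbours, evaluates one local gradient, and makes one call to a linear optimisation oracle, so no projection onto the feasible set is ever needed.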


Bibliographic Details
Main Authors: Xiuyu Shen, Dequan Li, Runyue Fang, Qiao Dong
Format: Article
Language: English
Published: Wiley 2021-03-01
Series: IET Control Theory & Applications
Subjects: Optimisation techniques; Interpolation and function approximation (numerical analysis); Computational complexity; Parallel programming and algorithm theory; Combinatorial mathematics
Online Access: https://doi.org/10.1049/cth2.12062
Collection: DOAJ (Directory of Open Access Journals), record doaj.art-155a0f04378e472bb7654129f68ca62f
ISSN: 1751-8644, 1751-8652
Citation: IET Control Theory & Applications, vol. 15, no. 4, pp. 570-579, 2021-03-01. Wiley. https://doi.org/10.1049/cth2.12062
Author affiliation (all four authors): School of Mathematics and Big Data, Anhui University of Science and Technology, Huainan 232001, China