Qian XU, Chutian YU, Xiang YUAN, Mengli WEI, Hongzhe LIU
1State Grid Zhejiang Economic Research Institute, Hangzhou 310008, China
2Beijing Yinshan Technology Co., Ltd., Beijing 100871, China
Abstract: In this paper, the optimization problem subject to N nonidentical closed convex set constraints is studied. The aim is to design a distributed optimization algorithm over a fixed unbalanced graph to solve the considered problem. To this end, the push-sum framework is improved and a new distributed optimization algorithm is designed, and its strict convergence analysis is given under the assumption that the involved graph is strongly connected. Finally, simulation results support the good performance of the proposed algorithm.
Key words: Distributed optimization; Nonidentical constraints; Improved push-sum framework
Because the distributed optimization algorithm performs well in solving large-scale optimization problems involved in smart grid systems, smart communication systems, multiple unmanned systems, and intelligent transportation systems, it has attracted increasing attention recently. Many excellent algorithms have been obtained, including the continuous-time algorithms (Wang and Elia, 2011; Gharesifard and Cortés, 2014; Kia et al., 2015; Liu QS and Wang, 2015; Yang et al., 2017; Zhu YN et al., 2019a, 2019b) and the discrete-time algorithms (Nedić and Ozdaglar, 2009; Nedić et al., 2010; Yuan et al., 2011; Zhu MH and Martínez, 2012; Nedić and Olshevsky, 2015; Pu et al., 2018, 2021; Qu and Li, 2018; Xi et al., 2018; Mai and Abed, 2019; Zimmermann et al., 2020; Liu HZ et al., 2021; Yu et al., 2021). In this paper, we focus on results involving the design of discrete-time algorithms. The distributed gradient (subgradient) method was first designed in Nedić and Ozdaglar (2009), where the unconstrained problem was solved. Building on Nedić and Ozdaglar (2009), the distributed projected gradient (subgradient) method was designed in Nedić et al. (2010), which solves the optimization problem subject to an identical closed convex set constraint. Then, to extend the results to inequality and equality constraints, the primal-dual method was introduced in Zhu MH and Martínez (2012).
However, in the above-mentioned results, only balanced graphs were considered. To design distributed optimization algorithms over unbalanced graphs, three approaches have been introduced: estimating the left eigenvector associated with eigenvalue 1 of the row-stochastic adjacency matrix, the push-sum method, and the push-pull method. Using the left-eigenvector estimation method, distributed algorithms were designed to solve the problem with N nonidentical closed convex set constraints in Mai and Abed (2019) and the problem with N nonidentical inequality constraints, N nonidentical equality constraints, and N nonidentical closed convex set constraints in Liu HZ et al. (2021). Concerning the push-sum method, a distributed algorithm was designed in Nedić and Olshevsky (2015), which can solve only the unconstrained optimization problem. A distributed algorithm was developed in Yu et al. (2021) to solve the optimization problem with N nonidentical set constraints and in Zimmermann et al. (2020) to solve the problem with identical global constraints and N nonidentical inequality constraints. However, the structures of these algorithms are complex. As for the push-pull method, related distributed algorithms were designed in Pu et al. (2018, 2021), and they can solve only unconstrained optimization problems.
In the above-mentioned results, some of the designed distributed algorithms have a sub-linear convergence rate, while others (push-pull based algorithms) have a linear convergence rate. Most recently, many excellent studies have focused on designing distributed algorithms with a linear convergence rate to solve various optimization problems; details can be found in Qu and Li (2018) and Xi et al. (2018).
In this paper, the aim is to use the push-sum method to design a distributed algorithm with a sub-linear convergence rate over a fixed unbalanced graph to solve the optimization problem with N nonidentical set constraints. The main contribution of this paper is therefore the development of such a distributed optimization algorithm with strict convergence analysis. In detail, the improved push-sum framework was introduced in Zimmermann et al. (2020) to integrate a constraint handling method into the push-sum framework, while a new constraint handling method (a gradient-descent-like method) was employed in Yu et al. (2021) to achieve the same aim. These two methods are combined in this paper to handle constrained distributed optimization problems over unbalanced graphs. Moreover, compared with Zimmermann et al. (2020), the optimization problem studied in this paper is different and involves N nonidentical set constraints, which is more challenging than the problem with identical set constraints.
The notation and graph-theoretic preliminaries used in this study are the same as those in Liu HZ et al. (2021), and are therefore omitted here.
where Y is an arbitrarily given closed convex set in R^n and P_Y denotes the projection operator onto Y.
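For reference, the Euclidean projection onto a closed convex set and the standard nonexpansiveness property that projection lemmas of this kind rely on can be written as follows (standard facts, stated here only for orientation):
\[
P_Y(x) = \arg\min_{z \in Y} \|z - x\|, \qquad \|P_Y(x) - y\| \le \|x - y\| \quad \text{for all } x \in \mathbb{R}^n,\ y \in Y .
\]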
Note that Lemma 1 is the foundation for handling the directed graph, whereas Lemma 2 supports the convergence analysis with diminishing step sizes. Lemma 3 is the projection lemma used for the nonidentical set constraints.
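In such analyses, the diminishing step size α(t) is typically chosen to satisfy the standard summability conditions below (a common choice; the exact conditions associated with Lemma 2 may differ):
\[
\alpha(t) > 0, \qquad \sum_{t=0}^{\infty} \alpha(t) = \infty, \qquad \sum_{t=0}^{\infty} \alpha^{2}(t) < \infty, \qquad \text{e.g., } \alpha(t) = \frac{1}{t+1} .
\]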
In this study, we consider the following optimization problem:
where y ∈ R^n, f_i: R^n → R (i = 1, 2, ···, N) are convex functions that can be seen as the local objective functions associated with agent i, and Y_i ⊆ R^n (i = 1, 2, ···, N) are compact sets. In this study, we design the distributed optimization algorithm on a static weight-unbalanced digraph G. Moreover, let B ∈ R^{N×N} be the weight matrix associated with G. Assume that B is column-stochastic.
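Based on the description above, problem (1) can be read as the standard constrained-consensus form (a sketch using the notation just introduced):
\[
\min_{y \in \mathbb{R}^{n}} \ \sum_{i=1}^{N} f_i(y) \qquad \text{s.t.} \quad y \in Y := \bigcap_{i=1}^{N} Y_i ,
\]
where each agent i has access only to its own f_i and Y_i and exchanges information with its neighbors over G.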
In the following, the assumptions for the optimization problem and digraph G are introduced for subsequent analysis:
Assumption 1  For i = 1, 2, ···, N, f_i(x) has a continuous gradient.
Assumption 2  Digraph G is strongly connected.
The measurement dist(y, Y) and the parameter R denote the distance of y to Y and a positive constant related to the diameter of the sets, respectively. Inequality (2) is known as the constraint regularity condition, which is the key to solving optimization problems with multiple nonidentical constraints.
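A commonly used form of such a regularity condition, consistent with this description of inequality (2), is that there exists a constant R > 0 such that
\[
\operatorname{dist}(y, Y) \le R \max_{1 \le i \le N} \operatorname{dist}(y, Y_i) \qquad \text{for all relevant } y ,
\]
i.e., the distance to the intersection Y is controlled by the largest violation of any single local set Y_i.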
In this subsection, the distributed algorithm on digraph G is designed for problem (1), which is given as
Remark 2  Because Y_i (i = 1, 2, ···, N) are compact sets, y_i(t) under iteration (3c) are uniformly bounded with respect to t. Moreover, from Nedić and Olshevsky (2015), we know that x_i(t) under iteration (3a) with the given initial values are strictly positive; in other words, there exists a constant a_0 (a_0 > 0) such that x_i(t) ≥ a_0 for i = 1, 2, ···, N and t ≥ 0. Thus, the corresponding quantities (i = 1, 2, ···, N) are uniformly bounded with respect to t. Therefore, it can be assumed that there exists a positive constant M such that
Then, without loss of generality, we can assume that n = 1, and algorithm (3) can be rewritten in the compact form
Clearly, the properties of matrix Q(t) are the key to the convergence analysis of the proposed algorithm. The roadmap for the convergence analysis is briefly introduced as follows: first, we discuss the properties of matrix Q(t) in Lemma 4; then, we analyze the consensus property of y_i(t) in Lemma 5 and the convergence of y_i(t) toward the optimal solution under algorithm (3) in Lemmas 6 and 7; finally, we give the convergence properties of y_i(t) in Theorem 1.
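Before going through the lemmas, the following Python sketch of a push-sum-type iteration with a local projection step may help fix ideas. The variable names, the placement of the gradient step, and the interval projections are illustrative assumptions, not the exact form of iteration (3).

```python
import numpy as np

def push_sum_projected_step(B, x, z, y, grads, alpha, bounds):
    """One illustrative push-sum-type iteration with local projection.

    B      : (N, N) column-stochastic weight matrix of the digraph
    x      : (N,) push-sum weights, strictly positive (cf. Remark 2)
    z      : (N,) auxiliary variables mixed through B
    y      : (N,) local estimates y_i(t)
    grads  : list of N gradient callables, one per local objective f_i
    alpha  : current (diminishing) step size alpha(t)
    bounds : list of N (lo, hi) pairs standing in for the compact sets Y_i
    """
    x_new = B @ x                                    # weight mixing, (3a)-style
    z_new = B @ z - alpha * np.array([g(yi) for g, yi in zip(grads, y)])
    w = z_new / x_new                                # de-biased ratio (push-sum correction)
    y_new = np.array([np.clip(wi, lo, hi)            # local projection, (3c)-style
                      for wi, (lo, hi) in zip(w, bounds)])
    return x_new, z_new, y_new
```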
Lemma 4  Under Assumption 2, matrix Q(t), induced from algorithm (3), has the following properties:
(1) For all t ≥ 1, Q(t) is stochastic;
(2) For all t ≥ 0, x^T(t+1)Q(t+1) = x^T(t).
The proof of Lemma 4 can be found in Zimmermann et al. (2020), and is thus omitted here.
Lemma 5  Under Assumptions 1 and 2, y_i(t) (i = 1, 2, ···, N) generated by algorithm (3) satisfy
The proof of Lemma 5 can be completed based on the results in Mai and Abed (2019), and is thus omitted here.
Lemma 6  Let y* be the optimal solution to problem (1). Under Assumptions 1 and 2, y_i(t) (i = 1, 2, ···, N) under algorithm (3) satisfy
Proof  From the definition of y_i(t+1) and Lemma 3, we obtain
Then, for the first term on the right-hand side of inequality (8), based on property 2 of Lemma 4, we have
For the second term on the right-hand side of inequality (8), we have
For the third term on the right-hand side of inequality (8), we have
Then, it follows from inequalities (13) and (14) that
Thus, substituting inequalities (12) and (15) into Eq. (11) yields
Therefore, it follows from inequalities (9), (10), and (16) that
The proof is completed.
Lemma 7  Under Assumptions 1 and 2, y_i(t) (i = 1, 2, ···, N) under algorithm (3) further satisfy
Proof  It can be obtained from inequalities (5) and (6) that
Then, it follows from inequalities (19) and (21) that
Thus, with the selected a, we have
Therefore, noting that
The proof is completed.
Now, we present Theorem 1, which describes the convergence property of the proposed algorithm (3):
Theorem 1  Under Assumptions 1 and 2, all y_i(t) under algorithm (3) converge to a common optimal solution to problem (1).
Proof  It can be obtained from Lemma 1 and the properties of α(t) that ξ(t) is summable, and thus ξ(t) + a^2 α^2(t) is summable. Then, based on the results in Mai and Abed (2019) and Lemma 2, we can easily complete the proof.
In this section, we consider the economic dispatch problem (EDP) in an IEEE 118-bus system with nine microgrids (nodes), whose topology is given in Fig. 1.
Fig. 1 Communication graph of an IEEE 118-bus example
In general, the distributed EDP can be transformed into problem (1) using the Fenchel dual method. That is, the EDP yields the optimal output power of all units as long as the optimal solution to problem (1) is obtained. Furthermore, we focus only on the dual problem, modeled as
Specifically, the behaviors of y_i(t) (i = 1, 2, ···, 9) are shown in Fig. 2, where it can be seen that all y_i(t) converge to the common optimal solution y* = 9.684.
Fig. 2 States for the incremental cost of all generation units with capacity limitations
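For readers who want to reproduce the flavor of this experiment, the minimal sketch below builds a column-stochastic weight matrix for a small directed graph and runs a push-sum-type projected iteration. The 9-node ring topology, quadratic costs, and capacity limits are hypothetical placeholders, not the IEEE 118-bus data used here.

```python
import numpy as np

N = 9
# Hypothetical directed ring with self-loops; the actual topology is that of Fig. 1.
adj = np.eye(N)
for i in range(N):
    adj[(i + 1) % N, i] = 1.0               # edge i -> i+1

# Column-stochastic weights: each column sums to 1 (out-neighbor normalization).
B = adj / adj.sum(axis=0, keepdims=True)

# Hypothetical smooth convex local objectives f_i(y) = 0.5 * c_i * (y - 5)^2
# with compact local sets Y_i = [0, 20]; placeholders, not EDP dual data.
rng = np.random.default_rng(0)
c = rng.uniform(0.5, 2.0, size=N)
lo, hi = 0.0, 20.0

x = np.ones(N)                               # push-sum weights
z = np.zeros(N)                              # mixed auxiliary variables
y = np.zeros(N)                              # local estimates y_i(t)
for t in range(1, 2001):
    alpha = 1.0 / t                          # diminishing step size
    grad = c * (y - 5.0)                     # local gradients evaluated at y_i(t)
    x = B @ x
    z = B @ z - alpha * grad
    y = np.clip(z / x, lo, hi)               # de-biased ratio, then projection onto Y_i

print(y)  # the y_i should be close to a common minimizer (here, 5)
```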
Furthermore, we use the classical distributed projected gradient descent algorithm with a row-stochastic adjacency matrix (DPGD-RS) in Mai and Abed (2019) as a benchmark against which to compare the convergence performance of the proposed algorithm. The settings are the same, except for the selection of the row-stochastic matrix for DPGD-RS as
All behaviors of y_i(t) under DPGD-RS (Mai and Abed, 2019) are shown in Fig. 3 for comparison. Comparing the convergence behaviors, the algorithm proposed in this study clearly shows an advantage in terms of the convergence rate.
Fig. 3 Accuracy of the proposed algorithm compared with the row-stochastic distributed projected gradient descent
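The essential difference between the two settings lies in how the weight matrix is normalized: DPGD-RS works with a row-stochastic matrix, whereas the push-sum-based algorithm here uses a column-stochastic one. A minimal sketch of the two normalizations built from the same adjacency pattern (an illustrative construction, not the matrices used in the experiments):

```python
import numpy as np

def row_stochastic(adj):
    # Rows sum to 1: each agent averages over its in-neighbors (DPGD-RS-style weights).
    return adj / adj.sum(axis=1, keepdims=True)

def column_stochastic(adj):
    # Columns sum to 1: each agent splits its mass over its out-neighbors (push-sum-style weights).
    return adj / adj.sum(axis=0, keepdims=True)

# The same directed adjacency pattern yields two different weight matrices.
adj = np.array([[1., 1., 0.],
                [0., 1., 1.],
                [1., 0., 1.]])
A_row = row_stochastic(adj)      # for DPGD-RS
B_col = column_stochastic(adj)   # for the push-sum-based algorithm
```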
In this paper, we investigated the optimization problem with N nonidentical closed convex set constraints. We introduced the improved push-sum framework, based on which we designed a distributed algorithm over a fixed unbalanced graph, and we presented strict convergence analysis of the proposed algorithm. Additionally, we verified the good performance of the proposed algorithm through simulations. Future works will focus on optimization problems with more complicated constraints and on distributed optimization over time-varying unbalanced graphs.
Contributors
Qian XU and Chutian YU designed the research. Qian XU and Xiang YUAN processed the data. Qian XU and Mengli WEI drafted the paper. Hongzhe LIU helped organize the paper. Hongzhe LIU revised and finalized the paper.
Compliance with ethics guidelines
Qian XU, Chutian YU, Xiang YUAN, Mengli WEI, and Hongzhe LIU declare that they have no conflict of interest.
Data availability
The data that support the findings of this study are available from the corresponding author upon reasonable request.