
    Matrix-valued distributed stochastic optimization with constraints?


Zicong XIA, Yang LIU, Wenlian LU, Weihua GUI

1 Key Laboratory of Intelligent Education Technology and Application of Zhejiang Province, Zhejiang Normal University, Jinhua 321004, China

2 School of Mathematical Sciences, Zhejiang Normal University, Jinhua 321004, China

3 School of Mathematical Sciences, Fudan University, Shanghai 200433, China

4 School of Automation, Central South University, Changsha 410083, China

Abstract: In this paper, we address matrix-valued distributed stochastic optimization with inequality and equality constraints, where the objective function is a sum of multiple matrix-valued functions with stochastic variables and the considered problems are solved in a distributed manner. A penalty method is derived to deal with the constraints, and a selection principle is proposed for choosing feasible penalty functions and penalty gains. A distributed optimization algorithm based on the gossip model is developed for solving the stochastic optimization problem, and its convergence to the optimal solution is analyzed rigorously. Two numerical examples are given to demonstrate the viability of the main results.

Key words: Distributed optimization; Matrix-valued optimization; Stochastic optimization; Penalty method; Gossip model

    1 Introduction

    1.1 Distributed optimization

In recent years, distributed optimization has received great attention thanks to its important role in describing a number of collective tasks. As an effective parallel computing method, distributed optimization can tackle large-scale optimization problems by decomposing them into several subproblems that are solved in parallel. Its applications and theoretical significance relate to various fields, including sensor networks (Wan and Lemmon, 2009), machine learning (Li H et al., 2020), resource allocation (Deng et al., 2018), and so on (Yang SF et al., 2017; Shi et al., 2019; Yang T et al., 2019; Jiang et al., 2021; Wang D et al., 2021; Wang XY et al., 2021; Yue et al., 2022; Zeng et al., 2022).

Distributed optimization can be regarded as an optimization approach built on a multi-agent system. Each agent has its local objective function, and a local decision variable denotes the state of an agent. The objective function of the considered optimization problem is the sum of multiple local objective functions. A distinguishing feature of distributed optimization is that information is exchanged among agents through a network topology graph, in which each node denotes an agent. Each agent knows only its own information (its local objective function and local decision variable). Compared with centralized optimization, distributed optimization has several desirable features: (1) each agent interacts with only its neighbor agents, which reduces communication costs; (2) the information related to the optimization problem is distributed and stored in each agent, so it is more private and secure; (3) there is no single point of failure, which greatly improves the fault tolerance of the system; (4) because it is a parallel computing method, the scalability of the optimization algorithm is enhanced.

A large number of distributed optimization methods have been developed in recent years. For example, distributed subgradient methods for multi-agent optimization were developed in Nedic and Ozdaglar (2009). In Liu and Wang (2015), a second-order multi-agent system was proposed for distributed optimization with bound constraints. In Zeng et al. (2017), a distributed continuous-time optimization method was presented via non-smooth analysis. In Liu et al. (2017), a recurrent neural network (RNN) system was developed for distributed optimization. Based on an RNN system, decentralized-partial-consensus constrained optimization was addressed in Xia ZC et al. (2023). In Xia ZC et al. (2021, 2022), multi-complex-variable distributed optimization was studied.

Note that the systems proposed in existing works are vector-variable systems. Hence, the computation time depends on the dimension of the state in the optimization problem, and when the dimension is high, these methods converge slowly. A matrix-valued optimization model can overcome this difficulty in several areas (Bouhamidi and Jbilou, 2012; Bin and Xia, 2014; Xia YS et al., 2014; Li JF et al., 2016; Huang et al., 2021). However, the results on matrix-valued optimization methods are not yet systematic, although several seminal works have been done (Huang et al., 2021; Xing et al., 2021; Zhu ZH et al., 2021; Zhang et al., 2022).

    1.2 Constraints and penalty methods

Various types of constraints are studied in distributed optimization. For the resource allocation problem, the choice of each agent is in a certain range, and no agent shares its private information with others (Zeng et al., 2017). In this case, bound constraints and linear equality constraints are needed. Many engineering tasks have complex constraints due to time limitations and technical restrictions. In addition, the limitations of communication capacities cause constraints in social networks. To this end, the handling of constraints has been investigated in many works, including inequality constraints (Zhu MH and Martínez, 2012; Liang et al., 2018a; Li XX et al., 2020), equality constraints (Zhu MH and Martínez, 2012; Liu and Wang, 2013; Liang et al., 2018b; Lv et al., 2020), bound constraints (Zeng et al., 2017; Zhou et al., 2019), and approximate constraints (Jiang et al., 2021).

The exact penalty method is a valid approach for dealing with the constraints of an optimization problem. Its core is to choose feasible penalty functions and penalty gains that transform a constrained optimization problem into an equivalent unconstrained one, where the penalty gains can be much smaller than those required by conventional penalty methods. There are several related works on exact penalty methods for distributed optimization. An adaptive exact penalty method was proposed in Zhou et al. (2019) for distributed optimization. A distributed optimization algorithm was developed in Liang et al. (2018a) using an exact penalty function. However, Zhou et al. (2017, 2019) considered only the bound constraint and used the distance function as the penalty function. In this paper, we develop an exact penalty method for handling inequality and equality constraints.

    1.3 Gossip model

Originating from social communication networks, the gossip model plays an important role in consensus algorithms, and is applied in sensor networks and peer-to-peer networks thanks to its advantages, including high fault tolerance and high scalability (Boyd et al., 2006). In studies of distributed optimization, the gossip model has been widely applied to achieve consensus on the states of agents. In recent years, a large number of works on gossip-like optimization algorithms have appeared. For example, a gossip algorithm was designed for convex consensus optimization in Lu et al. (2011). In Jakovetic et al. (2011), a gossip algorithm was developed to solve cooperative convex optimization in networked systems. In Yuan (2014), a gossip-based gradient-free method was developed, and the gossip model was regarded as a multi-agent system. In Koloskova et al. (2019), a distributed stochastic optimization algorithm was proposed based on a gossip model with compressed communication.

    1.4 Goal and contributions

In this paper, we consider a distributed stochastic optimization problem with $N$ agents as follows:

$$\min_{X \in \mathbb{R}^{n \times m}} \; \sum_{i=1}^{N} f_i(X), \quad f_i(X) = \mathbb{E}_{\xi_i \sim D_i}\left[F_i(X, \xi_i)\right], \quad \text{s.t.} \; g(X) \le 0, \; h(X) = 0. \tag{1}$$

Problem (1) is said to be a matrix-valued distributed stochastic optimization problem. For vector-valued stochastic optimization, several works have been done. The problem of distributed stochastic optimization was addressed by Shamir and Srebro (2014). The strongly convex stochastic optimization problem was studied in Rakhlin et al. (2012). Several gossip algorithms with compressed communication were derived for decentralized stochastic optimization in Koloskova et al. (2019).

    In this paper,we address the matrix-valued distributed stochastic optimization with inequality and equality constraints using an algorithm based on a gossip model.Specifically,the contributions are summarized as follows:

1. An auxiliary function is proposed to analyze several properties of matrix-valued functions. Many properties commonly used in vector-valued optimization are restated in a matrix-valued fashion (see Definitions 1-5 and Lemma 1).

2. A selection principle for the penalty functions and the penalty gains is derived (see Selection principle 1). Based on the selection principle, an exact penalty method is proposed for transforming a matrix-valued optimization problem with inequality and equality constraints into a problem without inequality or equality constraints (see Theorems 1 and 2, and Fig. 1).

Fig. 1 Transient states of $X_k(1,i)$ (a), $X_k(2,i)$ (b), $X_k(3,i)$ (c), and the transient values of the objective function (d) in Example 1 ($k, i \in \{1,2,3\}$)

3. A distributed optimization algorithm based on a gossip model is developed for solving the matrix-valued distributed stochastic optimization problem (see Algorithm 1), and its convergence is analyzed (see Theorem 3 and Remark 1). Two numerical examples are provided to illustrate the efficiency of Algorithm 1 for solving matrix-valued distributed stochastic optimization problems (see Figs. 2 and 3).

    2 Preliminaries

    2.1 Notations

Let $\mathbb{R}$, $\mathbb{R}^n$, and $\mathbb{R}^{n\times m}$ denote the set of all real numbers, the set of all $n$-dimensional real vectors, and the set of all $n\times m$ real matrices, respectively. $I_N=\{1,2,\ldots,N\}$. $\|\cdot\|$ denotes the Euclidean norm, $\|\cdot\|_F$ denotes the Frobenius norm, and $|\cdot|$ denotes the absolute value. $\forall X \in \mathbb{R}^{n\times m}$, $X(i,j)$ denotes the $(i,j)$th element of $X$, and $\operatorname{vec}(X)=(X(1,1),X(2,1),\ldots,X(n,1),X(1,2),X(2,2),\ldots,X(n,m))^{\mathrm T}\in\mathbb{R}^{nm}$. $A^{\mathrm T}$ denotes the transpose of matrix $A$. $\operatorname{tr}(A)$ denotes the trace of the $n$th-order matrix $A$, and $\delta_2(A)$ denotes its second largest eigenvalue. $I_n$ denotes the $n$-dimensional identity matrix, and $\mathbf{1}_n$ denotes the $n$-dimensional vector with all components being 1. $\otimes$ denotes the Kronecker product operator. $G=(V,E,A)$ denotes a graph with $N$ nodes, where $V=I_N$ is the node set, $E\subset V\times V$ is the edge set, and $A\in\mathbb{R}^{N\times N}$ is the weighted adjacency matrix, with $A(i,j)>0$ if $(i,j)\in E$ and $A(i,j)=0$ otherwise. Let $N_i=\{j\mid A(i,j)\neq 0\}$ be the set of the neighbors of node $i$. For a set $S\subset\mathbb{R}^{n\times m}$, $P_S(X)=\arg\min_{Y\in S}\|X-Y\|_F$.
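To make the notation concrete, the following NumPy sketch (ours, not from the paper; the box set $S$ is a hypothetical example) illustrates $\operatorname{vec}(\cdot)$ and the projection $P_S(\cdot)$:

```python
import numpy as np

X = np.array([[1.0, -2.0],
              [3.0,  4.0],
              [-5.0, 6.0]])            # X in R^{3x2}

# vec(X): stack the columns of X, giving a vector in R^{nm}
x = X.flatten(order="F")               # column-major order matches vec(X)

# The Frobenius norm of X equals the Euclidean norm of vec(X)
assert np.isclose(np.linalg.norm(X, "fro"), np.linalg.norm(x))

# P_S(X) = argmin_{Y in S} ||X - Y||_F for the (hypothetical) box
# S = [-2, 2]^{3x2}; for a box, the projection is elementwise clipping.
print(np.clip(X, -2.0, 2.0))
```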

    2.2 Matrix-valued function

In problem (1), the local objective function $f_i$ is a mapping from $\mathbb{R}^{n\times m}$ to $\mathbb{R}$. In addition, the functions $g$ and $h$ in the constraints are mappings from $\mathbb{R}^{n\times m}$ to $\mathbb{R}$. In this study, we call a function $f:\mathbb{R}^{n\times m}\to\mathbb{R}$ a matrix-valued function. An optimization problem is said to be a matrix-valued optimization problem if its objective function is a matrix-valued function with $n\neq 1$ and $m\neq 1$. Different from normal vector-valued optimization, the decision variable of a matrix-valued optimization problem is a matrix.

To address matrix-valued optimization problem (1), we need to analyze the properties of the matrix-valued function $f$. Before doing so, we define an auxiliary function $\alpha(x)=f(X)$, where $x=\operatorname{vec}(X)$. $\alpha(x)$ is a useful tool for proving the properties of $f(X)$. Now, we propose several definitions and lemmas for $f(X)$ using $\alpha(x)$.

Definition 1 ($L$-Lipschitz continuity) $f:\mathbb{R}^{n\times m}\to\mathbb{R}$ is said to be $L$-Lipschitz continuous if there exists $L>0$ such that $|f(X)-f(Y)|\le L\|X-Y\|_F$, $\forall X,Y\in\mathbb{R}^{n\times m}$, where $L$ is a Lipschitz constant.

Definition 2 ($l$-smoothness) $f:\mathbb{R}^{n\times m}\to\mathbb{R}$ is said to be $l$-smooth if there exists $l>0$ such that $\|\nabla f(X)-\nabla f(Y)\|_F\le l\|X-Y\|_F$, $\forall X,Y\in\mathbb{R}^{n\times m}$.

Definition 3 ($\mu$-strong convexity) $f:\mathbb{R}^{n\times m}\to\mathbb{R}$ is said to be $\mu$-strongly convex if there exists $\mu>0$ such that $f(Y)\ge f(X)+\operatorname{tr}\big((\nabla f(X))^{\mathrm T}(Y-X)\big)+\frac{\mu}{2}\|Y-X\|_F^2$, $\forall X,Y\in\mathbb{R}^{n\times m}$.

Note that Definition 3 reduces to the ordinary convexity of $f$ when $\mu=0$.
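As a concrete illustration (our example, not from the paper), $f(X)=\frac{1}{2}\|X\|_F^2$ satisfies Definitions 2 and 3 with $l=\mu=1$:

$$\nabla f(X)=X\;\Rightarrow\;\|\nabla f(X)-\nabla f(Y)\|_F=\|X-Y\|_F,$$
$$f(Y)=f(X)+\operatorname{tr}\big(X^{\mathrm T}(Y-X)\big)+\tfrac{1}{2}\|Y-X\|_F^2,$$

where the second identity follows by expanding $\|Y\|_F^2=\|X+(Y-X)\|_F^2$, so Definition 3 holds with equality.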

Definition 4 For any convex function $f:\mathbb{R}^{n\times m}\to\mathbb{R}$, the subdifferential of $f$ with respect to $X$ is defined by

$$\partial f(X)=\big\{G\in\mathbb{R}^{n\times m}\;\big|\;f(Y)\ge f(X)+\operatorname{tr}\big(G^{\mathrm T}(Y-X)\big),\ \forall Y\in\mathbb{R}^{n\times m}\big\}.$$

In addition, $G\in\partial f(X)$ is called a subgradient of $f$ at $X$.

Furthermore, based on the properties of the Frobenius norm and Definitions 1-3, two lemmas are derived:

Lemma 1 Assume $f:\mathbb{R}^{n\times m}\to\mathbb{R}$ and $\alpha:\mathbb{R}^{nm}\to\mathbb{R}$ with $\alpha(x)=f(X)$ and $x=\operatorname{vec}(X)$. $\forall X,Y\in\mathbb{R}^{n\times m}$, we have the following statements:

(4) $f(X)$ is $l$-Lipschitz continuous if and only if $\alpha(x)$ is $l$-Lipschitz continuous;

(5) $f(X)$ is $l$-smooth if and only if $\alpha(x)$ is $l$-smooth;

(6) $f(X)$ is $\mu$-strongly convex if and only if $\alpha(x)$ is $\mu$-strongly convex.

Proof For statement (1), we can obtain

For statement (3), it is a well-known norm inequality. Statements (4)-(6) can be easily proved using statements (1) and (2).

Lemma 2 If $f(X)$ is $\mu$-strongly convex and bounded by $M'$, and $\nabla f(X)$ is bounded by $M''$, then $f(X)$ is $2M'\mu/M''$-Lipschitz continuous.

Proof Based on statement (6) in Lemma 1, $\alpha(x)$ is $\mu$-strongly convex if $f(X)$ is $\mu$-strongly convex. Then, according to Lemma 4.2 in Zhou et al. (2017), $\alpha(x)$ is $2M'\mu/M''$-Lipschitz continuous. Based on statement (4) in Lemma 1, $f(X)$ is $2M'\mu/M''$-Lipschitz continuous if $\alpha(x)$ is $2M'\mu/M''$-Lipschitz continuous.

Definitions 1-3 provide several properties of the function $f(X)$ that are commonly considered in vector-valued optimization theory; here we generalize them to the matrix-valued domain. Based on Lemma 1 and $\alpha(x)$, many existing results in the vector-valued domain $\mathbb{R}^{nm}$ can be generalized to the matrix-valued domain $\mathbb{R}^{n\times m}$. For example, Lemma 2 can be proved via statements (4) and (6) in Lemma 1. According to Lemma 2, strong convexity leads to Lipschitz continuity under the conditions in Lemma 2.
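The following NumPy sketch (our illustration, with a hypothetical least-squares objective) shows how $\alpha(x)$ inherits the values and gradient norms of $f(X)$ under $\operatorname{vec}(\cdot)$, which is the mechanism behind Lemma 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
A_mat = rng.standard_normal((4, n))
B = rng.standard_normal((4, m))

def f(X):
    # hypothetical matrix-valued function f: R^{n x m} -> R
    return np.linalg.norm(A_mat @ X - B, "fro") ** 2

def alpha(x):
    # auxiliary function alpha(x) = f(X) with x = vec(X)
    return f(x.reshape((n, m), order="F"))

X = rng.standard_normal((n, m))
x = X.flatten(order="F")
assert np.isclose(f(X), alpha(x))       # alpha(vec(X)) = f(X)

# nabla alpha(x) = vec(nabla f(X)), so the gradient norms coincide,
# which is why Lipschitz/smoothness/convexity constants transfer.
grad_f = 2 * A_mat.T @ (A_mat @ X - B)
grad_alpha = grad_f.flatten(order="F")
assert np.isclose(np.linalg.norm(grad_f, "fro"), np.linalg.norm(grad_alpha))
```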

For a non-smooth function, the subgradient defined in Definition 4 is adopted when the gradient does not exist. In distributed optimization, there are many works on non-smooth analysis with subgradients (Ruszczyński, 2006; Zeng et al., 2017). In the optimization problem $\min\sum_{i=1}^{N}f_i$, if $f_i(X_i)$ satisfies the Lipschitz condition in Definition 1 for every $i\in I_N$, then the sum sign and the subdifferential sign can be exchanged, i.e., $\partial\sum_{i=1}^{N}f_i=\sum_{i=1}^{N}\partial f_i$. This statement can be proved by converting the matrix problem into a vector problem via the function $\alpha(x)$, and its proof follows from Ruszczyński (2006) together with statement (3) in Lemma 1. In addition, for Definitions 2 and 3, if the function is non-smooth, we can substitute a subgradient for the gradient. In this case, $\mu$-strong convexity is still called $\mu$-strong convexity, but $l$-smoothness is called $l$-pseudo smoothness, defined by replacing the gradient in Definition 2 with a subgradient.

Lemma 3 provides a sufficient condition for verifying the $l$-pseudo smoothness of a non-smooth function.

    3 Problem formulation

    Consider an optimization problem with inequality and equality constraints as follows:

Problem (2) is a constrained matrix-valued distributed optimization problem, and the handling of its constraints is the key difficulty. For handling bound constraints, a penalty method was proposed in Zhou et al. (2019), in which the selection of penalty gains relied on the Lipschitz constants of the objective functions, and the continuous-time optimization method was also dependent on the penalty gains. Compared with bound constraints, the equality and inequality constraints in problem (2) are more complex, and the selection of penalty gains relies not only on the objective functions $f_i$ but also on $g$ and $h$, which makes the problem harder than the optimization in Zhou et al. (2019). Thus, we will develop an exact penalty method for handling the equality and inequality constraints in Section 4.2. The exact penalty method can transform an optimization problem with equality and inequality constraints into one without equality or inequality constraints.

Using the penalty method, an optimization problem without the inequality constraint $g(X_i)\le 0$ or the equality constraint $h(X_i)=0$ is proposed as follows:

The core of the penalty method is to derive feasible penalty gains and penalty functions that guarantee the equivalence between the original constrained problem and the unconstrained one. In this study, $A_i$ and $B_i$ are penalty functions, and $P_{g_i}$ and $P_{h_i}$ ($i=1,2,\ldots,N$) are the penalty gains that need to be chosen to guarantee the equivalence between problems (2) and (3). Thus, the penalty gains $P_{g_i}$ and $P_{h_i}$ are important for the problem transformation, and their selection principle is given in Section 4.1.

In addition, we propose a new type of optimization, matrix-valued distributed stochastic optimization, in which stochastic variables are introduced into problem (2); its form is shown as follows:

Note that problems (4) and (1) are identical if we set $X_i=X_j=X$, $\forall i,j\in I_N$. $F_i(X_i,\xi_i)$ can be regarded as a stochastic function since $\xi_i$ follows the distribution $D_i$. We call problem (4) a matrix-valued distributed stochastic optimization problem with inequality and equality constraints. In this study, we focus on solving problem (4) by developing a distributed optimization algorithm.

    4 Main results

In this section, we address the exact penalty method and the development of a distributed optimization algorithm for solving the matrix-valued distributed stochastic optimization problem (4). In Section 4.1, a selection principle of penalty gains and penalty functions is proposed, and we analyze how to obtain feasible penalty gains. In Section 4.2, an exact penalty method is proposed to select the penalty gains and handle the inequality and equality constraints. In Section 4.3, an algorithm based on a gossip model is developed for solving problem (4).

    4.1 Selection principle of penalty gains

Beginning with a centralized matrix-valued optimization problem $\min f(X)$ s.t. $X\in S\subset\mathbb{R}^{n\times m}$ ($S$ denotes a feasible set), we propose a selection principle for seeking the penalty gains and penalty functions:

Selection principle 1 Penalty gain $c$ ($c>0$) and penalty function $\tau_S(X):\mathbb{R}^{n\times m}\to\mathbb{R}$ satisfy the following conditions:

(1) $\forall X\in\mathbb{R}^{n\times m}$, $f(X)+c\tau_S(X)\ge f(P_S(X))$;

(2) $\forall X\in\mathbb{R}^{n\times m}$, $\tau_S(X)\ge 0$;

(3) $\forall X\in S$, $\tau_S(X)=0$.

Based on Selection principle 1, the following equivalence theorem is derived:

Theorem 1 If there exist $c$ and $\tau_S(X)$ satisfying Selection principle 1 and the considered problem has at least one solution, then $\arg\min_{X\in S}f(X)=\arg\min_{X\in\mathbb{R}^{n\times m}}\big(f(X)+c\tau_S(X)\big)$.

According to Theorem 1, if the penalty function $\tau_S(X)$ and the penalty gain $c$ satisfy Selection principle 1, then the equivalence between the original constrained problem $\min_{X\in S}f(X)$ and the problem $\min_{X\in\mathbb{R}^{n\times m}}\big(f(X)+c\tau_S(X)\big)$ is guaranteed. Similarly, we can select the penalty functions and penalty gains by Selection principle 1 to guarantee the equivalence between problems (2) and (3). Actually, when we set $S:=\Omega_{g_i}=\{X_i\mid g(X_i)\le 0\}$, the function

satisfies conditions (2) and (3) in Selection principle 1. When we set $S:=\Omega_{h_i}=\{X_i\mid h(X_i)=0\}$, the function

satisfies conditions (2) and (3) in Selection principle 1. Hence, if the penalty gains $P_{g_i}$ and $P_{h_i}$ satisfy condition (1) in Selection principle 1, then problems (2) and (3) are equivalent. Therefore, in the next subsection, we propose an exact penalty method to select feasible penalty gains satisfying condition (1) in Selection principle 1.
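The displayed formulas for these two penalty functions did not survive extraction; a standard choice consistent with conditions (2) and (3) — our assumption, not necessarily the authors' exact form — is $\max\{0,g(X_i)\}$ for $\Omega_{g_i}$ and $|h(X_i)|$ for $\Omega_{h_i}$. A minimal Python sketch of the resulting penalized objective of problem (3), with hypothetical constraint functions:

```python
import numpy as np

def A_pen(X, g):
    # inequality penalty: zero when g(X) <= 0, nonnegative otherwise
    return max(0.0, g(X))

def B_pen(X, h):
    # equality penalty: zero if and only if h(X) = 0
    return abs(h(X))

def penalized_objective(X, f_list, g, h, Pg, Ph):
    """sum_i f_i(X) + Pg * A_i(X) + Ph * B_i(X), evaluated at a common
    point X for simplicity (agents hold separate copies in problem (3))."""
    return sum(f(X) + Pg * A_pen(X, g) + Ph * B_pen(X, h) for f in f_list)

# toy usage with hypothetical data
g = lambda X: float(np.sum(X)) - 1.0            # g(X) <= 0
h = lambda X: float(X[0, 0])                    # h(X) = 0
f_list = [lambda X: np.linalg.norm(X, "fro") ** 2]
print(penalized_objective(np.ones((2, 2)), f_list, g, h, Pg=4.0, Ph=10.0))
```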

    4.2 A penalty method

In this subsection, we derive the selection of penalty gains for problem (2). Let

Then, Assumption 1 is provided as follows:

Assumption 1 (1) $f_i$ is convex and $L_f$-Lipschitz continuous for $i\in I_N$; (2) Slater's condition holds.

Next, we give the equivalence theorem between problems (2) and (3), on the basis of penalty gains $P_{g_i}$ and $P_{h_i}$ satisfying the conditions involving $L_{g_i}(X_i)$ and $L_{h_i}(X_i)$:

Theorem 2 Under Assumption 1, if $L_{g_i}(X_i)P_{g_i}+L_{h_i}(X_i)P_{h_i}\ge L_f$ for $X_i\notin\Omega_{g_i}\cap\Omega_{h_i}$, then problem (2) is equivalent to problem (3).

    4.3 Matrix-valued distributed stochastic optimization algorithm based on a gossip model

In this subsection, we develop an algorithm based on a gossip model (its vector-valued form can be found in Xiao and Boyd (2004)). First, we introduce the gossip model as follows:

where $i\in I_N$, $\kappa>0$ limits the rate of the gossip updating, each state $X_i$ is an $n\times m$ matrix, and $A$ is the adjacency matrix of graph $G=(V,E,A)$. In the gossip updating, each agent shares its own information with its neighbors and updates by local averaging, which makes all agents reach an agreement on a target solution.
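The display for the gossip model was lost in extraction; a common form consistent with the description above and with $p=\kappa(1-\delta_2(A))$ in Lemma 6 — our assumption, not the paper's verbatim equation — is $X_i\leftarrow X_i+\kappa\sum_{j}A(i,j)(X_j-X_i)$. A Python sketch:

```python
import numpy as np

def gossip_step(X_states, A, kappa):
    """One assumed gossip update X_i <- X_i + kappa * sum_j A(i,j) (X_j - X_i).
    X_states: list of n x m matrices; A: symmetric doubly stochastic weight
    matrix; kappa > 0 limits the update rate. With these weights the update
    mixes every state toward the network average."""
    N = len(X_states)
    return [X_states[i] + kappa * sum(A[i, j] * (X_states[j] - X_states[i])
                                      for j in range(N))
            for i in range(N)]
```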

Before proceeding, we propose Assumption 2:

Assumption 2 (1) $f_i$ is bounded by $m$, $\mu$-strongly convex, and $l$-smooth for $i\in I_N$, and $\nabla f_i$ is bounded by $M$. (2) $A_i$ is convex and $l_g$-pseudo smooth; $B_i$ is convex and $l_h$-pseudo smooth. $\nabla A_i$ is bounded by $M_g$, and $\nabla B_i$ is bounded by $M_h$. (3) For $i\in I_N$, the random sampling condition holds with $\sigma_i$ and $E$ being known upper bounds. (4) $A$ is a symmetric doubly stochastic matrix. (5) Slater's condition holds.

In Assumption 2, the convexity of the objective functions and constraint functions leads to a globally optimal solution of the optimization problem (Boyd and Vandenberghe, 2004) when an optimal solution exists. The strong convexity and the $l$-smoothness contribute to the proof of the convergence of the proposed algorithm (Rakhlin et al., 2012), and according to Lemma 2, they also yield Lipschitz continuity. Item (3) gives the characteristics of the random sampling. Item (4) indicates that the graph is connected and weight-balanced, which contributes to the consensus of the states. Slater's condition is a common constraint qualification that guarantees the solvability of an optimization problem (Liu et al., 2017; Xia ZC et al., 2022).

To proceed, based on the gossip model, Algorithm 1 is developed for solving problem (4).

$X_i$ is regarded as the state of the $i$th agent. Eqs. (5) and (6) are regarded as the information update processes of the $i$th agent, and they originate from the gossip model. Each agent shares its own information with its neighbors through $A$ and updates by local averaging, which makes all agents reach an agreement on a target solution. Therefore, Eq. (5) with Eq. (6) forms a multi-agent system, and Algorithm 1 is a distributed optimization method.
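Algorithm 1 itself was lost in extraction. Based on the surrounding description — gossip updates (Eqs. (5) and (6)) interleaved with stochastic gradient steps on the penalized objective of problem (3), using the step size $\zeta(k)$ from Theorem 3 — a plausible reconstruction is sketched below; the update order and all oracle names are our assumptions.

```python
import numpy as np

def algorithm1_sketch(X0, A, kappa, grad_F, grad_A, grad_B, Pg, Ph,
                      chi1, a, K, rng):
    """Hedged sketch of Algorithm 1. Each agent i alternates a gossip
    averaging step (cf. Eqs. (5)-(6)) with a stochastic (sub)gradient step
    on f_i + Pg*A_i + Ph*B_i, using zeta(k) = 4 / (chi1*(a + k)) from
    Theorem 3. grad_F(i, X, xi), grad_A(X), grad_B(X) are user oracles."""
    X = [Xi.copy() for Xi in X0]
    N = len(X)
    for k in range(K):
        zeta = 4.0 / (chi1 * (a + k))                  # step size, Theorem 3
        # gossip averaging through the weight matrix A
        X = [X[i] + kappa * sum(A[i, j] * (X[j] - X[i]) for j in range(N))
             for i in range(N)]
        # local stochastic (sub)gradient step on the penalized objective
        for i in range(N):
            xi = rng.standard_normal()                 # sample xi_i ~ D_i
            G = grad_F(i, X[i], xi) + Pg * grad_A(X[i]) + Ph * grad_B(X[i])
            X[i] = X[i] - zeta * G
    return X
```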

In Algorithm 1, the time-varying step size $\zeta(k)$ should be chosen to achieve convergence to the optimal value. Thus, we give a convergence theorem to prove the convergence of Algorithm 1, in which $\zeta(k)$ is designed. Before proposing the convergence theorem, we introduce four necessary lemmas.

Based on Eq. (8), the proof of Lemma 4 is completed.

According to Lemma 2, we have that $f_i$ is $2m\mu/M$-Lipschitz continuous.

Lemma 5 Assume that $X^\star$ is an optimal solution to problem (4). Under Assumption 2, the sequence generated by Algorithm 1 satisfies

Based on item (4) in Assumption 2, we can derive

Then, according to statement (3) in Lemma 1 and items (1) and (2) in Assumption 2, we have

According to items (1) and (2) in Assumption 2, we can also obtain

Combining expressions (10)-(12) completes the proof.

Lemma 6 (Koloskova et al., 2019) Under Assumption 2, $\{X(k)\}_{k\ge 0}$ in Algorithm 1 with $\zeta(k)=a_1/(k+a_2)$, $a_1>0$, and $a_2\ge 5/p$ satisfies

From Theorem 1 in Koloskova et al. (2019), we obtain $p=\kappa(1-\delta_2(A))$ ($\kappa$ is from the gossip model). Based on item (4) in Assumption 2, $A$ is a symmetric doubly stochastic matrix; thus, all its eigenvalues are real and its largest eigenvalue is 1.

Lemma 7 (Koloskova et al., 2019) Let $\{a(k)\}_{k\ge 0}$ with $a(k)\ge 0$ and $\{e(k)\}_{k\ge 0}$ with $e(k)\ge 0$ be sequences satisfying

Combining Lemmas 4-7, we derive the following theorem, which implies that Algorithm 1 converges to an optimal solution to problem (4) with a proper $\zeta(k)$:

Theorem 3 Under Assumption 2, for $p>0$, Algorithm 1 with $\zeta(k)=4/(\chi_1(a+k))$, where $a\ge 5/p$, converges at the rate

Proof Substituting inequality (13) into the bound in inequality (9), we can obtain

Theorem 3 provides the proper time-varying step size $\zeta(k)$ for Algorithm 1. With this $\zeta(k)$, Algorithm 1 can solve the matrix-valued distributed stochastic optimization problem (4). In the next section, we will provide two examples to show the validity of the proposed penalty method and Algorithm 1.

Remark 1 (Convergence rate) The convergence rate of Algorithm 1 is $T_1(K)$ (see Theorem 3), where $\chi_1=\mu$ and $\chi_2=2M^2+4P_gM_g+4P_hM_h$. The methods developed in Zhou et al. (2019), Huang et al. (2021), Xia ZC et al. (2021), and Zhang et al. (2022) are continuous-time optimization methods; thus, they are conservative and their convergence rates are low. The convergence rate of the algorithm in Koloskova et al. (2019) is $T_2(K)$. Because the problems considered in Koloskova et al. (2019) are not subject to any constraint, $T_1(K)$ is larger than $T_2(K)$. The extra part of $T_1(K)$ beyond $T_2(K)$ comes from the handling of constraints; note that $\chi_2$, which mainly concerns the constraints, naturally reflects the balance between unconstrained and constrained problems.

Remark 2 (Complexity) The number of floating-point values is the same in $X$ and $\operatorname{vec}(X)$. Thus, the spatial complexities of a matrix-valued algorithm and a vector-valued algorithm are the same when there are no other constraints. The complexity of Algorithm 1 is $O(4KN(nm+1))$, while the complexity of the algorithm in Koloskova et al. (2019) is $O(3KNnm)$. The difference, again, is generated by the handling of constraints.

Compared with conventional distributed optimization methods (see the references in Section 1.1), we consider matrix-valued optimization and stochastic optimization. In addition, an exact penalty method for dealing with (in)equality constraints is applied to distributed optimization. Table 1 presents the comparison results.

    Table 1 Comparison of existing works with this study

    5 Simulations

In this section, two numerical examples are presented to illustrate the characteristics of the penalty methods and Algorithm 1. The algorithm and the data are implemented and simulated in MATLAB R2017b and run on an Intel Core i5-8257U CPU @ 1.40 GHz with Intel Iris Plus Graphics 645 (1536 MB), 8 GB 2133 MHz LPDDR3, and macOS 10.15.7.

Example 1 Consider the following matrix-valued distributed optimization problem with $N=3$:

Note that the objective functions are all convex and strongly convex. The equality constraint function and the inequality constraint function are linear. Thus, the penalty functions $A(X)$ and $B(X)$ are convex and pseudo smooth. We can obtain the optimal solution without any constraint as follows:

and the optimal value 107.88. Then, we perform Algorithm 1 without random samples, where the parameters are taken as $\kappa=0.1$, $P_g=4$, $P_h=10$,

with $\delta_2(A)=0.8212$, and $a=280$ based on $p=0.0178$.
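These parameters are consistent with Lemma 6 and Theorem 3; as a quick arithmetic check (ours):

$$p=\kappa\big(1-\delta_2(A)\big)=0.1\times(1-0.8212)=0.01788\approx 0.0178,\qquad a=280\ge 5/p\approx 279.6.$$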

Based on MATLAB, we run Algorithm 1 and obtain Figs. 1a-1c, which depict the transient states of $X_k(i,j)$, $k,i,j\in\{1,2,3\}$, showing that Algorithm 1 is globally convergent. We obtain the optimal solution to problem (14):

and the optimal value is 147.21. Fig. 1d shows that the objective function value obtained by Algorithm 1 is the same as the optimal value of problem (14). Therefore, the exact penalty method is valid, and Algorithm 1 can solve problem (14) without random samples.

Then we add random samples to problem (14), with the other settings unchanged. We run Algorithm 1 in MATLAB and obtain Fig. 2, which depicts the transient objective function values of problem (14) with and without random samples. Note that the two trajectories are roughly the same; thus, Algorithm 1 can also solve problem (14) with random samples, which illustrates that Algorithm 1 can be used to solve matrix-valued distributed stochastic optimization problems.

Fig. 2 Transient values of the objective function in Example 1

Example 2 Consider a matrix-valued distributed stochastic optimization problem with more agents and higher dimensions ($N=10$ and $X\in\mathbb{R}^{9\times 9}$) as follows:

We run Algorithm 1 to solve problem (15) without and with random samples, and obtain Fig. 3, which shows the errors between the transient values of the objective function obtained by Algorithm 1 and the optimal value of the objective function in Example 2. The error converges to 0, and the two trajectories are roughly the same, which illustrates the validity of Algorithm 1.

Fig. 3 Errors between the transient values of the objective function obtained by Algorithm 1 and the optimal values of the objective function in Example 2

    6 Conclusions

In this paper, we have focused on a special class of constrained optimization, matrix-valued distributed stochastic optimization subject to inequality and equality constraints. We have adopted an exact penalty method for the handling of the constraints. Based on a gossip model, we have developed a distributed stochastic gradient descent algorithm and analyzed its convergence. Two illustrative examples have been provided to demonstrate the validity of the exact penalty method and the optimization method.

    Contributors

Zicong XIA, Yang LIU, and Wenlian LU designed the research. Zicong XIA processed the data. Zicong XIA and Yang LIU drafted the paper. Wenlian LU and Weihua GUI helped organize the paper. Yang LIU and Weihua GUI revised and finalized the paper.

    Compliance with ethics guidelines

Yang LIU is a guest editor of this special feature, and he was not involved with the peer review process of this manuscript. Zicong XIA, Yang LIU, Wenlian LU, and Weihua GUI declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are available from the corresponding author upon reasonable request.
