
    Stochastic sub-gradient algorithm for distributed optimization with random sleep scheme

Control Theory and Technology, 2015, No. 4

Peng YI, Yiguang HONG*

Key Lab of Systems and Control, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China

Received 10 October 2015; revised 27 October 2015; accepted 27 October 2015


In this paper, we consider a distributed convex optimization problem of a multi-agent system, where the global objective function is the sum of the agents' individual objective functions. To solve such an optimization problem, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme. In the random sleep scheme, each agent independently and randomly decides whether to inquire the sub-gradient information of its local objective function at each iteration. The algorithm not only generalizes distributed algorithms with variable working nodes and multi-step consensus-based algorithms, but also extends some existing randomized convex set intersection results. We investigate the algorithm convergence properties under two types of stepsizes: the randomized diminishing stepsize, which is heterogeneous and calculated by each individual agent, and the fixed stepsize, which is homogeneous. Under the randomized stepsize, we prove that the estimates of the agents reach consensus almost surely and in mean, and that the consensus point is an optimal solution with probability 1. Moreover, we analyze the algorithm error bound under the fixed homogeneous stepsize, and also show how the errors depend on the fixed stepsize and the update rates.

Keywords: Distributed optimization, sub-gradient algorithm, random sleep, multi-agent systems, randomized algorithm

    DOI 10.1007/s11768-015-5100-8

    1 Introduction

Recent years have witnessed growing research attention to distributed optimization and control, due to a broad range of applications such as distributed parameter estimation in sensor networks, resource allocation in communication systems, and optimal power flow in smart grids [1-7]. Many complicated practical problems can be formulated as distributed optimization problems with a global objective function given by the sum of the agents' individual objective functions, where the individual function of each agent cannot be known or manipulated by other agents. Various significant distributed optimization algorithms have been proposed and analyzed, including primal-domain (sub)gradient algorithms [1,2,6,8-11], dual-domain algorithms [12], and primal-dual-domain algorithms [5,7,13-15].

Distributed optimization problems arise in network decision problems where information is distributed among the agents over the network. Hence, each agent can only obtain the (sub)gradient of its local objective function at a given testing point by manipulating its own private data. However, the calculation of the (sub)gradient may involve computation with big data [16], solving certain subproblems [17], or calculating multiple function values [18], which imposes a heavy computational burden on each agent. Moreover, in applications like [19] and [20], each agent needs to perform measurements or observations to obtain gradient information, which results in an additional energy burden. Therefore, it might be undesirable to inquire local gradient information at each iteration. In fact, for the convex set intersection problem, which is a special case of distributed optimization with the local objective function taken as the distance to a local convex set, [21] and [22] have introduced a random sleep scheme into projection-based consensus algorithms and shown its convergence and efficiency.

On the other hand, it might be inefficient to inquire the local gradient information at each iteration. Due to the distributed nature of the data, the agents' estimates of the optimal solution cannot stay consensual all the time. To eliminate the disagreements between the agents' estimates, which indeed constitute the main obstacle in seeking the optimal solution, consensus-type algorithms are usually incorporated into distributed optimization algorithms, as in [1-15]. Recently, [23] proposed a distributed algorithm that performs multiple consensus steps before each gradient update, which remarkably improves the convergence speed, but each agent is required to perform the same number of consensus steps before updating with gradients.

In this paper, we propose a distributed stochastic sub-gradient algorithm with a random sleep scheme to solve a nonsmooth constrained convex optimization problem. In this algorithm, each agent first performs a consensus step by taking a convex combination of its own estimate and its neighbors' estimates over a uniformly jointly strongly connected communication network. Then each agent independently performs a Bernoulli experiment to decide whether to carry out an update step with its own objective function. An agent is called waking if it performs the update, and sleeping otherwise. The waking agents calculate noisy sub-gradients of their local objective functions at the given testing points, and then update their estimates by moving along the noisy sub-gradient directions and projecting onto their feasible sets. Meanwhile, the sleeping agents only update their estimates with the consensus step information.

Different from existing algorithms [1-15], the agents in our design need not calculate sub-gradient information and perform projection at each iteration. Thus, the random sleep scheme works similarly to the ones in [21] and [22], and is introduced to reduce the computation and energy burden for problems with costly sub-gradient and projection calculations. This scheme is also similar to the variable working nodes design in [24], whose numerical experiments show a remarkable reduction in communication and computation burden. Besides that, the random sleep scheme can be seen as a randomized generalization of the multi-step consensus-based algorithm in [23], by allowing heterogeneous and randomized consensus steps. Furthermore, the random sleep scheme can model the random failure or scheduling of agents in uncertain environments [25,26].

The algorithm convergence analysis is given for two types of stepsizes. The randomized diminishing stepsize is taken as the inverse of the number of gradient updates, which can be calculated by each agent in a distributed manner. With the randomized diminishing stepsize, we prove that all the agents reach consensus and that the consensual point belongs to the optimal solution set with probability one. Then we investigate the algorithm error bounds when adopting a fixed stepsize α. In this case, the agents cannot find the precise optimal solution, as proved in [11]. Here, the distance between the agents' estimates and the optimal solution is analyzed when the local objective functions are smooth strongly convex functions, and the optimal function value gap is given when the local objective functions are nonsmooth convex functions, both depending on the stepsize α and the update rates of the agents.

The contributions of this paper are summarized as follows:

• The random sleep scheme is proposed for distributed optimization for the first time. It generalizes the works of [21] and [22], which are special cases of the nonsmooth convex objective functions that we handle here, and moreover, it provides a randomized version of the multi-step consensus-based algorithm in [23].

• The algorithm convergence properties are fully investigated with both the randomized diminishing stepsize and the fixed stepsize, which provides further understanding of and insight into the random sleep scheme in addition to [24].

The paper is organized as follows. The distributed optimization problem is first formulated, and then the distributed algorithm with random sleep scheme is proposed in Section 2. In Section 3, the algorithm convergence analysis is given under the randomized diminishing stepsize. We first show that all the agents reach consensus in mean and almost surely, and then prove that all the agents converge to the same optimal solution with probability 1. The error bound is analyzed for the fixed stepsize case in Section 4. Numerical examples are given to illustrate the algorithm performance in Section 5. Finally, concluding remarks are presented in Section 6.

    2 Problem formulation and distributed optimization algorithm

In this section, we first give the formulation of a distributed nonsmooth convex optimization problem with a set constraint, and then propose a distributed optimization algorithm with random sleep scheme.

    2.1 Problem formulation

Consider a group of agents with agent index set N = {1,...,n} that cooperatively optimize the sum of their local objective functions f_i(x), that is,

min_{x∈X} f(x) = Σ_{i=1}^n f_i(x).   (1)

Here X ⊂ R^m is a nonempty convex set, which is also assumed to be closed and bounded. Each f_i(x) is a convex function over an open set containing the set X. Notice that f_i(x) is privately known by agent i, and cannot be shared with or known by other agents.

We first give an assumption about problem (1).

Assumption 1  The optimal solution set

X* = {x ∈ X : f(x) = min_{y∈X} f(y)}

is nonempty. Moreover, the sub-gradient set of each local objective function f_i(x) is uniformly bounded over the compact set X, i.e., there exists a constant C_g > 0 such that

‖g‖ ≤ C_g, ∀g ∈ ∂f_i(x), ∀x ∈ X, ∀i ∈ N.

We call any x* ∈ X* an optimal solution and f(x*) the optimal value. It follows from [27] that, for any vector g ∈ ∂f(x),

g^T(y − x) ≤ f(y) − f(x), ∀y ∈ X.

Without global information, the agents have to solve problem (1) by local computation and information sharing. The information sharing networks among the agents are usually represented by graphs, which may be time-varying due to link failures, packet dropouts, or energy saving schedules. Here the time-varying information sharing network is described by the graph sequence {G(k) = (N, E(k)), k = 1,2,...}, with N representing the agent set and E(k) containing all the information interconnections at time k. If agent i can get information from agent j at time k, then (j,i) ∈ E(k) and agent j is said to be in agent i's neighbor set N_i(k) = {j | (j,i) ∈ E(k)}. A path of graph G is a sequence of distinct agents in N such that any two consecutive agents in the sequence correspond to an edge of graph G. Agent j is said to be connected to agent i if there is a path from j to i. The graph G is said to be strongly connected if any two agents are connected. The following assumption about the sequence {G(k) = (N, E(k)), k = 1,2,...} guarantees that any agent's information can reach any other agent quite often in the long run.

Assumption 2  The graph sequence {G(k)} is uniformly jointly strongly connected, i.e., there exists an integer B ≥ 1 such that the joint graph (N, E(k) ∪ E(k+1) ∪ ... ∪ E(k+B−1)) is strongly connected for any k ≥ 1.
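To make the joint connectivity condition concrete, here is a minimal sketch (our own illustration; networkx, the window length B, and the example edge lists are assumptions, not from the paper) that checks whether every length-B window of a directed edge-set sequence yields a strongly connected joint graph:

import networkx as nx

def jointly_strongly_connected(edge_seq, n, B):
    # Check every length-B window of directed edge sets: their union
    # must form a strongly connected joint graph on nodes 0..n-1.
    for start in range(len(edge_seq) - B + 1):
        G = nx.DiGraph()
        G.add_nodes_from(range(n))
        for k in range(start, start + B):
            G.add_edges_from(edge_seq[k])
        if not nx.is_strongly_connected(G):
            return False
    return True

# Two sparse graphs, neither strongly connected alone, jointly a ring.
edges_a = [(0, 1), (1, 2)]
edges_b = [(2, 3), (3, 4), (4, 0)]
print(jointly_strongly_connected([edges_a, edges_b] * 3, n=5, B=2))  # True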

Define matrices A(k) = [a_{ij}(k)] according to graph G(k), with a_{ij}(k) > 0 if j ∈ N_i(k) and a_{ij}(k) = 0 otherwise. All matrices A(k) are doubly stochastic, namely, 1_n^T A(k) = 1_n^T and A(k)1_n = 1_n. We also assume that there exists a constant τ ∈ (0,1) such that for any time k, a_{ij}(k) ≥ τ when j ∈ N_i(k), and a_{ii}(k) ≥ τ, ∀i. Define Φ(k,s) = A(k)A(k−1)···A(s+1) for all k, s with k > s ≥ 0 and Φ(k,k) = I_n. Then the following lemma holds according to [1].

Lemma 1  With Assumption 2 and A(k), Φ(k,s) defined as before, we obtain

|[Φ(k,s)]_{ij} − 1/n| ≤ Γλ^{k−s}, ∀i, j ∈ N,

where Γ > 0 and λ ∈ (0,1) are constants depending on n, τ, and B.
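Lemma 1 says the backward products Φ(k,s) approach the averaging matrix (1/n)1_n1_n^T geometrically. A minimal numerical sketch follows; Metropolis weights are one common construction of doubly stochastic matrices on undirected graphs, and the ring graph here is illustrative, not from the paper:

import numpy as np

def metropolis_weights(adj):
    # Doubly stochastic mixing matrix from a symmetric 0/1 adjacency matrix.
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                A[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
        A[i, i] = 1.0 - A[i].sum()
    return A

adj = np.array([[0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]])
A = metropolis_weights(adj)
print(np.allclose(A.sum(0), 1), np.allclose(A.sum(1), 1))  # doubly stochastic
for power in (5, 20, 50):
    Phi = np.linalg.matrix_power(A, power)
    print(power, np.abs(Phi - 1.0 / 5).max())  # geometric decay toward 1/n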

    2.2 Distributed optimization algorithm with random sleep scheme

Agent i has an estimate of x*, denoted x_i(k), with randomly selected initial value x_i(0) ∈ X. Let χ_{i,k}, k ≥ 0, be independent identically distributed (i.i.d.) Bernoulli random variables with P(χ_{i,k} = 1) = γ_i and P(χ_{i,k} = 0) = 1 − γ_i, 0 < γ_i < 1. The random variables χ_{i,k}, k ≥ 0, are also assumed to be independent across the agents. The distributed optimization algorithm with random sleep scheme is given as follows:

Consensus: v_i(k) = Σ_{j=1}^n a_{ij}(k) x_j(k),

Update: x_i(k+1) = P_X[v_i(k) − α_{i,k}(g_i(k) + ε_{i,k})] if χ_{i,k} = 1, and x_i(k+1) = v_i(k) if χ_{i,k} = 0,   (2)

where g_i(k) + ε_{i,k} is a noisy sub-gradient of f_i(x) at the point v_i(k) with g_i(k) ∈ ∂f_i(v_i(k)), and P_X[·] denotes the Euclidean projection onto X. In other words, algorithm (2) solves problem (1) with the following two steps at each iteration k.

• Consensus step: Agent i first gets its neighbors' estimates {x_j(k), j ∈ N_i(k)} through the network G(k). Then agent i takes a convex combination of x_i(k) and {x_j(k), j ∈ N_i(k)} with weights a_{ij}(k), j = 1,...,n, to obtain v_i(k).

• Update step: Agent i performs a Bernoulli experiment with the random variable χ_{i,k}. When χ_{i,k} = 1, agent i inquires its local objective function f_i(x) and gets the noisy sub-gradient direction g_i(k) + ε_{i,k}, in which the noise is not separable from the true sub-gradient. Then agent i updates its estimate by moving along g_i(k) + ε_{i,k} with stepsize α_{i,k}, and projects the result onto the constraint set X. With probability 1 − γ_i, χ_{i,k} = 0 and agent i only updates its estimate to v_i(k). That is, γ_i determines the update rate of agent i. (A minimal simulation sketch of these two steps follows.)
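The following sketch simulates algorithm (2) on a toy instance. The scalar objectives f_i(x) = |x − c_i|, the box constraint X = [−5, 5], the uniform mixing matrix, and the noise level are illustrative assumptions of ours, not the paper's example:

import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 2000
c = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # f_i(x) = |x - c_i|; optimum x* = 0
gamma = np.full(n, 0.5)                     # update rates gamma_i
A = np.full((n, n), 1.0 / n)                # a doubly stochastic mixing matrix
x = rng.uniform(-5.0, 5.0, n)               # initial estimates in X = [-5, 5]
updates = np.zeros(n)                       # per-agent gradient update counts

for k in range(T):
    v = A @ x                               # consensus step
    wake = rng.random(n) < gamma            # Bernoulli experiments chi_{i,k}
    for i in range(n):
        if wake[i]:                         # waking agent: noisy sub-gradient step
            updates[i] += 1
            g = np.sign(v[i] - c[i]) + rng.normal(0.0, 0.1)
            alpha = 1.0 / updates[i]        # randomized diminishing stepsize
            x[i] = np.clip(v[i] - alpha * g, -5.0, 5.0)  # projection onto X
        else:                               # sleeping agent keeps consensus value
            x[i] = v[i]

print(x)  # all estimates close to the minimizer of sum_i |x - c_i|, i.e., 0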

Agent i cannot get the exact sub-gradient direction but only a noisy one, g_i(k) + ε_{i,k}, where ε_{i,k} is the sub-gradient observation noise at iteration k. Noisy sub-gradients may come from the gradient evaluation process or from stochastic optimization problems (see [8-10] and [18]).

We introduce a random sleep scheme into the distributed optimization for the following three reasons:

i) The random sleep scheme was introduced to reduce the projection calculation in the convex set intersection problem in [22]. Here it is introduced to reduce the calculation of sub-gradients and projections, especially when the objective functions involve big data sets or subproblems.

ii) [23] proposed to perform multiple consensus steps before updating with gradients, and achieved fast convergence. In fact, the disagreement among the agents' estimates results in gradient evaluation errors, and therefore, it is preferable to reduce the disagreement by multiple consensus steps before gradient evaluations. Hence, the random sleep scheme can be regarded as a randomized version of the multi-step consensus-based algorithm, allowing heterogeneous update rates.

iii) The random sleep scheme can model the random failure of multi-agent networks, especially sensor networks (referring to [25] and [26]).

Denote by J_k the set of agents performing the update step at time k: J_k = {i : χ_{i,k} = 1}. Let F_k be the σ-algebra generated by the entire history of the algorithm up to time k, i.e., for k ≥ 1,

F_k = σ{x_i(0), i ∈ N; χ_{i,s}, ε_{i,s}, i ∈ N, 1 ≤ s ≤ k}.

The following assumption, imposed on the stochastic noise ε_{i,k}, has been widely adopted [8,9].

Assumption 3  There exists a deterministic constant ν > 0 such that

E[ε_{i,k} | F_{k−1}, J_k] = 0,  E[‖ε_{i,k}‖² | F_{k−1}, J_k] ≤ ν²,

∀i ∈ N, k ≥ 1, with probability 1.

The following two lemmas, which can be found in [28] and [27], respectively, are very useful in the convergence analysis.

Lemma 2  Suppose

u(k+1) ≤ (1 − α(k))u(k) + β(k)

with 0 < α(k) ≤ 1, β(k) ≥ 0, k = 1,2,..., and Σ_{k=1}^∞ α(k) = ∞. Then

limsup_{k→∞} u(k) ≤ limsup_{k→∞} β(k)/α(k).

Lemma 3  Let v(k), u(k), a(k), and c(k) be nonnegative F_k-measurable random variables such that

E[v(k+1) | F_k] ≤ (1 + a(k))v(k) − u(k) + c(k)

with Σ_{k=1}^∞ a(k) < ∞ and Σ_{k=1}^∞ c(k) < ∞ almost surely. Then v(k) converges almost surely and Σ_{k=1}^∞ u(k) < ∞ almost surely.

The stepsize α_{i,k} in (2) determines how the agents update with the gradient information when they are waking, and hence plays a key role in the algorithm performance. We give the convergence analysis under the randomized diminishing stepsize and the error bound analysis under the fixed stepsize in Sections 3 and 4, respectively.
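A small sketch of why the randomized diminishing stepsize behaves like a 1/k schedule: by the law of large numbers, an agent's update count grows like γ_i k, so the inverse-of-update-count stepsize is roughly 1/(γ_i k). This interpretation, and the counter used below, are our own illustration, not a statement from the paper:

import numpy as np

rng = np.random.default_rng(1)
gamma_i, T = 0.3, 20000
wakes = rng.random(T) < gamma_i      # chi_{i,k} draws for a single agent
counts = np.cumsum(wakes)            # gradient update count after k iterations
print(counts[-1] / T)                # empirical update rate, close to 0.3
# Stepsize after T iterations vs. the 1/(gamma_i * k) approximation:
print(1.0 / counts[-1], 1.0 / (gamma_i * T))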

    3 Convergence analysis with randomized diminishing stepsize

    3.1 Consensus result

In this subsection, we first show that all the agents reach consensus in mean, and then show that they also reach consensus with probability 1. Without loss of generality, we assume the dimension m to be 1.

Theorem 1  With Assumptions 1-3 and the randomized diminishing stepsize, all the agents reach consensus in mean with algorithm (2), that is,

lim_{k→∞} E[‖x_i(k) − y(k)‖] = 0, ∀i ∈ N,

where y(k) = (1/n)Σ_{j=1}^n x_j(k) is the average of the agents' estimates.

Proof  Define ζ_i(k) = P_X[v_i(k) − α_{i,k}(g_i(k) + ε_{i,k})] − v_i(k) for agent i ∈ J_k, and ζ_i(k) = 0 for agent i ∉ J_k. We rewrite the dynamics of agent i as follows:

x_i(k+1) = Σ_{j=1}^n a_{ij}(k) x_j(k) + ζ_i(k).

where the second step follows from Assumption 1, Lemma 1, equations (4)-(6), and the tower property E[E[x|F_1]|F_2] = E[x|F_2] for F_2 ⊂ F_1.

Hence, still with E[E[x|F_1]|F_2] = E[x|F_2] for F_2 ⊂ F_1 and (6), for k > s > k̄, with probability 1, we have

    which implies that the agents reach consensus in mean. □

The next result describes the consensus rate and will be used in the following analysis.

with probability 1 by algorithm (2).

From the monotone convergence theorem (referring to [29]),

By Theorem 1, ‖x_i(k+1) − y(k+1)‖ converges in mean. According to Fatou's lemma [29],

Then we only need to show that ‖x_i(k+1) − y(k+1)‖ converges with probability 1.

For agent i ∈ J_k, we define

ξ_i(k) = v_i(k) − α_{i,k}(g_i(k) + ε_{i,k}).

Since x_i(k+1) = P_X[ξ_i(k)] for agent i ∈ J_k, by the non-expansiveness of the projection we have

For agent i ∈ J_k,

Thus, combined with (11), all the agents reach consensus with probability 1. □

    3.2 Convergence result

The next result shows that all the agents reach the same optimal solution of (1) almost surely.

Proof  Let x* be any point in the optimal solution set X*. Then for i ∈ J_k,

Since 2ab ≤ a² + b², we have 2η_k|(g_i(k) + ε_{i,k})^T(v_i(k) − x*)| ≤ η_k‖v_i(k) − x*‖² + η_k‖g_i(k) + ε_{i,k}‖². Therefore,

4 Error bound analysis under fixed stepsize

In some cases, a fixed stepsize policy may be adopted to facilitate the agents' updates (referring to [1,2,9]). In fact, with a fixed stepsize, each agent can adapt to changing environments, while a diminishing stepsize cannot. Because the agents cannot converge to the exact optimal solution under a fixed stepsize, as proved in [11], we analyze the error bound in this section.
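The residual error under a fixed stepsize can be seen already in one dimension. The following toy sketch (ours, not the paper's distributed bound) runs noisy gradient descent on f(x) = x²/2; the tail mean-square error settles at a floor that shrinks roughly in proportion to α:

import numpy as np

rng = np.random.default_rng(2)

def mse_floor(alpha, T=50000, noise=1.0):
    # Noisy gradient descent on f(x) = x^2/2 with a fixed stepsize alpha.
    x, tail = 5.0, []
    for k in range(T):
        g = x + rng.normal(0.0, noise)   # noisy gradient at x
        x -= alpha * g
        if k >= T - 1000:
            tail.append(x * x)
    return np.mean(tail)

for alpha in (0.1, 0.01, 0.001):
    print(alpha, mse_floor(alpha))       # floor is roughly alpha * noise^2 / 2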

Suppose that all the agents use the same fixed stepsize α_{i,k} = α in the distributed algorithm (2). Denote γ_min = min{γ_1,...,γ_n} and γ_max = max{γ_1,...,γ_n}, which represent the minimal and maximal update rates in the network. The following result gives the consensus error bound analysis.

Lemma 6  With Assumptions 1-3 and fixed stepsize α_{i,k} = α, by the random sleep algorithm (2), we have

Proof  It is not hard to obtain

Using ζ_i(k) given in Theorem 1, and with the fixed stepsize, from equation (4) we have

    where the last step follows from Lemma 1 and Assumption 1.

With fixed stepsize α_{i,k} = α,

Therefore, taking the conditional expectation of (14) with respect to F_{s−1}, we have

Taking the full expectation and the limits yields (13). □

Remark 2  Lemma 6 shows that a smaller update rate γ_i leads to a smaller consensus error, which is consistent with the result in [23].

    4.1 Strongly convex objective function

Here we analyze the distance between the agents' estimates and the optimal solution when the local objective functions are differentiable and σ-strongly convex, that is,

f_i(y) ≥ f_i(x) + ∇f_i(x)^T(y − x) + (σ/2)‖y − x‖², ∀x, y ∈ X.
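For a quadratic f(x) = x^TQx/2 + b^Tx with Q symmetric positive definite, the modulus is σ = λ_min(Q). A quick numerical check of the strong convexity inequality above (the data here are illustrative assumptions of ours):

import numpy as np

rng = np.random.default_rng(3)
Q = np.array([[3.0, 1.0], [1.0, 2.0]])      # symmetric positive definite
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ Q @ x + b @ x
grad = lambda x: Q @ x + b
sigma = np.linalg.eigvalsh(Q).min()          # strong convexity modulus

# Verify f(y) >= f(x) + grad(x)^T (y - x) + (sigma/2) * ||y - x||^2:
ok = all(
    f(y) >= f(x) + grad(x) @ (y - x) + 0.5 * sigma * (y - x) @ (y - x) - 1e-9
    for x, y in (rng.normal(size=(2, 2)) for _ in range(1000))
)
print(ok)  # True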

Proof  Without loss of generality, we still use the sub-gradient notation, though ∂f(x) = {∇f(x)} here. With (2), for agent i ∈ J_k,

because (v_i(k) − x*) is measurable with respect to {F_{k−1}, J_k} and E[ε_{i,k} | F_{k−1}, J_k] = 0,

Moreover, because the local objective functions are σ-strongly convex,

Hence, for i ∈ J_k,

Hence, for all agents,

Taking the expectation with respect to F_{k−1},

Summing up the above equation from i = 1 to n gives

    Therefore,

By Lemmas 2 and 6, we obtain

    4.2 Nonsmooth convex objective functions

In this part, we apply the time-averaging technique in [27] to study the gap between the function value at the estimate and the optimal value when the local objective functions are nonsmooth convex functions.
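Time-averaging works with the running averages x̂_i(k) = (1/k) Σ_{r=1}^k x_i(r) of the iterates; by convexity, f evaluated at the average is bounded by the averaged function values. A minimal sketch of maintaining such an average incrementally (our own illustration; the noisy iterates are synthetic):

import numpy as np

def running_average(iterates):
    # Incrementally maintain xhat(k) = (1/k) * sum_{r<=k} x(r).
    xhat = None
    for k, x in enumerate(iterates, start=1):
        xhat = x.copy() if xhat is None else xhat + (x - xhat) / k
        yield xhat

rng = np.random.default_rng(4)
# Noisy iterates oscillating around (1, -1); averaging damps the noise.
iterates = (np.array([1.0, -1.0]) + rng.normal(0.0, 0.5, 2) for _ in range(5000))
for xhat in running_average(iterates):
    pass
print(xhat)  # close to (1, -1)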

Proof  For agent i ∈ J_k, by the non-expansiveness of the projection operation, (15) still holds.

Hence, taking an expectation with respect to F_{k−1},

Denoting κ = 2nγ_max(C_g + ν)² and taking the full expectation, we have

Summing up from r = 1 to r = k gives

    As a result,

    Then simple manipulations yield the conclusion. □

    5 Numerical examples

In this section, two numerical examples are given to illustrate the algorithm performance. Example 1 shows the algorithm convergence with the randomized diminishing stepsize when the local objective functions are nonsmooth, while Example 2 illustrates the performance for distributed quadratic optimization with the fixed stepsize.

Example 1  Consider five agents solving the optimization problem (1) with x = (θ_1, θ_2)^T ∈ R², where

and the constraint set X ⊂ R² is X = {x = (θ_1, θ_2)^T ∈ R² : (θ_1 − 1)² + θ_2² − 1 ≤ 0, θ_1² + θ_2² − 1 ≤ 0, θ_2 ≥ 0}. The optimal solution is (0.978, 0.206).
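The projection P_X onto this intersection of two discs and a half-plane has no simple closed form; a sketch of computing it numerically (assuming scipy is available; the solver defaults and starting point are our choices):

import numpy as np
from scipy.optimize import minimize

def project_X(x):
    # Euclidean projection onto X = {(t1, t2): (t1-1)^2 + t2^2 <= 1,
    # t1^2 + t2^2 <= 1, t2 >= 0} via a small constrained solve.
    cons = [
        {"type": "ineq", "fun": lambda y: 1.0 - (y[0] - 1.0) ** 2 - y[1] ** 2},
        {"type": "ineq", "fun": lambda y: 1.0 - y[0] ** 2 - y[1] ** 2},
        {"type": "ineq", "fun": lambda y: y[1]},
    ]
    res = minimize(lambda y: np.sum((y - x) ** 2),
                   x0=np.array([0.5, 0.2]), constraints=cons)
    return res.x

print(project_X(np.array([2.0, 2.0])))  # a boundary point of X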

The five agents share information over the topologies G_1, G_2, G_3, G_4, switched periodically as shown in Fig. 1.

Fig. 1  The communication graphs.

The corresponding mixing matrices A_i are given as

The agents are assumed to have the same update rate γ_i = γ. The sub-gradients are corrupted by Gaussian noise N(0,1) with zero mean and unit variance.

Let γ = 0.5; then the trajectories of the estimates of θ_1 and θ_2 under the random sleep algorithm and the standard algorithm (γ = 1) are shown in Fig. 2.

Fig. 2  Trajectories of the estimates for γ = 0.5 and γ = 1. The trajectories behave almost the same, but the random sleep algorithm (γ = 0.5) requires only half as many sub-gradient calculations as the standard algorithm (γ = 1) without sleeping. (a) The estimates of θ_1, γ = 0.5. (b) The estimates of θ_1, γ = 1. (c) The estimates of θ_2, γ = 0.5. (d) The estimates of θ_2, γ = 1.
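The "half as many sub-gradient calculations" remark in the caption follows directly from the Bernoulli wake/sleep draws; a back-of-envelope count (our own illustration):

import numpy as np

rng = np.random.default_rng(5)
T, n = 3000, 5
for gamma in (0.5, 1.0):
    wakes = rng.random((T, n)) < gamma   # chi_{i,k} draws for all agents
    print(gamma, int(wakes.sum()), "sub-gradient evaluations over", T, "iterations")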

Fig. 3  Iteration times and update times versus update rate γ. Raising the update rate decreases the number of iterations but increases the number of updates needed to reach the same stopping criterion. Hence, the selection of the update rate should be a trade-off between the iteration time requirement and the computation burden.

Example 2 is given to illustrate the performance for differentiable strongly convex objective functions with the fixed stepsize.

Fig. 4  Trajectories. The trajectories behave almost the same, but (a) the random sleep algorithm (γ = 0.5) requires only half as many updates as (b) the standard algorithm (γ = 1) without sleeping.

    6 Conclusions

In this paper, we proposed a distributed sub-gradient algorithm with random sleep scheme for convex constrained optimization problems. Because stochastic (sub-)gradient noise was also taken into consideration, the algorithm generalizes some existing results. We gave the convergence analysis under both the randomized diminishing stepsize and the fixed stepsize, together with numerical simulations.

    Acknowledgements

The authors would like to thank Dr. Youcheng Lou for fruitful discussions.

References

[1] A. Nedic, A. Ozdaglar. Distributed subgradient methods for multi-agent optimization. IEEE Transactions on Automatic Control, 2009, 54(1): 48-61.

[2] A. H. Sayed, S. Tu, J. Chen, et al. Diffusion strategies for adaptation and learning over networks. IEEE Signal Processing Magazine, 2013, 30(3): 155-171.

[3] G. Shi, K. H. Johansson, Y. Hong. Reaching an optimal consensus: dynamical systems that compute intersections of convex sets. IEEE Transactions on Automatic Control, 2013, 58(3): 610-622.

[4] Y. Lou, G. Shi, K. H. Johansson, et al. An approximate projected consensus algorithm for computing intersection of convex sets. IEEE Transactions on Automatic Control, 2014, 59(7): 1722-1736.

[5] P. Yi, Y. Zhang, Y. Hong. Potential game design for a class of distributed optimization problems. Journal of Control and Decision, 2014, 1(2): 166-179.

[6] P. Yi, Y. Hong. Quantized subgradient algorithm and data-rate analysis for distributed optimization. IEEE Transactions on Control of Network Systems, 2014, 1(4): 380-392.

[7] P. Yi, Y. Hong, F. Liu. Distributed gradient algorithm for constrained optimization with application to load sharing in power systems. Systems & Control Letters, 2015, 83: 45-52.

[8] S. Ram, A. Nedic, V. V. Venugopal. Distributed stochastic subgradient projection algorithms for convex optimization. Journal of Optimization Theory and Applications, 2010, 147(3): 516-545.

[9] A. Nedic. Asynchronous broadcast-based convex optimization over a network. IEEE Transactions on Automatic Control, 2011, 56(6): 1337-1351.

[10] I. Lobel, A. Ozdaglar. Distributed subgradient methods for convex optimization over random networks. IEEE Transactions on Automatic Control, 2011, 56(6): 1291-1306.

[11] K. Yuan, Q. Ling, W. Yin. On the convergence of decentralized gradient descent. arXiv, 2013: http://arxiv.org/abs/1310.7063.

[12] J. C. Duchi, A. Agarwal, M. J. Wainwright. Dual averaging for distributed optimization: convergence analysis and network scaling. IEEE Transactions on Automatic Control, 2012, 57(3): 592-606.

[13] D. Yuan, S. Xu, H. Zhao. Distributed primal-dual subgradient method for multiagent optimization via consensus algorithms. IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, 2011, 41(6): 1715-1724.

[14] W. Shi, Q. Ling, G. Wu, et al. EXTRA: an exact first-order algorithm for decentralized consensus optimization. SIAM Journal on Optimization, 2015, 25(2): 944-966.

[15] J. Wang, Q. Liu. A second-order multi-agent network for bound-constrained distributed optimization. IEEE Transactions on Automatic Control, 2015: DOI 10.1109/TAC.2015.241692.

[16] V. Cevher, S. Becker, M. Schmidt. Convex optimization for big data: scalable, randomized, and parallel algorithms for big data analytics. IEEE Signal Processing Magazine, 2014, 31(5): 32-43.

[17] H. Lakshmanan, D. P. Farias. Decentralized resource allocation in dynamic networks of agents. SIAM Journal on Optimization, 2008, 19(2): 911-940.

[18] D. Yuan, D. W. C. Ho. Randomized gradient-free method for multiagent optimization over time-varying networks. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(6): 1342-1347.

[19] K. Kvaternik, L. Pavel. Analysis of Decentralized Extremum-Seeking Schemes. Toronto: Systems Control Group, University of Toronto, 2012.

[20] N. Atanasov, J. L. Ny, G. J. Pappas. Distributed algorithms for stochastic source seeking with mobile robot networks. ASME Journal on Dynamic Systems, Measurement, and Control, 2015, 137(3): DOI 10.1115/1.4027892.

[21] G. Shi, K. H. Johansson. Randomized optimal consensus of multi-agent systems. Automatica, 2012, 48(12): 3018-3030.

[22] Y. Lou, G. Shi, K. H. Johansson, et al. Convergence of random sleep algorithms for optimal consensus. Systems & Control Letters, 2013, 62(12): 1196-1202.

[23] D. Jakovetic, J. Xavier, J. M. F. Moura. Fast distributed gradient methods. IEEE Transactions on Automatic Control, 2014, 59(5): 1131-1146.

[24] D. Jakovetic, D. Bajovic, N. Krejic, et al. Distributed gradient methods with variable number of working nodes. arXiv, 2015: http://arxiv.org/abs/1504.04049.

[25] J. Liu, X. Jiang, S. Horiguchi, et al. Analysis of random sleep scheme for wireless sensor networks. International Journal of Sensor Networks, 2010, 7(1): 71-84.

[26] H. Fan, M. Liu. Network coverage using low duty-cycled sensors: random & coordinated sleep algorithms. Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks, New York: ACM, 2004: 433-442.

[27] B. T. Polyak. Introduction to Optimization. New York: Optimization Software Inc., 1987.

[28] T. Li, L. Xie. Distributed consensus over digital networks with limited bandwidth and time-varying topologies. Automatica, 2011, 47(9): 2006-2015.

[29] R. B. Ash. Real Analysis and Probability. New York: Academic Press, 1972.

Peng YI is a Ph.D. candidate at the Academy of Mathematics and Systems Science, Chinese Academy of Sciences. He received his B.Sc. degree in Automation from the University of Science and Technology of China in 2011. His research interests cover multi-agent systems, distributed optimization, hybrid systems, and smart grid. E-mail: yipeng@amss.ac.cn.

*Corresponding author. E-mail: yghong@iss.ac.cn.

This work was supported by the Beijing Natural Science Foundation (No. 4152057), the National Natural Science Foundation of China (No. 61333001), and the 973 Program (No. 2014CB845301/2/3).

© 2015 South China University of Technology, Academy of Mathematics and Systems Science, CAS, and Springer-Verlag Berlin Heidelberg.

Yiguang HONG received the B.Sc. and M.Sc. degrees from Peking University, Beijing, China, and the Ph.D. degree from the Chinese Academy of Sciences (CAS), Beijing. He is currently a professor at the Academy of Mathematics and Systems Science, CAS, and serves as the Director of the Key Lab of Systems and Control, CAS, and the Director of the Information Technology Division, National Center for Mathematics and Interdisciplinary Sciences, CAS. His research interests include nonlinear dynamics and control, multi-agent systems, distributed optimization, social networks, and software reliability. E-mail: yghong@iss.ac.cn.
