
    Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize

    2021-07-26 07:22:58
    IEEE/CAA Journal of Automatica Sinica, August 2021

    Xiaoxing Ren, Dewei Li, Yugeng Xi, and Haibin Shao

    Abstract—In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze the distributed gradient method which allows each agent to compute its dynamic stepsize by utilizing the time-varying estimate of the local function value at the global optimal solution. Our approach can be applied to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.

    I. INTRODUCTION

    DISTRIBUTED optimization in multi-agent systems has received extensive attention due to its ubiquity in scenarios such as power systems [1], [2], smart grids [3], [4], compressed sensing problems [5], [6], learning-based control [7], and machine learning [8], [9]. In distributed optimization problems, a whole task can be accomplished cooperatively by a group of agents via simple local information exchange and computation.

    There exist various studies of distributed optimization methods based on multi-agent networks. Among them, the most widely studied are distributed gradient methods. In this line of research, Nedic and Ozdaglar [10] develop a general framework for the multi-agent optimization problem over a network; they propose a distributed subgradient method and analyze its convergence properties. They further consider the case where the agents' states are constrained to convex sets and propose the projected consensus algorithm in [11]. The authors of [12] develop and analyze the dual averaging subgradient method, which carries out a projection operation after averaging and descending. In [13], two fast distributed gradient algorithms based on the centralized Nesterov gradient algorithm are proposed. Novel distributed methods that achieve linear rates for strongly convex and smooth problems have been proposed in [14]–[18]. The common idea in these methods to correct the error caused by a fixed stepsize is to construct a correction term using historical information. To deal with the case where communications among agents are asynchronous, some extensions have been proposed. Nedic [19] proposes an asynchronous broadcast-based algorithm, while the authors of [20] develop a gossip-based random projection (GRP) algorithm; both study the convergence of the algorithms for a diminishing stepsize and the error bounds for a fixed stepsize. Lei et al. [21] consider the distributed constrained optimization problem in random graphs and develop a distributed primal-dual algorithm that uses the same diminishing stepsize for both the consensus part and the subgradient part.

    The selection of stepsize is critical in the design of gradient methods. Typically, the literature considers two types of methods, namely, diminishing stepsizes and fixed stepsizes. The existing distributed gradient methods with diminishing stepsizes asymptotically converge to the optimal solution. The diminishing stepsizes should follow a decaying rule such as being positive, vanishing, non-summable but square summable; see, e.g., [10], [11], [13], [19]–[22]. In [23], a wider selection of stepsizes is explored: the square-summability requirement commonly adopted in the literature is removed, which opens the possibility of a better convergence rate. These methods are widely applicable to nonsmooth convex functions, but convergence is rather slow due to the diminishing stepsizes. With a fixed stepsize, it is shown in [24] that the algorithm converges faster, but only to a point in the neighborhood of an optimal solution. The recently developed distributed algorithms with fixed stepsizes [14]–[18] achieve linear convergence to the optimal solution. However, they require that the local objective functions be strongly convex and smooth. Besides, the fixed stepsize must be less than a certain critical value determined by the weight matrix of the network and the Lipschitz continuity and strong convexity parameters of the objective functions. Thus, these algorithms impose restrictive conditions on the fixed stepsize and require knowledge of global information, which limits their applicability.

    In comparison to previous work, the contribution of this paper is a novel dynamic stepsize selection approach for the distributed gradient algorithm. We develop the associated distributed gradient algorithms for synchronous and asynchronous gossip-like communication protocols. An interesting feature of the dynamic stepsize is that, unlike the existing distributed algorithms whose diminishing or fixed stepsizes are determined before the algorithm is run, the proposed distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) and AsynDGD algorithms use dynamic stepsizes that rely on time-varying estimates of the optimal function value generated at each iteration of the algorithm. The advantages of the dynamic stepsize proposed in this paper lie in two aspects. On the one hand, the dynamic stepsize only requires that the local convex objective functions have locally bounded subgradients in the synchronous scenario and locally Lipschitz continuous bounded gradients in the asynchronous scenario. Besides, the dynamic stepsize needs no knowledge of global information about the network or the objective functions. Thus, the proposed algorithms are more widely applicable than the distributed algorithms with fixed stepsizes. On the other hand, the dynamic stepsize can overcome the inefficient computations caused by diminishing stepsizes and achieve faster convergence. The dynamic stepsize is a generalization of the Polyak stepsize [25], which is commonly used in centralized optimization and is shown to have faster convergence than the diminishing stepsize even with an estimated optimal function value [26].
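To make the Polyak connection concrete, here is a minimal centralized sketch on a toy scalar problem f(x) = |x − 3| with known optimal value f* = 0. All names are illustrative; this is the classical rule from [25], not the paper's distributed update, which replaces f* with a time-varying local estimate.

```python
import numpy as np

def polyak_subgradient(x0, f, subgrad, f_star, iters=100):
    """Centralized subgradient method with the classical Polyak stepsize."""
    x = x0
    for _ in range(iters):
        g = subgrad(x)
        if g == 0:                      # a zero subgradient certifies optimality here
            break
        # Polyak stepsize: (f(x_k) - f*) / ||g_k||^2
        alpha = (f(x) - f_star) / (g * g)
        x = x - alpha * g
    return x

# Toy problem: f(x) = |x - 3|, with subgradient sign(x - 3) and f* = 0.
f = lambda x: abs(x - 3.0)
subgrad = lambda x: np.sign(x - 3.0)
x_final = polyak_subgradient(10.0, f, subgrad, f_star=0.0)
```

On this toy problem the Polyak rule reaches the minimizer in a single step, since the stepsize exactly equals the distance to the optimum.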
Note that the proposed algorithms utilize two gradients at each iteration: one is used to construct the stepsize and the other for the direction, which means that the iteration complexity of the algorithm is doubled. However, numerical examples in which the plots are in terms of the number of gradient calculations illustrate the effectiveness of the proposed algorithms.

    The remainder of this paper is organized as follows. In Section II, we describe the problem formulation. The distributed subgradient algorithm with uncoordinated dynamic stepsizes and its convergence analysis are provided in Section III. Section IV discusses the extension of the proposed algorithm to the asynchronous communication protocol. In Section V, we apply our algorithms to distributed estimation problems to illustrate their effectiveness. Finally, we make concluding remarks in Section VI.

    II. PROBLEM FORMULATION

    Consider a network consisting of N agents; the goal of the agents is to cooperatively solve the following problem defined on the network
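The displayed problem did not survive extraction. Based on the abstract (constrained consensus optimization with local objectives), the standard formulation in this line of work (e.g., [10], [11]) reads, as an assumption:

```latex
\min_{x \in \Omega} \; f(x) = \sum_{i=1}^{N} f_i(x)
```

where each convex function f_i is known only to agent i and Ω is the common constraint set.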

    III. DISTRIBUTED SUBGRADIENT ALGORITHM WITH DYNAMIC STEPSIZES

    In this section, we derive the distributed subgradient algorithm with dynamic stepsizes for synchronous communication protocol and present its convergence analysis.

    A. Algorithm Development

    Algorithm 1 summarizes the above steps. This distributed subgradient method with uncoordinated dynamic stepsizes is abbreviated as DS-UD.

    Remark 1: Since the convergence speed of the algorithm varies when solving different specific optimization problems, different maximum iteration numbers can be set for different problems to ensure that the optimality error decreases rather slowly at the maximum iteration. In practical applications, we can set the maximum iteration number according to the connectivity of the multi-agent network and the scale of the optimization problem.

    Algorithm 1 DS-UD
    1: Initial: Given initial variables x_i^0 = y_i^0 ∈ Ω, ∀i ∈ V, the weight matrix W ∈ R^(N×N) under Assumptions 2 and 3, and the maximum iteration number. Set k = 0.
    2: Obtain the estimate: For i ∈ V, each agent i computes (2) and (3) to get the estimate f_i^est(k).
    3: Dynamic stepsize: Each agent i obtains its stepsize α_(i,k) based on the estimate f_i^est(k) according to (4) and (5).
    4: Local variable updates: Each agent i updates according to (6). Set k = k + 1.
    5: Repeat steps 2 to 4 until the predefined maximum iteration number is reached.
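The round structure of Algorithm 1 can be sketched as follows. This is a hedged sketch following only the step names (mixing, stepsize, update); the paper's exact rules (2)–(6) are not reproduced, and the Polyak-type stepsize and ball constraint below are assumptions for illustration.

```python
import numpy as np

def project(x, radius=5.0):
    """Euclidean projection onto an assumed ball Omega = {x : ||x|| <= radius}."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def ds_ud_round(X, W, funcs, subgrads, f_est):
    """One synchronous DS-UD-style round.

    X: (N, d) stacked agent variables; W: doubly stochastic weight matrix;
    f_est[i]: agent i's current estimate of the optimal function value.
    """
    Y = W @ X                                   # consensus mixing with neighbors
    X_new = np.empty_like(X)
    for i, (fi, gi) in enumerate(zip(funcs, subgrads)):
        g = gi(Y[i])
        # dynamic (Polyak-type) stepsize driven by the optimal-value estimate
        alpha = max(fi(Y[i]) - f_est[i], 0.0) / max(float(g @ g), 1e-12)
        X_new[i] = project(Y[i] - alpha * g)
    return X_new

# Toy run: two agents with quadratic local objectives centered at +/-1,
# so the minimizer of the sum is the origin and f_i there equals 1.
centers = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]
funcs = [lambda x, c=c: float(np.sum((x - c) ** 2)) for c in centers]
subgrads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
f_est = [1.0, 1.0]                              # exact optimal local values, for the demo
W = np.array([[0.5, 0.5], [0.5, 0.5]])
X = np.array([[3.0, 1.0], [-3.0, 1.0]])
for _ in range(500):
    X = ds_ud_round(X, W, funcs, subgrads, f_est)
```

In the toy run both agents approach the global minimizer at the origin while staying in consensus, mirroring the behavior the analysis establishes.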

    B. Analysis of the Algorithm

    Substituting (9) into (8), for all x ∈ Ω and k ≥ 0,

    IV. ASYNCHRONOUS COMMUNICATION

    In this section, we extend the DS-UD algorithm to the asynchronous communication protocol, which allows a group of agents to update in each iteration while the others do not. Also, we establish the convergence analysis of the proposed asynchronous algorithm.

    A. Algorithm Development

    In practical multi-agent systems, there exist uncertainties in communication networks, such as packet drops and link failures. We consider the gossip-like asynchronous communication protocol from [28]. Specifically, each agent is assumed to have a local clock that ticks at a Poisson rate of 1 and is independent of the other agents' clocks. This setting is equivalent to having a single virtual clock whose ticking times form a Poisson process of rate N and which ticks whenever any local clock ticks. Let Z_k be the absolute time of the k-th
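The timing model above can be sketched directly: N independent rate-1 Poisson clocks are statistically equivalent to one rate-N virtual clock whose k-th tick wakes a uniformly random agent. The function name and return format below are illustrative.

```python
import random

def simulate_ticks(num_agents, num_ticks, seed=0):
    """Simulate the virtual clock of the gossip-like protocol.

    Returns a list of (Z_k, i_k): the absolute time of the k-th tick and
    the index of the agent whose local clock fired at that tick.
    """
    rng = random.Random(seed)
    events = []
    t = 0.0
    for _ in range(num_ticks):
        # inter-tick times of the merged (virtual) clock are Exp(N)
        t += rng.expovariate(num_agents)
        # conditioned on a tick, each agent is equally likely to have fired
        events.append((t, rng.randrange(num_agents)))
    return events
```

For example, with 5 agents the average spacing between ticks is 1/5, so each agent still wakes at rate 1 on average.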

    The idle agents do not update, i.e.,

    This asynchronous distributed gradient method with dynamic stepsizes is abbreviated as AsynDGD; Algorithm 2 summarizes the above steps. We remark that the maximum iteration number in Algorithm 2 is set in the same way as in Algorithm 1.

    Algorithm 2 AsynDGD
    1: Initial: Given initial variables x_i^0 = y_i^0 ∈ Ω, ∀i ∈ V, and the maximum iteration number. Set k = 0.
    2: Asynchronous updates: For i ∈ V, if i ∈ J_k, go to Step 3; if i ∉ J_k, go to Step 6.
    3: Optimal value estimation: Agent i computes (20) and (21) to get the estimate f_i^est(k).
    4: Dynamic stepsize: Agent i calculates its stepsize α_(i,k) based on the estimate f_i^est(k) according to (22) and (23).
    5: Local variable updates: Agent i updates according to (24). Set k = k + 1, go to Step 7.
    6: Idle agent i does not update and maintains its variables in the new iteration as (25) and (26). Set k = k + 1.
    7: Repeat steps 2 to 6 until the predefined maximum iteration number is reached.

    B. Analysis of the Algorithm

    To define the history of the algorithm, we denote the σ-algebra generated by the entire history of the algorithm until time k by F_k, i.e., for k ≥ 0,

    The convergence rates of the distributed gradient algorithms [11], [20] to the optimal solution are sublinear for convex functions due to the use of diminishing stepsizes. The convergence rates of DS-UD and AsynDGD are also sublinear; however, we will discuss in detail why the proposed algorithms can achieve faster convergence than algorithms with diminishing stepsizes.

    Recall that the dynamic stepsize is defined by
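The defining formula did not survive extraction. For reference, the classical Polyak stepsize [25] that the paper's rule generalizes is, for a subgradient g_k of f at x_k,

```latex
\alpha_k = \frac{f(x_k) - f^{\ast}}{\lVert g_k \rVert^{2}}
```

where f* is the optimal value; in the distributed variant, each agent plausibly substitutes its time-varying estimate f_i^est(k) for f*, with the precise rule given by (4) and (5) in the full paper.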

    V. NUMERICAL EXAMPLES

    In this section, we provide numerical examples on the convergence performance of the proposed algorithms and provide comparisons with the existing distributed gradient algorithms. The results are consistent with our theoretical convergence analysis and illustrate the improved algorithmic performance.

    Example 1: First, we study the performance of DS-UD. We consider an undirected cycle consisting of 4 agents. The convex objective functions are as follows.

    Fig. 1. The estimates of 4 agents and the residual of the DS-UD algorithm. (a) The estimates for the first component. (b) The estimates for the second component. (c) The residual.

    where γ_i is the regularization parameter.

    Consider a randomly generated undirected connected network consisting of 100 sensors; the average degree of the network is 49. We set s = 10, d = 10, and γ_i = 0.05. The symmetric measurement matrix M_i ∈ R^(10×10) has eigenvalues
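A hedged sketch of this sensor-network setup follows. The exact objective (46) and the eigenvalue construction are not reproduced here; the regularized least-squares local objective below is an assumed common form, and all names are illustrative.

```python
import numpy as np

def make_sensor_problem(num_sensors=100, d=10, gamma=0.05, seed=0):
    """Generate per-sensor data (M_i, b_i) and local objectives f_i.

    Each sensor observes b_i = M_i theta* + noise with a symmetric
    measurement matrix M_i, and holds the (assumed) local objective
    f_i(theta) = ||M_i theta - b_i||^2 + gamma * ||theta||^2.
    """
    rng = np.random.default_rng(seed)
    theta_star = rng.standard_normal(d)         # ground-truth parameter
    problems = []
    for _ in range(num_sensors):
        M = rng.standard_normal((d, d))
        M = 0.5 * (M + M.T)                     # symmetrize the measurement matrix
        b = M @ theta_star + 0.1 * rng.standard_normal(d)
        f = lambda th, M=M, b=b: float(np.sum((M @ th - b) ** 2) + gamma * th @ th)
        problems.append((M, b, f))
    return theta_star, problems
```

Each local f_i is strongly convex thanks to the γ-regularizer, which is what makes the estimation problem well posed for every sensor individually.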

    Fig. 2. The normalized relative errors of three algorithms. (a) The normalized relative errors of DGD, D-NG and DS-UD algorithms versus the number of iterations. (b) The normalized relative errors of DGD, D-NG and DS-UD algorithms versus the number of gradient calculations.

    Note that the proposed DS-UD algorithm utilizes two gradients at each iteration: one is used to construct the stepsize, and the other is for the update direction. This means that the iteration complexity (number of gradient calculations per iteration) of the DS-UD algorithm is twice that of the DGD, D-NG, and DDA algorithms. Therefore, to have a fair comparison with the DGD, D-NG, and DDA algorithms, the plots in Fig. 2(a) and Fig. 3(a) are in terms of the number of iterations, and the plots in Fig. 2(b) and Fig. 3(b) are in terms of the number of gradient calculations.

    Fig. 3. The normalized relative errors of three algorithms. (a) The normalized relative errors of DGD, DDA and DS-UD algorithms versus the number of iterations. (b) The normalized relative errors of DGD, DDA and DS-UD algorithms versus the number of gradient calculations.

    Moreover, DS-UD requires fewer iterations and gradient calculations to solve the optimization problem to a high level of accuracy than the DGD, D-NG, and DDA algorithms. It can be seen that DS-UD brings a satisfactory convergence result for the distributed optimization problem and outperforms the DGD, D-NG, and DDA algorithms.

    Besides, we provide the trajectory of dynamic stepsizes in DS-UD and compare it to the diminishing stepsize in DGD.

    Fig. 4. The stepsizes of DGD and DS-UD algorithms.

    Fig. 5. The distance S_R between the current stepsizes and the optimal stepsizes of the DS-UD and DGD algorithms.

    Example 3: Now, we examine the effectiveness of AsynDGD. We compare it with the GRP algorithm in [20] and the primal-dual algorithm in [21].

    Consider an undirected fully connected network consisting of 10 sensors. The sensors attempt to measure a parameter θ* by solving the distributed estimation problem (46). We set s = 1, d = 2, and γ_i = 0.2. M_i ∈ R^(1×2) has entries randomly generated in (0,1), and the noise ω_i ∈ R follows an i.i.d. Gaussian sequence N(0, 0.1), i = 1,...,10. The constraint set is Ω = {θ ∈ R^2 : ‖θ‖ ≤ 5}.

    In the asynchronous scenario, for a fair comparison, the three algorithms are assumed to use the same gossip-like protocol as in this work. Specifically, at each iteration, one of the 10 sensors is randomly selected to be idle; it does not update, and the associated edges are not activated.

    Fig. 6(a) depicts the averaged normalized relative error (over the Monte-Carlo runs) of the three algorithms versus the total number of iterations. Fig. 6(b) depicts the averaged normalized relative error (over the Monte-Carlo runs) of the three algorithms versus the total number of gradient calculations of the 10 sensors. Fig. 6 shows that GRP and the primal-dual algorithm converge faster than AsynDGD at the beginning, but fall behind AsynDGD after this short initial phase of fast convergence. Besides, AsynDGD requires fewer iterations and gradient calculations to solve the optimization problem to a high level of accuracy than GRP and the primal-dual algorithm. The reason for the observed result is the same as that in Example 2 and thus is omitted. It is seen that AsynDGD achieves improved convergence performance for the distributed optimization problem.

    VI. CONCLUSIONS

    In this paper, distributed gradient algorithms with dynamic stepsizes are proposed for constrained distributed convex optimization problems. First, we develop distributed optimization algorithms for both synchronous and asynchronous communication protocols, in which each agent calculates its dynamic stepsize based on the time-varying estimate of its local function value at the global optimal solution. Second, we present the convergence analysis for the proposed algorithms. Besides, we compare them with existing algorithms through numerical examples of distributed estimation problems to illustrate their effectiveness.

    Fig. 6. The averaged normalized relative errors of three asynchronous algorithms. (a) The averaged normalized relative error of the three asynchronous algorithms versus the number of iterations. (b) The averaged normalized relative error of the three asynchronous algorithms versus the number of gradient calculations.
