
    Distributed projection subgradient algorithm for two-network zero-sum game with random sleep scheme

2021-10-13
Control Theory and Technology, 2021, Issue 3

    Hongyun Xiong·Jiangxiong Han·Xiaohong Nian·Shiling Li

Abstract In this paper, a zero-sum game Nash equilibrium computation problem with a common constraint set is investigated over two time-varying multi-agent subnetworks, where the two subnetworks have opposite payoff functions. A novel distributed projection subgradient algorithm with a random sleep scheme is developed to reduce the computational load of agents in the process of computing the Nash equilibrium. In our algorithm, each agent uses an independent identically distributed Bernoulli decision to determine whether to compute the subgradient and perform the projection operation or to keep the previous consensus estimate, which effectively reduces the amount of computation and the computation time. Moreover, the traditional stepsize assumption adopted in existing methods is removed, and the stepsizes in our algorithm are randomized and diminishing. Besides, we prove that all agents converge to the Nash equilibrium with probability 1 under our algorithm. Finally, a simulation example verifies the validity of our algorithm.

Keywords Zero-sum game · Nash equilibrium · Time-varying multi-agent network · Projection subgradient algorithm · Random sleep scheme

    1 Introduction

The Nash equilibrium computation problem in multi-agent network games has received increasing attention in social networks, cloud computing, smart grids [1–3], and so on. In recent years, many researchers in the field have proposed useful methods to compute the Nash equilibrium of different game problems. For example, a distributed operator splitting method was developed in [4] to find generalized Nash equilibria. Ye and Hu [5] computed the Nash equilibrium of a non-cooperative game by combining consensus with the gradient play method. Furthermore, Deng et al. [6–8] studied aggregative game problems, in which each agent's objective depends on its own decision and on an aggregate of all agents' decisions. Moreover, Zeng et al. [9] considered Nash equilibrium computation in multi-network games, in which agents cooperate within each subnetwork while the subnetworks compete with each other.

Note that the games studied in [4–8] involve only one multi-agent network, whereas multi-network games are closer to practical application scenarios. In particular, the two-network zero-sum game is the most common multi-network game and has important applications in power allocation over Gaussian communication channels [10] and adversarial detection in sensor networks [11]. Many researchers have proposed algorithms to compute the Nash equilibrium of two-network zero-sum games. An original continuous-time dual method for solving the Nash equilibrium of the two-network zero-sum game was provided in [12]. Besides, a distributed projection subgradient algorithm was proposed in [13], which studied Nash equilibrium computation for two-network zero-sum games with different stepsizes and different communication topologies. In addition, Shi and Yang [14] proposed a novel incremental algorithm to reduce the communication burden between the two subnetworks.

In the existing distributed methods mentioned above, each agent must compute local subgradient information at every iteration. Subgradient computation may involve large data sets and multiple function evaluations, and the cost function is usually complex, so the computational load per agent is large; a new algorithm is therefore needed to reduce the computation of agents. To this end, the random sleep scheme was introduced for distributed optimization in [15], and a distributed algorithm with a random sleep scheme was proposed in [16] for constrained convex optimization over multi-agent networks. Besides, distributed optimization over multi-cluster networks was studied in [17] by combining a hierarchical algorithm with the random sleep scheme. Inspired by this work, this paper considers the Nash equilibrium computation problem of two-network zero-sum games over two time-varying multi-agent subnetworks and proposes a new algorithm with a random sleep scheme to reduce the computation of agents. Under the random sleep strategy, each agent uses an independent identically distributed Bernoulli decision to determine whether to compute the local subgradient and perform the projection operation at the current time, or only to perform a consensus update by combining its neighbors' information. Therefore, the computational cost of both the subgradient and the projection is reduced in our algorithm.

This paper studies the Nash equilibrium computation problem of two-network zero-sum games with a common constraint set under time-varying communication graphs. To effectively reduce the amount of computation and the running time, inspired by [15–17], a new projection subgradient algorithm with a random sleep scheme is proposed, and its consensus and convergence properties are analyzed. The main contributions of this paper are as follows:

(1) This paper proposes a new distributed projection subgradient algorithm with a random sleep scheme to compute the Nash equilibrium of a two-network zero-sum game under a time-varying weight-balanced communication topology. Compared with the existing algorithms [12–14], the proposed algorithm not only preserves convergence but also reduces the amount of computation and the running time.

(2) In contrast to the coordinated stepsizes in [13,14], randomized diminishing stepsizes are used in our algorithm, and our stepsizes are nonhomogeneous across agents. Therefore, our algorithm is more flexible in the choice of stepsize than [13,14].

(3) We use a distributed optimization method to find the Nash equilibrium of the zero-sum game. In our algorithm, the agents of the two subnetworks minimize and maximize the same convex–concave payoff function, respectively, whereas all agents in [16] cooperate to minimize a single convex objective function. Therefore, the design and analysis of our algorithm differ from [16].

(4) Different from [12], our algorithm is discrete-time, and due to the introduction of the random sleep scheme, its convergence analysis is more involved than that of the continuous-time algorithm in [12]. Moreover, our algorithm handles a common constraint set and allows the cost functions to be nonsmooth, which is more general than the requirements in [18].

The organization of this paper is as follows. In Sect. 2, preliminaries are introduced and the two-network zero-sum game Nash equilibrium computation problem is formulated. In Sect. 3, the novel distributed projection subgradient algorithm with random sleep scheme is designed, and its consensus and convergence are established under uncoordinated stepsizes. In Sect. 4, numerical examples are given. Finally, conclusions are given in Sect. 5.

    1.1 Notations

ℝ is the set of real numbers, and ℕ is the set of natural numbers. ℝ^n denotes the n-dimensional Euclidean space, and ‖·‖ denotes the Euclidean norm. r^T denotes the transpose of a vector r. The mathematical expectation of a random variable Y is denoted E[Y]. Let X be a closed convex set; P_X[α] is defined as the projection of a vector α ∈ ℝ^n onto X, that is, ‖α − P_X[α]‖ ≤ ‖α − γ‖ for any γ ∈ X. The projection operator P_X[·]: ℝ^n → X has the following important (nonexpansiveness) property: ‖P_X[α] − P_X[β]‖ ≤ ‖α − β‖ for all α, β ∈ ℝ^n.
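The nonexpansiveness property can be illustrated numerically. The following is only a sketch (not code from the paper), using a box constraint such as the set X = [1, 5] that appears later in Example 1, for which the Euclidean projection reduces to coordinate-wise clipping:

```python
import numpy as np

def project(alpha, lo=1.0, hi=5.0):
    """Euclidean projection P_X[alpha] onto the box X = [lo, hi]^n."""
    return np.clip(alpha, lo, hi)

# Nonexpansiveness: ||P_X[a] - P_X[b]|| <= ||a - b|| for any a, b
a = np.array([6.0, -2.0])
b = np.array([0.5, 3.0])
assert np.linalg.norm(project(a) - project(b)) <= np.linalg.norm(a - b)

# Defining property: P_X[a] is at least as close to a as any other point of X
for g in (1.0, 2.5, 5.0):
    assert np.linalg.norm(a - project(a)) <= np.linalg.norm(a - g * np.ones(2))
```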

For a convex function f(x), a subgradient ψ(x₁) at x₁ ∈ ℝ^n satisfies f(x) − f(x₁) ≥ ⟨x − x₁, ψ(x₁)⟩ for all x ∈ ℝ^n. For a concave function f(x), a subgradient ψ(x₁) at x₁ ∈ ℝ^n satisfies f(x) − f(x₁) ≤ ⟨x − x₁, ψ(x₁)⟩ for all x ∈ ℝ^n. The subdifferential ∂f(x₁) of f(x) at x₁ is the set of all subgradients of f(x) at x₁. A function f(x, y) is (strictly) convex–concave if it is (strictly) convex in x for any fixed y and (strictly) concave in y for any fixed x. The subdifferential of f(x, y) with respect to x is denoted ∂_x f(x, y); similarly, the subdifferential with respect to y is denoted ∂_y f(x, y).
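The convex subgradient inequality can be checked on a simple example. The sketch below (a hypothetical illustration, not from the paper) uses f(x) = max(0, x), whose subdifferential at the kink x = 0 is the whole interval [0, 1]:

```python
def f(x):
    return max(0.0, x)           # convex, nonsmooth at x = 0

def subgradient(x1):
    """One valid subgradient of f at x1 (any value in [0, 1] works at x1 = 0)."""
    if x1 > 0:
        return 1.0
    if x1 < 0:
        return 0.0
    return 0.5                    # an arbitrary element of the subdifferential [0, 1]

# Verify f(x) - f(x1) >= (x - x1) * psi(x1) on a grid of points
pts = [-2.0, -0.5, 0.0, 0.7, 3.0]
for x1 in pts:
    g = subgradient(x1)
    for x in pts:
        assert f(x) - f(x1) >= (x - x1) * g - 1e-12
```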

    2 Preliminaries and formulation

In this section, we give some preliminary knowledge and formulate our problem.

    2.1 Graph theory

    2.2 Saddle point and zero-sum games

For a function f(x, y), if there exists a pair (x*, y*) ∈ X × Y such that f(x*, y) ≤ f(x*, y*) ≤ f(x, y*) for all x ∈ X, y ∈ Y, then (x*, y*) is called a saddle point of f. In this paper, the sets X and Y are nonempty convex compact sets and the function f is convex–concave; these conditions ensure that the set of saddle points of f is nonempty.

A game is denoted (M, S, U), where |M| = N ∈ ℕ, N ≥ 2 is the number of players, S = s₁ × … × s_N is the action set, with s_i the action set of player i, and U = (u₁, …, u_N), where u_i: S → ℝ is the payoff function of player i. Each player aims to minimize its cost function until no player can obtain a better payoff by changing its action alone. An action profile x* = (x₁*, …, x_N*) is a Nash equilibrium if u_i(x_i*, x_{−i}*) ≤ u_i(x_i, x_{−i}*) for every player i ∈ M and every x_i ∈ s_i, where x_{−i} denotes the actions of all players except i. An N-person zero-sum game satisfies ∑_{i=1}^N u_i = 0. In particular, a two-person zero-sum game has only two players, whose payoff functions satisfy u₁ + u₂ = 0. An important conclusion is that a Nash equilibrium of a two-person zero-sum game is a saddle point of the payoff function.
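To illustrate the connection between saddle points and Nash equilibria, consider the toy convex–concave payoff u₁(x, y) = x² − y² with u₂ = −u₁ (a hypothetical example, not one from the paper). Its saddle point (0, 0) is exactly the Nash equilibrium: neither player can improve by deviating alone.

```python
def u1(x, y):
    return x**2 - y**2   # convex in x, concave in y; player 2 has u2 = -u1

x_star, y_star = 0.0, 0.0
grid = [i / 10.0 for i in range(-10, 11)]   # X = Y = [-1, 1]

# Saddle-point inequalities: u1(x*, y) <= u1(x*, y*) <= u1(x, y*)
for x in grid:
    for y in grid:
        assert u1(x_star, y) <= u1(x_star, y_star) <= u1(x, y_star)
```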

    2.3 Problem formulation

In this paper, we provide a novel distributed projection subgradient algorithm with a random sleep scheme to find the Nash equilibrium of a two-network zero-sum game under time-varying weight-balanced topologies; each agent updates its state only by communicating with its neighbors until it finally converges to the Nash equilibrium. An illustrative communication topology for the two-network zero-sum game is given in Fig. 1. The same problem was studied in [13], where each agent must compute subgradient information and perform a projection operation at every iteration. In contrast, our algorithm with the random sleep scheme effectively reduces the computation and energy burden while still ensuring that the agents converge to the Nash equilibrium.

    Fig.1 The communication topology of two-network zero-sum game

    3 Main result

In this section, a novel distributed algorithm is proposed in Sect. 3.1. The consensus result of the proposed algorithm is given in Sect. 3.2. Finally, the convergence of the proposed algorithm is analyzed in Sect. 3.3.

    3.1 Distributed algorithm design

In this section, a distributed projection subgradient algorithm with a random sleep scheme is first provided to find the Nash equilibrium of the two-network zero-sum game under time-varying weight-balanced communication topologies. First, a useful assumption about the communication topologies is given.

Assumption 1 The time-varying communication graphs G1(k) and G2(k) are uniformly jointly strongly connected, and Gcon(k) is uniformly jointly bipartite.

Remark 1 In [12], the communication topology is required to be connected at all times. In contrast, this paper only requires agents in the time-varying networks G(k) to exchange information with neighboring agents at least once during each period of length T, which saves energy and reduces the communication burden of multi-agent network systems in practical applications.

We denote the state of agent i ∈ V1 at time k as x_i(k) ∈ ℝ^{m1} and the state of agent i ∈ V2 at time k as y_i(k) ∈ ℝ^{m2}. Let ξ_{ℓi}(k), ℓ = 1, 2, k ∈ ℕ be independent identically distributed Bernoulli random variables satisfying P(ξ_{ℓi}(k) = 1) = ρ_{ℓi} and P(ξ_{ℓi}(k) = 0) = 1 − ρ_{ℓi}. The update mechanism of the proposed distributed projection subgradient algorithm with random sleep scheme is as follows. The initial values are x_i(0) ∈ X and y_i(0) ∈ Y. During the iteration, each agent i ∈ V1 either calculates the subgradient and performs the projection operation when ξ_{1i}(k) = 1, or keeps the previous consensus estimate when ξ_{1i}(k) = 0. Each agent i ∈ V2 follows a similar update mechanism governed by ξ_{2i}(k). The distributed projection subgradient algorithm with random sleep scheme is given as follows:

where a_ij(k) and b_ij(k) are the communication weights of link (j, i) in the two subnetworks, respectively; q_{1i}(k) is a subgradient of f_i for i ∈ V1 and q_{2i}(k) is a subgradient of g_i for i ∈ V2, evaluated at the current estimates. At time k, the stepsizes of the two subnetworks are denoted α_i(k), i ∈ V1 and β_i(k), i ∈ V2, respectively. The last quantity is the neighbor set of agent i in subnetwork G_ℓ(k) at time k.

In the proposed algorithm (2) and (3), the auxiliary variables represent each agent's estimate of the state of the opposite subnetwork. However, because the communication topology Gcon(k) is time-varying, an agent may not receive information from the opposite subnetwork at every time step. Therefore, our algorithm must consider two cases during the iteration, namely whether or not the agent currently has neighbors in the opposite subnetwork; accordingly, the state estimates are defined as follows:

(i) If agent i ∈ V1 has at least one neighbor in subnetwork G2(k) at time k, then

If agent i ∈ V2 has at least one neighbor in subnetwork G1(k) at time k, then

(ii) If agent i ∈ V1 has no neighbors in subnetwork G2(k) at time k, then by Assumption 1 there must exist h_{1i} ∈ ℕ such that

where k − h_{1i} is the last time before k at which agent i had at least one neighbor in subnetwork G2(k). If agent i ∈ V2 has no neighbors in subnetwork G1(k) at time k, then by Assumption 1 there must exist h_{2i} ∈ ℕ such that

where k − h_{2i} is the last time before k at which agent i had at least one neighbor in subnetwork G1(k).

Remark 2 According to the definitions above, it is not difficult to see that our algorithm requires each agent to have at least one neighbor in the opposite subnetwork during each period of length T, so that it can estimate the state of the opposite subnetwork. The weighted average state of the neighbors in the opposite subnetwork is used as the estimate. The state estimates in this paper are similar to those in [13].

In the proposed algorithm, (2) and (3) execute the state iterations of the two subnetworks, respectively. At each time k, every agent decides which iteration to carry out according to the random variable ξ_{ℓi}(k): when ξ_{ℓi}(k) = 0, the agent only uses neighbor information to perform the consensus operation, which avoids computing the subgradient and executing the projection operation, thus effectively reducing the amount of computation. In particular, when ρ_{ℓi} = 1, our algorithm reduces to the standard distributed projection subgradient algorithm in [13].
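Because the update equations (2) and (3) themselves are not recoverable from this copy, the following is only a schematic sketch of the random sleep mechanism for one (minimizing) subnetwork, under strong simplifying assumptions: a complete graph with uniform doubly stochastic weights, hypothetical local costs f_i(x) = (x − c_i)², and the stepsize α_i(k) = 1/Γ_i(k), where Γ_i(k) counts the agent's own activations.

```python
import numpy as np

rng = np.random.default_rng(0)

def run(n=4, K=2000, rho=0.6, lo=1.0, hi=5.0):
    # Hypothetical local costs f_i(x) = (x - c_i)^2; the paper's actual
    # payoff functions are not recoverable from this copy.
    c = np.linspace(1.0, 4.0, n)        # hypothetical local targets c_i
    x = rng.uniform(lo, hi, n)          # initial states x_i(0) in X = [lo, hi]
    gamma = np.zeros(n)                 # activation counters Γ_i(k)
    W = np.full((n, n), 1.0 / n)        # complete graph, doubly stochastic weights
    for _ in range(K):
        v = W @ x                       # consensus step with neighbor states
        awake = rng.random(n) < rho     # i.i.d. Bernoulli random sleep decisions
        for i in range(n):
            if awake[i]:                # active: subgradient step + projection
                gamma[i] += 1
                alpha = 1.0 / gamma[i]  # randomized diminishing stepsize
                x[i] = np.clip(v[i] - alpha * 2.0 * (v[i] - c[i]), lo, hi)
            else:                       # asleep: keep the consensus estimate
                x[i] = v[i]
    return x

states = run()
# all agents should approach the minimizer of sum_i (x - c_i)^2, i.e. mean(c)
```

On average each agent performs only about rho·K subgradient and projection steps, which is the computational saving the random sleep scheme targets.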

Let F_k be the σ-algebra generated by the entire history of our algorithm up to time k, that is, for k ≥ 1, F_k = {(x_i(0), ξ_{1i}(l), i ∈ V1); (y_i(0), ξ_{2i}(l), i ∈ V2), 1 ≤ l ≤ k − 1}, and F_0 = {(x_i(0), i ∈ V1); (y_i(0), i ∈ V2)}. Besides, J_{ℓk}, ℓ = 1, 2 denotes the set of agents that perform the projection subgradient iteration in the respective subnetwork at time k, that is, J_{ℓk} = {i : ξ_{ℓi}(k) = 1}. Some important assumptions for the consensus and convergence analysis are as follows:

Assumption 2 There exists a constant L > 0 such that the subgradients are bounded, that is, for all k ∈ ℕ, ‖q_{1i}(k)‖ ≤ L for all i ∈ V1 and ‖q_{2i}(k)‖ ≤ L for all i ∈ V2.

Remark 3 According to Assumption 2, we obtain the following useful conclusions: ‖f_i(x₁, ·) − f_i(x₂, ·)‖ ≤ L‖x₁ − x₂‖ and ‖f_i(·, x₁) − f_i(·, x₂)‖ ≤ L‖x₁ − x₂‖, and similarly for the cost functions g_i: ‖g_i(y₁, ·) − g_i(y₂, ·)‖ ≤ L‖y₁ − y₂‖ and ‖g_i(·, y₁) − g_i(·, y₂)‖ ≤ L‖y₁ − y₂‖. These conclusions are used in the convergence analysis.
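As a quick numerical sanity check (a hypothetical illustration, not from the paper): bounded subgradients imply the Lipschitz bounds of Remark 3, as can be verified for f(x) = |x|, whose subgradients lie in [−1, 1], so L = 1:

```python
def lipschitz_holds(f, L, pts, tol=1e-12):
    """Check |f(x1) - f(x2)| <= L * |x1 - x2| over all pairs from pts."""
    return all(abs(f(x1) - f(x2)) <= L * abs(x1 - x2) + tol
               for x1 in pts for x2 in pts)

# f(x) = |x| has subgradients bounded by L = 1, hence is 1-Lipschitz
assert lipschitz_holds(abs, 1.0, [-3.0, -0.7, 0.0, 1.2, 5.0])
```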

Assumption 3 For all k ∈ ℕ, the adjacency matrices A(k) and B(k) are doubly stochastic. Besides, the augmented weight matrices are stochastic, that is,

Assumption 4 The stepsizes of the agents in the two subnetworks are defined as follows:

Remark 4 The nonhomogeneous randomized diminishing stepsizes α_i(k), β_i(k) are used in our algorithm, where the counters denote the number of times agent i ∈ V_ℓ, ℓ = 1, 2 has been active before time k. Compared with [13,14], the stepsizes we consider are more flexible and general.
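Since the body of Assumption 4 is missing from this copy, the following sketch only illustrates one plausible schedule consistent with Remark 4 (an assumption on our part): each agent counts its own activations Γ_i(k) and, when awake, uses α_i(k) = 1/Γ_i(k), so the stepsizes diminish at agent-specific random times and differ across agents.

```python
import random

def active_stepsizes(rho, K, seed):
    """Stepsizes an agent actually uses: alpha = 1/Gamma, where Gamma counts
    the agent's own activations so far (a randomized diminishing schedule)."""
    rng = random.Random(seed)
    gamma, alphas = 0, []
    for _ in range(K):
        if rng.random() < rho:      # Bernoulli wake-up with probability rho
            gamma += 1
            alphas.append(1.0 / gamma)  # diminishes only on active steps
    return alphas

a1 = active_stepsizes(0.6, 1000, seed=1)
assert all(s > t for s, t in zip(a1, a1[1:]))   # strictly decreasing when active
assert 0 < len(a1) < 1000                        # agent sleeps on the other steps
```

Each agent's schedule depends on its own random activation history, which is why the stepsizes are nonhomogeneous across agents.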

For the stepsizes we set, there is an important lemma which will be used to analyze the convergence of our algorithm.

Lemma 1 [16] Let 0

Lemma 2 [16] Let (Ω, F, P) be a probability space and F₀ ⊂ F₁ ⊂ F₂ ⊂ … be a sequence of sub-σ-algebras of F. Let {λ(k)}, {μ(k)}, {ν(k)}, {σ(k)} be sequences of F_k-measurable non-negative random variables such that, with probability 1, for all k ∈ ℕ,

    3.2 Consensus analysis

In this section, we show that the multi-agent network G(k) achieves consensus with probability 1 under the proposed algorithm. First, we introduce an important lemma.

Lemma 3 [15] Let Assumptions 1, 2, 3 and 4 hold; then with probability 1 we have

where the averaged quantities are the average states of the two subnetworks at time k, respectively.

Proof The two-network zero-sum game can, to some extent, be divided into two distributed optimization problems in which each agent additionally estimates the state of the other subnetwork, so the proofs of (9) and (10) in Lemma 3 are similar to Lemma 5 of [15]. The proof of formula (9) follows that of Lemma 5 in [15], except that a gradient perturbation term is considered in [15]. On the other hand, the two subnetworks have similar state update laws with opposite update directions, so the proof of formula (10) is similar to that of formula (9); we therefore omit the details. □

    The following lemma ensures the consensus of the proposed algorithm.

Lemma 4 Under Assumptions 1, 2, 3 and 4, if for all k ∈ ℕ the agents update their states through algorithms (2) and (3), then with probability 1,

Taking the conditional expectation of (15) and applying Lemma 1, for all sufficiently large k we obtain, with probability 1,

    3.3 Convergence analysis

In this section, we prove that the multi-agent network G(k) converges to the Nash equilibrium with probability 1 under algorithms (2) and (3). First, a useful lemma needs to be introduced.

Proof The proof is similar to that of [13]; we only need to replace α_k, β_k in Lemma 5 with the randomized stepsizes. Besides, because our algorithm has a random update strategy, we must take the conditional expectation on both sides of the formulas in Lemma 5, but the proof idea is exactly the same as in [13], so we omit the details. □

    The following theorem is our main conclusion that shows the convergence result of the proposed algorithm.

Theorem 1 Let Assumptions 1, 2, 3 and 4 hold and let U be strictly convex–concave. Consider the Nash equilibrium computation problem of the two-network zero-sum game under a time-varying weight-balanced communication topology G(k), k ∈ ℕ, where the agents update their states by algorithms (2) and (3); then with probability 1,

    4 Numerical examples

In this section, we provide two numerical examples, with smooth and nonsmooth cost functions respectively, to verify the validity of the distributed projection subgradient algorithm with random sleep scheme.

Example 1 We consider a multi-agent system composed of two subnetworks with 3 agents and 4 agents, respectively, that is, n₁ = 3, n₂ = 4 and m₁ = 1, m₂ = 1. We take the common constraint set X = Y = [1, 5]. The smooth cost functions of the agents in subnetwork G1(k) are as follows:

and the cost functions of the agents in subnetwork G2(k) are as follows:

Besides, for the two subnetworks of the graph, we let the indicated quantities be the communication weights between agents and their neighbors in the opposite subnetwork, respectively.

We let ρ = 0.6. Figures 3 and 4 show the convergence results of our algorithm with smooth cost functions after 4000 iterations, in which each agent performs the projection operation and computes the subgradient only about 2400 times. It can be seen that our algorithm not only drives all agents to the Nash equilibrium but also greatly reduces the amount of computation and the running time of the algorithm.

    Fig.2 The time-varying communication topology of multi-agent network

    Fig.3 The convergence result of subnetwork G1(k) with smooth cost function

Next we consider the effect of ρ_{ℓi} on the convergence rate and accuracy of our algorithm. We define the relative error as follows:

The relative errors under four different ρ values are shown in Fig. 5. It can be seen that the value of ρ has a large influence on the convergence performance of our algorithm: when ρ is close to 1, the algorithm shows better convergence performance. In particular, when ρ = 1, the algorithm reduces to the classical projection subgradient algorithm [13]; however, the reduction in computation then vanishes. We also find that the convergence performance after adopting the random sleep strategy is not as good as that of the classical algorithm. However, when ρ = 0.6, our algorithm achieves satisfactory accuracy while removing a large amount of computation. On the other hand, due to the random sleep strategy and the switching communication topology, the relative error oscillates, but this does not prevent our algorithm from converging to the Nash equilibrium with good precision.

    Fig.4 The convergence result of subnetwork G2(k) with smooth cost function

    Fig.5 The relative errors with smooth cost function

Finally, for a given accuracy, we compare the average number of subgradient computations under three different ρ values. The averages over 100 experiments are shown in Table 1. Under the random sleep scheme, the computational load of an agent mainly comes from subgradient computation, so the average number of subgradient computations over the whole iterative process effectively reflects the computational load of the algorithm. According to Table 1, under the same accuracy requirement, the smaller the ρ value, the fewer the subgradient computations, meaning a smaller amount of computation. However, combining this with the relative error curves in Fig. 5, we see that when ρ is small the convergence performance of the algorithm deteriorates. Therefore, a suitable ρ must be chosen to balance convergence performance against computational complexity.

    Table 1 Average calculation times of subgradient under the various ρ values

According to the proposed algorithm (2) and (3), when ρ is large the agent updates its state along the subgradient direction with greater probability, which improves the convergence performance; when ρ is small, the agent performs more consensus operations and the convergence performance is weakened to a certain extent. Therefore, in practical applications, if higher convergence performance is preferred, a larger ρ (close to 1) is recommended; if a smaller amount of computation is preferred, a smaller ρ is recommended, but ρ should not be too small, because it can be seen from Fig. 5 that too small a ρ leads to poor convergence performance.

Example 2 We consider a multi-agent system similar to that of the first example. We take the common constraint set X = Y = [−2, 3]. The nonsmooth cost functions of the agents in subnetwork G1(k) are as follows:

and the nonsmooth cost functions of the agents in subnetwork G2(k) are as follows:

It can be verified that U(x, y) is strictly convex–concave, and the saddle point of U(x, y) is (1.0671, 0.7698). Besides, since f₂ is nonsmooth at x = 1 and g₁ is nonsmooth at y = 0, we choose the subgradients q₁₂(k) = 1 ∈ ∂_x f₂(1, y) = [−1, 1] and q₂₁(k) = −1 ∈ ∂_y g₁(x, 0) = [−1, 1]. We randomly select the initial values x_i(0) = [1, −2, 2] and y_i(0) = [1, −1, 0.5, −2]. As in the first example, we let ρ₁ᵢ = ρ₂ᵢ = ρ, and the communication topology and adjacency matrices are as given there. We let ρ = 0.6; Figs. 6 and 7 show the convergence results of our algorithm with nonsmooth cost functions after 1500 iterations, in which each agent performs the projection operation and computes the subgradient only about 900 times.
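The subgradient choice at a kink (picking one element of the subdifferential) can be validated numerically. The sketch below uses the model function |x − 1| as a stand-in, since the paper's f₂ is not fully recoverable from this copy; it checks that every element of [−1, 1] is a valid subgradient at x = 1 while values outside that interval are not:

```python
def valid_subgradient(q, x1, f, pts, tol=1e-12):
    """True if q satisfies the subgradient inequality of f at x1 over pts."""
    return all(f(x) - f(x1) >= (x - x1) * q - tol for x in pts)

def f(x):
    return abs(x - 1.0)          # nonsmooth at x = 1, subdifferential [-1, 1] there

pts = [-2.0, 0.5, 1.0, 3.0]
# Any q in [-1, 1] is a valid choice at the kink, e.g. the choice q = 1 above:
assert valid_subgradient(1.0, 1.0, f, pts)
assert valid_subgradient(-1.0, 1.0, f, pts)
assert not valid_subgradient(2.0, 1.0, f, pts)   # 2 lies outside [-1, 1]
```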

The influence of the ρ value on the convergence performance of our algorithm with nonsmooth cost functions is similar to that in the first example and is not shown in detail here.

In summary, for both smooth and nonsmooth cost functions, we verified that all agents converge to the Nash equilibrium under our algorithm. The value of ρ affects both the convergence performance and the computational complexity. Therefore, by properly setting ρ, the algorithm can effectively reduce the computational load of the agents, shorten the running time, and still guarantee good convergence accuracy.

    Fig.6 The convergence result of subnetwork G1(k) with nonsmooth cost function

    Fig.7 The convergence result of subnetwork G2(k) with nonsmooth cost function

    5 Conclusions

The Nash equilibrium computation problem of the two-network zero-sum game under time-varying communication graphs is considered in this paper. We propose a distributed projection subgradient algorithm with a random sleep scheme that reduces the computational load of the agents while guaranteeing convergence. Besides, uncoordinated randomized diminishing stepsizes are used in our algorithm. Furthermore, we analyze the convergence of our algorithm and verify its validity with numerical examples.

Our future work will extend the zero-sum game to N-coalition non-cooperative games and design a distributed algorithm to solve the Nash equilibrium computation problem of N-coalition non-cooperative games with event-triggered communication.
