
    Distributed projection subgradient algorithm for two-network zero-sum game with random sleep scheme

Control Theory and Technology, 2021, No. 3

    Hongyun Xiong·Jiangxiong Han·Xiaohong Nian·Shiling Li

Abstract In this paper, a zero-sum game Nash equilibrium computation problem with a common constraint set is investigated under two time-varying multi-agent subnetworks, where the two subnetworks have opposite payoff functions. A novel distributed projection subgradient algorithm with a random sleep scheme is developed to reduce the amount of computation each agent performs while computing the Nash equilibrium. In our algorithm, each agent uses an independent identically distributed Bernoulli decision to determine whether to compute the subgradient and perform the projection operation, or to keep the previous consensus estimate; this effectively reduces the amount of computation and the running time. Moreover, the traditional stepsize assumption adopted in existing methods is removed, and the stepsizes in our algorithm are randomized and diminishing. We prove that all agents converge to the Nash equilibrium with probability 1 under our algorithm. Finally, a simulation example verifies the validity of our algorithm.

    Keywords Zero-sum game·Nash equilibrium·Time-varying multi-agent network·Projection subgradient algorithm·Random sleep scheme

    1 Introduction

The Nash equilibrium computation problem in multi-agent network games has received increasing attention in social networks, cloud computing, smart grids [1–3], and so on. In recent years, researchers in related fields have proposed a number of useful methods to compute the Nash equilibrium of different game problems. For example, a distributed operator splitting method was developed in [4] to compute generalized Nash equilibria. Ye and Hu [5] solved the Nash equilibrium of a non-cooperative game by combining consensus with a gradient-play method. Furthermore, Deng et al. [6–8] studied aggregative game problems, in which each agent's payoff depends on its own decision and the aggregate of all agents' decisions. Moreover, Zeng et al. [9] considered Nash equilibrium computation for multi-network games, in which agents cooperate within each subnetwork while the subnetworks compete with each other.

It is noted that the game problems studied in [4–8] involve only one multi-agent network, whereas the multi-network game is closer to practical application scenarios. In particular, the two-network zero-sum game is the most common multi-network game and has important research value in the power allocation of Gaussian communication channels [10] and adversarial detection in sensor networks [11]. Many researchers have proposed useful algorithms to compute the Nash equilibrium of two-network zero-sum games. A continuous-time dual method for computing the Nash equilibrium of the two-network zero-sum game was provided in [12]. Besides, a distributed projection subgradient algorithm was proposed in [13], which studied Nash equilibrium computation for two-network zero-sum games with different stepsizes and different communication topologies. In addition, Shi and Yang [14] proposed a novel incremental algorithm to reduce the communication burden between the two subnetworks.

In the existing distributed methods mentioned above, each agent needs to calculate local subgradient information at every iteration. The calculation of a subgradient may involve large amounts of data and multiple function evaluations, and the cost function is usually complex. Therefore, the computational load of each agent in the above methods is large, and it is necessary to develop a new algorithm that reduces the computation of the agents. To reduce the computation of agents in distributed optimization problems, the random sleep scheme was introduced in [15], and a distributed algorithm with a random sleep scheme was proposed in [16] to study constrained convex optimization problems over multi-agent networks. Besides, the distributed optimization problem over multi-cluster networks was studied in [17] by combining a hierarchical algorithm with the random sleep scheme. Inspired by this existing work, this paper considers the Nash equilibrium computation problem of two-network zero-sum games under two time-varying multi-agent subnetworks, and we propose a new algorithm with a random sleep scheme to reduce the computation of the agents. Under the random sleep scheme, each agent uses an independent identically distributed Bernoulli decision to determine whether to calculate the local subgradient at the current time and perform the projection operation, or only to perform the consensus update by combining its neighbors' information. Therefore, the computational cost of both the subgradient and the projection is reduced in our algorithm.

This paper mainly studies the Nash equilibrium computation problem of two-network zero-sum games with a common constraint set, where the communication graphs are time-varying. In order to effectively reduce the amount of calculation and the running time, inspired by [15–17], a new projection subgradient algorithm with a random sleep scheme is proposed in this paper. Besides, the consensus and convergence of our algorithm are analyzed. The main contributions of this paper are as follows:

(1) This paper proposes a new distributed projection subgradient algorithm with a random sleep scheme to compute the Nash equilibrium of a two-network zero-sum game under time-varying weight-balanced communication topologies. Compared with the existing algorithms [12–14], the proposed algorithm not only ensures convergence but also reduces the amount of computation and the running time.

(2) Compared with the coordinated stepsizes in [13,14], randomized diminishing stepsizes are used in our algorithm, and our stepsizes are nonhomogeneous. Therefore, our algorithm is more flexible in the choice of stepsize than [13,14].

(3) We use a distributed optimization method to compute the Nash equilibrium of the zero-sum game. In our algorithm, the agents of the two subnetworks minimize and maximize the same convex–concave payoff function, respectively, whereas all agents in [16] cooperate to minimize a single convex objective function. Therefore, the design and proof of our algorithm differ from [16].

(4) Different from [12], our algorithm is discrete-time, and due to the introduction of the random sleep scheme, the convergence analysis is more involved than for the continuous-time algorithm [12]. Moreover, our algorithm considers a common constraint set and allows the cost functions to be nonsmooth, so the setting is more general than [18].

The organization of this paper is as follows. In Sect. 2, preliminaries are introduced and the considered two-network zero-sum game Nash equilibrium computation problem is formulated. In Sect. 3, the novel distributed projection subgradient algorithm with random sleep scheme is designed, and the consensus and convergence of the proposed algorithm are established under uncoordinated stepsizes. In Sect. 4, numerical examples are given. Finally, the conclusion is given in Sect. 5.

    1.1 Notations

ℝ is the set of real numbers, and ℕ is the set of natural numbers. ℝ^n denotes the n-dimensional Euclidean space, and ‖·‖ denotes the Euclidean norm. r^T denotes the transpose of a vector r. The mathematical expectation of a random variable Y is denoted E[Y]. Let X be a closed convex set; P_X[α] denotes the projection of a vector α ∈ ℝ^n onto X, that is, ‖P_X[α] − α‖ ≤ ‖γ − α‖ for any γ ∈ X. The projection operator P_X[·] : ℝ^n → X has the following important (non-expansiveness) property: ‖P_X[α] − P_X[β]‖ ≤ ‖α − β‖ for all α, β ∈ ℝ^n.

For a convex function f(x), the subgradient ψ(x1) at x1 ∈ ℝ^n satisfies f(x) − f(x1) ≥ ⟨x − x1, ψ(x1)⟩ for all x ∈ ℝ^n. For a concave function f(x), the subgradient ψ(x1) at x1 ∈ ℝ^n satisfies f(x) − f(x1) ≤ ⟨x − x1, ψ(x1)⟩ for all x ∈ ℝ^n. The subdifferential ∂f(x1) of f(x) at x1 is the set of all subgradients of f(x) at x1. A function f(x, y) : ℝ^m1 × ℝ^m2 → ℝ is (strictly) convex–concave if it is (strictly) convex in x for any fixed y and (strictly) concave in y for any fixed x. The subdifferential of f(x, y) with respect to x is denoted ∂_x f(x, y); similarly, the subdifferential with respect to y is denoted ∂_y f(x, y).
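As an illustration of these definitions, the following short Python sketch shows a projection onto a closed interval, a subgradient of a nonsmooth convex function, and numerical checks of the subgradient inequality and the non-expansiveness of the projection. The interval bounds and the function |x| are illustrative choices, not taken from the paper.

```python
import numpy as np

def project_interval(alpha, lo=1.0, hi=5.0):
    """P_X[alpha]: Euclidean projection of a scalar onto the interval X = [lo, hi]."""
    return float(np.clip(alpha, lo, hi))

def subgradient_abs(x):
    """One subgradient of the convex, nonsmooth function f(x) = |x|.
    At x = 0 any value in the subdifferential [-1, 1] is a valid subgradient."""
    if x > 0:
        return 1.0
    if x < 0:
        return -1.0
    return 0.0

# Subgradient inequality f(x) - f(x1) >= (x - x1) * psi(x1), checked at x1 = 0:
x1, x = 0.0, -3.0
assert abs(x) - abs(x1) >= (x - x1) * subgradient_abs(x1)

# Non-expansiveness of the projection: |P_X[a] - P_X[b]| <= |a - b|
a, b = 7.3, -2.0
assert abs(project_interval(a) - project_interval(b)) <= abs(a - b)
```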

    2 Preliminaries and formulation

In this section, we give some preliminary knowledge and formulate our problem.

    2.1 Graph theory

    2.2 Saddle point and zero-sum games

For a function f(x, y), if there exists a pair (x*, y*) ∈ X × Y such that f(x*, y) ≤ f(x*, y*) ≤ f(x, y*) for all x ∈ X, y ∈ Y, then (x*, y*) is called a saddle point of f. In this paper, the sets X and Y are nonempty convex compact sets and the function f is convex–concave, which ensures that the set of saddle points of f is nonempty.

A game problem is denoted as (M, S, U), where |M| = N ≥ 2 is the number of players and S = s1 × … × sN is the set of action profiles, with si the action set of player i. Besides, U = (u1, …, uN), where ui : S → ℝ is the payoff function of player i. In a game, each player aims to minimize its cost function until no player can obtain a better payoff by unilaterally changing its action. An action profile x* = (x1*, …, xN*) is a Nash equilibrium if ui(xi*, x−i*) ≤ ui(xi, x−i*) for every player i ∈ M and every xi ∈ si, where x−i denotes the actions of all players except i. An N-person zero-sum game satisfies u1 + … + uN = 0. In particular, a two-person zero-sum game has only two players, whose payoff functions satisfy u1 + u2 = 0. An important conclusion is that the Nash equilibrium of a two-person zero-sum game is exactly a saddle point of the payoff function.
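As a minimal illustration of the relation between saddle points and Nash equilibria in two-person zero-sum games, the following sketch checks the saddle-point inequalities numerically for a simple convex–concave payoff. The function and the candidate point are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy two-person zero-sum game: player 1 minimizes u1(x, y), player 2 maximizes it
# (equivalently minimizes u2 = -u1), so u1 + u2 = 0.
u1 = lambda x, y: (x - 1.0) ** 2 - (y - 2.0) ** 2
x_star, y_star = 1.0, 2.0           # candidate saddle point / Nash equilibrium
grid = np.linspace(-5.0, 5.0, 201)  # sample points of the action sets

# Saddle-point inequalities: u1(x*, y) <= u1(x*, y*) <= u1(x, y*) for all x, y.
assert np.all(u1(x_star, grid) <= u1(x_star, y_star) + 1e-12)
assert np.all(u1(x_star, y_star) <= u1(grid, y_star) + 1e-12)
```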

    2.3 Problem formulation

In this paper, we provide a novel distributed projection subgradient algorithm with a random sleep scheme to compute the Nash equilibrium of a two-network zero-sum game under time-varying weight-balanced topologies; each agent updates its state only by communicating with its neighbors until it finally converges to the Nash equilibrium. An illustrative example of the communication topology of the two-network zero-sum game is given in Fig. 1. The same problem was studied in [13], where each agent needs to calculate subgradient information and perform a projection operation at every iteration. In contrast, our algorithm with the random sleep scheme effectively reduces the computation and energy burden while still ensuring that the agents converge to the Nash equilibrium.

    Fig.1 The communication topology of two-network zero-sum game

    3 Main result

In this section, a novel distributed algorithm is proposed in Sect. 3.1. The consensus result of the proposed algorithm is given in Sect. 3.2. Finally, the convergence of the proposed algorithm is analyzed in Sect. 3.3.

    3.1 Distributed algorithm design

In this section, a distributed projection subgradient algorithm with a random sleep scheme is first provided to compute the Nash equilibrium of the two-network zero-sum game under time-varying weight-balanced communication topologies. First, a useful assumption about the communication topologies is given.

Assumption 1 The time-varying communication graphs G1(k) and G2(k) are uniformly jointly strongly connected, and Gcon(k) is uniformly jointly bipartite.

Remark 1 In [12], the communication topologies are required to be connected at all times. In contrast, this paper only requires the agents in the time-varying networks G(k) to exchange information with their neighbor agents at least once during each period of length T, which can save energy and reduce the communication burden of multi-agent network systems in practical applications.

We denote the state of agent i ∈ V1 at time k as xi(k) ∈ ℝ^m1 and the state of agent i ∈ V2 at time k as yi(k) ∈ ℝ^m2. Let ξℓi(k), ℓ = 1, 2, k ∈ ℕ, be independent identically distributed Bernoulli random variables satisfying P(ξℓi(k) = 1) = ρℓi and P(ξℓi(k) = 0) = 1 − ρℓi. The update mechanism of the proposed distributed projection subgradient algorithm with random sleep scheme is as follows. The initial values are xi(0) ∈ X and yi(0) ∈ Y. During the iteration, each agent i ∈ V1 either calculates the subgradient and performs the projection operation when ξ1i(k) = 1, or keeps the previous consensus estimate when ξ1i(k) = 0. Each agent i ∈ V2 follows a similar update mechanism governed by ξ2i(k). The distributed projection subgradient algorithm with random sleep scheme is given as follows:

where aij(k) and bij(k) are the communication weights of link (j, i) in subnetworks G1(k) and G2(k), respectively; q1i(k) is a subgradient of fi for agent i ∈ V1, and q2i(k) is a subgradient of gi for agent i ∈ V2. At time k, the stepsizes of the two subnetworks are denoted αi(k), i ∈ V1, and βi(k), i ∈ V2, respectively, and the neighbor set of agent i in subnetwork Gℓ(k) at time k determines which neighbors' states enter the consensus step.

In the proposed algorithm (2) and (3), each agent maintains an estimate of the state of the opposite subnetwork. However, because the communication topology Gcon(k) is time-varying, an agent may not receive information from the opposite subnetwork at every time step. Therefore, the algorithm must consider the following two cases during the iteration, namely whether or not the agent currently has neighbors in the opposite subnetwork; accordingly, the state estimates are defined as follows:

(i) If agent i ∈ V1 has at least one neighbor in subnetwork G2(k) at time k, then

If agent i ∈ V2 has at least one neighbor in subnetwork G1(k) at time k, then

(ii) If agent i ∈ V1 has no neighbors in subnetwork G2(k) at time k, then by Assumption 1 there must exist h1i ∈ ℕ such that

where k − h1i is the last time before k at which agent i had at least one neighbor in subnetwork G2(k). If agent i ∈ V2 has no neighbors in subnetwork G1(k) at time k, then by Assumption 1 there must exist h2i ∈ ℕ such that

where k − h2i is the last time before k at which agent i had at least one neighbor in subnetwork G1(k).

Remark 2 According to the definitions above, it is not difficult to see that our algorithm requires each agent to have at least one neighbor in the opposite subnetwork during each period of length T, so that it can estimate the state of the opposite subnetwork. The weighted average of the states of the neighbors in the opposite subnetwork is used as this estimate. The state estimates in this paper are similar to those in [13].
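A minimal sketch of this estimation rule for a single agent is given below. The helper names and the normalization of the cross-network weights are illustrative assumptions; the exact expressions are those given in cases (i) and (ii) above. Scalar states are used for simplicity (m1 = m2 = 1, as in the numerical examples).

```python
import numpy as np

def estimate_opposite_state(cross_neighbors, cross_weights, opposite_states, prev_estimate):
    """Estimate of the opposite subnetwork's state for one agent.

    Case (i):  the agent has neighbors in the opposite subnetwork at time k,
               so it uses a weighted average of their states.
    Case (ii): it has no such neighbors, so it keeps the estimate formed at the
               last time it did (held until new cross-network links appear).
    """
    if cross_neighbors:
        w = np.array([cross_weights[j] for j in cross_neighbors], dtype=float)
        s = np.array([opposite_states[j] for j in cross_neighbors], dtype=float)
        return float(w @ s / w.sum())  # convex combination of cross-network neighbors' states
    return prev_estimate
```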

In the proposed algorithm, (2) and (3) execute the state iterations of the two subnetworks, respectively. At every time k, each agent decides which update to carry out according to the random variable ξℓi(k): when ξℓi(k) = 0, the agent only uses its neighbors' information to perform the consensus update, which avoids the calculation of the subgradient and the execution of the projection operation, thus effectively reducing the amount of computation. In particular, when ρℓi = 1, our algorithm reduces to the standard distributed projection subgradient algorithm in [13].
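The update mechanism described above can be summarized by the following Python sketch for the minimizing subnetwork (the maximizing subnetwork ascends instead of descending). This is a hedged illustration only: the 1/(activation count) stepsize form, the box constraint, and the helper names are assumptions made for exposition; the precise updates are those in (2) and (3).

```python
import numpy as np

def random_sleep_step(x, A, subgrads, opp_estimates, active_counts, rho, lo, hi, rng):
    """One iteration of the minimizing subnetwork under the random sleep scheme (sketch).

    x             : (n1,) current states of the minimizing agents
    A             : (n1, n1) doubly stochastic weight matrix A(k)
    subgrads      : list of callables; subgrads[i](x_i, y_hat_i) returns a subgradient of f_i
    opp_estimates : (n1,) each agent's estimate of the opposite subnetwork's state
    active_counts : (n1,) number of active (xi = 1) iterations so far for each agent
    rho           : activation probability of the i.i.d. Bernoulli variables xi_1i(k)
    """
    consensus = A @ x                    # weighted average over in-neighbors
    x_next = consensus.copy()
    for i in range(x.size):
        if rng.random() < rho:           # xi_1i(k) = 1: agent i is active at time k
            active_counts[i] += 1
            alpha = 1.0 / active_counts[i]           # assumed randomized diminishing stepsize
            g = subgrads[i](x[i], opp_estimates[i])  # local subgradient of f_i
            x_next[i] = np.clip(consensus[i] - alpha * g, lo, hi)  # projected subgradient step
        # xi_1i(k) = 0: keep the consensus estimate; no subgradient or projection is computed
    return x_next, active_counts
```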

Let Fk be the σ-algebra generated by the entire history of our algorithm up to time k, that is, for k ≥ 1, Fk = {(xi(0), ξ1i(l), i ∈ V1); (yi(0), ξ2i(l), i ∈ V2), 1 ≤ l ≤ k − 1}, and F0 = {(xi(0), i ∈ V1); (yi(0), i ∈ V2)}. Besides, Jℓk, ℓ = 1, 2, denotes the set of agents that perform the projected subgradient iteration in the corresponding subnetwork at time k, that is, Jℓk = {i : ξℓi(k) = 1}. Some important assumptions for the consensus and convergence analysis are as follows:

Assumption 2 There exists a constant L > 0 such that the subgradients are bounded, that is, for all k ∈ ℕ, ‖q1i(k)‖ ≤ L for all i ∈ V1 and ‖q2i(k)‖ ≤ L for all i ∈ V2.

Remark 3 According to Assumption 2, we can obtain the following useful conclusions: ‖fi(x1, ·) − fi(x2, ·)‖ ≤ L‖x1 − x2‖ and ‖fi(·, x1) − fi(·, x2)‖ ≤ L‖x1 − x2‖, and there are similar conclusions for the cost functions gi: ‖gi(y1, ·) − gi(y2, ·)‖ ≤ L‖y1 − y2‖ and ‖gi(·, y1) − gi(·, y2)‖ ≤ L‖y1 − y2‖. These conclusions are useful in the convergence analysis of the algorithm.
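For completeness, these Lipschitz bounds follow from the subgradient inequality together with Assumption 2; a short derivation for the convex argument (the concave argument is symmetric) is sketched below.

```latex
% Take q_1 \in \partial_x f_i(x_1,\cdot) and q_2 \in \partial_x f_i(x_2,\cdot),
% with \|q_1\| \le L and \|q_2\| \le L by Assumption 2.
\begin{aligned}
f_i(x_2,\cdot) - f_i(x_1,\cdot) &\ge \langle x_2 - x_1,\, q_1 \rangle \ge -L\,\|x_1 - x_2\|,\\
f_i(x_1,\cdot) - f_i(x_2,\cdot) &\ge \langle x_1 - x_2,\, q_2 \rangle \ge -L\,\|x_1 - x_2\|,
\end{aligned}
% hence |f_i(x_1,\cdot) - f_i(x_2,\cdot)| \le L\,\|x_1 - x_2\|.
```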

Assumption 3 For all k ∈ ℕ, the adjacency matrices A(k) and B(k) are doubly stochastic. Besides, the weight matrices associated with the links between the two subnetworks are stochastic.

Assumption 4 The stepsizes of the agents in the two subnetworks are defined as follows:

Remark 4 Nonhomogeneous randomized diminishing stepsizes αi(k), βi(k) are used in our algorithm; each stepsize is determined by the number of times that agent i ∈ Vℓ, ℓ = 1, 2, has been active before time k. Compared with [13,14], the stepsizes we consider are more flexible and general.

For the stepsizes we set, there is an important lemma which will be used to analyze the convergence of our algorithm.

Lemma 1 [16] Let 0

Lemma 2 [16] Let (Ω, F, P) be a probability space and F0 ⊂ F1 ⊂ F2 ⊂ … be a sequence of σ-algebras of F. Let {λ(k)}, {μ(k)}, {ν(k)}, {σ(k)} be sequences of Fk-measurable non-negative random variables such that, with probability 1, for all k ∈ ℕ, E[λ(k + 1) | Fk] ≤ (1 + σ(k))λ(k) − ν(k) + μ(k), where Σk σ(k) < ∞ and Σk μ(k) < ∞ with probability 1. Then, with probability 1, λ(k) converges to a non-negative random variable and Σk ν(k) < ∞.

    3.2 Consensus analysis

In this section, we show that the multi-agent network G(k) achieves consensus with probability 1 under the proposed algorithm. First, we introduce an important lemma.

Lemma 3 [15] Let Assumptions 1, 2, 3 and 4 hold; then with probability 1 we have

where the averaged quantities appearing in (9) and (10) are the average states of the two subnetworks at time k, respectively.

Proof Because the two-network zero-sum game can, to some extent, be split into two distributed optimization problems in which each agent additionally estimates the state of the other subnetwork, the proofs of (9) and (10) in Lemma 3 are similar to that of Lemma 5 in [15]. The proof of (9) is essentially the same as that of Lemma 5 in [15], except that a gradient perturbation term is considered in [15]. On the other hand, the two subnetworks have similar state update laws with opposite update directions, so the proof of (10) is analogous to that of (9); we therefore omit the details. □

    The following lemma ensures the consensus of the proposed algorithm.

Lemma 4 Under Assumptions 1, 2, 3 and 4, for all k ∈ ℕ, the agents update their states through algorithms (2) and (3); then, with probability 1,

Taking the conditional expectation of (15) and using Lemma 1, for all sufficiently large k we can obtain, with probability 1,

    3.3 Convergence analysis

In this section, we prove that the multi-agent network G(k) converges to the Nash equilibrium with probability 1 under algorithms (2) and (3). First, a useful lemma needs to be introduced.

Proof The proof process is similar to that of the corresponding lemma of [13]; we only need to replace the stepsizes αk, βk in Lemma 5 with the randomized stepsizes used here. Besides, because our algorithm has a random update strategy, we need to take the conditional expectation on both sides of the formula in Lemma 5, but the proof idea is exactly the same as in [13], so we do not describe the proof process in detail. □

    The following theorem is our main conclusion that shows the convergence result of the proposed algorithm.

Theorem 1 Let Assumptions 1, 2, 3 and 4 hold and let U be strictly convex–concave. Consider the Nash equilibrium computation problem of the two-network zero-sum game under a time-varying weight-balanced communication topology G(k), k ∈ ℕ, and let the agents update their states by algorithms (2) and (3). Then, with probability 1,

    4 Numerical examples

In this section, we provide two numerical examples, with smooth and nonsmooth cost functions respectively, to verify the validity of the distributed projection subgradient algorithm with random sleep scheme.

Example 1 We consider a multi-agent system composed of two subnetworks with 3 agents and 4 agents, respectively, that is, n1 = 3, n2 = 4 and m1 = 1, m2 = 1. We take the common constraint set X = Y = [1, 5]. The smooth cost functions of the agents in subnetwork G1(k) are as follows:

    and the cost functions of agents in subnetwork G2(k) are as follows:

Besides, for the two subnetworks of the overall graph, the communication weights between the agents and their neighbors in the opposite subnetwork are assigned, respectively.

We let ρ = 0.6. Figures 3 and 4 show the convergence results of our algorithm with smooth cost functions after 4000 iterations, in which each agent performs the projection operation and computes the subgradient only about 2400 times. It can be seen that our algorithm not only makes all agents converge to the Nash equilibrium, but also greatly reduces the amount of computation and shortens the running time of the algorithm.

    Fig.2 The time-varying communication topology of multi-agent network

    Fig.3 The convergence result of subnetwork G1(k) with smooth cost function

Next, we consider the effect of ρℓi on the convergence rate and accuracy of our algorithm. We define the relative error as follows:

The relative errors under four different values of ρ are shown in Fig. 5. It can be seen that the value of ρ has a large influence on the convergence performance of our algorithm: when ρ is close to 1, the algorithm shows better convergence performance. In particular, when ρ = 1, the algorithm reduces to the classical projection subgradient algorithm [13], but in that case the reduction in the amount of computation disappears. We also find that, after adopting the random sleep strategy, the convergence performance of the algorithm is not as good as that of the classical algorithm. However, when ρ = 0.6, our algorithm achieves satisfactory accuracy while avoiding a large amount of computation. On the other hand, due to the random sleep strategy and the switching of the communication topology, the relative error shows oscillatory behavior, but this does not prevent our algorithm from converging to the Nash equilibrium with good precision.

    Fig.4 The convergence result of subnetwork G2(k) with smooth cost function

    Fig.5 The relative errors with smooth cost function

Finally, for a given accuracy, we compare the average number of subgradient calculations under three different values of ρ. The averages over 100 experiments are shown in Table 1. According to the principle of the random sleep scheme, the computational load of an agent mainly comes from the calculation of the subgradient, so the average number of subgradient calculations over the whole iterative process effectively reflects the computational load of the algorithm. According to Table 1, under the same accuracy requirement, the smaller the value of ρ, the fewer subgradient calculations are needed, which means that the amount of computation is smaller. However, combining this with the relative error curves in Fig. 5, we see that when ρ is small, the convergence performance of the algorithm becomes poor. Therefore, it is necessary to choose a suitable ρ to balance convergence performance and computational cost.

    Table 1 Average calculation times of subgradient under the various ρ values

According to the proposed algorithm (2) and (3), when the value of ρ is large, the agent updates its state along the subgradient direction with higher probability, which improves the convergence performance. When the value of ρ is small, the agent performs more consensus-only updates, and the convergence performance is weakened to a certain extent. Therefore, in practical applications, if higher convergence performance is preferred, a larger ρ (close to 1) is recommended; if a smaller amount of computation is preferred, a smaller value of ρ can be used, but ρ should not be less than about 0.5, because it can be seen from Fig. 5 that too small a ρ will lead to poor convergence performance.

Example 2 We consider a multi-agent system similar to that of the first example. We take the common constraint set X = Y = [−2, 3]. The nonsmooth cost functions of the agents in subnetwork G1(k) are as follows:

and the nonsmooth cost functions of the agents in subnetwork G2(k) are as follows:

We can also verify that U(x, y) is strictly convex–concave. The saddle point of U(x, y) is (1.0671, 0.7698). Besides, since f2 is nonsmooth at x = 1 and g1 is nonsmooth at y = 0, we select the subgradients q12(k) = 1 ∈ ∂_x f2(1, y) = [−1, 1] and q21(k) = −1 ∈ ∂_y g1(x, 0) = [−1, 1]. We randomly select the initial values xi(0) = [1, −2, 2] and yi(0) = [1, −1, 0.5, −2]. As in the first example, we let ρ1i = ρ2i = ρ. The communication topology and adjacency matrices are as given in the first example. We let ρ = 0.6; Figs. 6 and 7 show the convergence results of our algorithm with nonsmooth cost functions after 1500 iterations, in which each agent performs the projection operation and computes the subgradient only about 900 times.

For the influence of the value of ρ on the convergence performance of our algorithm with nonsmooth cost functions, the conclusion is similar to that of the first example and is not shown in detail here.

In summary, for both smooth and nonsmooth cost functions, we verify that all agents converge to the Nash equilibrium under our algorithm. The value of ρ affects the convergence performance and the computational cost of the algorithm. Therefore, by properly choosing ρ in the algorithm, one can effectively reduce the computational load of each agent, shorten the running time of the algorithm, and maintain good convergence accuracy.

    Fig.6 The convergence result of subnetwork G1(k) with nonsmooth cost function

    Fig.7 The convergence result of subnetwork G2(k) with nonsmooth cost function

    5 Conclusions

The Nash equilibrium computation problem of a two-network zero-sum game under time-varying communication graphs is considered in this paper. We propose a distributed projection subgradient algorithm with a random sleep scheme that reduces the computational load of each agent while guaranteeing convergence. Besides, uncoordinated randomized diminishing stepsizes are used in our algorithm. Furthermore, we analyze the convergence of our algorithm and verify its validity through numerical examples.

Our future work will extend the zero-sum game to the N-coalition non-cooperative game and design a distributed algorithm to solve the Nash equilibrium computation problem of N-coalition non-cooperative games with event-triggered communication.
