
    Heuristic dynamic programming-based learning control for discrete-time disturbed multi-agent systems

2021-10-13 07:16:34 · Yao Zhang · Chaoxu Mu · Yong Zhang · Yanghe Feng
Control Theory and Technology, 2021, No. 3

    Yao Zhang·Chaoxu Mu·Yong Zhang·Yanghe Feng

Abstract Owing to extensive applications in many fields, the synchronization problem has been widely investigated in multi-agent systems. Synchronization is a pivotal issue for multi-agent systems: under the designed control policy, the output of the systems or the state of each agent should become consistent with the leader. The purpose of this paper is to investigate a heuristic dynamic programming (HDP)-based learning tracking control for discrete-time multi-agent systems to achieve synchronization while considering disturbances in the systems. Because the coupled Hamilton–Jacobi–Bellman equation is difficult to solve analytically, an improved HDP learning control algorithm is proposed to realize synchronization between the leader and all following agents; it is executed by an action-critic neural network. The action and critic neural networks are utilized to learn the optimal control policy and the cost function, respectively, with the aid of an auxiliary action network. Finally, two numerical examples and a practical application of mobile robots are presented to demonstrate the control performance of the HDP-based learning control algorithm.

Keywords Multi-agent systems · Heuristic dynamic programming (HDP) · Learning control · Neural network · Synchronization

    1 Introduction

Owing to the rapid development of artificial intelligence technology, multi-agent systems have gradually become an attractive topic of intense discussion among researchers in recent years [1–3]. Multi-agent system control has been widely studied in both theoretical and practical research, such as formation control [4], consensus control [5] and flocking [6]. Among these topics, many scholars focus especially on consensus control because of its wide application in engineering, for instance, formation control of unmanned aerial vehicles, cooperative control of undersea robots and attitude control of satellites. The consensus problem requires that the states of the leader be synchronized by all following agents through local coupling among agents. For multi-agent systems, since the behavior of each agent is jointly determined by its neighbors and itself, a coupled Hamilton–Jacobi–Bellman (HJB) equation is established. Therefore, the key to solving the consensus control problem is to find the solution of the coupled HJB equation. However, because of the partial differential terms involved, it is difficult to obtain the solution of the HJB equation directly. Hence, many effective algorithms have been developed to solve this problem.

Recently, reinforcement learning (RL) has made remarkable advances in the field of artificial intelligence [7–9]. The learning process is roughly divided into two steps. First, the system reward is constructed through interaction with the environment. Second, the optimal control policy is obtained using the feedback mechanism [10,11]. Adaptive dynamic programming (ADP) is an important branch of RL, and its prominent role is to effectively approximate the optimal solution of the HJB equation [12–14]. Theoretical research on neural networks has further promoted the development of the ADP method [15–17]. The ADP method usually consists of two processes: offline iteration [18] and online implementation [19,20]. It mainly includes three basic types: heuristic dynamic programming (HDP), dual heuristic programming (DHP), and globalized dual heuristic programming (GDHP). Recently, the ADP method has been widely used in the consensus control of multi-agent systems. In [21], an ADP technique was used to find the optimal controllers for continuous-time linear systems with a single agent rather than multiple agents. In [22], an online multi-agent formulation of team games was developed to solve synchronization control by combining cooperative control, RL and game theory. In [23], an optimal coordinated control scheme for the multi-agent consensus problem based on a fuzzy ADP algorithm was proposed; it combined game theory, the generalized fuzzy hyperbolic model and the ADP method. However, the above studies did not consider the existence of disturbances in the systems. Specifically, if disturbances are present, the control performance of these methods may degrade.

In practical applications, due to the complexity and variability of the environment, the control of multi-agent systems is often affected by various disturbances, such as modeling uncertainties caused by system models that cannot be determined exactly, model parameter perturbations, and external disturbances caused by factors such as wind, noise and temperature. These disturbances are not conducive to the stability of the systems and ultimately make the control objectives difficult to achieve. Therefore, in modern control theory, how to deal with disturbances in systems has become an important problem [24–26]. In [27], through the cyclic small-gain theorem, asymptotic stability was achieved by devising a decentralized optimal control policy for the disturbed system. Lin [28] adopted a projection method to deal with disturbances. The idea in [28] was further applied to nonlinear systems with unmatched disturbances using the ADP method in [29]. Furthermore, in the research on multi-agent consensus control, studying systems with disturbances is even more critical. Cao et al. [30] proposed a distributed extended state observer whose ultimate purpose is to achieve consensus of multi-agent systems with identical linear dynamics and unknown external disturbances. In [31], a disturbance observer was designed to study sliding mode control of second-order multi-agent systems under mismatched uncertainties. In [32], the accurate optimization solution of multi-agent systems with uncertain external parameters was obtained by estimating unknown frequencies and rejecting bounded disturbances. In [33], a distributed optimization controller was proposed to eliminate bounded disturbances composed of a set of sinusoidal signals with known frequencies.

The methods above have certain limitations in their scope of application, being mainly applicable to continuous-time multi-agent systems. For the more widely applicable discrete-time multi-agent systems, however, few studies address the consensus problem with disturbances solved by the HDP method; meanwhile, disturbed multi-agent systems are of practical significance as control objects, and the HDP algorithm has many unique advantages. Therefore, a novel learning control scheme for discrete-time multi-agent systems with disturbances is proposed in this paper. The ultimate goal is to make all following agents synchronize with the leader under the communication graph. The contributions of this work are enumerated as follows: (1) A learning control method, essentially an approximate optimal control scheme, is formulated for the discrete-time disturbed multi-agent system, and the optimal control policy is learned from partial neighborhood communication. (2) The improved HDP algorithm is developed to obtain the optimal control by estimating both the iterative control policy and the cost function, implemented by neural networks. (3) A theoretical guarantee of learning control for the discrete-time disturbed multi-agent system is presented. (4) The proposed HDP algorithm is compared with the LQR method in terms of rapidity and accuracy, and the superiority of the HDP algorithm is demonstrated by the simulation results.

The paper is organized as follows. In Sect. 2, the preliminaries of the discrete-time multi-agent system with disturbances are established. In Sect. 3, the HDP algorithm design methodology, the action-critic neural network implementation and the stability analysis are presented. Section 4 substantiates the validity of the above method with two numerical examples and one practical application. The paper is summarized in Sect. 5.

    2 Problem formulation

    2.1 Algebraic graph theory and synchronization problem

With a communication graph F, the studied discrete-time multi-agent system containing N agents is generally described as follows:

Remark 1 The leader should generate a divergent signal or a sinusoidal reference trajectory, so that all eigenvalues of A lie outside or on the boundary of the unit disk. The reason is that the command trajectory eventually converges if A is stable; therefore, it is more meaningful to design control policies for an unstable A.
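As a quick illustration of this condition, the following sketch checks the eigenvalue moduli of a hypothetical leader matrix (not taken from the paper). The rotation-like matrix below has eigenvalues ±i on the unit circle, so it generates a persistent sinusoidal reference rather than a decaying one:

```python
import numpy as np

# Hypothetical leader matrix: a discrete-time oscillator with eigenvalues on the unit circle.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

moduli = np.abs(np.linalg.eigvals(A))
print(moduli)                  # [1. 1.] -> sinusoidal reference trajectory
assert np.all(moduli >= 1.0)   # the reference signal does not decay to zero
```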

Since the process of exploring the optimal control policy involves only the agent itself and its neighboring agents, the synchronization problem can be depicted accordingly for any agent i. Then the partial neighborhood error for each agent i is defined as

where η(k) is the global synchronization error vector and η(k) ∈ ℝnN. (L + B) is nonsingular under the conditions that the graph contains a spanning tree and agent i is connected with the leader directly.

Lemma 1 If (L + B) is nonsingular, the global synchronization error η(k) is given by

where λmin(L + B) is the minimum singular value of (L + B).

According to Lemma 1, if the global tracking error ε(k) converges to zero, the global synchronization error η(k) converges to zero as well. Thus the system will achieve synchronization if the global tracking error becomes small enough.
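To make these definitions concrete, here is a minimal numerical sketch assuming scalar agent states, reusing the edge weights and pinning gains of the four-agent example in Sect. 4.1. It stacks the partial neighborhood errors as ε(k) = (L + B)η(k), a standard construction in this literature (with vector states, (L + B) would be Kronecker-multiplied by an identity), and verifies the Lemma 1-style bound:

```python
import numpy as np

# Hypothetical scalar agent states; edge weights and pinning gains follow
# the four-agent example in Sect. 4.1 (a12 = 0.8, a23 = 0.6, a43 = 0.5).
A_adj = np.array([[0.0, 0.8, 0.0, 0.0],
                  [0.0, 0.0, 0.6, 0.0],
                  [0.0, 0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.5, 0.0]])
b = np.array([0.0, 1.0, 1.0, 0.0])          # pinning gains b2 = b3 = 1

L = np.diag(A_adj.sum(axis=1)) - A_adj      # graph Laplacian
B = np.diag(b)                              # leader-connection matrix

x = np.array([0.3, 0.7, 0.1, 0.9])          # follower states at step k
x0 = 0.5                                    # leader state at step k

eta = x - x0                                # global synchronization error
eps = (L + B) @ eta                         # stacked partial neighborhood errors

# Lemma 1-style bound: ||eta|| <= ||eps|| / lambda_min(L + B),
# with lambda_min the minimum singular value of (L + B).
sigma_min = np.linalg.svd(L + B, compute_uv=False).min()
assert np.linalg.norm(eta) <= np.linalg.norm(eps) / sigma_min + 1e-12
```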

The partial neighborhood error dynamics are derived as

Obviously, the disturbance is contained in the partial neighborhood error, which implies that there may exist deviations among agents in the process of information communication. In the following part, the consensus control of the disturbed multi-agent system is studied.

    2.2 Consensus control formulation of disturbed multi-agent system

For the disturbed multi-agent system (1), we decompose the disturbance Dici(k) into the sum of matched and unmatched components by projecting Dici(k) onto the range of the matrix Bi(k). Thus, it can be derived that
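Purely as an illustration, the following is a minimal sketch of such a matched/unmatched split using the standard orthogonal projection onto range(Bi); the function name and test matrices are hypothetical, and the paper's own derivation may use a different projection:

```python
import numpy as np

def decompose_disturbance(B_i: np.ndarray, d_i: np.ndarray):
    """Split d_i = D_i c_i(k) into matched and unmatched parts.

    The matched part lies in range(B_i) and can be cancelled through the
    control channel; the unmatched part is its orthogonal complement.
    """
    # Orthogonal projector onto range(B_i): P = B_i (B_i^T B_i)^{-1} B_i^T
    P = B_i @ np.linalg.solve(B_i.T @ B_i, B_i.T)
    d_matched = P @ d_i
    d_unmatched = d_i - d_matched
    return d_matched, d_unmatched

B_i = np.array([[0.0], [1.0]])          # hypothetical input matrix
d_i = np.array([0.3, -0.5])             # hypothetical disturbance sample
d_m, d_u = decompose_disturbance(B_i, d_i)
assert np.allclose(d_m + d_u, d_i)      # the two parts sum back to d_i
```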

The information about neighboring agents is required to design the control input of each agent i, so the neighboring agents' control policies of agent i are described as

    3 HDP-based learning control of disturbed multi-agent system

    3.1 Convergence of iteration algorithm

Theorem 1 Suppose that a spanning tree is contained in the graph, that (19) is satisfied, and that the optimal control policy satisfies (20). Then the partial neighborhood error εi(k) is asymptotically stable under Lemma 1 and the goal of synchronization can be achieved.

Proof First, define the difference of Ji(εi(k)) and its gradient as follows:

Lemma 2 According to the Hamiltonian equation (18), the local performance index satisfies the following discrete-time Hamilton–Jacobi equation:
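For orientation, in graphical games of this type the discrete-time Hamilton–Jacobi (Bellman) equation typically takes the following form. This is a hedged reconstruction from the surrounding definitions, using the stage-cost weights Qii, Rij and Yij that appear in Sect. 4; the paper's own equation may include additional coupling terms:

$$ J_i^*\big(\varepsilon_i(k)\big)=\min_{u_i(k)}\Big\{\varepsilon_i^{\top}(k)\,Q_{ii}\,\varepsilon_i(k)+\sum_{j} u_j^{\top}(k)\,R_{ij}\,u_j(k)+\sum_{j} r_j^{\top}(k)\,Y_{ij}\,r_j(k)+J_i^*\big(\varepsilon_i(k+1)\big)\Big\} $$

where the sums run over agent i and its neighbors, the uj are the control policies and the rj are the auxiliary control policies.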

Based on (18), (22) and (23), (25) can further be deduced as

    3.2 Multi-agent system learning control implementation with heuristic dynamic programming

An action-critic neural network structure is developed to implement the learning control with the HDP algorithm for the disturbed multi-agent system.

The critic network is designed to approximate the cost function for each agent i, and the action network is designed to approximate the control policy. Note that ri(k) is not the real control policy of the disturbed system, but an auxiliary control policy that helps to approximate the optimal control policy ui(k). The outputs of the critic, action and auxiliary action networks are, respectively, expressed as

Next, the approximation error of the critic network is expressed as Ec(k), and the objective function of the approximation is denoted as

Then the weight updating processes of the action and auxiliary action networks are given as follows:

where 0 < ηu < 1 and 0 < ηr < 1 are the learning rates of the action and auxiliary action networks, respectively.
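These updates amount to gradient descent through the networks. Below is a minimal sketch of one HDP training step for a single agent, assuming one-hidden-layer tanh networks, a quadratic stage cost with Q = I and R = 1, and scalar control; the action update shown is a simplified stand-in for the paper's rule (which differentiates through the critic), and all names and sizes are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_eps, n_hid = 2, 6                      # error dimension / hidden units (hypothetical)
eta_c, eta_u = 0.5, 0.5                  # learning rates, as in Sect. 4.1

# One-hidden-layer tanh networks for the critic J_hat and the action u_hat.
Wc1 = 0.1 * rng.standard_normal((n_hid, n_eps))
Wc2 = 0.1 * rng.standard_normal((1, n_hid))
Wa1 = 0.1 * rng.standard_normal((n_hid, n_eps))
Wa2 = 0.1 * rng.standard_normal((1, n_hid))

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)
    return float(W2 @ h), h

def hdp_step(eps_k, eps_next):
    """One HDP training step: critic TD update, then a simple action update."""
    global Wc1, Wc2, Wa1, Wa2
    u_k, g_k = forward(Wa1, Wa2, eps_k)          # current (scalar) control
    J_k, h_k = forward(Wc1, Wc2, eps_k)
    J_next, _ = forward(Wc1, Wc2, eps_next)
    U_k = float(eps_k @ eps_k) + u_k ** 2        # quadratic stage cost
    e_c = J_k - (U_k + J_next)                   # temporal-difference error, Ec-style
    # Critic: gradient descent on 0.5 * e_c^2 through both layers.
    dWc2 = e_c * h_k[None, :]
    dWc1 = e_c * (Wc2.T * (1.0 - h_k[:, None] ** 2)) @ eps_k[None, :]
    Wc2 -= eta_c * dWc2
    Wc1 -= eta_c * dWc1
    # Action: descend the stage cost's u-gradient, a simplified stand-in
    # for backpropagating the approximated cost-to-go into the action weights.
    du = 2.0 * u_k                               # d(u^2)/du
    Wa2 -= eta_u * du * g_k[None, :]
    Wa1 -= eta_u * du * (Wa2.T * (1.0 - g_k[:, None] ** 2)) @ eps_k[None, :]

hdp_step(np.array([0.4, -0.2]), np.array([0.3, -0.1]))
```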

The algorithm procedure under the HDP structure, implemented by neural networks, is carried out based on the following steps:

The framework of approximate optimal tracking control with the HDP structure is shown in Fig. 1. In essence, the learning control of disturbed multi-agent systems is an optimal control problem whose ultimate goal is to minimize the cost function. By transforming the uncertainty, an auxiliary system related to the disturbed multi-agent system is constructed to solve the control problem. By introducing the auxiliary action network, the critic and action neural networks are utilized to learn the cost function and the control policy, respectively. It is worth noting that, different from general neural network schemes, the auxiliary control policy ri(k) helps to obtain the actual optimal control policy ui(k); ri(k) itself is not the real control policy.

    Fig.1 Learning-based control with the HDP structure for disturbed multi-agent system

    3.3 Stability analysis

    4 Simulation studies

In this section, three typical simulation examples are investigated.

    4.1 Four-agent system

A four-agent system is studied and the directed graph communication structure is given in Fig. 2. The model of the four-agent system with the disturbance is chosen as

    Fig.2 Network structure with four agents

    and the leader is modeled by

    where the system matrices are chosen as follows:

    The disturbance is

The pinning gains are b1 = b4 = 0, b2 = b3 = 1. The edge weights are selected as a12 = 0.8, a23 = 0.6, a43 = 0.5. Choose the performance index weight matrices Q11 = Q22 = Q33 = Q44 = I2×2, R11 = R22 = R33 = R44 = 1, R13 = R21 = R24 = R32 = R34 = R41 = R43 = 0, R12 = R14 = R23 = R31 = R42 = 1, Y11 = Y22 = Y33 = Y44 = 1, Y13 = Y21 = Y32 = Y24 = Y34 = Y41 = 0, Y12 = Y14 = Y23 = Y31 = Y42 = Y43 = 1. The initial states of the leader and each agent are randomly set from [0, 1]. Choose the learning rates as ηc = ηu = ηr = 0.5. The maximal number of steps N is selected as 1500, which is large enough to keep all agents synchronized with the leader.

The dynamics of the agents and the leader are presented in Fig. 3. Figure 4 shows the phase plane plot of the system. From the figures, we can see that all agents track the leader accurately. Figure 5 reflects the consensus control policies.

Fig.3 Agent states versus iteration steps

    Fig.4 Phase plane plot

    Fig.5 Consensus control policies of four-agent system

    4.2 Algorithm comparison with linear quadratic regulator (LQR)

Next, a three-agent system is further studied and the directed graph communication structure is given in Fig. 6.

    Fig.6 Network structure with three agents

    The system matrices are chosen as follows:

    Choose the disturbance as follows:

where θi is the unknown parameter of the system. In the training process, the unknown parameter θi = [θ1, θ2]T is selected with θ1, θ2 ∈ [−10, 10].

    Fig.7 Tracking performance of agent 1

We choose the pinning gains b1 = 1, b2 = b3 = 0. The edge weights are selected as a12 = 0.8, a23 = 0.6, a31 = 0.8. The performance index weight matrices are chosen as Q11 = Q22 = Q33 = I2×2, R11 = R22 = R33 = 1, R13 = R21 = R32 = 0, R12 = R23 = R31 = 1, Y11 = Y22 = Y33 = 1, Y13 = Y21 = Y32 = 0, Y12 = Y23 = Y31 = 1. The initial states of the leader and each agent are randomly obtained from [0, 1]. Set the learning rates as ηc = 1, ηu = ηr = 0.2, respectively.

To compare the performance, the HDP algorithm and the LQR method are each adopted to obtain the control policy, and the simulation results are shown as follows: the tracking performance of each agent and the tracking error dynamics are shown in Figs. 7, 8, 9, 10, 11 and 12, and the control policies ui(k) are presented in Fig. 13.

    Fig.8 Tracking errors of agent 1

    Fig.9 Tracking performance of agent 2

    Fig.10 Tracking errors of agent 2

    Fig.11 Tracking performance of agent 3

    Fig.12 Tracking errors of agent 3

    Fig.13 Control policies of three-agent system

To further illustrate the control performance of the algorithm, the root mean square error, absolute mean error and iteration steps of the LQR method and the HDP algorithm are listed in Table 1, so as to compare the performance indicators of the two algorithms in terms of rapidity and accuracy in a clearer and more rigorous way.

    Table 1 Performance comparison between HDP algorithm and LQR method

The comparison results show that both the LQR method and the HDP algorithm can achieve synchronization between all agents and the leader. However, on the one hand, the root mean square error and absolute mean error of the HDP control method are smaller than those of the LQR method, which shows that the HDP algorithm is more accurate in synchronization. On the other hand, with the convergence accuracy defined as 10^−4, the steps are recorded when the error converges to this range. All agents can track the leader after about 700 iteration steps under the LQR method, while synchronization is achieved after about 300 iteration steps under the HDP algorithm. This indicates that the HDP method has an advantage in convergence speed over the LQR method when disturbances are considered.
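A minimal sketch of how these three indicators can be computed from recorded tracking-error trajectories; the 10^−4 threshold is taken from the text, while the aggregation across agents and the decaying test trajectory are assumptions:

```python
import numpy as np

def performance_indicators(err, tol=1e-4):
    """err: array of shape (steps, n_agents) holding tracking errors over time.

    Returns the root mean square error, the absolute mean error, and the
    first iteration step after which every agent's error stays within tol.
    """
    rmse = float(np.sqrt(np.mean(err ** 2)))
    ame = float(np.mean(np.abs(err)))
    inside = np.all(np.abs(err) < tol, axis=1)    # all agents within tolerance
    # Last step that violates the tolerance; convergence starts right after it.
    violations = np.flatnonzero(~inside)
    steps = int(violations[-1] + 1) if violations.size else 0
    return rmse, ame, steps

# Hypothetical usage with a decaying error trajectory for three agents:
k = np.arange(1500)[:, None]
err = 0.5 * np.exp(-0.02 * k) * np.ones((1, 3))
print(performance_indicators(err))
```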

On the whole, after comparing the three performance indicators, we can conclude that the control policy derived by the proposed HDP algorithm has better tracking performance than the LQR method in terms of accuracy and rapidity.

In addition, parameter uncertainty is considered in the multi-agent system to further show the effectiveness of the proposed control scheme.

Consider a linear multi-agent system composed of N agents, in which the ith agent's dynamics can be expressed as

where ΔA is a real matrix function representing time-varying parameter uncertainty in the multi-agent system. The uncertainty results from model linearization and is usually assumed to be of the form ΔA = DaFaEa, where Da and Ea are known real constant matrices that characterize how the uncertain parameters in Fa enter the nominal matrix A, and Fa is an unknown real time-varying matrix with Lebesgue-measurable elements satisfying FaTFa ≤ I.
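A small sketch of this norm-bounded uncertainty class, generating an admissible Fa(k) and checking the bound; the matrices Da and Ea below are hypothetical placeholders, not the paper's values:

```python
import numpy as np

Da = np.array([[0.1], [0.0]])        # hypothetical: how uncertainty enters A
Ea = np.array([[1.0, 0.5]])

def sample_uncertainty(k):
    # Any Fa(k) with Fa^T Fa <= I is admissible; a scalar in [-1, 1] works here.
    Fa = np.array([[np.sin(0.1 * k)]])
    assert np.all(np.linalg.eigvalsh(np.eye(1) - Fa.T @ Fa) >= -1e-12)
    return Da @ Fa @ Ea              # Delta_A(k), shape (2, 2)

print(sample_uncertainty(3))
```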

    The relevant system matrices are given as follows:

Other parameter settings are the same as in Sect. 4.2.

The corresponding tracking curves of the system under the given parameters are plotted in Fig. 14. It can be observed that the states of all agents are exactly synchronized with the leader. Moreover, the corresponding tracking error dynamics are given in Fig. 15. In addition, we also studied the tracking performance of this method and the LQR method with uncertain parameters under the same conditions. The tracking errors are shown in Fig. 16. It can be clearly concluded that, in the presence of uncertain parameters, the tracking performance of the multi-agent system using the HDP algorithm is still better than that of the LQR method in terms of rapidity.

Fig.14 Agent states versus iteration steps

    Fig.15 Tracking errors under HDP algorithm

    Fig.16 Tracking errors under LQR method

    4.3 A practical application for multi-agent system

To verify the validity of the above theoretical results in a practical application scenario, an applied multi-agent system is considered, which consists of three mobile robots and one leader robot.

The robots move in one-dimensional Euclidean space, and the purpose is to eventually achieve synchronization of both state and velocity.

    Three follower robots are divided into two subsystems.The first subsystem is stated as follows:

where xi(k) ∈ ℝ, vi(k) ∈ ℝ and ui(k) ∈ ℝ are the state, velocity and control policy of robot i at time instant kT. m = 0.9963, T1 = 0.0498 and T2 = 0.8 are the state coefficient and the sampling intervals, respectively. ζ1 = −0.2492 and ζ2 = 0.9888 are the designed parameters. Dici denotes the disturbance, which is described as follows:

where the unknown parameter τi is chosen as τi ∈ [−10, 10]. The other parameters and initializations are the same as in Example 2. The dynamics of the leader robot are stated as follows:

The communication structure of the three mobile robots and one leader robot is described in Fig. 17. The state and velocity responses of the three follower mobile robots and the leader robot are shown in Figs. 18 and 19, respectively. The state and velocity errors of the three follower mobile robots and the leader robot are depicted in Figs. 20 and 21, respectively.

    Fig.17 Communication structure of three follower mobile robots and one leader robot

    Fig.18 State responses of three robots and one leader

    5 Conclusions

In this paper, the goal of learning tracking control is to achieve synchronization between the leader and all following agents. The multi-agent system with disturbances is reduced to the tracking control of its nominal system. The HDP algorithm is applied and implemented by neural networks, and the stability of the action-critic neural network is presented. Finally, three representative simulations are investigated to demonstrate the correctness and superiority of the HDP-based learning tracking control strategy. It is clear that the improved HDP algorithm has good performance in terms of tracking speed and accuracy in the presence of disturbances, and in practical applications the algorithm can also achieve synchronization well. Extensions to nonlinear multi-agent systems and applications in real systems will be the main concerns of future work.

    Fig.19 Velocity responses of three robots and one leader

    Fig.20 State errors for three robots and one leader

    Fig.21 Velocity errors for three robots and one leader

Acknowledgements This work was supported by the Tianjin Natural Science Foundation under Grant 20JCYBJC00880, the Beijing Key Laboratory Open Fund of Long-Life Technology of Precise Rotation and Transmission Mechanisms, and the Guangdong Provincial Key Laboratory of Intelligent Decision and Cooperative Control.
