
Learning-based adaptive optimal output regulation of linear and nonlinear systems: an overview

Control Theory and Technology, 2022, No. 1

    Weinan Gao·Zhong-Ping Jiang

Abstract This paper reviews recent developments in learning-based adaptive optimal output regulation, which aims to solve the problem of adaptive and optimal asymptotic tracking with disturbance rejection. The proposed framework brings together two separate topics, output regulation and adaptive dynamic programming, that have been under extensive investigation due to their broad applications in modern control engineering. Under this framework, one can solve optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems in a data-driven manner. We also review some practical applications based on this framework, such as semi-autonomous vehicles, connected and autonomous vehicles, and nonlinear oscillators.

Keywords Adaptive optimal output regulation · Adaptive dynamic programming · Reinforcement learning · Learning-based control

    1 Introduction

    1.1 Background

The output regulation problems [1–11] concern designing controllers to achieve asymptotic tracking with disturbance rejection for dynamic systems, wherein both the disturbance and the reference signals are generated by a class of autonomous systems, named exosystems. It is a general mathematical formulation applicable to numerous control problems arising from engineering, biology, and other disciplines.

The evolution of output regulation theory can be summarized in three phases. In the first phase, the theory of servomechanisms was actively developed to tackle output regulation problems based on classical control theory in the frequency domain, tracing back to the 1940s [12,13]. After the state-space representation was introduced by Kalman, pioneers in the automatic control community, including Davison, Francis, and Wonham, extensively studied the linear output regulation problem with multiple inputs and multiple outputs [5,14–17]. Under some mild assumptions on the exosystem and the plant, the solvability of the linear output regulation problem is reduced to the solvability of a class of Sylvester equations, called regulator equations. There are two major strategies for addressing output regulation problems: feedback-feedforward control and the internal model principle [16]. By means of the internal model principle, one can convert an output regulation problem into a stabilization problem for an augmented system composed of the plant and a dynamic compensator named the internal model. Another remarkable feature of internal-model-based control schemes is that they guarantee asymptotic decay of the tracking error while tolerating plant parameter uncertainties. As an extension of the traditional internal model principle, the notion of an adaptive internal model was proposed to take a totally unknown exosystem into consideration [18]. Moreover, the cooperative output regulation problems of linear multi-agent systems [19–24], which include leader-follower consensus as a special case, have drawn considerable attention over the last decade.
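Since the regulator equations are linear in the unknowns $(X, U)$, they can be solved numerically by vectorization. The following sketch assumes the standard formulation $XS = AX + BU + E$, $0 = CX + DU + F$ with the matrices defined in Sect. 2; it is an illustrative implementation, not the algorithm of any specific reference.

```python
import numpy as np

def solve_regulator_equations(A, B, C, D, E, F, S):
    """Solve the linear regulator (Sylvester) equations
        X S = A X + B U + E,
        0   = C X + D U + F
    for (X, U) by vectorization, using vec(M X N) = (N^T kron M) vec(X).
    A solution exists under the standard transmission-zero condition."""
    n, m = B.shape
    q = S.shape[0]
    In, Iq = np.eye(n), np.eye(q)
    # Stack both equations as one linear system in [vec(X); vec(U)].
    M = np.block([
        [np.kron(S.T, In) - np.kron(Iq, A), -np.kron(Iq, B)],
        [np.kron(Iq, C),                     np.kron(Iq, D)],
    ])
    rhs = np.concatenate([E.flatten(order="F"), -F.flatten(order="F")])
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = sol[: n * q].reshape((n, q), order="F")
    U = sol[n * q:].reshape((m, q), order="F")
    return X, U
```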

In the second phase, the control community turned its attention to the development of a theory for nonlinear output regulation, due to the fact that almost all real-world control systems are nonlinear, and many of them are strongly nonlinear. The nonlinear output regulation problem was initially studied for the special case when the exosignals are constant [25]. Owing to the pioneering work of Isidori and Byrnes [7], the solvability of the nonlinear output regulation problem was linked to that of a set of nonlinear partial differential equations, named nonlinear regulator equations. The solution to the nonlinear regulator equations contributes a feasible feedforward control input. By center manifold theory [26], one can design a corresponding feedback-feedforward control policy to achieve nonlinear output regulation. Since the nonlinear regulator equations contain a set of partial differential equations, obtaining their analytic solution is generally hard. With this obstacle in mind, Huang and Rugh offered an approximate solution of the nonlinear regulator equations by power series approaches [27]. Similar to linear output regulation, (adaptive) internal-model-based solutions have also been proposed for nonlinear output regulation problems; see [28–32] and references therein. To realize asymptotic tracking and disturbance rejection in an optimal sense is another major task in output regulation theory; see [33–35]. To the best of our knowledge, Krener first opened the door to nonlinear optimal output regulation [34]. His solution starts from solving the nonlinear regulator equations; a feedback controller is then obtained by solving the Hamilton–Jacobi–Bellman equation. The asymptotic convergence of the tracking error can be ensured by LaSalle's invariance principle.

Notice that most solutions developed in the first and second phases are model-based. Because developing mathematical models for physical systems is often costly, time-consuming, and subject to uncertainties, the third phase is devoted to the integration of data-driven and learning-based techniques for output regulator design. This phase shift is strongly motivated by the exciting developments in data science, artificial intelligence (AI), and machine learning that have received tremendous media coverage over the last few years. For instance, deep neural networks and reinforcement learning techniques have been bridged such that an agent can learn the optimal control policy efficiently despite its uncertain and complex environment [36,37]. Inspired by deep reinforcement learning theory, the Google DeepMind team has invented its own AI players of the game of Go, named AlphaGo and AlphaGo Zero, which have shown their superiority against human players [38,39]. In the area of output regulation, neural-network-based approaches have been proposed in [40,41] to approximate the solution to the nonlinear regulator equations. A numerical method based on successive approximation, which is an important tool in machine learning, has been proposed to obtain the center manifolds and to solve the nonlinear output regulation problem [42]. Nevertheless, it is a longstanding challenge to generalize existing solutions to tackle the learning-based adaptive optimal output regulation problem, which aims at realizing output regulation and optimizing the closed-loop system performance under an unknown system model. The purpose of this paper is to provide an overview of our recent works, see, e.g., [43–45], on learning-based output regulation, which aim at learning adaptive and optimal output regulators from input and state or output data collected along the trajectories of the control system.

    1.2 Learning-based adaptive optimal output regulation

The framework of learning-based adaptive optimal output regulation is depicted in Fig. 1. Based on this framework, the optimal controller that achieves output regulation can be learned through the actor-critic structure, which is popular in adaptive dynamic programming (ADP) [46–68]. As an important branch of reinforcement learning, ADP concentrates on how an agent should modify its actions to better interact with an unknown environment and achieve a long-term goal. It is thus a good candidate for solving adaptive optimal control problems. Existing ADP methods include policy iteration (PI) [50,51,53,55,61,62,69] and value iteration (VI) [46,52,60,70–72].
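For the LQR case, the model-based counterpart of PI is Kleinman's algorithm, which alternates policy evaluation (a Lyapunov equation) and policy improvement; the data-driven ADP methods reviewed below implement the same two steps from trajectory data without knowing the model. A minimal sketch, assuming known $(A, B)$ and a stabilizing initial gain:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_pi(A, B, Q, R, K0, tol=1e-8, max_iter=50):
    """Model-based policy iteration (Kleinman) for continuous-time LQR.
    K0 must be stabilizing; every iterate remains stabilizing and
    (P, K) converge to the solution of the algebraic Riccati equation."""
    K = K0
    for _ in range(max_iter):
        Ak = A - B @ K
        # Policy evaluation: solve Ak^T P + P Ak = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B^T P
        K_new = np.linalg.solve(R, B.T @ P)
        if np.linalg.norm(K_new - K) < tol:
            break
        K = K_new
    return P, K
```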

    There are several reasons why we have developed the learning-based adaptive optimal output regulation framework.

1. Fundamentally different from most existing output regulation approaches, the developed learning-based adaptive optimal output regulation framework relies on neither knowledge of the system model nor system identification. It is a non-model-based and direct adaptive control framework.

2. Practically, non-vanishing disturbances, time-varying references, and dynamic uncertainties may exist simultaneously in many control systems; see, e.g., [73]. Unfortunately, existing ADP approaches often do not consider the challenge to tracking control arising from the co-existence of these factors. The proposed framework fills this gap and thus significantly enhances the practicability of ADP.

3. The developed framework has wide applicability. One can leverage it to tackle adaptive optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems.

    Fig.1 The framework of learning-based adaptive optimal output regulation

The remainder of this paper is organized as follows. In Sect. 2, we present two learning-based solutions to the adaptive optimal output regulation problem of linear systems: feedback-feedforward control and the internal model principle. In Sect. 3, we show how learning-based robust optimal output regulation of partially linear composite systems is achieved via robust ADP [53]. To solve the learning-based output regulation problems of multi-player systems, a solution based on ADP and game theory is given in Sect. 4. Sections 5 and 6 address the learning-based cooperative output regulation problems, while Sect. 7 covers the learning-based adaptive optimal output regulation of nonlinear systems. Application results are discussed in Sect. 8, and Sect. 9 concludes this overview paper with a summary and outlook.

    2 Learning-based adaptive optimal output regulation of linear systems

We begin with a class of continuous-time linear systems described by

$$\dot{x} = Ax + Bu + Ev, \qquad (1)$$
$$\dot{v} = Sv, \qquad (2)$$
$$e = Cx + Du + Fv, \qquad (3)$$

where the vector $x \in \mathbb{R}^n$ is the state, $u \in \mathbb{R}^m$ is the control input, and $v \in \mathbb{R}^q$ stands for the exostate of the autonomous system (2), named the exosystem. $A \in \mathbb{R}^{n \times n}$, $B \in \mathbb{R}^{n \times m}$, $C \in \mathbb{R}^{r \times n}$, $D \in \mathbb{R}^{r \times m}$, $E \in \mathbb{R}^{n \times q}$, $F \in \mathbb{R}^{r \times q}$, and $S \in \mathbb{R}^{q \times q}$ are system matrices. $d = Ev$ represents the external disturbance, $y = Cx + Du$ the system output, $y_d = -Fv$ the reference signal, and $e = y - y_d \in \mathbb{R}^r$ the tracking error.

Throughout this paper, we make the following assumption on the exosystem.

Assumption 1 The origin of the exosystem (2) is Lyapunov stable, and all the eigenvalues of $S$ have zero real parts.

    2.1 Adaptive optimal feedback-feedforward controller design

With respect to the system (1)–(3), the linear output regulation problem has been solved through the feedback-feedforward control strategy [5], i.e., by designing a controller of the form

$$u = -Kx + Lv$$

such that the closed-loop system is globally exponentially stable and the tracking error asymptotically converges to zero, where $K \in \mathbb{R}^{m \times n}$ and $L \in \mathbb{R}^{m \times q}$ are the feedback and feedforward control gains, respectively.

Beyond the linear output regulation problem, the linear optimal output regulation problem has been proposed in [34], considering both the asymptotic tracking and the transient performance of the closed-loop control system. Specifically, with complete knowledge of the system dynamics, one can design the following controller to solve the linear optimal output regulation problem:

$$u = -K^*(x - X^*v) + U^*v,$$

where $K^*$ is the optimal feedback control gain, which can be obtained by solving the dynamic optimization Problem 2, and the corresponding feedforward control gain $L^*$ is $L^* = U^* + K^*X^*$. The pair $(X^*, U^*)$ is the minimizer of Problem 1. A model-based sketch of this design is given below, after which we formulate Problems 1 and 2.
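As a point of reference for the data-driven algorithm that follows, here is a minimal model-based sketch of assembling the above controller, assuming known system matrices. It reuses the solve_regulator_equations sketch from Sect. 1, and the LQR-type weights Q and R are illustrative placeholders, not the cost data of Problems 1 and 2 (which are not reproduced here).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def optimal_output_regulator(A, B, C, D, E, F, S, Q, R):
    """Model-based assembly of u = -K* x + L* v with L* = U* + K* X*.
    solve_regulator_equations is the vectorization sketch given earlier."""
    X, U = solve_regulator_equations(A, B, C, D, E, F, S)
    P = solve_continuous_are(A, B, Q, R)   # stabilizing ARE solution
    K = np.linalg.solve(R, B.T @ P)        # K* = R^{-1} B^T P*
    L = U + K @ X                          # feedforward gain L* = U* + K* X*
    return K, L
```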

    Problem 1

Algorithm 1 (ADP learning algorithm for solving linear optimal output regulation problems)
1: Compute basis matrices $X_0, X_1, \ldots, X_{h+1}$.
2: Utilize $u = -K_0 x + \xi$ on $[t_0, t_s]$ with bounded exploration noise $\xi$ and a stabilizing $K_0$.
3: Select a small $\epsilon > 0$. $i \leftarrow 0$, $j \leftarrow 0$.
4: repeat
5:   Solve $P_j$, $K_{j+1}$ from (9).
6:   $j \leftarrow j + 1$
7: until $|P_j - P_{j-1}| \le \epsilon$
8: $j^* \leftarrow j$, $i \leftarrow i + 1$
9: repeat
10:  Solve $S(X_i)$ from (9).
11: until $i = h + 1$
12: Find $(X^*, U^*)$.
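The heart of lines 4–7 is an off-policy least-squares implementation of policy iteration that never uses $(A, B)$: data are collected once under an exploring behavior policy, and each iteration re-solves a linear regression for the value matrix and the improved gain. Below is a self-contained sketch of that step for a hypothetical second-order example; since equation (9) is not reproduced here, the regression is written out from the underlying Lyapunov identity.

```python
import numpy as np

np.random.seed(0)
# Hypothetical plant, used ONLY to generate data; the regression below
# never touches (A, B).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.eye(1)
n, m = B.shape
K = np.zeros((m, n))                 # K0 = 0 is stabilizing here (A is Hurwitz)

# ---- Data collection under u = -K0 x + exploration noise ----
dt, steps, L = 1e-3, 100, 40         # Euler step, steps per interval, intervals
x, t = np.array([1.0, -1.0]), 0.0
freqs = np.random.uniform(0.5, 10.0, 10)
Dxx, Ixx, Ixu = [], [], []
for _ in range(L):
    x0 = x.copy()
    ixx, ixu = np.zeros(n * n), np.zeros(n * m)
    for _ in range(steps):
        noise = 0.2 * np.sum(np.sin(freqs * t))
        u = -K @ x + noise
        ixx += np.kron(x, x) * dt    # integral of x kron x
        ixu += np.kron(x, u) * dt    # integral of x kron u
        x = x + (A @ x + B @ u) * dt
        t += dt
    Dxx.append(np.kron(x, x) - np.kron(x0, x0))
    Ixx.append(ixx)
    Ixu.append(ixu)
Dxx, Ixx, Ixu = map(np.asarray, (Dxx, Ixx, Ixu))

# ---- Policy iteration on the SAME data (cf. lines 4-7 of Algorithm 1) ----
# Along trajectories: d/dt(x'Px) = -x'(Q + K'RK)x + 2(u + Kx)' W x with
# W = R K_next, so each interval yields one linear equation in (P, W).
for _ in range(20):
    Qk = Q + K.T @ R @ K
    V = Ixu + Ixx @ np.kron(np.eye(n), K).T   # integral of x kron (u + Kx)
    Theta = np.hstack([Dxx, -2.0 * V])
    b = -Ixx @ Qk.flatten(order="F")
    theta, *_ = np.linalg.lstsq(Theta, b, rcond=None)
    P = theta[: n * n].reshape((n, n), order="F")
    P = 0.5 * (P + P.T)                        # keep the symmetric part
    W = theta[n * n:].reshape((m, n), order="F")
    K_new = np.linalg.solve(R, W)
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new
print("learned feedback gain:", K)
```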

Note that the presented method is generalizable to the case of discrete-time linear systems with time delays. We refer the interested reader to the recent papers [74,75].

    2.2 Adaptive optimal controller design based on internal model principle

Besides feedback-feedforward control methods, the second class of solutions to output regulation problems relies on the internal model principle [16]. The solution comes from developing a dynamic feedback controller

    3 Learning-based robust optimal output regulation of partially linear composite systems

The purpose of this section is to show that Theorem 1 can be generalized to solve the robust optimal output regulation problem of a class of partially linear composite systems [77]. A system in this class is an interconnection of a linear subsystem and a nonlinear subsystem named the dynamic uncertainty, which is modeled as follows:

holds, then the robust optimal output regulation problem of the partially linear composite system (13)–(16) is solvable by the robust optimal controller $u = -K^*(x - X^*v) + U^*v$.

Even if the system dynamics $A$, $B$, $E$, $g$, $\Delta$ are unknown and $\zeta$ is unmeasurable, the gains of the robust optimal controller, $K^*$, $X^*$, $U^*$, can be learned online following Algorithm 1 with $u$ replaced by $u + \Delta$. Please refer to [80] for more details.

    4 Game and learning-based output regulation for multi-player systems

There is only one player in all the system models considered in the previous sections. In this section, we study the non-zero-sum game output regulation problem for continuous-time multi-player linear systems. Our goal is to learn the Nash equilibrium through online data collected along the system trajectories. The linear continuous-time systems with multiple players are described by

Remark 1 Note that it is usually difficult to solve (31) analytically, as it is a system of coupled nonlinear equations. Interestingly, one can leverage the data-driven ADP algorithm proposed in [83] to numerically approximate the solution to (31) and the corresponding feedback and feedforward control gains via online data.
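To convey the flavor of such coupled equations, the following model-based sketch approximates a Nash equilibrium of a two-player linear-quadratic game by alternating Lyapunov-equation solves. It assumes each player penalizes only its own input and that the initial gain pair is jointly stabilizing; the exact coupled equations (31) are not reproduced in this overview, and convergence of this simple alternation is not guaranteed in general.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def nzs_lq_game(A, B1, B2, Q1, Q2, R1, R2, K1, K2, iters=200, tol=1e-9):
    """Heuristic alternation for a two-player non-zero-sum LQ game:
    each player evaluates its cost under the current joint policy
    (a Lyapunov equation) and then improves its own gain."""
    for _ in range(iters):
        Ac = A - B1 @ K1 - B2 @ K2
        # Player 1: evaluate, then improve
        P1 = solve_continuous_lyapunov(Ac.T, -(Q1 + K1.T @ R1 @ K1))
        K1n = np.linalg.solve(R1, B1.T @ P1)
        Ac = A - B1 @ K1n - B2 @ K2
        # Player 2: evaluate, then improve
        P2 = solve_continuous_lyapunov(Ac.T, -(Q2 + K2.T @ R2 @ K2))
        K2n = np.linalg.solve(R2, B2.T @ P2)
        done = max(np.linalg.norm(K1n - K1), np.linalg.norm(K2n - K2)) < tol
        K1, K2 = K1n, K2n
        if done:
            break
    return P1, P2, K1, K2
```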

    The following theorem discusses the convergence of the ADP algorithm and the tracking ability of the closed-loop system.

As an extension, we have recently solved the global robust optimal output regulation problem of partially linear composite systems with both static and dynamic uncertainties in [84]. In order to overcome this challenge, we have combined game theory, small-gain theory [85], output regulation theory, and ADP techniques.

    5 Learning-based cooperative optimal output regulation of multi-agent systems

In this section, we present data-driven distributed control methods to solve the cooperative optimal output regulation problem of leader-follower multi-agent systems. Different from existing work, a distributed adaptive internal model is developed for the first time, which is composed of a distributed internal model and a distributed observer that mimics the leader's dynamics and behavior.

    Consider a class of linear multi-agent systems

Note that the cooperative output regulation problem is solved if one designs a control policy such that the closed-loop multi-agent systems are asymptotically stable (in the absence of $v$) and $\lim_{t \to \infty} e_i(t) = 0$ for $i = 1, 2, \ldots, N$. We show in Lemma 1 that the cooperative output regulation problem is solvable by developing a distributed adaptive internal model.

    is Hurwitz for all the followers. Based on Lemma 1, the designed optimal controller (52) can be used to solve the cooperative optimal output regulation problem.

    Note that PI and VI are two typical ADP methods to deal with adaptive optimal control problems.We will concentrate on designing data-driven adaptive optimal control policies based on PI and VI, and solving the cooperative optimal output regulation problem in a model-free sense.

To begin with, note that the leader's state information is required by all followers. Although we cannot measure this information directly from the multi-agent systems, we can develop an estimator of the leader's state $v$,

where $\mu_2 > 0$.
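The estimator referred to above is not reproduced here, but a standard distributed observer of the leader's state in the cooperative output regulation literature has each follower run a copy of the exosystem corrected by neighborhood estimation errors. The sketch below assumes this standard form, with the coupling gain playing the role of $\mu_2$.

```python
import numpy as np

def distributed_observer_step(eta, v, S, Adj, a0, mu, dt):
    """One Euler step of a standard distributed leader-state observer:
        eta_i' = S eta_i + mu * ( sum_j a_ij (eta_j - eta_i)
                                  + a_i0 (v - eta_i) ),
    where Adj[i][j] = a_ij is the follower adjacency weight and
    a0[i] = a_i0 > 0 only for followers that observe the leader directly.
    Under a connectivity assumption, each eta_i converges to v."""
    N = len(eta)
    eta_next = []
    for i in range(N):
        coupling = sum(Adj[i][j] * (eta[j] - eta[i]) for j in range(N))
        coupling += a0[i] * (v - eta[i])
        eta_next.append(eta[i] + dt * (S @ eta[i] + mu * coupling))
    return eta_next
```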

    5.1 Online PI design

To begin with, we rewrite the augmented system (34)–(38) as

    5.2 Online VI design

Essentially, the VI algorithm updates the value matrix and the control gain iteratively, without requiring an initial stabilizing gain; a model-based sketch of such a recursion is given below.
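This is a minimal model-based sketch of a continuous-time VI-type recursion for the LQR problem: the value matrix is driven by the Riccati-equation residual with a small step size, and the gain is read out at the end. The data-driven versions replace the residual with quantities estimated from trajectories; the step size, iteration count, and starting point below are illustrative assumptions.

```python
import numpy as np

def value_iteration_lqr(A, B, Q, R, eps=1e-3, steps=20000):
    """Model-based sketch of continuous-time value iteration for LQR:
    starting from P = 0 (no stabilizing gain needed), follow the
    Riccati residual; P tracks the differential Riccati equation and
    approaches the stabilizing ARE solution for small eps."""
    n = A.shape[0]
    P = np.zeros((n, n))
    for _ in range(steps):
        residual = A.T @ P + P @ A + Q - P @ B @ np.linalg.solve(R, B.T @ P)
        P = P + eps * residual
    K = np.linalg.solve(R, B.T @ P)   # control gain read out from P
    return P, K
```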

It is interesting to note that the proposed research can be extended to the case of cooperative optimal output regulation of discrete-time multi-agent systems [87], and to cooperative adaptive optimal control of continuous-time multi-agent systems with guaranteed leader-to-formation stability (LFS) [45], which indicates how the leader's inputs and disturbances affect the stability of the formation.

    6 Learning-based cooperative robust optimal output regulation of partially linear multi-agent systems

The solution presented in the previous section does not take dynamic uncertainty into consideration. In this section, we aim at solving the cooperative robust optimal output regulation of multi-agent systems in the presence of dynamic uncertainties.

To begin with, consider a class of heterogeneous multi-agent systems

By linear optimal control theory and output regulation theory, the decentralized optimal controller can be designed by solving AREs. However, this controller is designed under two strong conditions: (1) $\Psi_i \equiv 0$; (2) each agent can directly communicate with the exosystem. We will present an approach based on cyclic-small-gain theory and robust adaptive dynamic programming which removes Condition (1) and relaxes Condition (2) to Assumption 6. To be more specific, under Assumption 6, the robust distributed controller is developed as follows:

Then, the system (49) in closed loop with (52)–(53) achieves cooperative output regulation.

One can leverage the robust ADP algorithm in [88] to learn the control gains $K_i^*$ and $L_i^*$. The convergence has also been rigorously analyzed therein.

    7 Learning-based adaptive optimal output regulation of nonlinear systems

In this section, we focus on the adaptive optimal output regulation problem of continuous-time nonlinear systems. Consider the class of strict-feedback nonlinear systems described by

where $Q: \mathbb{R}^n \to \mathbb{R}$ is positive definite and proper, $r$ is a positive constant, and the initial conditions are $x_0 = x(0)$ and $v_0 = v(0)$.

Letting $\xi(v) = [\xi_1(v), \ldots, \xi_n(v)]^T$, the nonlinear optimal output regulation problem is formulated as follows:

Then, we present a model-based PI method starting from an admissible control policy $u_1$:

    7.1 Phase-one learning:solving regulator equations

    7.2 Phase-two learning:solving HJB equations

Theorem 6 [44] Consider the nonlinear plant (55) and the exosystem (2) in closed loop with the approximate optimal controller obtained by [44, Algorithm 2]. Then, the following properties hold:

(1) The trajectory of the closed-loop system is bounded for any $t \ge 0$.

(2) The tracking error $e(t)$ is uniformly ultimately bounded, with an arbitrarily small ultimate bound.

Notice that the results in this section have been generalized to solve the cooperative optimal output regulation problem of nonlinear discrete-time systems in [91].

    8 Applications

The aim of this section is to demonstrate the broad applicability and efficiency of the developed learning-based adaptive optimal output regulation approaches. Using semi-autonomous vehicles as an example, we validate the learning-based adaptive optimal output regulation approach. We further apply the learning-based cooperative output regulation approaches to connected and autonomous vehicles. Last but not least, we apply learning-based nonlinear output regulation approaches to Van der Pol oscillators.

    8.1 Application to semi-autonomous vehicles

In this section, we present a data-driven shared control framework for the driver and the semi-autonomous vehicle to obtain the desired steering control performance. The term semi-autonomous refers to the situation in which an auxiliary copilot controller and the human driver manipulate the vehicle simultaneously. By leveraging the small-gain theory, we have developed shared steering controllers that do not rely on the unmeasurable internal states of the human driver. Furthermore, by adopting data-driven ADP and an iterative learning scheme, the shared steering controller is learned from real-time data collected along the trajectories of the interconnected human-vehicle system.

    Fig.2 Vehicle model illustration

    The vehicle model for steering control is described by

    Fig.3 Road curvature profile

Fig. 4 Convergence of $U_i$ during driving with different chosen $Q$ values

    Fig. 5 Lane-keeping performance comparison between driver and shared control strategies with different Q values

    8.2 Application to connected and autonomous vehicles

where $u_i$ represents the desired acceleration of vehicle $i$. For $k = i-1, i$, the state $x_k = [\Delta h_k, \Delta v_k, a_k]^T$ includes the headway error, the velocity error, and the acceleration of vehicle $k$.
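For concreteness, the sketch below simulates a commonly used third-order longitudinal platoon model consistent with this state choice; the first-order engine-lag structure and the time constant are assumptions for illustration, not the exact model of this section.

```python
import numpy as np

def platoon_step(states, u, dt, tau=0.5):
    """One Euler step of an assumed third-order CACC platoon model.
    For follower i with x_i = [dh_i, dv_i, a_i] (headway error,
    velocity error, acceleration):
        dh_i' = dv_{i-1} - dv_i
        dv_i' = a_i
        a_i'  = (-a_i + u_i) / tau      # first-order engine lag
    The lead vehicle is taken as the reference, so dv_0 = 0 here."""
    new_states = []
    for i, (dh, dv, a) in enumerate(states):
        dv_prev = states[i - 1][1] if i > 0 else 0.0
        new_states.append(np.array([
            dh + dt * (dv_prev - dv),
            dv + dt * a,
            a + dt * (-a + u[i]) / tau,
        ]))
    return new_states
```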

Fig. 6 Buses in closed loop with the data-driven CACC controller: a traffic simulation via Paramics

Fig. 7 The comparison of the learned value matrix $P_{i,j}$ and the optimal value matrix $P_i^*$

Remark 2 Another scenario is considered in [96], where a platoon of $n$ human-driven vehicles is followed by an autonomous vehicle. A data-driven connected cruise control has been designed therein. We have further included vehicle-to-infrastructure (V2I) communication and developed data-driven predictive cruise control approaches in [97,98].

The ADP algorithm to learn $K_i^*$ and $P_i^*$ has been proposed in [94] and validated by the Paramics micro-traffic simulation.

Figure 6 shows the buses (in green) using the data-driven control algorithm in the traffic simulations. These buses operate with roughly the same headway. We have selected a platoon of 4 autonomous buses on the exclusive bus lane to observe the convergence. The learned value $P_{i,j}$ and the corresponding optimal value $P_i^*$ of the $i$th bus are compared in Fig. 7. The comparison of the learned control gain and the optimal gain is shown in Fig. 8; the learning stops in fewer than 15 iterations for all the autonomous buses.

Fig. 8 The comparison of the learned control gain $K_{i,j}$ and the optimal control gain $K_i^*$

8.3 Application to Van der Pol oscillators

    Consider a nonlinear Van der Pol oscillator modeled by

where the system parameter $\iota$ is unknown but lies within the range $[-0.5, -0.3]$. The exosystem (2) is a marginally stable autonomous system modeled by

In order to validate the effectiveness of the nonlinear adaptive optimal output regulation techniques, we have applied [44, Algorithms 1 and 2] to approximate the optimal controller via input and state data collected from the oscillator.

In order to generate online data, we set $\iota = -0.5$ and initialize the (exo)states as $\xi(0) = [2, -4]^T$ and $v(0) = [0, 0.5]^T$. Initially, the control input $u(t)$ is a summation of sinusoidal noises with different frequencies in order to excite the system; the corresponding cost is the value function $V_1(x, v)$ obtained at the first iteration in Figs. 11 and 12. One can check that the cost of the closed-loop system decreases dramatically after the online learning.
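The data-generation stage can be reproduced along the following lines; the particular Van der Pol form, the exosystem frequency, and the noise amplitudes are assumptions for illustration, since the exact equations of this example are not reproduced above.

```python
import numpy as np
from scipy.integrate import solve_ivp

iota, omega = -0.5, 0.5   # assumed damping parameter and exosystem frequency

def open_loop(t, z, u_fn):
    """Van der Pol plant driven by an exploring input, coupled with a
    marginally stable harmonic exosystem (one plausible instance of (2))."""
    x1, x2, v1, v2 = z
    u = u_fn(t, z)
    dx1 = x2
    dx2 = -x1 + iota * (1.0 - x1 ** 2) * x2 + u   # assumed Van der Pol form
    dv1 = omega * v2                               # exosystem: v' = S v
    dv2 = -omega * v1
    return [dx1, dx2, dv1, dv2]

# Exploration input: a summation of sinusoids with different frequencies
explore = lambda t, z: 0.5 * sum(np.sin(w * t) for w in (1.0, 2.3, 3.7, 5.1))

# Initial (exo)states as in the text: x(0) = [2, -4], v(0) = [0, 0.5]
sol = solve_ivp(open_loop, (0.0, 10.0), [2.0, -4.0, 0.0, 0.5],
                args=(explore,), max_step=0.01)
```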

    Fig.9 System state and reference trajectories

    Fig.10 Evolution of weights of value function and control policy

    Fig. 11 System costs with different control policies under v1 = 0.5 and v2 =0

    Fig.12 System costs with different control policies under v1 = v2 =0.3536

    9 Summary and outlook

This paper is an overview of recent progress in output regulation and ADP, including our work on learning-based adaptive optimal output regulation. The proposed learning-based framework is different from traditional output regulation approaches, which are mostly model-based. It also enhances the practicability of ADP, as it considers non-vanishing disturbances, time-varying references, and dynamic uncertainties in the control system together. We have shown in this overview that the framework can be used to solve the adaptive optimal output regulation of dynamic systems described by different models, which attests to its wide applicability.

Future research directions under the proposed framework include learning-based robust optimal output regulation of nonlinear systems with dynamic uncertainties, learning-based stochastic adaptive optimal output regulation with unmeasurable noises, and learning-based resilient output regulation under malicious cyberattacks.
