
    Learning-based adaptive optimal output regulation of linear and nonlinear systems: an overview

    Control Theory and Technology, 2022, No. 1

    Weinan Gao·Zhong-Ping Jiang

    Abstract This paper reviews recent developments in learning-based adaptive optimal output regulation, which aims to solve the problem of adaptive and optimal asymptotic tracking with disturbance rejection. The proposed framework brings together two separate topics, output regulation and adaptive dynamic programming, that have been under extensive investigation due to their broad applications in modern control engineering. Under this framework, one can solve optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems in a data-driven manner. We also review some practical applications based on this framework, such as semi-autonomous vehicles, connected and autonomous vehicles, and nonlinear oscillators.

    Keywords Adaptive optimal output regulation·Adaptive dynamic programming·Reinforcement learning·Learning-based control

    1 Introduction

    1.1 Background

    The output regulation problems [1–11] concern designing controllers to achieve asymptotic tracking with disturbance rejection for dynamic systems, wherein both the disturbance and reference signals are generated by a class of autonomous systems, named exosystems. It is a general mathematical formulation applicable to numerous control problems arising from engineering, biology, and other disciplines.

    The evolution of output regulation theory can be summarized in three phases. In the first phase, the theory of servomechanisms was actively developed to tackle output regulation problems based on classical control theory in the frequency domain, tracing back to the 1940s [12,13]. After the state-space representation was introduced by Kalman, pioneers in the automatic control community, including Davison, Francis, and Wonham, extensively studied the linear output regulation problem with multiple inputs and multiple outputs [5,14–17]. Under some mild assumptions on the exosystem and the plant, the solvability of the linear output regulation problem is reduced to the solvability of a class of Sylvester equations, called regulator equations. There are two major strategies for addressing output regulation problems: feedback-feedforward control and the internal model principle [16]. By means of the internal model principle, one can convert an output regulation problem into a stabilization problem of an augmented system composed of the plant and a dynamic compensator named the internal model. Another remarkable feature of internal-model-based control schemes is that they guarantee asymptotic decay of the tracking error while tolerating plant parameter uncertainties. As an extension of the traditional internal model principle, the notion of an adaptive internal model was proposed by taking a totally unknown exosystem into consideration [18]. Moreover, the cooperative output regulation problems of linear multi-agent systems [19–24], which include leader-follower consensus as a special case, have drawn considerable attention over the last decade.
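    For the linear setting reviewed in Sect. 2, these regulator equations take the Sylvester form XS = AX + BU + E, 0 = CX + DU + F in the unknown pair (X, U). The following minimal numpy sketch solves them by vectorization with Kronecker products; the helper name and the least-squares solve are illustrative choices, not an algorithm from the paper.

```python
import numpy as np

def solve_regulator_equations(A, B, C, D, E, F, S):
    """Solve XS = AX + BU + E and 0 = CX + DU + F for (X, U) by
    vectorization: vec(XS) = (S^T kron I) vec(X), etc."""
    n, m = B.shape
    q, r = S.shape[0], C.shape[0]
    In, Iq = np.eye(n), np.eye(q)
    # vec(XS - AX - BU) = vec(E)
    row1 = np.hstack([np.kron(S.T, In) - np.kron(Iq, A), -np.kron(Iq, B)])
    # vec(CX + DU) = vec(-F)
    row2 = np.hstack([np.kron(Iq, C), np.kron(Iq, D)])
    M = np.vstack([row1, row2])
    rhs = np.concatenate([E.flatten(order="F"), -F.flatten(order="F")])
    sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    X = sol[:n * q].reshape((n, q), order="F")
    U = sol[n * q:].reshape((m, q), order="F")
    return X, U
```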

    In the second phase, the control community turned its attention to the development of a theory for nonlinear output regulation, due to the fact that almost all real-world control systems are nonlinear, and many of them are strongly nonlinear. The nonlinear output regulation problem was initially studied for the special case when the exosignals are constant [25]. Owing to the pioneering work of Isidori and Byrnes [7], the solvability of the nonlinear output regulation problem was linked to that of a set of nonlinear partial differential equations, named the nonlinear regulator equations. The solution to the nonlinear regulator equations contributes a feasible feedforward control input. By center manifold theory [26], one can design a corresponding feedback-feedforward control policy to achieve nonlinear output regulation. Since the nonlinear regulator equations contain a set of partial differential equations, obtaining their analytic solution is generally hard. With this obstacle in mind, Huang and Rugh offered an approximate solution of the nonlinear regulator equations by power series approaches [27]. Similar to linear output regulation, (adaptive) internal model based solutions have also been proposed for nonlinear output regulation problems; see [28–32] and the references therein. Realizing asymptotic tracking and disturbance rejection in an optimal sense is another major task in output regulation theory; see [33–35]. To the best of our knowledge, Krener first opened the door to nonlinear optimal output regulation [34]. His solution starts from solving the nonlinear regulator equations; a feedback controller is then obtained by solving the Hamilton–Jacobi–Bellman equation. The asymptotic convergence of the tracking error can be ensured by LaSalle's invariance principle.

    Notice that most solutions developed in the first and second phases are model-based. Since developing mathematical models for physical systems is often costly, time-consuming, and prone to uncertainties, the third phase is devoted to the integration of data-driven and learning-based techniques for output regulator design. This shift is strongly motivated by the exciting developments in data science, artificial intelligence (AI), and machine learning that have received tremendous media coverage over the last few years. For instance, deep neural networks and reinforcement learning techniques have been bridged such that an agent can learn efficiently towards the optimal control policy despite its uncertain and complex environment [36,37]. Inspired by deep reinforcement learning theory, the Google DeepMind team invented its own AI players of the game of Go, named AlphaGo and AlphaGo Zero, which have shown their superiority against human players [38,39]. In the area of output regulation, neural network based approaches have been proposed in [40,41] to approximate the solution to the nonlinear regulator equations. A numerical method based on successive approximation, an important tool in machine learning, has been proposed to obtain the center manifolds and to solve the nonlinear output regulation problem [42]. Nevertheless, it is a longstanding challenge to generalize existing solutions to tackle the learning-based adaptive optimal output regulation problem, which aims at realizing output regulation and optimizing the closed-loop system performance under an unknown system model. The purpose of this paper is to provide an overview of our recent works, see, e.g., [43–45], on learning-based output regulation, which aim at learning adaptive and optimal output regulators from input and state or output data collected along the trajectories of the control system.

    1.2 Learning-based adaptive optimal output regulation

    The framework of learning-based adaptive optimal output regulation is depicted in Fig. 1. Based on this framework, the optimal controller that achieves output regulation can be learned through the actor-critic structure, which is popular in adaptive dynamic programming (ADP) [46–68]. As an important branch of reinforcement learning, ADP concentrates on how an agent should modify its actions to better interact with an unknown environment and achieve a long-term goal. It is thus a good candidate for solving adaptive optimal control problems. Existing ADP methods include policy iteration (PI) [50,51,53,55,61,62,69] and value iteration (VI) [46,52,60,70–72].
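    To fix ideas, in the linear-quadratic case PI reduces to the classical Kleinman iteration: policy evaluation solves a Lyapunov equation and policy improvement updates the gain. Below is a minimal model-based sketch; the data-driven methods reviewed later replace the model matrices with trajectory data, and kleinman_pi is a hypothetical helper name.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_pi(A, B, Q, R, K0, tol=1e-8, max_iter=50):
    """Model-based policy iteration for the LQR problem. K0 must make
    A - B K0 Hurwitz; every subsequent iterate remains stabilizing."""
    K, P_prev = K0, None
    for _ in range(max_iter):
        Ak = A - B @ K
        # Policy evaluation: Ak^T P + P Ak + Q + K^T R K = 0
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        # Policy improvement: K = R^{-1} B^T P
        K = np.linalg.solve(R, B.T @ P)
        if P_prev is not None and np.linalg.norm(P - P_prev) < tol:
            break
        P_prev = P
    return P, K
```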

    There are several reasons why we have developed the learning-based adaptive optimal output regulation framework.

    1. Fundamentally different from most existing output regulation approaches, the developed learning-based adaptive optimal output regulation framework relies on neither knowledge of the system model nor system identification. It is a non-model-based, direct adaptive control framework.

    2. Practically, non-vanishing disturbances, time-varying references, and dynamic uncertainties may exist simultaneously in many control systems; see, e.g., [73]. Unfortunately, existing ADP approaches often do not consider the challenge to tracking control arising from the co-existence of these factors. The proposed framework fills this gap, and thus significantly enhances the practicability of ADP.

    3. The developed framework has wide applicability. One can leverage it to tackle adaptive optimal output regulation problems of linear, partially linear, nonlinear, and multi-agent systems.

    Fig.1 The framework of learning-based adaptive optimal output regulation

    The remainder of this paper is organized as follows. In Sect. 2, we present two learning-based solutions to the adaptive optimal output regulation problem of linear systems: feedback-feedforward control and the internal model principle. In Sect. 3, we present learning-based robust optimal output regulation of partially linear composite systems via robust ADP [53]. To solve the learning-based output regulation problems of multi-player systems, a solution based on ADP and game theory is given in Sect. 4. Sections 5 and 6 address learning-based cooperative output regulation problems, while Sect. 7 covers the learning-based adaptive optimal output regulation of nonlinear systems. Application results are discussed in Sect. 8, and Sect. 9 concludes this overview with a summary and outlook.

    2 Learning-based adaptive optimal output regulation of linear systems

    We begin with a class of continuous-time linear systems described by

    ẋ = Ax + Bu + Ev,  (1)
    v̇ = Sv,  (2)
    e = Cx + Du + Fv,  (3)

    where the vector x ∈ R^n is the state, u ∈ R^m is the control input, and v ∈ R^q stands for the exostate of the autonomous system (2), named the exosystem. A ∈ R^{n×n}, B ∈ R^{n×m}, C ∈ R^{r×n}, D ∈ R^{r×m}, E ∈ R^{n×q}, F ∈ R^{r×q}, and S ∈ R^{q×q} are system matrices. d = Ev represents the external disturbance, y = Cx + Du the system output, y_d = -Fv the reference signal, and e ∈ R^r the tracking error.

    Throughout this paper, we make the following assumption on the exosystem.

    Assumption 1 The origin of the exosystem (2) is Lyapunov stable, and all the eigenvalues of S have zero real parts.

    2.1 Adaptive optimal feedback-feedforward controller design

    With respect to the system (1)–(3), the linear output regulation problem has been solved through the feedback-feedforward control strategy [5], i.e., by designing a controller of the form

    u = -Kx + Lv

    such that the closed-loop system is globally exponentially stable and the tracking error asymptotically converges to zero, where K ∈ R^{m×n} and L ∈ R^{m×q} are the feedback and feedforward control gains, respectively.
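    With a known model, such gains can be computed directly: any K that makes A - BK Hurwitz works, and L = U + KX with (X, U) solving the regulator equations yields zero steady-state tracking error. A minimal sketch, reusing solve_regulator_equations from the earlier snippet and pole placement as one illustrative way to pick K:

```python
import numpy as np
from scipy.signal import place_poles

def feedback_feedforward_gains(A, B, C, D, E, F, S, poles):
    """Model-based gains for u = -K x + L v: K stabilizes A - B K,
    and L = U + K X drives x toward Xv, so the error e decays to zero."""
    X, U = solve_regulator_equations(A, B, C, D, E, F, S)  # earlier sketch
    K = place_poles(A, B, poles).gain_matrix  # one of many stabilizing choices
    L = U + K @ X
    return K, L
```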

    Beyond the linear output regulation problem, the linear optimal output regulation problem has been proposed in [34], considering both the asymptotic tracking and transient performance of the closed-loop control system. Specifically, with complete knowledge of the system dynamics, one can design a controller of the form

    u = -K*x + L*v

    to solve the linear optimal output regulation problem, where K* is the optimal feedback control gain, obtained by solving the dynamic optimization Problem 2, and the corresponding feedforward control gain is L* = U* + K*X*. The pair (X*, U*) is the minimizer of Problem 1. We are now ready to formulate Problems 1 and 2.

    Problem 1

    Algorithm 1 (ADP learning algorithm for solving linear optimal output regulation problems)
    1: Compute basis matrices X_0, X_1, ..., X_{h+1}.
    2: Apply u = -K_0 x + ξ on [t_0, t_s] with bounded exploration noise ξ and a stabilizing gain K_0.
    3: Select a small ε > 0. i ← 0, j ← 0.
    4: repeat
    5:   Solve P_j, K_{j+1} from (9).
    6:   j ← j + 1
    7: until |P_j - P_{j-1}| ≤ ε
    8: j* ← j, i ← i + 1
    9: repeat
    10:  Solve S(X_i) from (9).
    11: until i = h + 1
    12: Find (X*, U*).
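    Since the learning equation (9) is not reproduced here, the following sketch substitutes the standard off-policy integral relation from the ADP literature for the policy-evaluation step: phase 1 collects one batch of exploration data, and phase 2 reuses it across iterations. The plant, weights, and excitation signal are all assumptions chosen for illustration, and the final comparison against the ARE solution is a sanity check rather than part of the learning loop.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

np.random.seed(0)
# Assumed second-order plant: used only to generate data, never by the learner.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q, r = np.eye(2), 1.0                       # assumed quadratic-cost weights
K = np.zeros((1, 2))                        # initial stabilizing gain (A is Hurwitz)

# Phase 1: collect exploration data (cf. steps 1-2 of Algorithm 1).
dt, win, n_win = 1e-3, 200, 100
x = np.array([1.0, -1.0])
phi = lambda z: np.array([z[0]**2, 2*z[0]*z[1], z[1]**2])  # basis for symmetric P
Dxx, Ixx, Ixu = [], [], []
for wi in range(n_win):
    x0, acc_xx, acc_xu = x.copy(), np.zeros(4), np.zeros(2)
    for k in range(win):
        t = (wi * win + k) * dt
        u = 0.5 * sum(np.sin(f * t) for f in (1.0, 3.7, 7.2, 11.1))  # excitation
        acc_xx += np.kron(x, x) * dt        # window integral of x kron x
        acc_xu += x * u * dt                # window integral of u * x
        x = x + (A @ x + B.flatten() * u) * dt  # Euler step of the plant
    Dxx.append(phi(x) - phi(x0)); Ixx.append(acc_xx); Ixu.append(acc_xu)
Dxx, Ixx, Ixu = map(np.array, (Dxx, Ixx, Ixu))

# Phase 2: off-policy PI reusing the same data (cf. steps 4-7 of Algorithm 1).
# Each window gives one equation of the integral relation
#   x'Px |_(t0..t1) = -int x'(Q + K'rK)x + 2r int (u + Kx)(K_new x).
for j in range(20):
    b = -Ixx @ (Q + K.T * r @ K).flatten()
    W = Ixu + np.array([K @ ixx.reshape(2, 2) for ixx in Ixx]).reshape(-1, 2)
    Theta = np.hstack([Dxx, -2.0 * r * W])
    z, *_ = np.linalg.lstsq(Theta, b, rcond=None)  # unknowns: P entries, K_new
    K_new = z[3:].reshape(1, 2)
    if np.linalg.norm(K_new - K) < 1e-6:
        break
    K = K_new

P_star = solve_continuous_are(A, B, Q, r * np.eye(1))
print("learned K:", K, " ARE K*:", (B.T @ P_star) / r)  # sanity check
```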

    Note that the presented method is generalizable to the case of discrete-time linear systems with time delays. We refer the interested reader to the recent papers [74,75].

    2.2 Adaptive optimal controller design based on internal model principle

    Besides feedback-feedforward control methods, the second class of solutions to output regulation problems relies on the internal model principle [16]. The solution comes from developing a dynamic feedback controller
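    The displayed compensator is not reproduced here, but controllers of this type build on the classical p-copy internal model: r copies of a companion-form realization of the exosystem's characteristic polynomial, driven by the tracking error. A minimal construction sketch, using the characteristic polynomial in place of the minimal polynomial for simplicity:

```python
import numpy as np

def p_copy_internal_model(S, r):
    """Build p-copy internal model matrices (G1, G2) for an exosystem S
    and an r-dimensional tracking error e: zdot = G1 z + G2 e."""
    q = S.shape[0]
    coeffs = np.poly(S)            # characteristic polynomial [1, c1, ..., cq]
    beta = np.zeros((q, q))
    beta[:-1, 1:] = np.eye(q - 1)  # companion matrix of the polynomial
    beta[-1, :] = -coeffs[:0:-1]   # last row: -cq, ..., -c1
    sigma = np.zeros((q, 1)); sigma[-1, 0] = 1.0
    G1 = np.kron(np.eye(r), beta)
    G2 = np.kron(np.eye(r), sigma)
    return G1, G2

# The augmented system (plant + internal model) to be stabilized is then
#   d/dt [x; z] = [[A, 0], [G2 C, G1]] [x; z] + [[B], [G2 D]] u  (+ exo terms).
```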

    3 Learning-based robust optimal output regulation of partially linear composite systems

    The purpose of this section is to show that Theorem 1 can be generalized to solve the robust optimal output regulation problem of a class of partially linear composite systems [77]. A system in this class is an interconnection of a linear subsystem and a nonlinear subsystem named the dynamic uncertainty, which is modeled as follows:

    holds, then the robust optimal output regulation problem of the partially linear composite system (13)–(16) is solvable by the robust optimal controller u = -K*(x - X*v) + U*v.

    Even if the system dynamics A, B, E, g, Δ are unknown and ζ is unmeasurable, the gains of the robust optimal controller, K*, X*, U*, can be learned online following Algorithm 1 with u replaced by u + Δ. Please refer to [80] for more details.

    4 Game and learning-based output regulation for multi-player systems

    There is only one player in all the system models considered in the previous sections. In this section, we study the non-zero-sum game output regulation problem for continuous-time multi-player linear systems. Our goal is to learn the Nash equilibrium through online data collected along the system trajectories. Linear continuous-time systems with multiple players are described by

    Remark 1 Note that it is usually difficult to solve (31) analytically, as it is a system of coupled nonlinear equations. Interestingly, one can leverage the data-driven ADP algorithm proposed in [83] to numerically approximate the solution to (31) and the corresponding feedback and feedforward control gains via online data.
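    Although (31) itself is not reproduced here, coupled equations of this kind are often approximated numerically by a best-response iteration: each player re-solves its own ARE with the other players' current gains frozen in the drift. The sketch below illustrates this for an N-player linear-quadratic game; it is a heuristic illustration (convergence is not guaranteed in general), not the algorithm of [83].

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def coupled_are_iteration(A, Bs, Qs, Rs, n_iter=100, tol=1e-9):
    """Best-response iteration for the coupled AREs of an N-player
    nonzero-sum LQ game. Bs, Qs, Rs are lists of per-player matrices."""
    N, n = len(Bs), A.shape[0]
    Ks = [np.zeros((B.shape[1], n)) for B in Bs]
    for _ in range(n_iter):
        Ks_old = [K.copy() for K in Ks]
        for i in range(N):
            # Drift seen by player i with the other players' policies fixed
            Ai = A - sum(Bs[j] @ Ks[j] for j in range(N) if j != i)
            P = solve_continuous_are(Ai, Bs[i], Qs[i], Rs[i])
            Ks[i] = np.linalg.solve(Rs[i], Bs[i].T @ P)
        if max(np.linalg.norm(Ks[i] - Ks_old[i]) for i in range(N)) < tol:
            break
    return Ks
```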

    The following theorem discusses the convergence of the ADP algorithm and the tracking ability of the closed-loop system.

    As an extension,we have recently solved the global robust optimal output regulation problem of partially linear composite systems with both static and dynamic uncertainties in[84].In order to overcome this challenge,we have combined game theory,small-gain theory[85],output regulation theory and ADP techniques.

    5 Learning-based cooperative optimal output regulation of multi-agent systems

    In this section, we present data-driven distributed control methods to solve the cooperative optimal output regulation problem of leader-follower multi-agent systems. Different from existing work, a distributed adaptive internal model is developed, which is composed of a distributed internal model and a distributed observer that mimics the leader's dynamics and behavior.

    Consider a class of linear multi-agent systems

    Note that the cooperative output regulation problem is solved if one designs a control policy such that the closed-loop multi-agent system is asymptotically stable (in the absence of v) and lim_{t→∞} e_i(t) = 0 for i = 1, 2, ..., N. We show in Lemma 1 that the cooperative output regulation problem is solvable by developing a distributed adaptive internal model.

    is Hurwitz for all the followers. Based on Lemma 1, the designed optimal controller (52) can be used to solve the cooperative optimal output regulation problem.

    Note that PI and VI are two typical ADP methods for adaptive optimal control problems. We will concentrate on designing data-driven adaptive optimal control policies based on PI and VI, and on solving the cooperative optimal output regulation problem in a model-free manner.

    To begin with, note that the leader's state information is required by all followers. Although we cannot measure this information directly from the multi-agent system, we can develop an estimator of the leader's state v,

    where μ2 > 0.
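    A common form of such an estimator is a distributed observer: each follower propagates a local copy of the exosystem and corrects it through the communication graph. The sketch below assumes a three-follower path graph in which only the first follower measures v directly; the gain mu plays the role of μ2, and all numerical values are illustrative.

```python
import numpy as np

# Distributed observer of the leader state v (one common form):
#   dot(vhat_i) = S vhat_i + mu * ( sum_j a_ij (vhat_j - vhat_i)
#                                   + a_i0 (v - vhat_i) )
np.random.seed(1)
S = np.array([[0.0, 1.0], [-1.0, 0.0]])   # marginally stable exosystem
Adj = np.array([[0, 1, 0],                # follower-to-follower weights
                [1, 0, 1],
                [0, 1, 0]])
a0 = np.array([1, 0, 0])                  # only follower 1 sees the leader
mu, dt, steps = 5.0, 1e-3, 20000
v = np.array([1.0, 0.0])
vhat = np.random.randn(3, 2)              # followers' initial estimates
for _ in range(steps):
    dvhat = vhat @ S.T + mu * (
        Adj @ vhat - Adj.sum(axis=1, keepdims=True) * vhat
        + a0[:, None] * (v - vhat))
    vhat = vhat + dvhat * dt              # Euler step of each observer
    v = v + S @ v * dt                    # Euler step of the exosystem
print(np.linalg.norm(vhat - v, axis=1))   # estimation errors become small
```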

    5.1 Online PI design

    To begin with, we rewrite the augmented system (34)–(38) as

    5.2 Online VI design

    Essentially, the VI algorithm updates the value matrix and the control gain by
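    The displayed update is not reproduced here; the sketch below uses the standard continuous-time VI recursion from the ADP literature as a stand-in. Its practical appeal, in contrast to PI, is that no initially stabilizing gain is required.

```python
import numpy as np

def value_iteration_ct(A, B, Q, R, n_iter=20000):
    """Model-based sketch of continuous-time value iteration:
        P <- P + eps_k * (A^T P + P A + Q - P B R^{-1} B^T P),
    started from P = 0 with a diminishing step-size sequence eps_k."""
    n = A.shape[0]
    P = np.zeros((n, n))
    Rinv = np.linalg.inv(R)
    for k in range(n_iter):
        eps = 1.0 / (k + 10)               # diminishing step size
        ricc = A.T @ P + P @ A + Q - P @ B @ Rinv @ B.T @ P
        P = P + eps * ricc                 # one VI update of the value matrix
    K = Rinv @ B.T @ P                     # control gain induced by P
    return P, K
```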

    It is interesting to note that the proposed research can be extended to the case of cooperative optimal output regulation of discrete-time multi-agent systems [87], and to cooperative adaptive optimal control of continuous-time multi-agent systems with guaranteed leader-to-formation stability (LFS) [45], which indicates how the leader's inputs and disturbances affect the stability of the formation.

    6 Learning-based cooperative robust optimal output regulation of partially linear multi-agent systems

    The solution presented in the previous section does not take dynamic uncertainty into consideration. In this section, we aim at solving the cooperative robust optimal output regulation of multi-agent systems in the presence of dynamic uncertainties.

    To begin with, consider a class of heterogeneous and multi-agent systems

    By linear optimal control theory and output regulation theory, the decentralized optimal controller can be designed by solving AREs. However, this controller is designed under two strong conditions: (1) Ψi ≡ 0, and (2) each agent can directly communicate with the exosystem. We will present an approach based on cyclic-small-gain theory and robust adaptive dynamic programming which removes Condition (1) and relaxes Condition (2) to Assumption 6. More specifically, under Assumption 6, the robust distributed controller is developed as follows:

    Then, the system (49) in closed loop with (52)–(53) achieves cooperative output regulation.

    One can leverage the robust ADP algorithm in [88] to learn the control gains K_i* and L_i*. The convergence has also been rigorously analyzed therein.

    7 Learning-based adaptive optimal output regulation of nonlinear systems

    In this section, we focus on the adaptive optimal output regulation problem of continuous-time nonlinear systems. Consider the class of strict-feedback nonlinear systems described by

    where Q : R^n → R is positive definite and proper, r is a positive constant, and the initial conditions are x_0 = x(0) and v_0 = v(0).

    Letting ξ(v) = [ξ1(v), ..., ξn(v)]^T, the nonlinear optimal output regulation problem is formulated as follows:

    Then, we present a model-based PI method starting from an admissible control policy u1:
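    The displayed iteration is not reproduced here, but model-based PI for nonlinear systems alternates two steps: policy evaluation solves the generalized HJB equation ∇V_i(x)(f(x) + g(x)u_i(x)) + Q(x) + r u_i(x)^2 = 0, and policy improvement sets u_{i+1} = -g ∇V_i / (2r). The sketch below enforces the evaluation step in least squares on sampled states with a polynomial value basis for an assumed scalar plant; every model ingredient here is an illustrative assumption.

```python
import numpy as np

f = lambda x: -x + x**3           # hypothetical drift (stable near the origin)
g = lambda x: 1.0                 # hypothetical input gain
Qc = lambda x: x**2               # assumed state cost
r = 1.0                           # assumed control weight

basis  = lambda x: np.array([x**2, x**4, x**6])       # value-function basis
dbasis = lambda x: np.array([2*x, 4*x**3, 6*x**5])    # its derivative
xs = np.linspace(-0.8, 0.8, 81)
xs = xs[np.abs(xs) > 1e-3]        # drop the trivial equation at the origin

u = lambda x: -2.0 * x            # admissible initial policy on this region
for _ in range(10):
    # Policy evaluation in least squares: dV/dx (f + g u) = -(Q + r u^2)
    Phi = np.array([dbasis(x) * (f(x) + g(x) * u(x)) for x in xs])
    b = np.array([-(Qc(x) + r * u(x) ** 2) for x in xs])
    w, *_ = np.linalg.lstsq(Phi, b, rcond=None)
    # Policy improvement: u_{i+1} = -g dV_i/dx / (2 r)
    u = lambda x, w=w: -g(x) * (w @ dbasis(x)) / (2 * r)
print("value weights:", w)
```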

    7.1 Phase-one learning:solving regulator equations

    7.2 Phase-two learning:solving HJB equations

    Theorem 6 [44] Consider the nonlinear plant (55) and the exosystem (2) in closed loop with the approximate optimal controller obtained by [44, Algorithm 2]. Then, the following properties hold:

    (1) The trajectory of the closed-loop system is bounded for any t ≥ 0.

    (2) The tracking error e(t) is uniformly ultimately bounded, with an arbitrarily small ultimate bound.

    Notice that the results in this section have been generalized to solve the cooperative optimal output regulation problem of nonlinear discrete-time systems in [91].

    8 Applications

    The aim of this section is to demonstrate the broad applicability and efficiency of the developed learning-based adaptive optimal output regulation approaches. Using semi-autonomous vehicles as an example, we validate the learning-based adaptive optimal output regulation approach. We further apply the learning-based cooperative output regulation approaches to connected and autonomous vehicles. Last but not least, we apply learning-based nonlinear output regulation approaches to Van der Pol oscillators.

    8.1 Application to semi-autonomous vehicles

    In this section, we present a data-driven shared control framework in which the driver and the semi-autonomous vehicle jointly achieve the desired steering performance. The terminology semi-autonomous refers to the situation in which an auxiliary copilot controller and the human driver manipulate the vehicle simultaneously. By leveraging small-gain theory, we have developed shared steering controllers that do not rely on the unmeasurable internal states of the human driver. Furthermore, by adopting data-driven ADP and an iterative learning scheme, the shared steering controller is learned from real-time data collected along the trajectories of the interconnected human-vehicle system.

    Fig.2 Vehicle model illustration

    The vehicle model for steering control is described by

    Fig.3 Road curvature profile

    Fig. 4 Convergence of Ui during driving with different chosen Q values

    Fig. 5 Lane-keeping performance comparison between driver and shared control strategies with different Q values

    8.2 Application to connected and autonomous vehicles

    where u_i represents the desired acceleration of vehicle i. For k = i-1, i, x_k = [Δh_k, Δv_k, a_k]^T includes the headway and velocity errors and the acceleration of vehicle k.

    Fig. 6 Buses in closed loop with the data-driven CACC controller, a traffic simulation via Paramics

    Fig. 7 Comparison of the learned value matrix P_i^j and the optimal value matrix P_i*

    Remark 2Another scenario is considered in [96] where a platoon ofnhuman-driven vehicles is followed by an autonomous vehicle. A data-driven connected cruise control has been designed therein.We have further included the vehicle-to-infrastructure (V2I) communication and developed data-driven predictive cruise control approaches in[97,98].

    The ADP algorithm to learn theK*iandP*ihas been proposed in [94] and validated by the Paramics micro-traffic simulation.

    Figure 6 shows the buses (in green) operating under the data-driven control algorithm in the traffic simulation. These buses operate with roughly the same headway. We have selected a platoon of 4 autonomous buses on the exclusive bus lane to observe the convergence. The learned value P_i^j and the corresponding optimal value P_i* of the i-th bus are compared in Fig. 7. The comparison of the learned control gain and the optimal gain is shown in Fig. 8, which shows that the learning stops within 15 iterations for all the autonomous buses.

    Fig. 8 Comparison of the learned control gain K_i^j and the optimal gain K_i*

    8.3 Application to Van der Pol oscillators

    Consider a nonlinear Van der Pol oscillator modeled by

    where the system parameter ι is unknown but lies within the range [-0.5, -0.3]. The exosystem (2) is a marginally stable autonomous system modeled by

    To validate the effectiveness of the nonlinear adaptive optimal output regulation techniques, we have applied [44, Algorithms 1 and 2] to approximate the optimal controller via input and state data collected from the oscillator.

    To generate online data, we set ι = -0.5 and the initial values of the (exo)states to ξ(0) = [2, -4]^T and v(0) = [0, 0.5]^T. Initially, the control input u(t) is a summation of sinusoidal noises with different frequencies in order to excite the system; the cost under this initial policy corresponds to the value function obtained at the first iteration, V1(x, v), in Figs. 11 and 12. One can check that the cost of the closed-loop system is reduced dramatically after the online learning.
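    A minimal simulation sketch of this data-generation phase is given below. The Van der Pol state equations and the exosystem frequency are assumptions (the exact models are specified in [44]); only the initial conditions, ι, and the sum-of-sinusoids excitation follow the description above.

```python
import numpy as np

iota = -0.5
S = np.array([[0.0, 1.0], [-1.0, 0.0]])  # assumed marginally stable exosystem
dt, T = 1e-3, 10.0
xi = np.array([2.0, -4.0])               # oscillator state, xi(0) = [2, -4]^T
v = np.array([0.0, 0.5])                 # exostate, v(0) = [0, 0.5]^T
freqs = np.array([1.0, 2.3, 3.7, 5.1, 7.9])
data = []
for k in range(int(T / dt)):
    t = k * dt
    u = np.sum(np.sin(freqs * t))        # sum-of-sinusoids exploration input
    # Assumed Van der Pol form: xi1' = xi2, xi2' = iota(1 - xi1^2)xi2 - xi1 + u
    dxi = np.array([xi[1],
                    iota * (1.0 - xi[0] ** 2) * xi[1] - xi[0] + u])
    xi = xi + dxi * dt                   # Euler step of the oscillator
    v = v + S @ v * dt                   # Euler step of the exosystem
    data.append((t, xi.copy(), v.copy(), u))  # stored for the learning phase
print(len(data), "samples collected")
```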

    Fig.9 System state and reference trajectories

    Fig.10 Evolution of weights of value function and control policy

    Fig. 11 System costs with different control policies under v1 = 0.5 and v2 =0

    Fig.12 System costs with different control policies under v1 = v2 =0.3536

    9 Summary and outlook

    This paper is an overview of recent progress in output regulation and ADP, including our work on learning-based adaptive optimal output regulation. The proposed learning-based framework is different from traditional output regulation approaches, which are mostly model-based. It also enhances the practicability of ADP, as it considers non-vanishing disturbances, time-varying references, and dynamic uncertainties in the control system together. We have shown in this overview that the framework can be used to solve the adaptive optimal output regulation of dynamic systems described by different models, which attests to its wide applicability.

    Future research directions under the proposed framework include learning-based robust optimal output regulation of nonlinear systems with dynamic uncertainties, learning-based stochastic adaptive optimal output regulation with unmeasurable noises, and learning-based resilient output regulation under malicious cyberattacks.
