
    Adaptive Linear Quadratic Regulator for Continuous-Time Systems With Uncertain Dynamics

2020-05-21 05:44:54
IEEE/CAA Journal of Automatica Sinica, 2020, Issue 3

Sumit Kumar Jha and Shubhendu Bhasin

Abstract—In this paper, an adaptive linear quadratic regulator (LQR) is proposed for continuous-time systems with uncertain dynamics. The dynamic state-feedback controller uses input-output data along the system trajectory to continuously adapt and converge to the optimal controller. The result differs from previous results in that the adaptive optimal controller is designed without knowledge of the system dynamics or an initial stabilizing policy. Further, the controller is updated continuously using input-output data, as opposed to the commonly used switched/intermittent updates, which can potentially lead to stability issues. An online state derivative estimator facilitates the design of a model-free controller. Gradient-based update laws are developed for online estimation of the optimal gain. Uniform exponential stability of the closed-loop system is established using a Lyapunov-based analysis, and a simulation example is provided to validate the theoretical contribution.

I. INTRODUCTION

THE development of the infinite-horizon linear quadratic regulator (LQR) [1] has been one of the most important contributions in linear optimal control theory. The optimal control law for the LQR problem is expressed in state-feedback form, where the optimal gain is obtained from the solution of a nonlinear matrix equation, the algebraic Riccati equation (ARE). The solution of the ARE requires exact knowledge of the system matrices and is typically found offline, a major impediment to online real-time control.

Recent research has focused on solving the optimal control problem using iterative, data-driven algorithms which can be implemented online and require minimal knowledge of the system dynamics [2]–[15]. In [2], Kleinman proposed a computationally efficient procedure for solving the ARE by iterating on the solution of the linear Lyapunov equation, with proven convergence to the optimal policy for any initial condition. The Newton-Kleinman algorithm [2], although offline and model-based, paved the way for a class of reinforcement learning (RL)/approximate dynamic programming (ADP)-based algorithms which utilize data along the system trajectory to learn the optimal policy [4], [7], [10], [16]–[18]. Strong connections between RL/ADP and optimal control have been established [19]–[23], and several RL algorithms, including policy iteration (PI), value iteration (VI) and Q-learning, have been adapted for optimal control problems [4], [7]–[9], [13], [22], [24]. Initial research on adaptive optimal control was mostly concentrated in the discrete-time domain due to the recursive nature of RL/ADP algorithms. An important contribution in [4] is the development of a model-free PI algorithm using Q-functions for discrete-time adaptive linear quadratic control. The iterative RL/ADP algorithms have since been applied to various discrete-time optimal control problems [25]–[27].
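For concreteness, the following is a minimal sketch of the Newton-Kleinman iteration [2] described above. It is model-based (A and B appear explicitly) and needs a stabilizing initial gain, which is exactly what later sections of this paper avoid; the example matrices are illustrative and not from the paper.

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

def kleinman(A, B, Q, R, K0, iters=20):
    """Newton-Kleinman iteration [2]: each step solves the *linear* Lyapunov
    equation (A - B K)^T P + P (A - B K) = -(Q + K^T R K), then updates
    K <- R^{-1} B^T P.  Requires known (A, B) and a stabilizing initial K0."""
    K = K0
    for _ in range(iters):
        Ac = A - B @ K
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Example: an open-loop-stable A lets K0 = 0 serve as the stabilizing start.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P, K = kleinman(A, B, Q, R, K0=np.zeros((1, 2)))
assert np.allclose(P, solve_continuous_are(A, B, Q, R), atol=1e-8)
```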

Extension to continuous-time systems entails challenges in controller development and convergence/stability proofs. One of the first adaptive optimal controllers for continuous-time systems is proposed in [17], where a model-based algorithm is designed using a continuous-time version of the temporal difference (TD) error. Model-free RL algorithms for continuous-time systems are proposed in [22], which require measurement of the state derivatives. In Chapter 7 of [3], an indirect adaptive optimal linear quadratic (ALQ) controller is proposed, where the unknown system parameters are identified using an online adaptive update law, and the ARE is solved at every time instant using the current parameter estimates. However, the algorithm may become computationally prohibitive for higher dimensional systems, owing to the need for solving the ARE at every time instant. More recently, partially model-free PI algorithms are developed in [7], [24] for linear systems with unknown internal dynamics. In [9], [10], the idea in [7] is extended to adaptive optimal control of linear systems with completely unknown dynamics. In another significant contribution [6], the connections between Q-learning and Pontryagin's minimum principle are established, based on which an off-policy control algorithm is proposed.

A common feature of RL algorithms adapted for continuous-time systems is the requirement of an initial stabilizing policy [7], [9], [10], [18], [24], and a batch least squares estimation algorithm leading to intermittent updates of the control policy [7], [9]. Finding an initial stabilizing policy for systems with unknown dynamics may not always be possible. Further, the intermittent control policy updates in [7], [9], [18] render the control law discontinuous, potentially leading to challenges in proving stability. Moreover, many adaptive optimal control algorithms require the implementation of delayed-window integrals to construct the regressor/design update laws [5], [7], [9], [14], and an "intelligent" data storage mechanism (a procedure for populating an independent set of data) [5], [7], [9], [10] to satisfy an underlying full-rank condition. The computation of delayed-window integrals of functions of the states requires past data storage for the time interval [t − T, t], ∀t > 0, where t and T are the current time instant and the window length, respectively, which demands significant memory consumption, especially for large scale systems.

Recent works in [8], [11], [13] have cast the continuous-time RL problem in an adaptive control framework with continuous policy updates, without the need for an initial stabilizing policy. However, for continuous-time RL, it is not straightforward to develop a fixed-point equation for the parameter updates which is independent of the knowledge of the system dynamics and state derivatives. A synchronous PI algorithm for known system dynamics is developed in [8], which is extended to a partially model-free method using a novel actor-critic-identifier architecture [11]. For input-constrained systems with completely unknown dynamics, a PI and neural network (NN) based adaptive control algorithm is proposed in [13]. However, the work in [13] utilizes past stored data along with the current data for the identifier design, while guaranteeing bounded convergence of the critic weight estimation error for bounded NN reconstruction error.

The contribution of this paper is the design of a continuous-time adaptive LQR with a time-varying state-feedback gain, which is shown to exponentially converge to the optimal gain. The novelty of the proposed result lies in the computation- and memory-efficient algorithm used to solve the optimal control problem for uncertain dynamics, without requiring an initial stabilizing control policy, unlike previous results which either use an initial stabilizing control policy and a switched policy update [5], [7], [9], [10], or past data storage [5], [7], [9], [10], [28], [29], or memory-intensive delayed-window integrals [5], [7], [9], [14]. The result in this paper is facilitated by the development of a fixed point equation which is independent of the system matrices, and by the design of a state derivative estimator. A gradient-based update law is devised for online adaptation of the state-feedback gain, and convergence to the optimal gain is shown, provided a uniform persistence of excitation (u-PE) condition [30], [31] on the state-dependent regressor is satisfied. The u-PE condition, although restrictive in its verification and implementation, establishes the theoretical requirements for convergence of the adaptive linear quadratic controller proposed in the paper. A Lyapunov analysis is used to prove uniform exponential stability of the overall system.

This paper is organized as follows. Section II discusses the primary concepts of linear optimal control, the problem formulation, and subsequently the general methodology. The proposed model-free adaptive optimal control design, along with the state derivative estimator, is described in Section III. Convergence and exponential stability of the proposed result are shown in Section IV. Finally, an illustrative example is given in Section V.

Notations: Throughout this paper, R is used to denote the set of real numbers. The operator ∥·∥ designates the Euclidean norm for vectors and the induced norm for matrices. The symbol ⊗ denotes the Kronecker product operator, and vec(Z) ∈ R^{qr} denotes the vectorization of the argument matrix Z ∈ R^{q×r}, obtained by stacking the columns of the argument matrix on top of one another. The operators λmin(·) and λmax(·) denote the minimum and maximum eigenvalues of the argument matrix, respectively. The symbol Bd denotes the open ball Bd = {z ∈ R^{n(n+m)} : ∥z∥ < d}. The following properties of the vec operator are used:

1) vec(DEF) = (Fᵀ ⊗ D)vec(E), where the matrix multiplication (DEF) is defined.

2) vec(D + E + F) = vec(D) + vec(E) + vec(F), where the matrix summation (D + E + F) is defined.

The identity aᵀDb = (b ⊗ a)ᵀvec(D), where a, b are vectors, D is a matrix and the multiplication (aᵀDb) is defined, has also been used.
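These vec/Kronecker identities can be checked numerically; a quick NumPy sketch (column-major vectorization, random matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
vec = lambda M: M.reshape(-1, order="F")  # column-stacking vectorization

# 1) vec(DEF) = (F^T kron D) vec(E)
D, E, F = rng.normal(size=(3, 4)), rng.normal(size=(4, 5)), rng.normal(size=(5, 2))
assert np.allclose(vec(D @ E @ F), np.kron(F.T, D) @ vec(E))

# 2) vec is linear, so vec(D + E + F) = vec(D) + vec(E) + vec(F); and
#    a^T D b = (b kron a)^T vec(D)
a, b, M = rng.normal(size=3), rng.normal(size=4), rng.normal(size=(3, 4))
assert np.allclose(a @ M @ b, np.kron(b, a) @ vec(M))
```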

II. PRELIMINARIES AND PROBLEM FORMULATION

Consider a continuous-time deterministic LTI system given as

ẋ(t) = Ax(t) + Bu(t)    (1)

where x(t) ∈ R^n denotes the state and u(t) ∈ R^m denotes the control input. A ∈ R^{n×n} and B ∈ R^{n×m} are constant unknown matrices, and (A, B) is assumed to be controllable.

The infinite horizon quadratic value function can be defined as the total cost starting from state x(t) and following a fixed control action u(t) from time t onwards as

V(x(t)) = ∫_t^∞ (xᵀ(τ)Qx(τ) + uᵀ(τ)Ru(τ)) dτ    (2)

where Q ∈ R^{n×n} is symmetric positive semi-definite with (Q, A) being observable, and R ∈ R^{m×m} is a positive definite matrix.

When A and B are accurately known, the standard LQR problem is to find the optimal policy by minimizing the value function (2) with respect to the policy u, which yields

u*(t) = −K*x(t)    (3)

where K* = R⁻¹BᵀP* ∈ R^{m×n} is the optimal control gain matrix and P* ∈ R^{n×n} is the constant positive definite matrix solution of the ARE [32]

AᵀP* + P*A + Q − P*BR⁻¹BᵀP* = 0.    (4)
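When the model is available, (3) and (4) can be evaluated offline with standard tools; a minimal sketch using SciPy's ARE solver (the matrices below are illustrative, not from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Offline model-based LQR: solve A^T P + P A + Q - P B R^{-1} B^T P = 0,
# then K* = R^{-1} B^T P*.  Requires exact knowledge of A and B.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])   # illustrative system matrices
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

P_star = solve_continuous_are(A, B, Q, R)
K_star = np.linalg.solve(R, B.T @ P_star)
u = lambda x: -K_star @ x                  # optimal policy u* = -K* x
```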

Remark 1: It is obvious that solving the ARE for P* requires knowledge of the system matrices A and B; however, when information about A and B is unavailable, it is challenging to determine P* and K* online.

    The following assumptions are required to facilitate the subsequent design.

Assumption 1: The optimal Riccati matrix P* is upper bounded as ∥P*∥ ≤ α1, where α1 is a known positive scalar constant.

Assumption 2: The optimal gain matrix K* is upper bounded as ∥K*∥ ≤ α2, where α2 is a known positive scalar constant.

For the linear system in (1), the optimal value function can be written as a quadratic function [33]

V*(x) = xᵀP*x.    (5)

To facilitate the development of the model-free LQR, differentiate (5) with respect to time and use the system dynamics (1) to obtain

V̇*(x) = xᵀ(AᵀP* + P*A)x + 2uᵀBᵀP*x.    (6)

Using (4), (6) reduces to

V̇*(x) = −xᵀQx + xᵀK*ᵀRK*x + 2uᵀRK*x.    (7)

The LHS of (7) can be written as 2xᵀP*ẋ by considering (5), which is then substituted in (7) to obtain

2xᵀP*ẋ = −xᵀQx + xᵀK*ᵀRK*x + 2uᵀRK*x.    (8)

The expression in (8) acts as the fixed point equation, used to define D ∈ R as the difference between the LHS and RHS of (8)

D ≜ 2xᵀP*ẋ + xᵀQx − xᵀK*ᵀRK*x − 2uᵀRK*x.    (9)

Remark 2: The motivation behind the formulation of (9) is to represent the fixed point equation in a model-free way, without using memory-intensive delayed-window integrals, and subsequently to design a parameter estimation algorithm to learn P* and K* without knowledge of the system matrices A and B.
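To make this concrete, the derivation of (8)-(9) suggests a residual that is computable from (x, u), a derivative estimate, and the current parameter estimates, with no A or B involved. The following is a hedged sketch of such a residual; the paper's exact displayed forms of (8)-(11) are not reproduced in this text.

```python
import numpy as np

def bellman_residual(x, u, xdot_hat, P_hat, K_hat, Q, R):
    """Model-free residual consistent with (8): along any trajectory,
    2 x^T P* xdot = -x^T Q x + x^T K*^T R K* x + 2 u^T R K* x, which
    involves no A or B.  Replacing (P*, K*, xdot) with the estimates
    (P_hat, K_hat, xdot_hat) gives a measurable error signal."""
    lhs = 2.0 * x @ P_hat @ xdot_hat
    rhs = -x @ Q @ x + x @ K_hat.T @ R @ K_hat @ x + 2.0 * u @ R @ K_hat @ x
    return lhs - rhs
```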

III. OPTIMAL CONTROL DESIGN FOR COMPLETELY UNKNOWN LTI SYSTEMS

In (9), P* and K* are unknown parameter matrices, and the objective is to estimate these parameters using gradient-based update laws.

Gradient-based update laws are developed which minimize the squared error Ξ ∈ R, defined as Ξ = E²/2, where the Bellman error E in (11) is the measurable counterpart of D in (9), obtained by replacing the unknown quantities with their estimates. The parameters are updated by gradient descent on Ξ with adaptation gains ν ∈ R⁺ and νk ∈ R⁺; substituting the gradients of Ξ with respect to the estimates P̂ and K̂ yields the normalized update laws (12) and (13).

The continuous policy update is given as

u(t) = −K̂(t)x(t).    (14)
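A minimal Euler-discretized sketch of gradient descent on Ξ = E²/2 in the spirit of (12)-(14) follows. The residual, gradients, and normalization shown are hypothesized from the sketch above (the gains ν = 35, νk = 55, ηk = 5 echo the values used in Section V); the paper's exact update laws should be consulted.

```python
import numpy as np

def update_step(x, u, xdot_hat, P_hat, K_hat, Q, R,
                nu=35.0, nu_k=55.0, eta_k=5.0, dt=1e-3):
    """One Euler step of gradient descent on Xi = E^2/2 (hypothesized forms)."""
    E = (2.0 * x @ P_hat @ xdot_hat + x @ Q @ x
         - x @ K_hat.T @ R @ K_hat @ x - 2.0 * u @ R @ K_hat @ x)
    gP = 2.0 * E * np.outer(x, xdot_hat)              # dXi/dP_hat
    gK = -2.0 * E * R @ np.outer(K_hat @ x + u, x)    # dXi/dK_hat
    norm = 1.0 + eta_k * float(x @ x) ** 2            # hypothetical normalization
    P_hat = P_hat - dt * nu * gP / norm
    K_hat = K_hat - dt * nu_k * gK / norm
    return P_hat, K_hat

# The continuous policy (14) is then applied as u = -K_hat @ x while K_hat adapts.
```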

The design of the state derivative estimator, mentioned in (11) and (12), is facilitated by expressing the system dynamics (1) in the linear-in-the-parameters (LIP) form

ẋ = Y(x, u)θ    (15)

where Y(x, u) ∈ R^{n×n(n+m)} is the regressor matrix and θ ∈ R^{n(n+m)} is the unknown parameter vector, defined in (16) in terms of the entries of A and B.

Assumption 3: The system parameter vector θ in (16) is upper bounded as ∥θ∥ ≤ a1, where a1 is a known positive constant.

The state derivative estimator is designed in (17), with the state estimation error dynamics in (18) and the system parameter update law in (19), where Γ ∈ R^{n(n+m)×n(n+m)} is a constant positive definite gain matrix.
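A sketch of one standard identifier structure consistent with the LIP form (15) is given below. The choice Y(x, u) = [xᵀ uᵀ] ⊗ I_n with θ = vec([A B]) is one consistent reading of (15)-(16), and the observer/update-law forms are assumptions standing in for the paper's (17)-(19).

```python
import numpy as np

def Y(x, u):
    """Regressor of the LIP form (15): Y(x, u) = [x; u]^T (Kronecker) I_n,
    so that Y(x, u) @ vec([A B]) = A x + B u (one consistent choice of (16))."""
    return np.kron(np.concatenate([x, u]), np.eye(x.size))

def estimator_step(x, u, x_hat, theta_hat, L, Gamma, dt=1e-3):
    """One Euler step of a standard gradient identifier, assumed here as a
    stand-in for (17)-(19); the paper's exact forms may differ."""
    x_tilde = x - x_hat                       # state estimation error
    Yxu = Y(x, u)
    xdot_hat = Yxu @ theta_hat + L @ x_tilde  # derivative estimate fed to (11)-(12)
    x_hat = x_hat + dt * xdot_hat             # integrate the observer state
    theta_hat = theta_hat + dt * (Gamma @ Yxu.T @ x_tilde)  # parameter adaptation
    return x_hat, theta_hat, xdot_hat
```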

Lemma 1: The update laws in (17) and (19) ensure that the state estimation and system parameter estimation error dynamics are Lyapunov stable ∀t ≥ 0.

Proof: Consider a positive-definite Lyapunov function candidate as in (20). Taking the time derivative of (20) and substituting the error dynamics from (18), the resulting derivative can be shown to be negative semi-definite. The Lyapunov function candidate is therefore non-increasing and bounded, which implies that the state estimation and system parameter estimation errors remain bounded ∀t ≥ 0. ■

Remark 3: Assumptions 1 and 2 are standard assumptions required for projection-based adaptive algorithms, frequently used in the robust adaptive control literature ([3], Chapter 11 of [36], Chapter 3 of [37], [38]). In fact, in the context of adaptive optimal control, analogous to Assumptions 1 and 2, many existing results [8], [11], [13], [14], [29] assume a known upper bound on the unknown parameters associated with the value function, an essential requirement for proving stability of the closed-loop system. Although the true system parameters (A and B) are unknown, a range of operating values (a compact set containing the true values of the elements of A and B) may be known in many cases from domain knowledge of the plant. By performing a uniform sampling over the known compact set and solving the ARE offline with those samples, a set of Riccati matrices can be obtained, and hence the upper bounds (α1 and α2) assumed in Assumptions 1 and 2 can be conservatively estimated using this set. Moreover, the proposed algorithm serves as an effective approach for the case where it is hard to obtain an initial stabilizing policy for uncertain systems.

IV. CONVERGENCE AND STABILITY

A. Development of Controller Parameter Estimation Error Dynamics

The controller parameter estimation error dynamics for the gain estimation error K̃ can be obtained using (11) and (13) as (22). Applying the vec operator to (22) yields the expression in (23), with the corresponding state-dependent regressor φk(z, t). Using (15) and (23), the system dynamics in terms of the error state z(t), defined in (24), can be expressed as (25), where F ∈ R^{n(1+m)} is a vector-valued function containing the right-hand sides of (15) and (23).

Assumption 4: The pair (φk, F) is u-PE, i.e., PE uniformly in the initial conditions (z0, t0): for each d > 0, there exist ε, δ > 0 such that, ∀(z0, t0) ∈ Bd × [0, ∞), all corresponding solutions satisfy

∫_t^{t+δ} φk(z(τ), τ)φkᵀ(z(τ), τ) dτ ≥ εI    (26)

∀t ≥ t0 [30].
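Condition (26) can be probed numerically along a recorded trajectory by approximating the window integral and inspecting its smallest eigenvalue; a small helper sketch (the window length δ and excitation level ε are trajectory-dependent assumptions):

```python
import numpy as np

def excitation_level(phi_samples, dt):
    """Riemann approximation of the window integral in (26): returns the
    smallest eigenvalue of sum_k phi_k phi_k^T * dt over one window of
    regressor samples (array of shape (N, p)).  u-PE asks this level to
    exceed some eps > 0 for every window along every solution."""
    M = dt * sum(np.outer(p, p) for p in phi_samples)
    return float(np.linalg.eigvalsh(M).min())
```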

Remark 4: Since the regressor φk(z, t) in (23) is state dependent, the u-PE condition in (26), which is uniform in the initial conditions, is used instead of the classical PE condition, in which the regressor is only a function of time and not of the states, e.g., where the objective is identification (Section 2.5 of [39]).

Remark 5: In adaptive control, the convergence of the system and control parameter error vectors depends on the excitation of the system regressors. This excitation property, typically known as persistence of excitation (PE), is necessary to achieve perfect identification and adaptation. The PE condition, although restrictive in its verification and implementation, is typically imposed by using a reference input with as many spectral lines as the number of unknown parameters [40]. The u-PE condition in Assumption 4 may be satisfied by adding a probing exploratory signal to the control input [4], [8], [11], [13], [41]. This signal can be removed once the parameter estimate converges to the optimal control policy, after which exact regulation of the system states is achieved. Exact regulation of the system states in the presence of a persistently exciting signal can also be achieved by following the method given in [42], in which the PE property is generated in a finite time interval by an asymptotically decaying "rich" feedback law.
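A typical probing signal of the kind referenced here, and used again in Section V, is sketched below; the frequencies and amplitude are illustrative assumptions, not values from the paper.

```python
import numpy as np

def exploration(t, active=True, amp=0.1):
    """Probing signal: a sum of sinusoids with pairwise-irrational frequency
    ratios, added to the control input and switched off (active=False) once
    the gain estimate has converged."""
    if not active:
        return 0.0
    freqs = np.array([1.0, np.sqrt(2.0), np.e, np.pi])
    return amp * float(np.sum(np.sin(freqs * t)))
```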

The expression in (23) can be represented as a perturbed system, given in (27).

For each d > 0, the dynamics of the nominal system in (28) can be shown to be uniformly exponentially stable ∀(z0, t0) ∈ Bd × [0, ∞) by using Assumption 4, (25) and Lemma 5 of [31].

Since F is continuously differentiable and its Jacobian is bounded for the nominal system (28), it can be shown, by referring to the converse Lyapunov Theorem 4.14 in [43] and the definitions and results in [31], [44], that there exists a Lyapunov function Vc which satisfies the inequalities in (29), for some positive constants d1, d2, d3, d4 ∈ R.

B. Lyapunov Stability Analysis

Theorem 1: If Assumption 4 holds, the adaptive optimal controller (14), along with the parameter update laws (12) and (13) and the state derivative estimators (17) and (19), guarantees that the system states and the controller parameter estimation errors z(t) are uniformly exponentially stable ∀t ≥ 0, provided z(0) ∈ Ω, where the set Ω is defined in the proof below.¹

¹The initial condition region Ω can be increased by appropriately choosing the user-defined matrices Q, R, and by tuning the design parameters ν, νk and ηk.

Proof: A positive-definite, continuously differentiable Lyapunov function candidate VL : Bd × [0, ∞) → R is defined for each d > 0 as the sum VL = V*(x) + Vc, where V*(x) is the optimal value function defined in (5), which is positive definite and continuously differentiable, and Vc is defined in (29). Taking the time derivative of VL along the trajectories of (1) and (27), and using (6), (29) and the Rayleigh-Ritz theorem, V̇L can be upper bounded as in (34), where the known function ρ2(∥z∥) : R → R, defined as ρ2(∥z∥) = 2l2∥x∥²/d3, is positive, globally invertible and non-decreasing, and ν̄ = 1/νk ∈ R. By using (24), (34) can be further expressed as (35).

Using (5), (24) and (29), the Lyapunov function candidate VL can be bounded as

σ1∥z∥² ≤ VL ≤ σ2∥z∥²    (36)

where σ1 and σ2 are positive constants.

Using (36), (35) can be expressed as (37), which can be further upper bounded as in (38), where the set Ω is defined accordingly. If z(0) ∈ Ω, then, from the solution of the differential inequality (38), the system states and the parameter estimation errors uniformly exponentially converge to the origin. ■

Remark 6: The positive constants d1, d2, d4 in (29) do not appear in the design of the control law (14) or the parameter update law (13), and are only utilized for the stability analysis. As a result, knowing the exact values of these constants is not required in general. However, the quantity d3, which appears in Theorem 1, can be determined by following the procedure given in [43] (for details, see the proof of Theorem 4.14 in [43]).

Remark 7: Traditionally, the parameter update laws in adaptive control have user-defined design parameters termed adaptation gains (in this paper, ν and νk, defined in (12) and (13), respectively). Typically, these gains determine the convergence rate of the estimation of the unknown parameters; hence, a careful selection of gains governs the performance of the designed estimators. However, a large value of the adaptation gain may result in an unstable adaptive system, which can be overcome by introducing "normalization" in the update laws [45]. The normalized estimator in the update law (13) involves the constant tunable gain ηk, which can be chosen so as to maintain system stability in the presence of a high adaptation gain νk.

Remark 8: The estimates of the system matrices A and B, given by (19), are not guaranteed to converge to the true parameters, since Lemma 1 only proves that the parameter estimation error θ̃ is bounded. Therefore, solving the ARE in (4) using the estimates of A and B may not yield the optimal parameters P* and K*. Moreover, solving for P* directly from the ARE, which is nonlinear in P*, can be challenging, especially for large scale systems. However, the proposed method utilizes the estimates of A and B in the estimator design of the controller parameters P* and K*. The adaptive update laws for P̂ and K̂ in (12) and (13) include the state derivative estimate designed in (17), which uses θ̂ (the estimates of A and B). The proposed design is architecturally analogous to [11], [13], [29], where a system identifier is utilized in controller parameter estimation. Also, note that although the system parameter estimates are only guaranteed to be bounded, the controller parameter estimates P̂ and K̂ are proved to be exponentially convergent to the optimal parameters, as shown in Theorem 1.

C. Comparison With Existing Literature

One of the main contributions of the result is that the initial stabilizing policy assumption is not required, unlike the iterative algorithms in [5], [7], [9], [10], where an initial stabilizing policy is assumed to ensure that the subsequent policies remain stabilizing. On the other hand, an adaptive control framework is considered in the proposed approach, where the control policies are continuously updated until convergence to the optimal policy. The design of the controller, the parameter update laws and the state derivative estimator ensures exponential stability of the closed-loop system, which is proved using a rigorous Lyapunov-based stability analysis, irrespective of the initial control policy (stabilizing or destabilizing) chosen.

Moreover, other significant contributions of this paper with respect to the existing literature are highlighted as follows.

The algorithms proposed in [5], [7], [9], [10] require the computation of delayed-window integrals to construct the regressor, and/or an "intelligent" data storage mechanism to satisfy an underlying full-rank condition. Computation of delayed-window integrals requires past data storage for the time interval [t − T, t], ∀t > 0, where t and T are the current time instant and the window length, respectively, which demands significant memory consumption, especially for large scale systems. Unlike [5], [7], [9], [10], the proposed work strategically obviates the requirement of memory-intensive delayed-window integrals and "intelligent" data storage, a definite advantage in the case of large scale systems implemented on embedded hardware.

Although the result in [14] designs an actor-critic architecture based adaptive optimal controller for uncertain LTI systems, it uses a memory-intensive delayed-window integral based Bellman error (see the error expression for "e" defined below (17) in [14]) to tune the critic weight estimates Ŵc. Unlike [14], the proposed algorithm uses an online state derivative estimator to obviate the need for past data storage in control parameter estimation, by strategically formulating the Bellman error "E" (11) to be independent of delayed-window integrals. Further, an exponential stability result is obtained using the proposed algorithm, as compared to the asymptotic result achieved in [14].

Recent results in [28], [29] relax the PE condition by concurrently applying past stored data along with the current parameter estimates; however, unlike [28], [29], the proposed result is established for completely uncertain systems without requiring past data storage. Moreover, a stronger exponential regulation result is obtained using the proposed controller, while obviating the need for past data storage, as compared to [28], [29].

The proposed result also differs from the ALQ algorithm [3] in that it avoids the computational burden of solving the ARE (with the estimates of A and B) at every iteration, thus also avoiding the restrictive condition on the stabilizability of the estimates of A and B at every iteration.

V. SIMULATION

To verify the effectiveness of the proposed result, the problem of controlling the angular position of the shaft in a DC motor is considered [12]. The plant is modeled as a third-order continuous-time LTI system, with system matrices as given in [12].

The objective is to find the optimal control policy for the infinite horizon value function (2), where the state and input penalties are taken as Q = I3 and R = 1, respectively. Solving the ARE (4) for the given system dynamics, the optimal control gain is obtained as K* = [1.0 0.8549 0.4791]. The gains for the parameter update laws (12) and (13) are chosen as ν = 35, νk = 55 and ηk = 5. The gain matrix of the state derivative estimator is selected as L = I3. An exploration signal, comprising a sum of sinusoids with irrational frequencies, is added to the control input in (14), which subsequently leads to the convergence of the control gain to its optimal values, as shown in Fig. 1.
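The reported gain can be checked offline once the DC-motor matrices of [12] are available; a small helper sketch (A and B are not reproduced in this text and must be taken from [12]):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """Offline check of the reported optimal gain: solve the ARE (4) and
    form K* = R^{-1} B^T P*."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# With the third-order DC-motor matrices (A, B) of [12] (not reproduced
# here), lqr_gain(A, B, np.eye(3), np.array([[1.0]])) should return
# approximately [[1.0, 0.8549, 0.4791]], matching the K* reported above.
```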

Fig. 1. The evolution of the parameter estimate K̂(t) for the proposed method.

The proposed method is compared with the recently published work in [14]. The Q-learning algorithm proposed in [14] solves the adaptive optimal control problem for completely uncertain linear time-invariant (LTI) systems. The norms of the control gain estimation error (used in the proposed work) and of the actor weight estimation error (as discussed in [14], and analogous to the control gain estimation error) are depicted in Fig. 2.

Fig. 2. Comparison of the parameter estimation error norms between [14] and the proposed method.

The initial conditions are chosen as [0 0 0] and x(0) = [−0.2 0.2 −0.2]ᵀ, and the gains for the update laws of the approach in [14] are chosen as αa = 6 and αc = 50. To ensure sufficient excitation, an exploration noise is added to the control input up to t = 4 s in both cases.

From Fig. 3, it can be observed that, for similar control inputs, the convergence rates of the two methods (as shown in Fig. 2) are comparable. However, as opposed to the memory-intensive delayed-window integration for the calculation of the regressor in [14], the proposed result does not use past stored data and hence is more memory efficient. Further, an exponential stability result is obtained using the proposed controller, as compared to the asymptotic result obtained in [14]. As seen from Figs. 4 and 5, the state trajectories for both methods initially exhibit bounded perturbations around the origin due to the presence of the exploration signal. However, once this signal is removed after t = 4 s, the trajectories converge to the origin.

Fig. 3. Comparison of the control inputs between [14] and the proposed method.

Fig. 4. System state trajectories for the proposed method.

Fig. 5. System state trajectories for [14].

VI. CONCLUSION

An adaptive LQR is developed for continuous-time LTI systems with uncertain dynamics. Unlike previous results on adaptive optimal control which use RL/ADP methods, the proposed adaptive controller is memory- and computation-efficient and does not require an initial stabilizing policy. The result hinges on a u-PE condition on the regressor vector, which is shown to be critical for proving convergence to the optimal controller. A Lyapunov analysis is used to prove uniform exponential stability of the tracking error and parameter estimation error dynamics. Simulation results validate the efficacy of the proposed algorithm. Future work will focus on relaxing the restrictive u-PE condition without compromising the merits of the proposed result.

APPENDIX: EVALUATION OF BOUNDS

This section presents bounds on different terms encountered at various stages of the proof of Theorem 1. These bounds, comprising norms of the elements of the vector z(t) defined in (24), are developed by using (13), (15), (18), (19), Lemma 1, and standard vec operator and Kronecker product properties.

The following inequality results from the use of the projection operator in (12) [35].

The expression in (39) is upper bounded by using Assumptions 1 and 2, Lemma 1, (40), and the supporting bounds in (41), where hi ∈ R for i = 1, 2, ..., 11 are positive constants; in (41b), a standard vec equality is used. The known function ρ1(∥z∥) : R → R is positive, globally invertible and non-decreasing, and z ∈ R^{n(n+m)} is defined in (24).
