
    Adaptive dynamic programming for finite-horizon optimal control of linear time-varying discrete-time systems

Control Theory and Technology, 2019, No. 1

Bo PANG, Tao BIAN, Zhong-Ping JIANG

1. Control and Networks (CAN) Lab, Department of Electrical and Computer Engineering, Tandon School of Engineering, New York University, Brooklyn, NY 11201, U.S.A.

2. Bank of America Merrill Lynch, One Bryant Park, New York, NY 10036, U.S.A.

Received 15 August 2018; revised 24 October 2018; accepted 26 October 2018

Abstract  This paper studies data-driven learning-based methods for the finite-horizon optimal control of linear time-varying discrete-time systems. First, a novel finite-horizon Policy Iteration (PI) method for linear time-varying discrete-time systems is presented. Its connections with existing infinite-horizon PI methods are discussed. Then, both data-driven off-policy PI and Value Iteration (VI) algorithms are derived to find approximate optimal controllers when the system dynamics is completely unknown. Under mild conditions, the proposed data-driven off-policy algorithms converge to the optimal solution. Finally, the effectiveness and feasibility of the developed methods are validated by a practical example of spacecraft attitude control.

Keywords: Optimal control, time-varying system, adaptive dynamic programming, policy iteration (PI), value iteration (VI)

    1 Introduction

Bellman’s dynamic programming (DP) [1] has been successful in solving sequential decision-making problems arising from areas ranging from engineering to economics. Despite being a powerful theoretical tool in the optimal control of dynamical systems [2,3], the original DP is haunted not only by the well-known “Curse of Dimensionality” but also by the “Curse of Modeling” [4], i.e., exact knowledge of the dynamical process under consideration is required. Reinforcement learning (RL) [5,6] and adaptive dynamic programming (ADP) [7-12] are promising tools to deal with this problem. RL and ADP find approximate optimal control laws by iteratively utilizing the data collected from the interactions between the controller and the plant, so that an explicit plant model is not needed. Over the past decade, many papers have followed this route, for systems described by linear or nonlinear, differential or difference equations. For example, adaptive optimal controllers were proposed without knowledge of the system dynamics, by using policy iteration (PI) [7,8,13-15] or value iteration (VI) [16,17], and many references therein.

However, most existing results focus only on the infinite-horizon optimal control problem for time-invariant systems. There are relatively few studies on the finite-horizon optimal control problem for time-varying systems. Although finite-horizon approximate optimal controllers were derived for linear systems in [18,19] and nonlinear systems in [20-24], the authors of these papers assumed either full knowledge of the system dynamics or time-invariant system parameters. Recently, several methods that do not require exact knowledge of the system dynamics have also been proposed for the finite-horizon control of time-varying systems. Extremum seeking techniques were applied to find approximate finite-horizon optimal open-loop control sequences for linear time-varying discrete-time (LTVDT) systems in [25,26]. A dual-loop iteration algorithm was devised to obtain approximate optimal control for linear time-varying continuous-time (LTVCT) systems in [27]. Different from the time-invariant, infinite-horizon case, where the optimal controller is stationary, in the time-varying, finite-horizon case the optimal controller is nonstationary. This brings new challenges to the design of data-driven, non-model-based optimal controllers when precise information about the time-varying system dynamics is not available.

This paper considers the finite-horizon optimal control problem for LTVDT systems without knowledge of the system dynamics. Firstly, a novel finite-horizon PI method for LTVDT systems is presented. On one hand, the proposed finite-horizon PI method can be seen as the counterpart of the existing infinite-horizon PI methods, which may be found in [28] for LTVDT systems, in [29] for LTVCT systems, in [30] for linear time-invariant discrete-time (LTIDT) systems, and in [31] for linear time-invariant continuous-time (LTICT) systems, respectively. On the other hand, it parallels the finite-horizon PI for LTVCT systems in [32] (Theorem 8). Secondly, we prove that, in the time-invariant case, as the time horizon goes to infinity, the finite-horizon PI method reduces to the infinite-horizon PI method for LTIDT systems. Thirdly, we propose data-driven off-policy finite-horizon PI and VI algorithms to find approximate optimal controllers when the system dynamics is unknown. The proposed data-driven algorithms are off-policy; by contrast, the methods in [25-27] were on-policy. In addition, the works [25-27] only considered special cases of linear systems or lacked convergence analysis. Finally, we simulate the proposed methods on a practical example of spacecraft attitude control with time-varying dynamics. The simulation results demonstrate the effectiveness and feasibility of our data-driven finite-horizon PI and VI algorithms.

The rest of this paper is organized as follows: Section 2 introduces the problem formulation and necessary preliminaries; Section 3 first presents the finite-horizon PI method and then reveals its connections with the infinite-horizon PI method under certain conditions; Section 4 derives the data-driven finite-horizon PI and VI algorithms; Section 5 provides the application of the proposed methods to spacecraft attitude control; Section 6 concludes the paper.

Notations  Throughout this paper, R denotes the set of real numbers and Z+ denotes the set of nonnegative integers. k ∈ Z+ denotes the discrete time instant. X(k) is denoted by X_k for short, and the time index is always the first subscript when there are multiple subscripts, e.g., X_{k,i1,i2}. ⊗ is the Kronecker product operator. For a matrix A ∈ R^{n×m}, vec(A) = [a_1^T, a_2^T, ..., a_m^T]^T, where a_i is the i-th column of A. For a symmetric matrix B ∈ R^{m×m},

vecs(B) = [b_11, 2b_12, ..., 2b_1m, b_22, 2b_23, ..., 2b_{m-1,m}, b_mm]^T ∈ R^{m(m+1)/2}.

| · | is the Euclidean norm for vectors and ‖ · ‖ represents the induced matrix norm for matrices.
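As a concrete illustration, the vec and vecs operators above can be sketched in a few lines of numpy; the identity x^T B x = vecs(B)^T x̄, with x̄ the vector of monomials x_i x_j ordered to match vecs, is what later lets quadratic value functions be estimated by linear least squares. This is an illustrative sketch, not code from the paper.

```python
import numpy as np

def vec(A):
    # Stack the columns of A into one long vector.
    return A.reshape(-1, order="F")

def vecs(B):
    # Half-vectorization of a symmetric matrix: diagonal entries once,
    # off-diagonal entries doubled (row-wise upper triangle).
    m = B.shape[0]
    out = []
    for i in range(m):
        out.append(B[i, i])
        out.extend(2 * B[i, j] for j in range(i + 1, m))
    return np.array(out)

def quad_basis(x):
    # Monomials x_i x_j (i <= j), ordered to match vecs, so that
    # x^T B x == vecs(B) @ quad_basis(x) for any symmetric B.
    m = len(x)
    out = []
    for i in range(m):
        out.append(x[i] * x[i])
        out.extend(x[i] * x[j] for j in range(i + 1, m))
    return np.array(out)
```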

    2 Problem formulation and preliminaries

Consider the following linear time-varying discrete-time system:

x_{k+1} = A_k x_k + B_k u_k,  (1)

where k ∈ [k0, N) is the discrete time instant, x_k ∈ R^n is the system state, u_k ∈ R^m is the control input, and A_k ∈ R^{n×n}, B_k ∈ R^{n×m} are time-dependent system matrices.

We are interested in finding a sequence of control inputs to minimize the following cost function with a penalty on the final state:

J(x_{k0}) = x_N^T F x_N + Σ_{j=k0}^{N-1} (x_j^T Q_j x_j + u_j^T R_j u_j),  (2)

where Q_j ∈ R^{n×n}, R_j ∈ R^{m×m}, F ∈ R^{n×n} are symmetric weighting matrices satisfying Q_j ⪰ 0, R_j > 0, and F ⪰ 0.

When the system matrices A_k and B_k are precisely known for all k ∈ [k0, N), this is the well-known finite-horizon linear quadratic regulator (LQR) problem (see [2, Pages 110-112]). The optimal control input is given by

u_k* = -L_k* x_k,  (3)

L_k* = (R_k + B_k^T P_{k+1}* B_k)^{-1} B_k^T P_{k+1}* A_k,  (4)

and the optimal cost is given by

J*(x_{k0}) = x_{k0}^T P_{k0}* x_{k0},

with the symmetric and positive semidefinite matrices {P_k*}_{k=k0}^N the solutions to the discrete-time Riccati equation

P_k* = Q_k + A_k^T P_{k+1}* A_k - A_k^T P_{k+1}* B_k (R_k + B_k^T P_{k+1}* B_k)^{-1} B_k^T P_{k+1}* A_k,  (5)

with the terminal condition P_N* = F. Obviously, given a fixed matrix F, the matrices {P_k*}_{k=k0}^N are uniquely determined.
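The backward Riccati recursion above can be sketched compactly in numpy; the matrices in the test are illustrative placeholders, not the paper's spacecraft example.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, F):
    """Backward Riccati recursion for the finite-horizon LQR problem.

    A, B, Q, R are lists of time-varying matrices for k = 0, ..., N-1;
    F is the terminal weight.  Returns the value matrices P_k and the
    optimal feedback gains L_k with u_k = -L_k x_k.
    """
    N = len(A)
    P = [None] * (N + 1)
    L = [None] * N
    P[N] = F
    for k in range(N - 1, -1, -1):
        # Gain: L_k = (R_k + B_k^T P_{k+1} B_k)^{-1} B_k^T P_{k+1} A_k.
        G = R[k] + B[k].T @ P[k + 1] @ B[k]
        L[k] = np.linalg.solve(G, B[k].T @ P[k + 1] @ A[k])
        # Riccati update for P_k.
        P[k] = Q[k] + A[k].T @ P[k + 1] @ A[k] - A[k].T @ P[k + 1] @ B[k] @ L[k]
    return P, L
```

By dynamic programming, the simulated closed-loop cost from any x_{k0} equals x_{k0}^T P_{k0} x_{k0} exactly, which gives a simple sanity check.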

Remark 1  Note that (5) shares similar features with the VI for the infinite-horizon discrete-time LQR problem [33, Theorem 17.5.3]. In Section 4.2, we will develop a VI-based finite-horizon ADP algorithm based on (5).

Remark 2  For the optimal control problem considered here, the optimal control u* always exists, regardless of whether system (1) is controllable or stabilizable. This fact can be verified by a straightforward application of DP to system (1) and the cost function (2) (see [2, Pages 110-112]).

When A_k and B_k are unknown, due to the difference between (5) and the algebraic Riccati equation (ARE) in infinite-horizon control problems (see [2, Page 113]), existing infinite-horizon data-driven methods (see, e.g., [34,35]) cannot be used here directly. Moreover, the methods in [25] and [26] are on-policy and apply only to linear systems with scalar input. By exploiting the properties of the cost function, new off-policy methods that find the optimal control inputs without knowledge of the system matrices can be derived.

Now, applying a sequence u(L) of control inputs u_k = -L_k x_k, k ∈ [k0, N), with arbitrary gain matrices {L_k}_{k=k0}^{N-1} to system (1), we have the closed-loop system

x_{k+1} = (A_k - B_k L_k) x_k.

Then the cost at time instant k, k ∈ [k0, N], can be rewritten in the quadratic form x_k^T V_k x_k for a symmetric matrix V_k ⪰ 0.

By using (7) with k1 = k+1, k2 = k, and (9), the following Lyapunov equation can be obtained:

V_k = Q_k + L_k^T R_k L_k + (A_k - B_k L_k)^T V_{k+1} (A_k - B_k L_k),

with the terminal condition V_N = F.
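The backward Lyapunov recursion V_k = Q_k + L_k^T R_k L_k + (A_k - B_k L_k)^T V_{k+1} (A_k - B_k L_k), V_N = F, evaluates the cost of any fixed gain sequence and can be sketched as follows (illustrative matrices, not the paper's example):

```python
import numpy as np

def evaluate_policy(A, B, Q, R, F, L):
    """Backward Lyapunov recursion: cost matrices V_k of the policy
    u_k = -L_k x_k over k = 0, ..., N-1, with terminal weight F."""
    N = len(A)
    V = [None] * (N + 1)
    V[N] = F
    for k in range(N - 1, -1, -1):
        Acl = A[k] - B[k] @ L[k]                  # closed-loop matrix
        V[k] = Q[k] + L[k].T @ R[k] @ L[k] + Acl.T @ V[k + 1] @ Acl
    return V
```

By construction, x_{k0}^T V_{k0} x_{k0} equals the simulated cost of the policy from x_{k0}.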

    3 Finite-horizon policy iteration for linear time-varying discrete-time systems

For convenience, in what follows we use A_k^(i) to denote the closed-loop system matrix at time instant k in the i-th iteration, i.e.,

A_k^(i) = A_k - B_k L_k^(i).

    3.1 Finite-horizon PI for LTVDT systems

Theorem 1  For system (1), consider the finite-horizon linear quadratic regulator problem with respect to the cost function (2).

1) Choose arbitrary initial gain matrices {L_k^(0)}_{k=k0}^{N-1}, and let i = 0.

2) (Policy evaluation) Solve for {V_k^(i)}_{k=k0}^N by using the following Lyapunov equations with V_N^(i) = F:

V_k^(i) = Q_k + (L_k^(i))^T R_k L_k^(i) + (A_k^(i))^T V_{k+1}^(i) A_k^(i).  (12)

3) (Policy improvement) Solve for {L_k^(i+1)}_{k=k0}^{N-1} by using the following equations:

L_k^(i+1) = (R_k + B_k^T V_{k+1}^(i) B_k)^{-1} B_k^T V_{k+1}^(i) A_k.  (13)

4) Let i = i + 1, and go to Step 2). Then for all k ∈ [k0, N], it holds that:

i) V_k^(i) ⪰ V_k^(i+1) ⪰ P_k*, for all i ∈ Z+;

ii) lim_{i→∞} V_k^(i) = P_k* and lim_{i→∞} L_k^(i) = L_k*.

Proof  i) Note that

Substituting the above equations into (12) gives

Note that, by (13), we have

Thus, the last two terms in expression (14) vanish. Then we have

This is exactly the Riccati equation. Due to V_N^(i) = F for all i ∈ Z+ and the uniqueness of the solution to the Riccati equation, lim_{i→∞} V_k^(i) = P_k* follows. This completes the proof.
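The iteration of Theorem 1 can be checked numerically. The following model-based sketch (with illustrative matrices, not the paper's example) alternates the backward Lyapunov recursion with the gain update and exhibits both the monotone decrease of the value matrices and convergence to the Riccati solution:

```python
import numpy as np

def finite_horizon_pi(A, B, Q, R, F, L0, iters=10):
    """Model-based finite-horizon policy iteration: alternate policy
    evaluation (backward Lyapunov recursion) with policy improvement,
    starting from arbitrary gains L0."""
    N = len(A)
    L = [l.copy() for l in L0]
    history = []
    for _ in range(iters):
        V = [None] * (N + 1)
        V[N] = F
        for k in range(N - 1, -1, -1):             # policy evaluation
            Acl = A[k] - B[k] @ L[k]
            V[k] = Q[k] + L[k].T @ R[k] @ L[k] + Acl.T @ V[k + 1] @ Acl
        history.append(V)
        L = [np.linalg.solve(R[k] + B[k].T @ V[k + 1] @ B[k],
                             B[k].T @ V[k + 1] @ A[k])
             for k in range(N)]                    # policy improvement
    return history, L
```

Since V_N^(i) = F is always exact, each improvement makes one more tail step optimal, so the iteration in fact reaches the Riccati solution after at most N - k0 iterations in exact arithmetic.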

Remark 3  Most existing numerical finite-horizon optimal control methods were developed for continuous-time models; see [18,22,23], to name a few. It is not clear how to adapt their model-based methods into data-driven algorithms. On the basis of Theorem 1, one can derive a corresponding data-driven off-policy PI algorithm (see Section 4).

    3.2 Connections with infinite-horizon PI

In this section, we investigate properties of the finite-horizon PI in Theorem 1 as the final time N → ∞. It is shown that, as N → ∞, the proposed finite-horizon PI reduces to the infinite-horizon PI, i.e., Hewer's algorithm proposed in [30].

Assumption 1  Throughout this section, we assume a stationary system (A, B) and constant matrices Q ⪰ 0, R > 0, F = 0. In addition, (A, B) is controllable and (A, Q^{1/2}) is observable.

For convenient reference, the infinite-horizon PI algorithm is summarized in the following lemma.

Lemma 1  If, starting with an initial stabilizing control u^(0) = -L^(0) x and setting i = 0, the following three steps are iterated infinitely:

1) (Policy evaluation) Solve for V^(i) from

V^(i) = Q + (L^(i))^T R L^(i) + (A^(i))^T V^(i) A^(i),  (16)

where A^(i) = A - B L^(i).

2) (Policy improvement) Obtain the improved control gain

L^(i+1) = (R + B^T V^(i) B)^{-1} B^T V^(i) A.  (17)

3) Let i = i + 1, and go to 1). Then:

i) V^(i) ⪰ V^(i+1) ⪰ P*, for all i ∈ Z+.

ii) lim_{i→∞} V^(i) = P* and lim_{i→∞} L^(i) = L*, where P* is the unique positive definite solution to the algebraic Riccati equation

P* = Q + A^T P* A - A^T P* B (R + B^T P* B)^{-1} B^T P* A,  (18)

and the corresponding stationary optimal control u* = -L* x, where

L* = (R + B^T P* B)^{-1} B^T P* A,  (19)

minimizes the infinite-horizon cost function

J(x_0) = Σ_{j=0}^∞ (x_j^T Q x_j + u_j^T R u_j).  (20)

iii) A^(i) is a Schur matrix for each i ∈ Z+. V^(i) is the unique positive definite solution to the Lyapunov equation (16), and the cost under the control u^(i) = -L^(i) x is given by

x_0^T V^(i) x_0 = Σ_{j=0}^∞ x_j^T (Q + (L^(i))^T R L^(i)) x_j, with x_{j+1} = A^(i) x_j.  (21)

In the rest of this section, let (V_k^(i), L_k^(i)) denote the solutions to equations (12) and (13), respectively; (V^(i), L^(i)) denote the solutions to equations (16) and (17), respectively; (P_k*, L_k*) denote the solutions to equations (5) and (4), respectively; and (P*, L*) denote the solutions to equations (18) and (19), respectively. Note that for fixed V_N^(i) = F and free k0, N → ∞ implies k → -∞.

Theorem 2  In the stationary case, if the initial control gains in Theorem 1 are chosen as L_k^(0) = L^(0) for all k ∈ [k0, N), where L^(0) is the same as that in Lemma 1, and F = 0, then V_k^(i) → V^(i) and L_k^(i) → L^(i) as N → ∞. Thus, as N goes to infinity, the finite-horizon PI in Theorem 1 reduces to the infinite-horizon PI in Lemma 1.

Note that the above inequality holds for all x_0 ∈ R^n; thus we know that {V_k^(0)} is a nondecreasing sequence as k decreases. Furthermore, for every k ≤ N-1, V_k^(0) is bounded from above by the cost V^(0), since for the same initial condition x_0 and the same stabilizing control, x_0^T V_k^(0) x_0 is equal to the sum of only the first N-k terms in equation (21). Thus, as k → -∞, {V_k^(0)} is a nondecreasing sequence bounded from above by V^(0). Again, by the theorem on the convergence of a monotone sequence of self-adjoint operators (see [36, Pages 189-190]), lim_{k→-∞} V_k^(0) exists. Since A, Q and R are time-invariant, letting k → -∞ in (12), we have

Obviously, this has the same form as the Lyapunov equation (16). Due to the uniqueness of the solution to the Lyapunov equation (16), we know lim_{k→-∞} V_k^(0) = V^(0). This means that, in the limiting case, the policy evaluation step (12) and the policy improvement step (13) in Theorem 1 reduce to (16) and (17) when i = 0, respectively. By induction, lim_{k→-∞} V_k^(i) = V^(i) and lim_{k→-∞} L_k^(i) = L^(i) hold for all i > 0. This proves that, in the limiting case N → ∞, the proposed finite-horizon PI in Theorem 1 reduces to Hewer's algorithm, i.e., Lemma 1. This completes the proof.
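The limiting argument for i = 0 can be illustrated numerically: evaluate a fixed stabilizing gain over growing horizons with F = 0, and watch the finite-horizon value approach the stationary Lyapunov solution. All numbers below are arbitrary assumptions for the sketch, not the paper's data.

```python
# Scalar illustration: fixed stabilizing gain L, growing horizon length.
A, B, Q, R = 0.9, 1.0, 1.0, 1.0
L = 0.3                        # stabilizing: |A - B*L| = 0.6 < 1
Acl = A - B * L

# Stationary Lyapunov solution: V = Q + L^2 R + Acl^2 V.
V_inf = (Q + L ** 2 * R) / (1.0 - Acl ** 2)

def finite_value(N):
    # Backward recursion with V_N = 0 (i.e., F = 0):
    # V_k = Q + L^2 R + Acl^2 V_{k+1}, iterated N times.
    V = 0.0
    for _ in range(N):
        V = Q + L ** 2 * R + Acl ** 2 * V
    return V

# The finite-horizon values increase monotonically toward V_inf.
vals = [finite_value(N) for N in (1, 5, 20, 60)]
```

Since Acl^2 < 1, the recursion is a contraction, so finite_value(N) converges geometrically to V_inf, matching the monotone-bounded argument in the proof.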

Remark 4  There are two iteration variables in Theorem 1: the time index k and the algorithmic iteration index i. The relationships between the different value-control pairs with respect to these two iteration variables are summarized in Fig. 1. The convergence of (V_k^(i), L_k^(i)) to (V^(i), L^(i)) as k → -∞ is proved in Theorem 2; the convergence of (V_k^(i), L_k^(i)) to (P_k*, L_k*) as i → ∞ is the main result in Theorem 1; the convergence of (V^(i), L^(i)) to (P*, L*) is Hewer's algorithm [30]; the proof of the convergence of (P_k*, L_k*) to (P*, L*) can be found in [2, Proposition 3.1.1].

    Fig.1 Value-control pair relationships of policy iteration.

Remark 5  When system (1) is periodic, it is also true that the proposed finite-horizon PI reduces to the infinite-horizon PI for periodic systems as N → ∞. The infinite-horizon PI for discrete-time periodic linear systems can be found in [37] (Theorem 3). In [38] (Theorem 3.1), it was shown that the optimal control problem of a discrete-time periodic linear system can be transformed into an equivalent optimal control problem of a linear time-invariant system. Thus, it is straightforward to extend Theorem 2 to periodic systems.

Remark 6  It is easy to see from [14] that, in the stationary case, (5) reduces to the infinite-horizon discrete-time VI, by defining the value iterates as in [33, Theorem 17.5.3].

    4 Adaptive optimal controller design for linear time-varying discrete-time systems

This section applies adaptive dynamic programming to derive novel data-driven algorithms, based on Theorem 1 and equation (5) respectively, without precise knowledge of the system dynamics.

    4.1 PI-based off-policy ADP

Suppose that a series of control inputs {u_k^(0)}_{k=k0}^{N-1} is applied to the system to generate data, and that we are in the i-th stage of the procedure in Theorem 1. Then (1) can be rewritten as

x_{k+1} = A_k^(i) x_k + B_k (u_k + L_k^(i) x_k).  (23)

Substituting (12) into equation (23) and rearranging the terms, we obtain

x_{k+1}^T V_{k+1}^(i) x_{k+1} - x_k^T V_k^(i) x_k = -x_k^T (Q_k + (L_k^(i))^T R_k L_k^(i)) x_k + 2 x_k^T (A_k^(i))^T V_{k+1}^(i) B_k (u_k + L_k^(i) x_k) + (u_k + L_k^(i) x_k)^T B_k^T V_{k+1}^(i) B_k (u_k + L_k^(i) x_k).  (24)

Using the following properties of the Kronecker product,

a^T W b = (b ⊗ a)^T vec(W),  (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD),

where a ∈ R^n, b ∈ R^m, W ∈ R^{n×m}, and A, B, C, D are matrices with compatible dimensions, the equation (24) can be rewritten as (25), a linear equation in the unknowns vecs(V_k^(i)), vec(B_k^T V_{k+1}^(i) A_k^(i)) and vecs(B_k^T V_{k+1}^(i) B_k).

For fixed k, (25) is a degenerate linear equation, since there is only one data triad (x_k, u_k, x_{k+1}), yet at least n(n+1)/2 + mn + m(m+1)/2 unknown variables to solve for. Therefore, more data needs to be collected. To this end, suppose in total l groups of different control sequences {u_{k,j}^(0)}_{k=k0}^{N-1}, j = 1, 2, ..., l, are applied to the system and the corresponding data is recorded. Each group of control sequences can take the following form:

u_{k,j}^(0) = -L_k^(0) x_{k,j} + w_{k,j},  (26)

where w_{k,j} ∈ R^m is the exploration noise used to achieve sufficient excitation of the system. By defining the following data matrices:

    we obtain the matrix equation

    where

and Ψ^(i), Φ^(i) are given in (28) and (29) below.

If Φ^(i) is a full-column-rank matrix, (27) can be uniquely solved in the least-squares sense, i.e.,

Θ^(i) = ((Φ^(i))^T Φ^(i))^{-1} (Φ^(i))^T Ψ^(i).

Now we are in a position to present the data-driven off-policy policy iteration algorithm.

Next, the convergence analysis of Algorithm 1 is presented.

Algorithm 1  (PI-based off-policy ADP)

Choose arbitrary {L_k^(0)}_{k=k0}^{N-1} and a threshold ε > 0. Run system (1) l times; in the j-th run, use {u_{k,j}^(0)}_{k=k0}^{N-1} from (26) as the control inputs, and collect the generated data. Let i ← 0.

Repeat:
  Compute Ψ^(i) and Φ^(i) by (28) and (29), respectively;
  Θ^(i) ← ((Φ^(i))^T Φ^(i))^{-1} (Φ^(i))^T Ψ^(i), and extract {V_k^(i)}_{k=k0}^N from Θ^(i);
  k ← k0;
  While k < N do:
    L_k^(i+1) ← (R_k + B_k^T V_{k+1}^(i) B_k)^{-1} B_k^T V_{k+1}^(i) A_k, with the products B_k^T V_{k+1}^(i) B_k and B_k^T V_{k+1}^(i) A_k recovered from Θ^(i);
    k ← k + 1;
  i ← i + 1;
Until max_k ‖V_k^(i) - V_k^(i-1)‖ < ε.

Use u_k = -L_k^(i) x_k as the approximate optimal control.
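To convey the idea of Algorithm 1, here is a deliberately scalar (n = m = 1) sketch, with illustrative dynamics unknown to the learner; the paper's matrix-valued version solves for all time steps jointly through Θ^(i), whereas this sketch solves one small least-squares problem per time step, per iteration, reusing the same data set throughout (which is what makes it off-policy).

```python
import numpy as np

# Scalar off-policy PI sketch.  Per step k, identify three scalars:
#   V_k,  th1 = A_k^(i) B_k V_{k+1}^(i),  th2 = B_k^2 V_{k+1}^(i),
# from the relation (with v = u + L_k^(i) x):
#   x^2 V_k + 2 x v th1 + v^2 th2 = x_{k+1}^2 V_{k+1} + x^2 (Q + (L_k^(i))^2 R).

rng = np.random.default_rng(1)
N, F, Q, R = 5, 1.0, 1.0, 1.0
A = [1.0 + 0.1 * k for k in range(N)]     # illustrative, unknown to learner
B = [0.5 for _ in range(N)]

# Collect exploratory data ONCE; it is reused in every PI iteration.
l = 8
data = []
for _ in range(l):
    traj, x = [], rng.uniform(-1, 1)
    for k in range(N):
        u = rng.uniform(-1, 1)            # exploratory input
        xn = A[k] * x + B[k] * u
        traj.append((x, u, xn))
        x = xn
    data.append(traj)

L = [0.0] * N                             # arbitrary initial gains
for _ in range(N + 2):                    # finite-horizon PI converges quickly
    V = [0.0] * (N + 1)
    V[N] = F
    Lnew = [0.0] * N
    for k in range(N - 1, -1, -1):        # policy evaluation from data
        rows, targets = [], []
        for traj in data:
            x, u, xn = traj[k]
            v = u + L[k] * x              # off-policy correction term
            rows.append([x * x, 2 * x * v, v * v])
            targets.append(xn * xn * V[k + 1] + x * x * (Q + L[k] ** 2 * R))
        Vk, th1, th2 = np.linalg.lstsq(np.array(rows), np.array(targets),
                                       rcond=None)[0]
        V[k] = Vk
        # Policy improvement: B V A = th1 + th2 * L_k^(i).
        Lnew[k] = (th1 + th2 * L[k]) / (R + th2)
    L = Lnew
```

With noise-free data and full-rank regressors, the least-squares solutions coincide with the model-based PI iterates, so V and L converge to the Riccati values.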

Theorem 3  If there exists an integer l0 > 0 such that, for all l > l0 and k ∈ [k0, N),

rank([Γ_{x_k}, 2Γ_{xu_k,0}, Γ_{u_k,0}]) = n(n+1)/2 + mn + m(m+1)/2,  (31)

then:

1) Φ^(i) has full column rank for all i ∈ Z+;

2) lim_{i→∞} V_k^(i) = P_k* and lim_{i→∞} L_k^(i) = L_k* for all k ∈ [k0, N].

Proof  For convenience, define the variable

Ξ_v = [X_v^T, Y_v^T, Z_v^T]^T,

where X_v = vecs(X_m), Y_v = vec(Y_m), Z_v = vecs(Z_m), X_m ∈ R^{n×n} and Z_m ∈ R^{m×m} are symmetric matrices, Y_m ∈ R^{m×n}, and k ∈ [k0, N).

We first prove property 1). Obviously, Φ^(i) has full column rank if and only if its k-th block Φ_k^(i) has full column rank for all k ∈ [k0, N). This is equivalent to showing that the linear equation

Φ_k^(i) Ξ_v = 0

has the unique solution Ξ_v = 0. We shall show that this is indeed the case.

According to the definition of Φ_k^(i) and equation (24), Φ_k^(i) Ξ_v = 0 can be rewritten as

[Γ_{x_k}, 2Γ_{xu_k,0}, Γ_{u_k,0}] [Λ_1^T, Λ_2^T, Λ_3^T]^T = 0,  (32)

where Λ_1, Λ_2 and Λ_3 depend linearly on X_m, Y_m and Z_m.

From (31), we know that [Γ_{x_k}, 2Γ_{xu_k,0}, Γ_{u_k,0}] has full column rank. This means the only solution to (32) is Λ_1 = 0, Λ_2 = 0, Λ_3 = 0, which is true if and only if Ξ_v = 0. Thus Φ_k^(i) always has full column rank. Therefore, Φ^(i) has full column rank for all i ∈ Z+.

Now we prove property 2). By (24), it is easy to check that

Suppose now that Ξ_v and a symmetric matrix W_m ∈ R^{n×n} satisfy

Again, [Γ_{x_k}, 2Γ_{xu_k,0}, Γ_{u_k,0}] has full column rank. This means the only solution to (34) is Ω_1 = 0, Ω_2 = 0, Ω_3 = 0. Substituting Y_m = B_k^T W_m A_k and Z_m = B_k^T W_m B_k into Ω_1 = 0, we obtain

which is exactly the Lyapunov equation (12). Since V_N^(i) = F for all i ∈ Z+, policy iteration by (27) and (13) is equivalent to policy iteration by (12) and (13). By Theorem 1, the convergence is proved.

Remark 7  Different from the infinite-horizon PI algorithms [30,31,35,39], where the initial gain matrices must be stabilizing to guarantee convergence of the algorithm, there is no restriction on the initial gain matrices in the finite-horizon PI-based ADP. However, in practice, using stabilizing initial gain matrices (if they exist) will prevent the system states from becoming too large. This is beneficial for the sufficient excitation of the system and for numerical stability.

    4.2 VI-based off-policy ADP

In this section, we develop a VI-based off-policy ADP scheme using (5).

For k = k0, ..., N-1, define

Similar to the last section, suppose l groups of different control inputs (26) are applied to the system to collect data. The above equation yields

Theorem 4  If there exists an integer l0 > 0 such that, for all l > l0 and k ∈ [k0, N), (31) holds, then Algorithm 2 yields the optimal values {P_k*}_{k=k0}^N and the optimal gains {L_k*}_{k=k0}^{N-1}.

Remark 8  In this paper, two data-driven ADP methods are proposed. Compared with PI, VI is much easier to implement and only requires finitely many steps to find the optimal solution. However, all the (N-k0) iteration steps of VI must be executed sequentially, which is time-consuming when N is large. On the contrary, in PI, the full sequence {V_k^(i)}_{k=k0}^N can be obtained in each learning iteration in parallel. Due to the fast convergence of PI, PI shows better performance when N is large.

Algorithm 2  (VI-based off-policy ADP)

Run system (1) l times; in the j-th run, use {u_{k,j}^(0)}_{k=k0}^{N-1} from (26) as the control inputs, and collect the generated data.
P_N* ← F; k ← N;
While k > k0 do:
  H_{k-1}* ← (F_{k-1}^T F_{k-1})^{-1} F_{k-1}^T Γ_{x_k} vecs(P_k*);
  Solve P_{k-1}* by (37);
  L_{k-1}* ← (R_{k-1} + H_{k-1,3}*)^{-1} H_{k-1,2}*;
  k ← k - 1;

Use u_k = -L_k* x_k as the approximate optimal control.
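To convey the idea of Algorithm 2, here is a deliberately scalar (n = m = 1) sketch with illustrative dynamics unknown to the learner: a single backward pass in which the entries of H_k = [[A_k^2 P_{k+1}, A_k B_k P_{k+1}], [A_k B_k P_{k+1}, B_k^2 P_{k+1}]] are identified from data by least squares via x_{k+1}^2 P_{k+1} = x_k^2 h11 + 2 x_k u_k h12 + u_k^2 h22, after which P_k and L_k* follow from the Riccati update.

```python
import numpy as np

rng = np.random.default_rng(2)
N, F, Q, R = 6, 1.0, 1.0, 1.0
A = [1.0 - 0.05 * k for k in range(N)]    # illustrative, unknown to learner
B = [0.8 for _ in range(N)]

# Collect l exploratory trajectories once; any exciting inputs work.
l = 6
data = []
for _ in range(l):
    traj, x = [], rng.uniform(-1, 1)
    for k in range(N):
        u = rng.uniform(-1, 1)
        xn = A[k] * x + B[k] * u
        traj.append((x, u, xn))
        x = xn
    data.append(traj)

# One backward pass; no iteration over policies is needed.
P = [0.0] * (N + 1)
L = [0.0] * N
P[N] = F
for k in range(N - 1, -1, -1):
    rows, targets = [], []
    for traj in data:
        x, u, xn = traj[k]
        rows.append([x * x, 2 * x * u, u * u])
        targets.append(xn * xn * P[k + 1])
    h11, h12, h22 = np.linalg.lstsq(np.array(rows), np.array(targets),
                                    rcond=None)[0]
    L[k] = h12 / (R + h22)                  # L_k* = (R + B^2 P)^{-1} B P A
    P[k] = Q + h11 - h12 * h12 / (R + h22)  # Riccati update using H_k only
```

Note the contrast with the PI sketch: VI finishes in one backward sweep, but each step depends on the previous P, so the sweep cannot be parallelized over k.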

    5 Application

In this section, we apply the algorithms presented in the previous sections to the spacecraft attitude control problem. Due to a space structure extended in orbit [40] or on-orbit refueling [41], the moment of inertia of the spacecraft will be time-varying. This can be modeled by the following continuous-time linear time-varying system [42]:

where x = [q^T, q̇^T]^T, q = [α, β, γ]^T, and α, β, γ are the roll angle, the pitch angle and the yaw angle of the spacecraft, respectively,

J_x(t), J_y(t), J_z(t) are the components of the moment of inertia of the spacecraft with respect to a body coordinate system, and ω_0 = 0.0011 rad/s is the orbital rate. We discretize system (38) by the Forward-Euler method with sampling time h = 0.1 s, obtaining the discrete-time linear time-varying system

x_{k+1} = (I + hA(kh)) x_k + hB(kh) u_k.  (39)
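The Forward-Euler discretization step can be sketched generically as follows; the matrices in the test are simple placeholders (a harmonic oscillator), not the paper's spacecraft model.

```python
import numpy as np

def euler_discretize(A_ct, B_ct, h, N, t0=0.0):
    """Forward-Euler discretization of a continuous-time LTV system
    xdot = A(t) x + B(t) u with sampling time h:
        A_k = I + h A(t_k),  B_k = h B(t_k),  t_k = t0 + k h.
    A_ct and B_ct are callables returning the matrices at time t."""
    n = A_ct(t0).shape[0]
    Ad, Bd = [], []
    for k in range(N):
        t = t0 + k * h
        Ad.append(np.eye(n) + h * A_ct(t))
        Bd.append(h * B_ct(t))
    return Ad, Bd
```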

Let the final time N = 300, and choose the weighting matrices Q = 10 I_6, R = I_3. The proposed Algorithms 1 and 2 are applied to system (39). For Algorithm 1, the initial control gains are all zero matrices. To collect data that satisfies condition (31), in the simulation the elements of the initial angles q_0 are assumed to be independently and uniformly distributed over [-1, 1], while the elements of the initial angular velocities q̇_0 are assumed to be independently and uniformly distributed over [-0.01, 0.01]. The exploration noises are chosen as

where, for the r-th component of the control input in the j-th trial, each σ_{s,j} is independently drawn from the uniform distribution over [-500, 500].

Algorithm 1 stops after 10 iterations on this task. Fig. 2 shows the convergence of {V_k^(i)} in the PI case. One can see that, for fixed time k, V_k^(i) converges monotonically to its optimal value, as predicted in Theorem 1. For Algorithm 2, the maximum difference between the LS solutions given by Algorithm 2 and the optimal values P_k*, measured by the matrix 2-norm, is 1.6749×10^-8. The final approximate optimal control gains and the initial control gains (all zero matrices) are applied to system (39) with the initial condition q_0 = [0.0175, 0.0175, 0.0175]^T, q̇_0 = [0, 0, 0]^T. The state trajectories under these two controls are shown in Figs. 3 and 4, respectively. The final control inputs are shown in Fig. 5. The simulation results demonstrate the effectiveness of the proposed finite-horizon PI and VI algorithms.

Fig. 2  Comparison of V_k^(i) with its optimal value.

    Fig.3 Trajectories of angles of the spacecraft.

Fig. 4  Trajectories of angular velocities of the spacecraft.

    Fig.5 Approximate optimal control inputs.

    6 Conclusions

The popular policy iteration (PI) method has been extended in this paper to the finite-horizon optimal control problem of linear time-varying discrete-time systems. In the stationary case, its connections with the existing infinite-horizon PI methods are revealed. In addition, novel data-driven off-policy PI and VI algorithms are derived to find approximate optimal controllers in the absence of precise knowledge of the system dynamics. Rigorous convergence proofs show that, under mild conditions, the sequence of suboptimal controllers generated by the proposed data-driven algorithms converges to the optimal controller. The obtained results are validated by a case study in spacecraft attitude control.
