
    Discounted Iterative Adaptive Critic Designs With Novel Stability Analysis for Tracking Control

    2022-07-18 06:17:08 · Mingming Ha, Ding Wang, and Derong Liu
    IEEE/CAA Journal of Automatica Sinica, July 2022

    Mingming Ha, Ding Wang, and Derong Liu

    Abstract—The core task of tracking control is to make the controlled plant track a desired trajectory. The traditional performance index used in previous studies cannot completely eliminate the tracking error as the number of time steps increases. In this paper, a new cost function is introduced to develop the value-iteration-based adaptive critic framework for solving the tracking control problem. Unlike the regulator problem, the iterative value function of the tracking control problem cannot be regarded as a Lyapunov function. A novel stability analysis method is developed to guarantee that the tracking error converges to zero. The discounted iterative scheme under the new cost function for the special case of linear systems is elaborated. Finally, the tracking performance of the present scheme is demonstrated by numerical results and compared with those of the traditional approaches.

    I. INTRODUCTION

    RECENTLY, adaptive critic methods, also known as approximate or adaptive dynamic programming (ADP) [1]–[8], have enjoyed remarkable success in a wide range of fields, including energy scheduling [9], [10], orbital rendezvous [11], [12], urban wastewater treatment [13], attitude-tracking control for hypersonic vehicles [14], and so forth. Adaptive critic designs have close connections to both adaptive control and optimal control [15], [16]. For nonlinear systems, it is difficult to obtain the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation. Iterative adaptive critic techniques, mainly including value iteration (VI) [17]–[20] and policy iteration (PI) [21], [22], have been extensively studied and successfully applied to iteratively approximate the numerical solution of the HJB equation [23]–[26]. In [27], relaxed dynamic programming was introduced to overcome the "curse of dimensionality" problem by relaxing the demand for optimality. The upper and lower bounds of the iterative value function were first determined and the convergence of VI was revealed. To ensure stability of the undiscounted VI, Heydari [28] developed a stabilizing VI algorithm initialized by a stabilizing policy. With this operation, the stability of the closed-loop system using the iterative control policy can be guaranteed. In [29], the convergence and monotonicity of the discounted value function were investigated. The discounted iterative scheme was implemented by neural-network-based globalized dual heuristic programming. Afterwards, Ha et al. [30] discussed the effect of the discount factor on the stability of the iterative control policy, and several stability criteria with respect to the discount factor were established. In [31], Wang et al. developed an event-based adaptive critic scheme and presented an appropriate triggering condition to ensure the stability of the controlled plant.

    Optimal tracking control is a significant topic in the control community, which mainly aims at designing a controller that makes the controlled plant track a reference trajectory. The literature on this problem is extensive [32]–[37] and reflects considerable current activity. In [38], Wang et al. developed a finite-horizon optimal tracking control strategy with convergence analysis for affine discrete-time systems by employing the iterative heuristic dynamic programming approach. For the linear quadratic output tracking control problem, Kiumarsi et al. [39] presented a novel Bellman equation, which allows policy evaluation by using only the input, output, and reference trajectory data. Liu et al. [40] considered the robust optimal tracking control problem and introduced the adaptive critic design scheme into the controller to overcome the unknown uncertainty caused by multi-input multi-output discrete-time systems. In [41], Luo et al. designed a model-free optimal tracking controller for nonaffine systems by using a critic-only Q-learning algorithm, although the proposed method needs to be given an initial admissible control policy. In [42], a novel cost function was proposed to eliminate the tracking error, and the convergence and monotonicity of the new value function sequence were investigated. On the other hand, some methods to solve the tracking problem for affine continuous-time systems can be found in [43]–[46]. For affine nonlinear partially-unknown constraint-input systems, the integral reinforcement learning technique was studied in [43] to learn the solution to the optimal tracking control problem without requiring identification of the unknown dynamics.

    In general, the majority of adaptive critic tracking control methods need to solve for the feedforward control input of the reference trajectory. Then, the tracking control problem can be transformed into a regulator problem. However, for some nonlinear systems, the feedforward control input corresponding to the reference trajectory might be nonexistent or not unique, which makes these methods unavailable. To avoid solving for the feedforward control input, some tracking control approaches establish a performance index function of the tracking error and the control input. Then, the adaptive critic design is employed to minimize the performance index. With this operation, the tracking error cannot be eliminated, because minimization of the control input does not always lead to minimization of the tracking error. Moreover, as mentioned in [30], the introduction of the discount factor affects the stability of the optimal control policy. If an inappropriate discount factor is selected, the stability of the closed-loop system cannot be guaranteed. Besides, unlike the regulator problem, the iterative value function of tracking control is not a Lyapunov function. Till now, few studies have focused on this problem. In this paper, inspired by [42], the new performance index is adopted to avoid solving for the feedforward control and to eliminate the tracking error. The stability conditions with respect to the discount factor are discussed, which guarantee that the tracking error converges to zero as the number of time steps increases.

    The main contributions of this article are summarized as follows.

    1) Based on the new performance index function, a novel stability analysis method for the tracking control problem is established. It is guaranteed that the tracking error can be eliminated completely.

    2) The effect of the presence of the approximation errors derived from the value function approximator is discussed with respect to the stability of controlled systems.

    3) For linear systems, a new VI-based adaptive critic scheme, which iterates between the kernel matrix and the state feedback gain, is developed.

    The remainder of this paper is organized as follows. In Section II, the necessary background and motivation are provided. The VI-based adaptive critic scheme and the properties of the iterative value function are presented. In Section III, the novel stability analysis for tracking control is developed. In Section IV, the discounted iterative formulation under the new performance index for the special case of linear systems is discussed. Section V compares the tracking performance of the new and traditional tracking control approaches by the numerical results. In Section VI, conclusions of this paper and further research topics are summarized.

    Notations: Throughout this paper, N and N+ are the sets of all nonnegative and positive integers, respectively, i.e., N = {0, 1, 2, ...} and N+ = {1, 2, ...}. R denotes the set of all real numbers and R+ is the set of nonnegative real numbers. Rn is the Euclidean space of all n-dimensional real vectors. In and 0m×n represent the n×n identity matrix and the m×n zero matrix, respectively. C ≤ 0 means that the matrix C is negative semi-definite.

    II. PROBLEM FORMULATION AND VI-BASED ADAPTIVE CRITIC SCHEME

    Consider the following affine nonlinear systems given by:
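    The displayed dynamics (1) did not survive extraction; based on the affine form used in the error dynamics later in this section, they presumably read

    $$X_{k+1} = F(X_k) + G(X_k)u_k, \qquad k \in \mathbb{N}. \qquad (1)$$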

    with the state Xk ∈ Rn and input uk ∈ Rm, where n, m ∈ N+ and k ∈ N. F: Rn → Rn and G: Rn → Rn×m are the drift and control input dynamics, respectively. The tracking error is defined as
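    The defining relation (2) is not legible in the source; consistent with the error dynamics Ek+1 = F(Ek+Dk) + G(Ek+Dk)uk − M(Dk) used below, it presumably reads

    $$E_k = X_k - D_k. \qquad (2)$$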

    where Dk is the reference trajectory at stage k. Suppose that Dk is bounded and satisfies
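    The displayed relation (3) is likewise lost; consistent with the command generator M(·) introduced next, it presumably reads

    $$D_{k+1} = M(D_k), \qquad k \in \mathbb{N}. \qquad (3)$$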

    where M(·) is the command generator dynamics. The objective of the tracking control problem is to design a controller to track the desired trajectory. Let u_k = {uk, uk+1, ...}, k ∈ N, be an infinite-length sequence of control inputs. Assume that there exists a control sequence u_0 such that Ek → 0 as k → ∞.

    In general, in the previous works [34], [38], it is assumed that there exists a feedforward control input ηk satisfying Dk+1 = F(Dk) + G(Dk)ηk to achieve perfect tracking. However, for some nonlinear systems, the feedforward control input might be nonexistent. To avoid computing the feedforward control input ηk, the performance index [33], [34] is generally designed as
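    The displayed index (4) is not legible in the extraction; based on the description that follows (a discount factor γ and a utility built from positive definite functions Q(·) and R(·)), it presumably has the form

    $$J(E_0) = \sum_{k=0}^{\infty}\gamma^{k}U(E_k,u_k), \qquad U(E_k,u_k) = Q(E_k) + R(u_k). \qquad (4)$$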

    where γ ∈ (0,1] is the discount factor and U(·,·) is the utility function. The terms Q: Rn → R+ and R: Rm → R+ in the utility function are positive definite continuous functions. With this operation, both the tracking error and the control input in the performance index (4) are minimized. To the best of our knowledge, the minimization of the control input does not always result in the minimization of the tracking error unless the reference trajectory is assumed to satisfy Dk → 0 as k → ∞. Such an assumption greatly reduces the application scope of the approach. Therefore, for the majority of desired trajectories, the tracking error cannot be eliminated [42] by adopting the performance index (4). According to [42], under the control sequence u_0, a new discounted cost function for the initial tracking error E0 and reference point D0 is introduced as
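    Equation (5) likewise does not survive extraction. Following the description of [42] summarized here, in which the utility penalizes the successor tracking error produced by the current input so that no separate control penalty is needed, a plausible form is

    $$J(E_0,D_0) = \sum_{k=0}^{\infty}\gamma^{k}U(E_k,D_k,u_k), \qquad U(E_k,D_k,u_k) = Q(E_{k+1}), \qquad (5)$$

    with Ek+1 = F(Ek+Dk) + G(Ek+Dk)uk − M(Dk). This reconstruction is consistent with the first-order condition later in this section, where the control input enters the utility through Ek+1.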

    The adopted cost function (5) not only avoids computing the feedforward control input, but also eliminates the tracking error. The objective of this paper is to find a feedback control policy π(E, D), which both makes the dynamical system (1) track the reference trajectory and minimizes the cost function (5). According to (5), the state value function can be obtained as

    and its optimal value is V*(Ek, Dk).

    According to Bellman's principle of optimality, the optimal value function of the tracking control problem satisfies
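    The Bellman equation (7) is not legible in the source; in the standard discounted form consistent with the proof of Theorem 1, it reads

    $$V^{*}(E_k,D_k) = \min_{\pi}\Big\{U\big(E_k,D_k,\pi(E_k,D_k)\big) + \gamma V^{*}(E_{k+1},D_{k+1})\Big\}. \qquad (7)$$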

    where Ek+1 = F(Ek+Dk) + G(Ek+Dk)π(Ek, Dk) − M(Dk). The corresponding optimal control policy is computed by

    Therefore, the Hamiltonian function for tracking control can be obtained as

    The optimal control policy π* satisfies the first-order necessary condition for optimality, i.e., ∂H/∂π = 0 [42]. The gradient of (9) with respect to π is given as

    In general, the positive definite function Q is chosen as the following quadratic form:

    where Q ∈ Rn×n is a positive definite matrix. Then, the expression of the optimal control policy can be obtained by solving (10) [42].
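    Equations (9)-(11) are not recoverable verbatim. As a sketch under the quadratic choice Q(E) = E⊤QE and the reconstructed utility in (5), the gradient condition ∂H/∂π = 0 would take the form

    $$G^{\top}(E_k+D_k)\Big[2QE_{k+1} + \gamma\,\frac{\partial V^{*}(E_{k+1},D_{k+1})}{\partial E_{k+1}}\Big] = 0,$$

    with Ek+1 = F(Ek+Dk) + G(Ek+Dk)π(Ek, Dk) − M(Dk). The condition is implicit in the policy because Ek+1 itself depends on π(Ek, Dk), which is why the iterative policy in Example 2 is later obtained by successive approximation.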

    Since it is difficult or impossible to directly solve the Bellman equation (7), iterative adaptive critic methods are widely adopted to obtain its numerical solution. Here, the VI-based adaptive critic scheme for the tracking control problem is employed to approximate the optimal value function V*(Ek, Dk) formulated in (7). The VI-based adaptive critic algorithm starts from a positive semi-definite continuous value function V(0)(Ek, Dk). Using the initial value function V(0)(Ek, Dk), the initial control policy is computed by

    where Ek+1 = F(Ek+Dk) + G(Ek+Dk)π(Ek, Dk) − M(Dk). For the iteration index ℓ ∈ N+, the VI-based adaptive critic algorithm is implemented between the value function update

    and the policy improvement

    In the iteration learning process, two sequences, namely the iterative value function sequence {V(ℓ)} and the corresponding control policy sequence {π(ℓ)}, are obtained. The convergence and monotonicity of the undiscounted value function sequence have been investigated in [42]. Inspired by [42], the corresponding convergence and monotonicity properties of the discounted value function can be obtained.
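    Since the updates (13) and (14) are given only abstractly here, the following Python sketch illustrates one way the loop could be organized on sampled error/reference pairs. The routines minimize_u and fit, and the overall structure, are illustrative assumptions rather than the authors' implementation.

    ```python
    def track_vi(F, G, M, utility, minimize_u, fit, samples, gamma=0.98, num_iters=200):
        """Schematic VI-based adaptive critic loop for tracking control, cf. (13) and (14).

        F, G, M     : drift, input-gain function, and command generator of the plant (assumed known)
        utility     : per-step cost U(E, D, u) of the adopted performance index
        minimize_u  : numerical routine returning argmin_u [U(E, D, u) + gamma * V(E', D')]
        fit         : regression routine returning a callable V(E, D) fitted to (sample, target) pairs
        samples     : list of (E, D) pairs on which the iterative value function is represented
        """
        V = lambda E, D: 0.0                          # V^(0) = 0, as in Lemma 1
        for _ in range(num_iters):
            targets = []
            for E, D in samples:
                # Policy improvement (14): greedy input with respect to the current value function.
                u = minimize_u(E, D, V, gamma)
                # One-step error dynamics: E' = F(E + D) + G(E + D) u - M(D), D' = M(D).
                E_next = F(E + D) + G(E + D) @ u - M(D)
                D_next = M(D)
                # Value update (13): apply the Bellman operator at the greedy input.
                targets.append(utility(E, D, u) + gamma * V(E_next, D_next))
            V = fit(samples, targets)                 # e.g. least squares or a neural network
        return V
    ```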

    Lemma 1 [42]: Let the value function and control policy sequences be tuned by (13) and (14), respectively. For any Ek and Dk, the value function starts from V(0)(·,·) = 0.

    1) The value function sequence {V(ℓ)(Ek, Dk)} is monotonically nondecreasing, i.e., V(ℓ)(Ek, Dk) ≤ V(ℓ+1)(Ek, Dk), ℓ ∈ N.

    2) Suppose that there exists a constant κ ∈ (0, ∞) such that 0 ≤ γV*(Ek+1, Dk+1) ≤ κU(Ek, Dk, uk), where Ek+1 = F(Ek+Dk) + G(Ek+Dk)uk − M(Dk). Then, the iterative value function approaches the optimal value function in the following manner:

    It can be guaranteed that the discounted value function and corresponding control policy sequences approximate the optimal value function and optimal control policy as the number of iterations increases, i.e., limℓ→∞ V(ℓ)(Ek, Dk) = V*(Ek, Dk) and limℓ→∞ π(ℓ)(Ek, Dk) = π*(Ek, Dk). Note that the introduction of the discount factor will affect the stability of the optimal and iterative control policies. If the discount factor is chosen too small, the optimal control policy might be unstable. For the tracking control problem, this means the policy π*(Ek, Dk) cannot make the controlled plant track the desired trajectory, and it is then meaningless to design iterative methods to approximate such an optimal control policy. On the other hand, for the regulation problem, the iterative value function is a Lyapunov function that can be used to judge the stability of the closed-loop system [18]. However, for the tracking control problem, the iterative value function cannot be regarded as a Lyapunov function, since it does not depend on the tracking error E alone. Therefore, it is necessary to develop a novel stability analysis approach for tracking control problems.

    III. NOVEL STABILITY ANALYSIS OF VI-BASED ADAPTIVE CRITIC DESIGNS

    In this section, the stability of the tracking error system is discussed. It is guaranteed that the tracking error under the iterative control policy converges to zero as the number of time steps increases.

    Theorem 1: Suppose that there exists a control sequence u_0 for the system (1) and the desired trajectory (3) such that Ek → 0 as k → ∞. If the discount factor satisfies

    where c ∈ (0,1) is a constant, then the tracking error under the optimal control π*(Ek, Dk) converges to zero as k → ∞.

    Proof: According to (7) and (8), the Bellman equation can be rewritten as

    which is equivalent to

    Applying (19) to the tracking errors E0, E1, ..., EN and the corresponding reference points D0, D1, ..., DN, one has

    Combining the inequalities in (20), we have
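    The displayed relations (18)-(21) do not survive extraction; the underlying telescoping argument can be sketched as follows (a hedged paraphrase, not the paper's exact chain of inequalities). Suppose the discount factor condition (16) ensures that, along the optimal trajectory,

    $$V^{*}(E_{k+1},D_{k+1}) - V^{*}(E_k,D_k) \le -c\,U\big(E_k,D_k,\pi^{*}(E_k,D_k)\big), \qquad k \in \mathbb{N}.$$

    Summing from k = 0 to N and using V* ≥ 0 gives

    $$c\sum_{k=0}^{N}U\big(E_k,D_k,\pi^{*}(E_k,D_k)\big) \le V^{*}(E_0,D_0) - V^{*}(E_{N+1},D_{N+1}) \le V^{*}(E_0,D_0),$$

    so the utility series converges and U(Ek, Dk, π*(Ek, Dk)) → 0 as k → ∞; positive definiteness of the utility in the tracking error then forces Ek → 0.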

    For the discounted iterative adaptive critic tracking control,the condition (16) is important. Otherwise, the stability of the optimal control policy cannot be guaranteed. Theorem 1 reveals the effect of the discount factor on the convergence of the tracking error. However, the optimal value function is unknown in advance. In what follows, a practical stability condition is provided to guarantee that the tracking error converges to zero under the iterative control policy.

    Theorem 2: Let the value function with V(0)(·,·) = 0 and the control policy be updated by (13) and (14), respectively. If the iterative value function satisfies

    which implies, for j = 1, 2, ..., N,

    Combining (23) and (25), the following relationship can be obtained:

    According to 2) in Lemma 1, V(ℓ+1)(Ek, Dk) − V(ℓ)(Ek, Dk) → 0 as ℓ → ∞. Therefore, the condition (22) in Theorem 2 can be satisfied in the iteration process. There must exist an iterative control policy in the control policy sequence {π(ℓ)} which makes Ek → 0 as k → ∞.

    In general, for nonlinear systems, the value function update (13) cannot be solved exactly. Various fitting methods, such as neural networks, polynomial fitting, and so forth, can be used to approximate the iterative value function of nonlinear systems, and many numerical methods can be applied to solve (14). Note that the inputs of the function approximator are the tracking error vector E and the desired trajectory D. In particular, for high-dimensional nonlinear systems, the artificial neural network is applicable to approximate the iterative value function. Compared with the polynomial fitting method, the artificial neural network avoids manually designing each basis function. The introduction of the function approximator inevitably leads to approximation errors.
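    As one concrete possibility (a sketch, not the authors' implementation), the Bellman targets produced by (13) can be fitted by linear least squares over polynomial features of (E, D); the basis below is illustrative and is not necessarily the 26-term basis used in Example 2. Such a routine could serve as the fit argument of the loop sketched in Section II.

    ```python
    import numpy as np

    def features(E, D):
        """Illustrative polynomial basis in the tracking error E and reference D."""
        z = np.concatenate([E, D])
        quad = np.outer(z, z)[np.triu_indices(z.size)]   # all quadratic monomials z_i * z_j
        return np.concatenate([quad, quad ** 2])          # quadratic plus quartic terms

    def fit(samples, targets):
        """Least-squares fit of V(E, D) = w^T phi(E, D) to the Bellman targets from (13)."""
        Phi = np.array([features(E, D) for E, D in samples])
        w, *_ = np.linalg.lstsq(Phi, np.asarray(targets, dtype=float), rcond=None)
        return lambda E, D: float(features(E, D) @ w)
    ```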

    Define the approximation error at the ℓth iteration as ε(ℓ)(Ek, Dk). According to the value function update equation (13), the approximate value function is obtained as

    where Ek+1 = F(Ek+Dk) + G(Ek+Dk)μ(Ek, Dk) − M(Dk) and the corresponding control policy μ(Ek, Dk) is computed by

    Note that the approximation error ε(ℓ−1)(Ek, Dk) is not the error between the approximate value function V̂(ℓ)(Ek, Dk) and the exact value function V(ℓ)(Ek, Dk). Next, considering the approximation error of the function approximator, we further discuss the stability of the closed-loop system using the control policy derived from the approximate value function.

    Theorem 3: Let the iterative value function with V(0)(·,·) = 0 be approximated by a smooth function approximator. The approximate value function and the corresponding control policy are updated by (28) and (29), respectively. If the approximate value function with the approximation error ε(ℓ)(Ek, Dk) ≤ αU(Ek, Dk, μ(ℓ)(Ek, Dk)) is finite and satisfies the condition in (30), where α ∈ (0,1) and c ∈ (0, 1−α) are constants, then the tracking error under the control policy μ(ℓ)(Ek, Dk) satisfies Ek → 0 as k → ∞.

    Proof: For convenience, μ(ℓ)(Ek, Dk) is written in abbreviated form in the sequel. According to (28) and the condition (30), it follows that

    Evaluating (32) at the time steps k = 0, 1, 2, ..., N, it results in

    Combining the inequalities in (33), we obtain

    IV. DISCOUNTED TRACKING CONTROL FOR THE SPECIAL CASE OF LINEAR SYSTEMS

    In this section, the VI-based adaptive critic scheme for linear systems and the stability properties are investigated. Consider the following discrete-time linear systems given by

    where A ∈ Rn×n and B ∈ Rn×m are system matrices. Here, we assume that the reference trajectory satisfies Dk+1 = ΓDk, where Γ ∈ Rn×n is a constant matrix. This form is adopted because it is convenient for analysis. According to the new cost function (5), for the linear system (35), a quadratic performance index with a positive definite weight matrix Q is formulated as follows:

    Combining the dynamical system (35) and the desired trajectory Dk, we can obtain an augmented system as

    where the new weight matrix satisfies
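    Neither the augmented dynamics (37) nor the weight matrix (38) is legible in the extraction. A plausible reconstruction, taking the augmented state as ξk = [EkT, DkT]T so that Ek+1 = AEk + (A − Γ)Dk + Buk and Dk+1 = ΓDk, is

    $$\xi_{k+1} = \bar{A}\xi_k + \bar{B}u_k, \qquad \bar{A} = \begin{bmatrix} A & A-\Gamma \\ 0 & \Gamma \end{bmatrix}, \quad \bar{B} = \begin{bmatrix} B \\ 0 \end{bmatrix}, \qquad \bar{Q} = \begin{bmatrix} Q & 0 \\ 0 & 0 \end{bmatrix},$$

    so that the quadratic utility on the successor tracking error can be written as ξk+1TQ̄ξk+1. The block structure of Q̄ is an assumption tied to the reconstruction of (5).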

    As mentioned in [15], [16], [39], the value function can be regarded as a quadratic form in the state, i.e., V(ξk) = ξk⊤Pξk for some kernel matrix P. Then, the Bellman equation of linear quadratic tracking is obtained by

    The Hamiltonian function of linear quadratic tracking control is defined as

    Considering the state feedback policy π(ξk) = −Kξk and the equation (40), it results in

    Therefore, the linear quadratic tracking problem can be solved by using the following equation:

    Considering the Hamiltonian function (41), a necessary condition for optimality is the stationarity condition ∂H/∂uk = 0 [15], [16]. The optimal control policy is computed by

    and

    Theorem 4: Let the kernel matrix and the state feedback gain be iteratively updated by (45) and (46), respectively. If the iterative kernel matrix and state feedback gain satisfy

    which implies

    According to Theorem 2, the utility under the iterative control policy π(ℓ) satisfies U → 0 as k → ∞, which shows that the tracking error under the iterative control policy π(ℓ) approaches zero as k → ∞. ■

    For linear systems, if the system matricesAandBare known, it is not necessary to use the function approximator to estimate the iterative value function. According to the iterative algorithm (45) and (46), there is no approximation error derived from the approximate value function in the iteration procedure.
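    To make the linear scheme concrete, the following Python sketch iterates a kernel matrix and a feedback gain on an augmented system built from the Example 1 data below. Since (45) and (46) are not legible in the source, the recursions follow the standard discounted value-iteration update for the reconstructed quadratic cost, and the reference dynamics matrix Gamma is a hypothetical placeholder (the actual trajectory (52) is likewise not recoverable).

    ```python
    import numpy as np

    # Spring-mass-damper data from Example 1 (M = 1 kg, s = 5 N/m, d = 0.5 Ns/m, dt = 0.01 s),
    # Euler-discretized as in the sketch following (50).
    dt, M_mass, s, d = 0.01, 1.0, 5.0, 0.5
    A = np.array([[1.0, dt], [-dt * s / M_mass, 1.0 - dt * d / M_mass]])
    B = np.array([[0.0], [dt / M_mass]])
    Gamma = np.array([[np.cos(dt), np.sin(dt)],
                      [-np.sin(dt), np.cos(dt)]])        # hypothetical reference dynamics, stands in for (52)

    Q = np.eye(2)       # weight on the tracking error, as in Example 1
    gamma = 0.98        # discount factor used in Example 1

    # Augmented system for xi_k = [E_k; D_k] (a reconstruction, cf. the note after (38)).
    A_bar = np.block([[A, A - Gamma], [np.zeros((2, 2)), Gamma]])
    B_bar = np.vstack([B, np.zeros((2, 1))])
    Q_bar = np.block([[Q, np.zeros((2, 2))], [np.zeros((2, 2)), np.zeros((2, 2))]])

    # Value iteration between the kernel matrix P and the feedback gain K
    # (a plausible stand-in for the recursions (45)-(46)).
    P = np.zeros((4, 4))
    for _ in range(400):
        W = Q_bar + gamma * P
        K = np.linalg.solve(B_bar.T @ W @ B_bar, B_bar.T @ W @ A_bar)
        A_cl = A_bar - B_bar @ K
        P = A_cl.T @ W @ A_cl

    # Closed-loop rollout from the initial condition used in Example 1.
    X0, D0 = np.array([0.1, 0.14]), np.array([-0.3, 0.3])
    xi = np.concatenate([X0 - D0, D0])
    for _ in range(500):
        xi = (A_bar - B_bar @ K) @ xi
    print("tracking error after 500 steps:", xi[:2])
    ```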

    V. SIMULATION STUDIES

    In this section, two numerical simulations with physical background are conducted to verify the effectiveness of the discounted adaptive critic designs. Compared with the cost function (4) proposed by the traditional studies, the adopted performance index can eliminate the tracking error.

    A. Example 1

    As shown in Fig. 1, the spring-mass-damper system is used to validate the present results and compare the performance between the present and the traditional adaptive critic tracking control approaches. Let M, s, and d be the mass of the object, the stiffness constant of the spring, and the damping, respectively. The system dynamics is given as

    where x denotes the position, v stands for the velocity, and f is the force applied to the object. Let the system state vector be X = [x, v]T ∈ R2 and the control input be u = f ∈ R. The continuous-time system dynamics (50) is discretized using the Euler method with sampling interval Δt = 0.01 s. Then, the discrete-time state space equation is obtained as
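    The discretized state equation (51) is not legible in the extraction. Assuming the continuous dynamics (50) are ẋ = v and Mv̇ = −sx − dv + f, the Euler method with step Δt would give

    $$X_{k+1} = \begin{bmatrix} 1 & \Delta t \\ -\frac{s}{M}\Delta t & 1-\frac{d}{M}\Delta t \end{bmatrix} X_k + \begin{bmatrix} 0 \\ \frac{\Delta t}{M} \end{bmatrix} u_k,$$

    which is a reconstruction of the intended form of (51) rather than the equation as printed.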

    Fig. 1. Diagrammatic sketch of the spring-mass-damper system.

    In this example, the practical parameters are selected as M = 1 kg, s = 5 N/m, and d = 0.5 N·s/m. The reference trajectory is defined as

    Combining the original system (51) and the reference trajectory (52), the augmented system is formulated as

    The iterative kernel matrix, initialized as the 4×4 zero matrix, and the state feedback gain are updated by (45) and (46), respectively, where Q = I2 and the discount factor is chosen as γ = 0.98. On the other hand, considering the following traditional cost function:

    the corresponding VI-based adaptive critic control algorithm for system (53) is implemented between

    and

    where R ∈ Rm×m is a positive definite matrix. As defined in (54), the objective of the cost function is to minimize both the tracking error and the control input. The role of the cost function (54) is to balance the minimization of the tracking error against that of the control input through the selection of the matrices Q and R. To compare the tracking performance under different cost functions, we carry out the new VI-based adaptive critic algorithm and the traditional approach for 400 iterations. Three traditional cost functions with different weight matrices Qi and Ri, i = 1, 2, 3, are selected to implement the algorithms (55) and (56), where Q1,2,3 = I2 and R1,2,3 = 1, 0.1, 0.01. After 400 iteration steps, the obtained corresponding optimal kernel matrices and state feedback gains are given as follows:

    Let the initial system state and reference point be X0 = [0.1, 0.14]T and D0 = [−0.3, 0.3]T. Then, the obtained state feedback gains are applied to generate the control inputs of the controlled plant (53). The system state and tracking error trajectories under different weight matrices are shown in Figs. 2 and 3, respectively. It can be observed that a smaller R leads to a smaller tracking error. The weight matrices Q and R reflect the relative importance of minimizing the tracking error and the control input. The tracking performance of the traditional cost function with a smaller R is similar to that of the new tracking control approach. From (56), the matrix R cannot be a zero matrix; otherwise, the matrix inverse required in (56) might not exist. The corresponding control input curves are plotted in Fig. 4.

    Fig. 2. The reference trajectory and system state curves under different cost functions (Example 1).

    Fig. 3. The tracking error curves under different cost functions (Example 1).

    Fig. 4. The control input curves under different cost functions (Example 1).

    B. Example 2

    Consider the single link robot arm given in [47]. Let M, g, L, J, and fr be the mass of the payload, the acceleration of gravity, the length of the arm, the moment of inertia, and the viscous friction, respectively. The system dynamics is formulated as

    where α and u denote the angular position of the robot arm and the control input, respectively. Let the system state vector be X = [α, α̇]T ∈ R2. Similarly to Example 1, the single link robot arm dynamics is discretized using the Euler method with sampling interval Δt = 0.05 s. Then, the discrete-time state space equation of (61) is obtained as
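    Neither (61) nor (62) survives extraction. For a single-link arm with the stated parameters, the usual model and its Euler discretization would read (a reconstruction, not the printed equations)

    $$J\ddot{\alpha} = u - MgL\sin\alpha - f_r\dot{\alpha},$$

    $$X_{k+1} = \begin{bmatrix} x_{1,k} + \Delta t\, x_{2,k} \\ x_{2,k} + \Delta t\big(-\tfrac{MgL}{J}\sin x_{1,k} - \tfrac{f_r}{J}x_{2,k} + \tfrac{1}{J}u_k\big) \end{bmatrix}, \qquad X_k = [x_{1,k}, x_{2,k}]^{T} = [\alpha_k, \dot{\alpha}_k]^{T}.$$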

    In this example, the practical parameters are set as M = 1 kg, g = 9.8 m/s², L = 1 m, J = 5 kg·m², and fr = 2. The desired trajectory is defined as

    The cost function (5) is set as the quadratic form, where Q and γ are selected as Q = I2 and γ = 0.97, respectively. In this example, since Ek and Dk are the independent variables of the value function, the function approximator of the iterative value function is selected as the following form:

    where W(ℓ) ∈ R26 is the parameter vector. In the iteration process, 300 random samples in the region Ω = {(E ∈ R2, D ∈ R2): −1 ≤ E1 ≤ 1, −1 ≤ E2 ≤ 1, −1 ≤ D1 ≤ 1, −1 ≤ D2 ≤ 1} are chosen to learn the iterative value function V(ℓ)(E, D) for 200 iteration steps. The value function is initialized as zero. In the iteration process, considering the first-order necessary condition for optimality, the iterative control policy can be computed by the following equation:

    Note that the unknown control input μ(ℓ)(Ek, Dk) exists on both sides of (65). Therefore, at each iteration step, μ(ℓ)(Ek, Dk) is iteratively obtained by using the successive approximation approach (a sketch of this fixed-point computation is given below). After the iterative learning process, the parameter vector is obtained as follows:
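    The converged parameter vector displayed in the original is not recoverable from the extraction and is omitted here. The successive-approximation step mentioned above can be sketched as follows; it presumes the reconstructed utility of (5) and the stationarity condition sketched after (11), so both the equation being solved and the helper names are assumptions rather than the authors' code.

    ```python
    import numpy as np

    def policy_fixed_point(E, D, grad_V, F, G, M, Q, gamma, num_sweeps=50, tol=1e-8):
        """Successive approximation for the implicit control equation (cf. (65)); a sketch.

        grad_V(E', D') returns the gradient of the current approximate value function
        with respect to the successor tracking error E'.
        """
        x = E + D
        Gx = G(x)                                 # input-gain matrix evaluated at the current state
        u = np.zeros(Gx.shape[1])
        lhs = 2.0 * Gx.T @ Q @ Gx                 # assumed invertible (Q > 0 and Gx of full column rank)
        for _ in range(num_sweeps):
            E_next = F(x) + Gx @ u - M(D)         # successor error induced by the current guess of u
            rhs = -Gx.T @ (2.0 * Q @ (F(x) - M(D)) + gamma * grad_V(E_next, M(D)))
            u_new = np.linalg.solve(lhs, rhs)
            if np.linalg.norm(u_new - u) < tol:   # stop once the fixed point is (numerically) reached
                u = u_new
                break
            u = u_new
        return u
    ```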

    Next, we compare the tracking performance of the new and the traditional methods. The traditional cost function is also selected as the quadratic form. Three traditional cost functions with Q1,2,3 = I2 and R1,2,3 = 0.1, 0.01, 0.001 are selected. The initial state and initial reference point are set as X0 = [−0.32, 0.12]T and D0 = [0.12, −0.23]T, respectively. The obtained parameter vectors derived from the present and the traditional adaptive critic methods are employed to generate the near-optimal control policies. The controlled plant state trajectories using these near-optimal control policies are shown in Fig. 5. The corresponding tracking error and control input curves are plotted in Figs. 6 and 7, respectively. From Figs. 6 and 7, it is observed that both the tracking error and the control input derived from the traditional approach are minimized. However, for tracking control, it is not necessary to minimize the control input at the cost of deteriorating the tracking performance.

    VI. CONCLUSIONS

    In this paper, for the tracking control problem, the stability of the discounted VI-based adaptive critic method with a new performance index is investigated. Based on the new performance index, the iterative formulation for the special case of linear systems is given. Some stability conditions are provided to guarantee that the tracking error approaches zero as the number of time steps increases. Moreover, the effect of the presence of the approximation errors of the value function is discussed. Two numerical simulations are performed to compare the tracking performance of the iterative adaptive critic designs under different performance index functions.

    Fig. 5. The reference trajectory and system state curves under different cost functions (Example 2).

    Fig. 6. The tracking error curves under different cost functions (Example 2).

    Fig. 7. The control input curves under different cost functions (Example 2).

    It is also interesting to further extend the present tracking control method to nonaffine systems, data-based tracking control, output tracking control, various practical applications, and so forth. In future work, the developed tracking control method will be further advanced toward online adaptive critic designs for practical complex systems subject to noise.
