
    Discounted Iterative Adaptive Critic Designs With Novel Stability Analysis for Tracking Control

    2022-07-18 06:17:08  Mingming Ha, Ding Wang, and Derong Liu
    IEEE/CAA Journal of Automatica Sinica, July 2022 issue

    Mingming Ha, Ding Wang, and Derong Liu

    Abstract—The core task of tracking control is to make the controlled plant track a desired trajectory. The traditional performance index used in previous studies cannot completely eliminate the tracking error as the number of time steps increases. In this paper, a new cost function is introduced to develop the value-iteration-based adaptive critic framework for solving the tracking control problem. Unlike in the regulator problem, the iterative value function of the tracking control problem cannot be regarded as a Lyapunov function. A novel stability analysis method is developed to guarantee that the tracking error converges to zero. The discounted iterative scheme under the new cost function for the special case of linear systems is elaborated. Finally, the tracking performance of the present scheme is demonstrated by numerical results and compared with those of the traditional approaches.

    I. INTRODUCTION

    RECENTLY, adaptive critic methods, also known as approximate or adaptive dynamic programming (ADP) [1]–[8], have enjoyed remarkable success in a wide range of fields, such as energy scheduling [9], [10], orbital rendezvous [11], [12], urban wastewater treatment [13], and attitude-tracking control for hypersonic vehicles [14]. Adaptive critic designs have close connections to both adaptive control and optimal control [15], [16]. For nonlinear systems, it is difficult to obtain the analytical solution of the Hamilton-Jacobi-Bellman (HJB) equation. Iterative adaptive critic techniques, mainly including value iteration (VI) [17]–[20] and policy iteration (PI) [21], [22], have been extensively studied and successfully applied to iteratively approximate the numerical solution of the HJB equation [23]–[26]. In [27], relaxed dynamic programming was introduced to overcome the “curse of dimensionality” by relaxing the demand for optimality; the upper and lower bounds of the iterative value function were first determined and the convergence of VI was revealed. To ensure stability of the undiscounted VI, Heydari [28] developed a stabilizing VI algorithm initialized by a stabilizing policy. With this operation, the stability of the closed-loop system using the iterative control policy can be guaranteed. In [29], the convergence and monotonicity of the discounted value function were investigated, and the discounted iterative scheme was implemented by neural-network-based globalized dual heuristic programming. Afterwards, Ha et al. [30] discussed the effect of the discount factor on the stability of the iterative control policy, and several stability criteria with respect to the discount factor were established. In [31], Wang et al. developed an event-based adaptive critic scheme and presented an appropriate triggering condition to ensure the stability of the controlled plant.

    Optimal tracking control is a significant topic in the control community, which mainly aims at designing a controller that makes the controlled plant track a reference trajectory. The literature on this problem is extensive [32]–[37] and reflects considerable current activity. In [38], Wang et al. developed a finite-horizon optimal tracking control strategy with convergence analysis for affine discrete-time systems by employing the iterative heuristic dynamic programming approach. For the linear quadratic output tracking control problem, Kiumarsi et al. [39] presented a novel Bellman equation, which allows policy evaluation using only the input, output, and reference trajectory data. Liu et al. [40] addressed the robust optimal tracking control problem and introduced an adaptive critic design into the controller to overcome the unknown uncertainty in multi-input multi-output discrete-time systems. In [41], Luo et al. designed a model-free optimal tracking controller for nonaffine systems by using a critic-only Q-learning algorithm, although the proposed method requires an initial admissible control policy. In [42], a novel cost function was proposed to eliminate the tracking error, and the convergence and monotonicity of the new value function sequence were investigated. On the other hand, some methods for solving the tracking problem for affine continuous-time systems can be found in [43]–[46]. For affine nonlinear partially-unknown constraint-input systems, the integral reinforcement learning technique was studied in [43] to learn the solution of the optimal tracking control problem without requiring identification of the unknown dynamics.

    In general, the majority of adaptive critic tracking control methods need to solve for the feedforward control input of the reference trajectory; the tracking control problem can then be transformed into a regulator problem. However, for some nonlinear systems, the feedforward control input corresponding to the reference trajectory might be nonexistent or not unique, which makes these methods unavailable. To avoid solving for the feedforward control input, some tracking control approaches establish a performance index function of the tracking error and the control input, and then employ the adaptive critic design to minimize this performance index. With this operation, the tracking error cannot be eliminated, because minimizing the control input does not always lead to minimizing the tracking error. Moreover, as mentioned in [30], the introduction of the discount factor affects the stability of the optimal control policy: if an inappropriate discount factor is selected, the stability of the closed-loop system cannot be guaranteed. Besides, unlike in the regulator problem, the iterative value function of tracking control is not a Lyapunov function. To date, few studies have focused on this problem. In this paper, inspired by [42], a new performance index is adopted to avoid solving for the feedforward control and to eliminate the tracking error. Stability conditions with respect to the discount factor are discussed, which guarantee that the tracking error converges to zero as the number of time steps increases.

    The main contributions of this article are summarized as follows.

    1) Based on the new performance index function, a novel stability analysis method for the tracking control problem is established. It is guaranteed that the tracking error can be eliminated completely.

    2) The effect of the presence of the approximation errors derived from the value function approximator is discussed with respect to the stability of controlled systems.

    3) For linear systems, the new VI-based adaptive critic scheme between the kernel matrix and the state feedback gain is developed.

    The remainder of this paper is organized as follows. In Section II, the necessary background and motivation are provided, and the VI-based adaptive critic scheme and the properties of the iterative value function are presented. In Section III, the novel stability analysis for tracking control is developed. In Section IV, the discounted iterative formulation under the new performance index for the special case of linear systems is discussed. Section V compares the tracking performance of the new and traditional tracking control approaches through numerical results. In Section VI, conclusions and further research topics are summarized.

    Notations: Throughout this paper, N and N+ are the sets of all nonnegative and positive integers, respectively, i.e., N = {0, 1, 2, ...} and N+ = {1, 2, ...}. R denotes the set of all real numbers and R+ is the set of nonnegative real numbers. R^n is the Euclidean space of all n-dimensional real vectors. I_n and 0_{m×n} represent the n×n identity matrix and the m×n zero matrix, respectively. C ≤ 0 means that the matrix C is negative semi-definite.

    II. PROBLEM FORMULATION AND VI-BASED ADAPTIVE CRITIC SCHEME

    Consider the following affine nonlinear systems given by:

    with the state X_k ∈ R^n and input u_k ∈ R^m, where n, m ∈ N+ and k ∈ N. F: R^n → R^n and G: R^n × R^m → R^n are the drift and control input dynamics, respectively. The tracking error is defined as

    where D_k is the reference trajectory at stage k. Suppose that D_k is bounded and satisfies

    where M(·) is the command generator dynamics. The objective of the tracking control problem is to design a controller to track the desired trajectory. Let u_k = {u_k, u_{k+1}, ...}, k ∈ N, be an infinite-length sequence of control inputs. Assume that there exists a control sequence u_0 such that E_k → 0 as k → ∞.
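The tracking-error dynamics implied by (1)–(3) take the form E_{k+1} = F(E_k + D_k) + G(E_k + D_k)u_k − M(D_k). As a minimal numerical sketch (the scalar maps F, G, and M below are illustrative stand-ins, not the paper's examples), a simple stabilizing feedback already drives the error toward zero when the reference itself decays:

```python
import numpy as np

# Illustrative scalar plant, input gain, and command generator.
# These maps are assumptions for this sketch, not from the paper.
def F(x):   # drift dynamics
    return 0.9 * x

def G(x):   # control input gain (constant here)
    return 1.0

def M(d):   # command generator for the reference trajectory
    return 0.95 * d

def error_step(E, D, u):
    """One step of the tracking-error system E' = F(E+D) + G(E+D)u - M(D)."""
    return F(E + D) + G(E + D) * u - M(D)

E, D = 0.5, 1.0
for _ in range(50):
    u = -0.4 * E            # a simple stabilizing feedback, for illustration
    E = error_step(E, D, u)
    D = M(D)                # the reference evolves under the command generator

print(abs(E) < 1e-2)        # the tracking error has shrunk under this feedback
```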

    In previous works [34], [38], it is generally assumed that there exists a feedforward control input η_k satisfying D_{k+1} = F(D_k) + G(D_k)η_k to achieve perfect tracking. However, for some nonlinear systems, the feedforward control input might be nonexistent. To avoid computing the feedforward control input η_k, the performance index [33], [34] is generally designed as

    where γ ∈ (0, 1] is the discount factor and U(·,·) is the utility function. The terms Q: R^n → R+ and R: R^m → R+ in the utility function are positive definite continuous functions. With this operation, both the tracking error and the control input in the performance index (4) are minimized. However, the minimization of the control input does not always result in the minimization of the tracking error unless the reference trajectory is assumed to satisfy D_k → 0 as k → ∞. Such an assumption greatly reduces the application scope of the approach. Therefore, for the majority of desired trajectories, the tracking error cannot be eliminated [42] by adopting the performance index (4). According to [42], under the control sequence u_0, a new discounted cost function for the initial tracking error E_0 and reference point D_0 is introduced as

    The adopted cost function (5) not only avoids computing the feedforward control input, but also eliminates the tracking error. The objective of this paper is to find a feedback control policy π(E, D) which both makes the dynamical system (1) track the reference trajectory and minimizes the cost function (5). According to (5), the state value function can be obtained as

    and its optimal value is V*(E_k, D_k).

    According to Bellman's principle of optimality, the optimal value function for the tracking control problem satisfies

    where E_{k+1} = F(E_k + D_k) + G(E_k + D_k)π(E_k, D_k) − M(D_k). The corresponding optimal control policy is computed by

    Therefore, the Hamiltonian function for tracking control can be obtained as

    The optimal control policy π* satisfies the first-order necessary condition for optimality, i.e., ∂H/∂π = 0 [42]. The gradient of (9) with respect to π is given as

    In general, the positive definite function Q is chosen as the following quadratic form:

    where Q ∈ R^{n×n} is a positive definite matrix. Then, the expression of the optimal control policy can be obtained by solving (10) [42].

    Since it is difficult or impossible to directly solve the Bellman equation (7), iterative adaptive critic methods are widely adopted to obtain its numerical solution. Here, the VI-based adaptive critic scheme for the tracking control problem is employed to approximate the optimal value function V*(E_k, D_k) formulated in (7). The VI-based adaptive critic algorithm starts from a positive semi-definite continuous value function V^(0)(E_k, D_k). Using the initial value function V^(0)(E_k, D_k), the initial control policy is computed by

    where E_{k+1} = F(E_k + D_k) + G(E_k + D_k)π(E_k, D_k) − M(D_k). For the iteration index ℓ ∈ N+, the VI-based adaptive critic algorithm is implemented between the value function update

    and the policy improvement

    In the iterative learning process, two sequences, namely the iterative value function sequence {V^(ℓ)} and the corresponding control policy sequence {π^(ℓ)}, are obtained. The convergence and monotonicity of the undiscounted value function sequence have been investigated in [42]. Inspired by [42], the corresponding convergence and monotonicity properties of the discounted value function can be obtained.
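The alternation between the value function update (13) and the policy improvement (14) can be sketched on a coarse grid. Everything concrete below is an assumption for illustration: the scalar maps F, G, M, the utility U = E², the grids, and the nearest-neighbor interpolation. The assertion inside the loop checks the monotonicity property stated in Lemma 1 below.

```python
import numpy as np

# Grid-based sketch of the VI iteration (13)-(14). The plant F, G, the
# command generator M, the utility U = E^2, and all grids are
# illustrative assumptions, not the paper's setting.
gamma = 0.95
E_grid = np.linspace(-2.0, 2.0, 41)     # tracking-error grid
D_grid = np.linspace(-1.0, 1.0, 21)     # reference-point grid
U_ctrl = np.linspace(-2.0, 2.0, 41)     # discretized control set

F = lambda x: 0.8 * x                   # drift dynamics
G = lambda x: 1.0                       # control input gain
M = lambda d: 0.9 * d                   # command generator

def nearest(grid, values):
    """Indices of the grid points closest to each value."""
    return np.abs(grid[:, None] - np.atleast_1d(values)[None, :]).argmin(axis=0)

V = np.zeros((E_grid.size, D_grid.size))        # V^(0) = 0
for _ in range(30):
    V_new = np.empty_like(V)
    for i, E in enumerate(E_grid):
        for j, D in enumerate(D_grid):
            # Successor error for every candidate control at once.
            E_next = F(E + D) + G(E + D) * U_ctrl - M(D)
            D_next = M(D)
            V_next = V[nearest(E_grid, np.clip(E_next, -2, 2)),
                       nearest(D_grid, D_next)[0]]
            # Value update (13) with the policy improvement (14) folded in.
            V_new[i, j] = (E * E + gamma * V_next).min()
    # Lemma 1, part 1: {V^(l)} is monotonically nondecreasing from V^(0) = 0.
    assert np.all(V_new >= V - 1e-12)
    V = V_new

print(round(float(V[nearest(E_grid, 1.0)[0], nearest(D_grid, 0.0)[0]]), 3))
```

In this toy setting the converged value at (E, D) = (1, 0) is E² = 1: the error can be driven to zero in a single step, after which no further cost accrues.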

    Lemma 1 [42]: Let the value function and control policy sequences be tuned by (13) and (14), respectively. For any E_k and D_k, the value function starts from V^(0)(·,·) = 0.

    1) The value function sequence {V^(ℓ)(E_k, D_k)} is monotonically nondecreasing, i.e., V^(ℓ)(E_k, D_k) ≤ V^(ℓ+1)(E_k, D_k), ℓ ∈ N.

    2) Suppose that there exists a constant κ ∈ (0, ∞) such that 0 ≤ γV*(E_{k+1}, D_{k+1}) ≤ κU(E_k, D_k, u_k), where E_{k+1} = F(E_k + D_k) + G(E_k + D_k)u_k − M(D_k). Then, the iterative value function approaches the optimal value function in the following manner:

    It can be guaranteed that the discounted value function and corresponding control policy sequences approach the optimal value function and optimal control policy as the number of iterations increases, i.e., lim_{ℓ→∞} V^(ℓ)(E_k, D_k) = V*(E_k, D_k) and lim_{ℓ→∞} π^(ℓ)(E_k, D_k) = π*(E_k, D_k). Note that the introduction of the discount factor affects the stability of the optimal and iterative control policies. If the discount factor is chosen too small, the optimal control policy might be unstable; for the tracking control problem, the policy π*(E_k, D_k) then cannot make the controlled plant track the desired trajectory, and it is meaningless to design iterative methods to approximate the optimal control policy. On the other hand, for the regulation problem, the iterative value function is a Lyapunov function for judging the stability of the closed-loop system [18]. However, for the tracking control problem, the iterative value function cannot be regarded as a Lyapunov function, since it does not depend only on the tracking error E. Therefore, it is necessary to develop a novel stability analysis approach for tracking control problems.

    III. NOVEL STABILITY ANALYSIS OF VI-BASED ADAPTIVE CRITIC DESIGNS

    In this section, the stability of the tracking error system is discussed. It is guaranteed that the tracking error under the iterative control policy converges to zero as the number of time steps increases.

    Theorem 1: Suppose that there exists a control sequence u_0 for the system (1) and the desired trajectory (3) such that E_k → 0 as k → ∞. If the discount factor satisfies

    where c ∈ (0, 1) is a constant, then the tracking error under the optimal control π*(E_k, D_k) converges to zero as k → ∞.

    Proof: According to (7) and (8), the Bellman equation can be rewritten as

    which is equivalent to

    Applying (19) to the tracking errors E_0, E_1, ..., E_N and the corresponding reference points D_0, D_1, ..., D_N, one has

    Combining the inequalities in (20), we have

    For the discounted iterative adaptive critic tracking control, the condition (16) is important: otherwise, the stability of the optimal control policy cannot be guaranteed. Theorem 1 reveals the effect of the discount factor on the convergence of the tracking error. However, the optimal value function is unknown in advance. In what follows, a practical stability condition is provided to guarantee that the tracking error converges to zero under the iterative control policy.
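One way the telescoping in (19)–(21) can proceed is sketched below. This is a hedged reconstruction, since the displayed conditions (16)–(21) are not reproduced in this excerpt; it assumes condition (16) yields a per-step contraction of the optimal value along the trajectory.

```latex
% Assumed consequence of condition (16), with c \in (0,1):
%   \gamma V^*(E_{k+1}, D_{k+1}) \le (1-c)\, V^*(E_k, D_k).
% Applying it at k = 0, 1, \ldots, N-1 and chaining the inequalities:
V^*(E_N, D_N) \le \frac{1-c}{\gamma}\, V^*(E_{N-1}, D_{N-1})
             \le \cdots
             \le \Bigl(\frac{1-c}{\gamma}\Bigr)^{N} V^*(E_0, D_0).
% If \gamma > 1-c, the ratio is strictly less than one, so
% V^*(E_N, D_N) \to 0 and hence E_N \to 0 as N \to \infty.
```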

    Theorem 2: Let the value function with V^(0)(·,·) = 0 and the control policy be updated by (13) and (14), respectively. If the iterative value function satisfies

    which implies, for j = 1, 2, ..., N,

    Combining (23) and (25), the following relationship can be obtained:

    According to 2) in Lemma 1, V^(ℓ+1)(E_k, D_k) − V^(ℓ)(E_k, D_k) → 0 as ℓ → ∞. Therefore, the condition (22) in Theorem 2 can be satisfied in the iteration process. There must exist an iterative control policy in the control policy sequence {π^(ℓ)} which makes E_k → 0 as k → ∞.

    In general, for nonlinear systems, the value function update (13) cannot be solved exactly. Various fitting methods, such as neural networks and polynomial fitting, can be used to approximate the iterative value function of nonlinear systems, and many numerical methods can be applied to solve (14). Note that the inputs of the function approximator are the tracking error vector E and the desired trajectory D. In particular, for high-dimensional nonlinear systems, an artificial neural network is applicable for approximating the iterative value function; compared with polynomial fitting, it avoids manually designing each basis function. The introduction of the function approximator inevitably leads to an approximation error.
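The function-approximation step can be illustrated with a least-squares fit of polynomial features of (E, D) over random samples, in the spirit of the approximator used later in Example 2. The feature map and the fitted target below are hypothetical; in the actual algorithm the regression targets come from the right-hand side of the value update (13).

```python
import numpy as np

# Sketch of fitting V^(l)(E, D) by least squares over random samples.
# The degree-2 monomial basis below is an illustrative stand-in for
# whatever basis the approximator actually uses.
rng = np.random.default_rng(0)

def features(E, D):
    z = np.concatenate([E, D])                    # z in R^4
    quad = np.outer(z, z)[np.triu_indices(4)]     # 10 quadratic monomials
    return np.concatenate([quad, z, [1.0]])       # 15 features in total

# Random samples of (E, D) in [-1, 1]^4.
samples = rng.uniform(-1.0, 1.0, size=(300, 4))

# Hypothetical targets: here a known quadratic, to show the regression
# step itself (real targets come from the VI update).
targets = np.array([E1**2 + E2**2 + 0.5 * D1 * D2
                    for E1, E2, D1, D2 in samples])

Phi = np.array([features(p[:2], p[2:]) for p in samples])
W, *_ = np.linalg.lstsq(Phi, targets, rcond=None)

# The fitted value function reproduces the target on a fresh point.
E, D = np.array([0.3, -0.2]), np.array([0.5, 0.1])
print(abs(float(features(E, D) @ W) - (0.09 + 0.04 + 0.5 * 0.05)) < 1e-8)
```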

    Define the approximation error at the ℓth iteration as ε^(ℓ)(E_k, D_k). According to the value function update equation (13), the approximate value function is obtained as

    where E_{k+1} = F(E_k + D_k) + G(E_k + D_k)μ(E_k, D_k) − M(D_k) and the corresponding control policy μ(E_k, D_k) is computed by

    Note that the approximation error ε^(ℓ−1)(E_k, D_k) is not the error between the approximate value function V̂^(ℓ)(E_k, D_k) and the exact value function V^(ℓ)(E_k, D_k). Next, considering the approximation error of the function approximator, we further discuss the stability of the closed-loop system using the control policy derived from the approximate value function.

    Theorem 3: Let the iterative value function with V^(0)(·,·) = 0 be approximated by a smooth function approximator, and let the approximate value function and the corresponding control policy be updated by (28) and (29), respectively. If the approximate value function, with the approximation error bounded by ε^(ℓ)(E_k, D_k) ≤ αU(E_k, D_k, μ^(ℓ)(E_k, D_k)), is finite and satisfies the condition (30), where α ∈ (0, 1) and c ∈ (0, 1−α) are constants, then the tracking error under the control policy μ^(ℓ)(E_k, D_k) satisfies E_k → 0 as k → ∞.

    Proof: For convenience, in the sequel, μ^(ℓ)(E_k, D_k) is written in shorthand. According to (28) and the condition (30), it leads to

    Evaluating (32) at the time steps k = 0, 1, 2, ..., N, it results in

    Combining the inequalities in (33), we obtain

    IV. DISCOUNTED TRACKING CONTROL FOR THE SPECIAL CASE OF LINEAR SYSTEMS

    In this section, the VI-based adaptive critic scheme for linear systems and its stability properties are investigated. Consider the following discrete-time linear systems given by

    where A ∈ R^{n×n} and B ∈ R^{n×m} are system matrices. Here, we assume that the reference trajectory satisfies D_{k+1} = ΓD_k, where Γ ∈ R^{n×n} is a constant matrix. This form is adopted because it is convenient for analysis. According to the new cost function (5), for the linear system (35), a quadratic performance index with a positive definite weight matrix Q is formulated as follows:

    Combining the dynamical system (35) and the desired trajectory D_k, we can obtain an augmented system as

    where the new weight matrix satisfies

    As mentioned in [15], [16], [39], the value function can be regarded as a quadratic form in the augmented state for some kernel matrix. Then, the Bellman equation of linear quadratic tracking is obtained by

    The Hamiltonian function of linear quadratic tracking control is defined as

    Considering a linear state feedback policy and the equation (40), it results in

    Therefore, the linear quadratic tracking problem can be solved by using the following equation:

    Considering the Hamiltonian function (41), a necessary condition for optimality is the stationarity condition ∂H/∂π = 0 [15], [16]. The optimal control policy is computed by

    and

    Theorem 4: Let the kernel matrix and the state feedback gain be iteratively updated by (45) and (46), respectively. If the iterative kernel matrix and state feedback gain satisfy

    which implies

    According to Theorem 2, we can obtain that the utility U under the iterative control policy π^(ℓ) converges to zero as k → ∞, which shows that the tracking error under π^(ℓ) approaches zero as k → ∞. ■

    For linear systems, if the system matrices A and B are known, it is not necessary to use a function approximator to estimate the iterative value function. According to the iterative algorithm (45) and (46), there is no approximation error derived from the approximate value function in the iteration procedure.
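For the linear case, the iteration between the kernel matrix and the feedback gain can be sketched with a standard discounted LQ value iteration on the augmented state z = [E; D]. Since the recursions (45) and (46) are not restated in this excerpt, the update below is an assumption: it weights only the tracking-error block of the augmented state, and the scalar plant (A, B, Γ) is illustrative.

```python
import numpy as np

# Kernel-matrix value iteration for V(z) = z^T P z on z = [E; D].
# Standard discounted LQ VI is used as a stand-in for the paper's
# recursions (45)-(46), which are not reproduced here.
A, B, Gam = 1.1, 1.0, 0.95          # plant, input gain, reference dynamics
gamma = 0.98

# Augmented dynamics: E' = A E + (A - Gam) D + B u,  D' = Gam D.
Abar = np.array([[A, A - Gam], [0.0, Gam]])
Bbar = np.array([[B], [0.0]])
Qbar = np.diag([1.0, 0.0])          # weight only the tracking-error block

P = Qbar.copy()                     # V^(1), obtained from V^(0) = 0
for _ in range(100):
    S = gamma * P                   # discounted cost-to-go of the successor
    K = np.linalg.solve(Bbar.T @ S @ Bbar, Bbar.T @ S @ Abar)
    Acl = Abar - Bbar @ K           # closed-loop augmented dynamics
    P = Qbar + Acl.T @ S @ Acl      # kernel-matrix update

# Roll out u = -K z: the tracking error must vanish even though the
# plant itself (A = 1.1) is unstable.
z = np.array([0.5, 1.0])
for _ in range(50):
    z = Acl @ z
print(bool(abs(z[0]) < 1e-9))
```

For this fully actuated scalar demo the learned gain is K = [A, A−Γ], a deadbeat policy that cancels the error in one step; for underactuated multi-dimensional plants the same recursion converges gradually instead.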

    V. SIMULATION STUDIES

    In this section, two numerical simulations with physical backgrounds are conducted to verify the effectiveness of the discounted adaptive critic designs. Compared with the cost function (4) used in traditional studies, the adopted performance index can eliminate the tracking error.

    A. Example 1

    As shown in Fig. 1, the spring-mass-damper system is used to validate the present results and to compare the performance of the present and traditional adaptive critic tracking control approaches. Let M, s, and d be the mass of the object, the stiffness constant of the spring, and the damping, respectively. The system dynamics is given as

    where x denotes the position, v stands for the velocity, and f is the force applied to the object. Let the system state vector be X = [x, v]^T ∈ R^2 and the control input be u = f ∈ R. The continuous-time system dynamics (50) is discretized using the Euler method with sampling interval Δt = 0.01 s. Then, the discrete-time state space equation is obtained as

    Fig. 1. Diagrammatic sketch of the spring-mass-damper system.
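The Euler discretization can be reproduced directly from the stated parameters (M = 1 kg, s = 5 N/m, d = 0.5 Ns/m, Δt = 0.01 s); the matrices below follow from the forward-Euler rule X_{k+1} = (I + Δt·A_c)X_k + Δt·B_c·u_k applied to the spring-mass-damper dynamics.

```python
import numpy as np

# Forward-Euler discretization of the spring-mass-damper system
#   x' = v,  v' = -(s/M) x - (d/M) v + f/M
# with the parameter values used in Example 1.
M_mass, s, d, dt = 1.0, 5.0, 0.5, 0.01

# Continuous-time matrices: X_dot = Ac X + Bc u, with X = [x, v]^T.
Ac = np.array([[0.0, 1.0],
               [-s / M_mass, -d / M_mass]])
Bc = np.array([[0.0],
               [1.0 / M_mass]])

# Discrete-time matrices: X_{k+1} = (I + dt * Ac) X_k + dt * Bc u_k.
A = np.eye(2) + dt * Ac
B = dt * Bc

print(A.round(4).tolist())   # [[1.0, 0.01], [-0.05, 0.995]]
print(B.round(4).tolist())   # [[0.0], [0.01]]
```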

    In this example, the practical parameters are selected as M = 1 kg, s = 5 N/m, and d = 0.5 Ns/m. The reference trajectory is defined as

    Combining the original system (51) and the reference trajectory (52), the augmented system is formulated as

    The iterative kernel matrix, initialized as the 4×4 zero matrix, and the state feedback gain are updated by (45) and (46), respectively, where Q = I_2 and the discount factor is chosen as γ = 0.98. On the other hand, consider the following traditional cost function:

    the corresponding VI-based adaptive critic control algorithm for system (53) is implemented between

    and

    where R ∈ R^{m×m} is a positive definite matrix. As defined in (54), the objective of the cost function is to minimize both the tracking error and the control input; the role of (54) is to balance these two minimizations through the selection of the matrices Q and R. To compare the tracking performance under different cost functions, we carry out the new VI-based adaptive critic algorithm and the traditional approach for 400 iterations. Three traditional cost functions with different weight matrices Q_i and R_i, i = 1, 2, 3, are selected to implement the algorithms (55) and (56), where Q_{1,2,3} = I_2 and R_{1,2,3} = 1, 0.1, 0.01. After 400 iteration steps, the obtained optimal kernel matrices and state feedback gains are given as follows:

    Let the initial system state and reference point be X_0 = [0.1, 0.14]^T and D_0 = [−0.3, 0.3]^T. Then, the obtained state feedback gains are applied to generate the control inputs of the controlled plant (53). The system state and tracking error trajectories under different weight matrices are shown in Figs. 2 and 3, respectively. It can be observed that a smaller R leads to a smaller tracking error; the weight matrices Q and R reflect the relative importance of minimizing the tracking error and the control input. The tracking performance of the traditional cost function with the smallest R is similar to that of the new tracking control approach. From (56), the matrix R cannot be a zero matrix; otherwise, the matrix inverse in (56) might not exist. The corresponding control input curves are plotted in Fig. 4.

    Fig. 2. The reference trajectory and system state curves under different cost functions (Example 1).

    Fig. 3. The tracking error curves under different cost functions (Example 1).

    Fig. 4. The control input curves under different cost functions (Example 1).

    B. Example 2

    Consider the single-link robot arm given in [47]. Let M, g, L, J, and f_r be the mass of the payload, the acceleration of gravity, the length of the arm, the moment of inertia, and the viscous friction, respectively. The system dynamics is formulated as

    where α and u denote the angular position of the robot arm and the control input, respectively. Let the system state vector be X = [α, α̇]^T ∈ R^2. Similarly to Example 1, the single-link robot arm dynamics is discretized using the Euler method with sampling interval Δt = 0.05 s. Then, the discrete-time state space equation of (61) is obtained as

    In this example, the practical parameters are set as M = 1 kg, g = 9.8 m/s², L = 1 m, J = 5 kg·m², and f_r = 2. The desired trajectory is defined as

    The cost function (5) is set in the quadratic form, where Q and γ are selected as Q = I_2 and γ = 0.97, respectively. In this example, since E_k and D_k are the independent variables of the value function, the function approximator of the iterative value function is selected in the following form:

    where W^(ℓ) ∈ R^26 is the parameter vector. In the iteration process, 300 random samples in the region Ω = {(E ∈ R^2, D ∈ R^2): −1 ≤ E_1 ≤ 1, −1 ≤ E_2 ≤ 1, −1 ≤ D_1 ≤ 1, −1 ≤ D_2 ≤ 1} are chosen to learn the iterative value function V^(ℓ)(E, D) for 200 iteration steps. The value function is initialized as zero. In the iteration process, considering the first-order necessary condition for optimality, the iterative control policy can be computed by the following equation:

    Note that the unknown control input μ^(ℓ)(E_k, D_k) appears on both sides of (65). Therefore, at each iteration step, μ^(ℓ)(E_k, D_k) is obtained iteratively by using the successive approximation approach. After the iterative learning process, the parameter vector is obtained as follows:

    Next, we compare the tracking performance of the new and traditional methods. The traditional cost function is also selected in the quadratic form, and three traditional cost functions with Q_{1,2,3} = I_2 and R_{1,2,3} = 0.1, 0.01, 0.001 are selected. The initial state and initial reference point are set as X_0 = [−0.32, 0.12]^T and D_0 = [0.12, −0.23]^T, respectively. The obtained parameter vectors derived from the present and traditional adaptive critic methods are employed to generate near-optimal control policies. The controlled plant state trajectories using these near-optimal control policies are shown in Fig. 5, and the corresponding tracking error and control input curves are plotted in Figs. 6 and 7, respectively. From Figs. 6 and 7, it is observed that the traditional approach minimizes both the tracking error and the control input. However, for tracking control it is not necessary to minimize the control input at the expense of tracking performance.
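The successive-approximation step used for the implicit policy equation (65) has the following shape: iterate u ← φ(u) until the update stalls. The right-hand side φ below is an illustrative contraction, not the actual gradient expression from (65).

```python
# Successive approximation for an implicit policy equation u = phi(u),
# as used to solve (65) at each iteration step. phi is a hypothetical
# contraction in u (|−0.3| < 1 guarantees convergence).
def phi(u, E, D):
    return -0.6 * E + 0.1 * D - 0.3 * u

def solve_policy(E, D, tol=1e-10, max_iter=100):
    """Iterate u <- phi(u) from u = 0 until the update is below tol."""
    u = 0.0
    for _ in range(max_iter):
        u_next = phi(u, E, D)
        if abs(u_next - u) < tol:
            return u_next
        u = u_next
    return u

u_star = solve_policy(E=0.5, D=-0.2)
# Fixed point of u = -0.6E + 0.1D - 0.3u, i.e. u = (-0.6E + 0.1D)/1.3.
print(abs(u_star - (-0.6 * 0.5 + 0.1 * (-0.2)) / 1.3) < 1e-8)
```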

    VI. CONCLUSIONS

    In this paper, the stability of the discounted VI-based adaptive critic method with a new performance index is investigated for the tracking control problem. Based on the new performance index, the iterative formulation for the special case of linear systems is given. Stability conditions are provided to guarantee that the tracking error approaches zero as the number of time steps increases. Moreover, the effect of the approximation errors of the value function is discussed. Two numerical simulations are performed to compare the tracking performance of the iterative adaptive critic designs under different performance index functions.

    Fig. 5. The reference trajectory and system state curves under different cost functions (Example 2).

    Fig. 6. The tracking error curves under different cost functions (Example 2).

    Fig. 7. The control input curves under different cost functions (Example 2).

    It is also of interest to extend the present tracking control method to nonaffine systems, data-based tracking control, output tracking control, and various practical applications. Future work will focus on online adaptive critic designs for practical complex systems subject to noise.
