
    Data-Based Optimal Tracking of Autonomous Nonlinear Switching Systems

IEEE/CAA Journal of Automatica Sinica, 2021, No. 1

    Xiaofeng Li, Lu Dong, Member, IEEE, and Changyin Sun, Senior Member, IEEE

Abstract—In this paper, a data-based scheme is proposed to solve the optimal tracking problem of autonomous nonlinear switching systems. The system state is forced to track the reference signal by minimizing the performance function. First, the problem is transformed into solving the corresponding Bellman optimality equation in terms of the Q-function (also known as the action-value function). Then, an iterative algorithm based on adaptive dynamic programming (ADP) is developed to find the optimal solution, which relies entirely on sampled data. A linear-in-parameter (LIP) neural network is taken as the value function approximator. Considering the presence of approximation error at each iteration step, the generated sequence of approximated value functions is proved to be bounded around the exact optimal solution under some verifiable assumptions. Moreover, the effect of terminating the learning process after a finite number of iterations is investigated, and a sufficient condition for asymptotic stability of the tracking error is derived. Finally, the effectiveness of the algorithm is demonstrated with three simulation examples.

    I. INTRODUCTION

THE optimal scheduling of nonlinear switching systems has attracted vast attention in recent decades. A switching system is a hybrid dynamic system which consists of continuous-time subsystems and discrete-time switching events. At each time step, only one subsystem is active, so the main issue is to find the optimal policy that determines “when” to switch the mode and “which” mode should be activated [1], [2]. Many complex real-world applications can be described as switching systems, ranging from bioengineering to electronic circuits [3]-[7].

Generally, the existing methods for the optimal switching problem can be classified into two categories. Methods in the first category find the switching sequence in a “planning” manner. In [8]-[11], nonlinear-programming-based algorithms are designed to determine the switching instants by using the gradient of the performance function. Note that the sequence of active modes is required to be fixed a priori. In [12], the authors propose a two-stage decision algorithm that allows a free mode sequence by decoupling the decision of the active mode from that of the switching time. On the other hand, discretization-based methods solve the problem by discretizing the state and input spaces into a finite number of options [13]-[16]. However, these planning-based algorithms achieve good performance only for specific initial states. Once the given initial conditions change, a new planning schedule has to be made from scratch.

Optimal control is an important topic in modern control theory which aims to find a stabilizing controller that minimizes the performance function [17]. In recent years, researchers have developed many optimal schemes for practical real-world applications, such as trajectory planning and closed-loop optimal control of cable robots [18]-[20]. Based on the reinforcement learning mechanism, the adaptive dynamic programming (ADP) algorithm was first developed to solve the optimal control problem of discrete-time systems with continuous state space [21], [22]. In parallel, a continuous-time framework was proposed by the group of Frank L. Lewis to extend the application of ADP to continuous-time nonlinear systems [23]-[25]. Two main iterative methods, value iteration (VI) [26] and policy iteration (PI) [27], are employed to solve the Hamilton-Jacobi-Bellman (HJB) equation. The actor-critic (AC) structure is often employed to implement the ADP algorithm with two neural networks (NNs) [28]. The critic network takes system states as input and outputs the estimated value function, while the actor network approximates the mapping between states and control input [29].

According to the requirement on system dynamics, the family of ADP algorithms can be divided into three main classes: model-based methods, model-free methods, and data-based methods. Model-based ADP algorithms require exact knowledge of the plant dynamics [26], [27], [30]. A monotonically non-decreasing or non-increasing sequence of value functions is generated by the VI or PI based algorithm, which converges to the optimal solution. For model-free algorithms, the system model is first identified, e.g., by using a neural network (NN) or a fuzzy system, and the iterations are then carried out on the approximated model [31], [32]. It is worth noting that the presence of identification error may lead to sub-optimality of the learned policy. In contrast to the above two approaches, data-based ADP methods rely entirely on input and output data [33]-[37]. The objective is to solve the Q-function based optimal Bellman equation so that the optimal controller can be obtained without knowing the system dynamics. Recently, the combination of the ADP method with the event-trigger mechanism has been investigated, which substantially reduces the number of control-input updates without degrading the performance [38]-[41]. Considering the uncertainty of the system dynamics, robust ADP algorithms have been proposed to find the optimal controller for practical applications [42], [43]. In addition, many practical applications have been solved successfully by using the ADP method [44]-[46].

As a powerful method for solving the HJB equation, ADP has been applied to the optimal control of switching systems in recent years. In [30], the optimal switching problem of autonomous subsystems is solved by using an ADP based method in a backward fashion. In addition, the minimum dwell time constraint between different modes is considered in [47]. The feedback solution is obtained by learning the optimal value function with respect to augmented states including the system state, the currently active subsystem, and the elapsed time in a given mode. In order to reduce the switching frequency, a switching cost is incorporated in the performance function [48]. In [49], the optimal tracking problem with an infinite-horizon performance function is investigated by learning the mapping between the optimal value function and the switching instants. For continuous-time autonomous switching systems, a PI based learning scheme is proposed that accounts for the effect of approximation error on the closed-loop behaviour [50]. Moreover, the problem of controlled switching nonlinear systems is addressed by co-designing the control signal and the switching instants. In [51], the authors develop a VI based algorithm for solving the switching problem; since a fixed-horizon performance function is considered, the optimal hybrid policy is obtained backward-in-time. In [52], the optimal control and triggering of a networked control system is first transformed into an augmented switching system, and an ADP based algorithm is then proposed to solve the problems with zero-order hold (ZOH), generalized ZOH, finite-horizon, and infinite-horizon performance functions. These methods provide closed-form solutions that work for a vast domain of initial states. However, it is worth noting that accurate system dynamics are required to implement the existing algorithms, which are difficult to obtain for complex nonlinear systems. In addition, the effect of the approximation error incurred by employing an NN as the value function approximator is often ignored in the previous literature.

In this paper, a data-based algorithm is proposed to address the optimal switching problem of autonomous subsystems. Instead of requiring a system model, only input and output data are needed to learn the switching policy. Furthermore, two realistic issues are considered. On the one hand, the effect of the approximation errors between the outputs of an NN and the true target values is investigated. On the other hand, a sufficient condition is derived to guarantee the stability of the tracking error when only a finite number of iterations is performed. In addition, a critic-only structure is utilized to implement the algorithm. The main contributions of this paper are listed as follows. First, the problem is transformed into solving the Q-function based Bellman optimality equation, which enables us to derive a data-based algorithm. Second, considering the approximation errors, an approximated Q-learning based algorithm is first proposed for learning the optimal switching policy. Finally, theoretical analysis of the continuity of the Q-functions, the boundedness of the generated value function sequence, and the stability of the system is presented. Since [50]-[52] are all model-based methods, the completely “model-free” character of the proposed algorithm demonstrates its potential for complex nonlinear systems.

The rest of this paper is organized as follows. Section II presents the problem formulation. In Section III, the exact Q-learning algorithm is proposed; the approximated method is then derived, considering the approximation error and a finite number of iterations, and a linear-in-parameter (LIP) NN whose weights are updated by the least-mean-square (LMS) method is utilized to implement the algorithm. Section IV gives the theoretical analysis. Afterwards, three simulation examples are given in Section V, and the simulation results demonstrate the potential of the proposed method. Finally, conclusions are drawn in Section VI.

    II. PROBLEM FORMULATION

Hence, the tracking problem is transformed into finding the optimal Q-function. In the next section, an iterative Q-learning based algorithm is developed. In addition, the effects of the presence of approximation error as well as of the termination condition of the iterations are considered.
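As a point of reference, the Q-function Bellman optimality equation for this class of problems typically takes the form sketched below; the stage cost U(·) is an assumption standing in for the definitions of Section II, while f_v and F denote the active subsystem and the reference command generator as they appear in Algorithm 1:

$$Q^{*}(x_k, s_k, v_k) = U(x_k, s_k, v_k) + \min_{v \in \Xi} Q^{*}\big(f_{v_k}(x_k),\, F(s_k),\, v\big),$$

with the optimal switching policy following as $v_k^{*} = \arg\min_{v \in \Xi} Q^{*}(x_k, s_k, v)$.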

    III. PROPOSED ALGORITHM AND ITS IMPLEMENTATION

    A. Exact Q-Learning Algorithm

It is worth noting that the convergence, optimality, and stability properties of the exact Q-learning algorithm are established under several ideal assumptions. On the one hand, exact reconstruction of the target value function (15) is difficult when using value function approximators, except for some simple linear systems. On the other hand, an infinite number of iterations is theoretically required to obtain the optimal Q-function. In the following subsection, these two realistic issues are considered and the approximated Q-learning algorithm is developed.

    B. Approximated Q-Learning Algorithm

The approximated Q-learning method is proposed by extending the exact Q-learning algorithm. First, the algorithm starts from a zero initial Q-function, i.e., Q̂^(0) = 0. Afterwards, considering the approximation error, the algorithm iterates between computing the target Q-values and fitting the value function approximator to them.
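One such iteration typically has the form sketched below, where the stage cost U(·) is an assumption standing in for the paper's numbered update equations and ε^(j) denotes the approximation error introduced at the j-th step:

$$\hat{Q}^{(j+1)}(x_k, s_k, v_k) = U(x_k, s_k, v_k) + \min_{v \in \Xi} \hat{Q}^{(j)}(x_{k+1}, s_{k+1}, v) + \varepsilon^{(j)}(x_k, s_k, v_k),$$

with $x_{k+1} = f_{v_k}(x_k)$ and $s_{k+1} = F(s_k)$ obtained from the sampled transitions.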

    C. Implementation

Fig. 1. The structure of the critic network. The LIP NN consists of a basis function layer and an output layer. The basis functions are polynomials of combinations of system states and reference signals, the number of basis nodes is determined by trial and error, and the output layer has M nodes.
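A minimal Python sketch of such a LIP critic is given below. The particular monomials, the function names, and the layout of the weight matrix are illustrative assumptions; the paper selects its basis functions by trial and error.

```python
import numpy as np

def poly_basis(x, s):
    """Hypothetical polynomial basis phi(x, s) built from the system states and
    reference signals: first- and second-order monomials as an illustration."""
    z = np.concatenate([np.atleast_1d(x), np.atleast_1d(s)]).astype(float)
    quad = np.outer(z, z)[np.triu_indices(z.size)]  # unique quadratic terms z_i * z_j
    return np.concatenate([z, quad])

def critic_output(W, x, s):
    """LIP critic with M output nodes (one per mode): Q_hat(x, s, v) = W_v^T phi(x, s)."""
    return W @ poly_basis(x, s)  # shape (M,): the estimated Q-value of each mode
```

Here W is an M x dim(phi) matrix stacking the weight vectors of the M output nodes, so `np.argmin(critic_output(W, x, s))` returns the greedy mode.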

The output of the critic network can therefore be expressed as a linear combination of the basis functions, Q̂(x, s, v) = (Ŵ_c,v)ᵀφ(x, s), so that the weight vector of each output node can be updated at each iteration.

Fig. 2. Simplified diagram of the proposed algorithm. This figure shows the weight update process of an arbitrary output channel. The target network shares the same structure and weights as the critic network and computes the minimum value of the Q-function at the next time step. Note that at each iteration step, the weights of all output nodes should be updated.

Another critical problem is to select an appropriate termination criterion for the training process. Let the iteration be stopped at the j-th step if the following convergence tolerance is satisfied

where ζ(x,s) is a positive definite function. Once the Q-function Q̂^(j)(x,s,v) is obtained, it can be applied to control system (1) by comparing the values of the different modes and selecting the optimal one. The main procedure for implementing the proposed algorithm is given in Algorithm 1. The theoretical analysis of the effect caused by the termination condition is given in the following section.

Algorithm 1 Outline of the Implementation of the Proposed Algorithm
Step 1: Initialize the hyper-parameters, including the number of sampled data L and the termination tolerance ζ of the training process.
Step 2: Initialize the weight vectors of the critic NN, i.e., Ŵ_c,v^(0) = 0, ∀v ∈ Ξ.
Step 3: Randomly select a set of sample data {x_k^[l] ∈ Ω_x, s_k^[l] ∈ Ω_s, v_k^[l] ∈ Ξ}_{l=1}^{L}, where L is a large positive integer.
Step 4: Obtain {x_{k+1}^[l], s_{k+1}^[l]}_{l=1}^{L} according to x_{k+1}^[l] = f_{v_k^[l]}(x_k^[l]) and s_{k+1}^[l] = F(s_k^[l]), respectively.
Step 5: Let j = 0 and start the training process.
Step 6: Select the active mode at the next time step according to v_{k+1}^{(j),[l]} = argmin_{v∈Ξ} (Ŵ_c,v^(j))ᵀ φ(x_{k+1}^[l], s_{k+1}^[l]).
Step 7: Compute the target values Q̂_tar^(j+1)(x_k, s_k, v_k) for the critic network according to (22). Then, update the weights of the LIP NN by using the LMS method.
Step 8: If ||Ŵ_c,v^(j+1) − Ŵ_c,v^(j)|| ≤ ζ, ∀v ∈ Ξ, is satisfied, proceed to Step 9; otherwise, let j = j + 1 and execute Step 6.
Step 9: Let Ŵ*_c,v = Ŵ_c,v^(j), ∀v ∈ Ξ, and stop the iteration process.
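For concreteness, a minimal Python sketch of Algorithm 1 follows. It assumes the sampled transitions, a stage-cost function utility(x, s, v), and the basis function phi(x, s) are available; the batch least-squares fit stands in for the paper's LMS update, and all names are illustrative rather than the authors' implementation.

```python
import numpy as np

def train_critic(phi, utility, samples, num_modes, zeta=1e-4, max_iters=500):
    """Sketch of Algorithm 1: approximated Q-learning on sampled data.

    samples: list of tuples (x_k, s_k, v_k, x_next, s_next), collected in Steps 3-4.
    Returns the weight matrix W (num_modes x dim(phi)) of the LIP critic."""
    Phi      = np.array([phi(x, s)   for (x, s, v, xn, sn) in samples])   # phi(x_k, s_k)
    Phi_next = np.array([phi(xn, sn) for (x, s, v, xn, sn) in samples])   # phi(x_{k+1}, s_{k+1})
    U        = np.array([utility(x, s, v) for (x, s, v, xn, sn) in samples])
    modes    = np.array([v for (x, s, v, xn, sn) in samples])

    W = np.zeros((num_modes, Phi.shape[1]))             # Step 2: zero initial weights
    for _ in range(max_iters):                          # Steps 5-8
        Q_next  = Phi_next @ W.T                        # Q^(j) of every mode at the next step
        targets = U + Q_next.min(axis=1)                # Steps 6-7: greedy target values
        W_new = np.zeros_like(W)
        for v in range(num_modes):                      # fit each output node (least squares
            idx = (modes == v)                          # here; LMS in the paper)
            if idx.any():
                W_new[v] = np.linalg.lstsq(Phi[idx], targets[idx], rcond=None)[0]
        if np.max(np.abs(W_new - W)) <= zeta:           # Step 8: termination condition
            return W_new                                # Step 9: converged weights
        W = W_new
    return W
```

Once trained, the active mode at a state pair (x, s) is chosen online as `np.argmin(W @ phi(x, s))`, which mirrors Step 6.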

Remark 3: Note that the training process in Algorithm 1 is based entirely on the input and output data of the subsystems. Once the weights of the critic network have converged, the control signal can be derived based only on the current system state and reference signal. In order to achieve competitive performance, the method requires more training data than model-based and model-free algorithms. However, collecting input and output data is often easier than identifying the model.

    IV. THEORETICAL ANALYSIS

In this section, the effects of the presence of approximation error and of the termination condition on the convergence and stability properties are analyzed. Before proceeding to the proofs of the theorems, an approximated value function based ADP method is first briefly reviewed [47].

    A. Review of Approximated Value Iteration Algorithm

    B. Continuity Analysis

    C. Convergence Analysis

Next, we derive the proof that, given an upper bound on the approximation error at each iteration, the generated Q-function sequence remains bounded around the exact optimal solution.

    D. Stability Analysis

    V. SIMULATION RESULTS

In this section, the simulation results of two numerical examples are first presented to illustrate the effectiveness of the proposed method. In addition, a simulation example of an anti-lock brake system (ABS) is included. The simulation examples are run on a laptop computer with an Intel Core i7 3.2 GHz processor and 16 GB of memory, running macOS 10.13.6 and MATLAB 2018a (single threaded).


Example 1: First, the regulation problem of a simple scalar system with two subsystems is addressed. Specifically, the regulation problem can be regarded as a special case of the tracking problem with a zero reference signal. The system dynamics is described as follows [30]:

Fig. 3. Evolution of the critic NN weight elements.

    Fig. 4. State trajectory and switching mode sequence under the proposed method with x0 = 1.5.

After the training process is completed, the system is controlled by the converged policy with the initial state x0 = 1.5. The results are presented in Fig. 4. It is shown that the system switches to the first mode when the state becomes smaller than 1, which corresponds to (41). Moreover, letting the system start from different initial states, e.g., x0 = 1 and x0 = −2, the results are given in Figs. 5 and 6, respectively. It is demonstrated that our method works well for different initial states.

    Fig. 5. State trajectory and switching mode sequence under the proposed method with x0 = 1.

Fig. 6. State trajectory and switching mode sequence under the proposed method with x0 = −2.

Example 2: A two-tank device with three different modes is considered. There are three positions of the valve which determine the fluid flow into the upper tank: fully open, half open, and fully closed. The objective is to force the fluid level of the lower tank to track the reference signal. Let the fluid heights in the set-up be denoted by x = [x1, x2]ᵀ, where x1 and x2 denote the fluid levels in the upper and lower tank, respectively. The dynamics of the three subsystems are given as follows [49]:

    In addition, the dynamics of the reference command generator is described by

    Fig. 7. Evolution of the critic NN weight elements.

Once the critic network is trained, the policy can be found by simply comparing three scalar values. Selecting the initial states as x0 = [1,1]ᵀ and s0 = 1, the evolution of the states under the obtained switching policy is shown in Fig. 8. It is shown that the fluid height in the lower tank can track the reference signal well. Furthermore, the results are compared with those of a model-based value iteration algorithm [49]. The trajectories during the interval [200, 300] are highlighted. It is shown that our algorithm achieves the same, if not better, performance without knowing the exact system dynamics. In addition, the values of the performance function (3) obtained by the proposed Q-learning algorithm and the value iteration method are 70.7241 and 72.7583, respectively, which verifies this conclusion.

Fig. 8. State trajectories and switching mode sequence of the Q-learning based and model based methods with x0 = [1,1]ᵀ and s0 = 1.

In order to test the tracking ability of the proposed algorithm for different time-varying reference signals, the fluid level of the lower tank is forced to track the reference trajectories generated by ṡ = −s²(t), ṡ = −s³(t), and ṡ = −s⁴(t), respectively. Both the structure of the NNs and the parameters are kept the same as those in the previous paragraph. The state trajectories with the different reference command generators are presented in Fig. 9. The simulation results verify the effectiveness of our algorithm for time-varying reference trajectories.
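For illustration, these reference trajectories can be generated numerically as sketched below; the explicit Euler discretization and the step size dt are assumptions, not the scheme used in the paper.

```python
def simulate_reference(s0, power, dt=0.1, steps=300):
    """Integrate the reference generator s_dot = -s(t)**power (power = 2, 3, or 4)."""
    s, traj = float(s0), [float(s0)]
    for _ in range(steps):
        s += dt * (-(s ** power))   # explicit Euler step of s_dot = -s**power
        traj.append(s)
    return traj
```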

    Fig. 9. State trajectories with different reference command generators.

The policy obtained after the iteration process is utilized to control the plant with the initial state x0 = [0,0]ᵀ. Starting from the same state, an open-loop controller is derived according to the algorithm proposed in [12]. The trajectories of the states under these two controllers are presented in Fig. 10 (see top of next page). It is clear that the Q-learning controller achieves more accurate tracking performance. Using the same Q-learning controller and the nonlinear programming based controller, the simulation results with a different initial state are presented in Fig. 11 (see next page). This figure illustrates the capability of the proposed method for different initial states.

Example 3: The anti-lock brake system (ABS) is considered to illustrate the potential of the proposed algorithm for real-world applications. In order to eliminate the effect of the large ranges of the state variables, the non-dimensionalised ABS model is described as follows [56]:

    Fig. 10. State trajectories of Q-learning based and nonlinear programming based method with x0=[0.8,0.2]T and the reference signal s(t)=0.5.

    Fig. 11. State trajectories of Q-learning based and nonlinear programming based method with x0=[0,0]T and the reference signal s(t)=0.5.

    Fig. 12. Evolution of the critic NN weight elements.

    Fig. 13. State trajectories and switching mode sequence of Q-learning based method with x0=[0,0.7,0,0]T.

Furthermore, the robustness of the controller is tested with consideration of two kinds of uncertainties. First, a random noise signal with a magnitude in the range of [−0.1Ff(·), 0.1Ff(·)] is added to the longitudinal force Ff in the ABS model (44). The simulation result is given in Fig. 14. The stopping distance and stopping time are 275.3 m and 6.76 s, respectively. The number of switches between the three subsystems is 169. Compared with the noise-free case, the uncertainty leads to about a 0.81% increase in stopping distance, a 0.75% increase in stopping time, and 9 additional mode switches. Specifically, it can be seen in Fig. 14 that at the beginning of the braking process mode 2 is activated to decrease pressure. This unreasonable decision may be caused by the random noise and leads to the degradation of performance.
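A minimal sketch of how such a disturbance can be injected in simulation is given below; the uniform distribution and the function name are assumptions, since the paper only specifies the magnitude range of the noise.

```python
import numpy as np

_rng = np.random.default_rng()

def perturbed_longitudinal_force(F_f, rng=_rng):
    """Perturb the nominal longitudinal force F_f by a random term bounded by 10% of
    its value, matching the range [-0.1*F_f, 0.1*F_f] used in the robustness test."""
    return F_f + rng.uniform(-0.1, 0.1) * F_f
```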

    Fig. 14. State trajectories and switching mode sequence of Q-learning based method considering the uncertainty on the longitudinal force.

In addition, the uncertainty of the vehicle mass is considered. During the training process, the input and output data are generated based on (44) with M = 500 kg. Once the policy is trained, it is applied to control the vehicle with M = 600 kg. The simulation result is presented in Fig. 15. The stopping distance and stopping time are 323.9 m and 7.96 s, respectively. The number of switches between the three subsystems is 125. It is shown that the performance is degraded compared with the case without uncertainty. However, the controller is still successful in braking the vehicle with an admissible stopping distance, which demonstrates the robustness of the proposed method.

    Fig. 15. State trajectories and switching mode sequence of Q-learning based method considering the uncertainty on the vehicle mass.

    VI. CONCLUSIONS

In this paper, an approximated Q-learning algorithm is developed to find the optimal scheduling policy for autonomous switching systems, with rigorous theoretical analysis. The learning process is based entirely on the input and output data of the system and the reference command generator. The simulation results demonstrate the competitive performance of the proposed algorithm and its potential for complex nonlinear systems. Our future work is to investigate the optimal co-design of control and scheduling policies for controlled switching systems and Markov jump systems. In addition, the effect of employing deep NNs as the value function approximator should be considered. It is also an interesting topic to deal with external disturbances.
