
    Neural-network-based stochastic linear quadratic optimal tracking control scheme for unknown discrete-time systems using adaptive dynamic programming

2021-10-13 · Xin Chen · Fang Wang
Control Theory and Technology, 2021, Issue 3

    Xin Chen·Fang Wang

Abstract In this paper, a stochastic linear quadratic optimal tracking scheme is proposed for unknown linear discrete-time (DT) systems based on the adaptive dynamic programming (ADP) algorithm. First, an augmented system composed of the original system and the command generator is constructed, and an augmented stochastic algebraic equation is derived based on the augmented system. Next, to obtain the optimal control strategy, the stochastic case is converted into a deterministic one by system transformation, and an ADP algorithm is proposed with convergence analysis. To realize the ADP algorithm, three back propagation neural networks, including a model network, a critic network and an action network, are devised to approximate the unknown system model, the optimal value function and the optimal control strategy, respectively. Finally, the obtained optimal control strategy is applied to the original stochastic system, and two simulations are provided to demonstrate the effectiveness of the proposed algorithm.

Keywords Stochastic system · Optimal tracking control · Adaptive dynamic programming · Neural networks

    1 Introduction

As is well known, optimal tracking control (OTC) plays a significant role in the control field and is developing fast in both theory [1–4] and applications [5–7]. The aim of OTC is to design a controller that enables the output to track a reference trajectory by minimizing a predefined performance index. However, traditional OTC approaches, such as feedback linearization [1] and plant inversion [2], usually involve complex mathematical analysis and have trouble controlling highly nonlinear plants. As for the linear quadratic tracking (LQT) problem, solutions can be obtained by solving an algebraic Riccati equation (ARE) for the feedback term and a noncausal difference equation for the feedforward term [8]. Nevertheless, it is worth pointing out that the method mentioned above requires a priori system dynamics. Therefore, it remains a challenge to deal with optimal tracking control problems with completely unknown system information.

The key point of OTC is to solve the nonlinear Hamilton–Jacobi–Bellman (HJB) equation, which is too complex to admit an analytical solution. Though dynamic programming (DP) is an effective method for solving the HJB equation, it is often computationally untenable due to the "curse of dimensionality" [9]. To approximate solutions of the HJB equation, adaptive dynamic programming (ADP) algorithms have been extensively employed and developed. Value iteration (VI) [10] and policy iteration (PI) [11] pave the way for the achievement of ADP algorithms. To handle unknown systems, researchers try to rebuild the model based on data-driven techniques [12]. By using input–output data, data-driven models, such as Markov models, neural network (NN) models and others, can replace system dynamics with an input–output mapping. For discrete-time (DT) systems, ADP algorithms are proposed to deal with OTC problems relying on NN-based data-driven models [3,13]. As for continuous-time (CT) systems, a synchronous PI algorithm scheme is applied to tackle OTC with unknown dynamics via rebuilding the system model [14]. However, model reconstruction methods [3,13–15] may be subject to modeling accuracy. To get rid of this limitation, a simultaneous policy iteration (SPI) algorithm is proposed to deal with the optimal control problem for partially unknown nonlinear systems [16], and the authors of [17] further extend the SPI algorithm to optimal control problems for completely unknown nonlinear systems based on the least squares method and the Monte Carlo integration technique. Also, Bahare [18] proposed a PI algorithm and a VI one to solve the LQT ARE online depending only on measured input, output, and reference trajectory data. Besides, a Q-learning algorithm is proposed to obtain the optimal control by solving an augmented ARE, relying on neither the system dynamics nor the command generator dynamics [4].

Note that the aforementioned ADP-based schemes provide multiform approaches for the OTC problem; however, only noise-free cases are taken into consideration. In fact, an intrinsic nonlinear characteristic arises in LQT when the original system is subjected to multiplicative noises, so that standard tools for LQT cannot be applied directly. Although traditional adaptive control methods can guarantee good tracking performance for stochastic systems, the optimality aspect of the system is usually ignored [19].

As we know, the stochastic linear quadratic (SLQ) optimal control problem is complicated due to the existence of multiplicative noises, but there is an equivalent relationship between the feasibility of the SLQ optimal control problem and the solvability of the stochastic algebraic equation (SAE) [20]. Moreover, with the help of linear matrix inequalities [21], semidefinite programming [22], and the Lagrange multiplier theorem [23], solving the SLQ optimal control problem becomes easier. Nevertheless, the aforementioned schemes [20–23] work under the prerequisite that the system dynamics are completely known. To overcome the difficulty of an unknown model, the authors of [24] proposed an ADP algorithm to solve the SLQ optimal control problem based on three NN models. Moreover, the authors of [25] adopted a Q-learning algorithm to settle the SLQ optimal control problem for model-free DT systems, and the authors of [26] investigated a non-model-based ADP algorithm to address the optimal control problem for CT stochastic systems influenced by multiplicative noises.

To the best of our knowledge, there exist many ADP-based SLQ optimal control schemes, while SLQ optimal tracking control has received little attention. The SLQ optimal tracking control problem was investigated in [27,28]; however, only control-dependent noise was discussed, and the system dynamics had to be completely known in advance. When the model is unknown, there exist huge challenges in SLQ optimal tracking problems for stochastic systems with multiplicative noises. Besides, a non-stable command generator is taken into account in this paper, which means the traditional mean-square concepts in terms of x_k in [24,25] are no longer suitable, so the stability of the system cannot be guaranteed by them.

Facing the aforementioned difficulties, we propose an SLQ optimal tracking scheme for unknown models using the ADP algorithm. The main contributions can be summarized as follows:

(1) To solve the SLQ optimal tracking problem for unknown systems with multiplicative noises, an ADP algorithm is proposed in this paper, and a model-critic-action structure is introduced to obtain the optimal control strategy for stochastic systems whose dynamics are unknown.

(2) To ensure the stability of the system, a mean-square concept with respect to e_k is newly defined, and a discount factor is introduced into the cost function; then an augmented SAE is derived to obtain the optimal control based on the augmented system.

The rest of this paper is organized as follows. In Sect. 2, we give the problem formulation and conversion. In Sect. 3, we carry out the derivation and convergence proof of the VI ADP algorithm. In Sect. 4, we make use of back propagation neural networks (BPNN) to realize the ADP algorithm. In Sect. 5, two examples are given to illustrate the effectiveness of the proposed scheme. Finally, the conclusion is given in Sect. 6.

    2 Problem formulation and conversion

    2.1 Problem formulation

Consider the linear stochastic DT system described as follows:

x_{k+1} = A x_k + B u_k + (C x_k + D u_k) ω_k,  y_k = G x_k,  (1)

where x_k ∈ ℝ^n, u_k ∈ ℝ^m and y_k ∈ ℝ^p refer to the system state, control input and system output, respectively. The initial state of system (1) is x_0; A, C ∈ ℝ^{n×n} and B, D ∈ ℝ^{n×m} are given constant matrices. The one-dimensional stochastic disturbance sequence ω_k (k = 0, 1, 2, …, ω_0 = 0) is defined on the given probability space (Ω, F, P), which is a measure space with total measure equal to 1, that is, P(Ω) = 1. Moreover, Ω, F and P are the sample space, the set of events and the probability measure, respectively. The stochastic sequence is assumed to meet the following condition:

where F_k = σ{ω_k | k = 0, 1, 2, …} refers to the σ-algebra generated by ω_k, x_0 is independent of ω_k, k = 0, 1, 2, …, and E(·) denotes the mathematical expectation.

The tracking error is described by

e_k = y_k − r_k,  (2)

where r_k is the reference trajectory.

Assumption 1 The reference trajectory for the SLQ optimal tracking problem is generated by the command generator

r_{k+1} = F r_k.  (3)

A cost function is essential to measure optimality in the SLQ optimal tracking problem. Therefore, the quadratic cost function to be optimized is denoted as

J(e_0, u) = E [ Σ_{k=0}^{∞} (e_k^T Q e_k + u_k^T R u_k) ],  (4)

where Q and R are a positive semidefinite symmetric matrix and a positive definite symmetric matrix, respectively.

The cost function (4) can usually be used only when F is Hurwitz. However, by adding a discount factor to (4), we can tackle the SLQ tracking control problem even for cases where the command generator dynamics F is not Hurwitz. Consider the discounted cost function as follows:

J(e_0, u) = E [ Σ_{k=0}^{∞} γ^k (e_k^T Q e_k + u_k^T R u_k) ],  (5)

where the discount factor satisfies 0 < γ ≤ 1. It is worth mentioning that γ = 1 can only be used when F in (3) is Hurwitz.
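
To make the discounted objective concrete, the sketch below evaluates a truncated version of the cost along one sample path; the helper name `discounted_cost` and the finite truncation horizon are ours, not the paper's:

```python
import numpy as np

def discounted_cost(e_traj, u_traj, Q, R, gamma):
    """Discounted quadratic cost along one sample path,
    J = sum_k gamma^k (e_k' Q e_k + u_k' R u_k), truncated at the
    trajectory length (a finite-horizon surrogate for the infinite sum)."""
    J = 0.0
    for k, (e, u) in enumerate(zip(e_traj, u_traj)):
        J += gamma**k * (e @ Q @ e + u @ R @ u)
    return J
```

Averaging this quantity over many independent noise realizations approximates the expectation in the cost function.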

Considering that F is not Hurwitz in this paper, the mean-square definition in terms of x_k in [24] is no longer suitable. Thus we provide some new definitions.

Definition 1 u_k is said to be mean-square stabilizing at e_0 if there exists a linear feedback form of u_k such that, for every initial e_0, system (2) satisfies

lim_{k→∞} E(e_k^T e_k) = 0.

Definition 2 System (2) with a mean-square stabilizing feedback control is said to be mean-square stabilizable.

Definition 3 u_k is called admissible if it satisfies the following three conditions: first, it is an F_k-adapted and measurable stochastic process; second, it is mean-square stabilizing; third, it enables the cost function to reach its minimum value. All admissible controls are gathered in a set U_ad.

The goal of the SLQ optimal tracking control problem is to seek an admissible control which not only minimizes the cost function (5) but also stabilizes system (2) for each initial state e_0, namely

To achieve the goal above, an augmented system including the system dynamics (1) and the reference trajectory dynamics (3) is constructed first as follows:

Based on (7), cost function (5) can be further denoted as

where Q_1 = [G −I]^T Q [G −I].
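
The augmented construction can be sketched as follows. The block layout of the augmented matrices and the names `A1`, `B1`, `C1`, `D1` are our assumptions (the paper's Eq. (7) is not reproduced above), while `Q1 = [G −I]^T Q [G −I]` follows the text:

```python
import numpy as np

def build_augmented(A, B, C, D, F, G, Q):
    """Stack plant state and reference into X_k = [x_k; r_k] and build
    the augmented matrices (hypothetical block layout)."""
    n, m = B.shape
    p = F.shape[0]
    A1 = np.block([[A, np.zeros((n, p))],
                   [np.zeros((p, n)), F]])          # drift of [x; r]
    B1 = np.vstack([B, np.zeros((p, m))])           # control enters x only
    # Multiplicative noise acts only on the plant block (assumption)
    C1 = np.block([[C, np.zeros((n, p))],
                   [np.zeros((p, n)), np.zeros((p, p))]])
    D1 = np.vstack([D, np.zeros((p, m))])
    GI = np.hstack([G, -np.eye(G.shape[0])])
    Q1 = GI.T @ Q @ GI                               # Q1 = [G  -I]^T Q [G  -I]
    return A1, B1, C1, D1, Q1
```

With this layout, the tracking error weight e_k^T Q e_k becomes the augmented quadratic X_k^T Q1 X_k.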

Then, the optimal tracking control with linear feedback form is given by

u_k = K X_k,  (9)

where the constant matrix K is regarded as a mean-square stabilizing control gain matrix if it satisfies Definition 1.

Therefore, the cost function (8) can be further transformed into the following equation with respect to K, namely

Thus, the goal of the SLQ optimal tracking control problem (6) can be further expressed as

Definition 4 The SLQ optimal tracking control problem is considered well-posed if

It is well known that there is an equivalent relationship between the feasibility of the SLQ optimal control problem and the solvability of the SAE. Next, it is shown that the SLQ optimal tracking problem is well posed with the help of the augmented SAE. Therefore, we provide the following lemma first.

Lemma 1 The SLQ optimal tracking control problem is called well posed if there exists an admissible control u_k = K X_k ∈ U_ad and the following related value function:

where the symmetric matrix P meets the following augmented SAE:

Then, the following assumptions are made to ensure the existence of an admissible control.

Assumption 2 The tracking error system (2) is mean-square stabilizable.

Assumption 3 The augmented system (7) is controllable.

    2.2 Problem conversion

It is well known that the ADP algorithm has achieved great success in deterministic OTC designs [3,4,13–18], which inspires us to solve the SLQ optimal tracking problem by transforming the stochastic system into a deterministic one.

Accordingly, the cost function (10) is rewritten in a deterministic form

Remark 1 The deterministic system (20) is independent of the stochastic disturbance ω_k and is decided only by the initial state Z_0 and the control gain matrix K, which creates favorable conditions for applying the ADP algorithm.
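
A minimal sketch of such a system transformation, assuming the deterministic state is the second moment Z_k = E(X_k X_k^T); the paper's exact definition of Z_k is in its Eq. (19), which is not reproduced here:

```python
import numpy as np

def moment_step(Z, K, A1, B1, C1, D1):
    """One step of the deterministic second-moment recursion under the
    closed loop u_k = K X_k (assumed transformation):
        Z_{k+1} = (A1 + B1 K) Z_k (A1 + B1 K)^T
                + (C1 + D1 K) Z_k (C1 + D1 K)^T.
    The expected stage cost then becomes the deterministic quantity
    E(X_k^T Q1 X_k) = trace(Q1 Z_k), so the noise disappears."""
    Acl = A1 + B1 @ K   # closed-loop drift
    Ccl = C1 + D1 @ K   # closed-loop noise channel
    return Acl @ Z @ Acl.T + Ccl @ Z @ Ccl.T
```

The recursion depends only on Z_0 and K, matching the property stated in Remark 1.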

    3 ADP algorithm and convergence proof

In this section, we propose a value iteration ADP algorithm to obtain the optimal control for the SLQ optimal tracking problem. Thus we provide the formula of the optimal control and the related SAE first.

    3.1 The derivation of value iteration ADP algorithm

where P* satisfies the augmented SAE (12) and Z_k is the state of the deterministic system (19).

An essential condition for optimality is the first-order necessary condition. By calculating the derivative of the optimal value function (22) with respect to K, we have the following HJB equation:

From Lemma 2, the SLQ optimal tracking problem can be effectively dealt with via the solution of the augmented SAE. The difficulty is that the analytical solution of the augmented SAE is usually hard to calculate, and doing so requires full knowledge of the system dynamics. Unfortunately, it becomes impossible to solve the SAE when the system dynamics are totally unknown. To deal with this tricky SLQ optimal tracking problem with an unknown system, we provide a value iteration ADP scheme as follows.

Assume that the value function begins with the initial value V_0(·) = 0; then the initial control gain matrix K_0 can be calculated by

It is worth pointing out that i is the iteration index while k is the time index. Next, it is important to show the convergence proof of the proposed method, which iterates between (31) and (32).
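
Since (31) and (32) are not reproduced above, the sketch below shows a standard matrix-form value iteration for a discounted stochastic ARE of the assumed structure, starting from V_0 = 0 as in the text; the recursion and names are our assumptions, not the paper's exact formulas:

```python
import numpy as np

def vi_sae(A1, B1, C1, D1, Q1, R, gamma, iters=500):
    """Value iteration for an assumed discounted stochastic ARE
        P = Q1 + g*(A1'PA1 + C1'PC1) - L' M^{-1} L,
    with M = R + g*(B1'PB1 + D1'PD1) and L = g*(B1'PA1 + D1'PC1).
    Returns (P, K) with the gain K = -M^{-1} L."""
    n = A1.shape[0]
    P = np.zeros((n, n))                 # V_0 = 0
    K = np.zeros((B1.shape[1], n))
    for _ in range(iters):
        M = R + gamma * (B1.T @ P @ B1 + D1.T @ P @ D1)
        L = gamma * (B1.T @ P @ A1 + D1.T @ P @ C1)
        K = -np.linalg.solve(M, L)       # gain update step
        P = Q1 + gamma * (A1.T @ P @ A1 + C1.T @ P @ C1) + L.T @ K
    return P, K
```

In the noise-free, undiscounted case this reduces to ordinary LQR value iteration, which is a useful sanity check.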

    3.2 Convergence proof of value iterative ADP method

Before proving the convergence, some lemmas are provided first.

Lemma 3 Let the value function sequence {V_i} be denoted as in (32). Suppose that K is a mean-square stabilizing control gain matrix; then there exists a least upper bound such that 0 ≤ V_i(Z_k) ≤ V*(Z_k) ≤ Ω(Z_k), where the optimal value function V*(Z_k) is shown in (22).

Considering both (40) and (41), we come to the conclusion that

Theorem 2 Assume that the sequences {K_i} and {V_i} are denoted as in (31) and (32); then V_∞ = V* and K_∞ = K*, where K* is mean-square stabilizing.

Proof From the conclusion about the sequence {V_i} in Lemma 3, it follows that

According to the convergence proof, we know that during the process of value iteration based on the deterministic system Z_k, the proposed ADP algorithm leads to V_i → V* and K_i → K*. Since K* is mean-square stabilizing, for the stochastic system the tracking error between the output and the reference signal is mean-square stable, that is,

lim_{k→∞} E(e_k^T e_k) = 0.

    4 Realization of the iterative ADP scheme

We have proved that the value iteration ADP method converges to the optimal solution of the DT HJB equation. It is clear that the proposed method can be solved by iterating between (31) and (32). In this section, we consider how to realize the proposed scheme without knowing the system dynamics.

To achieve this, we apply three BPNNs: a model network for the unknown system dynamics, a critic network for value function approximation, and an action network for control gain matrix approximation. We assume that each BPNN is made up of an input layer (IL), a hidden layer (HL) and an output layer (OL). Besides, the number of neurons in the HL is n, the weighting matrix between the IL and the HL is ψ, while ζ denotes the weighting matrix between the HL and the OL. The output of the BPNN is expressed as

where vec(x) means the vectorization of the input matrix x and ρ(·) ∈ ℝ^n represents the bounded activation function, which is denoted as
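
A minimal sketch of this forward pass, taking the bounded activation ρ(·) to be tanh (the paper only states that ρ is bounded, so tanh is an assumption):

```python
import numpy as np

def bpnn_forward(x, psi, zeta):
    """Single-hidden-layer BPNN output: zeta^T rho(psi^T vec(x)),
    where vec(.) is column-stacking and rho is taken as tanh."""
    h = np.tanh(psi.T @ x.reshape(-1, order="F"))  # hidden activations
    return zeta.T @ h                              # linear output layer
```

The same forward structure is reused below for the model, critic and action networks; only the input/output dimensions differ.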

To deal with the unknown system dynamics, a model network is first designed to identify the unknown system. Then, based on the model network, the critic network and action network are employed to approximate the optimal value function and the control gain matrix. The whole structure diagram is shown in Fig. 1.

    Fig.1 Structure diagram of the iterative ADP algorithm

For the model network, we provide the initial state Z_k and the control gain matrix K; the output of the model network is

To achieve our purpose, the weighting matrices are updated using the gradient descent method:

where α_m denotes the learning rate and i is the iterative step in the updating process of the weighting matrices.
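
The gradient-descent update can be sketched for the model network as follows; the squared-error loss, batch averaging and the helper name `train_model_net` are our assumptions:

```python
import numpy as np

def train_model_net(X, Y, n_hidden=8, alpha_m=0.05, steps=500, seed=0):
    """Gradient-descent training of a one-hidden-layer tanh network on
    input/output pairs (X, Y), sketching the updates
        psi  <- psi  - alpha_m * dE/dpsi,
        zeta <- zeta - alpha_m * dE/dzeta,
    with the assumed loss E = 0.5 * ||Y_hat - Y||^2 averaged over samples."""
    rng = np.random.default_rng(seed)
    psi = rng.uniform(-1, 1, (X.shape[1], n_hidden))   # IL -> HL weights
    zeta = rng.uniform(-1, 1, (n_hidden, Y.shape[1]))  # HL -> OL weights
    for _ in range(steps):
        H = np.tanh(X @ psi)          # hidden activations
        E = H @ zeta - Y              # output error Y_hat - Y
        grad_zeta = H.T @ E / len(X)
        grad_psi = X.T @ ((E @ zeta.T) * (1 - H**2)) / len(X)
        zeta -= alpha_m * grad_zeta
        psi -= alpha_m * grad_psi
    return psi, zeta
```

The critic and action networks follow the same update pattern with their own learning rates α_c and α_a.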

When the training of the model network is finished, the weight matrices are kept fixed. Next, the critic network is designed for value function approximation based on the well-trained model network. By providing the input state Z_k, the output of the critic network is

where α_c > 0 denotes the learning rate of the critic network.

The action network aims to obtain the control gain matrix K; it regards Z_k as the input and its output is given by

where α_a > 0 refers to the learning rate of the action network.

The gradient descent method is a powerful way of seeking a local minimum of a function and finally converges where the gradient is zero.

    5 Simulation results

In this section, two simulation examples are performed to demonstrate the effectiveness of the proposed method.

    5.1 Example 1

Most existing research on tracking control for stochastic systems is limited to either state- or control-dependent multiplicative noises. In fact, it is more common that both of them exist in the SLQ optimal tracking problem. Next, considering the following linear DT stochastic system with both control- and state-dependent multiplicative noises, the one-dimensional-output optimal tracking control problem is studied:

Set Q = 10, R = 1 and γ = 0.8 for the cost function (5), while the initial state for the augmented system (19) is chosen as

The structures of the three BPNNs, including the model network, critic network and action network, are selected as 12-8-9, 9-8-1 and 9-8-3, respectively. Moreover, the initial values of the weight matrices in the three BPNNs are all set stochastically in [−1, 1]. To start with, set the learning rate α_m = 0.05; then we train the model network for 500 iterative steps with 1000 sample data. Next, we perform the ADP algorithm based on the well-trained model network. The action network and the critic network are trained for 300 iterative steps with 500 inner training iterations each, with the learning rates α_c and α_a both selected as 0.01.

The trajectory of the value function is depicted in Fig. 2, which reveals that the value function is a nondecreasing sequence in the iteration process. Thus the effectiveness of Lemma 4 is verified.

    Fig.2 Convergence of the value function during the learning process

In addition, Fig. 3 describes the curves of the control gain matrix acquired by the iterative ADP algorithm, in which the three components of the control gain matrix finally converge to fixed values. Furthermore, by defining ||K − K*|| = norm(K − K*), we contrast the K obtained by the ADP algorithm with the optimal solution K* from the SAE (26). Figure 4 shows that ||K − K*|| tends to zero finally, which indicates that the ADP algorithm converges very closely to the optimal tracking controller and demonstrates the effectiveness of the ADP algorithm.

    Fig.3 Curves of control gain matrix

Fig.4 Convergence of control gain matrix K to optimal K*

The K obtained above is then applied to the original system (58). Fig. 5 displays that the mean square error turns to zero ultimately, which illustrates that system (2) is mean-square stabilizable and K is mean-square stabilizing. Mean-square stabilization is a statistical concept used to describe the stability of a stochastic system. Further, we describe the system output in the statistical sense based on the mathematical expectation. As shown in Fig. 6, the expectation of the system output E(y) can track the reference signal effectively, which further proves the effectiveness of the proposed ADP algorithm.

    Fig.5 Curve of mean square errors

    Fig.6 Curves of expectation of output E(y) and reference signal r
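
The statistics plotted in Figs. 5 and 6 can be estimated by Monte Carlo averaging over sample paths; the sketch below assumes the multiplicative-noise form of the dynamics from Sect. 2, since the concrete system (58) is not reproduced here:

```python
import numpy as np

def mean_square_error_curve(A, B, C, D, K, F, G, x0, r0,
                            steps=50, runs=2000, seed=0):
    """Monte-Carlo estimate of E(e_k^T e_k) under u_k = K [x_k; r_k],
    assuming the dynamics
        x_{k+1} = A x_k + B u_k + (C x_k + D u_k) w_k,  r_{k+1} = F r_k,
    with e_k = G x_k - r_k and standard-normal noise w_k (assumption)."""
    rng = np.random.default_rng(seed)
    mse = np.zeros(steps)
    for _ in range(runs):
        x, r = x0.astype(float), r0.astype(float)
        for k in range(steps):
            e = G @ x - r
            mse[k] += float(e @ e) / runs      # running average over paths
            u = K @ np.concatenate([x, r])
            w = rng.standard_normal()
            x = A @ x + B @ u + (C @ x + D @ u) * w
            r = F @ r
    return mse
```

Averaging the output y over the same runs gives the E(y) curve compared against the reference signal.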

    5.2 Example 2

In this section, a more complex situation is considered, in which the two-dimensional-output optimal tracking control problem is studied. The linear DT stochastic system with both control- and state-dependent multiplicative noises is described by

The three networks, namely the model network, critic network and action network, are all established by BPNNs with structures of 20-8-16, 16-8-1 and 16-8-4, respectively. Moreover, the weight matrices in the three networks are initialized randomly in [−1, 1]. We first train the model network for 500 iterative steps with 1000 sample data using α_m = 0.01. Furthermore, to realize the ADP algorithm, the action network and the critic network are iterated for 200 steps with 1000 inner training iterations each, using α_c = α_a = 0.001.

Based on the simulation results, it can be seen in Fig. 7 that the value function is monotonically nondecreasing, which further proves the correctness of Lemma 4.

    Fig.7 Convergence of the value function during the learning process

Besides, as shown in Fig. 8, the four components of the control gain matrix K calculated by the ADP algorithm finally converge. Then K and the optimal K*, obtained by the ADP algorithm and the analytical algorithm respectively, are contrasted in Fig. 9, where we can see that ||K − K*|| becomes zero as the time steps increase. Thus it can be concluded that K converges to K* and the ADP algorithm is feasible.

    Fig.8 Trajectory of control gain matrix

Fig.9 Convergence of control gain matrix to optimal K*

Then, the obtained control gain matrix K is applied to the stochastic system (60). Figure 10 shows that the mean square errors finally become zero, which illustrates that K is mean-square stabilizing. Then we consider the system output in the statistical sense based on the mathematical expectation. From Figs. 11 and 12, it is clear that E(y1) and E(y2) can effectively track the reference signals r1 and r2, respectively; thus the ADP algorithm is valid.

    Fig.10 Curves of mean square errors

    Fig.11 Curves of expectation of output 1 E(y1) and reference signal r1

    Fig.12 Curves of expectation of output 2 E(y2) and reference signal r2

    6 Conclusions

This paper deals with the optimal tracking control problem for stochastic systems with unknown models. To obtain the optimal control strategy for this problem, a value iterative ADP algorithm is proposed. We first use BPNNs to rebuild the model via a data-driven technique. Then, based on the well-trained model, the cost function and the control gain matrix are made to approach their optimal values during the iterative process of the proposed method. Ultimately, two simulation examples are implemented to verify the effectiveness of the proposed algorithm.

Acknowledgements This work was supported by the National Natural Science Foundation of China (No. 61873248), the Hubei Provincial Natural Science Foundation of China (Nos. 2017CFA030, 2015CFA010), and the 111 Project (No. B17040).
