
    Neural-network-based stochastic linear quadratic optimal tracking control scheme for unknown discrete-time systems using adaptive dynamic programming

    2021-10-13
    Control Theory and Technology, 2021, Issue 3

    Xin Chen·Fang Wang

    Abstract In this paper, a stochastic linear quadratic optimal tracking scheme is proposed for unknown linear discrete-time (DT) systems based on an adaptive dynamic programming (ADP) algorithm. First, an augmented system composed of the original system and the command generator is constructed, and an augmented stochastic algebraic equation is derived from it. Next, to obtain the optimal control strategy, the stochastic case is converted into a deterministic one by system transformation, and an ADP algorithm is proposed together with a convergence analysis. To realize the ADP algorithm, three back propagation neural networks, namely a model network, a critic network and an action network, are devised to approximate the unknown system model, the optimal value function and the optimal control strategy, respectively. Finally, the obtained optimal control strategy is applied to the original stochastic system, and two simulations are provided to demonstrate the effectiveness of the proposed algorithm.

    Keywords Stochastic system · Optimal tracking control · Adaptive dynamic programming · Neural networks

    1 Introduction

    As is well known, optimal tracking control (OTC) plays a significant role in the control field and is developing fast in both theory [1–4] and applications [5–7]. The aim of OTC is to design a controller that enables the output to track a reference trajectory by minimizing a predefined performance index. However, traditional OTC approaches, such as feedback linearization [1] and plant inversion [2], usually involve complex mathematical analysis and have trouble controlling highly nonlinear plants. As for the linear quadratic tracking (LQT) problem, solutions can be obtained by solving an algebraic Riccati equation (ARE) for the feedback term and a noncausal difference equation for the feedforward term [8]. Nevertheless, it is worth pointing out that the method mentioned above requires a priori system dynamics. Therefore, it challenges us to deal with optimal tracking control problems with completely unknown system information.

    The key point of OTC is to solve the nonlinear Hamilton–Jacobi–Bellman (HJB) equation, which is too complex to admit an analytical solution. Though dynamic programming (DP) is an effective method for solving the HJB equation, it is often computationally untenable due to the “curse of dimensionality” [9]. To approximate solutions of the HJB equation, adaptive dynamic programming (ADP) algorithms have been extensively employed and developed. Value iteration (VI) [10] and policy iteration (PI) [11] pave the way for the realization of ADP algorithms. To cope with unknown systems, researchers try to rebuild the model based on data-driven techniques [12]. By using input-output data, data-driven models, such as Markov models, neural network (NN) models and others, can replace the system dynamics with an input-output mapping. For discrete-time (DT) systems, ADP algorithms relying on NN-based data-driven models have been proposed to deal with OTC problems [3,13]. As for continuous-time (CT) systems, a synchronous PI algorithm is applied to tackle OTC with unknown dynamics via rebuilding the system model [14]. However, model reconstruction methods [3,13–15] may be limited by modeling accuracy. To avoid this, a simultaneous policy iteration (SPI) algorithm is proposed to deal with the optimal control problem for partially unknown nonlinear systems [16], and the authors of [17] further extend the SPI algorithm to optimal control problems for completely unknown nonlinear systems based on the least squares method and the Monte Carlo integration technique. Also, the authors of [18] proposed a PI algorithm and a VI algorithm to solve the LQT ARE online depending only on measured input, output, and reference trajectory data. Besides, a Q-learning algorithm is proposed to obtain the optimal control by solving an augmented ARE, relying on neither the system dynamics nor the command generator dynamics [4].

    Note that the aforementioned ADP-based schemes provide various approaches for the OTC problem; however, only the noise-free cases are taken into consideration. In fact, an intrinsic nonlinear characteristic arises in LQT when the original system is subjected to multiplicative noises, so that standard tools for LQT cannot be applied directly. Although traditional adaptive control methods can guarantee good tracking performance for stochastic systems, the optimality aspect is usually ignored [19].

    As we know, the stochastic linear quadratic (SLQ) optimal control problem is complicated due to the existence of multiplicative noises, but there is an equivalent relationship between the feasibility of the SLQ optimal control problem and the solvability of the stochastic algebraic equation (SAE) [20]. Moreover, with the help of linear matrix inequalities [21], semidefinite programming [22], and the Lagrange multiplier theorem [23], solving the SLQ optimal control problem becomes easier. Nevertheless, the aforementioned schemes [20–23] work under the prerequisite that the system dynamics is completely known. To overcome the difficulty of an unknown model, the authors of [24] proposed an ADP algorithm to solve the SLQ optimal control problem based on three NN models. Moreover, the authors of [25] adopted a Q-learning algorithm to settle the SLQ optimal control problem for model-free DT systems, and the authors of [26] investigated a non-model-based ADP algorithm to address the optimal control problem for CT stochastic systems influenced by multiplicative noises.

    To the best of our knowledge, there exist many ADP-based SLQ optimal control schemes, while SLQ optimal tracking control has received little attention. The SLQ optimal tracking control problem was investigated in [27,28]; however, only control-dependent noise was discussed and the system dynamics had to be completely known in advance. When the model is unknown, there may exist huge challenges in SLQ optimal tracking problems for stochastic systems with multiplicative noises. Besides, a non-stable command generator is taken into account in this paper, which makes the traditional mean-square concepts in terms of x_k in [24,25] no longer suitable, so the stability of the system cannot be guaranteed by them.

    Facing the aforementioned difficulties, we propose an SLQ optimal tracking scheme for unknown models using the ADP algorithm. The main contributions can be summarized as follows:

    (1) To solve the SLQ optimal tracking problem for unknown systems with multiplicative noises, an ADP algorithm is proposed in this paper, and a model-critic-action structure is introduced to obtain the optimal control strategy for stochastic systems whose dynamics are unknown.

    (2) To ensure the stability of the system, mean-square concepts with respect to e_k are newly defined, and a discount factor is introduced into the cost function; then an augmented SAE is derived to obtain the optimal control based on the augmented system.

    The rest of this paper is organized as follows. In Sect. 2, we give the problem formulation and conversion. In Sect. 3, we carry out the derivation and convergence proof of the VI ADP algorithm. In Sect. 4, we make use of back propagation neural networks (BPNN) to realize the ADP algorithm. In Sect. 5, two examples are given to illustrate the effectiveness of the proposed scheme. Finally, the conclusion is given in Sect. 6.

    2 Problem formulation and conversion

    2.1 Problem formulation

    Consider the linear stochastic DT system described as follows:

    where x_k ∈ ℝ^n, u_k ∈ ℝ^m and y_k ∈ ℝ^p refer to the system state, control input and system output, respectively. The initial state of system (1) is x_0; A, C ∈ ℝ^{n×n} and B, D ∈ ℝ^{n×m} are given constant matrices. The one-dimensional stochastic disturbance sequence ω_k (k = 0, 1, 2, …, ω_0 = 0) is defined on the given probability space (Ω, F, P), which is a measure space with total measure equal to 1, that is, P(Ω) = 1. Moreover, Ω, F and P denote the sample space, the set of events and the probability measure, respectively. The stochastic sequence is assumed to satisfy the following condition:

    where F_k = σ{ω_k | k = 0, 1, 2, …} refers to the σ-algebra generated by ω_k, x_0 is independent of ω_k, k = 0, 1, 2, …, and E(·) denotes the mathematical expectation.
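    As an illustration, this kind of system can be simulated directly. The sketch below assumes the standard multiplicative-noise form x_{k+1} = A x_k + B u_k + (C x_k + D u_k) ω_k with output y_k = G x_k; since equation (1) is not reproduced here, this form and all names are assumptions, not the paper's exact model.

```python
import numpy as np

def simulate(A, B, C, D, G, x0, controller, steps, rng):
    """Roll out one trajectory of the assumed multiplicative-noise system
    x_{k+1} = A x_k + B u_k + (C x_k + D u_k) w_k,  y_k = G x_k,
    with w_k i.i.d., E[w_k] = 0, E[w_k^2] = 1 (a sketch, not the paper's
    exact equation (1))."""
    x = np.asarray(x0, dtype=float)
    xs, ys = [x.copy()], [G @ x]
    for k in range(steps):
        u = controller(x, k)
        w = rng.standard_normal()          # one-dimensional disturbance
        x = A @ x + B @ u + (C @ x + D @ u) * w
        xs.append(x.copy())
        ys.append(G @ x)
    return np.array(xs), np.array(ys)
```

    Averaging many such rollouts gives the expectations E(y) plotted later in the simulation section.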

    The tracking error is described by

    where r_k is the reference trajectory.

    Assumption 1 The reference trajectory for the SLQ optimal tracking problem is generated by the command generator

    A cost function is essential to measure optimality in the SLQ optimal tracking problem. Therefore, the quadratic cost function to be optimized is denoted as

    where Q and R are a positive semidefinite symmetric matrix and a positive definite symmetric matrix, respectively.

    The cost function (4) can usually be used only when F is Hurwitz. However, by adding a discount factor to (4), we can tackle the SLQ tracking control problem even in the cases where the command generator dynamics F is not Hurwitz. Consider the discounted cost function as follows:

    where the discount factor satisfies 0 < γ ≤ 1. It is worth mentioning that γ = 1 can only be used when F in (3) is Hurwitz.
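    The discounted cost can be estimated along a single noise realization and then averaged over many runs to approximate the expectation in (5). A minimal sketch, assuming the quadratic stage cost e_k^T Q e_k + u_k^T R u_k and a scalar discount gamma; the function name is illustrative.

```python
import numpy as np

def discounted_cost(e_traj, u_traj, Q, R, gamma):
    """Discounted quadratic cost along one realization:
    sum_k gamma^k (e_k' Q e_k + u_k' R u_k), with 0 < gamma <= 1.
    Averaging this over many noise realizations approximates the
    expected cost; an illustrative sketch."""
    J = 0.0
    for k, (e, u) in enumerate(zip(e_traj, u_traj)):
        J += gamma**k * (e @ Q @ e + u @ R @ u)
    return J
```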

    Considering that F is not Hurwitz in this paper, the mean-square definition in terms of x_k in [24] is no longer suitable. Thus we provide some new definitions.

    Definition 1 u_k is considered to be mean-square stabilizing at e_0 if there exists a linear feedback form of u_k such that, for every initial e_0, system (2) satisfies

    Definition 2 System (3) with a mean-square stabilizing feedback control is considered to be mean-square stabilizable.

    Definition 3 u_k is called admissible if it satisfies the following three conditions: first, it is an F_k-adapted and measurable stochastic process; second, it is mean-square stabilizing; third, it enables the cost function to reach its minimum value. All admissible controls are gathered in a set U_ad.

    The goal of the SLQ optimal tracking control problem is to seek an admissible control which not only minimizes the cost function (5) but also stabilizes system (2) for each initial state e_0, namely

    To achieve the goal above, an augmented system including the system dynamics (1) and the reference trajectory dynamics (3) is first constructed as follows:

    Based on (7), cost function (5) can be further denoted as

    where Q_1 = [G -I]^T Q [G -I].
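    The augmentation step can be sketched in code. Assuming the augmented state X_k = [x_k; r_k], a drift block-diagonal in (A, F), an input acting only on x, and the tracking error e_k = G x_k - r_k, the weight Q_1 = [G -I]^T Q [G -I] satisfies e_k^T Q e_k = X_k^T Q_1 X_k. The helper below is an illustration under these assumptions, not the paper's exact construction of (7).

```python
import numpy as np

def augment(A, B, F, G, Q):
    """Build augmented-state matrices for X_k = [x_k; r_k] and the weight
    Q1 = [G  -I]^T Q [G  -I], so that e_k' Q e_k = X_k' Q1 X_k with
    e_k = G x_k - r_k (assumed structure; a sketch)."""
    n, p = A.shape[0], F.shape[0]
    A_aug = np.block([[A, np.zeros((n, p))],
                      [np.zeros((p, n)), F]])          # drift of [x; r]
    B_aug = np.vstack([B, np.zeros((p, B.shape[1]))])  # input acts on x only
    GI = np.hstack([G, -np.eye(p)])
    Q1 = GI.T @ Q @ GI
    return A_aug, B_aug, Q1
```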

    Then, the optimal tracking control with linear feedback form is given by

    where the constant matrix K is regarded as a mean-square stabilizing control gain matrix if it satisfies Definition 1.

    Therefore, the cost function (8) can be further transformed into the following equation with respect to K, namely

    Thus, the goal of the SLQ optimal tracking control problem (6) can be further expressed as

    Definition 4 The SLQ optimal tracking control problem is considered well-posed if

    It is well known that there is an equivalent relationship between the feasibility of the SLQ optimal control problem and the solvability of the SAE. Next, it is shown that the SLQ optimal tracking problem is well-posed with the help of the augmented SAE. Therefore, we first provide the following lemma.

    Lemma 1 The SLQ optimal tracking control problem is called well-posed if there exists an admissible control u_k = K X_k ∈ U_ad and the following related value function:

    where the symmetric matrix P satisfies the following augmented SAE:

    Then, the following assumptions are made to ensure the existence of admissible controls.

    Assumption 2 The tracking error system (2) is mean-square stabilizable.

    Assumption 3 The augmented system (7) is controllable.

    2.2 Problem conversion

    It is well known that the ADP algorithm has achieved huge success in deterministic OTC designs [3,4,13–18], which inspires us to solve the SLQ optimal tracking problem by transforming the stochastic system into a deterministic one.

    Accordingly, the cost function (10) is rewritten in a deterministic form

    Remark 1 The deterministic system (20) is independent of the stochastic disturbance ω_k and is determined only by the initial state Z_0 and the control gain matrix K, which creates favorable conditions for applying the ADP algorithm.

    3 ADP algorithm and convergence proof

    In this section, we propose a value iteration ADP algorithm to obtain the optimal control for the SLQ optimal tracking problem. Thus we first provide the formula of the optimal control and the related SAE.

    3.1 The derivation of value iteration ADP algorithm

    where P* satisfies the augmented SAE (12) and Z_k is the state of the deterministic system (19).

    An essential condition for optimality is the first-order necessary condition. By calculating the derivative of the optimal value function (22) with respect to K, we have the following HJB equation:

    From Lemma 2, the SLQ optimal tracking problem can be effectively dealt with via the solution of the augmented SAE. The difficulty is that the analytical solution of the augmented SAE is usually hard to calculate, and doing so requires full knowledge of the system dynamics. Unfortunately, it becomes impossible to solve the SAE when the system dynamics are totally unknown. To deal with this tricky SLQ optimal tracking problem with an unknown system, we provide a value iteration ADP scheme as follows.

    Assume that the value function begins with initial V_0(·) = 0; then the initial control gain matrix K_0 can be calculated by

    It is worth pointing out that i is the iteration index while k is the time index. Next, it is important to show the convergence proof of the proposed method, which iterates between (31) and (32).
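    A noise-free analogue of the iteration between (31) and (32) can be sketched as follows. Assuming a deterministic augmented model (A, B) with weight Q1 and discount gamma, value iteration starts from P_0 = 0, improves the gain, and then updates the value matrix. The paper's stochastic recursion carries additional terms from the noise matrices C and D, which this illustration omits; all names here are assumptions.

```python
import numpy as np

def value_iteration(A, B, Q1, R, gamma, iters=200):
    """Deterministic value-iteration analogue: start from V_0 = 0 (P_0 = 0),
    alternate a gain-improvement step and a value-update step.
    A sketch of the noise-free part of the iteration only."""
    n = A.shape[0]
    P = np.zeros((n, n))
    K = np.zeros((B.shape[1], n))
    for _ in range(iters):
        # gain improvement: K_i = -gamma (R + gamma B'P B)^{-1} B'P A
        K = -gamma * np.linalg.solve(R + gamma * B.T @ P @ B, B.T @ P @ A)
        # value update: P_{i+1} = Q1 + K'R K + gamma (A + BK)'P (A + BK)
        Acl = A + B @ K
        P = Q1 + K.T @ R @ K + gamma * Acl.T @ P @ Acl
    return P, K
```

    At convergence P is a fixed point of the update, i.e. it satisfies the discounted Riccati-type equation that the gain step was derived from.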

    3.2 Convergence proof of value iterative ADP method

    Before proving the convergence, some lemmas are provided first.

    Lemma 3 Let the value function sequence {V_i} be defined in (32). Suppose that K is a mean-square stabilizing control gain matrix; then there exists a least upper bound satisfying 0 ≤ V_i(Z_k) ≤ V*(Z_k) ≤ Ω(Z_k), where the optimal value function V*(Z_k) is given in (22).

    Considering both (40) and (41), we come to the conclusion that

    Theorem 2 Assume that the sequences {K_i} and {V_i} are defined as in (31) and (32); then V_∞ = V* and K_∞ = K*, where K* is mean-square stabilizing.

    Proof From the conclusion about the sequence {V_i} in Lemma 3, it follows that

    According to the convergence proof, we know that during the value iteration based on the deterministic system Z_k, the proposed ADP algorithm ensures that K_i converges to K*. Since K* is mean-square stabilizing, for the stochastic system the tracking error between the output and the reference signal can be guaranteed to be mean-square stable, that is

    4 Realization of the iterative ADP scheme

    We have proved that the value iteration ADP method converges to the optimal solution of the DT HJB equation. It is clear that the proposed method can be solved by iterating between (31) and (32). In this section, we consider how to realize the proposed scheme without knowing the system dynamics.

    To achieve this, we apply three BPNNs: a model network for the unknown system dynamics, a critic network for value function approximation and an action network for control gain matrix approximation. We assume that each BPNN is made up of an input layer (IL), a hidden layer (HL) and an output layer (OL). Besides, the number of neurons in the HL is n, the weighting matrix between the IL and the HL is ψ, and ζ denotes the weighting matrix between the HL and the OL. The output of the BPNN is expressed as

    where vec(x) denotes the vectorization of the input matrix x and ρ(·) ∈ ℝ^n represents the bounded activation function, which is denoted as
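    The forward pass described above can be written compactly. A minimal sketch assuming tanh as the bounded activation ρ (the paper's exact ρ is not reproduced here) and column-wise vectorization for vec(·):

```python
import numpy as np

def bpnn_forward(x, psi, zeta):
    """Three-layer BPNN forward pass: out = zeta.T @ rho(psi.T @ vec(x)),
    with tanh standing in for the bounded activation rho (an assumption)."""
    v = np.ravel(x, order="F")    # vec(x): stack the input matrix column-wise
    h = np.tanh(psi.T @ v)        # hidden-layer output, bounded in (-1, 1)
    return zeta.T @ h             # linear output layer
```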

    To deal with the unknown system dynamics, a model network is first designed to identify the unknown system. Then, based on the model network, the critic network and the action network are employed to approximate the optimal value function and the control gain matrix. The whole structure diagram is shown in Fig. 1.

    Fig.1 Structure diagram of the iterative ADP algorithm

    For the model network, we provide the initial state Z_k and the control gain matrix K; the output of the model network is

    To achieve our purpose, the weighting matrices are updated using the gradient descent method:

    where α_m denotes the learning rate and i is the iterative step in the updating process of the weighting matrices.
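    The gradient descent update of the model network's weighting matrices can be sketched as follows, assuming a squared prediction error 0.5‖out − target‖² and the two-layer structure out = ζ^T tanh(ψ^T v); the function and variable names are illustrative, not the paper's notation.

```python
import numpy as np

def train_model_network(psi, zeta, inputs, targets, alpha_m, epochs):
    """Gradient-descent training sketch for the model network: minimize
    0.5 * ||out - target||^2 over psi (IL->HL) and zeta (HL->OL), where
    out = zeta.T @ tanh(psi.T @ v) and alpha_m is the learning rate."""
    for _ in range(epochs):
        for v, t in zip(inputs, targets):
            h = np.tanh(psi.T @ v)
            out = zeta.T @ h
            err = out - t                      # prediction error
            # chain rule: dE/dzeta = h err',  dE/dpsi = v (zeta err * (1-h^2))'
            grad_zeta = np.outer(h, err)
            delta_h = (zeta @ err) * (1.0 - h**2)
            grad_psi = np.outer(v, delta_h)
            zeta -= alpha_m * grad_zeta        # gradient descent step
            psi -= alpha_m * grad_psi
    return psi, zeta
```

    The critic and action networks are updated in the same gradient-descent fashion with learning rates α_c and α_a.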

    When the training of the model network succeeds, its weight matrices are kept fixed. Next, the critic network is designed for value function approximation based on the well-trained model network. Given the input state Z_k, the output of the critic network is

    where α_c > 0 is the learning rate for the critic network.

    The action network aims to obtain the control gain matrix K; it takes Z_k as input and its output is given by

    where α_a > 0 is the learning rate for the action network.

    The gradient descent method is a powerful way of seeking a local minimum of a function and finally converges where the gradient is zero.

    5 Simulation results

    In this section,two simulation examples are performed to demonstrate the effectiveness of the proposed method.

    5.1 Example 1

    Most existing research on tracking control for stochastic systems is limited to either state- or control-dependent multiplicative noises. In fact, it is more common that both of them exist in the SLQ optimal tracking problem. Next, considering the following linear DT stochastic system with both control- and state-dependent multiplicative noises, the one-dimensional output optimal tracking control problem is studied:

    Set Q = 10, R = 1 and γ = 0.8 for the cost function (5), while the initial state for the augmented system (19) is chosen as

    The structures of the three BPNNs, including the model network, critic network and action network, are selected as 12-8-9, 9-8-1 and 9-8-3, respectively. Moreover, the initial values of the weight matrices in the three BPNNs are all set randomly in [-1, 1]. To start with, we set the learning rate α_m = 0.05 and train the model network for 500 iterative steps with 1000 sample data. Next, we perform the ADP algorithm based on the well-trained model network. The action network and the critic network are trained for 300 iterative steps with 500 inner training iterations each, with the learning rates α_c and α_a both selected as 0.01.

    The trajectory of the value function is depicted in Fig. 2, which reveals that the value function is a nondecreasing sequence in the iteration process. Thus the effectiveness of Lemma 4 is verified.

    Fig.2 Convergence of the value function during the learning process

    In addition, Fig. 3 describes the curves of the control gain matrix acquired by the iterative ADP algorithm, in which the three components of the control gain matrix finally converge to fixed values. Furthermore, by defining ||K - K*|| = norm(K - K*), we contrast the K obtained by the ADP algorithm with the optimal solution K* from the SAE (26). Figure 4 shows that ||K - K*|| finally approaches zero, which indicates that the ADP algorithm converges very closely to the optimal tracking controller and demonstrates its effectiveness.

    Fig.3 Curves of control gain matrix

    Fig. 4 Convergence of control gain matrix K to the optimal K*

    The K obtained above is then applied to the original system (58). Fig. 5 displays that the mean square error ultimately decays to zero, which illustrates that system (2) is mean-square stabilizable and K is mean-square stabilizing. Mean-square stabilization is a statistical concept used to describe the stability of a stochastic system. Further, we describe the system output in a statistical sense based on the mathematical expectation. As shown in Fig. 6, the expectation of the system output E(y) can track the reference signal effectively, which further proves the effectiveness of the proposed ADP algorithm.
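    The mean square error statistic plotted here can be estimated by averaging e_k^T e_k over Monte Carlo runs of the closed-loop system. A small helper, with the array layout chosen for illustration (the paper's exact run count is not specified here):

```python
import numpy as np

def mean_square_error_curve(error_runs):
    """Estimate E[e_k' e_k] at each time step by averaging over
    Monte Carlo runs; error_runs has shape (num_runs, steps, p)."""
    runs = np.asarray(error_runs, dtype=float)
    return np.mean(np.sum(runs ** 2, axis=-1), axis=0)
```

    Mean-square stability then shows up as this curve decaying toward zero as k grows.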

    Fig.5 Curve of mean square errors

    Fig.6 Curves of expectation of output E(y) and reference signal r

    5.2 Example 2

    In this section, a more complex situation is considered, in which the two-dimensional output optimal tracking control problem is studied. The linear DT stochastic system with both control- and state-dependent multiplicative noises is described by

    The three networks, namely the model network, critic network and action network, are all established by BPNNs with structures of 20-8-16, 16-8-1 and 16-8-4, respectively. Moreover, the weight matrices in the three networks are initialized randomly in [-1, 1]. We first train the model network for 500 iterative steps with 1000 sample data using α_m = 0.01. Furthermore, to realize the ADP algorithm, the action network and the critic network are iterated for 200 steps with 1000 inner training iterations each, using α_c = α_a = 0.001.

    Based on the simulation results, it can be seen in Fig. 7 that the value function is monotonically nondecreasing, which further proves the correctness of Lemma 4.

    Fig.7 Convergence of the value function during the learning process

    Besides, as shown in Fig. 8, the four components of the control gain matrix K calculated by the ADP algorithm finally converge. Then K and the optimal K*, obtained by the ADP algorithm and the analytical algorithm respectively, are contrasted in Fig. 9, where we can see that ||K - K*|| becomes zero as the time steps increase. Thus it can be concluded that K converges to K* and the ADP algorithm is feasible.

    Fig.8 Trajectory of control gain matrix

    Fig. 9 Convergence of control gain matrix to the optimal K*

    Then, the obtained control gain matrix K is applied to the stochastic system (60). Figure 10 shows that the mean square errors finally become zero, which illustrates that K is mean-square stabilizing. Then we consider the system output in a statistical sense based on the mathematical expectation. From Figs. 11 and 12, it is clear that E(y1) and E(y2) can effectively track the reference signals r1 and r2 respectively; thus the ADP algorithm is valid.

    Fig.10 Curves of mean square errors

    Fig.11 Curves of expectation of output 1 E(y1) and reference signal r1

    Fig.12 Curves of expectation of output 2 E(y2) and reference signal r2

    6 Conclusions

    This paper deals with the optimal tracking control problem for stochastic systems with unknown models. To obtain the optimal control strategy for this problem, a value iteration ADP algorithm is proposed. We first use a BPNN to rebuild the model via a data-driven technique. Then, based on the well-trained model, the cost function and the control gain matrix approach their optimal values during the iterative process of the proposed method. Ultimately, two simulation examples are implemented to verify the effectiveness of the proposed algorithm.

    Acknowledgements This work was supported by the National Natural Science Foundation of China (No. 61873248), the Hubei Provincial Natural Science Foundation of China (Nos. 2017CFA030, 2015CFA010), and the 111 Project (No. B17040).
