
    Parallel Reinforcement Learning-Based Energy Efficiency Improvement for a Cyber-Physical System

IEEE/CAA Journal of Automatica Sinica, 2020, Issue 2

Teng Liu, Bin Tian, Yunfeng Ai, and Fei-Yue Wang

Abstract—As a complex and critical cyber-physical system (CPS), the hybrid electric powertrain is significant for mitigating air pollution and improving fuel economy. The energy management strategy (EMS) plays a key role in improving the energy efficiency of this CPS. This paper presents a novel bidirectional long short-term memory (LSTM) network based parallel reinforcement learning (PRL) approach to construct an EMS for a hybrid tracked vehicle (HTV). This method contains two levels. The high level first establishes a parallel system, which includes a real powertrain system and an artificial system. Then, the synthesized data from this parallel system is trained by a bidirectional LSTM network. The lower level determines the optimal EMS using the trained action state function in the model-free reinforcement learning (RL) framework. PRL is a fully data-driven and learning-enabled approach that does not depend on any prediction or predefined rules. Finally, real vehicle testing is implemented, and the relevant experimental data is collected and calibrated. Experimental results validate that the proposed EMS can achieve considerable energy efficiency improvement compared with the conventional RL approach and deep RL.

    I. Introduction

CYBER-physical systems (CPSs) are defined as systems wherein the physical components are deeply intertwined with the software components to exhibit various and distinct behavioral patterns [1]. Recently, increased performance demands and complex usage patterns have accelerated the development and research of CPSs [2]–[4]. As a typical application of CPS in green transportation, hybrid electric vehicles (HEVs) show great potential to reduce energy consumption and air pollution [5], [6]. In such a system, the hybrid electric powertrain and driving environments constitute the physical resources, while communication and control data compose the cyber part of the system [7], [8]. Strong nonlinearities and uncertainties in the interactions between the cyber and physical resources increase the difficulty of control, management, and optimization of HEVs [9], [10]. In particular, energy management of HEVs is critical, and several challenges remain to be resolved, such as optimization, calculation time, and adaptability [11], [12].

Energy management strategies (EMSs) have been an active research topic for decades because they can achieve remarkable energy efficiency. Existing EMSs are generally classified into three categories: rule-based, optimization-based, and learning-based. Rule-based strategies depend on a set of predefined criteria without knowledge of real-world driving conditions [13], [14]. Binary control, as a typical example, is used to adjust the power split between the battery and engine when the state of charge (SOC) exceeds threshold values. When trip information is known a priori, many approaches have been applied to search for the optimal control strategies, such as dynamic programming (DP) [15], stochastic dynamic programming (SDP) [16], Pontryagin's minimum principle (PMP) [17], model predictive control (MPC) [18], and the equivalent consumption minimization strategy (ECMS) [19]. However, these strategies are usually inappropriate for varying driving environments [20]. Owing to the ultrafast development of computing capability, learning-based methods show great potential for learning control strategies from recorded historical driving data [21], [22]. This type of method needs to be developed further.

As a complex CPS, the hybrid electric powertrain still faces several issues in handling energy management problems. The first is data scarcity [23]. The controller needs to collect new data and learn new model parameters to derive different strategies for new driving conditions. The second is data inefficiency [24]. The large-dimensional actions and states of a complex CPS need to be calibrated and scheduled reasonably to guide the controller. The final one is universality. Adaptive and efficient control strategies need to be generated to accommodate dynamic real-world driving conditions.

    Fig. 1. Bidirectional LSTM network-based parallel reinforcement learning framework.

To address these difficulties, we develop a novel bidirectional long short-term memory (LSTM) network based parallel reinforcement learning (PRL) framework to construct an EMS for a hybrid tracked vehicle (HTV); see Fig. 1 for an illustration. This framework involves two levels. In the high-level structure, an artificial vehicle powertrain system is built analogous to the real vehicle to constitute the parallel powertrain system. The large volume of synthesized data from this parallel system is utilized to relieve the data scarcity problem. A bidirectional LSTM network is proposed to represent the dependence between multiple actions and states. This network can capture more details of the interactions between multi-action embeddings to solve the data inefficiency problem. In the lower-level skeleton, a model-free reinforcement learning (RL) algorithm is finally used to compute the adaptive control strategy based on the trained data.

The contributions of this work are threefold: 1) A parallel system of the HTV is constructed to generate a large volume of synthesized data based on the limited real historical data; 2) A bidirectional LSTM network is proposed to train on the available data to model the action state function effectively; 3) A model-free RL technique is applied to derive the adaptive EMS to accommodate different driving conditions. Experimental results illustrate that the proposed EMS can achieve considerable energy efficiency improvement compared with the conventional RL approach and deep RL.

The remainder of this paper is organized as follows. Section II describes the high-level architecture of the deep neural network for data estimation and the bidirectional LSTM network framework. Section III describes the modeling of the hybrid electric powertrain, wherein the optimal control problem is constructed, and the structure of the lower-level model-free RL algorithm is also introduced. In Section IV, the data collection in real vehicle tests and the synthesized data processing are elaborated, and experimental results comparing three control strategies are presented. Key takeaways are summarized in Section V.

    II. Bidirectional LSTM Framework

The bidirectional LSTM network framework for action state function estimation is introduced in this section. First, a multilayer deep neural network is constructed that takes the powertrain states and actions as inputs. The states are the battery SOC and generator speed, and the actions are the engine torque, power demand, and motor speed. Based on this network, the bidirectional LSTM theory is formulated to approximate the action value function. The detailed components are described as follows.

    A. Multilayer Neural Network

A deep neural network is a logical-mathematical model that seeks to simulate the behavior and function of a biological neuron [25]. Three layers, named the input layer, hidden layer, and output layer, are included in this network; see Fig. 2(a) for an illustration. The input vector z = [z_1, z_2, …, z_N] is weighted by the elements ω_1, ω_2, …, ω_N, then summed with a bias b and passed through an activation function f to generate the neuron output as follows:

    Fig. 2. Multilayer neural network for states and actions training. (a) Deep neural network construction; (b) LSTM memory block.

where x denotes the net input, y is the neuron output, N is the total number of inputs, and z_j is the jth input.
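The single-neuron equation referenced above does not survive extraction; a standard form consistent with the variable definitions here is, as a reconstruction:

$$ x = \sum_{j=1}^{N} \omega_j z_j + b, \qquad y = f(x) $$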

The log-sigmoid activation function is adopted in this paper, and thus the output of the overall network is expressed as

where f_2 and f_1 represent the activation functions of the hidden layer and output layer, respectively, and S is the number of neurons in the hidden layer. The first-layer weights connect the jth input to the ith neuron in the hidden layer, the second-layer weights connect the ith hidden neuron to the output-layer neuron, and each hidden neuron and the output neuron carry their own bias terms.
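The overall network output equation is lost in extraction; a typical two-layer form matching the description above, with ω¹_{ij}, ω²_{i}, b¹_{i}, and b² denoting the weights and biases described (this notation is introduced here, not taken from the original), would be:

$$ y = f_1\!\left(\sum_{i=1}^{S} \omega^{2}_{i}\, f_2\!\left(\sum_{j=1}^{N} \omega^{1}_{ij}\, z_j + b^{1}_{i}\right) + b^{2}\right) $$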

    B. Forward Pass of LSTM

A memory block is the key constituent of an LSTM network. For each block, three adaptive and multiplicative gating units are shared by multiple cells, as shown in Fig. 2(b). Furthermore, a recurrently self-connected linear unit called the constant error carousel (CEC) is the core of each block. The CEC can provide short-term memory storage for extended time periods by recirculating activation and error signals indefinitely. The three gating units can be trained to recognize, store, and read information from the memory block. All the cells are combined into the block to share the same gates and reduce the number of adaptive parameters [26].

In this paper, the LSTM network operates in two directions, and the time steps are discretized as t = 0, 1, 2, … . The two passes are called the forward pass and the backward pass, which correspond to updating the units' activations and calculating the error signals for the weights, respectively. The notation in the remainder of this paper is defined as follows: j is the index of the memory block, v denotes the position of a cell within block j, and thus c_j^v means the vth cell in the jth memory block. y_c is the output of the memory cell, which is calculated from the cell state s_c, cell input z_c, input gate z_in, output gate z_out, and forget gate z_θ. ω_lm is the weight connecting units l and m. The components of one cell are described as follows.

1) Input: In the forward pass, the cell input is first computed as

This variable is passed through the input squashing function g to generate the new cell state next.

The input gate activation y_in is derived by applying a logistic sigmoid squashing function f_in with range [0, 1] to the gate's net input z_in:

2) Cell State: The memory cell state s_c is initialized to zero at t = 0 and then accumulates based on the cell input and the discount factor of the forget gate. First, the forget gate activation is defined as

where f_θ represents a logistic sigmoid function with range from 0 to 1. Then, the new cell state is derived as follows:

What information to store in the memory block is decided by the input gate, and when to erase outdated information is determined by the forget gate. In this way, the memory block retains fresh data and the cell state cannot grow without bound.
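The cell-state update referenced above does not survive extraction; the standard LSTM form consistent with this description, using the notation defined in this subsection (y_θ and y_in are the forget and input gate activations), is:

$$ s_c(t) = y_{\theta}(t)\, s_c(t-1) + y_{in}(t)\, g\big(z_c(t)\big), \qquad s_c(0) = 0 $$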

3) Output: Read access to the stored information is controlled by the output gate, which multiplies the output from the CEC. The output gate activation is calculated by applying a squashing function with range [0, 1] to the gate's net input

Then, the cell output y_c is given by the cell state and the output gate activation as follows:

Finally, the activation of the output unit k is given by

where m ranges over all source units, and f_k is the output squashing function.
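The forward-pass equations above follow the classical gated LSTM formulation. For illustration only, and not the authors' implementation, a minimal NumPy sketch of one forward step for a single memory block is given below; the weight layout, the use of tanh for the squashing functions g and h, and the function names are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_forward(x, y_prev, s_prev, W, b):
    """One forward step of an LSTM memory block.

    x: current input vector; y_prev: previous cell output; s_prev: previous
    cell state. W and b are dicts of (hypothetical) weights and biases for
    the cell input and the input, forget, and output gates, keyed by
    'c', 'in', 'f', 'out'.
    """
    src = np.concatenate([x, y_prev])            # shared source units
    z_c, z_in = W['c'] @ src + b['c'], W['in'] @ src + b['in']
    z_f, z_out = W['f'] @ src + b['f'], W['out'] @ src + b['out']

    y_in, y_f, y_out = sigmoid(z_in), sigmoid(z_f), sigmoid(z_out)
    s = y_f * s_prev + y_in * np.tanh(z_c)       # cell state update (CEC)
    y = y_out * np.tanh(s)                       # cell output y_c
    return y, s
```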

    C. Backward Pass of LSTM

LSTM's backward pass is a truncated version of real-time recurrent learning (RTRL) for the weights to the cell input, input gates, and forget gates. It also efficiently fuses the backpropagation (BP) of the error at the output units and output gates.

1) Output Units and Gates: Based on the target t_k, the squared-error objective function is given by

where e_k is the externally injected error. A gradient descent algorithm is used to minimize the objective function. The weight ω_lm is updated by the variation Δω_lm, which is calculated as the negative gradient of E multiplied by the learning rate α. Hence, the standard BP weight changes of the output units are
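The objective function and the generic update rule referenced here are lost in extraction; consistent with the definitions above, a reconstruction reads:

$$ E(t) = \frac{1}{2}\sum_{k} e_k(t)^{2}, \qquad e_k(t) = t_k(t) - y^{k}(t), \qquad \Delta\omega_{lm}(t) = -\,\alpha\, \frac{\partial E(t)}{\partial \omega_{lm}} $$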

Standard BP is also utilized to compute the weight changes for the connections from the source units m to the output gate:

2) Truncated RTRL Partials: The partial derivatives in RTRL need to be propagated forward in time. These partials for the weights at the cell, input gate (in), and forget gate (θ) are updated as follows:

where at t = 0 these partials are equal to zero.

3) RTRL Weight Changes: In the backward pass, the RTRL partials are employed to compute the weight changes Δω_lm for the connections to the forget gate, cell, and input gate as

At each memory cell, the internal state error is determined as

    D. Bidirectional LSTM Outline

In bidirectional recurrent nets, the forward and backward sequences of each training sample are regarded as two independent recurrent nets connected to the same output layer. Taking the time sequence from t – 1 to t as an example, the procedure that combines the bidirectional algorithm with LSTM is outlined as follows.

1) Forward Pass: Feed all input data of the sequence into the LSTM and compute all the output units.

a) For the forward states (from time t – 1 to t) and the backward states (from time t to t – 1), carry out the forward-pass process of Section II-B;

b) For the output layer, carry out the forward-pass process of Section II-B.

2) Backward Pass: Compute the relevant partial derivatives of the error for the sequence used in the forward pass.

a) For the output neurons, carry out the backward-pass process introduced in Section II-C;

b) For the forward states (from time t to t – 1) and the backward states (from time t – 1 to t), carry out the backward-pass process discussed in Section II-C.

3) Update Weight Changes: Finally, (16) to (19) are used to update the RTRL weight changes.
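For illustration only (not the authors' code), the sketch below shows how the two time-directed passes can be organized around a single-cell step such as lstm_cell_forward above and merged at a shared output layer; the function signatures are assumptions:

```python
def bidirectional_forward(sequence, step_fwd, step_bwd, y0, s0, combine):
    """Run a sequence through two independent recurrent passes and merge them.

    sequence: list of input vectors in time order.
    step_fwd / step_bwd: callables implementing one LSTM step, e.g. closures
        around lstm_cell_forward with their own weights.
    y0, s0: initial cell output and cell state (e.g. zero vectors).
    combine: callable mapping (forward_output, backward_output) onto the
        shared output layer, e.g. concatenation plus a dense layer.
    """
    fwd, bwd = [], []
    y, s = y0, s0
    for x in sequence:                 # forward-in-time pass
        y, s = step_fwd(x, y, s)
        fwd.append(y)
    y, s = y0, s0
    for x in reversed(sequence):       # backward-in-time pass
        y, s = step_bwd(x, y, s)
        bwd.append(y)
    bwd.reverse()                      # re-align with the time axis
    return [combine(f, b) for f, b in zip(fwd, bwd)]
```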

    III. Powertrain Model and Parallel Reinforcement Learning

In this section, the energy management of the hybrid tracked vehicle (HTV) is formulated as an optimal control problem. Modeling of the battery pack and engine-generator set (EGS), together with the optimization objective, is introduced first. To resolve the data scarcity problem of a complex CPS, a parallel system of the hybrid electric powertrain is then proposed to generate artificial data. The real and artificial driving data constitute the synthesized data, which is trained to approximate the action state function. Finally, the Q-learning algorithm is applied to compute the optimal control action according to the trained data from the bidirectional LSTM network.

    The studied complex CPS is a self-built HTV and Fig. 3 depicts the sketch of the powertrain architecture. The main energy sources to propel the powertrain are the EGS and battery [10]. Table I lists the key characteristics of the HTV powertrain.

    A. Powertrain Modeling

    Fig. 3. Sketch of the self-built HTV architecture.

    TABLE I Specification of the HTV

For the EGS, the rated engine power is 52 kW at 6200 rpm. The rated generator output power is 40 kW within the speed range from 3000 rpm to 3500 rpm. The generator speed is the first state variable and is computed based on the torque equilibrium constraint

where n_g and n_e are the rotational speeds, and T_g and T_e are the torques of the generator and engine, respectively. T_e is one of the control variables in this work. J_e and J_g are the rotational moments of inertia of the engine and generator, respectively. i_eg is the gear ratio connecting the generator and engine, and 0.1047 is the conversion factor between r/min and rad/s (1 r/min = 0.1047 rad/s).
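The torque-equilibrium equation itself is lost in extraction; a plausible reconstruction consistent with the variables above (the exact grouping of the inertias and gear ratio in the original may differ) is:

$$ 0.1047\,\big(J_g + i_{eg}^{2} J_e\big)\, \dot{n}_g = i_{eg}\, T_e - T_g $$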

The output voltage and torque of the generator are derived as follows:

where K_e is the electromotive force coefficient, and U_g and I_g are the generator voltage and current, respectively. K_x n_g is the electromotive force term, with K_x = 3P L_g/π, in which L_g is the armature synchronous inductance and P is the number of poles.

In the hybrid electric powertrain, the battery SOC is selected as the other state variable. The output voltage and the derivative of SOC of the battery are described by the equivalent first-order model

where I_bat and C_bat are the battery current and capacity, respectively. P_bat is the battery power, r_in is the battery internal resistance, and V_oc is the open-circuit voltage. U_bat is the output voltage of the battery, and r_dis(SOC) and r_ch(SOC) describe the internal resistance during discharging and charging, respectively.
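The battery equations themselves are lost in extraction; a standard internal-resistance model consistent with the variables above (a reconstruction, not necessarily the exact original form) is:

$$ \dot{SOC} = -\frac{I_{bat}}{C_{bat}}, \qquad I_{bat} = \frac{V_{oc} - \sqrt{V_{oc}^{2} - 4\, r_{in} P_{bat}}}{2\, r_{in}}, \qquad U_{bat} = V_{oc} - I_{bat}\, r_{in} $$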

The optimization objective to be minimized is a trade-off between the charge-sustaining constraint and the fuel consumption over a finite horizon:
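The cost function itself does not survive extraction; a commonly used form matching this description (the weighting factor β and the SOC reference value are assumptions introduced here) is:

$$ J = \sum_{t=0}^{T}\Big[\dot{m}_{fuel}(t) + \beta\,\big(SOC(t) - SOC_{ref}\big)^{2}\Big]\,\Delta t $$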

    Furthermore, the instantaneous physical limits need to be observed to guarantee the reliability and safety of the powertrain:

where n_e,min, n_e,max, T_e,min, and T_e,max are the permitted lower and upper bounds of the engine speed and torque, respectively. n_m is the motor speed, and n_m,min and n_m,max are its boundary values. P_dem,min and P_dem,max are the bounds of the admissible power demand set, as are n_g,min and n_g,max for the generator speed.

Note that, since the core of this article is the PRL technique for a complex CPS, the traction motors are assumed to be power conversion devices with identical efficiency, and battery aging is not considered in this study [9], [10].

    B. Parallel Powertrain System

Fei-Yue Wang first proposed the parallel system theory in 2004 [28], [29], in which the ACP method was introduced to deal with complex CPS problems. The ACP approach stands for artificial societies (A) for modeling, computational experiments (C) for analysis, and parallel execution (P) for control. An artificial system is usually built by modeling, to explore data and knowledge in the same way as the real system does. Through executing independently and complementarily in these two systems, the learning model can become more efficient and less data-hungry. The ACP approach has been employed in several fields to address different problems in complex CPSs [30]–[32].

    Fig. 4. Parallel powertrain system for the self-built HTV.

For a self-built HTV, there are not sufficient operating environments available to obtain enough actual data. Hence, we build an artificial powertrain system in MATLAB/Simulink to address the data scarcity problem in action state function training. This artificial system combined with the real powertrain system constitutes the parallel system, as illustrated in Fig. 4(a). Since the steering power is small enough to be neglected, the speeds of the two tracks are taken as their average speed. By using a small amount of field test data as guidance and adjusting the parameters of the powertrain model and environments, a large volume of artificial data is acquired, including SOC, generator speed, engine torque, engine speed, power demand, battery current, battery voltage, and the two motor speeds. The synthesized data from the parallel system is collected and calibrated to derive the optimal EMS using the bidirectional LSTM network and reinforcement learning.

    C. Reinforcement Learning

A learning agent interacts with a stochastic environment in the reinforcement learning (RL) framework, and this interaction is modeled as a discrete discounted Markov decision process (MDP). The MDP is expressed as a quintuple {S, A, P, R, γ}, where S and A are the sets of state variables and control actions, P is the transition probability matrix (TPM), R is the reward function, and γ is a discount factor. In particular, the state variables in this paper are the SOC and generator speed; the control actions consist of the engine torque, power demand, and motor speed; the reward function represents the fuel consumption rate; and P(s′|s, a) denotes the probability of transferring from state s to the next state s′ when taking action a.

    The value function is defined as the expected future reward

Then, the finite expected discounted accumulated reward is summarized as the optimal value function, where π is the control policy, which describes the distribution of control actions over the time sequence. To deduce the optimal control action at each time instant, (26) is reformulated recursively as

    The optimal control policy is determined based on the optimal value function in (27)

Furthermore, the action value function and its corresponding optimal form are described as follows:
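Equations (26)–(29) referenced in this subsection do not survive extraction; the standard discounted-MDP forms consistent with the notation above (a reconstruction, with r(s, a) the fuel-consumption reward and γ the discount factor, written as a minimization because the reward is a cost) are:

$$ V^{\pi}(s) = \mathbb{E}\Big[\sum_{t=0}^{T} \gamma^{t}\, r\big(s_t, \pi(s_t)\big)\,\Big|\, s_0 = s\Big] $$

$$ V^{*}(s) = \min_{a}\Big[ r(s,a) + \gamma \sum_{s'} P(s'|s,a)\, V^{*}(s') \Big], \qquad \pi^{*}(s) = \arg\min_{a}\Big[ r(s,a) + \gamma \sum_{s'} P(s'|s,a)\, V^{*}(s') \Big] $$

$$ Q^{*}(s,a) = r(s,a) + \gamma \sum_{s'} P(s'|s,a)\, \min_{a'} Q^{*}(s',a') $$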

Fig. 5 shows the bidirectional LSTM-based deep reinforcement network utilized to estimate the action value function in RL. This structure includes two deep neural networks, one for the state-variable embeddings and another for the control sub-action embeddings. Owing to its nonlinear structure, the bidirectional LSTM network is expected to capture more information on how the individual embeddings are combined into an integrated embedding.

The inner product is used to compute the new action value by combining the state and sub-action neuron outputs as

where K1 and K2 are the numbers of states and sub-actions, respectively. The result denotes the expected accumulated future reward associated with the specific state variables at time t.

    Finally, the action value function corresponding to an optimal control policy can be computed using the Q-learning algorithm as [33]

    Fig. 5. Bidirectional LSTM-based deep neural network.

Algorithm 1 describes the pseudo-code of the Q-learning algorithm. The discount factor γ is set to 0.96, the decaying factor μ depends on the time instant k and is chosen to decay over time to accelerate the convergence rate, and the number of iterations N_it is 10 000.

Algorithm 1: Q-learning Algorithm
1. Extract Q(s, a) from training and initialize the iteration number N_it
2. Repeat for each time instant k = 1, 2, 3, …
3.   Based on Q(s, ·), choose action a (ε-greedy policy)
4.   Execute action a and observe r, s′
5.   Define a* = arg min_a Q(s′, a)
6.   Q(s, a) ← Q(s, a) + μ(r(s, a) + γ min_a′ Q(s′, a′) − Q(s, a))
7.   s ← s′
8. Until s is terminal
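For illustration only (not the authors' MATLAB implementation), a minimal tabular sketch of Algorithm 1 with cost-minimizing updates might look like the following; the environment `step` interface and the state/action discretization are assumptions:

```python
import numpy as np

def q_learning(step, n_states, n_actions, n_iter=10_000,
               gamma=0.96, mu=0.1, epsilon=0.1, s0=0):
    """Tabular Q-learning that minimizes accumulated cost (fuel consumption).

    step(s, a) -> (r, s_next, done): assumed environment interface, e.g. a
    wrapper around the parallel powertrain simulation.
    """
    Q = np.zeros((n_states, n_actions))       # initialize action value table
    s = s0
    for k in range(n_iter):
        # epsilon-greedy action selection (greedy = lowest expected cost)
        if np.random.rand() < epsilon:
            a = np.random.randint(n_actions)
        else:
            a = int(np.argmin(Q[s]))
        r, s_next, done = step(s, a)           # execute action, observe reward
        # cost-minimizing temporal-difference update (Algorithm 1, line 6)
        Q[s, a] += mu * (r + gamma * np.min(Q[s_next]) - Q[s, a])
        s = s_next
        if done:                               # restart when the cycle ends
            s = s0
    return Q
```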

The TPMs of the power demand and the vehicle model are inputs to the RL technique for the optimal EMS calculation. The RL algorithm is realized in MATLAB via the Markov decision process (MDP) toolbox presented in [34], on a microprocessor with an Intel quad-core CPU. The proposed EMS is compared with the conventional RL approach and deep RL in the next section to demonstrate its optimality and availability.

    IV. Experiment Results and Discussions

The proposed bidirectional LSTM enabled PRL-based energy management strategy (EMS) is assessed on the self-built HTV powertrain in this section. First, data collection and processing are introduced in detail. We operate the HTV in real scenarios to collect real vehicle driving data. Based on this data, we generate the synthesized data from the parallel system, including all the states and control variables, to use for action value function estimation. Then, the presented PRL-based energy management strategy is compared with the conventional RL and deep RL approaches to evaluate its availability and optimality. Simulation results indicate that the proposed energy management strategy is superior to the two benchmarking techniques in control performance.

    A. Data Collection and Processing

The real vehicle experiment is implemented on the self-built HTV in a suburb to represent cross-country scenarios, and the real and target driving cycles are depicted in Fig. 6. The vehicle data and powertrain states are collected from the CAN bus at a fixed sampling frequency. The collected driving data is applied to create a large volume of artificial data in the parallel powertrain system. Subject to the physical constraints of the powertrain, the inputs of the parallel system are randomly varied engine throttle and rotational speed, and the outputs are the state variables and control actions. Fig. 7 illustrates a period of the generated driving data from the parallel powertrain system.

    Fig. 6. Real vehicle testing scenario and the corresponding driving cycles.

    Fig. 7. One period of the generated driving data from the parallel system.

    Furthermore, to eliminate the influence of different variable units on training, the input state variables and control actions of the network are scaled to the range from 0 to 1.
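As a simple illustration of this preprocessing step (not the authors' code), min-max scaling of each variable to [0, 1] can be done as follows:

```python
import numpy as np

def minmax_scale(X):
    """Scale each column of X (one variable per column) to the range [0, 1]."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # avoid divide-by-zero
    return (X - x_min) / span
```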

    B. Comparison of Different EMSs

Based on the trained action value function, the proposed bidirectional LSTM enabled PRL-based EMS is compared with the conventional RL and deep RL controls in this section to verify its availability and optimality. In the energy management problem, the simulation cycle is a real vehicle driving cycle, and the initial values of the state variables SOC and generator speed are 0.7 and 1200 rpm, respectively.

The SOC trajectories of a certain driving cycle and the corresponding generator speed are illustrated in Fig. 8. It can be discerned that the SOC trajectory under the proposed model-free EMS is close to that of the deep RL control, and both are different from that of the conventional RL control. This can be explained by the different power split between the EGS and the battery, which is decided by the action value functions. It demonstrates that the training process in the deep neural network can improve the accuracy and optimality of the control policy derived by the Q-learning algorithm. An analogous result for the generator speed trajectory is also shown in Fig. 8.

    Fig. 8. State variables SOC and generator speed trajectories for the different control strategies.

    Taking engine torque as an example, the above observation can be explained by the different distribution of engine torque with the state variables. Being a control variable, different values of engine torque decide multiple operative modes of the powertrain, as shown in Fig. 9.

The convergence processes of the action value function in the proposed EMS, conventional RL, and deep RL are illustrated in Fig. 10. The mean discrepancy depicts the deviation between two action value functions per 100 iterations. Note that the mean discrepancy decreases as the number of iterations increases, which reflects the convergence of the Q-learning algorithm.

    Fig. 9. Control action engine torque with the state variables.

    Fig. 10. Convergence rates of the action value function in three controls.

Fig. 10 also shows that the proposed control is superior to the conventional RL and deep RL controls in control performance, although its convergence rate is slightly slower. This can be explained by the additional training of the action value function in the bidirectional LSTM network. With an acceptable calculation speed, the proposed EMS adapts to real-time driving conditions better than the conventional RL and deep RL controls, which demonstrates its availability.

Table II describes the fuel consumption after SOC correction and the computation time for the three control strategies. It is apparent that the fuel consumption under the PRL-enabled EMS is lower than that of the conventional RL-based and deep RL controls, which demonstrates its optimality. Also, the computation time of PRL is lower than that of deep RL and conventional RL, which implies that it has the potential to be applied in real time.

    TABLE II The Fuel Consumption in Three Control Strategies

    V. Conclusion

In this paper, we propose a novel bidirectional LSTM network based PRL framework to construct an EMS for an HTV. First, the upper level builds an artificial vehicle powertrain system analogous to the real vehicle to constitute the parallel powertrain system. Second, a bidirectional LSTM network is proposed to train on the large volume of synthesized data from this parallel system and represent the dependence between multiple actions and states. Third, in the lower-level skeleton, a model-free RL algorithm is used to compute the adaptive control strategy based on the trained data.

    Tests prove the optimality and availability of the proposed energy management strategy. In addition, the advantages in control performance and energy efficiency imply that the proposed adaptive control can be applied in real situations.

The proposed combination of a bidirectional LSTM network and RL is in fact a simplified instance of so-called parallel learning [35], which aims to build a more general framework for data-driven intelligent control. Future work will focus on applying the parallel learning and PRL frameworks to different research fields of automated vehicles, such as driving style recognition [36], braking intensity estimation [37], [38], and lane changing intention prediction [39], [40]. The parallel system could generate abundant driving data and evaluate the performance of different controllers easily.
