
    Cooperative Channel Assignment for VANETs Based on Dual Reinforcement Learning

    Computers, Materials & Continua, 2021, Issue 2

    Xuting Duan, Yuanhao Zhao, Kunxian Zheng*, Daxin Tian, Jianshan Zhou3 and Jian Gao

    1 Beijing Advanced Innovation Center for Big Data and Brain Computing, Beijing, 100191, China

    2 School of Transportation Science and Engineering, Beihang University, Beijing, 100191, China

    3 Department of Engineering and Design, University of Sussex, Brighton, BN1 9RH, UK

    4 Research Institute of Highway, Ministry of Transport, Beijing, 100088, China

    Abstract: Dynamic channel assignment (DCA) is significant for extending vehicular ad hoc network (VANET) capacity and mitigating congestion. However, the unknown global state information and the lack of centralized control make channel assignment a challenging task in a distributed vehicular direct communication scenario. In our preliminary field test of communication in a V2X scenario, we found that existing DCA technology cannot fully meet the communication performance requirements of VANETs. In order to improve communication performance, this paper first demonstrates the feasibility and potential of reinforcement learning (RL) methods for jointly designing the channel selection decision and the access back-off adaptation. A dual reinforcement learning (DRL)-based cooperative DCA (DRL-CDCA) mechanism is then proposed. Specifically, DRL-CDCA jointly optimizes the decision-making behaviors of both channel selection and back-off adaptation based on a multi-agent dual reinforcement learning framework. In addition, nodes locally share and incorporate their individual rewards after each communication to achieve regionally consistent optimization. Simulation results show that the proposed DRL-CDCA reduces the one-hop packet delay and improves the average packet delivery ratio compared with two existing mechanisms.

    Keywords: Vehicular ad hoc networks; reinforcement learning; dynamic channel assignment

    1 Introduction

    VANET is a specific application of the mobile ad hoc network (MANET) in vehicle-to-vehicle and vehicle-to-infrastructure communication scenarios. As a research hotspot of intelligent transportation, VANETs lay a crucial foundation for various intelligent vehicular functions. Traditional MANETs adopt a single-channel communication mode in which all nodes can only access one common channel for data transmission. With the widespread deployment of connected vehicles in the future, the number of VANET nodes will increase continuously, leading to progressively fierce competition for wireless resources. The single-channel communication mode tends to cause severe resource conflicts when large numbers of nodes access the channel concurrently. Therefore, the capacity of a wireless network using the traditional single-channel communication mode is seriously limited by the number of channels.

    As a widely used VANET wireless communication protocol standard, WAVE (Wireless Access in Vehicular Environments) provides a 75 MHz bandwidth in the 5.9 GHz frequency band for vehicle-to-vehicle and vehicle-to-infrastructure communication and divides this bandwidth into seven channels, which enable nodes to transmit data packets simultaneously on different channels. CH178 is the control channel (CCH), which can only be used to transmit control and public safety information. CH174, CH176, CH180 and CH182 are service channels (SCHs), used to transmit both public safety and private service information. CH184 and CH172 are reserved for future use. IEEE 1609.4 specifies multi-channel operations for WAVE, such as channel synchronization, coordination, and switching. WAVE providers broadcast WAVE Service Advertisement (WSA) packets containing the offered service information and the network parameters necessary to join the advertised Basic Service Set (BSS) [1]. After receiving a WSA, the WAVE users interested in the service access the corresponding SCH in the SCH Interval (SCHI) to obtain service data. However, in our preliminary field experiment we found that the quality of service (QoS) of the VANET is not ideal in either line-of-sight (LOS) or non-line-of-sight (NLOS) scenarios. This channel coordination mode is not suitable for a vehicular direct communication scenario. For example, in a unicast multi-hop routing scenario, the transmitting node needs to exchange data on a specific SCH with the next-hop node selected by the routing protocol, yet the process of optimal SCH selection and the channel coordination therein are not clearly defined in IEEE 1609.4.
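    To make the channel layout above concrete, the sketch below encodes it as a simple lookup table. The role labels follow the description above; the center frequencies follow the standard IEEE 802.11 channelization rule for the 5.9 GHz band (5000 + 5 × channel number, in MHz).

```python
# WAVE 5.9 GHz channel layout as described above (IEEE 802.11p / 1609.4).
# Center frequency in MHz follows the 802.11 rule: 5000 + 5 * channel_number.
WAVE_CHANNELS = {
    172: {"freq_mhz": 5860, "role": "reserved"},
    174: {"freq_mhz": 5870, "role": "SCH"},
    176: {"freq_mhz": 5880, "role": "SCH"},
    178: {"freq_mhz": 5890, "role": "CCH"},   # control channel
    180: {"freq_mhz": 5900, "role": "SCH"},
    182: {"freq_mhz": 5910, "role": "SCH"},
    184: {"freq_mhz": 5920, "role": "reserved"},
}

SERVICE_CHANNELS = [ch for ch, info in WAVE_CHANNELS.items() if info["role"] == "SCH"]
assert SERVICE_CHANNELS == [174, 176, 180, 182]  # the four SCHs named above
```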

    The limited communication range, the changing network topology, and the distributed execution of channel assignment between nodes make the global network state of VANETs unknown to individual nodes. Therefore, the channel assignment of local vehicular direct communication is actually an optimization problem under an unknown state model. In addition, the lack of centralized control makes DCA even more challenging. This paper applies RL to the DCA problem in a dynamic environment because of its widespread use and outstanding performance in optimal decision-making without a state model. In order to meet the dual requirements of VANETs for network capacity and latency, we design a dual RL framework that jointly optimizes the decision-making behaviors of both channel selection and back-off adaptation, and we achieve multi-agent collaboration by having agents share their individual rewards. Finally, our DRL-CDCA is compared with two conventional baseline mechanisms under the same simulation scenario. The simulations show that DRL-CDCA is superior to both baselines in terms of one-hop packet delay and average packet delivery ratio.

    The main contributions of this paper are summarized as follows:

    • As an important branch of machine learning, RL theory has achieved success in many fields, but mainly in environment-robot interaction and game playing. Since few efforts have explored the application of RL in the Internet of Vehicles, the use of RL for joint channel selection and medium access control (MAC) layer back-off in vehicle-to-vehicle (V2V) communication and networking is still immature. To the best of our knowledge, this is the first work to demonstrate the feasibility and potential of an RL-based method for jointly designing the channel selection decision and the access back-off adaptation to enhance V2V communication.

    • In addition, we adapt the original RL theory to the vehicular setting, and design two components that improve the decision-making performance of the RL agents in V2V communication: (I) a dual Q-network structure for the joint optimization of channel selection and back-off adaptation; (II) a distributed consensus reward mechanism to promote cooperative decision-making among the learners.

    The remainder of this paper is organized as follows: Section 2 introduces related work on channel assignment. Section 3 presents the preliminary field experiments and the problems revealed by the experimental results. Section 4 describes the system model and problem formulation. Section 5 describes the details of the proposed mechanism. The performance of DRL-CDCA is compared to two existing mechanisms in Section 6. Finally, concluding remarks are presented in Section 7.

    2 Related Work

    The Q-learning-based DCA proposed in Nie et al. [2] uses RL to solve the DCA problem in a cellular mobile communication system. It is a single-agent RL (SARL) mechanism as well as a centralized one: the base station, as a centralized node, assigns a channel to each communication node pair. However, channel assignment in VANETs is not performed by a central node like a base station; instead, each node assigns its channel independently. Therefore, a centralized channel assignment mechanism designed for cellular mobile communication systems cannot be adapted to VANETs. A deep RL (DRL)-based DCA (DRL-DCA) algorithm is proposed in Liu et al. [3], where the system state is reformulated in an image-like fashion and a convolutional neural network is used to extract useful features. DRL-DCA models the multibeam satellite system as the agent and the service events as the environment. From the perspective of VANETs, DRL-DCA is equivalent to an RSU (roadside unit)-based channel assignment mechanism, which is again centralized. Wei et al. [4] combine the RL method Q-learning with a deep neural network to approximate the value function in complex control applications. The RL-CAA mechanism in Ahmed et al. [5] and the RLAM mechanism in Louta et al. [6] are also centralized channel assignment mechanisms in different application scenarios, and are not suitable for the distributed channel assignment problem of the vehicular direct communication scenario studied in this paper. Therefore, this paper models the Markov decision process of RL for distributed scenarios, and designs a DCA mechanism based on a distributed RL model.

    In recent years, some advanced MAC (medium access control) protocols [7-9] have been designed to enhance the communication capabilities of VANETs. An adaptive multi-channel assignment and coordination (AMAC) scheme for IEEE 802.11p/1609.4 is proposed in Almohammedi et al. [10], which exploits channel access scheduling and channel switching in a novel way. However, AMAC's channel selection mechanism is still based on the WBSS (WAVE Basic Service Set) service release-subscription model, not on vehicular direct communication. A safety-communication-based adaptive multi-channel assignment mechanism is proposed in Chantaraskul et al. [11] to adaptively adjust the channel switching interval; however, the strategy for SCH selection is not addressed there. An RSU-coordinated synchronous multi-channel MAC scheme for VANETs is proposed in Li et al. [12], which supports simultaneous transmissions on different SCHs. However, in a scenario where vehicles are directly connected, a MAC mechanism that depends on RSU coordination cannot be realized. In Ribal et al. [13], deep reinforcement learning is applied to VANETs to implement vehicle-to-RSU communication that meets QoS requirements. The RSU can distinguish different system states according to its remaining battery, the number of mobile nodes and the communication requests before assigning a suitable SCH to each OBU (on-board unit). The method proposed in Ribal et al. [13] still belongs to the RSU-based channel assignment family, which cannot solve the channel assignment problem in a fully distributed scenario.

    Furthermore, owing to uneven traffic flow density, VANET node density is also uneven, and routing is multi-hop and multi-node, which brings severe challenges to the operation and optimization of networking and transmission. Liu et al. [14] give a novel multi-hop algorithm for wireless networks with unevenly distributed nodes, in which each unknown node estimates its location using an elastic-net mapping model constructed from anchor nodes. However, the randomness and uncertainty of target motion, an important feature of VANETs, are not discussed in depth. A referential node scheme for VANETs is proposed in Wang et al. [15], where further analysis of channel assignment is not carried out.

    In summary, a large portion of the channel assignment mechanisms for VANETs are centralized, and the existing fully distributed ones are still largely based on the WBSS service release-subscription model rather than on the vehicular direct communication scenario.

    3 Preliminary Field Experiment

    In the preliminary work of this paper, a field experiment on vehicle-road collaborative applications was carried out at the Tongzhou automobile test field, Beijing. The field testing environment is shown in Fig. 1. In the experiment, the communication performance of the VANET was measured in LOS and NLOS scenarios at different vehicle speeds. Particular attention was paid to the QoS of the VANET, a representative index for evaluating network performance.

    Figure 1: Beijing Tongzhou automobile test base

    During the experiment, a vehicle equipped with an on-board unit ran on the test road, while roadside units were deployed at relatively fixed locations. The on-board terminal and the roadside terminal formed a network through communication between the on-board unit and the roadside unit. The topology of the VANET test is shown in Fig. 2. Once the experiment was launched, driving-road collaboration applications were exercised through data exchange and sharing between the roadside and the vehicle. Testers monitored the on-board unit and the roadside unit with computers inside and outside the vehicle, respectively, and the QoS indicators were then calculated.

    In this field experiment, QoS refers to delay, delay variation, packet loss rate, and throughput, and an ordinary DCA strategy was employed in the vehicular network. The results in various speed and visibility scenarios are shown in Fig. 3.

    The results reveal that although the current strategy keeps the delay below 10 ms almost throughout, the delay variation fluctuates frequently. Meanwhile, although packet loss does not appear to be serious, only one vehicle participated in the test and the network contained only a single one-hop route, so these results cannot reflect the general multi-hop, multi-route situation. In addition, the throughput stays within 2 Mb/s in most cases while the rated throughput of the communication equipment is 4 Mb/s: with the existing DCA strategy, roughly half of the rated throughput is sacrificed to maintain communication accessibility.

    Figure 2:Topology structure of VANET performance testing

    Figure 3:QoS test results in various speed and visibility scenarios

    The field experiment results illustrate that the existing DCA mechanism adapts poorly to VANETs. To optimize channel allocation and networking in VANETs, this paper proposes DRL-CDCA. Considering the difficulty of deploying a multi-hop, multi-route experiment with multiple vehicles in the test field, this paper explores the multi-hop, multi-route situation through simulation.

    4 System Model and Problem Formulation

    The current state $s_{i,n}$ observed by RL agent $i$ at the $n$-th time slot depends on the previous state $s_{i,n-1}$ and action $a_{i,n-1}$, which makes this a Markov Decision Process (MDP) that can be described by a 5-tuple $(S_i, A_i, P_i, R_i, \gamma)$:

    • $S_i$: the set of states observed by RL agent $i$ over the time slots, where $s_{i,n} \in S_i$ is the system state observed by agent $i$ at the $n$-th time slot.

    • $A_i$: the set of actions performed by RL agent $i$ over the time slots, where $a_{i,n} \in A_i$ is the action performed by agent $i$ at the $n$-th time slot.

    • $P_i$: the state transition probability, i.e., the probability distribution over next states after taking action $a$ in state $s$. The probability of moving from $s_{i,n}$ to $s_{i,n+1}$ after taking action $a_{i,n}$ is written $p(s_{i,n+1} \mid s_{i,n}, a_{i,n})$.

    • $R_i$: the reward function, where $r_{i,n} \in R_i$ is the reward obtained by RL agent $i$ at the $n$-th time slot.

    • $\gamma$: the discount factor, which weights real-time against long-term rewards. When $\gamma = 0$, the agent only considers real-time rewards; $\gamma = 1$ means that long-term and real-time rewards are equally important.

    In order to jointly optimize channel allocation and back-off adaptation, we apply a dual MDP to the DCA problem. The state, action and reward functions of each MDP in the dual MDP are described in detail below.
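    As a minimal illustration of the role of the discount factor in the 5-tuple above, the sketch below computes the discounted return accumulated over a reward sequence; the reward values are hypothetical.

```python
def discounted_return(rewards, gamma):
    """Sum of gamma^t * r_t: gamma = 0 keeps only the immediate reward,
    gamma = 1 weights long-term and real-time rewards equally."""
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

rewards = [1.0, 0.5, 0.8]               # hypothetical per-slot rewards
print(discounted_return(rewards, 0.0))  # 1.0 (only the real-time reward)
print(discounted_return(rewards, 1.0))  # 2.3 (all rewards equally weighted)
```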

    4.1 State

    We assume that the number of SCHs is $K$. The states of the two MDPs in the dual MDP are defined as follows.

    • Channel Selection: the state of the channel-selection MDP is $s^c_{i,n} = \{C_{i,n}, E_{i,n}\}$, where $C_{i,n} = [c^1_{i,n}, \ldots, c^k_{i,n}, \ldots, c^K_{i,n}]$ denotes the channel busy situation, $c^k_{i,n}$ is the number of CNPs (communication node pairs) intending to transmit over the $k$-th SCH, and $E_{i,n}$ denotes the local communication requirement of node $i$ at the $n$-th time slot.

    • Back-off Adaptation: the state of the back-off adaptation MDP is $s^w_{i,n}$, which contains $W_{i,n-1}$, the back-off window size in the $(n-1)$-th time slot.
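    A minimal sketch of how the two state vectors could be assembled at node $i$, assuming the node learns the per-SCH reservation counts $c^k_{i,n}$ by overhearing channel coordination packets; the variable names and the exact composition of the back-off state are illustrative, not taken from the paper's implementation.

```python
import numpy as np

K = 4  # number of SCHs (CH174, CH176, CH180, CH182)

def channel_selection_state(reserved_counts, local_demand):
    """s^c_{i,n} = {C_{i,n}, E_{i,n}}: per-SCH busy counts plus local demand."""
    assert len(reserved_counts) == K
    return np.array(list(reserved_counts) + [local_demand], dtype=np.float32)

def backoff_state(reserved_counts, local_demand, prev_window):
    """Back-off MDP state: assumed to be the channel observation plus W_{i,n-1}."""
    return np.array(list(reserved_counts) + [local_demand, prev_window],
                    dtype=np.float32)

s_c = channel_selection_state([2, 0, 1, 3], local_demand=5)
s_w = backoff_state([2, 0, 1, 3], local_demand=5, prev_window=15)
```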

    4.2 Action

    • Channel Selection: the action of the channel-selection MDP is $k_{i,n}$ ($1 \le k_{i,n} \le K$), the index of the SCH selected by RL agent $i$ in the $n$-th time slot.

    • Back-off Adaptation: the action of the back-off adaptation MDP is $w_{i,n} \in \{w^1_{i,n}, w^2_{i,n}, w^3_{i,n}\}$, where $w^1_{i,n}$ denotes that agent $i$ maintains the current back-off window size, $W_{i,n} = W_{i,n-1}$; $w^2_{i,n}$ denotes that agent $i$ increases the back-off window size, $W_{i,n} = 2W_{i,n-1} + 1$; and $w^3_{i,n}$ denotes that agent $i$ reduces the back-off window size, $W_{i,n} = (W_{i,n-1} - 1)/2$.
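    The three back-off actions can be applied as in the sketch below. The decrease rule is assumed to mirror the standard contention-window doubling rule, since the original expression was truncated in the text.

```python
def apply_backoff_action(prev_window: int, action: str) -> int:
    """Apply one of the three back-off adaptation actions w1/w2/w3."""
    if action == "keep":        # w1: W_n = W_{n-1}
        return prev_window
    if action == "increase":    # w2: W_n = 2*W_{n-1} + 1 (standard CW doubling)
        return 2 * prev_window + 1
    if action == "decrease":    # w3: assumed inverse of the doubling rule
        return max((prev_window - 1) // 2, 0)
    raise ValueError(f"unknown action: {action}")

assert apply_backoff_action(15, "increase") == 31
assert apply_backoff_action(31, "decrease") == 15
```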

    4.3 Reward

    We take into consideration both the dynamic user communication demand and the communication performance as indicators for model training. The reward function is designed as follows.

    where $\xi$ and $\varsigma$ are two positive weights, $g_{tra}$ denotes the number of packets transferred by agent $i$, $g_{rec}$ denotes the number of packets successfully delivered to the receiver, and $g_{que}$ denotes the number of packets waiting in the buffer queue that need to be transferred in the $n$-th time slot. We use the packet delivery ratio $g_{rec}/g_{tra}$ to denote the communication performance, and the sum of the packets transferred and those waiting in the buffer to denote the user communication demand at the $n$-th time slot.

    In order to achieve multi-agent collaborative optimization in a distributed manner, we propose a novel reward formulation with a weighted sum strategy, termed the consensus reward, which is constructed as

    $r_{i,n} = \sum_{r'_{i,n,j} \in R'_{i,n}} \beta_{i,n,j} \, r'_{i,n,j}$

    where $R'_{i,n}$ is the set of local rewards of the neighbor nodes of node $i$, $r'_{i,n,j}$ is the local reward of node $j$, and $\beta_{i,n,j}$ is the weight of $r'_{i,n,j}$.
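    A sketch of the two reward computations under the definitions above. The consensus reward follows the weighted-sum form directly; the exact functional form of the local reward is an assumption (only its two ingredients, the packet delivery ratio and the demand, are specified in the text), as are the weight values.

```python
def local_reward(g_tra, g_rec, g_que, xi=1.0, sigma=0.01):
    """Combine the delivery ratio (performance) with g_tra + g_que (demand).
    The combination below (reward delivery, penalize backlog) is an assumed
    form; xi and sigma stand in for the paper's two positive weights."""
    pdr = g_rec / g_tra if g_tra > 0 else 0.0
    demand = g_tra + g_que
    return xi * pdr - sigma * demand

def consensus_reward(neighbor_rewards, weights):
    """r_{i,n} = sum_j beta_{i,n,j} * r'_{i,n,j}: weighted sum of the local
    rewards shared by node i's neighbors."""
    return sum(b * r for b, r in zip(weights, neighbor_rewards))

r_own = local_reward(g_tra=10, g_rec=9, g_que=4)
r_cons = consensus_reward([r_own, 0.7, 0.8], weights=[0.5, 0.25, 0.25])
```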

    The model input $S_i$, action $A_i$, and reward $R_i$ of the dual MDP are defined above. To obtain the optimal strategy, the state transition probability $P_i$ and the discount factor $\gamma$ must also be determined. Of these, $\gamma$ is set manually. Determining the state transition probability $P_i$ presupposes a known environment model; however, it is hardly possible to accurately predict the system state at the next time slot in VANETs. Consequently, the DCA problem is an unknown-environment-model problem. In this paper, Q-learning, a widely used RL algorithm, is employed to obtain the optimal strategy. The basic Q-learning update formula is as follows [16]:

    $Q_{n+1}(s_{i,n}, a_{i,n}) = Q_n(s_{i,n}, a_{i,n}) + \alpha \big[ r_{i,n} + \gamma \max_{a} Q_n(s_{i,n+1}, a) - Q_n(s_{i,n}, a_{i,n}) \big]$

    where $Q_n(s_{i,n}, a_{i,n})$ denotes the value function when taking action $a_{i,n}$ in state $s_{i,n}$, and $\alpha$ is the learning rate that determines the magnitude of the strategy update.
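    For reference, the cited update rule in its tabular form, over discretized states; DRL-CDCA itself replaces the table with neural networks, as described in Section 5.

```python
from collections import defaultdict

Q = defaultdict(float)   # Q[(state, action)] -> value, defaults to 0
alpha, gamma = 0.1, 0.9  # learning rate and discount factor

def q_update(s, a, r, s_next, actions):
    """Q_{n+1}(s,a) = Q_n(s,a) + alpha * [r + gamma * max_a' Q_n(s',a') - Q_n(s,a)]"""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

# One update step with hypothetical discretized states and 4 SCH actions.
q_update(s=(2, 0, 1, 3, 5), a=1, r=0.8, s_next=(1, 0, 2, 3, 4), actions=range(4))
```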

    5 Proposed Channel Assignment Mechanism

    5.1 Strategy Execution and Update

    The model input of DRL-CDCA is a vector of continuous values, so the state-action values cannot be stored in a Q-table. DRL-CDCA instead uses neural networks to approximate the state-action values. Specifically, we construct two neural networks, i.e., dual neural networks: one, denoted $Q^c(s, a; \theta)$, is used for channel selection, and the other, denoted $Q^w(s, a; \varphi)$, for back-off adaptation, where $\theta$ and $\varphi$ are the weights of the dual neural networks. Fig. 4 shows our methodological framework.

    The weights $\theta$ and $\varphi$ are updated by gradient descent on the squared temporal-difference error of the corresponding network, in the same spirit as the tabular update above:

    $\theta \leftarrow \theta - \alpha_\theta \nabla_\theta \big( r_{i,n} + \gamma \max_{a} Q^c(s^c_{i,n+1}, a; \theta) - Q^c(s^c_{i,n}, a_{i,n}; \theta) \big)^2$

    and analogously for $\varphi$.

    Figure 4: A scenario of multiple co-existing communication node pairs (CNPs) driven by multi-agent DRL-CDCA
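    A compact sketch of the dual-network idea: two independent function approximators, one scoring SCH choices and one scoring back-off actions, each trained on its own TD error. For self-containment this uses linear approximators in NumPy rather than the MLPACK networks used in the paper, and epsilon-greedy exploration is a standard assumption, not a detail stated in the text.

```python
import numpy as np

class LinearQ:
    """Stand-in for one of the dual Q-networks (linear instead of neural)."""
    def __init__(self, state_dim, n_actions, lr=0.01):
        self.w = np.zeros((n_actions, state_dim))
        self.lr = lr

    def q_values(self, s):
        return self.w @ s

    def act(self, s, eps=0.1):
        if np.random.rand() < eps:                # epsilon-greedy exploration
            return np.random.randint(len(self.w))
        return int(np.argmax(self.q_values(s)))

    def update(self, s, a, r, s_next, gamma=0.9):
        # Semi-gradient step on the squared TD error for the taken action.
        target = r + gamma * np.max(self.q_values(s_next))
        td_error = target - self.q_values(s)[a]
        self.w[a] += self.lr * td_error * s

channel_net = LinearQ(state_dim=5, n_actions=4)  # Q^c(s, a; theta): pick 1 of K SCHs
backoff_net = LinearQ(state_dim=6, n_actions=3)  # Q^w(s, a; phi): keep/increase/decrease
```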

    5.2 Channel Access Process

    VANET nodes $i$ and $j$ exchange RTS/CTS in the CCH interval for channel coordination, and then switch to the selected SCH $k$ for data transmission in the SCH interval. To illustrate channel access, consider the example shown in Fig. 5. There are communication demands between nodes A and B and between nodes C and D in the CCH interval. Nodes A and C compete for access to the CCH. Assume node A obtains the transmission opportunity first. After successfully accessing the CCH, node A sends an RTS to node B. When node B receives the RTS, it selects an SCH; assume SCH 1 is selected as the transmission channel of nodes A and B in the SCH interval. Node B broadcasts a CTS containing the channel coordination information between itself and node A. After receiving the CTS, the neighboring nodes of node B update their local state. Node C then attempts to access the CCH again; assume SCH 3 is selected as the transmission channel of nodes C and D in the SCH interval. In the SCH interval, nodes A and C do not immediately pass data to the MAC layer; instead, they back off randomly for a period of time when entering the SCHI. The window size $W_{i,n}$ of this random back-off process is determined by the back-off network $Q^w(\cdot; \varphi)$. Then nodes A and B switch to SCH 1, nodes C and D switch to SCH 3, and each pair transmits data on its own SCH. The receiving node sends an ACK as soon as it receives the data packet. The overall procedure is given in Algorithm 1.
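    The example above can be summarized as an ordered event trace; the sketch below replays it to make the CCHI/SCHI split explicit. It is a narrative aid under the assumptions of the example, not a protocol implementation.

```python
# Replaying the Fig. 5 example: coordination on the CCH, data on the SCHs.
cchi_events = [
    ("A", "wins CCH contention, sends RTS to B"),
    ("B", "selects SCH 1, broadcasts CTS (neighbors update local state)"),
    ("C", "retries CCH access, sends RTS to D"),
    ("D", "selects SCH 3, broadcasts CTS"),
]
schi_events = [
    ("A/B", "random back-off (window W from the back-off network), data on SCH 1"),
    ("C/D", "random back-off, data on SCH 3"),
    ("B", "ACK to A"),
    ("D", "ACK to C"),
]
for node, event in cchi_events + schi_events:
    print(f"{node}: {event}")
```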

    Algorithm 1:Algorithm for Multi-agent DRL-CDCA

    6 Results and Discussion

    6.1 Simulation Parameters

    The simulation experiments in this paper are based on Veins, which in turn builds on two simulators: OMNeT++, an event-based network simulator, and SUMO, a road traffic simulator. The neural networks of DRL-CDCA are implemented with the third-party machine learning C++ library MLPACK. The simulation scenario shown in Fig. 6 is part of a scenario based on the city of Erlangen. The parameter settings are given in Tab. 1.

    Figure 5:Channel access process

    This paper compares the performance of the multi-agent DRL-CDCA with two other existing channel assignment mechanisms as follows:

    • The random assignment mechanism (Random): each CNP randomly selects an SCH in each time slot.

    Figure 6: Simulation scenario based on part of the city of Erlangen

    Table 1:Simulation parameters

    • The greedy selection mechanism (Greedy): each CNP selects the SCH currently reserved by the fewest other CNPs.

    We use the following metrics to compare the performance of different mechanisms:

    • Packet delivery ratio: the ratio of the number of ACKs received to the number of packets sent over the entire simulation time, representing the adaptability of the channel assignment mechanism to the dynamic network.

    • One-hop packet delay: the average time required for each packet to travel from the application layer of the source to the application layer of the destination, which is critical for fast data transfer in VANETs.
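    The two metrics can be computed from per-packet logs as sketched below; the field names are illustrative.

```python
def packet_delivery_ratio(n_acks: int, n_sent: int) -> float:
    """Ratio of ACKed packets to packets sent over the whole simulation."""
    return n_acks / n_sent if n_sent else 0.0

def mean_one_hop_delay(packets) -> float:
    """Average application-layer-to-application-layer delay over delivered
    packets. Each packet is a dict with 'sent_at' and 'received_at'
    (received_at is None if the packet was lost)."""
    delays = [p["received_at"] - p["sent_at"]
              for p in packets if p["received_at"] is not None]
    return sum(delays) / len(delays) if delays else float("inf")

print(packet_delivery_ratio(860, 1000))  # 0.86
```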

    Figure 7:Convergence performance of Multi-agent DRL-CDCA

    6.2 Results

    Fig. 7 shows the convergence of the multi-agent DRL-CDCA; the Q-value stabilizes after about 1000 iterations. In fact, the channel assignment decisions in a VANET are typically highly repetitive, so the multi-agent DRL-CDCA can converge quickly.

    Fig. 8 evaluates the performance of DRL-CDCA and the two existing mechanisms for varying numbers of vehicles. Clearly, the performance of all channel assignment mechanisms degrades as more vehicles appear on the road. Fig. 8(a) plots the packet delivery ratio against the number of vehicles. As the number of vehicles increases, the number of data packets also increases, leading to more frequent packet collisions; consequently, the probability of successful transmission gradually decreases. Fig. 8(b) plots the one-hop packet delay against the number of vehicles. As the communication demand increases, the busy time of the channel grows, which lengthens the nodes' back-off durations; hence the one-hop packet delay also increases with the number of vehicles. Our method outperforms the two conventional baselines even in highly dense situations. For instance, when the number of vehicles is set to 400, the packet delivery ratio of the multi-agent DRL-CDCA is 13.83% and 21.98% higher than Random and Greedy, respectively, and its one-hop packet delay is 73.73% and 73.65% lower than that of the two baselines, respectively.

    Fig. 9 evaluates the performance of DRL-CDCA and the two existing mechanisms at varying vehicle speeds. The performance of all channel assignment mechanisms degrades as the vehicle speed increases. Fig. 9(a) plots the packet delivery ratio against vehicle speed. As the speed increases, the network topology of the VANET changes rapidly, which can cause the destination node to leave the signal coverage of the source node or move behind a building. This tends to cause packet reception failures and a decrease in the packet delivery ratio. Fig. 9(b) plots the one-hop packet delay against vehicle speed. A change in vehicle speed has little effect on how busy the channel is, and the random back-off process during channel access is almost unaffected by speed; therefore, the one-hop packet delay changes little with speed. The WAVE protocol stack is a wireless communication standard designed for the high-speed mobile VANET environment, and its CCHI and SCHI are both 50 ms. A neighboring node moving at 120 km/h covers only about 1.7 m within 50 ms, so the VANET topology hardly changes during the channel coordination and data transmission processes. As a result, the performance of each channel assignment mechanism does not decrease much.

    Figure 9: Simulation results: (a) and (b) compare the performance of different methods in terms of the mean and standard deviation of the packet delivery ratio and the one-hop packet delay at different vehicle speeds. (a) Packet delivery ratio; (b) one-hop packet delay

    To sum up, the figures above confirm that our proposed method achieves higher efficiency. Random does not consider the network state and selects SCHs purely at random; this may leave some SCHs used by very few nodes, wasting wireless communication resources, while crowding many nodes onto another SCH, causing severe data collisions that degrade network performance. Greedy may turn the SCH with the smallest load into the SCH with the largest load while leaving other SCHs underutilized. In contrast, the multi-agent DRL-CDCA relies on dual Q-networks trained on past experience, including the consensus reward, to perform collaborative optimization. As a result, its performance is significantly better than that of the other channel assignment mechanisms.

    7 Conclusion

    In this paper, a dual reinforcement learning (DRL)-based cooperative DCA (DRL-CDCA) mechanism is proposed, which enables nodes to learn the optimal channel selection and back-off adaptation strategies from past experience. The performance of the proposed mechanism is compared with two existing mechanisms under the same simulation scenario. The simulation results show that DRL-CDCA clearly improves the overall performance compared with the two conventional baseline mechanisms.

    Acknowledgement: The authors received no specific help from anyone other than the listed authors for this study.

    Funding Statement: This research was supported in part by Beijing Municipal Natural Science Foundation Nos. L191001 and 4181002, the National Natural Science Foundation of China under Grant Nos. 61672082 and 61822101, and the Newton Advanced Fellowship under Grant No. 62061130221.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
