
    Energy efficient opportunistic routing for wireless multihop networks: a deep reinforcement learning approach*


    JIN Xiaohan, YAN Yan, ZHANG Baoxian

    (Research Center of Ubiquitous Sensor Networks, University of Chinese Academy of Sciences, Beijing 100049, China) (Received 20 April 2020; Revised 12 May 2020)

Abstract Opportunistic routing has been an efficient approach for improving the performance of wireless multihop networks due to its salient ability to take advantage of the broadcast and lossy nature of wireless channels. In this paper, we propose a deep reinforcement learning based energy efficient opportunistic routing algorithm for wireless multihop networks, which enables a learning agent to train and learn an optimized routing policy that reduces the transmission time while balancing the energy consumption to extend the network lifetime in an opportunistic way. Furthermore, the proposed algorithm can significantly alleviate the cold start problem and achieve better initial performance. Simulation results demonstrate that the proposed algorithm yields better performance as compared with existing algorithms.

    Keywords deep reinforcement learning; wireless multihop networks; opportunistic routing

Opportunistic routing (OR)[1-3] has been an efficient routing strategy for improving the performance of wireless multihop networks. It takes advantage of the broadcast nature of the wireless medium and makes routing decisions by choosing the next relay node from a forwarder list in an online manner. Energy consumption is also a crucial issue in the study of wireless multihop networks, as balanced energy consumption leads to a prolonged network lifetime. However, most existing opportunistic routing algorithms in the literature fail to consider the optimal trade-off between routing performance and energy consumption in dynamic wireless environments[4]. It is thus highly desirable to design an efficient opportunistic routing algorithm that can learn an energy efficient routing policy intelligently as needed.

Reinforcement learning (RL)[5] is a classic machine learning method for solving sequential decision problems and has been widely used in image processing, natural language processing, and also in wireless environments for improved network performance. The basic framework of RL is shown in Fig. 1. An agent with decision-making capability constantly interacts with the environment to implement the learning process. However, traditional RL methods are not scalable to many complex real-world applications. This scalability issue can be addressed by combining deep neural networks and RL into deep reinforcement learning (DRL) methods. With the generalization and feature extraction abilities of deep neural networks, DRL has achieved breakthroughs in many fields, e.g., AlphaGo in the field of man-machine rivalry[6], automatic navigation[7], and machine translation[8].

    Fig.1 Reinforcement learning framework

Recently, RL has been a promising technique for improving the performance of wireless networks[9]. However, existing research on routing and on DRL in the field of wireless multihop networks has mostly been conducted independently, which largely limits DRL's potential to improve the performance of wireless networks. In-depth analyses are needed to exploit the potential of DRL in wireless networks. For this purpose, in this paper, we propose an opportunistic deep Q-network (ODQN) routing algorithm for wireless multihop networks, which is designed to optimize the routing and energy consumption performance intelligently through the joint design and optimization of opportunistic routing and DRL technology. The major work and contributions of this paper are as follows:

    1) We formulate the opportunistic routing problem in wireless multihop networks as a Markov decision process and define the corresponding state space, action space, and reward function.

2) We propose a new energy efficient opportunistic routing algorithm using DRL to solve the packet routing problem in wireless multihop networks, which can balance routing performance and energy consumption effectively. The proposed algorithm can effectively alleviate the cold start problem of traditional DQN based algorithms and improve performance at the early stage of learning.

3) Extensive simulations are conducted, and the results show that the proposed algorithm demonstrates appreciable gains as compared with existing work.

    1 Related work

Traditional routing protocols developed for wireless multihop networks can be divided into the following four types: 1) routing-table-based proactive protocols like DSDV[10] and OLSR[11], 2) on-demand protocols like AODV[12] and DSR[13], 3) hybrid routing protocols like ZRP[14], and 4) opportunistic routing protocols like ExOR[1], MORE[2], and GeRaF[3].

Recently, some RL based routing algorithms have been proposed. Q-routing, proposed by Boyan and Littman[15], uses the Q-learning[16] algorithm to achieve adaptive routing. The simulation results indicated that Q-routing is effective under time-varying network load levels, traffic patterns, and topologies. However, Q-routing, as well as its variants, e.g., predictive Q-routing[17] and dual Q-routing[18], has obvious disadvantages in wireless networks: they need to maintain a large Q-table and are not scalable to large state spaces. In Ref. [19], a reinforcement learning based opportunistic routing (RLOR) algorithm was proposed for providing live video streaming services over multihop wireless networks, which can effectively achieve a proper tradeoff between transmission reliability and latency. By designing a distributed energy-efficient RL based routing algorithm for wireless mesh IoT networks, Ref. [20] achieves considerable improvements in power efficiency, failure rate, and spectrum use efficiency.

Compared with the above traditional "shallow" RL based algorithms, DRL based algorithms, which take advantage of deep neural networks[21] to improve the performance of RL[22], have been investigated in the area of wireless networks. The authors of Ref. [23] presented a distributed routing approach based on multi-agent RL which can simultaneously optimize the travel time of routed entities and the energy consumption. Valadarsky et al.[24] applied DRL to network routing and showed that learning from historical routing schemes, with consideration of the demand matrix and link utilization, provides an efficient approach to smartly choose the optimal routing structure for future data forwarding. A DRL method proposed by Stampa et al.[25] optimizes the routing performance in SDN with the aim of reducing transmission delay, and the experimental results show a significant improvement in transmission delay over various traffic intensities. Although existing work employs DRL to facilitate network routing in various environments, none of it considers both routing performance and network lifetime simultaneously under a lossy wireless network environment.

2 Network model and problem formulation

    In this section, we formulate the energy efficient wireless opportunistic routing problem as a Markov decision process (MDP).

A. System model

We model a wireless multihop network with N nodes as a graph G = (V, E), where each vertex v ∈ V represents a node and each edge e ∈ E represents a link between a pair of nodes. Each node can serve as a source node or a destination node. When a packet p sent from a source node s ∈ V towards its destination d ∈ V arrives at a node j, it is processed as follows:

· If j = d, then the packet has arrived at its destination and this packet's processing terminates successfully.

· Otherwise, the packet is forwarded to an intermediate next hop chosen according to the learnt routing policy, which will be described in section 3.

When a node has packets to send or packets waiting in its queue, it is in the working state, and each transmission consumes a certain amount of energy. Otherwise, the node consumes no energy. When the residual energy of a node falls below a given threshold, the node is treated as inactive and unreachable.
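For concreteness, the following is a minimal Python sketch of this per-hop handling and the inactivity rule. The Node class, the next_hop_of callback, and the threshold value are illustrative assumptions, not taken from the paper.

--------------------
from dataclasses import dataclass

ENERGY_THRESHOLD = 0.05  # assumed cutoff in joules; the paper only says "a given threshold"

@dataclass
class Node:
    node_id: int
    energy: float
    active: bool = True

def handle_packet(node: Node, dst: int, next_hop_of, tx_cost: float):
    """Per-hop handling described above: deliver, or forward and charge the sender's energy."""
    if node.node_id == dst:
        return "delivered"               # packet has reached its destination
    nxt = next_hop_of(node, dst)         # next hop chosen by the learnt routing policy
    node.energy -= tx_cost               # each transmission consumes a certain energy
    if node.energy < ENERGY_THRESHOLD:   # below the threshold the node becomes
        node.active = False              # inactive and unreachable
    return nxt
--------------------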

B. Problem formulation

We model the energy efficient opportunistic routing problem in wireless multihop networks under a centralized RL formulation, where a central agent is assumed to be responsible for learning the routing policy and for choosing the action of each node according to the global state. We formulate the optimization of energy efficient opportunistic routing as an MDP defined by a tuple (S, A, P, R), where S is a finite set of states, A is a finite set of actions, P is the transition probability from state s_t to state s_{t+1} after the agent takes action a_t at time slot t, and R is the immediate reward obtained after action a_t is performed, indicating how good the current action is in the immediate sense.

Next, to formulate the above problem as an MDP, we give the details of the related state space, action space, and reward function as follows.

    1) State space

    2) Action space

    3) Reward

R: The reward r_t(s_t, a_t) at time slot t is the immediate reward that the agent obtains when performing the joint action a_t, with the state transiting from s_t to s_{t+1}. It is defined as a function of the average packet transmission time and the residual energy as follows

    (1)

    (2)
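The bodies of Eqs. (1) and (2) did not survive extraction in this copy. As a purely illustrative assumption consistent with the weighted trade-off described above and with the weights w_1 and w_2 referenced in section 4 (not the authors' exact expressions), the reward could take a weighted-sum form such as

r_t(s_t, a_t) = −w_1 T̄_t + w_2 Ē_t,  with w_1 + w_2 = 1,

where T̄_t denotes the (normalized) average packet transmission time and Ē_t the normalized residual energy of the involved nodes at time slot t.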

    4) Problem definition

Having defined the state space, the action space, and the reward, we can now formally formulate the energy efficient opportunistic routing problem as an MDP. The objective of the DRL algorithm is to find the deterministic optimal policy π* that maximizes the total accumulated reward r_t(s_t, a_t). Thus, based on the formulations above, we define the problem of DRL based energy efficient opportunistic routing as the maximization of the cumulative future reward:

G_t = r_t + γ r_{t+1} + γ² r_{t+2} + ⋯.

    (3)
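As a quick illustration of Eq. (3) (the numbers are arbitrary): with γ = 0.9 and rewards r_t = 1.0, r_{t+1} = 0.5, r_{t+2} = 2.0 over a three-step episode, G_t = 1.0 + 0.9×0.5 + 0.81×2.0 = 3.07.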

Our objective is to learn an optimal policy π* that routes each packet correctly while maximizing the network lifetime, a problem which has been proved to be NP-hard[29]. This makes it necessary to explore alternative solutions that aim to maximize the performance. In this paper, we explore the potential of DRL to find the optimal energy efficient opportunistic routing policy. In the next section, we propose the ODQN-Routing algorithm for solving this problem.

    3 ODQN-Routing

    In this section, we propose the ODQN-Routing algorithm, which can adapt to the dynamic changes of wireless medium and network topology to optimize the network routing and energy use performance by adjusting the routing policy intelligently.

    A. Algorithm overview

    Our proposed ODQN-Routing is based on the deep Q-network (DQN)[30], which transforms the Q-table update process into a function fitting problem in high-dimensional state-action space.

We assume a centralized ODQN agent that is responsible for interacting with the environment and learning the policy. Moreover, we assume that there exists a separate control channel that can be used to collect and distribute control information directly between the centralized agent and each node in the network, an assumption made in many existing DRL algorithms in this area. The framework of the ODQN agent is shown in Fig. 2. The agent observes the network information to obtain the current network state s_t and selects a joint action a_t for all nodes according to the learnt policy with the ε-greedy principle, which randomly selects a neighbor node as the next hop with probability ε and selects the neighbor with the largest Q value with probability 1−ε. The agent then receives the reward r_t(s_t, a_t) and enters the next state s_{t+1}. The experience tuple (s_t, a_t, r_t, s_{t+1}) is stored in the replay buffer. In the training phase, a small batch of state transition samples is randomly drawn from the replay buffer to train the ODQN agent.
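As an illustration of the interaction loop just described, the following Python sketch shows a fixed-capacity replay buffer and per-node ε-greedy next-hop selection; the class and function names are ours, and the joint action is simply the collection of such per-node choices. This is a sketch of the general mechanism, not the authors' implementation.

--------------------
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (s_t, a_t, r_t, s_{t+1}) experience tuples."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)   # oldest tuples are overwritten when full
    def add(self, s, a, r, s_next):
        self.buffer.append((s, a, r, s_next))
    def sample(self, n):
        return random.sample(list(self.buffer), n)

def epsilon_greedy_next_hop(q_values, neighbors, epsilon):
    """Random neighbor with probability epsilon, else the neighbor with the largest Q value."""
    if random.random() < epsilon:
        return random.choice(neighbors)
    return max(neighbors, key=lambda nb: q_values[nb])
--------------------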

B. ODQN agent training

The neural network architecture of our ODQN-Routing agent is shown in Fig. 3; it has one input layer, two fully connected hidden layers, and one output layer. Let θ denote the training neural network's parameters and θ⁻ denote the target neural network's parameters. The ODQN agent tries to minimize the loss function defined as the difference between the target value and the estimated value:

L_DQN(θ) = E[(y − Q(s, a; θ))²].

    (4)

where y is the target value denoted as

    (5)
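The body of Eq. (5) is missing from this copy. Assuming the standard DQN target, which the surrounding text appears to describe, it would read

y = r_t + γ max_{a'} Q(s_{t+1}, a'; θ⁻),

i.e., the observed reward plus the discounted maximum target-network Q value of the next state.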

The gradient descent method is used to update the parameters with learning rate α.

    (6)

    (7)

The parameters of the training neural network are copied to the target neural network every C steps, thereby updating the target neural network's parameters.
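The bodies of Eqs. (6) and (7) are likewise missing; the update they describe is the usual gradient step on L_DQN with learning rate α followed by the periodic target-network copy. Below is a minimal TensorFlow 2 sketch of one such update, assuming standard DQN training (function and variable names are ours, and the Q network is assumed to have a fixed output dimension).

--------------------
import tensorflow as tf

def dqn_train_step(q_net, target_net, optimizer, batch, gamma):
    """One gradient step on L_DQN (Eq. (4)) for a sampled mini-batch."""
    states, actions, rewards, next_states = batch
    # Target value: reward plus discounted maximum target-network Q value of the next state.
    y = rewards + gamma * tf.reduce_max(target_net(next_states), axis=1)
    with tf.GradientTape() as tape:
        q_all = q_net(states)
        q_sa = tf.reduce_sum(q_all * tf.one_hot(actions, q_all.shape[1]), axis=1)
        loss = tf.reduce_mean(tf.square(y - q_sa))   # L_DQN(θ) = E[(y − Q(s,a;θ))²]
    grads = tape.gradient(loss, q_net.trainable_variables)
    optimizer.apply_gradients(zip(grads, q_net.trainable_variables))
    return loss

def sync_target(q_net, target_net):
    """Copy θ to θ⁻; called every C steps."""
    target_net.set_weights(q_net.get_weights())
--------------------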

    Fig.2 ODQN agent framework

    Fig.3 Neural network structure in ODQN-Routing

C. Cold start alleviation

A cold start problem[31] usually arises when DRL based algorithms are used in real-world applications, as DRL requires a huge amount of data before reaching acceptable performance. Performance during the early stage of learning can therefore be extremely poor due to the trial-and-error nature of RL.

Therefore, to alleviate the cold start problem, we adopt the pre-training mechanism of Ref. [31] in our proposed ODQN-Routing algorithm to speed up the agent's learning at the beginning. We use demonstration data as the starting policy to pre-train the agent so that it performs well at the start of learning, and then let the agent interact with the environment to continue improving the routing performance using its own collected data. In this paper, the demonstration data are collected by the AODV protocol to pre-train the agent and accelerate the learning process.

D. Proposed ODQN-Routing algorithm

    The detailed ODQN-Routing algorithm is given in Algorithm 1, which can be divided into two phases: pre-training and policy learning.

Initialization: In line 1, we collect the demonstration data of AODV and store them in the demonstration data buffer demo, which is not modified during execution. We initialize the replay buffer replay with a given capacity, the demonstration ratio η, and the number of pre-training steps k. We initialize the training neural network with random weights θ and the target neural network with weights θ⁻ = θ.

Pre-training phase: From line 2 to line 8, before starting interaction with the environment, we perform k pre-training steps by sampling mini-batch tuples from the demonstration data buffer demo. We train the agent for k steps with a loss function L_pre defined as

L_pre = L_DQN + γ_1 L_s + γ_2 L_2.

    (8)

where γ_1 and γ_2 are parameters that regulate the weights of L_s and L_2. L_pre is defined as the combination of the DQN loss L_DQN in Eq. (4), the supervised large margin classification loss L_s, and the L_2 regularization loss on the neural network's weights and biases, which prevents overfitting on the demonstration data. The supervised loss L_s is defined as

    (9)

    (10)

where a_e is the action from the demonstration data.
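The bodies of Eqs. (9) and (10) are missing here. In the DQfD-style pre-training of Ref. [31], the supervised large margin classification loss typically takes the form L_s = max_a [Q(s, a) + l(a_e, a)] − Q(s, a_e), where l(a_e, a) is 0 when a = a_e and a positive margin otherwise. Assuming that form, a TensorFlow sketch follows (the margin value is an illustrative assumption).

--------------------
import tensorflow as tf

def large_margin_loss(q_values, demo_actions, margin=0.8):
    """L_s = max_a [Q(s,a) + l(a_e,a)] - Q(s,a_e), averaged over the mini-batch.

    q_values: [batch, num_actions] Q estimates; demo_actions: [batch] demonstrator actions a_e.
    The margin 0.8 is an assumed value, not taken from the paper.
    """
    num_actions = q_values.shape[1]
    one_hot_ae = tf.one_hot(demo_actions, num_actions)
    l = margin * (1.0 - one_hot_ae)                  # zero margin for the demonstrated action
    augmented_max = tf.reduce_max(q_values + l, axis=1)
    q_ae = tf.reduce_sum(q_values * one_hot_ae, axis=1)
    return tf.reduce_mean(augmented_max - q_ae)
--------------------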

Then the loss function L_pre is minimized using the gradient descent optimizer. The target neural network's parameters θ⁻ are updated from θ every C steps. The purpose of the pre-training phase is to let the agent learn a starting policy from the demonstration data so as to gain better initial performance.

Policy learning phase: Once the k pre-training steps are completed, the agent enters the policy learning phase and interacts with the environment to learn its own policy, as in lines 9-23. From line 11 to line 14, the agent collects the information and gets the state s_t, chooses action a_t, receives reward r_t, and enters the next state s_{t+1}. The self-generated data tuple (s_t, a_t, r_t, s_{t+1}) is added to the replay buffer replay, where the oldest experience is overwritten when the buffer is full. In line 15, we use both demonstration data and self-generated data as samples to train the neural networks and use η to control the proportion of demonstration data, which decreases with training time as indicated in lines 19-22, where τ is a small step value used to fine-tune η, and d_demo and d_replay are the tuples sampled from the demonstration data buffer demo and the replay buffer replay, respectively. We calculate the loss function using the target neural network in line 16, applying L_pre to demonstration data and L_DQN to self-generated data. Then, in lines 17-18, we perform a gradient descent step to update θ and θ⁻.
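A small sketch of the mixed sampling of line 15 and the annealing of η in lines 19-22 is given below; buffer objects are plain Python lists here and the function names are ours.

--------------------
import random

def sample_mixed_batch(demo, replay, n, eta):
    """Draw n tuples: a fraction eta from the demonstration buffer, the rest from the replay buffer."""
    n_demo = min(int(round(eta * n)), len(demo))
    d_demo = random.sample(demo, n_demo)                             # trained with L_pre
    d_replay = random.sample(replay, min(n - n_demo, len(replay)))   # trained with L_DQN
    return d_demo, d_replay

def anneal_eta(eta, tau):
    """Decrease the demonstration ratio by the small step tau, never below zero."""
    return max(0.0, eta - tau)
--------------------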

Algorithm 1. ODQN-Routing algorithm

    --------------------

1: Initialization:

Initialize the training neural network's parameters θ and the target neural network's parameters θ⁻, the replay buffer replay and the demonstration data buffer demo, the demonstration ratio η, and the number of pre-training steps k.

2: Pre-training:

3: for t = 1, 2, …, k do

4:   Sample n tuples from the demonstration data buffer demo to pre-train the agent.

5:   Calculate the loss function L_pre using the target neural network.

6:   Use gradient descent to update the parameters of the training neural network.

7:   Update the target neural network's parameters θ⁻ every C steps: θ⁻ ← θ.

8: end for

9: Policy learning:

10: for t = 1, 2, … do

11:   The agent gets the state s_t.

12:   Select the ε-greedy action a_t.

13:   Get the reward r_t and enter the next state s_{t+1}.

14:   Store (s_t, a_t, r_t, s_{t+1}) into the replay buffer replay.

15:   Sample n tuples from demo and replay, with a fraction η of the samples from demo, to train the agent.

16:   Calculate the loss function using the target neural network: demonstration data use L_pre, self-generated data use L_DQN.

17:   Use gradient descent to update the parameters of the training neural network.

18:   Update the target neural network's parameters every C steps: θ⁻ ← θ.

19:   if η > 0 then

20:     η ← η − τ

21:   else

22:     η ← 0

23: end for

    --------------------

    4 Simulation results

    In this section, we conduct extensive simulations to evaluate the performance of ODQN-Routing algorithm by comparing it with existing work.

A. Simulation environment

We used Python 3.6 to build an event-driven simulator and used TensorFlow as the deep learning framework to implement the DRL based routing. We built the wireless multihop network environment on the basis of NetworkX[32]. In the simulations, we implemented the following algorithms for comparison purposes: (a) the proposed ODQN-Routing algorithm, (b) the AODV algorithm, (c) the Q-Routing algorithm, and (d) the ODQN-Routing algorithm without pre-training, referred to as DQN-Routing.

In the simulations, we ran experiments with 10, 20, 30, 40, and 50 nodes in the network, respectively, and each node has a random number of neighbor nodes. Each simulation lasts for 2 500 s. Each reported result was averaged over 50 runs, each with a different random network topology. The bandwidth of each link in the simulated wireless network was randomly set among 1, 2, 5.5, and 11 Mbit/s. The packet generation rate follows a Poisson distribution with a mean of 10 packets per second, and the packet size is set to 1 000 bytes with randomly generated source and destination nodes. The queue length of each node is set to 500 packets. We used 1 000 packets generated based on AODV to pre-train the neural networks for 500 steps. The parameter settings in our simulations are shown in Table 1, where the settings related to deep Q-learning follow Refs. [31,33].

In the simulations, the initial energy of each node was set to 1 J. The energy spent on packet transmission is 50 nJ/bit and that spent on processing queued packets is 40 nJ/bit. Moreover, when the residual energy of a node falls below half of the mean residual energy of all nodes in the network, the node is treated as inactive and is temporarily removed from the network topology for any routing operations.
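For reference, here is a small Python sketch of this energy accounting and the inactivity rule; the constants are taken from the values above, while the function names are ours.

--------------------
TX_ENERGY_PER_BIT = 50e-9      # J/bit spent on packet transmission
PROC_ENERGY_PER_BIT = 40e-9    # J/bit spent on processing queued packets
PACKET_BITS = 1000 * 8         # 1 000-byte packets
INITIAL_ENERGY = 1.0           # initial energy per node, in joules

def residual_after(energy, tx_packets, queued_packets):
    """Residual energy after transmitting and processing the given packet counts."""
    spent = (tx_packets * TX_ENERGY_PER_BIT + queued_packets * PROC_ENERGY_PER_BIT) * PACKET_BITS
    return energy - spent

def inactive_nodes(residual):
    """Nodes whose residual energy is below half of the current network-wide mean."""
    mean = sum(residual.values()) / len(residual)
    return [node for node, e in residual.items() if e < 0.5 * mean]
--------------------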

    Table 1 Parameter settings

In the simulations, we compared the performance of the different algorithms in terms of the following measures:

    Average packet transmission time

    (11)

We define t_p as the total transmission time from the source node to the destination node of packet p ∈ P, where P is the set of packets in the network and |P| is the total number of packets.

    Network lifetime

    (12)

We use the proportion of active nodes remaining in the network to characterize the network lifetime, where N_active is the number of active nodes in the network when the simulation terminates and |V| represents the network size.

    Average path length

    (13)

where L_p is the path length of packet p ∈ P.

    Packet loss rate

    (14)

where D_n is the number of packets that fail to reach their destinations.
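The bodies of Eqs. (11)-(14) are missing from this copy; given the definitions above, they reduce to simple ratios, sketched here in Python under that assumption.

--------------------
def average_transmission_time(t_p):        # Eq. (11): mean of the per-packet times t_p over P
    return sum(t_p) / len(t_p)

def network_lifetime(n_active, n_nodes):   # Eq. (12): fraction of nodes still active, N_active / |V|
    return n_active / n_nodes

def average_path_length(l_p):              # Eq. (13): mean of the per-packet path lengths L_p
    return sum(l_p) / len(l_p)

def packet_loss_rate(d_n, n_packets):      # Eq. (14): undelivered packets D_n over all packets |P|
    return d_n / n_packets
--------------------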

B. Simulation results

Average transmission time: Figure 4 shows the variation of the average packet transmission time of the different algorithms. It can be seen that at the beginning of the simulation, the average packet transmission times of Q-Routing and DQN-Routing are much larger than those of the other two algorithms. This is because the agent is still learning its policy and explores routing paths through continuous trial and error, so the reward obtained is unstable. In contrast, since ODQN-Routing pre-trains the agent using demonstration data from AODV, the routing policy is optimized much more quickly. It can also be seen that during the training process, the transmission times of ODQN-Routing and DQN-Routing both converge to a lower level than that of the traditional reinforcement learning routing algorithm Q-Routing. Overall, the transmission time of ODQN-Routing is 50.1% shorter than that of Q-Routing and 30.2% shorter than that of DQN-Routing. These results verify that ODQN-Routing can significantly improve the average transmission time performance.

    Fig.4 Comparison of average transmission time

    Fig.5 Comparison of average network lifetime

Network lifetime: Figure 5 compares the network lifetime performance of the different algorithms. In Fig. 5, at the beginning of the simulations, the percentages of active nodes of ODQN-Routing, Q-Routing, and DQN-Routing are quite similar, as all nodes still have high residual energy at that moment. As the simulations evolve, ODQN-Routing and DQN-Routing allow the ODQN agent to fully train and learn an optimized routing policy through DRL to adaptively select next hops and avoid rapid energy depletion of nodes in the network. Specifically, ODQN-Routing prolongs the network lifetime by 57.1% compared with Q-Routing and by 10.7% compared with DQN-Routing. In the simulations, the value of w_2 was set to 0.93 to effectively balance the energy consumption while reducing the packet transmission time.

Average path length: Figure 6(a) compares the average path length performance of the different algorithms. The results show that the average path length of ODQN-Routing is shorter than those of Q-Routing and DQN-Routing. Compared with Q-Routing, ODQN-Routing reduces the average path length by up to 40.1%, indicating that ODQN-Routing learns a more effective routing policy by considering the ETX values to destinations when making packet forwarding decisions, which effectively avoids detouring data packets onto longer paths. The average path length of ODQN-Routing is 13.8% shorter than that of DQN-Routing, indicating that using demonstration data to pre-train the agent significantly improves the learning efficiency and enables the agent to learn an optimized routing policy faster.

    Fig.6 Comparison of average path length and average packet loss rate

Packet loss rate: Figure 6(b) compares the packet loss rate performance of the different algorithms. It can be seen that ODQN-Routing outperforms the other algorithms in terms of packet loss rate. Moreover, ODQN-Routing reduces the packet loss rate by 30.1% and 24.8% compared with AODV and Q-Routing, respectively.

    5 Conclusion

In this paper, we investigated how to improve energy efficiency by applying DRL based routing in wireless multihop networks. To this end, the energy efficient opportunistic routing problem was formulated as an MDP. To address this problem, we first defined the corresponding state space, action space, and reward function, and then proposed the ODQN-Routing algorithm. Extensive simulations were conducted, and the results demonstrate that the proposed algorithm yields better performance than existing work. Our work demonstrates that DRL based routing has significant advantages in improving the routing performance of wireless networks. In the future, we will consider how to combine inverse reinforcement learning with wireless routing to strengthen routing intelligence by learning more intrinsic reward functions. We also plan to design a fully decentralized multi-agent reinforcement learning based algorithm to enable efficient distributed routing in wireless multihop networks.
