
    Deep Q-Learning Based Computation Offloading Strategy for Mobile Edge Computing

Computers, Materials & Continua, 2019, Issue 4 (published 2019-04-29)

Yifei Wei, Zhaoying Wang, Da Guo and F. Richard Yu

Abstract: To reduce transmission latency and mitigate the backhaul burden of centralized cloud-based network services, mobile edge computing (MEC) has recently drawn increasing attention from both industry and academia. This paper focuses on the computation offloading problem of mobile users in wireless cellular networks with mobile edge computing, with the purpose of optimizing the computation offloading decision-making policy. Since wireless network states and computing requests have stochastic properties and the dynamics of the environment are unknown, we use the model-free reinforcement learning (RL) framework to formulate and tackle the computation offloading problem. Each mobile user learns through interactions with the environment, estimates its performance in the form of a value function, and then chooses the overhead-aware optimal computation offloading action (local computing or edge computing) based on its state. The state space in our work is high-dimensional, so the value function is unrealistic to estimate directly. Consequently, we use a deep reinforcement learning algorithm, which combines the RL method Q-learning with a deep neural network (DNN) to approximate the value functions for complicated control applications; the optimal policy is obtained once the value function converges. Simulation results show the effectiveness of the proposed method in comparison with baseline methods in terms of the total overheads of all mobile users.

    Keywords: Mobile edge computing, computation offloading, resource allocation, deep reinforcement learning.

    1 Introduction

As smartphones become more and more popular, a variety of new mobile applications such as face recognition, natural language processing and augmented reality are becoming an increasing part of daily life, and thus people need high-rate computation and a large amount of computational resources [Wang, Liang, Yu et al. (2017)]. As is well known, cloud computing relies on its powerful centralized computing capability to meet the demands of resource-limited end users for effective computation. However, moving all the distributed data and computation-intensive applications to the cloud server results in a heavy burden on network performance and long latency for resource transmission between users and cloud computing devices, which degrades the quality of service [Shi, Cao, Zhang et al. (2016); Bao and Ding (2016)].

In order to further reduce the latency and enhance the network performance while providing powerful computational capability for end users, mobile edge computing (MEC) has been proposed to deploy computing resources closer to the end users. As a remedy to the problems of cloud computing, mobile edge computing brings the functions of the cloud to the edge of the network, which achieves a tradeoff between computation-intensive and latency-critical requirements for mobile users [Mao, You, Zhang et al. (2017)]. Mobile edge computing enables mobile user equipment (UE) to perform computation offloading by sending computation tasks to the MEC server through wireless cellular networks [Wang, Liang, Yu et al. (2017)], which means the MEC server executes the computational task on behalf of the UE. In mobile edge computing, network edge devices such as base stations, access points and routers are empowered with computing and storage capabilities to serve users' requests as a substitute for clouds [Patel, Naughton, Chan et al. (2014)]. In this paper, we consider an edge system as the combination of an edge device (macro-cell) and the associated edge servers, which provides IT services, environments and cloud computing capabilities to meet mobile users' low-latency and high-bandwidth service requirements.

The survey of computation offloading in Kumar et al. [Kumar, Liu, Lu et al. (2013)] identifies two objectives for computation offloading: reducing the execution time and mitigating the energy consumption. Computation offloading decisions are classified into two parts: what computation to offload, and where to offload computation. The decision regarding what computation to offload is generally treated as a partitioning problem, in which the computation task is partitioned into different components and a decision is made on whether or not to offload each component [Huang, Wang and Niyato (2012); Alsheikh, Hoang, Niyato et al. (2015)]. The decision of where to offload computation tasks focuses on the binary choice between local computation and offloading the computation task to computing devices, which is similar to the decision of what to offload.

A number of previous works have discussed the computation offloading and resource allocation problem in mobile edge computing scenarios [Yu, Zhang and Letaief (2016); Mao, Zhang, Song et al. (2017); Mao, Zhang and Letaief (2016)]. Wang et al. [Wang, Liang, Yu et al. (2017)] treated the computation offloading decision, physical resource block (PRB) allocation, and MEC computation resource allocation as optimization problems in wireless cellular networks and solved them with a graph coloring method. Xu et al. [Xu and Ren (2017)] presented the optimal policy of dynamic workload offloading and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost) by using online learning algorithms, including value iteration and reinforcement learning (RL). Liu et al. [Liu, Mao, Zhang et al. (2016)] formulated a two-timescale stochastic optimization problem as a Markov decision process in the MEC scenario and solved it as a linear programming problem.

In this paper, we focus on the computation offloading decision-making problem of whether to compute on the local equipment or to offload the task to the MEC server, and propose an efficient deep reinforcement learning (DRL) scheme. By making the right computation offloading decision, a mobile user can enhance the computation efficiency and decrease the energy consumption. Each agent learns through interactions with the environment and evaluates its performance in the form of a value function. Since wireless network states and computing requests have stochastic properties, the value function is intractable to evaluate with traditional RL algorithms, so we apply a deep neural network (DNN) to approximate the action-value function using the reinforcement learning method deep Q-learning. Each agent chooses an action in its state and receives an immediate reward, then uses the DNN to approximate the value functions. After the value functions converge, the user can select the overhead-aware optimal computation offloading strategy based on its state and the learning results. We aim to minimize the total overheads, in terms of computational time and energy consumption, of all users. Simulation results show that the proposed deep reinforcement learning based computation offloading policy performs effectively compared with baseline methods.

    2 System models and problem formulation

In this section we introduce the system models adopted in this work, including the network model, communication model and computation model.

    2.1 Network model

An environment of one macrocell and N small cells, in the terminology of the LTE standards, is considered here. An MEC server is placed in the macro eNodeB (MeNB), and all the small cell eNodeBs (SeNBs) are connected to the MeNB as well as to the MEC server. In this paper, it is assumed that the SeNBs are connected to the MeNB in a wired manner [Jafari, López-Pérez, Song et al. (2015)]. The set of small cells is denoted as N = {1, 2, …, N}, and we let M = {1, 2, …, M} denote the set of mobile user equipment, where each single-antenna UE is associated with one SeNB. We assume that each UE has a computation-intensive and latency-sensitive task to be completed at each time slot t. Each UE can execute the computation task locally, or offload the computation task to the MEC server via the SeNB with which it is connected. The MEC server can handle all computing tasks because of its multi-tasking capability. Similar to many previous works in mobile cloud computing [Barbera, Kosta, Mei et al. (2013)] and mobile networking [Iosifidis, Gao, Huang et al. (2013)], to obtain a tractable analysis we consider a quasi-static scenario where the set of mobile users M remains unchanged during a computation offloading period (e.g., within several seconds), whereas it may change across different periods. The network model is shown in Fig. 1.

    Figure 1: Network model

    2.2 Communication model

Because every SeNB is connected to the MEC server, a UE can offload computation tasks to the MEC server through the SeNB with which it is connected. The computation offloading decision of UE m is denoted as am ∈ {0,1}, ∀m. Specifically, we set am = 0 when UE m decides to compute its task on its local equipment, and am = 1 when UE m decides to offload its computation task to the MEC server wirelessly. There are K orthogonal FDM (Frequency Division Multiplexing) sub-channels, without interference to each other, between the UEs and SeNBs, and each sub-channel bandwidth is assumed to be w. Given the computation offloading decision profile of all the UEs, a = {a1, a2, …, am}, we describe the Signal to Interference plus Noise Ratio (SINR) γm(t) and uplink data rate rm(t) of UE m at time slot t as

where km(t) ∈ {0, 1, ..., K} denotes the number of sub-channels allocated by the SeNB to UE m, and pm(t) is the transmission power of UE m. Gm,n(t) and Gi,n(t) denote the channel gains between UE m and SeNB n and between UE i and SeNB n, respectively, and σ(t) is the additive white Gaussian noise. For the sake of simplicity, we omit (t) in the following expressions, e.g., rm stands for rm(t), unless time slot t is emphasized.
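As a minimal sketch of how the uplink rate can be evaluated, assuming the standard Shannon-capacity form over the allocated sub-channels (the function name and parameter names below are illustrative and not taken from the paper):

```python
import math

def uplink_rate(k_m, w, p_m, g_mn, sigma, interference=0.0):
    """Uplink data rate of UE m (bits/s), assuming a Shannon-capacity form.

    k_m          -- number of sub-channels allocated to UE m
    w            -- bandwidth of one sub-channel (Hz)
    p_m          -- transmission power of UE m (Watts)
    g_mn         -- channel gain between UE m and its SeNB n
    sigma        -- additive white Gaussian noise power (Watts)
    interference -- aggregate received power from other offloading UEs (Watts)
    """
    sinr = p_m * g_mn / (sigma + interference)   # SINR gamma_m of UE m
    return k_m * w * math.log2(1.0 + sinr)       # rate over the k_m sub-channels
```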

    2.3 Computation model

We consider that each mobile user m has a computational task Jm at each time slot t, which can be computed either locally on the mobile user's equipment or remotely on the MEC server through computation offloading, as in Chen [Chen (2014)]. Bm (in KB) denotes the computation input data, which includes program codes and input parameters, and Dm (in Megacycles) stands for the total number of CPU cycles required to complete the computational task Jm; a maximum tolerable delay for executing the computation task Jm is also specified. A user can apply the methods in Yang et al. [Yang, Cao, Tang et al. (2012)] to obtain the information of Bm, Dm and the maximum tolerable delay. Next we discuss the overhead in terms of computation time and energy consumption for both the local computation and MEC computation offloading cases.

    2.3.1 Local computing

In this case, the computational task Jm is executed on the local mobile equipment. The local computational capacity (e.g., CPU cycles per second) of UE m may differ from that of other UEs, i.e., different UEs may have different computational capabilities. The computational time for local execution by UE m is expressed as

    and the energy consumption for computation is given as

where ρm is the coefficient representing the energy consumed by each CPU cycle.

According to the realistic measurements in Wen et al. [Wen, Zhang, Luo et al. (2012)], we set

According to the computational time in Eq. (3) and the energy consumption in Eq. (4), we can describe the total overhead of local computation by UE m as
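As a hedged illustration of this overhead model, the sketch below computes the local execution time, the energy consumption and their weighted sum; the symbol f_m (local CPU frequency) and the weighting factors lambda_t and lambda_e are assumptions introduced here, corresponding to the weighting factors of time and energy discussed later in the paper:

```python
def local_overhead(D_m, f_m, rho_m, lambda_t, lambda_e):
    """Weighted overhead of executing task J_m locally on UE m (a sketch).

    D_m      -- total CPU cycles required by task J_m
    f_m      -- local computational capacity of UE m (CPU cycles per second)
    rho_m    -- energy consumed per CPU cycle
    lambda_t -- weighting factor of computation time
    lambda_e -- weighting factor of energy consumption
    """
    t_local = D_m / f_m                              # local execution time
    e_local = rho_m * D_m                            # local energy consumption
    return lambda_t * t_local + lambda_e * e_local   # weighted total overhead
```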

    2.3.2 MEC server computing

This section considers the case where the computational task Jm is offloaded to the MEC server. UE m incurs the extra overhead of transmission time and energy consumption for offloading the computation input data to the MEC server and downloading the computation outcome data back to the local equipment. The transmission time and energy consumption of UE m are computed respectively as

When the computation input data have been uploaded to the MEC server, the MEC server executes the computation task on behalf of the UE. A portion of the computational capability (i.e., CPU cycles per second) of the MEC server is assigned to UE m; the computing resources allocated to all users cannot exceed the total computational capability of the MEC server fc. The computation time and corresponding energy consumption of the MEC server on task Jm are then given as

After the MEC server completes the computing task, the computation outcome data need to be transmitted back to the mobile user. Therefore, the downlink transmission time and energy consumption are given as

We can then compute the total overhead of offloading the computational task of UE m to the MEC server as
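A corresponding sketch for the offloading case is given below. The parameter names (B_out for the outcome data size, p_idle for the power drawn while receiving, f_mec and rho_mec for the per-user MEC capability and per-cycle energy) are illustrative assumptions, since the exact expressions appear as numbered equations in the original paper:

```python
def offload_overhead(B_m, B_out, D_m, r_up, r_down, p_m, p_idle,
                     f_mec, rho_mec, lambda_t, lambda_e):
    """Weighted overhead of offloading task J_m to the MEC server (a sketch)."""
    t_up = B_m / r_up                 # time to upload the computation input data
    e_up = p_m * t_up                 # energy for the uplink transmission
    t_exec = D_m / f_mec              # execution time on the MEC server
    e_exec = rho_mec * D_m            # energy of the MEC server for the task
    t_down = B_out / r_down           # time to download the outcome data
    e_down = p_idle * t_down          # energy of the UE while receiving
    total_time = t_up + t_exec + t_down
    total_energy = e_up + e_exec + e_down
    return lambda_t * total_time + lambda_e * total_energy
```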

    2.4 Problem formulation

Our goal is to optimize the expected long-term utility performance of all users; each time slot t is a decision step, and at each step every user has only one task to perform. Specifically, we aim at minimizing the total overheads of all the users, who can execute tasks on their local equipment or perform computation offloading with mobile edge computing. By minimizing the total overheads, users can make the overhead-aware optimal computation offloading decision, which is of great importance for improving computational efficiency and reducing latency. We can model the optimization formulation of the problem as follows:

    subject to

The first term of Eq. (15) is the overhead generated by local computing and the second term is the overhead due to computation offloading. The first constraint ensures that an overhead-aware solution is obtained by finding the optimal values of the offloading decision profile a. The second constraint means that the delay for performing each computation task cannot exceed the maximum tolerable delay. The third constraint states that the computing resources allocated to all users for offloading computation tasks cannot exceed the total computing resources of the MEC server. The last constraint specifies that the bandwidth allocated to all users cannot exceed the total spectrum bandwidth W. However, the objective is difficult and impractical to solve directly because a is a binary variable, the feasible set of the problem is non-convex, and the objective function is not convex. From another perspective, the problem can be viewed as a sequential decision problem that requires making continuous decisions to achieve the ultimate goal. In the following section, we propose a deep reinforcement learning algorithm to optimize the computation offloading problem.
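As a sketch of the formulation described above (the overhead symbols Z_m^l and Z_m^c for local computation and MEC offloading are notation introduced here, not copied from the paper's numbered equations), the problem can be written as

```latex
\min_{\mathbf{a}} \; \sum_{m \in \mathcal{M}} \Big[ (1 - a_m)\, Z_m^{l} + a_m\, Z_m^{c} \Big]
\qquad \text{s.t.} \quad a_m \in \{0, 1\}, \;\; \forall m \in \mathcal{M},
```

subject further to the delay, MEC-capacity and bandwidth constraints stated above.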

    3 Computation offloading algorithm based on deep RL

The reinforcement learning algorithm aims at solving sequential decision problems, and general sequential decision problems can be expressed in the framework of the Markov decision process (MDP). The MDP describes a stochastic decision process of an agent interacting with an environment or system. At each decision time, the system stays in a certain state s and the agent chooses an action a that is available in this state. After the action is performed, the agent receives an immediate reward R and the system transits to a new state s′ according to the transition probability. The goal of an MDP or RL is to find an optimal policy, which is a mapping from states to actions, to maximize or minimize a certain objective function [Alsheikh, Hoang, Niyato et al. (2015)].

    3.1 Definitions using RL

    To model this problem using RL, we set the following definitions:

Agent: the mobile user m who has computation-intensive and delay-sensitive tasks to complete.

State: the state of agent m, which consists of the SINR and the computational capability of agent m. Let st denote the system state at time slot t, where s(t) = {s1(t), s2(t), …, sn(t)}.

Action: am ∈ {0,1}, where am = 0 means that UE m chooses to compute the task on its local equipment, while am = 1 means that UE m chooses to offload the computation task to the MEC server. at = {a1(t), a2(t), …, an(t)} denotes the computation offloading decision profile of all UEs at time slot t.

Reward: the reward of all mobile users with computation tasks at time slot t is defined as the negative of the total overheads, where the first term is the negative of the total overhead of local computation by UE m and the second term is the negative of the total overhead of computation offloading of UE m with MEC.
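To make the mapping from these definitions to code concrete, a minimal sketch of the state/action/reward interface is given below; the class and field names are illustrative, and the per-user costs would come from the overhead expressions of Section 2.3:

```python
class OffloadingEnv:
    """Minimal sketch of the RL formulation: state, binary actions, negative-overhead reward."""

    def __init__(self, users):
        # Each entry holds the per-user quantities needed for the state and the reward,
        # e.g. {"sinr": ..., "f_local": ..., "local_cost": ..., "offload_cost": ...}.
        self.users = users

    def state(self):
        # State of agent m: its SINR and its computational capability.
        return [(u["sinr"], u["f_local"]) for u in self.users]

    def reward(self, actions):
        # Reward at slot t: negative of the total overhead of all users, where
        # a_m = 0 selects the local-computing cost and a_m = 1 the offloading cost.
        total = 0.0
        for u, a_m in zip(self.users, actions):
            total += u["offload_cost"] if a_m == 1 else u["local_cost"]
        return -total
```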

An agent chooses an action a in a particular state s, and evaluates its performance in the form of a state-value function based on the received immediate reward R and its estimated value of the state to which it is taken. After the state-value functions converge, it learns the optimal policy π*, judged by the long-term discounted reward [Watkins and Dayan (1992)]. The discounted expected reward is defined by the Bellman expectation equation as follows [Wei, Yu and Song (2010); Wei, Yu, Song et al. (2018)]:

where R(s, a) is the immediate reward received by the agent when it selects action a in state s, γ ∈ (0,1) is a discount factor, and P(s′|s, a) is the transition probability from state s to s′ when the agent chooses action a. The discounted expected reward for taking action a includes the immediate reward and the future expected return.

According to the theory of Bellman's optimality equation, if we denote V*(s) as the maximum total discounted expected reward at every state, it can be solved recursively by solving the following equation:

then the optimal policy π* can be obtained when the total discounted expected reward is maximized as follows:
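In standard MDP notation, the Bellman optimality equation and the resulting optimal policy that these two statements refer to take the following textbook form (a restatement, not copied from the paper's numbered equations):

```latex
V^{*}(s) = \max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big],
\qquad
\pi^{*}(s) = \arg\max_{a} \Big[ R(s, a) + \gamma \sum_{s'} P(s' \mid s, a)\, V^{*}(s') \Big].
```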

However, the reward and transition probability are unknown in the RL method, which means the policy is model-free. For a finite-state MDP, the action-value functions are usually stored in a lookup table and can be learned recursively. We therefore learn the Q-value, which is defined as

The Q-value stands for the discounted expected reward for taking action a in state s and following policy π thereafter. The update of the Q-values towards an optimal policy π* in the conventional RL method Q-learning is performed as

where Qt is the target value, consisting of the current reward r and the maximum Q-value max_a Q(st+1, a) in the next state, Q(st, at) is the estimated value, and α ∈ [0,1] is the learning rate.
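The update being described is the standard Q-learning rule, which in its usual form reads:

```latex
Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \Big[ r_t + \gamma \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \Big].
```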

    3.2 Value function representation and approximation using DNN

In the conventional RL method Q-learning, a Q-table can be used to store the Q-value of each state-action pair when the state and action spaces are discrete and the dimension is not high. However, the state space is high-dimensional in our work, so it is unrealistic to use the Q-table mentioned in the previous section. Accordingly, a function Qw(s, a) is used to represent and approximate the value function Q(s, a) in RL to reduce the dimensionality. A deep neural network has the advantage of extracting complex features in feature learning or representation learning [Bengio, Courville and Vincent (2013); Khatana, Narang and Thada (2018)], so we use a DNN, which is a nonlinear approximator, to approximate the value function and improve the Q-learning method.

Deep reinforcement learning combines RL with deep learning (DL). The Q-value can be represented as Qw(s, a) using a DNN with two convolutional layers and two fully connected layers that are parameterized by a set of parameters w = {w1, w2, …, wn}. Each hidden layer is composed of nonlinear neurons, which transform a linear combination of their inputs into an output value using non-linear activation functions (e.g., sigmoid, tanh, ReLU, etc.). The output of the j-th neuron in layer i can be formulated as
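The per-neuron computation described here has the usual feed-forward form (a generic restatement, with φ denoting the activation function and w and b the weights and bias of the neuron):

```latex
y_j^{(i)} = \varphi\Big( \sum_{k} w_{j,k}^{(i)}\, y_k^{(i-1)} + b_j^{(i)} \Big).
```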

The DNN can be trained to update the value function by updating the parameters w, including the weights and biases. The best-fitting weights can be learned by iteratively minimizing the loss function L(w), which is the mean-squared error (MSE) between the estimated value and the target value, i.e.,

where w are the parameters of the neural network and Qt is the target value. The error between the target value and the estimated value Qw(st, at) is called the temporal-difference (TD) error, denoted as
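In the usual notation, the MSE loss and the TD error described in this passage read:

```latex
L(w) = \mathbb{E}\big[ \big(Q_t - Q_w(s_t, a_t)\big)^2 \big],
\qquad
\delta_t = Q_t - Q_w(s_t, a_t).
```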

Since the DNN may make the training of the RL algorithm unstable and divergent due to the non-stationary targets and the correlations between samples, we adopt a target network with fixed parameters w− that are updated on a slower cycle, and experience replay, which stores experience in a replay buffer D and randomly samples a mini-batch of the experience to train the network, so the target value and loss function become:

where the parameters w used for approximating the estimated value are updated at every step, while the fixed parameters w− for approximating the target value are updated every fixed number of steps. The stochastic gradient descent method is applied to minimize the loss function, and the update of the parameters w is defined as follows:

where ∇w Qw(st, at) is the gradient of Qw(st, at) with respect to w.
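The sketch below assembles these ingredients (Q-network, slowly updated target network, replay buffer and gradient step) into one training step. It is a generic deep Q-learning sketch in PyTorch, not the authors' code: the fully connected layer sizes, batch size and other PyTorch-specific details are assumptions.

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Small fully connected approximator Q_w(s, .); layer sizes are illustrative."""
    def __init__(self, state_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, s):
        return self.net(s)

def train_step(q_net, target_net, optimizer, replay, batch_size=32, gamma=0.9):
    """One deep Q-learning update with experience replay and a fixed target network."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)          # random mini-batch from buffer D
    s, a, r, s_next = map(list, zip(*batch))
    s = torch.tensor(s, dtype=torch.float32)
    a = torch.tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.tensor(r, dtype=torch.float32)
    s_next = torch.tensor(s_next, dtype=torch.float32)

    q_est = q_net(s).gather(1, a).squeeze(1)           # estimated value Q_w(s_t, a_t)
    with torch.no_grad():                              # target uses the slow parameters w-
        q_target = r + gamma * target_net(s_next).max(dim=1).values
    loss = nn.functional.mse_loss(q_est, q_target)     # mean-squared TD error L(w)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # gradient step on w only
```

The target network would then be refreshed by copying w into w− every fixed number of steps, e.g. with target_net.load_state_dict(q_net.state_dict()).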

The deep reinforcement learning algorithm performed by mobile users for computation offloading decision making is presented in Tab. 1. For each state, the agent chooses an action randomly with probability 1−ε, and with probability ε chooses the action with the maximum action-value function; this is called the ε-greedy strategy. When the agent performs the action in state st and receives the immediate reward r, it observes the subsequent state st+1 and approximates the action-value function by the DNN. After the action-value functions converge, each mobile user can select the overhead-aware optimal computation offloading action based on its state to minimize the total overheads of all users.
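As a small sketch of the action-selection step, following the probability convention stated in the text (random with probability 1−ε, greedy with probability ε; the helper name is illustrative):

```python
import random
import torch

def select_action(q_net, state, epsilon):
    """epsilon-greedy selection over the two offloading actions (0 = local, 1 = offload)."""
    if random.random() < 1.0 - epsilon:
        return random.randint(0, 1)            # explore: pick an action at random
    with torch.no_grad():
        q = q_net(torch.tensor(state, dtype=torch.float32).unsqueeze(0))
    return int(q.argmax(dim=1).item())         # exploit: action with maximum value
```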

    Table 1: Deep reinforcement learning based computation offloading algorithm

    4 Simulation results and discussion

In this section, we assess the performance of the proposed deep RL based computation offloading decision method compared with two baseline schemes. In the simulation scenario, 10 small cells are randomly deployed. The transmission power of UE m is set to pm = 100 mW. The spectrum bandwidth is set to W = 10 MHz, while the additive white Gaussian noise is σ = −100 dBm. The channel gain model presented in the 3GPP standardization is adopted here. We use face recognition as the computation task [Soyata, Muraleedharan, Funai et al. (2012)]. The size of the computation input data Bm (KB) and the total number of CPU cycles Dm (Megacycles) are randomly distributed in the range [1000, 10000]. The computational capability of a mobile user m is assigned at random from the set {0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0} GHz, which reflects the heterogeneity of the mobile users' computational capabilities. The total computational capability of the MEC server is fc = 100 GHz. The weighting factors of computation time and of energy consumption are set correspondingly.

Firstly, we demonstrate the convergence of the proposed deep RL algorithm. Fig. 2 shows the total rewards of all UEs at every episode with different learning rates. As we can see, with a learning rate of 0.01 the proposed learning strategy obtains a reward per episode that fluctuates around −400 after 1000 episodes, while with learning rates of 0.001 and 0.0001 the rewards per episode fluctuate around −500 and −450 after 1000 episodes, respectively. As expected, different learning rates result in different convergence performance, and the algorithm with a learning rate of 0.01 outperforms the other learning rates. The fluctuation of the curves after the algorithm converges is due to the ε-greedy strategy adopted here, in which users do not always choose the action with the maximum action-value function and have some probability of choosing an action randomly.

    Figure 3: Computational capability versus total overheads with different schemes

We now show the performance of the proposed scheme in comparison with the baseline methods, including the local computation policy, which executes all the computational tasks on the local mobile user's equipment, and the edge computation policy, which offloads all the tasks of the UEs to the MEC server for edge computing. Fig. 3 demonstrates that, as the computational capability increases, the total overheads of the edge computation policy and of the proposed learning algorithm decrease, because a change in the MEC server's computational capability influences the computation offloading policy of mobile users. With the increasing computational capability of the MEC server, the edge computation strategy performs better than local computation due to the server's multi-tasking capability. However, the baseline methods are less effective than the proposed learning method, because the proposed method can obtain the optimal overhead-aware policy according to its learning result.

Fig. 4 shows the relationship between the number of mobile users and the total overheads of all the mobile users. The total overheads increase gradually as the number of users grows. The overhead generated by the edge computation method gradually becomes less than the overhead of the local computation method as the number of users with computation tasks to execute increases, while the local computation policy consumes more time and energy than the other schemes because of the limited local computational capability when the number of users increases. Compared with the baseline methods, the proposed learning algorithm always obtains the minimum overhead, which means the proposed scheme can achieve the optimal computation offloading decision, reducing the latency and energy consumption and improving the efficiency.

    Figure 4: Number of mobile users versus total overheads with different schemes

The assignment of the weighting factors represents the different states of the users: a mobile user that is sensitive to delay gives more weight to computation time, while a user in a low-battery state gives more weight to energy consumption in the overhead computation. Fig. 5 shows that when the weighting factor of time increases from 0 to 1 (while the proportion of energy decreases from 1 to 0 accordingly), the total overheads rise, because the computational and transmission time occupy a larger proportion of the total overheads. As we can see from the above results, the decision-making performance of the proposed learning algorithm is better than that of the baseline methods in terms of the total overheads of all the mobile users.

    Figure 5: Weighting factor of time versus total overheads with different schemes

    5 Conclusion

In this paper, we propose a deep reinforcement learning approach for the computation offloading decision problem in mobile edge computing. The problem is formulated as minimizing the total overheads of all the users, who can execute tasks on their local devices or offload the computation to the MEC server. In order to solve this problem, we apply a deep neural network within the RL framework to approximate the action-value function and obtain the overhead-aware optimal computation offloading strategy based on the deep Q-learning method. The performance of the proposed method is compared with two baseline methods. Simulation results show that the proposed policy achieves better performance than the baseline methods in terms of total overheads, which reduces the latency and energy consumption and enhances the computation efficiency.

Acknowledgement: This work was supported by the National Natural Science Foundation of China (61571059 and 61871058).
