
    A DQN-Based Cache Strategy for Mobile Edge Networks

2022-08-24 03:29:22
    Computers, Materials & Continua, 2022, Issue 5

Siyuan Sun, Junhua Zhou, Jiuxing Wen, Yifei Wei and Xiaojun Wang

1 School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing, 100876, China

    2 State Key Laboratory of Intelligent Manufacturing System Technology, Beijing Institute of Electronic System Engineering, Beijing, 100854, China

    3 Ningbo Sunny Intelligent Technology Co., Ltd., Yuyao, 315400, China

    4 Dublin City University, Dublin 9, Ireland

Abstract: Emerging mobile edge networks with content caching capability allow end users to receive information from adjacent edge servers directly instead of from a centralized data warehouse, so network transmission delay can be reduced and system throughput improved significantly. Since duplicate content transmissions between the edge network and the remote cloud can be reduced, an appropriate caching strategy can also improve the system energy efficiency of mobile edge networks to a great extent. This paper focuses on how to improve network energy efficiency and proposes an intelligent caching strategy according to the cached content distribution model for mobile edge networks, based on a promising deep reinforcement learning algorithm. A deep neural network (DNN) and the Q-learning algorithm are combined to design a deep reinforcement learning framework named the deep-Q neural network (DQN), in which the DNN is adopted to approximate the action-state value function of the Q-learning solution. The parameter iteration strategy in the proposed DQN algorithm is improved through the stochastic gradient descent method, so the DQN algorithm can converge to the optimal solution quickly and the network performance of the content caching policy can be optimized. The simulation results show that the proposed intelligent DQN-based content caching strategy, given enough training steps, improves the energy efficiency of mobile edge networks significantly.

Keywords: Mobile edge network; edge caching; energy efficiency

    1 Introduction

With the rapid development of wireless communication networks and the fast growth of smart devices, various mobile Internet applications have emerged, such as voice recognition, autonomous driving, virtual reality and augmented reality. These emerging services and applications put forward higher requirements for high capacity, low latency and low energy consumption. By deploying edge servers at the wireless access network, the end user's computing tasks can be executed near the edge of the network, which effectively reduces congestion in the backhaul network, greatly shortens the service delay, and meets the needs of delay-sensitive applications. Mobile edge networks essentially push cloud computing capability down to the network edge and can host third-party applications at edge nodes, making service innovation at the mobile edge possible.

With the popularity of video services, the traffic of video content is growing explosively. The huge data traffic is mainly caused by the redundant transmission of popular content. Edge nodes also have a certain storage capacity for caching, and edge caching is becoming more and more important. Deploying a cache in an edge network can avoid the data redundancy caused by many repeated content deliveries. By analyzing content popularity and proactively caching popular content from the core network to the mobile edge server, requests for repeated content can be served directly by nearby edge nodes without going back to the remote core network, which greatly reduces the transmission delay and effectively alleviates the pressure on the backhaul link and the core network. Edge caching has been widely studied because it can effectively improve user experience and reduce energy consumption [1]. Enabling caching ability in a mobile edge system is a promising approach to reduce the use of centralized databases [2]. However, due to the limited size of edge equipment, the communication, computing and caching resources in a mobile edge network are limited. Besides, end users' mobility makes a mobile edge network a dynamic system, in which energy consumption may increase because of improper caching strategies. In order to address these problems, we focus on how to improve energy efficiency in a cache-enabled mobile edge network through smart caching strategies. In this article, we study a mobile edge network with unknown content popularity and design a dynamic, smart content caching policy based on an online learning algorithm which utilizes deep reinforcement learning (DRL).

The rest of this article is arranged as follows. Section 2 reviews the existing related work and some solutions on content caching strategies in mobile edge networks. Section 3 introduces a cache-enabled mobile edge network scenario and builds the related system and energy models. Section 4 analyzes the energy efficiency of the system, formulates the optimization problem according to the deep reinforcement learning model, and then proposes the caching content distribution strategy to solve the energy efficiency optimization problem of mobile edge networks. Finally, we design numerical simulations and analyze the results, which show that the proposed strategy can greatly improve the energy efficiency of mobile edge networks without decreasing the system performance.

    2 Related Work

In this section, we review the existing related work and solutions on mobile edge networks with caching capability and on artificial intelligence (AI) in network resource management.

    2.1 Energy-Aware Mobile Edge Networks with Cache Ability

By simultaneously deploying computation offloading and smart content caching near the edge of the network, a mobile edge network can further improve the efficiency of network content distribution and its computing capability, effectively reducing latency, improving service quality, and lowering the energy consumption of cellular networks. Thus, adding caching ability at the network edge has become one of the most important approaches [3,4]. Authors in [5] separated the control and communication functions of heterogeneous wireless networks with software defined network (SDN)-based techniques. Under the proposed network architecture, cache-enabled macro base stations and relay nodes are overlaid and cooperate in a limited-backhaul scenario to meet service quality requirements. Authors in [6] investigated cache policies which aim to improve energy efficiency in information centric networks. Numerous papers [7,8] focus on caching policies based on user demands; e.g., authors in [9] proposed a light-weight cooperative edge storage management strategy so that the utilization of edge caching can be maximized and the bandwidth cost decreased. Jiang et al. [10] proposed a caching and delivery policy for femtocells where access nodes and user devices are all able to cache local contents; the proposed policy aims at realizing a cooperative allocation strategy for communication and caching resources in a mobile edge network. A deep learning-based content popularity prediction algorithm is developed for software defined networks in [11]. Besides, due to users' mobility, how to cache contents properly is challenging in a mobile edge network, thus many works [12–15] aimed at addressing mobility-aware caching problems. Sun et al. [16] proposed a mobile edge cloud framework which uses big medical sensor data and aims at predicting diseases.

These works inspire us to investigate how to reduce the energy consumption of mobile edge networks by taking advantage of caching ability. However, the cache strategies in the work above are all based on user demands and behavior, which are features that are difficult to extract or forecast. To solve this problem, many researchers work on the architectures of network function virtualization or software defined networks. By separating the control and communication planes and virtualizing network device functions, these techniques realize a flexible and intelligent way to manage edge resources in mobile edge networks. Thus, many smart energy efficiency-oriented caching policies can be integrated as network applications, which are operated by network managers and run on top of a centralized network controller [17]. Li et al. [18] surveyed software-defined network function virtualization. Mobility Prediction as a Service, proposed in [19], offers on-demand long-term management by predicting user activities; moreover, the function is virtualized as a network service, which is placed entirely on top of the cloud and works as a cloudified service. Authors in [20] designed a network prototype which takes advantage of not only content centric networks but also Mobile Follow-Me Cloud, so that the performance of cache-aided edge networks is improved. These works suggested feasible paradigms of mobile edge network virtualization with caching ability. Furthermore, authors in [21] designed a content-centric heterogeneous network architecture which is able to cache contents and compute local data; in the analyzed network scenario, users are associated with various network services but are all allowed to share the communication, computation and storage resources in one cell. Tan et al. [22] proposed full duplex-enabled software defined network frameworks for mobile edge computing and caching scenarios; the first framework suits network services which are sensitive to data rate, while the second suits network services which are sensitive to data computing speed.

    2.2 Artificial Intelligence in Network Management

In recent years, deploying more intelligence in networks has become a promising approach to effectively organizing [23,24], managing and optimizing network resources. Xie et al. [25] investigated how to use machine learning (ML) algorithms to add more intelligence to software defined networks. Reinforcement learning (RL) and related algorithms have long been adopted for automatic goal-oriented decision-making [26]. Wei et al. [27,28] suggested a joint optimization for the edge resource allocation problem; the proposed strategy uses model-free actor-critic reinforcement learning to solve the joint optimization problems of content caching, wireless resource allocation and computation offloading, thus improving the overall network delay performance. Qu et al. [29,30] proposed a novel controlled flexible representation and a novel secure and controllable quantum image steganography algorithm for quantum images; this algorithm allows the sender to control all the process stages during a content transmission phase, thus better information-oriented security is obtained. Deploying caches in the edge network can enhance content delivery networks and reduce the data traffic caused by a large number of repeated content requests [31]. There are many related works, mainly focusing on computation offloading, cache decisions and resource allocation [32].

However, most RL algorithms require a large amount of computation power. Although the SDN architecture offers an efficient flow-based programmable management approach, local controllers mostly have limited resources (e.g., storage, CPU) for data processing. Thus an optimization algorithm with controllable computing time is required for mobile edge networks. Amokrane et al. [33] proposed a computation-efficient approach based on ant colony optimization to solve the formulated flow-based routing problem; the simulation results showed that the ant colony-based approach can substantially decrease computation time compared with other optimal algorithms.

The main deep reinforcement learning architectures and algorithms are presented in [34], in which reinforcement learning algorithms are reproduced with deep learning methods. The authors highlighted the impressive performance of deep neural networks (DNNs) when solving various management problems (e.g., video gaming). With the development of deep learning, reinforcement learning has been successfully combined with DNNs [35]. For wireless sensor networks, Toyoshima et al. [36] described the design of a simulation system using DQN.

This paper takes advantage of DQN, which can efficiently obtain a dynamically optimized solution without a priori knowledge of the dynamic statistics.

    3 System Model

This section analyzes a caching edge network scenario with unknown content popularity and then builds the energy model for the system. The main notations used in the rest of this article are listed in Tab. 1.

Table 1: Main notations

    3.1 Network Model

A typical mobile edge network scenario is shown in Fig. 1, which contains not only communication but also caching resources. The edge access devices include macro base stations (MBSs) and relay nodes (RNs); they have different communication and caching abilities, but any of them can connect to end users and cache data from end users or the far cloud. We consider a set of relay nodes R with limited storage space (in this paper we assume each RN can store N files), while the macro base station B has unlimited cache space since it connects to the far cloud. This article considers a wireless network where orthogonal frequency division multiplexing-based multiple access technology is adopted, thus the loss of spectral efficiency can be ignored.

Figure 1: Example of an edge cellular network with caching ability

We consider a cellular wireless access network that consists of an MBS and a set of RNs; the end users served by relay R_m are denoted as U_m. Each RN can connect to the MBS and its neighboring RNs. Due to the caching ability, N is the maximum number of users an RN can serve. At time t, the file cached in the n-th cache block of relay node R_m is denoted as F^c_{m,n}(t), and the file requested by user U_{m,n} is denoted as F^r_{m,n}(t). We use variables I^{ct}_{m,n} and I^{lt}_{m,n} to express how the requested file F^r_{m,n}(t) is delivered to user U_{m,n}:

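A hedged sketch of these delivery indicators, under the assumption that I^{lt}_{m,n}(t) marks a request served from the edge cache while I^{ct}_{m,n}(t) marks a request that must be fetched from the remote cloud (the exact original definitions may differ), is:

    $$ I^{lt}_{m,n}(t) = \begin{cases} 1, & \text{if } F^{r}_{m,n}(t) \text{ is cached in the edge network and delivered locally} \\ 0, & \text{otherwise} \end{cases} $$

    and

    $$ I^{ct}_{m,n}(t) = 1 - I^{lt}_{m,n}(t), $$

    so that exactly one of the two delivery modes is selected for each request.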

    3.2 Energy Model

    The optimization problem here is how to maximize the system energy efficiency.According to the network architecture which has been analyzed in Section 3.1,the system energy consumption in a mobile edge network can be divided into three components:

1. Basic power P_basic: the mobile edge network operational power; it accounts for all the power consumed by the macro base station and RNs other than transmitting power.

    2. Transmitting power in the mobile edge network P_edge: the power consumed by data transfers among edge nodes; it is incurred when a requested file is cached in the mobile edge network.

    3. Transmitting power between the mobile edge network and the far cloud P_cloud: the power consumed by data transfers between local nodes and the far cloud; it is incurred when a requested file is not stored in the mobile edge network.

    Thus the system total energy consumption can be written as:

P_edge and P_cloud vary in a practical system, especially with different placements of edge nodes and user equipment. To simplify the analysis, we consider a dynamic network where the locations of the communication equipment are unpredictable but follow a specific distribution; thus the power consumption of transmitting a file between two edge nodes can be treated as a constant, which we denote as α1. Similarly, the power consumption of transmitting a file between the edge network and the cloud is denoted by a constant α2.

According to formulas (1) and (2), the system energy consumption can be written as:
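A hedged sketch of the form this expression may take, under the assumption above that α1 is charged per file delivered within the edge network and α2 per file fetched from the cloud, is:

    $$ P_{sys}(t) = P_{basic} + \sum_{m}\sum_{n} \Big[ \alpha_{1}\, I^{lt}_{m,n}(t) + \alpha_{2}\, I^{ct}_{m,n}(t) \Big] $$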

where P_sys(t) denotes the system power consumption at time t.

    In a mobile edge network, the propagation model can be represented by a function of radiated power:

where P_rx and P_tx denote the received and transmitted power respectively, θ denotes the correction parameter, d denotes the distance between the transmitting and receiving devices, and λ denotes the path loss exponent.
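With these definitions, a hedged sketch of the propagation relation in the usual log-distance form is:

    $$ P_{rx} = \theta\, P_{tx}\, d^{-\lambda} $$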

In order to meet the user QoS, a fixed transmission rate R_0 should be maintained. Therefore, the received power can be calculated as follows according to the Shannon formula:

where W and N_0 respectively denote the transmission bandwidth and the Gaussian white noise power density.
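A hedged sketch consistent with this description: requiring the Shannon capacity of the link to equal the fixed rate R_0 and solving for the received power gives

    $$ R_{0} = W \log_{2}\!\Big( 1 + \frac{P_{rx}}{N_{0} W} \Big) \quad\Longrightarrow\quad P_{rx} = N_{0}\, W \big( 2^{R_{0}/W} - 1 \big) $$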

Compared to a traditional mobile edge network, a cache-enabled mobile edge network focuses on delivering content to end users rather than merely transmitting data. Thus the energy efficiency (EE) of the analyzed system is defined in terms of the size of the content delivered during a time slot, instead of the data transmitted. The EE can be written as follows:

where L denotes the content size of each file.
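A hedged sketch of this definition, writing the number of files delivered to users in time slot t as D(t) (a symbol introduced only for this illustration), is:

    $$ EE(t) = \frac{L \cdot D(t)}{P_{sys}(t)} $$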

Therefore, the system EE maximization problem can be written as the following optimization formulation:

where the first constraint means that the users' QoS should be satisfied by defining a minimal SINR threshold, and the second constraint means that each requested file should be transferred to the edge node which serves the user. We use the optimization variables I^{lt}_{m,n}(t), I^{rt}_{m,n}(t) to represent the cache distribution strategy.
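As a hedged sketch, one possible encoding of this problem, using γ_th as an assumed symbol for the SINR threshold and the delivery indicators introduced earlier, is:

    $$ \max_{\{I^{lt}_{m,n}(t),\, I^{ct}_{m,n}(t)\}} EE(t) \quad \text{s.t.} \quad \mathrm{SINR}_{m,n}(t) \ge \gamma_{th} \;\; \forall m,n, \qquad I^{lt}_{m,n}(t) + I^{ct}_{m,n}(t) = 1 \;\; \forall m,n $$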

    4 Solution with DQN Algorithm

In this section, we first formulate the energy efficiency-oriented cached content distribution optimization problem discussed in Section 3 as a reinforcement learning model, and then present a caching strategy based on the DQN algorithm.

    4.1 Reinforcement Learning Formulation

A general reinforcement learning model consists of an agent, an action space and a state space; the agent learns which action in the action space should be taken in which state of the state space. We use s_τ to denote the cached content distribution state in a mobile cache-aided edge network, and s_τ is updated across adjustment steps τ = {1, 2, ...}. Thus, the caching distribution state at step τ can be written as

The state space is represented as S, where s_τ ∈ S for all τ.
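One consistent reading of this state, sketched here as an assumption rather than the authors' exact definition, is the vector of files currently held in every cache block of the edge network:

    $$ s_{\tau} = \big( F^{c}_{1,1}(\tau),\, F^{c}_{1,2}(\tau),\, \ldots,\, F^{c}_{M,N}(\tau) \big), $$

    where M is the number of RNs and N the number of cache blocks per RN.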

During a time slot, the caching distribution state changes from an initial state s_1 to a terminal state s_term. In order to deliver the requested content to the corresponding user, the first caching distribution state s_1 of time slot t is the terminal state of the last time slot.

The terminal state s_term(t) of time slot t corresponds to the users' requests in time slot t, and it can be represented as formulation (11):

In our problem, we assume that there is a blank storage block v in the mobile edge network, so the cache distribution can be adjusted by swapping the empty cache space. The vacant storage block can receive the content of a valid storage block located in a neighboring access node, and thus the vacant block is movable within the edge network. The swapping action executed in iteration step τ is represented as a_τ. Thus the action space A_τ at iteration step τ of the designed model can be written as

The executed action sequence during time slot t, which drives the state from the initial state s_1 to the terminal state s_term, is represented by A_t:

The reinforcement learning system obtains an immediate reward after an action is performed. Because the reinforcement learning algorithm is used for a model-free optimization problem, the value of the immediate reward is zero until the agent reaches the terminal state. We denote the immediate reward as r_τ; according to the energy efficiency optimization problem analyzed in Section 3, r_τ can be calculated by the following formulation:

where R(s, a) is an energy consumption function related to the state and action.
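Since the reward is zero except at the terminal transition, a hedged sketch of this formulation, consistent with the description above, is:

    $$ r_{\tau} = \begin{cases} R(s_{\tau}, a_{\tau}), & \text{if } s_{\tau+1} = s_{term} \\ 0, & \text{otherwise} \end{cases} $$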

    4.2 DQN Algorithm

RL problems can be described as optimal control decision-making problems in a Markov decision process (MDP). Q-learning is one of the effective model-free RL algorithms. The long-term reward of performing action a_τ in environment state s_τ, learned by the Q-learning algorithm, is represented as a numerical value Q(s_τ, a_τ). Thus, the agent in a dynamic environment performs the action which obtains the maximal Q value. In this paper, the EE-optimized caching strategy adjusts the cached content distribution state in a mobile cache-aided edge network by using the information learned by the Q-learning algorithm. In each iteration, the Q-value is updated according to the following formulation:

where α is the learning rate (0 < α < 1), γ is the discount factor (0 < γ < 1), and r_τ is the reward received when moving from the current state s_τ to the next state s_τ+1; r_τ can be calculated according to formulation (14).
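The update described here is presumably the standard Q-learning rule; with the quantities defined above it reads:

    $$ Q(s_{\tau}, a_{\tau}) \leftarrow Q(s_{\tau}, a_{\tau}) + \alpha \Big[ r_{\tau} + \gamma \max_{a'} Q(s_{\tau+1}, a') - Q(s_{\tau}, a_{\tau}) \Big] $$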

Based on (15), the Q-learning algorithm is effective when solving problems with small state and action spaces, where a Q-table can be stored to represent the Q value of every state-action pair. However, the process becomes extremely slow in a complex environment where the state and action space dimensions are very high. Deep Q-learning is a method that combines a neural network with the Q-learning algorithm; the neural network removes the need to explicitly store and retrieve a large number of Q values.

In this paper, the agent in our RL model is trained by the DQN algorithm, which uses two neural networks to realize the convergence of the value function. As analyzed above, due to the complex and high-dimensional cached content distribution state space, calculating all the optimal Q-values requires significant computation. Thus, we approximate the Q-values with a Q-function, which can be trained toward the optimal Q value by updating the parameter θ. The calculation formula for the Q-function is as follows:

where θ is the weight vector of the neural network, which is updated over the training iterations.

DQN uses a memory bank to learn from previous experience. At each update, some previous experience is randomly selected for learning, and two networks with the same structure but different parameters are maintained: a performance network that is updated at every step and a target network that is updated periodically, which makes the neural network update more efficient and stable. When the neural network cannot maintain convergence, problems such as unstable or difficult training arise; DQN uses experience replay and the target network to alleviate these issues. Experience replay stores the collected transition tuples (s, a, r, s'). These transitions are stored in a buffer, and the information in the buffer becomes the experience of the agent. The core idea of the experience replay mechanism is to train the DQN with transitions sampled from the buffer, rather than only with the information at the end of each episode. The experiences within one episode are correlated with each other, so randomly selecting a batch of training samples from the buffer reduces the correlation between experiences and helps to enhance the generalization ability of the agent. At each epoch, the neural network is trained by minimizing the following loss function:

where y_τ is the target value for iteration τ.
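A hedged sketch consistent with the standard DQN formulation described above, with θ⁻ denoting the target network weights and D the replay buffer, is:

    $$ L(\theta) = \mathbb{E}_{(s,a,r,s') \sim D}\Big[ \big( y_{\tau} - Q(s, a; \theta) \big)^{2} \Big], \qquad y_{\tau} = r + \gamma \max_{a'} Q(s', a'; \theta^{-}) $$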

The process of the proposed DQN-based cache strategy can be illustrated with the pseudocode shown in Algorithm 1.

Algorithm 1: Pseudocode of the cache strategy based on the DQN algorithm
      Reset the neural network net_performance with random weights θ.
      Reset the target neural network net_target with weights θ⁻ = θ.
      Reset the experience replay memory D_memory for the DQN with capacity C.
      For epi = 1, ..., K do
        Reset the initial state vector sequence x(0) with a random caching distribution.
        For t = 1, ..., T do
          Choose an action a(t) from the action space with the ε-greedy policy.
          Execute the selected action a(t) and calculate the immediate reward.
          Store the tuple (x(t), a(t), R(t), x(t+1)) in D_memory.
          Randomly sample a tuple (x(j), a(j), R(j), x(j+1)) from D_memory.
          Set y_j = R(j), if the episode terminates at step j+1;
              y_j = R(j) + γ max_a Q_net_target(x, a; θ⁻), otherwise.
          Iteratively optimize the DNN parameters by minimizing the loss function.
          Update the neural network parameters Q_net_target ← Q_net_performance every S steps.
        End For
      End For
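To make the training loop in Algorithm 1 concrete, the following is a minimal, hedged Python sketch using PyTorch. The environment interface (reset/step), the network sizes and all hyperparameter values are illustrative assumptions rather than the authors' implementation; only the overall structure (performance network, target network, ε-greedy action selection, experience replay, periodic target synchronization) mirrors Algorithm 1.

    # Hedged sketch of the DQN training loop in Algorithm 1 (PyTorch).
    # `env` is an assumed environment object exposing reset() -> state and
    # step(action) -> (next_state, reward, done); it is not part of the paper.
    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim


    class QNetwork(nn.Module):
        """Simple fully connected network approximating Q(s, a; theta)."""

        def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )

        def forward(self, x):
            return self.net(x)


    def train_dqn(env, state_dim, n_actions, episodes=200, steps=50,
                  gamma=0.8, lr=1e-3, eps=0.1, batch_size=32,
                  memory_capacity=10_000, sync_every=100):
        q_perf = QNetwork(state_dim, n_actions)        # performance (online) network
        q_target = QNetwork(state_dim, n_actions)      # target network
        q_target.load_state_dict(q_perf.state_dict())  # theta^- = theta
        optimizer = optim.Adam(q_perf.parameters(), lr=lr)
        memory = deque(maxlen=memory_capacity)         # experience replay buffer
        updates = 0

        for _ in range(episodes):
            state = torch.tensor(env.reset(), dtype=torch.float32)
            for _ in range(steps):
                # epsilon-greedy action selection
                if random.random() < eps:
                    action = random.randrange(n_actions)
                else:
                    with torch.no_grad():
                        action = int(q_perf(state).argmax().item())

                next_state, reward, done = env.step(action)
                next_state = torch.tensor(next_state, dtype=torch.float32)
                memory.append((state, action, reward, next_state, done))
                state = next_state

                if len(memory) >= batch_size:
                    batch = random.sample(memory, batch_size)
                    s = torch.stack([b[0] for b in batch])
                    a = torch.tensor([b[1] for b in batch])
                    r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
                    s2 = torch.stack([b[3] for b in batch])
                    d = torch.tensor([b[4] for b in batch], dtype=torch.float32)

                    # y = r if terminal, else r + gamma * max_a' Q_target(s', a')
                    with torch.no_grad():
                        y = r + gamma * (1.0 - d) * q_target(s2).max(dim=1).values
                    q_sa = q_perf(s).gather(1, a.unsqueeze(1)).squeeze(1)
                    loss = nn.functional.mse_loss(q_sa, y)

                    optimizer.zero_grad()
                    loss.backward()
                    optimizer.step()

                    updates += 1
                    if updates % sync_every == 0:      # periodic target sync
                        q_target.load_state_dict(q_perf.state_dict())

                if done:
                    break
        return q_perf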

    5 Simulation

This section presents a number of simulations to evaluate the performance of the proposed DQN-based caching strategy in mobile edge networks. The numerical simulations are conducted in MATLAB 2020a. We first test and verify the energy efficiency improvement of the DQN-trained caching policy in a simple cell, and then evaluate the energy efficiency of the proposed caching policy in a more general and complex network scenario.

First, we set up a cellular network scenario that consists of an MBS, an RN and 3 end users. The cell covers an area with radius Ra = 0.5 km. We use a polar coordinate system and place the MBS at the center. We assume that there are 10 files cached in the network. The end users are distributed over the whole area according to a Poisson point process, and the popularity distribution of all the service contents follows the Zipf distribution. The basic energy consumption of an MBS is set as Pm = 66 mW, while an RN consumes 10% of the energy of an MBS. The path loss from MBSs to RNs is formulated as PL_{B,R} = 11.7 + 37.6 lg(d), the path loss from RNs to MBSs as PL_{R,B} = 42.1 + 27 lg(d), the path loss among RNs as PL_{R,R} = 38.5 + 27 lg(d), and the path loss from RNs to end users as PL_{R,U} = 30.6 + 36.7 lg(d). The Q-learning algorithm parameters are set as α = 0.2, γ = 0.8, R = 100. The parameters of the adopted DNN are listed in Tab. 2.
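As an illustration of the scenario set-up described above, the short Python sketch below drops users by a Poisson point process inside a 0.5 km cell, draws requests from a Zipf popularity distribution, and encodes the stated path-loss curves. The PPP intensity and Zipf exponent are assumptions made only for this example, since their exact values are not given here.

    # Hedged sketch of the simulation scenario set-up (not the authors' MATLAB code).
    import numpy as np

    rng = np.random.default_rng(0)

    RADIUS_KM = 0.5          # cell radius Ra
    N_FILES = 10             # files cached in the network
    ZIPF_EXPONENT = 0.8      # assumed Zipf skew parameter (illustrative)
    PPP_INTENSITY = 12.7     # assumed mean users per km^2 (illustrative)

    # User placement: homogeneous Poisson point process over the disc (polar coords).
    area = np.pi * RADIUS_KM ** 2
    n_users = rng.poisson(PPP_INTENSITY * area)
    r = RADIUS_KM * np.sqrt(rng.uniform(size=n_users))   # sqrt gives uniform density
    phi = rng.uniform(0, 2 * np.pi, size=n_users)

    # Zipf popularity over the file catalogue; one request drawn per user.
    ranks = np.arange(1, N_FILES + 1)
    popularity = ranks ** (-ZIPF_EXPONENT)
    popularity /= popularity.sum()
    requests = rng.choice(N_FILES, size=n_users, p=popularity)

    # Path-loss models as stated in the text (in dB; distance unit as in the paper).
    def pl_mbs_to_rn(d):  return 11.7 + 37.6 * np.log10(d)
    def pl_rn_to_mbs(d):  return 42.1 + 27.0 * np.log10(d)
    def pl_rn_to_rn(d):   return 38.5 + 27.0 * np.log10(d)
    def pl_rn_to_user(d): return 30.6 + 36.7 * np.log10(d)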

Table 2: Parameters of the adopted DNN

As shown in Fig. 2, we evaluate the system energy efficiency with a random caching policy, where the service contents are cached in either the MBS or the RN randomly, and compare it with the energy efficiency of the caching policy with DQN training. The red line with triangles in the figure denotes the system energy efficiency with DQN training, while the blue line with squares denotes the system energy efficiency with the random caching policy. We generate the user distribution ten times, and each distribution is used for a certain number of training steps. As can be seen in the figure, with enough training steps, the DQN-trained caching policy improves the system energy efficiency over the random policy.

For a better analysis of the performance of the proposed policy and its convergence, we simulate the relationship between training steps and system energy cost. It can be seen in Fig. 3 that the system energy cost declines as the number of training steps increases, which means that the algorithm tends to converge within about 1500 steps.

In order to test the proposed caching policy in more complex network scenarios, we simulate a mobile edge network with different numbers of RNs. In this part, we set the number of end users to 20 and the number of RNs to M = 2, 3, 4, 5, 6. In polar coordinates, the MBS is located at [0, 0], and the m-th RN is located at [500, 2πm/M]. The Q-learning algorithm parameters in formulation (15) are kept fixed.

We evaluate and compare the energy efficiency of three types of mobile edge network architecture: a mobile edge network without caching ability, a mobile edge network with a random caching strategy, and a mobile edge network with the proposed DQN-based caching strategy. A mobile edge network without caching ability (shown as "non cache" in Fig. 4) first sends the requested contents from the far cloud to the MBS, and then delivers the contents to the corresponding end users through edge nodes. A mobile edge network with a random caching strategy caches contents in edge nodes at random; under this strategy, a requested content has to be acquired from the cloud when it is not stored in an edge node that can directly serve the corresponding end user.

Figure 2: Comparison of the random and DQN caching policies

    Figure 3: Cost in each training episode as a function of the number of training steps

The energy efficiency comparison with multiple RNs is shown in Fig. 4. By comparing the energy efficiency of mobile edge networks with and without the storage function, we can see that adding caching ability to mobile edge networks can significantly increase system energy efficiency. By comparing the energy efficiency of mobile edge networks with the DQN-based caching strategy and the random caching strategy, we can see that the proposed DQN-based strategy yields a larger energy efficiency improvement than the random strategy when M > 2. Besides, the network energy efficiency can be viewed as a function of the number of RNs: the more RNs in a mobile edge network, the better the performance the proposed caching strategy can achieve.

Figure 4: Energy efficiency comparison with multiple RNs

Fig. 5 shows the computation time until convergence for Q-learning and DQN. Due to its enormous computation burden, the general Q-learning algorithm usually requires a large amount of computation time, which is unacceptable in real scenarios. The experimental results show that the convergence speed of the proposed DQN policy is significantly faster than that of the general Q-learning method.

Figure 5: Computation time comparison

    6 Conclusion

In this paper, we focus on the energy efficiency problem in mobile edge networks with caching capability. A cache-enabled mobile edge network architecture is analyzed in a network scenario with unknown content popularity. We then formulate the energy efficiency optimization problem according to the main purpose of a content-based edge network. To address the problem, we put forward a dynamic online caching strategy using the deep reinforcement learning framework, namely the deep Q-learning algorithm. The numerical simulation results indicate that the proposed caching policy can be found quickly and that it improves the system energy efficiency significantly in networks with both a single RN and multiple RNs. Besides, the convergence speed of the proposed DQN algorithm is significantly faster than that of general Q-learning.

Funding Statement: This work was supported by the National Natural Science Foundation of China (61871058, WYF, http://www.nsfc.gov.cn/).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
