
    An Adaptive User Service Deployment Strategy for Mobile Edge Computing

China Communications, 2022, Issue 10 (2022-10-27)

Gang Li, Jingbo Miao, Zihou Wang, Yanni Han, Hongyan Tan, Yanwei Liu, Kun Zhai

1 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China

2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing 101408, China

3 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 100029, China

4 Sunwise Intelligent Technology Company, Beijing 100190, China

*Corresponding author, email: wangzh@cert.org.cn

Abstract: Mobile edge computing (MEC) runs cloud servers at the edge of a mobile network, which can effectively reduce network communication delay. However, because the MEC contains numerous edge servers and devices, multiple servers and devices may be able to provide services to the same user simultaneously. This paper proposes a user-side adaptive service deployment algorithm, ASD (Adaptive Service Deployment), based on reinforcement learning. Without relying on complex system information, it can make effective service deployment decisions using only a few task and user attributes; it analyzes and redefines the key parameters of existing algorithms and dynamically adjusts its strategy according to task types and available node types to optimize user-experienced delay. Experiments show that the ASD algorithm can implement user-side decision-making for service deployment. While effectively improving the parameter settings of the traditional Multi-Armed Bandit algorithm, it reduces user-perceived delay and enhances service quality compared with other strategies.

Keywords: edge computing; adaptive algorithm; reinforcement learning; computation offloading; service deployment

I. INTRODUCTION

With the widespread use of smart devices and the development of Internet of Things technology, the complexity of the computing tasks that mobile devices need to handle continues to increase [1]. In the traditional cloud computing framework, users only accept cloud data as consumers of data. However, with the rapid increase in the amount of data generated by edge terminals and in computing requirements, network bandwidth has become a bottleneck that limits the efficiency of the entire computing network [2]. Mobile Edge Computing (MEC) runs cloud servers at the edge of a mobile network. It has high computing power and decentralizes the service provision capabilities originally concentrated in the network center to the edge network closer to users, relying on the combination of small edge computing platforms and the mobile edge network to enhance the user experience [3]. However, due to the large number of edge servers and devices in the MEC, multiple servers and devices may be able to provide services to the same user simultaneously. In addition, when the user moves, the computing resources used by related applications may switch between multiple edge nodes. Computation offloading is one of MEC's key technologies [4, 5], which can be used to migrate computing-intensive tasks from mobile devices to MEC servers. Current research can be roughly divided into two categories: strategies based on global system information and designs based on reinforcement learning methods. Table 1 lists the related classic works with their corresponding environmental settings and concerns.

In the service strategies based on global system information, each node in the MEC network has accurate system-level information. Wang et al. [6] used user location distribution, preferences, system load, database location, and other information to predict and calculate the future data transfer, data processing, and service session transfer overhead. Yang L et al. [7] predicted the load that the user will request in the future based on the user's mobility mode and other conditions. Nadembega A et al. [8] proposed a mobility-based service migration prediction framework, MSMP, and planned the data transmission sequence from the mobile data center to the user according to the user's mobility mode. Ouyang T et al. [15] expressed the decision status of each node in different time slots as graph nodes and the service migration cost as edge weights. The above work requires mastering or predicting accurate future user mobility based on historical information; however, in a real environment users often do not follow a fixed mobility pattern, and it is challenging to predict users' moving trends. Taleb T [9] used a two-dimensional Markov decision process to analyze the service migration overhead. Ouyang T et al. [10] used Lyapunov optimization to transform the long-term optimization problem into a series of real-time optimization problems. These methods are based on global system information, but for edge base stations or ordinary users it is impossible to obtain complete system information at the edge of the network and execute user-side decisions for service deployment.

Table 1. Summary of two types of classic works for computation offloading.

The strategies based on reinforcement learning transform the MEC service deployment problem into a scenario similar to the Multi-Armed Bandit (MAB) problem. Kao YH et al. [11] made assumptions about the equipment and channel conditions and quickly learned the unpredictable resource availability and channel information in the dynamic environment of the task. Dai P et al. [13] proposed a method called UL (Utility-table based Learning), in which the Multi-Armed Bandit algorithm treats the MEC servers in the transportation network both as users and as the arms to be selected. Sun Y et al. [12] combined MAB theory and the UCB algorithm to develop a learning-based task offloading framework. The above work ignores that the status of the selected objects and user needs will change over time. Li L et al. [16] first proposed the context-aware Bandit algorithm LinUCB in 2014 to solve Yahoo's personalized news recommendation problem. In this algorithm, the expected return of each object is assumed to be a linear function of the object's feature vector; after a selection is made, the parameters of the linear function are updated according to the return value, so the selection strategy is updated dynamically. G. Nikolov et al. [14] proposed an improved version of the LinUCB algorithm, applied to the problem of wireless interface selection, in which channel quality parameters were converted into estimated data rates for interface selection. Li T et al. [17] proposed a privacy-preserving task offloading scheme for MEC, which formulates the task assignment and privacy protection problems as semi-parametric contextual Multi-Armed Bandit problems and then designs PAOTO (Privacy-Aware Online Task Offloading) based on the Thompson sampling architecture. The algorithm improves delay and energy consumption without requiring system-level information, thereby protecting privacy.

Figure 1. ASD user service deployment structure.

This paper proposes a user-side adaptive service deployment strategy. Based on the Multi-Armed Bandit algorithm in reinforcement learning, it can execute effective service deployment decisions.

II. PROBLEM DEFINITION

    2.1 Scene Deployment

Discretize the continuous timeline into time slots t ∈ 𝒯 = {1, 2, 3, ..., T}. In each slot the user chooses an MEC server to provide computing services; that is, a computation offloading task occurs. The process can be divided into four steps: 1) the mobile user makes a task offloading or migration decision based on its environment; 2) the task-related data is sent to the MEC server; 3) the MEC server completes the calculation task; 4) the MEC server returns the calculation result to the mobile user.

The vector w_t = [w_t^1, ..., w_t^M, w_t^c, w_t^l] represents the dynamic service deployment decision of time slot t. Among them, w_t^i (i = 1, ..., M), w_t^c and w_t^l respectively indicate whether the i-th MEC node, the cloud computing center, or the user's local device is selected to perform the time-slot-t computing task. ℳ denotes all MEC servers that can provide computing services, c denotes the cloud computing center, l denotes the user's local device, and 𝒩 = ℳ ∪ {c, l} denotes all computing nodes that can perform the current task. Because the user can select only one object to perform the task in each time slot, the decision vector is subject to the following constraint:

w_t^i ∈ {0, 1} for all i ∈ 𝒩, and Σ_{i∈𝒩} w_t^i = 1.

    2.2 Optimization Goal

In the mobile edge computing network architecture, it is generally believed that the user experience delay depends on the calculation delay and the communication delay [10, 18]. Since this research scenario involves task migration, the additional overhead of task migration also needs to be counted. Figure 1 shows the stage at which each delay considered in the service deployment scenario occurs. We redefine the parameters to improve the classic LinUCB algorithm. Note that this way of solving the existing problems through parameterization is not limited to this scenario.

    2.2.1 Calculation Delay

λ_t represents the computation amount of the offloaded task in time slot t, and c_t^i represents the computing capacity of node i available in time slot t, that is, the number of basic instruction operations that can be completed per second. Given the service deployment decision w_t of time slot t, the calculation delay can be expressed as:

d_t^comp(w_t) = Σ_{i∈𝒩} w_t^i · (λ_t / c_t^i),

where λ_t = v_t · n_t, v_t indicates the amount of data involved in the user task, and n_t indicates its computational complexity.

    2.2.2 Communication Delay

The communication delay is composed of two parts: when the user decides to offload the task to an external computing node, an access delay is incurred; and if the service is not bound to the corresponding server, a transmission delay is incurred by the communication between servers.

Given the service deployment decision w_t of time slot t, the communication delay perceived by the user can be further expressed as:

d_t^comm(w_t) = Σ_{i∈𝒩\{l}} w_t^i · g_t^{i,l_t},

where g_t^{i,l_t} represents the communication delay of the user when node i is selected, and l_t represents the base station the user is connected to in time slot t, which depends on which base station's coverage area the user's real-time location falls in.

    2.2.3 Migration Overhead Delay

Users are mobile, and their connected base stations will change as they move, accompanied by the migration of computing tasks. The migration overhead delay can be expressed as:

d_t^mig(w_t) = Σ_{i∈𝒩} w_t^i · f_t^{j,i},

where the comprehensive parameter f_t^{j,i} represents the migration overhead delay from the previous computing node to the current computing node; that is, when the user selected node j in time slot t-1 and selects node i in time slot t, it is the comprehensive overhead of migrating the task from node j to node i.

2.2.4 Optimization Goal

The final optimization problem can be seen as finding the minimum of the weighted sum of the three delays within a given limited time range T:

min_{w_1,...,w_T} Σ_{t=1}^{T} ( ω_t^1 · d_t^comp(w_t) + ω_t^2 · d_t^comm(w_t) + ω_t^3 · d_t^mig(w_t) ),

where ω_t^1, ω_t^2, ω_t^3 are the dynamic weights of the calculation delay, communication delay, and migration overhead respectively, and can be adjusted according to the user's optimization preferences and the task's operating requirements.
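
For concreteness, the sketch below computes the per-slot weighted delay from the three components defined above. The function name and the dictionary-based node description are illustrative assumptions for the sketch, not details taken from the paper.

```python
# Minimal sketch of the per-slot user-experienced delay model (Section 2.2).
# All names here are illustrative assumptions, not the authors' implementation.

def slot_delay(lam, node, prev_node, comm_delay, migration_delay, weights):
    """Weighted delay when `node` is selected in the current time slot.

    lam             : computation amount of the task (lambda_t = v_t * n_t)
    node            : dict describing the selected node, e.g. {"id": 3, "capacity": 2e9}
    prev_node       : id of the node selected in the previous slot (j)
    comm_delay      : g_t^{i,l_t}, communication delay for the user's current base station
    migration_delay : f_t^{j,i}, overhead of migrating the service from j to this node
    weights         : (w1, w2, w3) dynamic weights for the three delay components
    """
    d_comp = lam / node["capacity"]                       # calculation delay
    d_comm = comm_delay                                   # access + transmission delay
    d_mig = 0.0 if node["id"] == prev_node else migration_delay
    w1, w2, w3 = weights
    return w1 * d_comp + w2 * d_comm + w3 * d_mig


# Example: a 4e8-operation task on a 2 GHz node, 30 ms communication delay,
# 50 ms migration overhead because the service moves from node 1 to node 3.
print(slot_delay(4e8, {"id": 3, "capacity": 2e9}, 1, 0.03, 0.05, (1.0, 1.0, 1.0)))
```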

III. SERVICE DEPLOYMENT STRATEGY MODEL

The expectation of the delay feedback r_t^i generated by each node is assumed to be a linear function of the context feature vector x_t^i when executing the task, namely:

E[r_t^i | x_t^i] = (x_t^i)^T θ_i^*,

where θ_i^* is the parameter vector of node i, a quantity that the algorithm cannot observe and needs to estimate. The ASD algorithm is an improvement of the LinUCB algorithm, so this premise also needs to be met. In order to realize this hypothesis, we first need to define θ_i^* and x_t^i according to the experimental scenario; this process can also be understood as feature selection for the contextual information. From the user experience delay formulas in the previous section, the user experience delay d_t caused by selecting node i in time slot t can be expressed as:

d_t = λ_t / c_t^i + g_t^{i,l_t} + f_t^{j,i},

where c_t^i represents the computing power of node i in time slot t, g_t^{i,l_t} represents the communication delay that occurs when the user, located in the coverage area of base station l_t in time slot t, accesses the MEC server, and f_t^{j,i} represents the service migration overhead that occurs when the user selected node j in time slot t-1 and selects node i in time slot t.

Using the accurate value λ_t that the algorithm can grasp, the position of the user in time slot t, and the node j selected in time slot t-1, the context feature vector x_t^i is defined as follows:

x_t^i = [λ_t, l_t, k_t],

where λ_t indicates the computation requirement of time slot t, l_t represents the real-time location of the user, and k_t represents the service node selected by the user in the last time slot. Transforming the context information into the parameters defined above satisfies the theoretical premise of using the contextual Multi-Armed Bandit algorithm in the service deployment scenario.
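
As a minimal illustration of this feature construction, the sketch below builds a context vector from the task size, the user location, and the previously selected node. The one-hot encoding of the categorical features is an assumption made only for the sketch; the paper states only that the vector contains λ_t, l_t, and k_t.

```python
import numpy as np

# Illustrative construction of the context feature vector x_t.
# Assumption: the categorical features l_t and k_t are one-hot encoded.

def build_context(lam, location, last_node, num_locations, num_nodes):
    loc_onehot = np.eye(num_locations)[location]   # user's current grid cell l_t
    node_onehot = np.eye(num_nodes)[last_node]     # node k_t chosen in slot t-1
    return np.concatenate(([lam], loc_onehot, node_onehot))

x_t = build_context(lam=4e8, location=2, last_node=1, num_locations=9, num_nodes=5)
print(x_t.shape)  # (1 + 9 + 5,) = (15,)
```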

It is worth noting that the decisions made by the Multi-Armed Bandit algorithm can only tend toward the optimum but cannot achieve the theoretical optimum, because the user experience delay r_t^i caused by selecting node i in time slot t cannot be accurately predicted: although a theoretically optimal node exists, the algorithm cannot grasp the true value of θ_i^* and can only estimate it as θ_t^i, from which the estimated user experience delay r̂_t^i = (θ_t^i)^T x_t^i is calculated. The difference between the selected node n and the actual best node after each decision is defined as the regret value Δ_t of this selection:

Δ_t = r_t^n - min_{i∈𝒩} r_t^i.

Δ_t measures the gap between this choice and the actual optimal choice; the smaller the value, the better the choice. For a long-term decision sequence with T time slots, the cumulative regret R_T = Σ_{t=1}^{T} Δ_t is used to measure the effectiveness of the entire decision sequence.

IV. CALCULATION OF KEY PARAMETERS AND ALGORITHM

On the premise that r_t^i and x_t^i satisfy the linear relationship above, the ASD algorithm calculates an index value P_t^i for each available node in each time slot, composed of an estimation term F_i and an exploration term S_i:

P_t^i = F_i + S_i.

In F_i, we use the estimated node parameter vector θ_t^i and the current context feature vector x_t^i of time slot t, namely:

F_i = (θ_t^i)^T x_t^i,

where θ_t^i is calculated from the historical information matrix D_i and the historical environment feedback vector c_i:

θ_t^i = A_i^{-1} b_i,

where A_i = D_i^T D_i + I, b_i = D_i^T c_i, and the historical information matrix D_i is composed of the context vectors x_t^i observed in the m previous time slots in which node i was selected as the task execution node.
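
The parameter estimation above follows the ridge-regression form of LinUCB. The sketch below shows one way it could be computed incrementally, with the exploration term written in the standard LinUCB form scaled by a per-node factor that stands in for the type impact factor u_i introduced next. It is an illustrative reconstruction, not the authors' code.

```python
import numpy as np

# LinUCB-style per-node statistics, as described in Section IV.
# D_i stacks the context vectors from slots where node i was chosen,
# c_i stacks the observed delay feedback for those slots.

class NodeModel:
    def __init__(self, dim):
        self.A = np.eye(dim)      # A_i = D_i^T D_i + I (identity before any observation)
        self.b = np.zeros(dim)    # b_i = D_i^T c_i

    def update(self, x, delay):
        """Incorporate one (context, observed delay) pair for this node."""
        self.A += np.outer(x, x)
        self.b += delay * x

    def index(self, x, explore_factor):
        """Index value P_t^i = F_i + S_i for the current context x.

        Because the goal is to MINIMISE delay, the exploration bonus is
        subtracted, mirroring the sign reversal described for S_i.
        `explore_factor` stands in for the type impact factor u_i.
        """
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b                          # theta_t^i = A_i^{-1} b_i
        F = theta @ x                                   # estimated delay
        S = -explore_factor * np.sqrt(x @ A_inv @ x)    # reversed exploration term
        return F + S
```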

The term S_i aims to reduce the long-term cumulative user experience delay. We define the node type y_i, which is positively related to the node's computing power. From the context vector x_t^i we know the amount of computation λ_t required by the current task. To evaluate the level of the current computation amount, the historical information matrix D_i can be used to calculate the historical average computation amount λ̄ by Equation (14). When λ_t > λ̄, the computation amount of the current task is relatively high; at this time, the probability that a node with strong computing ability is selected should be increased.

We replace the parameter α with the type impact factor u_i, which is defined as:

Because the goal of the strategy is to obtain the lowest environmental return value, the node with the smallest index value is selected at each time slot t; therefore the sign of the original exploration term is reversed, and S_i is obtained:

On this basis, the algorithm also considers the influence of the total number of time slots T and the total number of context information types Z on the decision result. Assume the context information contains n types of features and each type of feature has Z_i possible values. For example, if there are Z_1 kinds of computing tasks, Z_2 kinds of geographic locations accessible to users, and Z_3 kinds of service nodes, the total number of context information types Z is defined as:

Z = ∏_{i=1}^{n} Z_i.
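
A small worked example of these two bookkeeping quantities, under the assumption (as reconstructed above) that Z is the product of the per-feature value counts and that the historical average computation amount is a simple mean over the past task sizes recorded for the node:

```python
import numpy as np

# Worked example of the auxiliary quantities used by S_i (illustrative only).
Z1, Z2, Z3 = 3, 9, 5           # task kinds, user locations, service nodes
Z = Z1 * Z2 * Z3               # total number of context information types (135)

history_lambdas = np.array([2e8, 5e8, 3e8, 8e8])   # past task sizes for node i
lam_bar = history_lambdas.mean()                    # historical average (Eq. (14))
lam_t = 6e8
heavy_task = lam_t > lam_bar   # True: favour nodes with strong computing power
print(Z, lam_bar, heavy_task)
```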

The ASD algorithm (shown in Algorithm 1) takes into account the influence of node types and the total number of time slots on service deployment decisions and achieves the design goal of adaptively and dynamically adjusting the service deployment strategy with context information such as the task type. At the same time, there is no need to specify algorithm parameters manually, which avoids the adverse effect of unreasonable parameter values on the algorithm's results.
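
Algorithm 1 itself is not reproduced here, but the following sketch shows how the per-slot decision loop described in this section could fit together, reusing the NodeModel class from the earlier sketch. The `type_factor` callback is a placeholder for the type impact factor u_i, whose exact formula is not given in the extracted text; this is an illustrative reconstruction rather than the authors' code.

```python
# Sketch of the per-slot ASD decision loop (an illustrative reconstruction of
# Algorithm 1). `nodes` maps node ids to NodeModel instances; `observe_delay`
# executes the task on the chosen node and returns the experienced delay.

def asd_step(nodes, x_t, lam, type_factor, observe_delay):
    # 1. Compute the index value P_t^i for every available node.
    indices = {i: m.index(x_t, type_factor(i, lam)) for i, m in nodes.items()}
    # 2. Select the node with the smallest index (we are minimising delay).
    chosen = min(indices, key=indices.get)
    # 3. Execute the task there and observe the real user-experienced delay.
    delay = observe_delay(chosen)
    # 4. Update that node's statistics (A_i, b_i) with the new observation.
    nodes[chosen].update(x_t, delay)
    return chosen, delay
```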

V. EXPERIMENT AND RESULTS

    5.1 Experiment Environment

The operating system is Windows 10 64-bit, the processor is an Intel Core i7-6700HQ @ 2.60 GHz, the memory is 16 GB, and the Python version used is 3.6.5.

    5.2 Method Design

The user makes an independent service deployment decision in each time slot t ∈ {0, 1, 2, ..., T} within a continuous time range T. In order to compare the effects of different strategies more intuitively, this article places the following basic constraints on the experimental scenario (a minimal simulation sketch of this setup follows the constraint list):

Constraint 1: All MEC service nodes are distributed in a 3×3 grid network; the user moves to an adjacent grid cell every 20 time slots, and the direction of movement is random.

Constraint 2: The length of a time slot is greater than the minimum time interval between two tasks; that is, the user only needs to make a task deployment decision once per time slot.

Constraint 3: The communication delay part of the user experience delay is related to the distance between the user's geographic location and the location of the service deployment node, as well as to the data volume of the task.

Figure 2. Comparison of average delay of ε-greedy strategies with different ε values in simplified scenarios.

Constraint 4: The computing power of a computing node changes over time; the frequency of a single node conforms to a certain distribution, and the frequencies of nodes of the same type are independent and identically distributed.
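
Under the constraints above, a minimal simulation of the environment might look like the following. The 3×3 grid, the 20-slot movement period, and the i.i.d. per-slot node frequencies come from the constraints; the specific distribution and the number of nodes are assumptions made only for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

GRID = 3                      # 3x3 grid of cells (Constraint 1)
MOVE_PERIOD = 20              # user moves to an adjacent cell every 20 slots
NUM_NODES = 5                 # assumed number of MEC nodes for the sketch

def move(cell):
    """Random move to an adjacent cell on the 3x3 grid."""
    r, c = divmod(cell, GRID)
    steps = [(dr, dc) for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]
             if 0 <= r + dr < GRID and 0 <= c + dc < GRID]
    dr, dc = steps[rng.integers(len(steps))]
    return (r + dr) * GRID + (c + dc)

def node_capacity():
    """Per-slot capacity draw; same-type nodes are i.i.d. (Constraint 4).
    The normal distribution here is an assumption for the sketch."""
    return max(rng.normal(2e9, 2e8), 1e8)

cell = 4                      # start in the centre cell
for t in range(100):
    if t and t % MOVE_PERIOD == 0:
        cell = move(cell)
    capacities = [node_capacity() for _ in range(NUM_NODES)]
    # ... one service deployment decision per slot would be made here (Constraint 2) ...
```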

    5.3 Evaluation Index

5.3.1 Accumulated Regret (ACR)

In a single time slot t, the theoretically best node is determined from the estimated user experience delay r_t^i that would be generated if node i were selected, and the regret of a single decision is:

Δ_t = r_t^n - min_{i∈𝒩} r_t^i,

where n is the node actually selected. Δ_t measures the correctness of this choice; the smaller the value, the better the choice. Accumulating the decision regret over the first T time slots gives the accumulated regret ACR:

ACR = Σ_{t=1}^{T} Δ_t.

5.3.2 Average Delay (AVD)

The average delay is the average of the user experience delay over the first T time slots, defined as:

AVD = (1/T) Σ_{t=1}^{T} d_t.

It expresses how the decision-making effect changes as the time slot and the number of decisions increase. The lower the value, the closer the strategy is to the theoretically optimal strategy, and the better the user experience brought by service deployment.

5.3.3 Regret Optimization Ratio (ROR)

For all n strategies, we calculate the average regret AVR (average regret) of each strategy over T time-slot decisions, select the largest average regret among them as the optimization benchmark, and then quantitatively compare how well each strategy reduces the regret of a single decision. Assuming that the strategy w′ has the largest average regret AVR(w′), the regret optimization ratio of a strategy w is defined as:

ROR(w) = (AVR(w′) - AVR(w)) / AVR(w′).

The decision regret of the theoretically optimal offline service deployment strategy is always 0, so its optimization ratio is 100%. The higher the value, the better the optimization effect of the strategy.
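
Putting the three indices together, the following sketch computes ACR, AVD, and ROR from per-slot records. The ROR formula is written in the normalized form implied above (0% for the worst strategy, 100% for the offline optimum) and should be read as a reconstruction rather than the paper's exact expression.

```python
import numpy as np

# Evaluation indices from Section 5.3, computed from per-slot records.
# delays[t]      : user-experienced delay of the chosen node in slot t
# best_delays[t] : delay of the theoretically best node in slot t

def acr(delays, best_delays):
    return float(np.sum(np.asarray(delays) - np.asarray(best_delays)))

def avd(delays):
    return float(np.mean(delays))

def ror(avr_strategy, avr_worst):
    """Regret optimization ratio relative to the worst strategy's average regret."""
    return 100.0 * (avr_worst - avr_strategy) / avr_worst

delays = [0.30, 0.22, 0.18, 0.17]
best = [0.15, 0.15, 0.15, 0.15]
print(acr(delays, best), avd(delays), ror(avr_strategy=0.05, avr_worst=0.12))
```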

    5.4 Method Comparison

We compare ASD with several existing methods: the service deployment strategy based on ε-greedy [19], the strategy based on UCB [20], the strategy based on LinUCB [16], the AUSP strategy [15], a random strategy, a follow strategy, an all-local strategy, and an all-cloud-computing-center strategy. The UCB and ε-greedy algorithms represent the classic Multi-Armed Bandit idea and are compared to verify the performance of the improved contextual algorithm. The comparison with the LinUCB algorithm shows that the ASD algorithm solves the parameter-setting problem. The AUSP algorithm is a similar work chosen to show our optimization performance.

Figure 3. Average delay comparison of LinUCB strategies with different α values in complex scenarios.

    5.5 Algorithm Parameter Setting and Result Analysis

The setting of the α value has an important effect on the decision-making result, but the α value required to obtain the optimal decision differs under different experimental environment settings. In the ε-greedy algorithm, the random-exploration threshold ε is likewise not easy to specify before the experiment. In order to observe the best achievable effect of the LinUCB strategy and the ε-greedy strategy in a specific experimental scenario, it is necessary to test different α and ε values.

It can be seen from Figure 2 that, as the rewards of the nodes are gradually learned in the later stage of the experiment, a lower average delay can be obtained. When the number of time slots is 1000 and ε = 0.03, the ε-greedy strategy obtains the lowest long-term average delay, so this value is used in the algorithm comparison experiment.

We compare the three strategies that use contextual information, testing the parameters with the average delay as the indicator; the results are shown in Figure 3. It can be seen that when the number of time slots is 1000 and α = 1.5, the LinUCB strategy obtains a lower long-term average delay and accumulated regret.

    5.5.1 Simple Scene

In the simplified scenario, the experimental results of the Bandit algorithm strategies and the other strategies are shown in Figure 4.

A simple service deployment strategy has no learning or dynamic adjustment capability, so its decision feedback value stays in a relatively stable state; the average regret and average delay basically do not change with the number of time slots, and the accumulated regret therefore grows roughly linearly with the number of time slots.

In the simplified scenario, the three service deployment strategies that use contextual information can all achieve faster convergence and a lower average delay and accumulated regret.

    5.5.2 Complex Scene

Figure 4. Comparison of experimental results between the ASD strategy and other strategies in the simplified scenario (top) and the complex scenario (bottom).

In the complex scenario, both the user location and the task type change over time, and the same node provides different service capability to users at different locations and for different task types. In this scenario, the comparison results of the ASD strategy, the UCB strategy, the ε-greedy strategy, and the four simple strategies are shown in Figure 4.

The optimization effect of the UCB strategy and the ε-greedy strategy becomes very limited, only slightly better than the random strategy. The simple strategies likewise ignore changes in the user's situation and cannot adjust dynamically. Therefore, these strategies are no longer optimal, and their average delay and accumulated regret remain at a relatively high level.

The ASD strategy has a higher average delay in the initial stage of the experiment, even higher than the random strategy, which has the worst overall effect, because it requires a certain number of explorations to gradually adjust its strategy using context information and system feedback. When the number of slots reaches about 400, its average delay becomes the lowest and keeps a downward trend; its overall optimization effect is the best.

Figure 5. Comparison of experimental results of the ASD strategy, the LinUCB strategy, and the AUSP strategy in complex scenarios.

It can be seen that the simple Multi-Armed Bandit strategies, which achieve a certain optimization effect in the simple scene, perform much worse in a complex environment that introduces changing attributes of users and nodes. The ASD strategy effectively avoids the adverse impact of environmental complexity on the decision-making effect, makes effective use of context information, and still maintains a good delay optimization effect, indicating that this strategy does achieve adaptive user-side service deployment. Note that the "follow" strategy is close to optimal in these experiments because of the environmental parameters: since the overall communication delay is set to be high, the "follow" strategy, which avoids communication delay, naturally becomes a good strategy, while the calculation delay makes the all-cloud strategy better. The key point is that their effects are entirely controlled by the environment and cannot be adjusted dynamically.

The comparison results of the ASD strategy, the AUSP strategy, and the LinUCB strategy are shown in Figure 5. The LinUCB strategy with α = 1.5 has the highest average delay in the first 100 time slots, but its average delay drops the fastest in the first 300 time slots, and its final accumulated regret and average delay are the lowest of the three strategies. The final accumulated regret and average delay of the ASD strategy are higher than those of the parameter-tuned LinUCB strategy and lower than those of the AUSP strategy, so its decision optimization effect lies between the two. Its final average delay is about 6.8% higher than the theoretical minimum average delay, which is within an acceptable range.

Table 2. Regret optimization ratio results of different strategies.

To quantitatively compare the actual regret optimization effects of the strategies in the complex scenario, we take the theoretically optimal offline strategy and the random strategy (the worst actual comprehensive effect) as the benchmarks; the regret optimization ratios of the various service deployment strategies are shown in Table 2. It can be seen from the table that, in the preset complex experimental scenario, the ASD strategy based on environmental context information optimizes the average user experience delay to 69.48% of that of the offline optimal service deployment strategy based on global system information, which is higher than LinUCB, UCB, and the other simple service deployment strategies. Because a parameter comparison test cannot be carried out before the decision, it is not easy to specify appropriate algorithm parameters in an actual service deployment environment, so the parameter-free ASD strategy has practical value.

VI. CONCLUSION

Aiming at the service deployment problem in mobile edge computing networks, this paper models service deployment in terms of the parameter types involved in the contextual Multi-Armed Bandit problem and uses a reinforcement learning algorithm to propose an adaptive service deployment strategy, ASD, to reduce the long-term user experience delay and regret. Based on effective use of the context information and historical decision information of users and nodes in the scene, the algorithm parameters are redefined using quantities such as the node type, task type, and the total number of time slots, so the strategy no longer depends on manually specified parameters that affect the optimization effect. In the experimental results, indicators such as accumulated regret, average delay, and regret optimization ratio are better than those of the other algorithms. The ASD strategy handles the features of various complex scenarios and does not require additional system information. Larger experiments, in which computing the theoretical optimal route requires too much time and data storage, provide a good starting point for discussion and further research.

    ACKNOWLEDGEMENT

This work is supported in part by the Industrial Internet Innovation and Development Project "Industrial robot external safety enhancement device" (TC200H030) and the cooperation project between Chongqing municipal undergraduate universities and institutes affiliated to CAS (HZ2021015).
