
    An Adaptive User Service Deployment Strategy for Mobile Edge Computing

    China Communications, 2022, Issue 10

    Gang Li, Jingbo Miao, Zihou Wang, Yanni Han, Hongyan Tan, Yanwei Liu, Kun Zhai

    1 Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China

    2 School of Cyber Security, University of Chinese Academy of Sciences, Beijing 101408, China

    3 National Computer Network Emergency Response Technical Team/Coordination Center of China, Beijing 100029, China

    4 Sunwise Intelligent Technology Company, Beijing 100190, China

    *Corresponding author, email: wangzh@cert.org.cn

    Abstract: Mobile edge computing (MEC) is a cloud server running at the edge of a mobile network, which can effectively reduce network communication delay. However, due to the numerous edge servers and devices in MEC, there may be multiple servers and devices that can provide services to the same user simultaneously. This paper proposes a user-side adaptive service deployment algorithm, ASD (Adaptive Service Deployment), based on reinforcement learning. Without relying on complex system information, and with knowledge of only a few task and user attributes, it can make effective service deployment decisions; it analyzes and redefines the key parameters of existing algorithms and dynamically adjusts the strategy according to the task type and the available node types to optimize the user experience delay. Experiments show that the ASD algorithm can implement user-side decision-making for service deployment. While effectively improving the parameter settings of the traditional Multi-Armed Bandit algorithm, it reduces user-perceived delay and enhances service quality compared with other strategies.

    Keywords: edge computing; adaptive algorithm; reinforcement learning; computing offloading; service deployment

    I.INTRODUCTION

    With the widespread use of smart devices and the development of Internet of Things technology, the complexity of the computing tasks that mobile devices need to handle continues to increase [1]. In the traditional cloud computing framework, users only consume data delivered from the cloud. However, with the rapid growth of the data generated by edge terminals and of their computing requirements, network bandwidth has become a bottleneck that limits the efficiency of the entire computing network [2]. Mobile Edge Computing (MEC) is a cloud server that runs at the edge of a mobile network. It has high computing power and decentralizes the service provision capabilities originally concentrated in the network center to the edge network closer to users, relying on the combination of small edge computing platforms and the mobile edge network to enhance the user experience [3]. However, due to the large number of edge servers and devices in MEC, there may be multiple servers and devices that can provide services to the same user simultaneously. In addition, when the user moves, the computing resources used by related applications may switch between multiple edge nodes. Computing offloading is one of MEC's key technologies [4, 5], which can be used to migrate computing-intensive tasks from mobile devices to MEC servers.

    The current research work can be roughly divided into two categories: strategies based on global system information and designs based on reinforcement-learning methods. Table 1 lists the related classic works with their corresponding environmental settings and concerns. In service strategies based on global system information, each node in the MEC network has accurate system-level information. Wang et al. [6] used user location distribution, preferences, system load, database location, and other information to predict and calculate the future data transfer, data processing, and service session transfer overhead. Yang L et al. [7] predicted the load that the user will request in the future based on the user's mobility pattern and other conditions. Nadembega A et al. [8] proposed a mobility-based service migration prediction framework, MSMP, and planned the data transmission sequence from the mobile data center to the user according to the user's mobility pattern. Ouyang T et al. [15] expressed the decision status of each node in different time slots as graph nodes and the service migration cost as edge weights. The above work requires mastering or predicting accurate future user mobility information from historical information; however, in a real environment users often do not follow a fixed mobility pattern, and it is challenging to predict their movement trends. Taleb T [9] used a two-dimensional Markov decision process to analyze the service migration overhead. Ouyang T et al. [10] used Lyapunov optimization to transform the long-term optimization problem into a series of real-time optimization problems. These methods are based on global system information, but edge base stations or ordinary users cannot obtain complete system information at the edge of the network and execute user-side decisions for service deployment.

    Table 1. Summary of two types of classic works for computing offloading.

    The strategies based on reinforcement learning transform the MEC service deployment problem into a scenario similar to the Multi-Armed Bandit (MAB) problem. Kao YH et al. [11] made assumptions about the device and channel conditions and quickly learned the unpredictable resource availability and channel information in the dynamic task environment. Dai P et al. [13] proposed a method called UL (Utility-table based Learning), which applies the Multi-Armed Bandit model in a vehicular network, treating the MEC servers as the arms to be selected. Sun Y et al. [12] combined MAB theory and the UCB algorithm to develop a learning-based task offloading framework. The above work ignores the fact that the status of the selectable objects and the user's needs change over time. Li L et al. [16] first proposed the context-related Bandit algorithm LinUCB in 2014 to solve Yahoo's personalized news recommendation problem. In this algorithm, the expected reward of each object is linearly related to the object's feature vector; after a selection is made, the parameters of the linear function are updated according to the returned value, so that the selection strategy is dynamically updated. G. Nikolov et al. [14] proposed an improved version of the LinUCB algorithm applied to the problem of wireless interface selection, where channel quality parameters are converted into estimated data rates before interface selection is performed. Li T et al. [17] proposed a privacy-preserving task offloading scheme for MEC, which formulates the task assignment and privacy protection problems as semi-parametric contextual Multi-Armed Bandit problems and then designs PAOTO (Privacy-Aware Online Task Offloading) based on the Thompson sampling architecture. The algorithm improves delay and energy consumption toward the optimum without requiring system-level information, thereby protecting privacy.

    Figure 1. ASD user service deployment structure.

    This paper proposes a user-side adaptive user service deployment strategy. Based on the Multi-Armed Bandit algorithm in reinforcement learning, it can execute effective service deployment decisions.

    II.PROBLEM DEFINITION

    2.1 Scene Deployment

    Discretize the continuous timeline into time slots t ∈ 𝒯 = {1, 2, 3, ..., T}. In each slot the user chooses a MEC server to provide computing services; that is, a computing offloading task occurs. The process can be divided into four steps: 1) the mobile user makes a task offloading or migration decision based on its environment; 2) the task-related data are sent to the MEC server; 3) the MEC server completes the computation task; 4) the MEC server returns the computation result to the mobile user.

    The vector w_t = [w_t^1, ..., w_t^M, w_t^c, w_t^l] represents the dynamic service deployment decision of time slot t. Among them, w_t^i (i = 1, ..., M), w_t^c and w_t^l respectively indicate whether the i-th MEC server, the cloud computing center, or the user's local device is selected to perform the computing task of slot t. Let ℳ denote the set of all MEC servers that can provide computing services, c the cloud computing center, and l the user's local device, so that 𝒩 = ℳ ∪ {c, l} represents all computing nodes that can perform the current task. Because the user can select only one object to execute the task in each time slot, the decision vector is subject to the following restriction:
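    The restriction itself is not reproduced in this text; a plausible LaTeX reconstruction, following the statement that exactly one node is selected per time slot, is:

    $$w_t^i \in \{0,1\}\ \ \forall i \in \mathcal{N}, \qquad \sum_{i=1}^{M} w_t^i + w_t^c + w_t^l = 1.$$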

    2.2 Optimization Goal

    In the mobile edge computing network architecture, it is generally believed that the user experience delay depends on the calculation delay and the communication delay [10, 18]. Considering that this research scenario involves task migration, the additional overhead of task migration also needs to be counted. Figure 1 shows the stages at which each delay that needs to be considered arises in the service deployment scenario. We redefine the parameters to improve the classic LinUCB algorithm. Note that the theoretically derived way of solving the existing problems through parameterization is not limited to this scenario.

    2.2.1 Calculation Delay

    λ_t represents the amount of computation of the offloaded task in time slot t, and c_t^i represents the computing capacity of node i available in time slot t, that is, the number of basic instruction operations it can complete per second. Given the service deployment decision w_t of time slot t, the calculation delay can be expressed as:

    λ_t = v_t · n_t, where v_t indicates the amount of data involved in the user task and n_t indicates its computational complexity.
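    The calculation-delay equation is not preserved above; a hedged reconstruction from these definitions (with d_t^{cal} our own label for the term) is:

    $$d_t^{cal} = \sum_{i\in\mathcal{N}} w_t^i\,\frac{\lambda_t}{c_t^i} = \sum_{i\in\mathcal{N}} w_t^i\,\frac{v_t\,n_t}{c_t^i}.$$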

    2.2.2 Communication Delay

    The communication delay is composed of two parts: when the user decides to offload the task to an external computing node, an access delay is incurred; and if the service is not bound to the corresponding server, a transmission delay is incurred by the communication between servers.

    Given the service deployment decision w_t of time slot t, the communication delay perceived by the user can be further expressed as:

    g_{i,l_t}^t represents the communication delay of the user, where l_t represents the base station connected in time slot t, which depends on which base station's coverage area the user's real-time location falls within.
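    The communication-delay equation is likewise missing here. Assuming that local execution incurs no communication delay, a plausible form (with d_t^{com} our own label) is:

    $$d_t^{com} = \sum_{i\in\mathcal{M}\cup\{c\}} w_t^i\, g_{i,l_t}^t,$$

    where g_{i,l_t}^t aggregates the access delay to base station l_t and any server-to-server transmission delay.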

    2.2.3 Migration Overhead Delay

    Users are mobile, and their connected base stations change as they move, accompanied by the migration of computing tasks. The migration overhead delay can be expressed as:

    Where the comprehensive parameter f_{j,i}^t represents the migration overhead delay from the previous computing node j to the current computing node i. That is, the above equation gives the comprehensive overhead of task migration when the user selected node j in time slot t-1 and selects node i in time slot t.
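    A hedged reconstruction of the migration-overhead equation, with j the node selected in slot t-1 and d_t^{mig} our own label, is:

    $$d_t^{mig} = \sum_{i\in\mathcal{N}} w_t^i\, f_{j,i}^t,$$

    where presumably f_{j,i}^t = 0 when j = i, since no migration occurs if the same node is kept.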

    2.2.4 Optimize Goal

    The final optimization problem can be seen as finding the minimum value of the weighted sum of three delays within a given limited time range T:

    Where ω_t^1, ω_t^2, ω_t^3 are the dynamic weights of the calculation delay, the communication delay, and the migration overhead, respectively; they can be adjusted according to the user's optimization preferences and the operating requirements of the task.
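    Putting the three delay terms together, the optimization objective referred to above can be written (in our notation) as:

    $$\min_{w_1,\dots,w_T}\ \sum_{t=1}^{T}\left(\omega_t^1\, d_t^{cal} + \omega_t^2\, d_t^{com} + \omega_t^3\, d_t^{mig}\right),$$

    subject to the single-node selection constraint on each decision vector w_t.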

    III.SERVICE DEPLOYMENT STRATEGY MODEL

    The expectation of the delay feedback r_t^i generated by each node is a linear function of the context feature vector x_t^i when executing the task, namely:
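    In standard contextual-bandit notation, this linearity assumption reads:

    $$\mathbb{E}\left[r_t^i \mid x_t^i\right] = (x_t^i)^{\top}\theta_i^{*}.$$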

    Where θ_i^* is the parameter vector of the node, a quantity that the algorithm cannot observe and needs to estimate. The ASD algorithm is an improvement of the LinUCB algorithm, so this premise also needs to be met. In order to realize this hypothesis, we first need to define θ_i^* and x_t^i according to the experimental scenario. This process can also be understood as feature selection for the contextual information. From the user experience delay formula of the previous section, the user experience delay d_t caused by the selection of node i in time slot t can be expressed as:
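    The expression itself is not reproduced here; combining the delay terms of Section II, a plausible form is:

    $$d_t = \omega_t^1\,\frac{\lambda_t}{c_t^i} + \omega_t^2\, g_{i,l_t}^t + \omega_t^3\, f_{j,i}^t.$$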

    c_t^i represents the computing power of node i in time slot t; g_{i,l_t}^t represents the communication delay incurred when the user, who in time slot t is in the coverage area of base station l_t, accesses MEC server i; and f_{j,i}^t represents the service migration overhead incurred when the user selected node j in time slot t-1 and selects node i in time slot t.

    Using the accurate value λ_t that the algorithm can grasp, the position of the user in time slot t, and the node j selected in time slot t-1, the context feature vector x_t^i is defined as follows:

    Where λ_t indicates the computation requirement of time slot t, l_t represents the real-time location of the user, and k_t represents the service node selected by the user in the previous time slot. Transforming the context information into the parameters defined above satisfies the theoretical premise of applying the contextual Multi-Armed Bandit algorithm to the service deployment scenario.
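    The exact encoding of x_t^i is not shown in this text; one reading consistent with the description is a feature vector built from the three context quantities, for example

    $$x_t^i = \left[\lambda_t,\ l_t,\ k_t\right]^{\top},$$

    with the location and previous node possibly one-hot encoded so that the linear model above can capture per-node effects.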

    It is worth noting that the decisions made by the Multi-Armed Bandit algorithm can only tend toward the optimum but cannot reach the theoretical optimum, because the user experience delay r_t^i caused by the selection of node i in time slot t cannot be accurately predicted. Although there is a theoretically optimal node, the algorithm cannot grasp the true value of θ_i^*; it can only estimate it as θ_t^i and compute the estimated user experience delay r_t^i = (θ_t^i)^T x_t^i for each candidate node. The difference between the selected node n and the actual best node after each decision is defined as the regret value Δ_t of this selection:

    Δ_t measures the gap between this choice and the actual optimal choice; the smaller the value, the better the choice. For a long-term decision sequence with T time slots, the cumulative regret value R_T is used to measure the effectiveness of the entire decision sequence.
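    In symbols, writing n for the selected node and i* for the theoretically optimal node (whose symbol is not preserved in this text), the single-slot regret and the cumulative regret are:

    $$\Delta_t = r_t^{n} - r_t^{i^{*}}, \qquad R_T = \sum_{t=1}^{T}\Delta_t.$$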

    IV.CALCULATION OF KEY PARAMETERS AND ALGORITHM

    On the premise that r_t^i and x_t^i satisfy the linear relationship above, the ASD algorithm calculates the index value P_t^i of each available node in each time slot:

    In F_i, we use the estimated node parameter vector θ_t^i and the current context feature vector x_t^i of time slot t, namely:

    Among them, θ_t^i is calculated from the historical information matrix D_i and the historical environment feedback vector c_i:

    where A_i = D_i^T D_i + I and b_i = D_i^T c_i, and the historical information matrix D_i is composed of the context vectors x_t^i of the previous m time slots in which node i was selected as the task execution node.
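    For concreteness, the ridge-regression estimate and a LinUCB-style index can be sketched in Python as below. This is a minimal illustration under our own assumptions rather than the authors' implementation; in ASD the fixed exploration weight alpha is replaced by the type impact factor u_i introduced next.

```python
import numpy as np

def node_index(D, c, x, alpha):
    """LinUCB-style index of one node: ridge estimate of theta_i plus an exploration term.

    D: (m, d) matrix of context vectors observed when this node was chosen.
    c: (m,) vector of the corresponding delay feedback values.
    x: (d,) current context vector; alpha: exploration weight (u_i in ASD).
    """
    A = D.T @ D + np.eye(D.shape[1])                    # A_i = D_i^T D_i + I
    b = D.T @ c                                         # b_i = D_i^T c_i
    theta = np.linalg.solve(A, b)                       # estimated theta_i
    mean = theta @ x                                    # estimated user-experience delay
    width = alpha * np.sqrt(x @ np.linalg.solve(A, x))  # confidence width
    return mean - width                                 # delays are minimized, so explore downward
```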

    The term S_i aims to reduce the long-term cumulative user experience delay. It defines the node type y_i, which is positively related to the node's computing power. From the context vector x_t^i, the amount of computation λ_t required by the current task is known. To evaluate the level of the current computation amount, the historical information matrix D_i can be used to calculate the historical average computation amount by Equation (14). When the current λ_t exceeds this historical average, the computation amount of the current task is relatively high; at this time, the probability that a node with strong computing capability is selected should be increased.

    Replace the parameter α with the type impact factor u_i, which is defined as:

    Because the goal of the strategy is to obtain the lowest environmental return value, the node with the smallest index value is selected in each time slot t, so the sign of the original term is reversed, and S_i is obtained:

    On this basis, the algorithm also considers the influence of the total number of time slots T and the total number of context information types Z on the decision result. Assuming that the context information contains n types of features, each feature type has Z_i possible values. For example, assuming that there are Z_1 kinds of computing tasks, Z_2 kinds of geographic locations accessible to users, and Z_3 kinds of service nodes, the total number of context information types Z is defined as:
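    With n feature types, the definition just described is presumably the product of the individual cardinalities:

    $$Z = \prod_{i=1}^{n} Z_i,\qquad \text{e.g. } Z = Z_1\, Z_2\, Z_3 \text{ in the example above}.$$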

    The ASD algorithm (shown in Algorithm 1) takes into account the influence of the node types and the total number of time slots on service deployment decisions and achieves the design goal of adaptively and dynamically adjusting the service deployment strategy according to context information such as the task type. At the same time, there is no need to specify algorithm parameters manually, which avoids unreasonable parameter values and their adverse effect on the algorithm's results.
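    As Algorithm 1 itself is not reproduced in this text, the following Python sketch only illustrates the decision loop described above; node_index is the estimator sketched earlier, and type_impact_factor is a hypothetical helper standing in for the u_i computation.

```python
import numpy as np

def asd_step(nodes, contexts, history):
    """Pick the node with the smallest index value for the current time slot."""
    best_node, best_index = None, np.inf
    for i in nodes:
        D, c = history[i]                         # past contexts / delay feedback of node i
        u_i = type_impact_factor(i, D)            # hypothetical helper for the u_i factor
        idx = node_index(D, c, contexts[i], u_i)  # index from the earlier sketch
        if idx < best_index:                      # ASD selects the minimum-index node
            best_node, best_index = i, idx
    return best_node

def update_history(history, node, x, delay):
    """Append the observed context and delay feedback of the selected node."""
    D, c = history[node]
    history[node] = (np.vstack([D, x]), np.append(c, delay))
```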

    V.EXPERIMENT AND RESULTS

    5.1 Experiment Environment

    The operating system is Windows 10 (64-bit), the processor is an Intel Core i7-6700HQ @ 2.60 GHz, the memory is 16 GB, and the Python version used is 3.6.5.

    5.2 Method Design

    The user makes an independent service deployment decision in each time slot t ∈ {0, 1, 2, ..., T} over a continuous time range T. In order to compare the effects of different strategies more intuitively, this article places the following basic constraints on the experimental scenario:

    Constraint 1: All MEC service nodes are distributed in a 3×3 grid network; the user moves to an adjacent grid cell every 20 time slots, and the direction of movement is random.

    Constraint 2: The length of a time slot is greater than the minimum time interval between two tasks; that is, the user only needs to make a task deployment decision once per time slot.

    Constraint 3: The communication delay part of the user experience delay is related to the user's geographic location, the distance to the service deployment node, and the data volume of the task.

    Figure 2. Comparison of the average delay of ε-greedy strategies with different ε values in simplified scenarios.

    Constraint 4: The computing power of the computing nodes changes over time; the frequency of a single node follows a certain distribution, and the frequencies of nodes of the same type are independent and identically distributed.

    5.3 Evaluation Index

    5.3.1 Accumulated Regret(ACR)

    In a single time slot t, the theoretically best node is determined from the estimated user experience delay r_t^i generated after node i is selected, and the regret value of a single decision is calculated:

    Δ_t measures the correctness of this choice; the smaller the value, the better the choice. Accumulating the decision regret values of the first T time slots gives the cumulative regret ACR:

    5.3.2 Average Delay(AVD)

    The average delay is the average value of the user experience delay over the previous T time slots, which is defined as follows:
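    Written out, with d_t denoting the user experience delay actually observed in slot t, this is:

    $$AVD(T) = \frac{1}{T}\sum_{t=1}^{T} d_t.$$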

    It expresses the dynamic variability of the decision-making effect as the number of time slots and decisions increases. The lower the value, the closer the strategy is to the theoretically optimal strategy, and the better the user experience brought by the service deployment.

    5.3.3 Regret Optimization Ratio(ROR)

    For all n strategies, we calculate the average regret value AVR (average regret) of the T time-slot decisions of each strategy for comparison and select the largest average regret value among them as the optimization benchmark, and then quantitatively compare how much each strategy reduces the regret value of a single decision. Assuming that the average regret value of strategy w′ is the largest, namely AVR(w′), the regret optimization ratio of a strategy is defined as:
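    The formula itself is not preserved here; a reconstruction consistent with this description and with the 100% value assigned to the offline optimal strategy below is:

    $$ROR(w) = \frac{AVR(w') - AVR(w)}{AVR(w')} \times 100\%.$$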

    The decision regret value of the theoretically optimal offline service deployment strategy is always 0, so its optimization ratio is 100%. Therefore, the higher the value, the better the optimization effect of the strategy.

    5.4 Method Comparison

    We compare ASD with several existing methods: the service deployment strategy based on ε-greedy [19], the service deployment strategy based on UCB [20], the service deployment strategy based on LinUCB [16], the AUSP strategy [15], a random strategy, a follow strategy, an all-local strategy, and an all-cloud-computing-center strategy. The UCB and ε-greedy algorithms represent the idea of the classic Multi-Armed Bandit algorithm and are compared to verify the performance of the improved contextual algorithm. Comparing with the LinUCB algorithm shows that the ASD algorithm solves the problem of parameter setting. The AUSP algorithm is a similar work chosen to show our optimization performance.

    Figure 3. Average delay comparison of LinUCB strategies with different α values in complex scenarios.

    5.5 Algorithm Parameter Setting and Result Analysis

    The setting of the α value has an important effect on the decision-making result, and under different experimental environment settings the α value required to obtain the optimal decision differs. In the ε-greedy algorithm, the random exploration threshold ε is likewise not easy to specify before the experiment. In order to observe the best achievable effect of the LinUCB strategy and the ε-greedy strategy in a specific experimental scenario, it is necessary to test different α and ε values.

    It can be seen from Figure 2 that, as the rewards of the nodes are gradually learned in the later stage of the experiment, a lower average delay is obtained. In the algorithm comparison experiment, with ε = 0.03 and 1000 time slots, the ε-greedy strategy obtains the lowest long-term average delay.

    We compare the three strategies that use contextual information, taking the average delay as the indicator for parameter testing; the results are shown in Figure 3. It can be seen that with 1000 time slots and α = 1.5, the LinUCB strategy obtains a lower long-term average delay and accumulated regret.

    5.5.1 Simple Scene

    In the simplified scenario, the experimental results of the Bandit algorithm strategies and the other strategies are shown in Figure 4.

    The simple service deployment strategies do not have the characteristics of learning and dynamic adjustment, so their decision feedback values are always in a relatively stable state, and the average regret value and average delay basically do not change with the number of time slots; consequently, the accumulated regret grows roughly linearly with the number of time slots.

    In the simplified scenario, the three service deployment strategies using contextual information can all achieve faster convergence and lower average delay and accumulated regret values.

    5.5.2 Complex Scene

    Figure 4. Comparison of experimental results between the ASD strategy and other strategies in the simplified scenario (top) and complex scenarios (bottom).

    In complex scenarios, both the user location and the task type change over time, and the same node has different capabilities to provide services for users in different locations and with different task types. In this scenario, the comparison results of the ASD strategy, the UCB strategy, the ε-greedy strategy, and the four simple strategies are shown in Figure 4.

    The optimization effect of the UCB strategy and the ε-greedy strategy becomes very limited, only slightly better than the random strategy. The simple strategies likewise ignore changes in the user's situation and cannot adjust dynamically. Therefore, these strategies are no longer optimal, and their average delay and accumulated regret values remain at a relatively high level.

    The ASD strategy has a higher average delay in the initial stage of the experiment, even higher than the random strategy, which has the worst overall effect; it requires a certain number of explorations to gradually adjust the strategy using the context information and the system feedback values. When the number of time slots reaches about 400, its average delay becomes the lowest and keeps a downward trend; the overall optimization effect is the best.

    Figure 5. Comparison of experimental results of the ASD strategy, the LinUCB strategy, and the AUSP strategy in complex scenarios.

    It can be seen that the simple Multi-Armed Bandit strategies, which achieve a certain optimization effect in the simple scene, perform much worse in a complex environment that introduces changing attributes of users and nodes. The ASD strategy effectively avoids the adverse impact of environmental complexity on the decision-making effect, makes effective use of the context information, and still maintains a good delay optimization effect, indicating that this strategy does achieve user-adaptive service deployment. Note that the "follow" strategy is close to optimal in these experiments because of the environmental parameters. More specifically, since the overall communication delay is set to be high, the "follow" strategy, which avoids communication delay, naturally becomes a good strategy, while the calculation delay makes the cloud strategy better. The key point is that their effects are entirely controlled by the environment and cannot be adjusted dynamically.

    The comparison results of the ASD strategy, the AUSP strategy, and the LinUCB strategy are shown in Figure 5. The LinUCB strategy with α = 1.5 has the highest average delay in the initial 100 time slots, but its average delay drops the fastest within the first 300 time slots, and its final accumulated regret value and average delay are the lowest among the three strategies. The final accumulated regret value and average delay of the ASD strategy are higher than those of the parameter-optimized LinUCB strategy and lower than those of the AUSP strategy, so its decision optimization effect lies between the two. Its final average delay is about 6.8% higher than the theoretical minimum average delay, which is within an acceptable range.

    Table 2. Regret optimization ratio results of different strategies.

    To quantitatively compare the actual regret optimization effects of the various strategies in complex scenarios, taking the theoretically optimal offline strategy and the random strategy with the worst overall effect as references, the regret optimization ratio results of the various service deployment strategies are obtained as shown in Table 2. It can be seen from the table that, in the preset complex experimental scenario, the ASD strategy based on environmental context information optimizes the average user experience delay to 69.48% of the offline optimal service deployment strategy based on global system information, higher than LinUCB, UCB, and the other simple service deployment strategies. Because parameter comparison tests cannot be carried out before decisions are made, it is not easy to specify appropriate algorithm parameters in an actual service deployment environment, so the parameter-free ASD strategy has practical value.

    VI.CONCLUSION

    Aiming at the service deployment problem in mobile edge computing networks, this paper models service deployment in terms of the parameter types involved in the contextual Multi-Armed Bandit problem and uses a reinforcement learning algorithm to propose an adaptive service deployment strategy that reduces the long-term user experience delay and regret value. The ASD strategy algorithm is proposed. Based on the effective use of the context information and historical decision information of the users and nodes in the scene, the algorithm parameters are redefined using quantities such as the node type, the task type, and the total number of time slots, removing the influence of manually specified parameters on the optimization effect. In the experimental results, indicators such as the accumulated regret, the average delay, and the regret optimization ratio are better than those of other algorithms. The ASD strategy accommodates the features of various complex scenarios and does not require additional system information. For larger experiments, computing the theoretically optimal schedule requires too much time and data storage, which provides a good starting point for discussion and further research.

    ACKNOWLEDGEMENT

    This work is supported in part by the Industrial Internet Innovation and Development Project "Industrial robot external safety enhancement device" (TC200H030) and the Cooperation Project between Chongqing Municipal undergraduate universities and institutes affiliated to CAS (HZ2021015).
