
    ECC: Edge Collaborative Caching Strategy for Differentiated Services Load-Balancing

    Computers, Materials & Continua, November 2021

    Fang Liu, Zhenyuan Zhang, Zunfu Wang and Yuting Xing

    1 School of Design, Hunan University, Changsha, China

    2 School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou, China

    3 Department of Computing, Imperial College London, UK

    Abstract: Due to the explosion of network data traffic and IoT devices, edge servers are overloaded and slow to respond to the massive volume of online requests. A large number of studies have shown that edge caching can solve this problem effectively. This paper proposes a distributed edge collaborative caching mechanism for the Internet online request service scenario. It solves the problem of large average access delay caused by the unbalanced load of edge servers, meets users' differentiated service demands and improves user experience. In particular, the edge cache node selection algorithm is optimized, and a novel edge cache replacement strategy considering differentiated user requests is proposed. This mechanism can shorten the response time to a large number of user requests. Experimental results show that, compared with the current advanced online edge caching algorithm, the edge collaborative caching strategy proposed in this paper can reduce the average response delay by 9%. It also increases user utility by 4.5 times in differentiated service scenarios, and significantly reduces the time complexity of the edge caching algorithm.

    Keywords: Edge collaborative caching; differentiated service; cache replacement strategy; load balancing

    1 Introduction

    With the advent of the 5G era, the number of edge network devices has greatly increased, and the amount of data that needs to be processed or cached at the edge is growing explosively. For example, Intel reported in 2016 that an autonomous vehicle generates 4 TB of data in one day. Cisco predicted that by 2021, the growth rate of mobile data to be processed will far exceed the capacity of data centers. Billions of IoT devices are currently connected to the Space, Air, Ground, and Sea (SAGS) network, generating a large amount of data [1]. If all these data were uploaded to the cloud for processing, the low-latency requirements of complex applications, such as on-board applications, certainly could not be met. Also, it is well known that there is a serious mismatch in I/O speed between the CPU and the disk, and adding a memory cache between the two solves this problem. Analogously, an edge cache on the network path between the cloud and Internet of Things (IoT) devices is a good choice for reducing network latency. Yu et al. [2] proposed an algorithm based on the "IoT-Edge-Cloud" three-layer multi-hop model to evenly distribute computing tasks to network devices so as to process the large amounts of data with huge potential value generated by IoT devices. Moreover, Liu et al. [3] proposed matrix-based data sampling to alleviate the data redundancy and high energy consumption that artificial intelligence faces in collecting and processing big data.

    Besides the latency issues, with the growth of network users, the traditional Cloud-IoT or Cloud-Edge-IoT transmission architecture will cause problems such as overloading of edge servers, single-point failure of transmission links, and redundant transmission (such as hot videos) on resource-limited links.

    In view of the above defects, there has been much research on caching the most popular content locally according to content popularity, and a lot of work assumes the Zipf distribution [4]. These methods can solve some of the above problems, but Zwolenski points out that the popularity of Internet content can change greatly within a certain period of time, so relying on popularity can easily cause large deviations [5]. In addition, Zhou et al. [6] proposed a new mobile photo selection scheme for congestion detection to reduce data redundancy on the server.

    With the proliferation of Internet online requests, service providers will face the challenge of huge bandwidth overhead, and the quality of service for users will be difficult to guarantee. Deploying services close to users and running service caches on edge servers can effectively reduce access latency and improve user utility.

    There is no doubt that a collaborative caching strategy can reduce the probability of obtaining services from the origin server. However, there are few effective caching schemes that cache the hottest services locally according to service popularity. It is worth noting that our work takes services, not content, as the research object: the popularity of services differs across nodes over a period of time, which leads to different loads on nodes (i.e., the number of requests for the services). It is necessary to keep service load balanced to prevent servers from going down; this not only reduces service access delay but also improves user experience.

    At present, most work does not consider the differences among user requests, that is, the Internet's demand for differentiated services. For example, some users are VIPs who have priority in occupying server resources, while others are not. Nielsen, a global monitoring and data analysis company, pointed out that 39% of consumers are willing to buy products with better quality but relatively expensive prices, 15% are willing to buy products with basic functions but relatively cheap prices, and 1% are willing to buy low-priced products at the expense of quality [4]. Therefore, providing differentiated services can bring higher commercial benefits to service providers. In addition, most existing work does not consider the extra cost that a large number of sudden service requests imposes on edge nodes, such as queueing delay, which greatly increases the average access delay and leads to a bad user experience. Caching services or applications that users frequently request on collaborative edge nodes can effectively reduce the average access latency and network traffic [7-9].

    In the Internet online request service scenario, we jointly account for differentiated Internet services and the average queueing delay of massive bursts of requests at edge collaborative nodes, in order to meet users' differentiated service demands, reduce the average request access delay and improve user experience.

    The characteristics of the edge collaborative caching strategy proposed in this paper are as follows:

    · Our research object differs from the traditional file or content caching problem. We study the service caching problem in the Internet scenario, where common content popularity models such as Zipf do not apply.

    · Congestion of services in a node damages user experience. When a user requests a service and the service runs on an edge node to answer the request, it occupies node resources, such as CPU and memory. High concurrency of a service can therefore easily cause other services to wait or even the node to go down, greatly increasing the average access time of services.

    · Node selection, which decides where a service should be placed, is divided into two stages. Exploiting the characteristics of the multi-node cooperative architecture (local cache hit delay < neighbor node hit delay << cloud delay), when the service request frequency is low, the service is placed randomly among the neighbor nodes and the local node; when the service request frequency exceeds a large threshold, the service is prefetched into the local node.

    · The same service delay can yield different user benefits. Differentiated services are common in Internet service scenarios. While considering the average access latency, we also need to consider user benefits. Based on a real dataset, we classify user requests into eight levels and take them into account in the cache replacement stage to optimize user utility.

    · The node-blocking assumption in the second point and the differentiated-service assumption in the fourth point above have been verified on a real Google trace (both the blocking phenomenon and the request-level classification are observed). Our experiments are carried out on this trace, reducing delay and improving user benefit.

    The key contributions of this paper are as follows:

    · We specifically describe the online request application scenario of distributed edge caching, analyze the Google dataset, and find that introducing a relaying mechanism into nodes with relatively balanced load necessarily breaks the load balance. In particular, some caching algorithms that ignore load balancing incur excessive queueing delay on hot nodes, which leads to excessive average access delay and little benefit for users.

    · By jointly optimizing the average access time of online service requests and differentiated services for user requests, we propose a collaborative edge caching algorithm with differentiated services and load balancing. We analyze its effectiveness against the classical cache replacement algorithms and an advanced online edge caching algorithm.

    The rest of the paper is organized as follows. Section 2 introduces related work. Section 3 introduces our system modeling and problem formulation. We present effective algorithms in Section 4. Experiments and performance evaluations are in Section 5. Finally, Section 6 concludes this paper and discusses future work.

    2 Related Works

    Among the traditional online caching algorithms, the most widely used is LRU [10]. With low spatial complexity, it performs well in cache hit ratio evaluation because online requests often exhibit the "locality principle." In the edge cooperative cache scenario, a naturally modified LRU also performs well in hit ratio evaluation [9]. Edge caching technology has developed rapidly and can be traced back to content distribution network technology [11]. In recent years, many excellent works on edge caching have been proposed. According to the tools used or the field of study, they can be divided into three kinds: D2D (Device-to-Device) communication aided edge caching [12,13], game theory aided edge caching, which reduces operator cost or increases profits [14], and edge collaborative caching, which reduces the service response/access time [8,9].

    D2D communication aided edge caching. Golrezaei et al. [11] proposed an architecture for caching popular video content in a D2D-assisted edge caching network, and proved that D2D communication can effectively improve system throughput. On a D2D-aided edge network, Wang et al. proposed an effective hierarchical collaborative caching algorithm for offloading network traffic, which takes the social behavior and preferences of mobile users and the cache content size into account [12]. Besides, it is also popular to build game models for edge cache networks, taking system cost and benefit as the optimization goal. Li et al. [13], working on small commercial edge networks, proposed that the competition between NEPs (network equipment providers) and VSPs (video service providers) can be modeled as a Stackelberg game. By characterizing cache equipment rental and deployment strategies, it optimizes the benefits of the NEP and the VSP.

    Game theory aided edge caching. Cao et al. [14] modeled the content delivery relationships among SPs (service providers), users and content providers on an edge cache network as an auction, and used the Myerson optimal auction model to reduce the SP's cost and maximize its revenue. Wu et al. [15] devised a distributed game-theoretical mechanism with resource sharing among network service providers, introducing a novel cost-sharing model and a coalition formation game, with the objective of minimizing the social cost of all network service providers.

    Edge collaborative caching. Tan et al. [8] studied online service caching in the multi-edge-node collaboration scenario and proposed an asymptotically optimal cache replacement algorithm with the goal of optimizing network traffic and other costs. Ma et al. pointed out that, due to the heterogeneity of edge resource capacity and the inconsistency of edge storage and computing capacity, it is difficult to make full use of edge storage and computing resources in the absence of collaboration between edge nodes. To solve this problem, they considered edge collaborative caching based on Gibbs sampling and the Water-falling algorithm, reducing the service outsourcing traffic and response time [4]. Hao et al. [7] proposed an edge intelligence algorithm in a heterogeneous Internet of Things architecture to jointly optimize the wireless communication cost, the collaborative cache and the computation offloading cost in the edge cloud, so as to minimize the total delay of the system. Wu et al. [16] proposed Edge-oriented Collaborative Caching (ECC) in information-centric networking (ICN). In ECC, edge devices (such as edge servers, micro data centers, etc.) cache file contents while routers only maintain file cache indexes, which are used to redirect subsequent requests toward the cached file content. Ren et al. [17] proposed a hybrid collaborative caching (Hy-CoCa) design that places caches in nodes, node groups and the network according to content popularity to further reduce delay and energy consumption.

    There has been some research on edge service caching algorithms, but the differences among service requirements have not been considered. In addition, most studies have ignored the unbalanced load that a large number of user requests places on edge servers, which may cause congestion on edge servers, or even server downtime.

    3 System Modeling and Problem Formulation

    In terms of the collaborative model, edge caching can be divided into cloud-edge, edge-edge and edge-IoT collaboration. The system proposed in this paper studies the cooperative caching strategy under the cloud-edge and edge-edge collaboration modes. In this model, we study the cooperative caching strategy for Internet service applications on edge nodes. It is reasonable to assume that the cloud center is configured with all Internet service applications. Due to the limited storage capacity of edge nodes, installation and configuration can only be performed on a node after downloading/acquiring the source files (or application installation packages) from the cloud center. Usually, because of their limited capacity, edge nodes discard the source files (or application installation packages) after installing new service applications.

    In the system architecture in Fig. 1, when an Internet user issues a request for an application service deployed in the cloud or on edge nodes, the edge nodes respond to the request with four different actions according to their cache hits. Denote the user request as r := (f, s, p), the requested service as f, the edge node/server as s, and the priority of the request as p.

    Figure 1: An example of the edge collaborative caching system

    First, if the local edge node s (such as a base station, router or other device with storage and networking capability) has deployed the service, the user request r := (f, s, p) hits locally. The access delay of a locally hit request is denoted tl; generally speaking, tl is small.

    Second, if the local edge node s is not hit and a neighbor node s′ has deployed service f, s will relay the request r := (f, s, p) to s′. The access delay of a relay-hit request r is denoted tr, which is usually small.

    Third, if none of the edge nodes is hit, the local node s will send request r := (f, s, p) to the cloud data center (i.e., bypassing). The user request r is not hit, and the request access delay is denoted tb, which is usually large.

    Fourth, if the local node or a neighbor node misses multiple times, the edge nodes s and s′ will download the service application source file or application installation package from the cloud and configure it on s or s′. This action is denoted fetch, and its time cost/delay is tf, which is usually large.

    It is reasonable to assume that all service applications (i.e., files, for simplicity) are accessible in the cloud center, and that the edge nodes have limited cache space, so only some files can be cached. Suppose there are m edge nodes in the cache system, denoted as the node set S, and the cache space of node si is ki file slots (i = 1, ..., m); that is, the total capacity of the cache system is the sum of ki over all i. Assume the set of all files is F, and each file occupies one slot. The user request is denoted as r := (f, s) ∈ F × S, where (f, s) represents request r accessing f from edge node s. If f is cached, the response is quick; otherwise the request is relayed to a neighbor node s′ or bypassed to the cloud for a response.
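    The model above can be sketched in code. The following minimal Python sketch (not from the paper; the class and field names `EdgeNode`, `Request`, `load` are our own illustrative choices) shows one way to represent a node with k file slots and the request tuple r := (f, s, p), using an ordered dict so that recency of use is tracked for later LRU-style replacement:

    ```python
    from dataclasses import dataclass, field
    from collections import OrderedDict

    @dataclass
    class EdgeNode:
        """An edge node with k file slots; OrderedDict order tracks recency."""
        k: int                                            # cache capacity in file slots
        slots: OrderedDict = field(default_factory=OrderedDict)
        load: int = 0                                     # hc(s): requests served here

        def hit(self, f: str) -> bool:
            """Local cache hit test; refreshes recency on a hit."""
            if f in self.slots:
                self.slots.move_to_end(f)
                return True
            return False

    @dataclass(frozen=True)
    class Request:
        f: str   # requested service
        s: int   # index of the local edge node
        p: int   # priority of the request

    node = EdgeNode(k=2)
    node.slots["video-svc"] = True
    assert node.hit("video-svc") and not node.hit("chat-svc")
    ```
    
    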

    As mentioned above, high concurrency of a service can easily cause request congestion. When a request is blocked in a node, it incurs a queueing delay. Define the queueing ratio of user requests, called the mean blocking rate pqueueing, as shown in Eq. (1):

    pqueueing = nqueueing / nrequest (1)

    where nqueueing represents the number of requests blocked in the queue, and nrequest represents the total number of requests.

    Define the total queueing delay of requests in a node as shown in Eq. (2):

    tqueueing = K · tavgQ (2)

    where K is the number of queueing tasks and tavgQ is the average queueing delay, which is usually set to 100 milliseconds (ms).

    The cache hit ratio hr is an essential performance indicator for cache system evaluation, and is defined in Eq. (3):

    hr = (hlocal + hrelay) / nrequest (3)

    where hlocal represents the number of local hits, hrelay represents the number of relay hits, and nrequest represents the total number of requests.

    Furthermore, the average access delay tavg is also an essential performance indicator for cache system evaluation, and its definition is shown in Eq. (4).

    In addition, we observe the load balancing of nodes through the edge node load variance va, which is defined in Eq. (5) and represents the stability of node response delay. From the user's perspective, the greater the variance of node load, the greater the perceived variation in service response delay (i.e., the delay of a request's response is sometimes large and sometimes small).

    va = (1/m) · Σ (hc(si) − AVG(hc))² (5)

    where AVG(hc) is the function that computes the average load count over all nodes.
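    To make the metrics concrete, here is a small Python sketch (our own helper names, not from the paper) computing the mean blocking rate of Eq. (1), the total queueing delay of Eq. (2), the hit ratio of Eq. (3) and the node load variance of Eq. (5) from raw counts:

    ```python
    def blocking_rate(n_queueing: int, n_request: int) -> float:
        """Eq. (1): fraction of requests that were blocked in a queue."""
        return n_queueing / n_request

    def queueing_delay(k_tasks: int, t_avg_q: float = 100.0) -> float:
        """Eq. (2): K queueing tasks times the average queueing delay (ms)."""
        return k_tasks * t_avg_q

    def hit_ratio(h_local: int, h_relay: int, n_request: int) -> float:
        """Eq. (3): local hits plus relay hits over all requests."""
        return (h_local + h_relay) / n_request

    def load_variance(hc: list) -> float:
        """Eq. (5): variance of the per-node load counts hc(si)."""
        avg = sum(hc) / len(hc)
        return sum((x - avg) ** 2 for x in hc) / len(hc)

    assert blocking_rate(20, 100) == 0.2
    assert queueing_delay(3) == 300.0
    assert hit_ratio(60, 25, 100) == 0.85
    assert load_variance([10, 12, 10, 12]) == 1.0   # perfectly balanced loads give 0
    ```
    
    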

    As mentioned above, differentiated services are common in Internet service application scenarios. In order to meet the needs of differentiated services and optimize user benefits, we analyzed the relationship between the priority and the frequency of user requests in the Google dataset and found that they are not inversely proportional, as shown in Fig. 2. Considering both the priority and the frequency of user requests, the service level of a user request is defined in Eq. (6).

    Figure 2: Priority and frequency of user requests in the Google dataset

    When a user request r := (f, s, p) gets its response in the cache system, as shown in Fig. 1, the user utility urequest is intuitively defined according to the service level and the response action obtained, as shown in Eq. (7).

    Since different service requests yield different user utility, in order to improve the total user utility we first replace the services with the lowest cumulative utility. If cumulative utilities are equal, we fall back to the least recently used (LRU) strategy and replace the least recently used service.
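    The victim-selection rule just described (lowest cumulative utility first, LRU as the tie-breaker) can be sketched as follows. This is a minimal Python illustration, not the paper's implementation; the function name `choose_victim` is our own, and it relies on `OrderedDict` iteration order running from least to most recently used:

    ```python
    from collections import OrderedDict

    def choose_victim(cache: OrderedDict) -> str:
        """cache maps file -> cumulative user utility, ordered from least
        to most recently used. Return the file with the lowest cumulative
        utility; among ties, the least recently used one (earliest entry)."""
        min_utility = min(cache.values())
        for f, u in cache.items():       # iteration order = recency order
            if u == min_utility:
                return f

    # "b" and "c" tie on utility 1; "b" is older, so it is evicted first.
    cache = OrderedDict([("a", 3), ("b", 1), ("c", 1)])
    assert choose_victim(cache) == "b"
    ```
    
    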

    4 Distributed Edge Collaborative Caching Mechanism

    Similar to the memory cache in a computer, the edge caching process can be roughly divided into three stages: data prefetching, node selection and cache replacement. In particular, for the node selection stage we optimize an algorithm that probabilistically selects nodes according to their load (i.e., Select-node-with-probability), and for the cache replacement stage we propose a differentiated-service cache replacement algorithm based on the least user utility and the least recently used policy (i.e., LUULRU-Fetch).

    Algorithm 1: Select-node-with-probability(r)
    Input: load count hc(si) of each edge node si, user request r := (f, s)
    Output: selected node s′′ = nodeIndexselected
    1: nodeIndexselected = s
    2: if edge node s has a slot q, and f in q then
    3:     terminate
    4: else
    5:     denote the max load count over all edge nodes as hcmax = MAX(hc)
    6:     initialize the random interval size randomsize = 0
    7:     for i = 0, ..., m do
    8:         tmp = hcmax − hc(si)
    9:         randomsize += tmp
    10:        pi += tmp
    11:    randomnum = RANDOM(0, randomsize)
    12:    for i = 0, ..., m do
    13:        if randomnum ≤ pi then
    14:            nodeIndexselected = i
    15: return nodeIndexselected

    Lines 5 to 11 of Algorithm 1 show how the random number for the node selection stage is obtained. Line 5 computes the maximum load count MAX(hc) over the node load count table hc, and RANDOM(0, randomsize) in line 11 draws a random value from the interval [0, randomsize].
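    A compact Python sketch of this probabilistic, load-aware selection may help. It is our own illustration of Algorithm 1 (the function name and the `local` parameter are assumptions): each node's weight is hcmax − hc(si), so lightly loaded nodes are proportionally more likely to be chosen:

    ```python
    import random

    def select_node_with_probability(hc: list, local: int) -> int:
        """Pick a node index with probability proportional to
        (hc_max - hc[i]), per Algorithm 1, lines 5-14."""
        hc_max = max(hc)
        weights = [hc_max - load for load in hc]
        total = sum(weights)
        if total == 0:                       # all nodes equally loaded
            return local
        randomnum = random.uniform(0, total)
        cumulative = 0
        for i, w in enumerate(weights):
            cumulative += w                  # pi in Algorithm 1
            if randomnum <= cumulative:
                return i
        return local

    random.seed(0)
    picks = [select_node_with_probability([90, 10, 50], local=0) for _ in range(1000)]
    # The heavily loaded node 0 gets weight 0; node 1 is lightest and picked most.
    assert picks.count(1) > picks.count(2) > picks.count(0)
    ```
    
    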

    Differentiated services are common in Internet scenarios. For example, compared with normal users, VIPs get superior service quality and a better user experience. In order to meet the needs of differentiated services, we put forward a differentiated service strategy.

    Algorithm 2: Least-user-utility, least-recently-used cache replacement algorithm (LUULRU-Fetch)
    Input: user request r := (f, s), user utility uj of each requested file j, node s′′ selected in the node selection stage
    Output: the file fselected to be replaced
    1: if node s′′ has a slot q, and file f in q then
    2:     terminate
    3: else
    4:     if the cache capacity of node s′′ is full then
    5:         if there is more than one least-user-utility file in s′′ then
    6:             find the least recently used file among them, denoted fselected
    7:         else if there is only one least-user-utility file in s′′ then
    8:             denote it as fselected
    9:         evict fselected from s′′
    10:    cache f in s′′

    The overall edge collaborative caching strategy integrates the differentiated service strategy and the load balancing strategy above across the three stages of the cache replacement process (data prefetching, node selection and cache replacement). The complete procedure is shown in Algorithm 3.

    Algorithm 3: Edge collaborative caching strategy
    Input: λ = tf/tr, μ = ηλ ≥ tf/tr, hc(si) = 0, U = 0, r := (f, s, p), S1(f), S2(r)
    Output: cache hit ratio hr, average access delay t, user utility U, node load variance va
    1: for each request r := (f, s, p) do
    2:     add f to file request queue S1(f); add r to request queue S2(r)
    3:     if s has a slot q, and f in q then
    4:         hc(s) += 1; U += getUtility(f, p, 'local'); s responds to r with delay tl
    5:     else if a neighbor s′ has a slot q, and f in q then
    6:         hc(s′) += 1; U += getUtility(f, p, 'relay'); s′ responds to r with delay tl + tr
    7:     else U += getUtility(f, p, 'bypass'); the cloud responds to r with delay tl + tr + tb
    8:     /* Cache replacement process: data prefetching, node selection, and cache replacement */
    9:     if |S1(f)| = λ then
    10:        call Select-node-with-probability(r)
    11:        call LUULRU-Fetch(r)
    12:        empty S1(f)
    13:    if |S2(r)| = μ then
    14:        call LUULRU-Fetch(r)
    15:        empty S2(r)

    In the input of Algorithm 3, η is the smallest feasible integer. S1(f) and S2(r) are request queues logging f and r respectively, initialized empty. The service load count of node si is denoted hc(si) and initialized to 0, the user utility U = 0, and p in r := (f, s, p) denotes the request's priority.

    Lines 1 to 7 of Algorithm 3 describe the response action of the edge node, a neighbor node, or the cloud when a user request r accesses f. In line 4, hc(s) denotes the number of requests/loads processed at node s, and getUtility(f, p, 'local') applies Eq. (6) to compute the user utility of f when the request priority is p and the request is served at the local node s; getUtility(f, p, 'relay') in line 6 and getUtility(f, p, 'bypass') in line 7 are analogous. Relay means that the user's request is forwarded by the local node and answered by a neighboring node. Bypass means that the request passes through the local node and is answered by the cloud.

    Lines 9 to 15 of Algorithm 3 describe the cache replacement (also known as cache update) process. Lines 9 to 12 describe that, after prefetching f, Algorithm 1 (Select-node-with-probability(r)) is used for node selection, and then Algorithm 2 (LUULRU-Fetch) is used for cache replacement.

    Lines 13 to 15 of Algorithm 3, similar to lines 9 to 12, describe the process of prefetching f, selecting the current node, and finally using Algorithm 2 (LUULRU-Fetch) for cache replacement.
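    The response path of lines 1 to 7 can be sketched in a few lines of Python. This is a simplified illustration under our own assumptions (the function name `handle_request` and the representation of node caches as sets are ours; load counting and utility accounting are omitted), using the delay constants from Section 5 (tl = 1 ms, tr = 10 ms, tb = 100 ms):

    ```python
    def handle_request(nodes, request, t_l=1, t_r=10, t_b=100):
        """Response path of Algorithm 3, lines 1-7: local hit, relay hit,
        or bypass to the cloud. nodes is a list of per-node cached-file
        sets; request = (f, s, p). Returns (action, delay in ms)."""
        f, s, p = request
        if f in nodes[s]:                        # line 3: local hit
            return "local", t_l
        for i, cache in enumerate(nodes):        # line 5: neighbor relay hit
            if i != s and f in cache:
                return "relay", t_l + t_r
        return "bypass", t_l + t_r + t_b         # line 7: cloud bypass

    nodes = [{"a"}, {"b"}]
    assert handle_request(nodes, ("a", 0, 1)) == ("local", 1)
    assert handle_request(nodes, ("b", 0, 1)) == ("relay", 11)
    assert handle_request(nodes, ("c", 0, 1)) == ("bypass", 111)
    ```
    
    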

    5 Performance Evaluation

    Based on the Task Event table [18] of the real dataset Google Cluster Data 2011-2, we conducted extensive experimental tests and performance analysis against the baselines. Because probabilistic selection exists in some baselines and in the proposed algorithm, we ran each experiment 10 times and analyzed the improvements of the proposed algorithm. Specifically, the baselines are the advanced Camul [8] and the classic LRUwithRelay and LRUwithoutRelay algorithms. The baselines are used to test and analyze important performance indicators such as hit ratio, average access delay and so on.

    5.1 Experimental Setup

    Considering the system shown in Fig. 1, we set some important parameters and explain them as follows. According to the experimental results of Maheshwari et al. on an edge cloud system in 2018, we set tl = 1, tr = 10 and tb = 100, with time simply measured in milliseconds (ms) [19]. Referring to Camul [8], the ratio of the operation cost of fetch to bypass is 10, so we set tf = 1000 ms. The queueing delays in Google Cluster Data 2011-2 differ enormously between the two extremes, and an analysis of its "queueing delay-ranking" curve shows a strong long-tail effect, so we adopted the approximate average value of the long tail and set the average queueing delay to 100 ms. In real scenarios, queueing delay is closely related to machine I/O capability, data/request arrival rate, etc., and there is no universal value; therefore it is feasible to use the approximate long-tail value for algorithm verification [18-23]. The selected experimental platform is shown in Tab. 1.

    5.2 Additional Overhead Analysis of Algorithms

    Suppose there are n user requests and m nodes in the system, and each node has e slots. Compared with the traditional LRU algorithm, the extra space overhead of each algorithm is shown in Tab. 2.

    In short, the space overhead of each algorithm is: LRUwithoutRelay < LRUwithRelay < Proposal < Camul.

    Table 1:Experimental platform

    Table 2:The extra space overhead of each algorithm compared to LRU

    5.3 Experimental Results

    5.3.1 Impact of Cache Number

    The experimental results in Fig. 3 explore the impact of the number of cache nodes on algorithm performance: (a) as the number of nodes increases, the hit ratio of the proposal, like that of the other algorithms with a relaying mechanism, is almost unchanged; (b) as the number of nodes increases, the number of queueing tasks in each node decreases, the queueing delay decreases and the hit ratio slightly increases, so the average access delay decreases; (c) as the number of nodes increases, the number of local hits decreases and the number of relay hits increases for the algorithms with a relaying mechanism, so according to the user utility of Eq. (7) the user utility decreases; in addition, for the LRUwithoutRelay algorithm, since nodes have no relaying mechanism and fetch delays are relatively high, its user utility is usually low; (d) except for LRUwithoutRelay, which has no relaying mechanism and retains the load balance of the original traces (leading to the lowest load variance), the proposal is better than the other two algorithms; (e) in terms of time overhead, the proposal is clearly superior to Camul, and even slightly superior to the LRUwithRelay algorithm when the number of nodes is large.

    5.3.2 Impact of Cache Size

    Based on the experimental results in Fig. 4, we can analyze the impact of cache node capacity on algorithm performance: (a) the hit ratio of each algorithm increases with the cache size, and the proposal is very close to Camul; (b) as the cache capacity increases, the average access delay of LRUwithoutRelay decreases, while the benefits brought by the relaying mechanism leave the other three algorithms largely unchanged; (c) since the user utility of LRUwithoutRelay is far below 0 at small capacities, it is not shown here, and Fig. 4c shows that the proposal is superior to the other algorithms; (d) with the exception of LRUwithoutRelay, the proposal is superior to the other algorithms in terms of load variance; (e) since the classical LRUwithoutRelay algorithm is the simplest, it has the lowest time cost; LRUwithRelay is similar to the proposal, while Camul needs to record the status of the slots in each node, so its time cost grows with the cache capacity.

    Figure 3:Impact of cache number

    Figure 4:Impact of cache size

    5.3.3 Impact of Average Queueing Delay

    As shown in Fig. 5, the experiments show the effect of the average queueing delay on algorithm performance: (a) as the average queueing delay increases, the average access delay of all algorithms increases gradually, and the more load-balanced the algorithm, the slower the growth rate; (b) as the average queueing delay increases, the user utility of each algorithm decreases gradually; moreover, when the average queueing delay is within the reasonable range of 60 ms to 100 ms, the user utility of the proposal is optimal.

    Figure 5:Impact of average queueing delay

    In addition, as can be seen from Figs. 3a and 4a, the proposal is not superior to LRUwithRelay on the important indicator of cache hit ratio. This is because LRUwithRelay, with its relaying mechanism, treats all nodes as a whole, so its proportion of content repeated between nodes is 0, while the content repetition of the proposal and Camul is greater than 0. Besides, LRUwithoutRelay regards each node as independent, so its content repetition between nodes is the highest.

    6 Conclusions and Future Work

    This paper describes the online request scenario of edge collaborative caching, analyzes the Google dataset, and finds that introducing a relaying mechanism into nodes with relatively balanced load necessarily increases the variance of node service load. Meanwhile, some caching algorithms that ignore load balancing incur large queueing delays in some nodes, which leads to large average access delays and low utility for users.

    By optimizing the average access delay of online service requests in differentiated service scenarios, we proposed an edge collaborative caching algorithm based on differentiated services and load balancing. Building on the differentiation of Internet service scenarios and the introduction of relaying mechanisms, we accounted for the severe request queueing delays caused by node load imbalance, and compared against the classic cache replacement algorithms and the current advanced online edge caching algorithm. The experimental results show that our proposal not only keeps the servers load-balanced while processing requests, but also reduces the average access latency of user requests and improves user utility.

    By caching the requested content, multiple nodes can serve users and speed up transfers. In the future, we will study the influence of single-point transmission and coordinated multi-point (CoMP) transmission on edge caching, such as energy consumption and computational complexity, so as to find a compromise between the two.

    Funding Statement: This work is supported by the National Natural Science Foundation of China (62072465) and the Key-Area Research and Development Program of Guangdong Province (2019B010107001).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
