
    Investigating and Modelling of Task Offloading Latency in Edge-Cloud Environment

Computers, Materials & Continua, 2021, Issue 9

    Jaber Almutairi and Mohammad Aldossary

1 Department of Computer Science, College of Computer Science and Engineering, Taibah University, Al-Madinah, Saudi Arabia

2 Department of Computer Science, College of Arts and Science, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia

Abstract: Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the amount of data produced by these devices. This requires offloading IoT tasks to release heavy computation and storage to resource-rich nodes such as Edge Computing and Cloud Computing. However, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents an Edge-Cloud system architecture that supports scheduling the offloading tasks of IoT applications in order to minimize the enormous amount of data transmitted in the network. It also introduces offloading latency models to investigate the delay of different offloading scenarios/schemes and explores the effect of computational and communication demand on each one. A series of experiments conducted on EdgeCloudSim show that different offloading decisions within the Edge-Cloud system can lead to various service times due to the computational resources and communication types. Finally, this paper presents a comprehensive review of the current state-of-the-art research on task offloading issues in the Edge-Cloud environment.

Keywords: Edge-Cloud computing; resource management; latency models; scheduling; task offloading; Internet of Things

    1 Introduction

Internet of Things (IoT) technology has evolved quickly in recent years, and the number of devices connected to the Internet has increased massively. Some studies predict that in the upcoming three years, more than 50 billion devices will be connected to the Internet [1,2], which will produce a new set of applications such as autonomous vehicles, Augmented Reality (AR), online video games and smart CCTV.

Thus, Edge Computing has been proposed to deal with this huge change in the area of distributed systems. To enhance customer experience and accelerate job execution, IoT task offloading enables mobile end devices to release heavy computation and storage to resource-rich nodes in collaborative Edges or Clouds. Nevertheless, resource management in the Edge-Cloud environment is challenging because it deals with several complex factors (e.g., the different characteristics of IoT applications and the heterogeneity of resources).

Additionally, how different service architectures and offloading strategies quantitatively impact the end-to-end service time performance of IoT applications is still far from fully understood, particularly given a dynamic and unpredictable assortment of interconnected virtual and physical devices. Latency-sensitive applications also have varying characteristics, such as computational demand and communication demand. Consequently, the latency depends on the scheduling policy for applications' offloading tasks as well as where the jobs are placed. Therefore, Edge-Cloud resource management should consider these characteristics in order to meet the requirements of latency-sensitive applications.

In this regard, it is essential to conduct in-depth research investigating the latency within the Edge-Cloud system and the impact of computation and communication demands and resource heterogeneity, in order to provide a better understanding of the problem and facilitate the development of an approach that improves both applications' QoS and Edge-Cloud system performance. Efficient resource management will also play an essential role in providing real-time or near real-time use for IoT applications.

The aim of this research is to investigate and model the delay for latency-sensitive applications within the Edge-Cloud environment, and to provide a detailed analysis of the main factors of service latency, considering both application characteristics and Edge-Cloud resources. The proposed approach is used to minimize the overall service time of latency-sensitive applications and enhance resource utilization in the Edge-Cloud system. This paper's main contributions are summarized as follows:

    · An Edge-Cloud system architecture that includes the required components to support scheduling offloading tasks of IoT applications.

· Edge-Cloud latency models that show the impact of different task-offloading scenarios/schemes for time-sensitive applications in terms of end-to-end service times.

· An evaluation of the proposed offloading latency models that considers computation and communication as key parameters with respect to offloading to the local edge node, other edge nodes or the cloud.

The remainder of this paper is organized as follows: Section 2 presents the system architecture that supports scheduling the offloading tasks of IoT applications, followed by descriptions of the required components and their interactions within the proposed architecture. Section 3 describes latency-sensitive applications. Section 4 presents the latency models, followed by experiments and evaluation in Section 5. A thorough discussion of the related work is presented in Section 6. Finally, Section 7 concludes this paper and discusses future work.

    2 Proposed System Architecture

As illustrated in Fig. 1, the Edge-Cloud system consists, from bottom to top, of three layers/tiers: IoT devices (end-user devices), multiple Edge Computing nodes and the Cloud (service provider). The IoT level is composed of a group of connected devices (e.g., smartphones, self-driving cars, smart CCTV); these devices run different applications, where each application has several tasks (e.g., a smart CCTV [3] application consists of movement detection and face recognition). These services can be deployed and executed on different computing resources (the connected Edge node, other Edge nodes or the Cloud), where the infrastructure manager and service providers have to decide where to run them.

Figure 1: An overview of the Edge-Cloud system

In the proposed system, at the Edge level, each Edge Computing node is a micro datacenter with a virtualized environment. It is placed close to the connected IoT devices, at a base station or Wi-Fi access point. These edge nodes are distributed geographically and could be owned by the same Cloud provider or by other brokers [4]. Note that each edge node has limited computational resources compared to the resources in the cloud. Each edge node has a node manager that manages the computational resources and the application services that run on it. All edge nodes are connected to the Edge Controller.

Task offloading occurs when IoT devices decide to process a task remotely in the Edge-Cloud environment. Applications running on IoT devices can send their offloadable tasks to be processed by the Edge-Cloud system through their associated Edge node. We assume that each IoT application is deployed in a Virtual Machine (VM) in the edge node and in the cloud. IoT devices offload tasks that belong to a predefined set of applications; these tasks vary in terms of computational requirement (task length) and communication demand (amount of transferred data). It is assumed that tasks are already offloaded from the IoT devices and that each task is independent; thus, dependency between tasks is not addressed in this paper. The locations of IoT devices are important for service time performance because it is assumed that each location is covered by a dedicated wireless access point with an Edge node, and the IoT devices connect to the related WLAN when they move into the covered location.

The associated Edge can process IoT tasks itself, or tasks can be processed collaboratively with other edge nodes or the cloud, based on the Edge orchestrator's decisions. For example, if an IoT application is located in an edge node far away from its connected edge, its data traffic has to be routed to it via a longer path in the Edge-Cloud system. The cloud level provides a massive amount of resources that enable IoT applications' tasks to be processed and stored.

The proposed architecture is just one possible implementation among other architectures in the literature, such as [2,5,6]. The main difference in the proposed architecture is the layer introduced between the edge nodes and the cloud. This layer is responsible for managing and assigning offloading tasks to the edge nodes. More details about the required components and their interactions within the proposed architecture follow.

    2.1 Edge Controller

The Edge Controller (EC) is designed similarly to [7-9]; some studies call it an Edge Orchestrator. It is a centralized component responsible for planning, deploying and managing application services in the Edge-Cloud system. The EC communicates with other components in the architecture to know the status of resources in the system (e.g., available and used), the number of IoT devices, their applications' tasks and where IoT tasks have been allocated (e.g., Edge or Cloud). The EC consists of the following components: Application Manager, Infrastructure Manager, Monitoring and Planner. The Edge Controller can be deployed in any layer between the Edge and the Cloud. For example, in [10], the EC acts as an independent entity in the edge layer that manages all the edge nodes under its control. It is also responsible for scheduling the offloading tasks in order to satisfy applications' users and Edge-Cloud system requirements. The EC synchronizes its data with the centralized Cloud, so that if any failure occurs, other edge nodes can take over the EC's responsibility from the cloud [11,12].

    2.2 Application Manager

The application manager is responsible for managing the applications running in the Edge-Cloud system. This includes the requirements of application tasks, such as the amount of data to be transferred, the amount of computation required (e.g., required CPU) and the latency constraints. It also tracks the number of application users for each edge node.

    2.3 Infrastructure Manager

The role of the infrastructure manager is to be in charge of the physical resources in the entire Edge-Cloud system, for instance, processors, networking and the connected IoT devices for all edge nodes. As mentioned earlier, Edge-Cloud is a virtualized environment; thus, this component is responsible for the VMs as well. In the context of this research, this component provides the EC with the utilization level of the VMs.

    2.4 Monitoring

The main responsibility of this component is to monitor application tasks (e.g., computational delay and communication delay) and computational resources (e.g., CPU utilization) during the execution of applications' tasks in the Edge-Cloud system. Furthermore, it detects task failures due to network issues or a shortage of computational resources.

    2.5 Planner

The main role of this component is to propose the scheduling policy for the offloading tasks in the Edge-Cloud system and the location where they will be placed (e.g., the local edge, other edges or the cloud). In the context of this research, the proposed approach for offloading tasks operates within this component and passes its results to the EC for execution.
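The controller and its sub-components (Sections 2.1-2.5) can be summarized in a minimal sketch. Every class, method and threshold here is illustrative, since the paper specifies the roles of these components rather than their interfaces:

```python
from dataclasses import dataclass

# Illustrative sketch of the Edge Controller and its sub-components
# (Sections 2.1-2.5). All names, fields and thresholds are hypothetical.

@dataclass
class Task:
    app_id: str
    data_mb: float      # communication demand: data to transfer (MB)
    length_mi: float    # computation demand: task length (million instructions)

class ApplicationManager:
    """Tracks per-application task requirements (Section 2.2)."""
    def __init__(self):
        self.requirements = {}                    # app_id -> Task template
    def register(self, task):
        self.requirements[task.app_id] = task

class InfrastructureManager:
    """Reports VM utilization to the controller (Section 2.3)."""
    def __init__(self):
        self.vm_utilization = {}                  # vm_id -> utilization in [0, 1]

class Monitoring:
    """Records per-task delays during execution (Section 2.4)."""
    def __init__(self):
        self.delays = {}                          # app_id -> observed delay (s)

class Planner:
    """Proposes a placement for each offloaded task (Section 2.5)."""
    def place(self, task, infra):
        # Toy policy: keep light tasks on the local edge, push heavy ones to the cloud.
        return "local-edge" if task.length_mi < 1000 else "cloud"

class EdgeController:
    """Central orchestrator wiring the managers together (Section 2.1)."""
    def __init__(self):
        self.apps = ApplicationManager()
        self.infra = InfrastructureManager()
        self.monitoring = Monitoring()
        self.planner = Planner()
    def schedule(self, task):
        self.apps.register(task)
        return self.planner.place(task, self.infra)
```

A real Planner would use the monitoring and utilization data; the toy policy only shows where the decision plugs in.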

    3 Latency-sensitive Applications

Latency-sensitive applications can be defined as applications that are highly sensitive to any delays occurring in communication or computation during their interaction with the Edge-Cloud system.

For instance, the IoT device sends data until processing is complete at the edge node or the cloud in the back end of the network, and subsequent communications are produced by the network in response to return the results. There are many examples of latency-sensitive applications, and the acceptable service time varies depending on the application type, which is affected by the amount of transferred data and the required computation volume [13]. For example, self-driving cars run several services; the work presented in [14] classified these services into categories based on their latency-sensitivity, quality constraints and workload profile (required communication and computation). First, critical applications, which must be processed on the car's own computational resources, for instance autonomous driving and road safety applications. Second, high-priority applications, which can be offloaded but with minimum latency, such as image-aided navigation, parking navigation systems and traffic control. Third, low-priority applications, which can be offloaded and are not as vital as high-priority applications (e.g., infotainment, multimedia and speech processing). Tab. 1 presents more examples of latency-sensitive applications in different technology sectors [13].

Table 1: Latency-sensitive applications

    4 Latency Models

Investigating and modelling the various offloading decisions for IoT tasks that can increase the Quality of Service (QoS) has attracted the attention of many researchers in the field. With the increasing number of IoT devices, the amount of produced data, the need for autonomous systems that require real-time interaction, and the lack of support from the central Cloud due to network issues, service time has been considered one of the most important factors to be handled in Edge Computing [15-17].

One of the main characteristics of Edge Computing is that it reduces the latency level. Additionally, it has been shown in the literature that using Edge Computing enhances application performance in terms of overall service time compared to the traditional Cloud system [18-20]. However, different offloading decisions within the Edge-Cloud system can lead to various service times due to the computational resources and communication types. Current real-world applications measure the latency between the telecommunication service provider and the cloud services [21]. Also, a few existing works compare the latency between offloading to the edge and offloading to the cloud. Yet the latency between multiple edge nodes that work collectively to process the offloading tasks is not considered. Consequently, investigating the latency of the Edge-Cloud system is an essential step towards developing an effective scheduling policy, for the following reasons. Firstly, task allocation in the Edge-Cloud system is not a choice between only two options (e.g., either the IoT device or the cloud), but could target any edge node. Moreover, edge nodes are connected in a loosely coupled way over heterogeneous wireless networks (i.e., WLAN, MAN and WAN), making the process of resource management and the offloading decision more sophisticated. Secondly, given that task processing is allocated among multiple edge nodes working collectively and the cloud, it is challenging to make an optimal offloading decision.

Therefore, this paper introduces latency models to investigate the delay of different offloading scenarios/schemes. It also explores the effect of computational and communication demands for each offloading scenario/scheme, which are: (1) offloading to the local edge, (2) offloading to the local edge with the cloud and (3) offloading to the local edge, other available edge nodes and the cloud. The latency models' parameters and their notations are listed in Tab. 2.

Table 2: Summary of notations

    4.1 Latency to Local Edge

This is known as a one-level offloading system, which is basically offloading to a "Cloudlet" or "Local Edge". It aims to provide a micro-datacenter that supports IoT devices within a specific area such as a coffee shop, shopping mall or airport [22,23]. Thus, IoT devices can offload their tasks to be processed on the edge, for example. This offloading scenario/scheme provides ultra-low latency due to the avoidance of network backhaul delays.

To clarify, IoT devices send their offloading tasks through the wireless network, the tasks are processed by the edge node, and finally the results are sent back to the IoT devices, as shown in Fig. 2. The end-to-end service time is composed of two delays: network delay and computational delay. The network delay consists of the time to send the data to the edge and the time to receive the output from the edge at the IoT device. The computation time is the time from the task's arrival at the edge node until processing is complete. Therefore, the end-to-end service latency is the sum of the communication delay and the computational delay [24], which can be calculated as follows:

Figure 2: Latency to the local edge
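The one-level model above can be sketched in code (a minimal illustration; the parameter names and units, megabytes, Mbit/s, million instructions and MIPS, are our assumptions, not the paper's Tab. 2 notation):

```python
# Minimal sketch of the one-level latency model: the end-to-end service
# time is the WLAN upload delay, plus edge processing, plus WLAN download.
# Units (MB, Mbit/s, million instructions, MIPS) are our assumptions.

def transfer_delay(data_mb, bandwidth_mbps):
    """Time (s) to move data_mb megabytes over a bandwidth_mbps link."""
    return (data_mb * 8) / bandwidth_mbps

def processing_delay(task_length_mi, vm_mips):
    """Time (s) to run task_length_mi million instructions on a vm_mips VM."""
    return task_length_mi / vm_mips

def local_edge_service_time(upload_mb, download_mb, task_length_mi,
                            wlan_mbps, edge_vm_mips):
    """One-level scheme: WLAN upload + edge processing + WLAN download."""
    return (transfer_delay(upload_mb, wlan_mbps)
            + processing_delay(task_length_mi, edge_vm_mips)
            + transfer_delay(download_mb, wlan_mbps))
```

For example, a task of 3000 million instructions with a 2 MB input and 0.5 MB result, over a 100 Mbit/s WLAN to a 10,000-MIPS edge VM, gives 0.16 + 0.3 + 0.04 = 0.5 s.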

    4.2 Latency to Local Edge with the Cloud

In this offloading scenario/scheme, rather than relying on only one Edge node, the IoT tasks can be processed collaboratively between the connected Edge node and the cloud servers. This combines the benefits of both Cloud and Edge Computing: the cloud has a massive amount of computation resources, and the edge has lower communication time [25]. In this scenario/scheme, the edge can do part of the processing, such as pre-processing, and the rest of the task is processed in the cloud.

As illustrated in Fig. 3, the IoT device sends the computation tasks to the connected edge, and part of these tasks is then forwarded to the cloud. Once the cloud finishes the computation, it sends the result to the edge, and the edge sends it on to the IoT devices. This scenario/scheme consists of communication time (i.e., the time between the IoT device and the edge node and the time between the edge node and the cloud) and computation time (i.e., the processing time at the edge and the processing time in the cloud). Thus, the end-to-end service time can be calculated as follows:

Figure 3: Latency to the local edge with the cloud
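A hedged sketch of this two-level model, assuming the stages run sequentially (a worst case; overlapping edge and cloud work would lower the total) and that the cloud's share of the work carries a proportional share of the data. Names, units and the splitting rule are ours:

```python
# Sketch of the two-level (edge + cloud) latency model. cloud_fraction is
# the share of the task forwarded to the cloud; the stages are summed
# sequentially, which is a simplifying worst-case assumption.

def transfer_delay(data_mb, bandwidth_mbps):
    return (data_mb * 8) / bandwidth_mbps

def processing_delay(task_length_mi, vm_mips):
    return task_length_mi / vm_mips

def edge_cloud_service_time(upload_mb, download_mb, task_length_mi,
                            cloud_fraction, wlan_mbps, wan_mbps,
                            edge_vm_mips, cloud_vm_mips):
    return (transfer_delay(upload_mb, wlan_mbps)                      # device -> edge
            + processing_delay((1 - cloud_fraction) * task_length_mi,
                               edge_vm_mips)                          # pre-processing at edge
            + transfer_delay(cloud_fraction * upload_mb, wan_mbps)    # edge -> cloud
            + processing_delay(cloud_fraction * task_length_mi,
                               cloud_vm_mips)                         # processing at cloud
            + transfer_delay(cloud_fraction * download_mb, wan_mbps)  # cloud -> edge
            + transfer_delay(download_mb, wlan_mbps))                 # edge -> device
```

Even with a faster cloud VM, the WAN terms can dominate, which is the effect the experiments in Section 5 observe.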

    4.3 Latency to Multiple Edge Nodes with the Cloud

This is known as a three-level offloading scenario/scheme [26], which aims to utilize more resources at the edge layer and support the IoT devices in order to reduce the overall service time. It adds another level by considering other available computation resources in the edge layer. Basically, it distributes IoT tasks over three levels: the connected edge, other available edge nodes and the cloud. The edge controller (edge orchestrator) controls all edge servers via the Wireless Local Area Network (WLAN) or Metropolitan Area Network (MAN), which have low latency compared to the Wide Area Network (WAN).

As illustrated in Fig. 4, the IoT device sends the computation tasks to the connected edge; part of these tasks is then transferred to other available resources at the edge level through the edge controller, and the rest to the cloud. This helps to decrease the dependency on cloud processing as well as increase the utilization of computing resources at the edge [20]. This scenario/scheme consists of communication time (i.e., the time between the IoT device and the edge node, the time between the edge node and other collaborative edge nodes, and the time between the edge nodes and the cloud) and computation time (i.e., the processing time at the edge, the processing time at other collaborative edge nodes and the processing time in the cloud). Thus, the end-to-end service time can be calculated as follows:

Figure 4: Latency to multiple edge nodes with the cloud
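The three-level model can be sketched the same way. Here we assume the three branches (local edge, peer edge over the MAN, cloud over the WAN) run in parallel after the WLAN upload, so the slowest branch dominates; this parallelism, the share split and all names are our assumptions, not the paper's formula:

```python
# Sketch of the three-level latency model. `shares` = (local, peer, cloud)
# fractions of the task; remote branches are assumed to run in parallel
# with the local one, so the slowest branch sets the computation time.

def transfer_delay(data_mb, bandwidth_mbps):
    return (data_mb * 8) / bandwidth_mbps

def processing_delay(task_length_mi, vm_mips):
    return task_length_mi / vm_mips

def three_level_service_time(upload_mb, download_mb, task_length_mi,
                             shares, wlan_mbps, man_mbps, wan_mbps,
                             edge_mips, peer_edge_mips, cloud_mips):
    local, peer, cloud = shares
    assert abs(local + peer + cloud - 1) < 1e-9
    # WLAN access delay is paid once, for the full input and result.
    access = (transfer_delay(upload_mb, wlan_mbps)
              + transfer_delay(download_mb, wlan_mbps))
    local_t = processing_delay(local * task_length_mi, edge_mips)
    peer_t = (transfer_delay(peer * upload_mb, man_mbps)          # edge -> peer edge
              + processing_delay(peer * task_length_mi, peer_edge_mips)
              + transfer_delay(peer * download_mb, man_mbps))     # peer edge -> edge
    cloud_t = (transfer_delay(cloud * upload_mb, wan_mbps)        # edge -> cloud
               + processing_delay(cloud * task_length_mi, cloud_mips)
               + transfer_delay(cloud * download_mb, wan_mbps))   # cloud -> edge
    return access + max(local_t, peer_t, cloud_t)
```

With this split, the WAN branch is often the bottleneck even for a small cloud share, consistent with the network-time results reported in Section 5.2.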

    5 Experiments and Evaluation

    5.1 Design of Experiments

A number of simulation experiments have been conducted on EdgeCloudSim in order to obtain the results of the different offloading scenarios/schemes and their influence on overall service time. EdgeCloudSim provides sufficient models to represent specific situations. For example, the service time model is designed to represent the several kinds of delay taking place in the WLAN, MAN and WAN, as well as on mobile devices, and even the delay of processing in the CPUs of VMs. Thus, the experiments in this paper are conducted within this simulator to investigate and evaluate the performance of IoT applications over the three different offloading scenarios/schemes. All experiments are repeated five times, and statistical analysis is performed on the mean values of the results to avoid any anomalies in the simulation results. We assume that there are three edge nodes connected to the cloud. Each edge node has two servers, and each of them has four VMs with the same configuration. The number of edge nodes does not matter in the context of this research as long as it is more than two, because one of our aims is to investigate the latency between two edge nodes. The cloud contains an unlimited number of computational resources. We drew inspiration from related works such as [24,27] to design the experiments and their parameters (e.g., the number of IoT devices, the edge nodes and the amount of transferred data for each offloading task).

Tab. 3 presents the key parameters of the simulation environment. The warm-up period is used to allow the system to evolve to a condition more representative of a steady state before the simulation output is collected. A number of iterations are used to avoid any anomalies in the simulation results.

Table 3: Key parameters of the simulation environment
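The warm-up-and-average procedure described above can be sketched as follows (the trace values and the 20% warm-up cut are hypothetical, not values from Tab. 3):

```python
from statistics import mean

def steady_state_mean(samples, warmup_fraction=0.2):
    """Drop the warm-up portion of a simulation trace, then average the rest.
    The 20% default cut is our choice, not a value from the paper's Tab. 3."""
    cut = int(len(samples) * warmup_fraction)
    return mean(samples[cut:])

# Hypothetical service-time traces (s) from repeated runs; the paper reports
# means over five repetitions to smooth out simulation anomalies.
runs = [[2.4, 2.1, 2.0, 1.9, 2.0], [2.5, 2.0, 1.9, 2.0, 2.1]]
overall = mean(steady_state_mean(r) for r in runs)
```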

    5.2 Results and Discussion

The conducted experiments show the results of three different offloading scenarios/schemes: offloading to the local edge (i.e., a cloudlet), offloading to the local edge with the cloud, and offloading to multiple edge nodes with the cloud. The aim of these experiments is to investigate and evaluate the processing delays, network delays and end-to-end service delays of the three offloading scenarios/schemes. This increases our understanding of the offloading decision in the Edge-Cloud system, towards designing efficient Edge-Cloud resource management.

Fig. 5 presents the overall service time of the three offloading scenarios/schemes. Offloading at one level has the lowest service time. This result is consistent with the work in [24,28], and is due to the avoidance of the major latency between the end device and the cloud. Two-level offloading has a lower service time than three-level offloading. This shows that the overall service time will never be truly minimized unless the network time is considered in the offloading process. However, these results may be somewhat limited by the number of IoT devices and the system load.

The results presented in Fig. 6 also show a significant difference in network time between one-level offloading and the others (two-level and three-level). As mentioned earlier, this is due to the avoidance of WAN and MAN delays. Two-level offloading has a lower network time than three-level offloading because of the additional communications between edge nodes in the latter.

In terms of processing time, offloading to the edge and the cloud has the lowest processing time compared to the others, as depicted in Fig. 7. The reason is that the local edge has limited computational resources; thus, if the number of IoT devices increases, the processing delays will increase due to limited capacity. On the other hand, offloading to multiple edge nodes with the cloud has the highest processing time. However, the result for processing time was not very encouraging; thus, more investigation will be carried out on the impact of the processing-time parameter (computational demand) as part of future work.

6 Offloading Approaches in the Edge-Cloud Environment: State-of-the-Art

Figure 5: End-to-end service time for the three offloading scenarios/schemes

Figure 6: Network time for the three offloading scenarios/schemes

Figure 7: Processing time for the three offloading scenarios/schemes

Computation offloading is not a new paradigm; it is widely used in the area of Cloud Computing. Offloading transfers computations from resource-limited mobile devices to resource-rich Cloud nodes in order to improve the execution performance of mobile applications and overall power efficiency. User devices are located at the edge of the network. They can offload computation to Edge and Cloud nodes via a WLAN or 4G/5G networks. Generally, if a single edge node is insufficient to deal with surging workloads, other edge nodes or cloud nodes are ready to assist such an application. This is a practical solution for supporting IoT applications by transferring heavy computation tasks to powerful servers in the Edge-Cloud system. It is also used to overcome the limitations of IoT devices in terms of computation power (e.g., CPU and memory) and insufficient battery. It is one of the most important enabling techniques of IoT, because it allows devices to perform computations more sophisticated than their own capacity allows [29]. Thus, the computational offloading decisions in the context of IoT can be summarized as follows:

· First, whether the IoT device decides to offload a computational task or not. In this case, several factors could be considered, such as the required computational power and the amount of transferred data.

· Second, if offloading is needed, whether to perform partial or full offloading. Partial offloading refers to processing part of the task locally at the IoT device and the other parts on the Edge-Cloud servers; factors such as task dependency and task priority can be considered in this case. Full offloading means the whole application is processed remotely on the Edge-Cloud servers [30].
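The two decisions above can be illustrated with a toy rule; the thresholds, names and the deadline-based criterion are ours, not from any of the works reviewed:

```python
def offload_decision(task_length_mi, data_mb, device_mips,
                     local_deadline_s, link_mbps, remote_mips):
    """Toy version of the two offloading decisions:
    (1) offload at all?  (2) if so, full or partial offloading?
    All thresholds here are illustrative assumptions."""
    local_time = task_length_mi / device_mips
    if local_time <= local_deadline_s:
        return "no-offload"               # the device can meet the deadline alone
    # Remote estimate: transfer the input, then process on the remote node.
    remote_time = (data_mb * 8) / link_mbps + task_length_mi / remote_mips
    # Full offloading when the remote path is clearly faster; otherwise split.
    return "full" if remote_time < 0.5 * local_time else "partial"
```

A production policy would also weigh task dependency, priority and energy, as the text notes.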

In terms of the objectives of computation offloading in the context of Edge Computing, they can be classified into two categories: objectives that focus on application characteristics and objectives that focus on Edge-Cloud resources. Several studies [7,31-33] aim to minimize service latency, energy consumption and monetary cost, as well as maximize total revenue and resource utilization. In fact, scheduling offloading tasks is a challenging issue in the Edge-Cloud Computing paradigm, since it involves several trade-offs between application requirements (e.g., reducing latency) and system requirements (e.g., maximizing resource utilization). Thus, developing an efficient resource management technique that meets the requirements of both applications and the system is an active area of research.

In the following subsections, some of the studies conducted on task offloading in Edge-Cloud environments to reduce latency and maximize resource utilization are reviewed and discussed, as illustrated in Fig. 8.

Figure 8: Classification of the topics reviewed

    6.1 Task Offloading Based on Application Characteristics

As stated in [34-36], scheduling offloaded tasks based on application characteristics is considered significantly important, especially with the increase in IoT applications. Therefore, this subsection presents the studies conducted on task offloading that mainly focus on application characteristics, including computation and communication demands, and latency-sensitivity.

    · Computation and Communication Demands

There are many ongoing research projects focusing on the task computation and communication demands of IoT applications. For example, Wang et al. [37] proposed an online approximation algorithm whose main objective is to balance the load and minimize resource utilization in order to enhance application performance. This work considers the computational and communication attributes of homogeneous resources, without considering service latency. Rodrigues et al. [24] presented a hybrid method for minimizing service latency and reducing power consumption. This method aims to reduce the communication and computational delays by migrating VMs to an unloaded server. The authors investigate the impact of tasks' computational and communication demands. They evaluate their approach under realistic conditions using mathematical modelling. However, their method considers neither the application delay constraints nor offloading to the cloud. Deng et al. [16] proposed an approximate approach for minimizing network latency and power consumption by allocating workload between Fog and Cloud. However, their approach does not optimize the trade-off between all the mentioned objectives (e.g., computational delay and resource utilization).

Zeng et al. [38] designed a strategy for task offloading that aims to minimize completion time. In their work, both computation time and transmission time are considered. The authors also investigate the impact of other factors, such as I/O interrupt requests and storage activities. However, delay-constrained applications and resource heterogeneity are not considered in their work. Fan et al. [39] designed an allocation scheme that aims to minimize service latency for IoT applications by taking into account both computation and communication delays. Furthermore, the authors investigate the impact of overloaded VMs on processing time, and they evaluated their work with different types of applications. However, the proposed method does not show the effect of the heterogeneity of the VMs on service time, and it also does not consider latency-sensitive applications.

    · Latency Sensitivity

In terms of application latency-sensitivity, a number of studies have been conducted in order to enhance the overall service time in the Edge-Cloud environment. For instance, Mahmud et al. [34] proposed a latency-aware policy that aims to meet the required deadlines of offloading tasks. This approach considers task dependency as well as the computational and communication requirements. Resource utilization at the edge level is also considered. However, the issue of resource heterogeneity is not addressed in their work. Azizi et al. [40] designed a priority-based service placement policy that prioritizes tasks with deadlines, so that the nearest deadlines are scheduled first. Further, their work considers both computational and communication demands. However, their evaluation does not address the case where the system has multiple IoT devices with different resource utilization. Sonmez et al. [27] presented an approach for task offloading that targets latency-sensitive applications. This approach is based on fuzzy logic and focuses on delay as a key factor, along with computational and communication demands. Nevertheless, resource heterogeneity is not considered in this approach.

    6.2 Task Offloading Based on Edge-cloud Resources

This subsection presents the literature on offloading tasks that mainly focuses on resource utilization and resource heterogeneity as the main objectives.

    · Resource Utilization

Scheduling offloading tasks based on resource utilization or resource heterogeneity has received considerable attention from many researchers. For example, Nan et al. [41] developed an online optimization algorithm for offloading tasks that aims to minimize the cost of renting Cloud services by utilizing resources at the edge, using the Lyapunov technique. Further, their algorithm guarantees the availability of edge resources and ensures that tasks are processed within the required time. Yet this algorithm does not consider the impact of computational and communication demands for latency-sensitive applications. Xu et al. [6] proposed a model for resource allocation that aims to maximize resource utilization and reduce task execution latency, as well as reduce dependence on the cloud in order to minimize Cloud cost. However, this work only considers resource utilization and does not address resource heterogeneity. Besides, application uploading and downloading data, which play a significant role in overall service time, are not addressed in their work. Li and Wang [42] introduced a placement approach that aims to reduce edge nodes' energy consumption and maximize resource utilization. They evaluated the proposed algorithm through numerical analysis based on the Shanghai Telecom dataset. However, their work does not provide any information regarding application characteristics (e.g., computation, communication and delay-sensitivity).

    · Resource Heterogeneity

    Resource heterogeneity plays a critical role in the offloading decision to enhance service time performance in the Edge-Cloud environment. Thus, a number of studies have investigated the impact of resource heterogeneity on service time. For instance, Scoca et al. [43] proposed a score-based algorithm for scheduling offloading tasks that considers both computation and communication parameters. Furthermore, their algorithm considers heterogeneous VMs and assigns heavy tasks to the most powerful VMs. However, their algorithm does not consider server utilization as a key parameter, which could affect service time performance. Roy et al. [44] proposed a task allocation strategy that assigns different application tasks to an appropriate edge server by considering resource heterogeneity. This approach aims to reduce the execution latency as well as to balance the load between edge nodes. Yet, task communication time is not considered in this approach. Taneja et al. [45] proposed a resource-aware placement for IoT offloading tasks. Their approach ranks the resources at the edge by their capabilities and then assigns tasks to a suitable server based on the tasks' requirements (e.g., CPU, RAM and bandwidth). However, this method focuses on improving application service time without explicitly considering application latency-sensitivity.
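The resource-aware matching described for Taneja et al. [45] can be sketched as a capability-ranked best-fit assignment; the field names and the exact heuristic below are illustrative assumptions, not their published algorithm:

```python
def place_tasks(tasks, nodes):
    """Assign each task to the least-capable edge node that still satisfies
    its CPU/RAM/bandwidth requirements, keeping powerful nodes free for
    heavier tasks (best-fit over capability-ranked resources)."""
    # Rank nodes ascending by capability (CPU, then RAM, then bandwidth).
    ranked = sorted(nodes, key=lambda n: (n["cpu"], n["ram"], n["bw"]))
    placement = {}
    # Consider lighter tasks first so they occupy the smaller nodes.
    for task in sorted(tasks, key=lambda t: (t["cpu"], t["ram"], t["bw"])):
        for node in ranked:
            if (node["cpu"] >= task["cpu"] and node["ram"] >= task["ram"]
                    and node["bw"] >= task["bw"]):
                # Reserve the consumed capacity on the chosen node.
                node["cpu"] -= task["cpu"]
                node["ram"] -= task["ram"]
                node["bw"] -= task["bw"]
                placement[task["id"]] = node["id"]
                break
    return placement
```

For example, with one small node ("e1": 2 CPU, 4 RAM, 10 BW) and one large node ("e2": 8 CPU, 16 RAM, 100 BW), a light task lands on "e1" and a heavy task on "e2".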

    Ultimately, given the dynamicity of IoT workload demands, Edge-Cloud service providers aim to find a balance between utilizing Edge-Cloud resources efficiently and satisfying the QoS objectives of IoT applications. Consequently, designing a new task offloading mechanism can contribute to enhancing resource utilization and supporting the requirements of latency-sensitive applications in the Edge-Cloud environment.

    Section 6.1 reviewed the related work on offloading tasks that focuses mainly on application parameters such as computation demands, communication demands and latency-sensitivity in Edge-Cloud environments. The work presented in [24,38,39] considered these application parameters in order to minimize the service time. However, these works fail to consider the impact of resource parameters such as server utilization and VM heterogeneity.

    Table 4:Comparison of the works addressing task offloading decisions

    As discussed earlier in Section 6.2, the work presented in [37,43,45] considered resource utilization and resource heterogeneity as key parameters to schedule offloading tasks in the Edge-Cloud environment, while some related works such as [16,42,44] have considered application requirements (e.g., computation or communication) but without explicitly considering the latency-sensitivity of IoT applications. Hence, there is still a need for an efficient resource management technique that takes into account the application characteristics (computation, communication and latency) as well as the resource parameters (resource utilization and heterogeneity) in order to meet the requirements of IoT applications (service time and task offloading) and utilize Edge-Cloud resources efficiently. Tab. 4 provides a comparison summary of the closely related work on offloading tasks that considers both application and resource parameters in the Edge-Cloud environment.
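Such a combined technique could, for example, rank each candidate node by a weighted mix of application and resource parameters; the weights, field names and formula below are purely illustrative assumptions, not a method from the surveyed works:

```python
def offload_score(task, node, w_cpu=0.5, w_net=0.2, w_util=0.3):
    """Illustrative weighted score mixing application characteristics
    (computation and communication demand) with resource parameters
    (utilization, heterogeneous capacity). Lower is better."""
    compute_delay = task["mi"] / node["mips"]                         # execution estimate
    transfer_delay = (task["upload_kb"] + task["download_kb"]) / node["bw_kbps"]
    return (w_cpu * compute_delay
            + w_net * transfer_delay
            + w_util * node["utilization"])   # utilization in [0, 1]; prefer idle nodes
```

The offloading decision would then pick the candidate (local edge node, a neighbouring edge node, or the Cloud) with the lowest score.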

    7 Conclusion and Future Work

    This paper has presented an Edge-Cloud system architecture that supports scheduling offloading tasks of IoT applications, along with an explanation of the required components and their interactions within the system architecture. Furthermore, it has presented the offloading latency models that consider computation and communication as key parameters with respect to offloading to the local edge node, other edge nodes or the Cloud. This paper has concluded by discussing a number of simulation experiments conducted on EdgeCloudSim to investigate and evaluate the latency models of three different offloading scenarios/schemes, followed by a comprehensive review of the current state-of-the-art research on task offloading issues in the Edge-Cloud environment.
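For intuition, the three offloading schemes can be compared with a simplified transfer-plus-processing service-time model; the parameter values and the formula are illustrative only, not the paper's exact latency models:

```python
def service_time(upload_kb, download_kb, length_mi, mips, link_kbps,
                 extra_hop_kbps=None):
    """Simplified service time: data transfer delay plus processing delay.
    `extra_hop_kbps` models an additional MAN/WAN hop (e.g., edge -> cloud)."""
    transfer = (upload_kb + download_kb) / link_kbps
    if extra_hop_kbps:
        transfer += (upload_kb + download_kb) / extra_hop_kbps
    processing = length_mi / mips
    return transfer + processing

task = dict(upload_kb=500, download_kb=200, length_mi=4000)
local = service_time(**task, mips=10000, link_kbps=5000)            # local edge, WLAN only
neigh = service_time(**task, mips=20000, link_kbps=5000,
                     extra_hop_kbps=2000)                           # neighbouring edge via MAN
cloud = service_time(**task, mips=80000, link_kbps=5000,
                     extra_hop_kbps=1000)                           # remote cloud via WAN
```

With these (assumed) numbers, `local < neigh < cloud`: the faster remote CPUs do not offset the extra MAN/WAN transfer delay, which matches the observation that the offloading decision depends on both computational and communication demands.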

    As part of future work, we intend to extend our approach by adopting a fuzzy logic algorithm that considers application characteristics (e.g., CPU demand, network demand and delay-sensitivity) as well as resource utilization and resource heterogeneity in order to minimize the overall service time of latency-sensitive applications.

    Funding Statement:The authors would like to thank the Deanship of Scientific Research, Prince Sattam bin Abdulaziz University, Al-Kharj, Saudi Arabia, for supporting this work.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
