
    Task Offloading in Edge Computing Using GNNs and DQN

2024-03-23

Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez and Miguel Angel Anton

1 Department of Applied Mathematics, University of the Basque Country UPV/EHU, Eibar, 20600, Spain

2 TECNALIA, Basque Research and Technology Alliance (BRTA), San Sebastian, 20009, Spain

ABSTRACT In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they do not have enough available memory and processing capacity. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as Edge Data Centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, different offloading destinations are appropriate for different tasks. At the same time, establishing an optimal offloading policy, one that ensures that all tasks are executed within the required latency and avoids excessive workload on specific computing centers, is not easy. This study presents two alternatives for solving the offloading decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies these alternatives in a well-known Edge Computing simulator called PureEdgeSim and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the proposed variants offer a slight improvement in task success rate and workload distribution. In terms of energy efficiency, they provided similar results. Finally, the success rates of the different computing centers are tested, and the inability of remote cloud servers to respond to applications in real time is demonstrated. These novel ways of finding an offloading strategy in a local network environment are unique in that they emulate the state and structure of the environment, considering the quality of its connections and its constant updates. The offloading rating defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques is demonstrated given the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.

KEYWORDS Edge computing; edge offloading; fog computing; task offloading

    1 Introduction

Various computing centers can be found in a local network environment, with possible interconnections between them. In the Edge Computing paradigm, this interconnection, which facilitates the transmission of information, is of particular interest. In many cases, when resource-constrained devices are assigned computationally expensive tasks, they can become overloaded, lacking sufficient processing capability and memory availability. Weaker computers can alleviate their computational load by assigning different tasks to more powerful devices nearby. These devices vary in their complexity and proximity to the end-user devices. This variation in possible destinations makes it appropriate to distinguish different layers in an architecture, dividing it into the cloud, fog/edge, and IoT (Internet of Things) layers. The cloud tier comprises remote network servers rich in general resources, with the capacity to store, manage, and process data. This tier is the richest in terms of processing capability and resource availability and often acts as the network orchestrator. In contrast, at the lowest level one can find sensors, gadgets, and other IoT devices equipped with restricted computing capabilities that nevertheless offer immediate responses to users. Their function is to collect information from the environment and act on environmental changes, among others. Meanwhile, other layers can be defined, such as the Fog and Edge layers, with greater capacities than the IoT layer but with less processing and memory capacity than the cloud, making them a valuable alternative for different types of computations.

Management and decision-making tools based on Artificial Intelligence (AI) algorithms have great potential to offer new and more efficient services that improve people's living conditions. These services can be of various types, from the classification of multiple classes of land cover [1] to systems where pollution forecasts are made [2]. These tools are possible thanks to the collection of information from the physical environment in real time (RT) and the subsequent use of this data in complex Machine Learning (ML) and Deep Learning (DL) models. These models require high computing power for the training phase and a large amount of available memory to store their parameters for later inference. Consequently, IoT devices cannot store such an amount of data or train deep models, so they need to reduce the data and model sizes or send these computations to more powerful devices. The former could alleviate the lack of resources faced by IoT devices, but a drop in accuracy would be inevitable. When end-user devices intend to perform certain computations but are not equipped with sufficient resources or are overloaded, they have the opportunity to transmit their assigned tasks to other devices over the network, given the interconnectivity between different nodes. A proper offloading strategy is crucial to avoid situations where certain nodes in an architecture absorb all tasks from nearby end devices. Load balancing between nodes must therefore be ensured, and all tasks must be executed successfully.

In addition, depending on the environment in which this paradigm is deployed or in which the application is intended to be used, some alternatives will be more beneficial than others. For example, if tasks have latency requirements, offloading to the nearest nodes will be more appropriate than offloading to the cloud, even though cloud servers are unlimited in memory and offer the highest processing capabilities. The weakness of the cloud alternative is that transmitting information from end-user devices to the cloud involves some delay and possible loss of information over the network. This loss could result from connection drops or other message errors. The information can also be vulnerable to intrusion attacks and may arrive inaccurate or incomplete. Transferring confidential information to the cloud is therefore not the right decision, because vulnerability grows with increased exposure of information over the network.

In contrast, IoT devices do not expose information over the network when they perform a task themselves, making them the most secure option. The latency required by many applications also prevents using the cloud as the final computing center due to the increased delay. The essential requirement of RT computing is immediate response, which is impossible to achieve using cloud computing. On the other hand, end-user devices offer immediate feedback, but their limitations can lead to a loss of model accuracy: excessively reducing the size of the models and of the data needed to represent them leads to a severe drop in the performance of the resulting models. For example, a simple actuator has to give a specific response depending on the values in the environment. Obtaining the response may require applying an ML model that would not be feasible to compute on the IoT device, or at least not with its original structure. However, the interconnectivity between computers at different layers offers the possibility of computing these models in other computing centers, alleviating the computational load of these tiny devices.

To solve the problems mentioned above, it is essential to establish an appropriate task offloading policy. This policy indicates, in each case, whether it is necessary to carry out the offloading process and, if so, what the most appropriate destination is.

There are many alternatives intended to help with the task offloading decision problem. Some researchers have opted for optimization algorithms, while other studies have chosen methods based on AI. Other alternatives, such as population-based and control-theory methods, are outside the scope of this research.

This study applies Graph Neural Networks (GNNs) and Deep Q-Networks (DQN) to decide whether it is feasible to offload a task from an end-user device to a computing center that is richer in terms of processing capability and available memory, and to determine the best destination available in the area in each situation. The local network structure, in which different types of devices can be found, is represented as a graph where each node is a computer and the edges are the interconnections between them. This architecture can be extrapolated to a local network area where different IoT devices are interconnected with each other and with more powerful Edge Data Centers that offer the possibility of offloading tasks from small devices, such as a Smart Home with small gadgets and a central router. In addition, in the DQN learning process the agent learns the network environment, where each action is a decision to offload the task to one of the potential destination computers surrounding the source computer originating the task. Each state represents a situation in which all the characteristics of the computers are reflected. In each scenario, this research establishes a general remote cloud server that serves as the orchestrator of the offloading strategy, as well as a fixed number of Edge Computing data centers and end-user (Edge) devices. The proposed methods are evaluated by observing the success rate of the generated tasks, workload balance, and energy consumption. Finally, this study analyzes the success rates of each device type to determine which computing nodes are the most suitable offloading destinations.

The main contributions made in this work are the following. We offer a novel alternative for establishing a task offloading strategy in a local network environment. The network architecture is closely replicated in the GNN architecture, and the quality of a network connection for offloading purposes is rated with a novel parameter called the offloading rating. Furthermore, environment updates are fully considered in the DQN learning process. Our methodology offers an innovative way to offload tasks in a local network environment, ensuring load balancing and completion of tasks within the desired latency. An overview of the procedure is given in Fig. 1.

The rest of the paper is organized as follows. Section 2 reviews some of the most representative works published in the literature. Section 3 specifies the new algorithms proposed in this work. Section 4 presents the materials and methodology applied in this work. In Section 5, we carry out different experiments on the task offloading paradigm using a well-known simulator, and the results are presented. In Section 6, these results are analyzed and conclusions are drawn.

Figure 1: Overview of the entire process

    2 State of the Art

Over the last decade, several researchers have found the task offloading paradigm to be a topic of great interest. The opportunity to transfer tasks from resource-constrained devices to resource-rich computing data centers can alleviate the computational load on end-user devices and complete tasks that were not feasible at the source due to processing and memory constraints.

These techniques have been widely used in different areas. In virtual reality (VR) applications, approaches such as fog computing-based radio access networks (F-RAN) [3] or ML-based intelligent scheduling solutions [4] have been used. In autonomous vehicle applications, task offloading techniques have been used to improve performance by reducing latency and transmission cost, as was done in [5]. Real-time traffic management became feasible by distributing decision-making tasks to Edge devices [6]. In the area of robotic task offloading, new paradigms have emerged: in [7], an approach to simultaneous localization and mapping (SLAM) for RGB-D cameras like the Microsoft Kinect was presented, and in [8], a novel Robot-Inference-and-Learning-as-a-Service (RILaaS) platform for low-latency and secure inference serving of deep models deployable on robots was introduced. Nonetheless, there are already commercial solutions for offloading tasks in robotics [9–11]. Similarly, cloud-based solutions can be found in video streaming applications [12,13]. However, offloading to the Edge should improve performance, as in [14,15], by enabling gateways and facilitating caching and transcoding mechanisms, respectively. The challenge of transferring computationally expensive tasks to Edge nodes has been addressed in [16–18] in the area of disaster management, but it is still underexplored in this field. In the IoT field, task offloading has been of special interest since its inception, since these resource-limited devices often face this drawback. Due to the long delays involved in network transfers between IoT devices and the cloud, edge offloading needs to be considered. Collaboration between IoT devices and Edge devices could be useful in the area of smart health, being a good alternative to help paralyzed patients [19]. However, due to the growing number of IoT devices, the best option would be collaboration between Edge and Cloud servers, as in [20], which proposed a paradigm foreseeing an IoT Cloud Provider (ICP)-oriented cooperation that allows all devices belonging to the same public/private owner to participate in the federation process.

Different strategies have been proposed to solve the task offloading problem. Optimization algorithms have become a very useful and frequently used solution for this paradigm. Mixed integer programming (MIP) has become a useful tool for resource allocation problems, addressing network synthesis and allocation issues [21]. Others opted for greedy heuristic solutions [22–24] to solve the task offloading problem. Their main advantage is that they offer a low execution time and do not require specialized optimization tools, since they can be expressed as pseudocode that is easily implementable in any programming language. They become much more efficient when the task offloading problem is modeled as a nonlinear constrained optimization problem, or when the scale of the scenario is large [25]. In such cases, a greedy heuristic can approximate the exact solution [13,22,24]. Elsewhere, game theory was chosen, formulating the problem of partial task offloading in a multi-user, multi-channel wireless interference Edge Computing environment as an offloading game [26]. The Cloud-Edge game can be seen as an infrastructure game in which the players are the corresponding infrastructures [27]. Contract theory [28–30] and local search [31,32] are other types of optimization solutions for the task offloading problem.

Another interesting approach to solving the task offloading problem is the use of methods based on AI. This branch includes all ML methods: Supervised Learning, Unsupervised Learning, DL, and Deep Reinforcement Learning (DRL) methods. The offloading destination could be chosen following the simplest models, such as a regression model [33] or regression trees [34]. However, given the dynamism of network environments, modeling has also been performed with the support vector regressor [35] and the nearest neighbor regressor [36], for future load prediction and energy-efficient utilization of the Edge servers, respectively. In [37], a resource-aware offloading (ROA) algorithm for video analysis in Mobile Edge Computing, using the radial basis function network (RBFN) method to improve reward, was proposed under a resource deadline constraint. As for unsupervised models, clustering models are useful tools to group resources depending on the distance between computing nodes [38] and task demands [39], and to analyze the allocated resources [40].

DL can be an accurate tool for making task offloading decisions based on the resource usage of the processing Edge nodes, the workload, and the quality of service (QoS) constraints defined in the Service Level Agreement (SLA) [41]. In [42], a new multi-objective strategy based on the biogeography-based optimization (BBO) algorithm for Mobile Edge Computing (MEC) offloading was proposed to satisfy multiple user requirements (execution time, power consumption, energy, and cost). In [43], a task offloading model based on dynamic priority adjustment was proposed; on top of it, a multi-objective optimization model for task scheduling was constructed, which optimizes time delay and energy consumption. In [44], an Improved Gorilla Troops Algorithm (IGTA) was proposed to offload dependent tasks in MEC environments with three objectives: minimizing the application execution latency, the power consumption of light devices, and the cost of the MEC resources used. DL models have been used to minimize the computational load under dynamic network conditions and constrained computational resources [45]. A model that also considers the challenges of speed, power, and security, while satisfying QoS with dynamic needs, has been proposed to determine the combination of different computing nodes [46]. In [47], a novel calibrated contextual bandit learning (CCBL) algorithm was developed, where users learn the computational delay functions of micro base stations and predict the task offloading decisions of other users in a decentralized manner. In [48], a novel federated learning framework for GANs was presented, namely Collaborated gAme Parallel Learning (CAP), which supports parallel training of data and models for GANs and achieves collaborative learning between edge servers, devices, and the Cloud. Furthermore, a Mix-Generator (Mix-G) module was proposed that splits a generator into a sharing layer and a personalizing layer.

DRL techniques have emerged as an interesting alternative to typical task offloading policies. Deep Q-Networks have been used to solve the task offloading problem [49] and have been optimized by introducing long short-term memory (LSTM) [50] into them. An intelligent partial offloading scheme was proposed in [51], namely digital twin-assisted intelligent partial offloading (IGNITE), which combines an improved clustering algorithm with the digital twin (DT) technique, in which unreasonable decisions can be avoided by reducing the size of the decision space and finding the optimal offloading space in advance. In the same field, reference [52] proposed a mobility-dependent task offloading (MESON) scheme for urban vehicular edge computing (VEC) and developed a DRL-based algorithm to train the offloading strategy. To improve the training efficiency, a vehicle mobility detection algorithm was further designed to detect the communication time between vehicles and Road Side Units (RSUs). In this way, MESON was able to avoid unreasonable decisions by reducing the size of the action space. Finally, the DRL algorithm was used to train the offloading strategy. In [53], a Markov decision process (MDP) was used to minimize the total completion time. In [54], a wireless MEC system was considered that governs a binary offloading decision to execute the task locally on the Edge devices or on the remote server, proposing a Reinforcement Learning-based Intelligent Offloading online (RLIO) framework that adopts the optimal offloading policy.

Other approaches that differ from those mentioned above include population-based methods and methods based on control theory. Swarm Intelligence methods [55,56] and Evolutionary Algorithms [57,58] are the two variants of population-based methods that have been proposed to address the problem. Solutions based on control theory include optimal control [59,60], state feedback control [61], and Lyapunov optimization processes [62], among others.

Most of the mentioned studies were implemented using outdated methodologies that have been surpassed by recent models such as deep models or Reinforcement Learning (RL) models, and those that did choose these techniques are single-objective and/or do not consider the tasks' features and the actual workload of the destinations. In contrast, this study applies a simple approach that considers the nature of the tasks and the updated status of potential offloading destinations, facilitating user understanding while achieving high accuracy and competitive performance. It provides a methodology representing the network architecture as a graph and an RL technique that considers all the key factors when determining the optimal offloading decision. The research proposes a novel feature to evaluate the goodness of an offloading destination, which is a critical factor in determining whether a potential offloading route is valuable for a given task. Table 1 compares the latest and most relevant works, specifying the methodology proposed in each.

Table 1: Comparison of current works on edge computing

    3 Proposed Algorithms

This study proposes two approaches, based on the well-known GNN and DQN techniques, to solve the task offloading problem. Brief descriptions of both architectures are given in this section. Finally, the training processes of both algorithms are explained with the help of Fig. 2, which shows the complete procedure of the architecture.

Figure 2: Training procedure of the entire architecture, divided into different steps

    3.1 Graph Neural Network

Graphs are a data structure representing a collection of elements (nodes) and their connections (edges). A GNN is a type of neural network (NN) that works directly with the graph structure. In our use case, each node in the network represents a computing center, which can be an IoT/Edge device, an Edge server, or a cloud server. The edges between these nodes represent the connections between the different computing centers, which can be the offloading paths of the tasks that must be completed to meet their requirements.

Each node represents a computing device, a potential destination for the task being offloaded, and the edges represent the connections between these devices. The features of each node are determined by the computing device's available RAM, millions of instructions per second (MIPS), central processing unit and memory, together with the desired task latency and the file size in bits associated with the task. The edge features are given by the offloading rating defined in this work, that is, the number of tasks successfully executed using the offloading path divided by the total number of tasks offloaded using that path.
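To make the graph encoding concrete, the following is a minimal sketch of how such a graph could be built, assuming PyTorch Geometric; the three-node topology and the feature values are illustrative assumptions, not the authors' exact implementation.

```python
import torch
from torch_geometric.data import Data

# Node features: [available RAM (MB), MIPS, CPU cores, storage (GB),
#                 max task latency (s), task file size (bits)]
x = torch.tensor([
    [1024.0,  8000.0,   2.0,   16.0, 0.5, 2.4e6],  # IoT/Edge device (task source)
    [16384.0, 40000.0,  10.0,  200.0, 0.0, 0.0],   # Edge Data Center
    [16384.0, 40000.0, 200.0, 1000.0, 0.0, 0.0],   # cloud server
], dtype=torch.float)

# Directed edges: device -> Edge Data Center, device -> cloud
edge_index = torch.tensor([[0, 0],
                           [1, 2]], dtype=torch.long)

# Edge feature: offloading rating = tasks successfully executed via the path
# divided by the total tasks offloaded along it (the feature defined above).
edge_attr = torch.tensor([[0.92],   # device -> Edge Data Center
                          [0.41]],  # device -> cloud
                         dtype=torch.float)

graph = Data(x=x, edge_index=edge_index, edge_attr=edge_attr)
```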

The output size of the network is determined by the number of possible destinations of the task initially assigned to the IoT/Edge device: the number of output neurons equals the number of possible destinations. The output is binary, offload/do not offload to each possible destination.

To train the network, we use the real data produced by the architecture when following two well-known offloading algorithms, Trade-Off and Round Robin. The offloading destinations obtained for each task under either of these algorithms constitute the ground-truth data used to train the network. Once the network is trained, the input is the task with its characteristics, and the output is a binary decision over the possible offloading destinations. A brief description of our algorithm is provided in Algorithm 1.
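As an illustration of this supervised training step, the sketch below assumes a small two-layer graph convolutional network in PyTorch Geometric, with the offloading rating entering as a scalar edge weight. The SGD optimizer, the 0.001 learning rate, and the 10,000 epochs follow the training procedure stated in Section 3.3; the model shape and the ground-truth labels are hypothetical.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class OffloadGNN(torch.nn.Module):
    """Sketch of a GNN scoring each node as an offloading destination."""
    def __init__(self, num_node_features: int, hidden: int = 32):
        super().__init__()
        self.conv1 = GCNConv(num_node_features, hidden)
        self.conv2 = GCNConv(hidden, 1)  # one offload/no-offload logit per node

    def forward(self, x, edge_index, edge_weight):
        # The offloading rating is used as the edge weight of the convolution.
        h = F.relu(self.conv1(x, edge_index, edge_weight))
        return self.conv2(h, edge_index, edge_weight).squeeze(-1)

model = OffloadGNN(num_node_features=6)
# SGD with lr = 0.001, as stated in Section 3.3.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001)

# `graph` comes from the construction sketch above; `labels` stands in for
# the binary decisions recorded while running Trade-Off or Round Robin.
labels = torch.tensor([0.0, 1.0, 0.0])  # hypothetical ground truth

for epoch in range(10000):  # 10,000 epochs, as in Section 3.3
    optimizer.zero_grad()
    logits = model(graph.x, graph.edge_index, graph.edge_attr.squeeze(-1))
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()
    optimizer.step()
```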

    3.2 Deep Q-Network

RL is a framework in which an agent attempts to learn from its environment by obtaining different rewards for each action performed in that environment. The agent's objective is to maximize the sum of rewards obtained by performing consecutive actions following its policy; by optimizing this policy, the problem in question is solved. After obtaining an observation of its environment s_t, the agent takes an action a_t following its policy π(a_t | s_t). Consequently, depending on the action performed for that observation, a reward and the next observation s_{t+1} are obtained.

DQN was developed by [63]. Deep neural networks (DNN) and experience replay techniques were used to optimize the Q-learning process. Q-learning is based on the function Q(s, a), which measures the expected return, or the discounted sum of rewards, obtained from state s by taking action a first and following policy π thereafter. An optimal function Q* is defined and, using the Bellman optimality equation (see Eq. (1)) as an iterative update, convergence of the function Q is guaranteed.
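In its standard form, consistent with the definitions above, the Bellman optimality equation referenced as Eq. (1) reads:

```latex
Q^{*}(s,a) \;=\; \mathbb{E}_{s'}\!\left[\, r + \gamma \max_{a'} Q^{*}(s',a') \;\middle|\; s,a \right] \tag{1}
```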

Representing the function Q by enumerating all possible combinations of actions and states is not practical in most cases. For this reason, a function approximator is used. Using an NN as the approximator, this can be done by adjusting parameters θ to minimize a loss function.
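Although the loss is left unstated here, the standard DQN objective minimizes the squared temporal-difference error, with a periodically updated copy of the parameters θ⁻ used for the targets:

```latex
L(\theta) \;=\; \mathbb{E}_{(s,a,r,s')}\!\left[\left( r + \gamma \max_{a'} Q(s',a';\theta^{-}) - Q(s,a;\theta) \right)^{2}\right]
```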

In our use case, the actions are the possible decisions to offload to each of the potential destinations on the network, given the state of the environment. The state of the environment is determined by the task's characteristics and each device's state and capabilities. The properties of the task that condition the state of the environment are the maximum allowed latency and the size in bits of the file belonging to the task. Similarly, each computing device's available RAM, MIPS, central processing unit, and memory determine the rest of the state properties. If the task requirements were successfully met, the reward for offloading to a given computing center was 1, and -1 if the requirements for that action were not met. Following the technique above, the optimal offloading policy was obtained. Finally, the optimized policy determines the optimal offloading destination for each task. Algorithm 2 summarizes the method.
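A minimal sketch of this setup, assuming PyTorch, is given below: the state vector concatenates the task's latency and size with each candidate device's RAM, MIPS, CPU, and memory, actions index the potential offloading destinations, and rewards are +1/-1 as described above. The network sizes, epsilon-greedy schedule, and replay-buffer settings are illustrative assumptions, not the authors' exact configuration.

```python
import copy
import random
from collections import deque

import torch
import torch.nn as nn

NUM_DESTINATIONS = 5                   # hypothetical number of candidate targets
STATE_DIM = 2 + 4 * NUM_DESTINATIONS   # task (latency, size) + per-device
                                       # (RAM, MIPS, CPU, memory)
GAMMA, EPSILON, BATCH = 0.99, 0.1, 32  # assumed hyperparameters

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, NUM_DESTINATIONS))
target_net = copy.deepcopy(q_net)      # frozen copy, resynced periodically via
                                       # target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10000)           # experience replay buffer of (s, a, r, s')

def select_action(state: torch.Tensor) -> int:
    """Epsilon-greedy choice among the potential offloading destinations."""
    if random.random() < EPSILON:
        return random.randrange(NUM_DESTINATIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def train_step():
    """One Q-learning update over a replayed minibatch."""
    if len(replay) < BATCH:
        return
    states, actions, rewards, next_states = zip(*random.sample(replay, BATCH))
    states, next_states = torch.stack(states), torch.stack(next_states)
    actions = torch.tensor(actions)
    # Reward is +1 when the task met its requirements, -1 otherwise (Section 3.2).
    rewards = torch.tensor(rewards, dtype=torch.float)

    q = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = rewards + GAMMA * target_net(next_states).max(dim=1).values
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```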

3.3 Training Procedure & Orchestration of Tasks

In the case of the GNN, it is first necessary to perform a training process following either of the two default methods available in the simulator. In each iteration, any of the devices that make up the IoT layer may randomly create a task. All devices send a message to the cloud reporting their actual status, stating whether they have a task to solve (Step 1 in Fig. 2). In this scenario, the cloud orchestrates the offloading action following the predetermined algorithm by sending a message to the device (Step 2 in Fig. 2), and the task is offloaded to the destination (Step 3 in Fig. 2). If the task is completed while meeting the requirements, the outcome is recorded as positive for the subsequent training of the GNN; otherwise, it is recorded as negative. Once the entire training procedure of the default algorithm is completed, the GNN uses the offloading decisions and the outcomes generated in the previous step as ground truth, performing its training process after defining the offloading rating of each network connection as an edge feature. For both training procedures, the graph shape was determined by the network structure (influenced by the number of IoT devices), the learning rate was 0.001, the optimizer was Stochastic Gradient Descent (SGD), and the number of epochs was 10,000. The GNN training is conducted in the cloud. Finally, the cloud makes the offloading decision after each device sends the message with the information about its status (Step 1 in Fig. 2), and the cloud returns a message to the task-generating device informing it of the offloading destination (Step 2 in Fig. 2). After sending the task to the target device (Step 3 in Fig. 2) and completing the task on that device, the results are sent back to the source device (Step 4 in Fig. 2).

In the case of DQN, each device sends information about its status to the cloud (Step 1 in Fig. 2). There, given the state of the environment, the optimal action must be taken based on the offloading policy. If the task was completed meeting the requirements, the reward is 1, and 0 otherwise. In this way, an optimal offloading policy is obtained once the Q function converges.

Finally, the obtained Q function determines the optimal offloading destination in the cloud after each device sends its state to the cloud, and the cloud returns a message to the task generator indicating where to offload its assigned task (Step 2 in Fig. 2). After sending the task to the target device (Step 3 in Fig. 2) and completing the task on that device, the results are sent back to the source device (Step 4 in Fig. 2).

4 Material & Methodology

This section explains the environment in which the methodologies presented in the previous section were applied in the experimental process. The software and hardware used in the experiments are also described.

For the experimental process, we chose a well-known Edge Computing simulator called PureEdgeSim [64]. The simulator offers high configurability through its modular design. By editing each module and adjusting it to the user's needs, it is simple and feasible to reproduce the desired environment in each case.

The hardware environment in which all the development of our work took place is an x64 Ubuntu 20.04.4 LTS Operating System equipped with an Intel Core i7-11850H working at 2.5 GHz × 16, 32 GB of DDR4 RAM, and an NVIDIA T1200 Laptop GPU (driver version: 510.47.03, CUDA version: 11.6).

This study established between 10 and 30 end-user devices, forming the IoT-Edge layer. The experiment was repeated 3 times, and the results of applying the abovementioned task-offloading decision algorithms were compared. These devices were mobile, with their range of motion limited to a 200 × 200 unit area. The Fog-Edge layer comprised four data centers, each located symmetrically in the coverage area. Each of these Edge Data Centers covered an area of 100 × 100 units. Finally, a resource-rich cloud platform offered greater computing power and memory.

The end-user devices were interconnected with each other, so connections between them were feasible. Similarly, each of these end-user devices was connected to the nearest Edge Data Centers, and all were connected to the cloud.

The orchestrator of the offloading decision was the cloud. It was equipped with 200 cores, 40,000 MIPS, 16 GB of RAM, and 1 TB of memory.

The Edge Data Centers were equipped with ten cores, 40,000 MIPS, 16 GB of RAM, and 200 GB of memory. Their idle power consumption was 100 W, with a maximum consumption of 250 W.

Finally, the number of Edge devices or end-user devices was 10, 20, and 30 in the successive experimental tests. Their operating system was Linux, and they had an x86 architecture. These devices showed dynamic behavior in some cases, moving at a speed of 1.8 m/s. The network connection used to interconnect with the rest of the devices was WiFi, with a bandwidth of 1300 Mbit/s and a latency of 0.005 s. There were 5 different types of Edge devices, and their characteristics are summarized in Table 2.

Table 2: Characteristics of different types of Edge devices

Each device could spawn any of the applications or tasks whose specifications are summarized in Table 3. Container size refers to the size of the application in kB. Request size refers to the size in kB of the offloading request sent to the orchestrator and then to the device where the task will be executed. Result size refers to the size in kB of the results of the offloaded task.

Table 3: Characteristics of different types of tasks

This study evaluated the offloading decision algorithms against the default methods provided by the simulator, Round Robin and Trade-Off. Different options regarding the possible offloading destinations were introduced: all devices; Edge devices only; Edge Data Centers only; Edge Data Centers and cloud only; Edge devices and cloud only; and Edge devices and Edge Data Centers only.

In total, there were 6 offloading configurations × 3 Edge-device counts × 4 algorithms = 72 simulation configurations, as enumerated in the sketch below. Each simulation run lasted 200 s.
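As a sanity check on the experimental grid, the full set of runs can be enumerated as follows; the configuration names are ours, not the simulator's:

```python
from itertools import product

# 6 destination configurations x 3 device counts x 4 algorithms = 72 runs.
destinations = ["all", "edge_devices", "edge_datacenters",
                "edge_datacenters+cloud", "edge_devices+cloud",
                "edge_devices+edge_datacenters"]
device_counts = [10, 20, 30]
algorithms = ["Round Robin", "Trade-Off", "GNN", "DQN"]

configs = list(product(destinations, device_counts, algorithms))
assert len(configs) == 72  # matches the count stated in the text
```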

    5 Experiments&Results

In the experimental process, we considered the following parameters: energy consumption, tasks executed in each layer, and success rate. Additionally, we considered the distribution of the workload among the different devices. Task failure could be due to different reasons, such as lack of available memory, violation of latency constraints, or network traffic congestion. Each metric is explained below:

• Success rate: The number of successfully executed tasks divided by the total number of tasks.

• Energy consumption: The power consumed by all devices of each type during each experimentation process.

• Workload distribution: The number of tasks handled by each type of device in each experimentation process.
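A minimal sketch of how these three metrics could be computed from a per-task log is shown below; the record fields are hypothetical stand-ins and are not tied to PureEdgeSim's actual output format.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class TaskRecord:
    device_type: str   # "edge_device", "edge_datacenter", or "cloud"
    succeeded: bool
    energy_wh: float   # energy attributed to this task, in Wh

def metrics(log: list[TaskRecord]) -> tuple[float, dict, Counter]:
    # Success rate: successfully executed tasks over total tasks.
    success_rate = sum(t.succeeded for t in log) / len(log)
    # Energy consumption aggregated per device type.
    energy: dict[str, float] = {}
    for t in log:
        energy[t.device_type] = energy.get(t.device_type, 0.0) + t.energy_wh
    # Workload distribution: tasks handled per device type.
    workload = Counter(t.device_type for t in log)
    return success_rate, energy, workload
```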

    5.1 Tests with 10 Edge Devices

First, only ten devices were placed in the end-user layer, divided into the various types according to the percentages shown in Table 2. These devices randomly generated the three types of tasks following the percentages and generation rates listed in Table 3. The success rate results are shown in Table 4.

Table 4: Success rate of different algorithms including different types of destination devices (10 Edge devices)

Table 5: Energy consumption in Wh of Edge devices for different algorithms including different types of destination devices (10 Edge devices)

Table 6: Energy consumption in Wh of Edge Data Centers for different algorithms including different types of destination devices (10 Edge devices)

As can be seen, the most critical environments were those in which no Edge Data Centers were available as possible offloading destinations. This could be because the Edge devices were not equipped with sufficient capabilities to computationally support the other devices' tasks. Likewise, the cloud was too far from these end-user devices, so latency requirements were not met in most cases where the cloud was the offloading destination. These problems were solved when tasks were offloaded to Edge Data Centers, which are computationally less powerful than the cloud platform but still highly capable. Likewise, being located close to the Edge devices, task latency was not an issue.

The energy consumption of Edge devices and Edge Data Centers is shown in Tables 5 and 6, respectively. Energy consumption was higher in cases where no Edge Data Centers were available as possible offloading destinations and the algorithm chosen to decide the offloading destination was GNN or DQN. In these cases, because the best offloading destinations were not available, the Edge devices that could perform the tasks consumed more power. With the default algorithms, however, this was not the case: most of the tasks would have been offloaded to the cloud, which hurt the success rate, as seen in Table 4. The Edge Data Centers show a similar consumption pattern for all algorithms, slightly lower under all offloading policies in which they were not potential offloading destinations.

    5.2 Tests with 20 Edge Devices

We repeated the experiment from the previous subsection, changing the number of Edge devices to 20. The success rate results are shown in Table 7.

Table 7: Success rate of different algorithms including different types of destination devices (20 Edge devices)

Table 8: Energy consumption in Wh of Edge devices for different algorithms including different types of destination devices (20 Edge devices)

Table 9: Energy consumption in Wh of Edge Data Centers for different algorithms including different types of destination devices (20 Edge devices)

The success rate trend continued when we doubled the number of Edge devices. However, as the number of free Edge devices must have been higher than in the previous experiment, the success rates were higher when Edge devices were involved and Edge Data Centers were not. In cases where Edge Data Centers were possible destinations, the rates were 100% or close to it. In this test, when Edge Data Centers were not potential destinations, performance degradation was observed when Edge devices were the only potential destinations for the Round Robin algorithm and when the cloud was also a potential offloading destination for the Trade-Off algorithm. In these cases, respectively, the Edge devices were not sufficient to handle the tasks, and the cloud could not respond within the desired latency.

Tables 8 and 9 show the energy consumption of Edge devices and Edge Data Centers, respectively.

There was no evident variation in the power consumption of the Edge Data Centers when doubling the number of task-generating Edge devices. However, the power consumption of the Edge devices was almost double that of the previous case, since more tasks were generated while the number of Edge Data Centers remained fixed. Since the computing platforms with higher capabilities stayed the same while the number of tasks grew, more tasks were offloaded to the resource-limited devices. As in the previous experiment, Edge device consumption was slightly higher in the cases where the Edge Data Centers did not receive any tasks.

    5.3 Tests with 30 Edge Devices

Finally, we replicated the experiment from the previous subsections, changing the number of Edge devices to 30. The success rate results are shown in Table 10.

Table 10: Success rate of different algorithms including different types of destination devices (30 Edge devices)

Table 11: Energy consumption in Wh of Edge devices for different algorithms including different types of destination devices (30 Edge devices)

Table 12: Energy consumption in Wh of Edge Data Centers for different algorithms including different types of destination devices (30 Edge devices)

Overall, the success rates were better than with 10 Edge devices and comparable to those of the 20-device case.

The power consumption of the Edge devices and Edge Data Centers is shown in Tables 11 and 12, respectively. The increase was proportional to the previous cases, with a low variation in the consumption of the Edge Data Centers and a significant variation in the consumption of the Edge devices. Apart from the linear increase in consumption due to the higher number of tasks computed on these types of devices, in the case where Edge devices and the cloud were the potential destinations, the highest energy consumption was reported when DQN and GNN were the applied algorithms, as a consequence of a higher number of tasks being offloaded to Edge devices than to the cloud. As in the previous cases, when a greater proportion of tasks was offloaded to Edge devices (higher energy consumption), the success rates were higher, due to the latency violations that occurred when the cloud was in charge of performing the tasks.

    5.4 Task Distribution with Varying Edge Devices

Next, this study established all types of computing devices as possible offloading destinations and, varying the number of Edge devices between 10 and 30 as in the previous tests, observed the distribution of the offloading destinations in each case. The algorithms considered were GNN and DQN.

Fig. 3 indicates that the incremental trend toward computing tasks on Edge devices holds for both algorithms as the number of Edge devices grows and, consequently, the number of generated tasks grows as well. This agrees with the increase in energy consumption observed in the previous sections. For both algorithms, the Edge Data Centers cannot attend to more tasks in the case of 30 Edge devices, relegating the remaining tasks to the Edge devices, which consequently attend to more tasks.

Figure 3: Task distribution with different algorithms and number of Edge devices

    5.5 Success Rate of Different Layers

Finally, we established all types of computing devices as possible offloading destinations and, varying the number of Edge devices between 10 and 30 as in previous tests, observed the success rate of the different layers to determine the optimal destination for computing the tasks generated by end-user devices. The algorithms considered were GNN and DQN.

In this case, the worst results were given by the cloud platform. Although it is the best option in terms of computational capabilities compared to the rest of the devices, the latency requirements were more difficult to meet due to the long time required to cross the entire network. This was expected. However, Edge devices were as good as Edge Data Centers in terms of success rate for 10 and 20 devices. In the last case, where 30 Edge devices were generating tasks, the Edge Data Centers were fully occupied, so more tasks were offloaded to resource-constrained devices. In isolated cases, the Edge devices were not able to complete the task, which is why, in this last case, the Edge devices did not reach a 100% success rate for either algorithm. Fig. 4 shows the success rates for each layer and algorithm with different numbers of Edge devices.

Figure 4: Success rates with different algorithms and number of Edge devices

6 Discussion & Conclusion

Section 5 presented the results obtained in the experimental process, observing different parameters.

Regarding the success rate, there was a clear trend in favor of the cases where Edge Data Centers were included among the offloading destinations. This is because they had sufficient computing power and were located close to the Edge devices from which the tasks were generated. The worst results were obtained when only the cloud was available as a powerful computing center. In this situation, network traffic congestion will have caused longer delays in task responses, making it challenging to meet latency requirements, while tasks offloaded to other Edge devices could not be executed due to the lack of resources. As the number of Edge devices grew, the success rates of the cases where Edge Data Centers were excluded improved significantly due to the increased number of free Edge devices. In this case, the number of Edge devices with sufficient computing power grew, and fewer tasks had to be transmitted to the cloud.

Considering the differences between the algorithms regarding success rate, there was a slightly favorable trend toward DQN, especially when the number of Edge devices was large. In this case, there were more possible offloading destinations, that is, more possible actions given the state of the environment. By learning an optimal policy, it is more feasible to reach the optimal offloading destination with this algorithm. GNN also outperformed the two default simulator methods when the number of Edge devices was 20 and 30. This was because the graph was more complex, and although optimizing the network was more difficult, the resulting offloading decisions were closer to optimal.

In terms of energy efficiency, there was no big difference between the first 3 tests. Naturally, in the cases of 20 and 30 devices, the power consumption of the Edge devices grew linearly from ~3 to ~10 Wh and ~13 Wh, respectively, due to the larger number of tasks generated and offloaded to these devices. In contrast, the energy consumption of the Edge Data Centers remained at almost ~34 Wh even though the number of generated tasks increased. This was because the Edge Data Centers were full and other types of devices were needed to handle the remaining tasks. This would have caused a reduction in the success rate, especially in the case of the 2 default algorithms and when the cloud and Edge Data Centers were included. In that situation, the tasks not attended to by the Edge Data Centers would have been offloaded to the cloud, encountering the problems mentioned in the previous paragraphs.

Regarding the distribution of tasks between the different types of devices, we observed that when the number of Edge devices was not too high (10 or 20 devices), the Edge Data Centers were the destinations of most tasks. In contrast, when the number of devices grew to 30, they did not have enough free memory, or their processors were busy. As a result, more tasks were offloaded to Edge devices, and a slight increase in the number of tasks offloaded to the cloud was also observed. In this experiment, this study compared only the proposed algorithms, since they had the best performance in terms of success rate. Between them, DQN offloaded more tasks to the Edge Data Centers, becoming the better alternative due to its better performance when Edge Data Centers were included among the possible offloading destinations.

Finally, the success rates of the different types of devices were carefully compared. This study established all types of devices as possible destinations for the two proposed algorithms and varied the number of Edge devices between 10 and 30. There was a clear difference between the performance of the cloud and that of the rest of the devices. As mentioned in this section, the violation of the latency requirement is responsible for this performance degradation, given the high delay involved in traversing the entire network to transfer the task to the cloud and return the results to the Edge devices. Between Edge devices and Edge Data Centers, the latter had the best success rate: their larger capacities and memory made them superior in computing power compared to Edge devices. However, with an algorithm good enough to orchestrate all tasks among all possible destinations, offloading the less demanding tasks to the weakest computation centers, the success rate can be preserved even with a higher number of generated tasks. That is why the proposed algorithms outperform the two default algorithms: they can offload less demanding tasks to weaker devices and more complex ones to Edge Data Centers. In this way, the task load was balanced among all available devices while meeting latency requirements.

We saw that our proposed algorithms outperform the default PureEdgeSim methods in terms of success rate and load balancing. For example, in the case in which 30 Edge devices generated tasks, GNN and DQN achieved improvements of 22.1% and 23.8%, respectively, over Trade-Off when the Edge Data Centers were not included as potential destinations. Overall, GNN achieved an average improvement of 3.6% over Trade-Off and Round Robin, and DQN an average improvement of 4.1%. In other works, such as [53], average improvements of 20.48%, 16.28%, and 12.36% were achieved with respect to random offloading, higher-data-rate (HDR) offloading, and the highest-computing-device (HCD) policy, respectively. In [51], a 20% reduction in total computation delay and a 25% reduction in average computation delay were achieved compared to the GK-means DQN-based offloading policy. In our case, the baseline offloading policies analyzed already offered better results, since they behaved decently in most cases. Still, our methods significantly improved the success rates of the mentioned algorithms, offered quite similar energy consumption, and differed mainly in the distribution of tasks across the different layers. Our network environments and experimental setup are completely different from those used in the works just mentioned, so a direct comparison between the different works cannot be made. The distribution of tasks differed between our two algorithms, with a larger number of tasks being offloaded to the Edge Data Centers when DQN was applied. This resulted in a slight improvement in the success rate due to the greater capabilities of this type of computing center. Between the two algorithms, the best results were offered by DQN, with a slight margin. The ability to obtain the optimal policy increased when the number of Edge devices and, consequently, the number of generated tasks was larger. The same was true for GNN: with more nodes and a broader network structure, the algorithm was able to reach a near-optimal offloading decision.

These algorithms could be a useful tool for providing proper orchestration in an environment where many IoT devices are asked to solve complex tasks and the characteristics of the environment are constantly updated. For example, in a Smart Building, several sensors can be deployed that read different parameters and have to react by activating other systems based on the readings they obtain. Deciding which action to take may require ML or DL techniques to find the optimal action. In this situation, these small devices could alleviate the computational burden of these deep models by offloading them to other, more powerful devices such as Edge Data Centers.

In this research, GNN and DQN were introduced into the task offloading paradigm in a local network environment involving IoT, Edge, and Cloud layers. The similarity between the architecture of a graph and that of a local network involving the aforementioned devices favored the use of GNNs to satisfactorily solve the task offloading paradigm. The offloading rating used as an edge feature in this study is a good predictor of how good a potential target can be at accomplishing a task. Furthermore, the use of DQN slightly improved the results obtained with GNN; its learning process favors the consideration of the constant updates of the environment. The novelty offered by the proposed methodology in a local network environment is the consideration of constant network updates and the scoring of network connections using the novel offloading rating parameter, with both GNNs and DQN being powerful tools for imposing an optimized offloading strategy in an environment made up of resource-constrained devices.

Among the limitations found during this research work, it is worth highlighting the difficulties in reproducing other methodologies from the literature using the PureEdgeSim simulator. The complexity of the simulator was an advantage in adjusting the properties of the network environment to our needs; on the other hand, reproducing any other algorithm is highly complex. At the same time, other environmental factors that can directly impact the state of the network, such as vandalism, natural disasters, or intrusion attacks, should be considered in the simulation by applying a random occurrence factor to them.

In future work, more algorithms can be implemented in the simulator to compare them with those presented in this study. Until now, only the default implementable algorithms of the simulator in question were tried and tested against our methods, and due to the complexity of the simulator, no others were implemented. Other types of network structures could be interesting for research and applicable using the methodology proposed in this work. Furthermore, combining RL techniques with the graph will open up an exciting research area.

Acknowledgement: The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper. This work is partially supported by the project Optimization of Deep Learning algorithms for Edge IoT devices for sensorization and control in Buildings and Infrastructures (EMBED), funded by the Gipuzkoa Provincial Council and approved under the 2023 call of the Guipuzcoan Network of Science, Technology and Innovation Program with File Number 2023-CIEN-000051-01.

Funding Statement: This work has received funding from TECNALIA, Basque Research and Technology Alliance (BRTA). This work is partially supported by the project Optimization of Deep Learning algorithms for Edge IoT devices for sensorization and control in Buildings and Infrastructures (EMBED), funded by the Gipuzkoa Provincial Council and approved under the 2023 call of the Guipuzcoan Network of Science, Technology and Innovation Program with File Number 2023-CIEN-000051-01.

Author Contributions: The authors confirm their contribution to the paper as follows: study conception and design: A. Garmendia-Orbegozo, J.D. Nunez-Gonzalez and M.A. Anton; data collection: A. Garmendia-Orbegozo, J.D. Nunez-Gonzalez and M.A. Anton; analysis and interpretation of results: A. Garmendia-Orbegozo, J.D. Nunez-Gonzalez and M.A. Anton; draft manuscript preparation: A. Garmendia-Orbegozo, J.D. Nunez-Gonzalez and M.A. Anton. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: All data used in the experimental process was generated using the PureEdgeSim simulator and is fully reproducible following the instructions in Section 4.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
