
    Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications in Software-Defined IoT

    Computers, Materials & Continua, 2022, Issue 5

    Prohim Tam, Sa Math, Ahyoung Lee and Seokhoon Kim3,*

    1 Department of Software Convergence, Soonchunhyang University, Asan, 31538, Korea

    2 Department of Computer Science, Kennesaw State University, Marietta, GA 30060, USA

    3 Department of Computer Software Engineering, Soonchunhyang University, Asan, 31538, Korea

    Abstract: Federated learning (FL) activates distributed on-device computation techniques to improve model performance through the interaction of local model updates and global model distribution in aggregation averaging processes. However, in large-scale heterogeneous Internet of Things (IoT) cellular networks, massive multi-dimensional model update iterations and resource-constrained computation remain significant challenges. This paper introduces a system model that converges software-defined networking (SDN) and network functions virtualization (NFV) to enable device/resource abstractions and provide NFV-enabled edge FL (eFL) aggregation servers for advancing automation and controllability. Multi-agent deep Q-networks (MADQNs) are targeted to enforce self-learning softwarization, optimize resource allocation policies, and advocate computation offloading decisions. With gathered network conditions and resource states, the proposed agent aims to explore various actions for estimating the expected long-term reward in a particular state observation. In the exploration phase, optimal actions for joint resource allocation and offloading decisions in different possible states are obtained by maximum Q-value selection. An action-based virtual network function (VNF) forwarding graph (VNFFG) is orchestrated to map VNFs towards eFL aggregation servers with sufficient communication and computation resources in the NFV infrastructure (NFVI). The proposed scheme identifies deficient allocation actions, modifies the VNF backup instances, and reallocates the virtual resources for the exploitation phase. A deep neural network (DNN) is used as a value function approximator, and an epsilon-greedy algorithm balances exploration and exploitation. The scheme primarily considers the criticalities of FL model services and congestion states to optimize the long-term policy. Simulation results show that the proposed scheme outperforms reference schemes in terms of Quality of Service (QoS) performance metrics, including packet drop ratio, packet drop counts, packet delivery ratio, delay, and throughput.

    Keywords: Deep Q-networks; federated learning; network functions virtualization; quality of service; software-defined networking

    1 Introduction

    The fast-growing deployment of the Internet of Things (IoT) in cellular networks has exponentially increased massive data volumes and heterogeneous service types with the requirement of ultra-reliable low-latency communication (URLLC). The International Data Corporation (IDC) forecasts that data generated from 41.6 billion IoT devices will reach 79.4 ZB by 2025, which requires big data orchestration and network automation to be intelligent and adequate in future scenarios [1,2]. To control abundant IoT taxonomies and provide sufficient resources, machine learning and deep learning algorithms have been applied to develop smart solutions in edge intelligence for various service purposes by gathering local data for model training and testing [3,4]. Meanwhile, because IoT deployment has grown rapidly in privacy-sensitive sectors such as the Internet of Healthcare Things (IoHT), Internet of Vehicles (IoV), and Internet of People (IoP), the use of local raw data has to be user-consented and legally authorized before being transmitted to the central cloud [5,6]. Given these challenges, an intelligent provisioning scheme needs to consider the security of local data privacy, communication reliability, and adequate computation resources.

    Federated learning (FL) secures local data privacy, reduces communication costs, and provides a latency-efficient approach by distributing the global model selection and primary hyperparameters, denoted as $W_G^0$, from a central parameter server to $k$ local clients for local model computation [7,8]. Over $t$ iterations, $W_G^t$ obtains the optimal model performance by aggregation averaging of multi-dimensional local model updates in a single parameter server. However, over numerous iterations, the client and parameter server communications generate heavy traffic congestion and unreliable processes, particularly in peak-hour intervals. Edge FL (eFL) partitions the iterations of round communications into two preeminent steps: (1) the local models $w_k^n$ on $n$ data batches from the selected $k$ participants are aggregated at an optimally selected edge server, and (2) global communications are orchestrated to transmit between edge servers and a central parameter server at an appropriate interval [9–11]. This technique reduces cloud-centric communications and improves learning precision. Therefore, a system model that offers edge aggregation servers based on specific service-learning model criticalities is applicable to enhance resource-constrained IoT environments.
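    As a minimal illustration of the aggregation-averaging step described above, the following Python sketch applies a standard FedAvg-style weighted average of local model updates; the function name and the weighting by local batch size are assumptions for illustration, not the paper's implementation.

        import numpy as np

        def federated_average(local_models, num_samples):
            """Weighted aggregation averaging of local model updates (standard FedAvg-style rule).

            local_models : list of per-client weight lists (one np.ndarray per layer)
            num_samples  : list of local data batch sizes n_k used as aggregation weights
            """
            total = sum(num_samples)
            global_model = []
            for layer_idx in range(len(local_models[0])):
                # Weighted sum of each client's layer weights, scaled by n_k / total
                layer = sum((n_k / total) * client[layer_idx]
                            for client, n_k in zip(local_models, num_samples))
                global_model.append(layer)
            return global_model

        # Usage: two clients, each holding a single 2x2 weight matrix
        clients = [[np.ones((2, 2))], [3 * np.ones((2, 2))]]
        print(federated_average(clients, num_samples=[100, 300]))  # every entry becomes 2.5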

    Multi-access edge computing (MEC) leverages the computation power and storage capacity of the central cloud to provide a latency-efficient system, adequate Quality of Service (QoS) performance, and additional serving resources in edge networks [12,13]. 5G radio access networks (RAN) support stable connectivity and adaptability between massive users and MEC entities for driving big data communication traffic with the deployment of millimeter-wave (mmWave), multiple-input and multiple-output (MIMO) antennas, device-to-device (D2D) communication, and radio resource management (RRM) functions. Moreover, to extend a global view of network environments and efficiently control heterogeneous MEC entities, software-defined networking (SDN) has been adopted. An adaptive transmission architecture in IoT networks is advanced by a joint SDN and MEC federation to enable intelligent edge optimization for low-deadline optimal path selection [14]. SDN separates the data plane (DP) and control plane (CP) to enable programmable functions, which adequately control the policies, flow tables, and actions on domain resource management within the RAN, core side, network functions virtualization (NFV), and MEC [15,16]. The convergence of MEC, SDN, and NFV enables networking application programming interfaces (API), sufficient resource pools, flexible orchestration, and programmability for logically enabling resource-sharing virtualization in an adaptive approach. To optimally allocate resources and recommend offloading decisions within the NFV infrastructure (NFVI)-MEC, an intelligent agent or deep reinforcement learning approach can be applied as an enabler for network automation (eNA) in order to interact with particular IoT device statuses, resource utilization, and network congestion states.

    Deep Q-networks (DQN) have notably been used for addressing resource allocation and computation offloading problems in massive IoT networks [17]. There are three main procedures to construct a DQN-based model: the epsilon-greedy strategy, a deep neural network (DNN) function approximator, and the Q-learning algorithm based on the Bellman equation for handling the Markov decision process (MDP) problem. $S$, $A$, $R$, and $\gamma$ represent the batch of potential states, actions, rewards, and the discount factor for future rewards, respectively [18,19]. In the initial step $t$, the agent explores by the epsilon-greedy method and randomly selects an action $a_t$ for sampling the reward $r_t(s_t, a_t)$ in order to further calculate the Q-value of that particular $s_t$. At time $t+1$, the environment feeds back the next state observation $s_{t+1}$ based on the transition $p(\cdot|s_t, a_t)$. This exploration strategy iteratively executes until the optimal Q-value and policy are defined. An algorithm design based on reinforcement principles feasibly observes scheduler states and explores rule actions to propose scheduling rules for adaptive resource management and to enable a QoS provisioning scheme. Moreover, a model-free multi-agent approach feasibly tackles the heterogeneity of the core backbone network for efficient traffic control and channel reassignment in SDN-based IoT networks [20].
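    A minimal Python sketch of the two mechanics just described, epsilon-greedy action selection and the Bellman-style temporal-difference target; the toy table sizes and state indexing are illustrative assumptions, not the paper's agent.

        import numpy as np

        rng = np.random.default_rng(0)

        def epsilon_greedy(q_table, state, epsilon, n_actions):
            """With probability epsilon explore a random action, otherwise exploit argmax Q."""
            if rng.random() < epsilon:
                return int(rng.integers(n_actions))      # exploration
            return int(np.argmax(q_table[state]))        # exploitation

        def td_target(reward, next_state, q_table, gamma):
            """Bellman-style target: immediate reward plus discounted best next-state Q-value."""
            return reward + gamma * np.max(q_table[next_state])

        # Usage with a toy 4-state / 3-action table
        q_table = np.zeros((4, 3))
        a = epsilon_greedy(q_table, state=0, epsilon=0.5, n_actions=3)
        y = td_target(reward=-0.5, next_state=1, q_table=q_table, gamma=0.95)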

    1.1 Paper Contributions

    In this paper, the proposed system architecture is adopted to deploy multi-controller placement in an NFV architecture for observing various state abstractions. Multi-agent DQNs (MADQNs) explore actions on resource placement and computation decisions for offloading $w_k^n$ towards an appropriate eFL aggregation server. A centralized controller abstracts IoT device and resource statuses to gather state spaces for the proposed adaptive resource allocation agent (PARAA). Decentralized controllers acting as a virtualized infrastructure manager (VIM) and VNFs are presented to abstract NFVI states for the proposed intelligent computation offloading agent (PICOA). MADQNs obtain the maximum future long-term reward expectation of joint state spaces by using the Q-value function and a DNN approximator. The optimal policy is defined before the exploitation phase and obtained by the centralized controller. The proposed scheme extends MADQNs by rendering a virtual network function (VNF) forwarding graph (VNFFG) and upgrading deficient allocation actions to approach sufficient serving MEC resource pools. The proposed controller updates the forwarding rules reactively for long-term sufficiency. An experimental simulation is conducted to illustrate the performance of the proposed scheme. The custom environment and DQN agent were developed using the OpenAI Gym library, TensorFlow, Keras, and the concept of the Bellman equation. To evaluate the QoS metrics in SDN/NFV aspects, Mininet and the RYU SDN controller are used. In NFV management and orchestration (MANO), the mini-nfv framework is applied on top of Mininet to develop the descriptors using the TOSCA NFV template. Finally, a simulation on 5G new radio (NR) networks is conducted to present an end-to-end (E2E) perspective by using ns-3, a discrete-event network simulator.

    1.2 Paper Organizations

    The rest of the paper is organized as follows. The system models, including the architectural framework and preliminaries of the proposed MADQNs components, are presented in Section 2. The proposed approach is thoroughly described in Section 3. In Section 4, the simulation setup, performance metrics, reference schemes, and result discussions are shown. Section 5 presents the final conclusion.

    2 System Models

    2.1 Architectural Framework

    In the system architecture, the SDN CP allows a programmable DQN-based mechanism to observe the states of the network environment via the OpenFlow (OF) protocol in the southbound interface (SBI), which allows the cluster head to contribute significant roles in collecting IoT node data and resource utilization [21]. The proposed SDN/NFV-enabled architecture for supporting MADQNs programmability and offering multiple eFL servers within the NFVI-MEC environment is shown in Fig. 1. In the proposed system architecture, the centralized SDN controller communicates with the NFV-MANO layer for management functions in the VNF manager (VNFM) and VIM through orchestration interfaces [22]. The Ve-Vnfm interface interacts between the SDN controller (as VNFs) and the VNFM for operating the lifecycle of network services and resource management. The Nf-Vi interface allows the controllability of NFVI resource pools for the central SDN controller acting as a VIM [23]. To activate connectivity services between virtual machines (VM) and VNFs, the Vn-Nf logical interface is used in the proposed architecture to adjust the virtual storage and computing resources based on VNF mapping orchestration.

    To configure resource allocation based on the optimal PARAA policy, decentralized SDN controllers acting as a VIM and VNFs are proposed in this scheme to formulate the parameterization of action-based VNFFG rendering for the service function chaining (SFC) management system. The proposed MANO manages the VNF placement with the appropriate element management system (EMS), virtual deployment unit (VDU), and VM capabilities based on the allocation policy in particular congestion state spaces. A resource-constrained state observation gives the agents a prior for adjusting the backup instances with model service prioritization. After the resources are adjusted, PICOA computes the policy to advocate an eFL server for local model aggregation offloading.

    Within the multi-controller setup, the flow entry installation process is configured reactively in the centralized entity. Each cluster head is commanded by the OF protocol with a flow rule installation. Although proactive mode allows each OF-enabled switch to set up flow rules internally, the proposed agent controller prioritizes reactive rule installation to ensure the proposed central policy configuration. The agent controller checks the packet flow against all the global tables and updates counters for instruction set executions. In our proposed scheme, the flow priority, hard timeout, and idle timeout are measured by the remaining MEC resources, time intervals, and criticalities of FL model services. However, if there is no match within the global tables, the agent controller executes the add-flow method based on the particular state-action approximation to accordingly append the datapath ID, match details, actions, priority, and buffer ID. With different dimensional features and scale values, the SDN database entity is expected to handle the storage and preprocessing phases. For the proposed agent model, the data required from the SDN database are the uplink/downlink resource adjustment statuses, the resources of eFL MEC nodes, and the default core resource utilization system. With these features, the agent feasibly acquires the state observation spaces for sampling and exploring the potential actions.
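    As a hedged illustration of the reactive add-flow step described above (appending datapath, match, actions, priority, buffer ID, and timeouts from the controller), the sketch below follows the conventional RYU OpenFlow 1.3 pattern; the match fields, priority, and timeout values are hypothetical, not the paper's exact configuration.

        from ryu.base import app_manager
        from ryu.controller import ofp_event
        from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
        from ryu.ofproto import ofproto_v1_3

        class ReactiveFlowInstaller(app_manager.RyuApp):
            OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

            def add_flow(self, datapath, priority, match, actions,
                         idle_timeout=0, hard_timeout=0, buffer_id=None):
                """Install a flow entry with datapath, match, actions, priority, and buffer id."""
                parser = datapath.ofproto_parser
                ofproto = datapath.ofproto
                inst = [parser.OFPInstructionActions(ofproto.OFPIT_APPLY_ACTIONS, actions)]
                kwargs = dict(datapath=datapath, priority=priority, match=match,
                              instructions=inst, idle_timeout=idle_timeout,
                              hard_timeout=hard_timeout)
                if buffer_id is not None:
                    kwargs['buffer_id'] = buffer_id
                datapath.send_msg(parser.OFPFlowMod(**kwargs))

            @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
            def packet_in_handler(self, ev):
                """Reactive path: a table miss raised a packet-in, so install a new rule."""
                msg = ev.msg
                datapath = msg.datapath
                parser = datapath.ofproto_parser
                ofproto = datapath.ofproto
                # Hypothetical match and priority derived from the agent's state-action output
                match = parser.OFPMatch(in_port=msg.match['in_port'])
                actions = [parser.OFPActionOutput(ofproto.OFPP_NORMAL)]
                self.add_flow(datapath, priority=10, match=match, actions=actions,
                              idle_timeout=30, hard_timeout=120, buffer_id=msg.buffer_id)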

    2.2 Proposed DQN Components

    In this context, the main components of MADQNs consist of the state, action, reward, and transition probability. For the hyperparameters, the values are optimized by standard parameterization for controlling the behavior of the learning model, such as the learning rate $\alpha$, discount factor $\gamma$, epsilon $\varepsilon$, and mini-batch size $q_m$. In software-defined IoT networks, the local, distributed, and centralized resources for communication and computation are complex to measure thoroughly. Moreover, the observation and discrete values are challenging to capture. Therefore, each element is assigned on percentile scales.

    Figure 1: The system architecture for the MADQNs approach and virtualization of eFL servers

    State: in the MADQNs environment, the state spaces comprise two main observations for PARAA and PICOA. For PARAA, the state consists of control statuses and a global functional view, including the extant maximum and minimum resources, denoted as $res_{max}$ and $res_{min}$, respectively. For PICOA, the state spaces are abstracted by the decentralized controllers, including the maximum capacity of eFL node $i$, the cost of placing VNF $m$ at eFL node $i$, and the computation cost of the local model $w_k^n$ at eFL node $i$, denoted as $res_i^{mec}$, $c_{vnf_m}^{i}$, and $cp_i^{w}$, respectively. The joint state observation contains two significant spaces, namely the uplink/downlink resource statuses and the resource increment/decrement discrete adjustment in the default resource utilization system, denoted as $res_c$ and $res_{pace}$, respectively. Eqs. (1) and (2) present the expressions of the state spaces. The increment/decrement level is indicated according to the positive and negative weights, denoted as $\omega^+$ and $\omega^-$, of a particular peak/off-peak network congestion. Based on the experience replay, optimal resource targets are defined.

    Action: in this environment, the batch of potential actions refers to the resource updates and SFC, which are collectively mapped by VNFFG parameterization towards virtual MEC resource pools in the NFVI entity. Numerically, the action space $a$ specifies the discretization operation scale of increment, decrement, and static, denoted as $A_{PARAA} \in \{0, 1, 2\}$. The percentage of applied action values is set within a restrained allocation step to reach an optimal balance of downlink and uplink capacities. The scheme forecasts the computing and storage resources for processing VNF $m$ and the allocation index at serving eFL node $i$, denoted as $(cp_m, sr_m)$ and $a_m^i$, respectively. In the proposed system architecture, the $i$ eFL aggregation server decisions are provided in PICOA as $A_{PICOA} \in \{1, 2, \ldots, i\}$ by evaluating the task execution efficiency.

    Reward: the intermediate reward at a particular time $t$, denoted as $r_t(s_t, a_t)$, is maximized when the agent reaches the optimal resource allocation $res_{oe}$, which is adaptable based on three essential conditions: the transmission intervals, the experienced Q-value, and the remaining resource percentile. Moreover, in IoT peak-hour congestion, the resource increment requires extra serving available resources in virtual computational blocks, denoted as $res_{xt}$. The output of the computational capabilities from the selected action towards the virtual MEC resource pools in each VNF is the main component for model aggregation completion. The reward considers the number of VNF requests and the computational costs of each VNF in the particular selected VNFFG rendering. The output of the reward determines the efficiency of resource allocation and eFL server selection from the actions of the PARAA and PICOA agents in a defined state.

    Transition Probability: different policies determine distinct transition steps for sampling the next state observation. In the early stage, the randomness of the transition policy allows the agents to explore actions without specified probabilities. However, once the exploration strategy reaches the optimal goal of resource allocation rewards, the epsilon-greedy policy executes the transition, denoted as $p(\cdot|s_t, a_t)$, by performing the exploitation strategy that follows the given action in its state pair. Thereafter, when the agent receives the next state space $s_{t+1}$ from the environment feedback, the agent checks the variation and diversity in the experience pools to enforce a particular action for that state space.
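    To make these components concrete, the short sketch below declares the two discrete action sets and a percentile-scaled joint state observation with OpenAI Gym spaces; the feature count and bounds are illustrative assumptions rather than the exact encoding behind Eqs. (1) and (2).

        import numpy as np
        from gym import spaces

        NUM_EFL_NODES = 4        # i candidate eFL aggregation servers (4 in the later simulation)
        STATE_FEATURES = 5       # e.g., res_c, res_pace, res_i^mec, c_vnf, cp_i^w (illustrative)

        # PARAA: increment / decrement / static allocation step, A_PARAA in {0, 1, 2}
        action_space_paraa = spaces.Discrete(3)

        # PICOA: choice of serving eFL node, A_PICOA in {1, ..., i} (0-indexed here)
        action_space_picoa = spaces.Discrete(NUM_EFL_NODES)

        # Joint state observation expressed on percentile scales (0-100)
        observation_space = spaces.Box(low=0.0, high=100.0,
                                       shape=(STATE_FEATURES,), dtype=np.float32)

        sample_state = observation_space.sample()   # one random percentile-scaled observation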

    3 Multi-Agent Deep Q-Networks for Efficient Edge Federated Learning Communications

    To describe the MADQNs softwarization framework with the proposed controllers towards virtual resource allocation and eFL aggregation server selection, this section presents two primary aspects of the proposed scheme: the algorithm flow for the multi-agent model in NFVI-MEC and the self-organizing agent controllers for collaborative updates in NFV-enabled eFL.

    3.1 Algorithm Flow for MADQNs in Proposed Environment

    To optimize the policy of the model, the Q-table and DNN compute in parallel to support the trade-off between time criticality and precision. However, the DNN acts as a central control and is structured as the prime approximator. Each potential state-action pair has a Q-value that accumulates in both the Q-table and the approximated DNN output layer after the exploration strategy. With a feedforward network, numerous weight initializations, neurons, and multiple layers of perceptrons, the Q-value decision-making is more accurate, yet the execution time is simultaneously high. To optimize a policy for a long-term self-learning environment, the randomness in the exploration processes of the networking environment has to be handled. The hyperparameters are required to be well-assigned and related to the fine-grained scenario. The optimal policy for the exploitation strategy, as the end goal, is denoted as $\pi^*$ and is further expressed in Eqs. (3)–(6). Each policy interprets the agent and observation differently based on the value function and Q-value function with a distinct transition probability $p$. The required parameters consist of the beginning state resource conditions $s_0$, the current state $s_t$, the next state $s_{t+1}$ observation of criticality intervals, and the sample action $a_t$. Subsequently, the working process of the MADQNs elements in the NFVI-MEC environment is described in three major functional phases: the value and Q-value functions, the function approximator, and the experience replay.

    The value function is computed for policy transformation and a low-dimensional perspective to obtain the value of state $s$ and create sample paths. It is significant for identifying the resource condition at a particular time. To differentiate between each random exploration policy, the cumulative reward is the key value for maximizing the expectation. The value function captures a vector of rewards for following a particular policy $\pi$ to evaluate the performance of an agent by defining the expected future rewards. The value function, denoted as $V^{\pi}(s)$, of an input state observation $s$ under a policy $\pi$ is used for returning the expected outcome following the MDP. In our proposed environment, the value function is executed in the exploitation strategy following the policy. The Q-value function can be expressed in order to adapt to a specific computation state, which follows the Bellman equation to label the Q-value for state-action pairs. Towards the optimal Q-value $Q^*(s_t, a_t)$, the formulation with the proposed state observations and action spaces in our setup environment is presented in Eq. (7). The expected main requirements are the reward of that particular state-action pair and the value of the next state that the environment ends up in. The expectation $\mathbb{E}_{s' \sim \varepsilon}$ expresses the randomness of the next state $s'$ observation. To solve for the optimal policy, the iterative update has to be executed. With a known optimal policy, the best action will be chosen at state $s'$ to maximize the Q-value. However, this process is only supported in a short-period networking simulation process, not in long-term sustainability and iterative execution; therefore, the function approximator takes its place.
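    In its standard form, the Bellman optimality relation described here is

    $$Q^{*}(s_t, a_t) = \mathbb{E}_{s' \sim \varepsilon}\left[\, r_t(s_t, a_t) + \gamma \max_{a'} Q^{*}(s', a') \,\middle|\, s_t, a_t \right],$$

    where the expectation runs over the randomness of the next state $s'$ and $\gamma$ discounts the best achievable next-state Q-value.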

    The DNN estimates the function $Q(s, a; \theta, b)$ based on the biases $b$ and weights $\theta$ on each neuron connector between perceptrons, which are equivalent to peak-hour and off-peak-hour intervals in networking priority. The input processing of each possible state observation from time 0 to the initial time $t$, including the resource conditions of the network environment, towards an optimal action-value selection is based on the congestion status, and the weights and biases in one sample perceptron are adjusted. The rectified linear unit (ReLU) activation function is used to transform the sum of each connector weight and bias interval from the input until the output layer. Algorithm 1 presents the MADQNs flow towards the optimal resource allocation policy and eFL selection in the proposed framework. Agent execution starts with hyperparameter and parameter initialization. The total reward container per episode, the particular reward in each episode, the number of episodes, the discrete state spaces, the Q-table, the starting epsilon value, the final epsilon value, and the particular epsilon-decaying value are denoted as $re$, $er$, $num_e$, $s_{discrete}$, $q_{table}$, $\varepsilon_s$, $\varepsilon_e$, and $\varepsilon^-$, respectively. The scheme targets the gathered state observations for applying agent learning. The joint allocation and computation costs are considered for calculating the expected rewards, including the number of VNFs to attain the eFL node $i$ decision, denoted as $n_{vnf}^{i}$. To detail the architectural stack of the applied DNN, a TensorFlow-based implementation is designed to reach the optimal model with accurate parameter estimation. The experience replay, denoted as $e_t = (s_t, a_t, r_t, s_{t+1})$, feeds the online and target networks for choosing the actions and approximating the Q-values. Since there are numerous possible continuous networking states, the discrete state observations are used as input to the first layer as a mini-batch of resource conditions for approximating an optimal increment/decrement between uplink and downlink communications. If $\omega^+$ is high, the resource utilization system is also increasingly enlarged to solve the bottleneck issues. Two dense layers with ReLU are applied to analyze the state differentiation and the suitable actions to maximize the upcoming reward. For the output layer with linear activation, the action Q-value is triggered based on the dense layer conditions. If the gradient update evaluates an unsatisfactory precision, the model is reprocessed. Once the model is accepted, the compiling process is executed with the Adam optimizer and the mean squared error (MSE) metric.

    Algorithm 1: Pseudocode for the proposed MADQNs towards optimal action selection
    Require: s = [res_i_mec, c_vnf_i_j, cp_i_w, res_c, res_pace], res_min, res_max, A, res_oe, res_xt, ω−, ω+
    Ensure: optimal actions on allocation policies and eFL server selection from each episode for orchestrating NFVI-MEC resource pools
    1:  Initialize re, γ, α, num_e, q_m, s_discrete, q_table, ε, ε_s, ε_e, ε−
    2:  for each episode in range(num_e) do
    3:      Initialize each episode reward er
    4:      Transform the state s to s_discrete
    5:      while true do
    6:          if random() > ε then
    7:              Agent selects action a by a = argmax(q_table[s_discrete]) / DNN
    8:          else
    9:              Agent selects random action a
    10:         end if
    11:         Calculate reward r based on computation and placement costs
    12:         Perform selected action a, then enter next-state s′ by p(.|s, a)
    13:         Add the defined reward r to the initialized episode reward er
    14:         Transform the next-state s′ to the clustering chunk s′_discrete
    15:         if the next-state resource res_c has a stable and optimal allocation status then
    16:             Input the maximum q-value for the state-action pair
    17:         else
    18:             Initialize the future maximum q-value by max(q_table[s′_discrete])
    19:             Initialize the current q-value Q_c for the state-action pair
    20:             Q_NEW ← (1 − α)·Q_c + α·(Q*(s, a) − Q_c) (see Eq. (7))
    21:             Input the new q-value Q_NEW for the state-action pair
    22:         end if
    23:         Update s_discrete to s′_discrete
    24:     end while
    25:     if ε_e ≥ episode ≥ ε_s then
    26:         ε −= ε−
    27:     end if
    28:     re.append(er)
    29: end for
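    A compact Keras sketch of the TensorFlow-based approximator stack described above, two dense layers with ReLU, a linear output over the action Q-values, and compilation with the Adam optimizer and MSE loss; the layer widths, input dimension, and optimizer step size are illustrative assumptions, not the paper's reported architecture.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        def build_q_network(state_dim=5, n_actions=3, hidden_units=24):
            """DNN function approximator Q(s, a; theta, b): state in, one Q-value per action out."""
            model = models.Sequential([
                layers.Input(shape=(state_dim,)),
                layers.Dense(hidden_units, activation='relu'),   # first dense layer with ReLU
                layers.Dense(hidden_units, activation='relu'),   # second dense layer with ReLU
                layers.Dense(n_actions, activation='linear'),    # linear output: Q-value per action
            ])
            model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                          loss='mse')                             # mean squared error on TD targets
            return model

        online_net = build_q_network()
        target_net = build_q_network()
        target_net.set_weights(online_net.get_weights())          # target network mirrors online net

    The online/target pair reflects the experience-replay setup mentioned above, where e_t = (s_t, a_t, r_t, s_{t+1}) feeds both networks for action selection and Q-value approximation.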

    By applying the proposed MADQNs model, the average reward aggregation for the resource allocation environment is obtained. The average reward output of the optimal resource allocation is steady for most of the episodes but retains some downward marks towards limited resource utilization, which leads to unstable management. The steady and unsteady state-action pairs are detected and need to be significantly enhanced to avoid high packet drop scenarios in heterogeneous local model update communications.

    3.2 Self-Organizing Agent Controllers for Optimal Edge Aggregation Decisions

    The implicit algorithm flow is proposed to handle the instability of the MADQNs model in the NFVI-MEC environment by leveraging the capabilities of the agent controllers and orchestrator. The proposed method installs flow rules for each IoT cluster head with the adjustment of uplink/downlink resource utilization priority. The orchestrator configures the VNFFG descriptors following the resource allocation policy from Algorithm 1 towards eFL aggregation with optimal MEC resource pools. The proposed agent controller is required to orchestrate the flow entry tables of multiple IoT cluster heads by applying the convergence of the resource allocation policy and OF controller flow stats. Each state-action pair $(s, a)$ from the MADQNs-based model is transformed into a flow configuration pair $(s_f, a_f)$ by updating the uplink/downlink resource statuses and peak-hour/off-peak-hour intervals into the instruction sets and priority of the entries, respectively. Based on the priority and instruction sets of each traffic flow, the orchestrator gains the prior information to handle the virtual resource pool adjustment in the NFVI. Fig. 2 presents the state transition of MADQNs and the controller management within the SDN/NFV system. When the client updates the local model, the pipeline processing is performed. The integrated PARAA and PICOA algorithms optimize the resource allocation policies and eFL aggregation MEC selections as described in Algorithm 1. For the deficient actions inspected in the training phase, the proposed scheme adjusts the policy and appends sufficient virtual resource pools for optimizing the serving capacity of the selected eFL node, as described in Algorithm 2.
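    As an illustrative sketch of the (s, a) to (s_f, a_f) transformation described above, the mapping below derives the entry priority from the peak/off-peak interval and FL model criticality and packs the uplink/downlink statuses and allocation step into the flow fields; the field names, scaling, and timeout values are assumptions, not the paper's exact rule.

        def to_flow_config(state, action, peak_hour, remaining_mec_pct, fl_criticality):
            """Map a MADQNs state-action pair (s, a) to a flow configuration pair (s_f, a_f)."""
            # Higher priority for peak-hour, mission-critical FL model traffic (illustrative scaling)
            priority = 100 + (50 if peak_hour else 0) + int(10 * fl_criticality)
            s_f = {
                "uplink_status": state["uplink"],        # resource statuses feed the instruction sets
                "downlink_status": state["downlink"],
            }
            a_f = {
                "instructions": {"allocation_step": action},       # increment / decrement / static
                "priority": priority,
                "idle_timeout": max(10, int(remaining_mec_pct)),   # timeouts tied to remaining MEC resources
                "hard_timeout": 120 if peak_hour else 300,
            }
            return s_f, a_f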

    Figure 2: The state transition for the proposed controllers to install forwarding rules of local model updates

    4 Performance Evaluation

    4.1 Simulation Setup

    To prove the theoretical approach, this section describes the three main simulation environments, including the MADQNs model construction, the SDN/NFV control performance, and the 5G NR network experiment that captures the E2E QoS performance.

    Using the OpenAI Gym library [24], the environment setup requires four primary functions. The initialization (init) function declares the available characteristics of the state observations (see Eqs. (1) and (2)) in the setup environment. The init function explores $s_0$ within the epsilon-greedy random exploration. The allocation step function updates the new state environment, gives a reward, and completes the status after the agent controller performs any specific action. Finally, the reset function is used whenever the simulation is restarted, the network circumstances change, or a new episode starts. The goal of the MADQNs model is to interact with the setup environment and choose the optimal action for a specific networking state in order to optimize eFL offloading server decisions. To train and test the models, we used TensorFlow and Keras [25,26]. Fig. 3 presents the total average rewards per 100 episodes with three $\alpha$ settings: 0.01, 0.05, and 0.09. The rewards are output as negative numbers since the setup assigns a non-optimal reward of -0.5, which is cumulatively summed until the end of the episodes. In each episode, the $Q^*(s, a)$ values are gathered. In this environment setup, the optimal $\alpha$ is 0.09, which fluctuates around -128.3075.
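    A skeletal sketch of such a custom Gym environment with the init/step/reset structure and the -0.5 non-optimal reward mentioned above; the transition logic and the optimality test are placeholders, not the paper's environment.

        import numpy as np
        import gym
        from gym import spaces

        class EFLAllocationEnv(gym.Env):
            """Toy allocation environment: percentile-scaled state, discrete allocation actions."""

            def __init__(self, state_dim=5, n_actions=3):
                super().__init__()
                self.observation_space = spaces.Box(0.0, 100.0, shape=(state_dim,), dtype=np.float32)
                self.action_space = spaces.Discrete(n_actions)   # increment / decrement / static
                self.state = None

            def reset(self):
                """Restart an episode with a fresh random resource condition."""
                self.state = self.observation_space.sample()
                return self.state

            def step(self, action):
                """Apply an allocation action and return (next_state, reward, done, info)."""
                shift = np.zeros_like(self.state)
                shift[0] = {0: 5.0, 1: -5.0, 2: 0.0}[int(action)]   # placeholder transition
                self.state = np.clip(self.state + shift, 0.0, 100.0)
                optimal = 40.0 <= self.state[0] <= 60.0             # placeholder optimality band
                reward = 0.0 if optimal else -0.5                   # non-optimal reward of -0.5
                return self.state, reward, bool(optimal), {}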

    Algorithm 2: Proposed self-organizing agent controllers for optimizing eFL aggregation selection
    Require: PARAA and PICOA decisions based on optimal actions
    Ensure: optimal flow entry installation, resource orchestration, and eFL server selection
    1:  for each local model update in t iteration do
    2:      Transform the discrete state s_t to match with the flow criteria (s_t)_f
    3:      for each OFPT_PACKET_IN in range of cluster heads CH do
    4:          if no match found in local CH tables then
    5:              Apply optimal policies of PARAA and PICOA to bridge traffic through VNFs
    6:              Transform the optimal discrete action a_t to adapt with the flow stats (a_t)_f
    7:              Create the VNFFG for rendering to SFC, then install the flow entry for execution
    8:          else
    9:              Execute the instruction sets of the found flow entry
    10:         end if
    11:         for each selected eFL server in range(i) do
    12:             Perform edge aggregation for optimal w_t_i
    13:         end for [edge aggregation]
    14:     end for [OFPT_PACKET_OUT]
    15:     [Global Server]
    16:     Compute averaging aggregation on [w_t_1, w_t_2, ..., w_t_i] for global model W_G^{t+1}
    17: end for [t iteration]

    To capture the particular QoS performance metrics of the proposed controllers and NFV modules, mini-nfv on top of Mininet is used to create the data plane topology, VNF descriptors, and VNFFG descriptors. Mini-nfv supports an external SDN controller platform for experimentation. The forwarding rule installation is configured by FlowManager and a RYU-based platform [27–31]. The descriptors set the VDU and VM capabilities based on the selected actions from the optimal policy table. Each flow entry is configured following the forwarding graph. Fig. 4 presents the interaction of this convergence; however, the virtual links from a communication perspective are still restricted for explicit fine-grained performance.

    Figure 3: The total average rewards per 100 episodes within the MADQNs model construction

    Figure 4: The interaction of optimal policy outputs for SDN/NFV-based control entities

    A discrete-event network simulator, namely ns-3, is used in this environment to perform the E2E convergence [32–34]. The simulation was executed for 430 s, which was divided into 4 consecutive network congestion conditions to reflect the service-learning criticalities of FL communication reliability. In this setup, there are 4 eFL nodes, and the virtual extended network loading was configured between 0 and 250. Additionally, there are 4 remote radio heads (RRHs), and the user data rate is between 20 and 72 Mbps. The model updates rely on the network situation, and the congestion environment increases the loss probability between clients and aggregation servers. The congestion states lowered the model accuracy and reduced the global model reliability. The payload size was set to 1024 bytes, and the QoS class identifier (QCI) mechanism is set to the user datagram protocol (UDP). At the core side, the point-to-point (P2P) link bandwidth was configured to 9 Gb/s, and the buffer queuing discipline was operated by the random early detection (RED) queue algorithm. The default link delay of MEC was configured as 2 ms. The hyperparameters of MADQNs are configured in advance to conduct the experiments with maximized output expectations in terms of computation intensity and time constraints. The learning rate $\alpha$ is set to 0.09 in this environment. The $\gamma$, $num_e$, and $\varepsilon$ values are set to 0.95, 1000, and 0.5, respectively. The main hyperparameter and parameter configuration used in the overall simulation is shown in Tab. 1.
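    Collected as a single configuration block (values as stated above; the dictionary keys are illustrative names, not identifiers from the paper):

        # MADQNs hyperparameters reported in the text
        madqn_hyperparams = {
            "learning_rate_alpha": 0.09,    # optimal learning rate among 0.01, 0.05, 0.09
            "discount_factor_gamma": 0.95,
            "num_episodes": 1000,
            "epsilon_start": 0.5,
        }

        # ns-3 E2E simulation settings reported in the text
        ns3_setup = {
            "simulation_time_s": 430,       # split into 4 consecutive congestion conditions
            "num_efl_nodes": 4,
            "num_rrhs": 4,
            "user_data_rate_mbps": (20, 72),
            "payload_bytes": 1024,
            "p2p_core_bandwidth_gbps": 9,
            "mec_link_delay_ms": 2,
            "queue_discipline": "RED",
        }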

    Table 1: Simulation parameters

    4.2 Reference Schemes and Performance Metrics

    To illustrate the overall performance of the proposed and reference approaches, four different resource control and eFL selection policies were simulated. The resource pools represent the capacities extracted by the proposed actions of the model. Each scheme triggered different actions, which contained the VNFFG mapping to particular virtual resources. The reference schemes were simulated as control policies for IoT congestion scenarios, including maximal rate experienced-based eFL selection (MRES), single-agent DQN-control (SADQN), and MADQNs. The proposed scheme extends the PARAA and PICOA policies by enhancing the deficient actions as described in Algorithm 2.

    The QoS metrics used to evaluate the comparison between the reference and proposed approaches are presented as follows [35,36]. Delay specifies the latency of data communications from the sending node to the receiving node, including the propagation, queueing, transmission, and control latencies at the core system, as described in Eq. (8). In the network simulation architecture, $J = \{1, 2, \ldots, j\}$ denotes the number of queueing buffers.

    $TP$ refers to the communication throughput, which expresses the successful packet delivery over a given communication bandwidth $bw$ (see Eq. (9)). The total, propagation, control, and processing latencies of the queued $j$ entities are accounted for in these formulations.

    The packet drop ratio in the experimental simulation is the ratio between the total packets lost and the total packets successfully transmitted. The packet drop counts are illustrated to compare specifically within this particular experimental setup. The packet delivery ratio in the simulation environment is calculated by subtracting the packet drop ratio from the total ratio.
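    Written out under the definitions just given (a sketch of the metric formulas as the text states them, with illustrative symbols):

    $$R_{drop} = \frac{P_{lost}}{P_{delivered}}, \qquad R_{delivery} = R_{total} - R_{drop},$$

    where $P_{lost}$ and $P_{delivered}$ denote the total packets lost and successfully transmitted, and $R_{total}$ is the total ratio.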

    4.3 Results and Discussions

    The proposed agent output the offloading decisions of 142, 117, 371, and 370 local model updates toward the 4 eFL servers, respectively. In the SDN/NFV-enabled architecture, the primary consideration is the QoS metrics after installing and executing the forwarding rules [37,38]. The comparison between the proposed and reference schemes is shown in Fig. 5. Within 430 s of 4 consecutive network congestion conditions, the average control delay is 8.4723 ms, which was 28.2833, 25.6824, and 11.7175 ms lower than MRES, SADQN, and MADQNs, respectively.

    Figure 5: Comparison of average delay between the proposed and reference schemes in the SDN/NFV model

    In the E2E simulation, the emphasis on FL model reliability in real-time routing networks was considered. Fig. 6a depicts the average delays of E2E communications in the edge cloud systems. The data communication between the aggregation servers utilized IP network communications. The graph presents the comparisons between the proposed and reference methods with various possibilities of forwarding paths. The proposed scheme performed an average delay of 12.8948 ms, which was 64.3321, 150.9983, and 169.9983 ms lower than MADQNs, SADQN, and MRES, respectively. The proposed scheme distinguished the loading metrics of every possible serving MEC server. The predicted metrics represented the loading statuses of the MEC servers; therefore, the MEC with the lowest loading metric is considered the optimal server for serving incoming local model update requests. The $TP$ comparison is presented in Fig. 6b, which illustrates a notable outperformance over the other approaches. The proposed scheme, MADQNs, SADQN, and MRES reached an average throughput of 659.0801, 113.7167, 50.8434, and 47.2032 bps, respectively. The proposed scheme utilized the integrated multi-agent model to predict the optimal route with the lowest loading metrics for efficient eFL offloading. The average packet drop ratio of the proposed scheme reached 0.0284% within the 430 s simulation, which is 0.1068%, 0.1482%, and 0.1446% lower than MADQNs, SADQN, and MRES, respectively. In addition, the proposed, MADQNs, SADQN, and MRES schemes achieved close packet delivery ratios of 99.9965%, 99.9853%, 99.9501%, and 99.9384%, respectively. Figs. 6c and 6d show the graphical comparison of the packet drop ratio and packet delivery ratio, respectively. Moreover, within this particular simulation setup, the packet drop count of the proposed scheme reached a total of 1309 packets, which was 4083, 9746, and 10847 packets lower than MADQNs, SADQN, and MRES, respectively.

    Figure 6: Comparison of (a) E2E average delay, (b) throughput, (c) packet drop ratio, and (d) packet delivery ratio between the proposed and reference schemes from an E2E communication perspective

    MADQNs deployed the control policies of both deficient and efficient output episodes. The downlink and uplink transmissions are strongly congested under heavy multi-dimensional model updates, while multiple virtual MECs are offloaded and reallocated deficiently. To gain unoccupied resource pools for QoS assurance, the proposed scheme extended MADQNs and considered the optimal resource pools for high mission-critical FL model traffic, which covers the networking states with over-bottleneck peak-hour circumstances. While the extant communication and computation resources are used, the proposed controllers and orchestrator advance the positive weights $\omega^+$ to accelerate the serving resources from the NFVI. The conditional configuration and orchestration trigger a flexible serving backup instance capacity.

    In congested FL communication networks, the local model $w_k^n$ updates and the global model $W_G$ distributions have to pass through long queueing times before entering the ingress buffer of the routing or switching devices. During heavy network loading, the waiting time of the incoming packets can expire, and the packets are discarded before being forwarded to another network. Therefore, an eNA of optimal resource allocation and sufficient eFL aggregation server offloading is applicable for enhancing reliability. In the proposed scheme framework, the transmission of $w_k^n$ to eFL node $i$ was enhanced to aggregate reliable $w_i$ models based on the proposed PARAA and PICOA policies. The aggregation averaging procedures between the edge $w_i$ and the parameter server were executed in appropriate intervals or off-peak hours. Furthermore, the proposed approach is capable of alleviating the communication overhead for both computation and communication latency, since the proposed method determined the optimal network interface with the minimum cost for updating model parameters during congested situations. The proposed scheme considered the serving cost of the joint entities that are efficient for each serving path. The queuing systems of each SDN entity in the DP network and the VNF entities were handled separately. The computation overhead and queuing system in the CP were considered. Therefore, the proposed scheme avoided the data forwarding overhead from high computation intensity. In the proposed system, the SDN controller was scheduled for the optimal-path local model computation with adequate requests. Based on the comparisons, the proposed scheme significantly handled the routing congestion in FL communications in order to meet the criteria of the URLLC key performance indicators.

    5 Conclusion

    This paper proposed a multi-agent approach, including PARAA for optimizing virtual resource allocation and PICOA for recommending eFL aggregation server offloading, in order to meet the significance of URLLC for mission-critical IoT model services. An SDN/NFV-enabled architectural framework for controlling the proposed forwarding rules and virtual resource orchestration is adopted in software-defined IoT networks. The MADQNs model interacted with the gathered state observations and contributed a collection of exploration policies for sampling the allocation rules under the expansion of edge intelligence. To identify deficient policies, the proposed algorithms targeted weak episodes with low aggregated rewards under the optimal learning rate hyperparameter. The proposed agent controller outputs a setup of long-term self-organizing flow entries with sufficient computation and communication resource placement. The optimal actions are used to correspondingly configure the VNFFG descriptors and map towards adequate virtual MEC resource pools with four experimental congestion states. The simulation was conducted in three main aspects. Based on the validation, the proposed scheme contributed a promising approach for achieving efficient eFL communications in future massive IoT congestion states.

    Funding Statement: This work was funded by BK21 FOUR (Fostering Outstanding Universities for Research) (No. 5199990914048), and this research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2020R1I1A3066543). In addition, this work was supported by the Soonchunhyang University Research Fund.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
