
    Q-Learning Based Routing Protocol for Congestion Avoidance

Computers, Materials & Continua, 2021, Issue 9

Daniel Godfrey, Beom-Su Kim, Haoran Miao, Babar Shah, Bashir Hayat, Imran Khan, Tae-Eung Sung and Ki-Il Kim,*

1 Department of Computer Science and Engineering, Chungnam National University, Korea

2 College of Technological Innovation, Zayed University, Abu Dhabi, UAE

3 Institute of Management Sciences, Peshawar, Pakistan

4 Department of Electrical Engineering, University of Engineering and Technology, Peshawar, Pakistan

5 Department of Computer and Telecommunications Engineering, Yonsei University, Korea

Abstract: The end-to-end delay in a wired network depends strongly on congestion at intermediate nodes. Among the many feasible approaches to avoiding congestion efficiently, congestion-aware routing protocols tend to search for an uncongested path toward the destination through rule-based approaches in reactive/incident-driven and distributed methods. However, these previous approaches have difficulty accommodating changing network environments through autonomous and self-adaptive operation. To overcome this drawback, we present a new congestion-aware routing protocol based on a Q-learning algorithm in software-defined networks, where logically centralized network operation enables intelligent control and management of network resources. In the proposed routing protocol, either one of the uncongested neighboring nodes is randomly selected as the next hop to distribute the traffic load across multiple paths, or the Q-learning algorithm decides the next hop by modeling the state, Q-value, and reward function to set the desired path toward the destination. A new reward function consisting of buffer occupancy, link reliability, and hop count is considered. Moreover, a look-ahead algorithm is employed to update the Q-value with values within two hops simultaneously. This approach leads to a decision on the optimal next hop by taking the congestion status within two hops into account. Finally, the simulation results showed an approximately 20% higher packet delivery ratio and 15% shorter end-to-end delay compared with the existing scheme, by avoiding congestion adaptively.

Keywords: Congestion-aware routing; reinforcement learning; Q-learning; software-defined networks

    1 Introduction

Congestion in a network significantly increases the end-to-end delay. To prevent or remove congestion, many congestion control schemes have been proposed for current TCP/IP networks, where a strictly layered architecture and fully distributed algorithms are applied. Owing to these layered restrictions, most current algorithms, including congestion control, have been implemented at the transport layer for an end-to-end connection.

As an alternative to current TCP/IP networks, software-defined networks (SDN) have been studied to overcome the drawbacks of legacy networks (i.e., a lack of adaptability). In contrast to the typical combined model, the data and control planes in an SDN are separated and operated by logically centralizing the network intelligence and state. Therefore, it is feasible to run new centralized algorithms over an SDN without regard to a layered architecture. Based on this centralized functionality, intelligent algorithms such as machine learning can be applied gradually in an SDN to cope with growing complexity. Specifically, machine learning (ML) algorithms have been applied to improve network performance in the areas of network operation and management. ML techniques have recently been employed to deal with fundamental network problems, for example, traffic prediction, routing and classification, congestion control, resource and fault management, quality of service (QoS), quality-of-experience management, and network security [1-3].

Stemming from these observations, a new congestion-aware routing protocol for an SDN is presented herein. Unlike previous studies on end-to-end congestion control, our goal is to develop a routing protocol that manages congestion at the network layer. Thus, it is possible to control congestion in a hop-by-hop manner. In addition, it is highly feasible to implement this type of protocol in an SDN. The new routing protocol is designed to search for an uncongested path using Q-learning, a form of reinforcement learning. We present a model for a routing protocol with Q-learning properties, defined by the Q-value and reward function. With the Q-value and reward function, we can determine whether the next hop is a congested node. The reward function is characterized by new buffer occupancy, retransmission ratio, and hop count parameters. Finally, we evaluate the performance of the proposed routing protocol through simulations.

    The main contributions of this paper are as follows:

· An architecture that employs Q-learning for achieving efficient and intelligent congestion-aware routing in an SDN;

    · A Q-learning based routing algorithm that considers a look-ahead algorithm to compute the Q-value;

· An extensive set of experiments with simulations and an analysis of the proposed routing protocol.

The rest of this paper is organized as follows. Following the introduction, we describe previous state-of-the-art studies in this area. The proposed scheme is then explained and described, and the simulation results are presented. Finally, some concluding remarks and areas of future study are given.

    2 Related Studies

In this section, we describe related studies on congestion-aware routing protocols in three parts. First, routing protocols used in an SDN are presented. Second, ML-based routing protocols used in an SDN are analyzed. Third, congestion-aware routing protocols are detailed.

    2.1 Routing in SDN

Zhang et al. [4] addressed the performance measurement of routing protocols in an SDN in terms of forwarding delay and convergence time after a failure, as compared to legacy protocols. They experimented and concluded that an SDN is beneficial in large-scale networks. In addition, the impact of a link failure in an SDN is smaller than in legacy routing protocols. Thus, greater robustness against failures is achieved in an SDN by significantly reducing the convergence time. In terms of the performance evaluation of routing protocols in an SDN, Gopi et al. [5] focused on the convergence time to recover from a link or node failure with respect to the topology scale. Similar to the results of the former study, a shorter convergence time is measured when a large-scale topology is assumed. Akin et al. [6] compared routing protocols for an SDN with static and dynamic link costs by implementing them on the Mininet emulator. Incorporating a multi-criteria decision-making method (MCDM) in an SDN, Ali et al. [7] proposed a hierarchical SDN control plane approach for inter-domain collaboration and QoS class mapping to ensure end-to-end (E2E) quality of service for applications in heterogeneous networks with multiple domains of different QoS classes. In this study, the commonly used MCDM known as TOPSIS was applied at the controller module to select the most suitable QoS class for each domain in the E2E path. The findings of this study suggest that the use of a single controller with varying QoS classes could lead to a single point of failure and E2E service delivery issues. For all the cases, it has been shown that the performance of a routing protocol in an SDN depends mostly on the accuracy of the network state information. Based on the studies mentioned, it is reasonable to conclude that routing protocols in an SDN are more robust than conventional routing protocols while requiring more accurate network state information. In addition to performance evaluations, new routing protocols for an SDN have been studied continuously.

First, centralized QoS routing protocols for an SDN were analyzed and compared in [8]. In addition to describing their outstanding features, the authors employ a novel four-dimensional evaluation framework for QoS routing protocols for a quantitative comparison in terms of runtime and cost inefficiency. Despite the performance improvements of an SDN, the cost of replacing a legacy network with an SDN is a major concern. To address this problem, a new QoS routing protocol for SDN hybrid networks was proposed by Lin et al. [9]; their protocol, called simulated annealing based QoS-aware routing (SAQR), dynamically adjusts the weights of three QoS parameters, namely the delay, loss rate, and bandwidth, and achieves a delay performance improvement exceeding 20%.

Second, a number of studies have proposed routing protocols for specific SDN settings. Ji et al. [10] proposed an SDN-based geographic routing protocol for vehicular ad hoc networks. Unlike previous geographical routing protocols that use local information, the new protocol makes use of vehicle information, that is, the node location, vehicle density, and digital map, and computes the optimal path based on such information. In parallel with vehicular ad hoc networks, smart-city and IoT applications are regarded as suitable for SDN infrastructure. To reduce the delay in an SDN, EL-Garoui et al. [11] proposed a new SDN-based routing protocol that employs a machine learning algorithm as a prediction scheme. As for IoT, a new SDN-based routing scheme was proposed by Shafique et al. [12]. The proposed scheme targets the balance between the reconfiguration cost and flow allocation, assuming multiple SDN controllers. In addition, heterogeneous network traffic is monitored to keep the networks balanced. For a special type of network, a disturbance-aware routing algorithm [13] based on weather information has been proposed to minimize the network cost function as well as the cost of the risk function in an SDN. Each of the above-mentioned specific network types has its own approach to detecting and dealing with link failures. To discuss link failure and recovery schemes in SDN-based routing, Ali et al. [14] presented a survey that highlights various link failure detection and recovery schemes, mechanisms, and their respective weaknesses in an SDN. In addition, a well-organized classification of link failure recovery approaches was presented based on a review of 49 papers. To combat congestion-related link recovery issues in routing, proactive and reactive schemes were further introduced for both single- and multi-objective settings.

    2.2 ML-Based Routing in SDN

Differing from the traditional model-driven approach to routing protocols, ML-based routing protocols can capture the growing complexity and adapt to network changes accordingly. However, the management of large-scale data for ML has been a challenge in the current distributed infrastructure. This is why an SDN, based on a centralized entity, is a suitable architecture for operating ML algorithms.

Before looking into the details, it is worth mentioning a comprehensive overview [15] of machine learning in an SDN. In this study, the authors provide a survey of machine learning algorithms feasible for an SDN. Following an ML outline, the authors address the challenges and review related studies from several perspectives, including routing optimization. Open issues and challenges for ML in an SDN are also discussed. In addition to this survey, we categorize routing protocols based on the type of ML algorithm and present their key features.

First, reinforcement learning (RL) to optimize routing problems in an SDN is presented by Fang et al. [16]. The proposed RL model contributes to making decisions through interactions with the environment, and a combination of RL and neural networks is proposed for the routing algorithm. Another protocol, called V-S routing (variable ε-greedy function within SARSA-learning routing), is addressed by Yuan et al. [17]. The proposed algorithm takes the dynamic priority of the current state in an SDN into account to reduce delay as well as improve the link transmission speed. Another scheme utilizing RL has been proposed to meet QoS requirements. A new algorithm called reinforcement learning and software-defined networking intelligent routing (RSIR) [18] utilizes RL to search for the best route for all flows with link-state metrics (i.e., bandwidth, loss, and delay). To obtain an optimal path, the proposed algorithm finds the most rewarding path for every pair of nodes in the network. The simulation results showed that RSIR can avoid traffic concentration and congestion by applying different edge weights for the mentioned metrics. Similar to the mentioned approaches, Hossain et al. [19] present an RL-driven QoS-aware routing algorithm that consists of both QoS monitoring for the delay and packet-loss rate, and RL-based intelligent routing decision-making (RIRD). During operation, if the RL agent selects the path having the lowest delay and packet-loss rate, it obtains the highest reward value.

In addition to RL, a deep learning-based QoS routing protocol was proposed by Owusu et al. [20]. In this study, the authors discuss real-time applications on the Internet and present a framework based on an SDN. A deep neural network is employed to classify the traffic class and search for appropriate routes to meet the QoS demand. As a new ML framework, federated learning (FL) has recently attracted the interest of researchers. As an example, Sacco et al. [21] merge network softwarization and FL to optimize routing decisions in an SDN. Their main contribution is a new path selection algorithm based on long short-term memory (LSTM) to predict the forthcoming traffic on a link based on its history. In the case of a high traffic volume, a new path is selected to avoid highly loaded links and use under-utilized ones. ML-based routing protocols can also be deployed for special objectives. Pasca et al. [22] proposed an application-aware multipath flow routing framework called AMPS. The proposed scheme is composed of dynamic prioritization of flows, priority-based path assignment, and Yen's K-shortest-path algorithm to find the path. In addition to traffic, an energy-efficient routing protocol for an SDN called MER-SDN was suggested by Assefa et al. [23]. For energy efficiency, principal component analysis (PCA) was used to reduce the feature size, along with linear regression to train the model. In addition, an integer programming (IP) formulation for energy consumption as a function of the traffic volume, together with a heuristic algorithm, is presented.

    2.3 Congestion-Aware Routing

A general congestion control scheme over the transport layer has a long convergence time under the end-to-end argument principle. Compared to schemes at the transport layer, enhanced functionality at the network layer leads to a reduced convergence time. To identify and remove congestion proactively and reactively, diverse congestion-aware routing protocols have been studied, which we categorize by their underlying target networks.

First, some congestion-aware routing protocols have been proposed to prevent packet loss in wireless sensor networks [24]. In particular, if a lost packet contains important event or data information, it can affect the reliability of the system. To handle this situation appropriately, advanced congestion-aware routing (ACAR) is a priority- and congestion-aware routing protocol for wireless sensor networks. In ACAR, a differential routing policy depending on priority is applied. For a flow with a higher priority, an inside-zone path is established, whereas another path is constructed outside the zone for a packet with lower priority. In addition, ACAR can provide mobility support by changing the routing zone accordingly. Unlike flat networks, Farsi et al. [25] proposed a new congestion-aware clustering and routing protocol to properly address congestion issues. Congestion is prevented by distributing the cluster head's load among the members and rotating roles within the cluster during every round. While taking limited energy as well as real-time requirements into account, congestion-aware routing needs to cover the mentioned demands. El-Fouly et al. [26] presented the real-time energy-efficient traffic-aware approach (RTERTA) for industrial wireless sensor networks. In RTERTA, congestion is avoided by utilizing underloaded nodes, whose load is measured by the buffer occupancy, together with the hop count to the sink node.

Second, unlike static wireless sensor networks, congestion-aware routing has also been studied in dynamic networks, including vehicular ad hoc networks. Hung et al. [27] presented an intersection-based routing protocol called the data congestion-aware routing protocol (DCAR) that is suitable for urban environments. In DCAR, the amounts of data and vehicular traffic are estimated, and these values are used to construct a routing path. While establishing a path, a look-ahead algorithm for deciding the next intersection is also applied to avoid congestion. Congestion caused by flooding broadcasts was addressed by Liu et al. [28]. A novel congestion-aware GPCR routing protocol (CA-GPCR) utilizes the free buffer queue size and the distance between the next node and the destination node, and restricts the greedy forwarding procedure to avoid congestion. Simulation results show that the CA-GPCR protocol outperforms the existing protocol in terms of packet delivery ratio and congestion-induced delay. In addition, Keykhaie et al. [29] presented a congestion-aware and selfishness-aware social routing protocol for use in delay-tolerant networks. To distinguish congested and selfish nodes, both buffer congestion and selfish behavior are measured and used to obtain a utility value. Depending on this value, a more suitable node is selected for message relaying.

Third, congestion-aware routing for an SDN has been proposed. Attarha et al. [30] proposed a method to reroute flows to avoid congestion in an SDN. To make a decision, link utilization is periodically measured and reported, and a new flow is routed according to the network conditions. The controller predicts the congestion and calculates the amount of flow to be rerouted toward the backup paths. Another congestion-aware routing scheme based on rerouting paths in an SDN was proposed by Cheng et al. [31]. In the proposed scheme, a flow along the congested route is detoured toward a local path, modeled by linear programming (LP). Finally, Ahmed et al. [32] addressed congestion control and temperature-aware routing over SDN-based wireless body area networks. The authors presented an energy-optimized, congestion-controlled, temperature-aware routing algorithm based on enhanced multi-objective spider monkey optimization. The proposed routing algorithm introduces the congestion queue length as a major factor in the routing cost model and combines it with other factors such as the residual energy, link reliability, and path loss.

As previously analyzed, an SDN is capable of implementing complicated algorithms such as ML in a central entity with topology information. In addition, congestion avoidance at the network layer not only reduces the convergence time but also adapts to network dynamics. However, despite the mentioned benefits, there is no ML-based congestion-aware routing protocol over an SDN. Therefore, we adopt Q-learning, a model-free technique that does not require prior knowledge of the reward resulting from taking a specific action in a particular state. Owing to this property, Q-learning is suitable for handling dynamic network congestion properly. A typical operation of a Q-learning based congestion-aware routing protocol is summarized in Fig. 1.

Figure 1: Typical Q-learning based congestion-aware routing protocol flow

    3 Q-Learning Based Congestion-Aware Routing in SDN

In this section, we propose a new Q-learning based congestion-aware routing (QCAR) protocol for an SDN. Both the network architecture and the details of the routing protocol are described in turn.

3.1 Architecture and Components

To implement QCAR over an SDN, the network architecture, including the control plane, data plane, and application plane, is designed as shown in Fig. 2. The control plane collects raw data about the network status through periodic messages. The collected information is passed to the application plane, where the Q-learning agent and algorithm compute the Q-values for the topology and the best route decision for each flow. This decision is sent back to the control plane, which then requests an update of the forwarding tables at the data plane.
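As a rough illustration of this control loop, the sketch below wires the three planes together in Python; all function names and the toy topology are our illustrative assumptions rather than interfaces defined by the paper.

```python
# Minimal sketch of the QCAR plane interaction: the control plane collects
# link states, the application plane computes routes, and the control plane
# pushes forwarding updates. All names and the toy topology are illustrative.

def collect_link_states(topology):
    # Control plane: gather per-node queue lengths reported by the data plane.
    return {node: {"queue_len": q, "neighbors": nbrs}
            for node, (q, nbrs) in topology.items()}

def compute_routes(link_states):
    # Application plane: the Q-learning agent would map link states to
    # Q-values and best routes; a least-loaded-neighbor rule stands in
    # for the learned policy here.
    return {node: min(st["neighbors"], key=lambda n: link_states[n]["queue_len"])
            for node, st in link_states.items() if st["neighbors"]}

def update_forwarding_tables(routes):
    # Control plane: request forwarding-table updates at the data plane.
    for node, next_hop in routes.items():
        print(f"node {node}: forward via {next_hop}")

topology = {"A": (3, ["B", "C"]), "B": (1, ["C"]), "C": (0, [])}
update_forwarding_tables(compute_routes(collect_link_states(topology)))
```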

Figure 2: QCAR architecture

    3.2 QCAR Routing Protocol

The QCAR protocol follows the Q-learning technique to define the routes to be followed by flows between source-destination pairs. Each step consists of selecting and performing an action, changing the state (i.e., moving from one state to another), and receiving a reward. The updated Q-function value at time t reflects the underlying reward for executing action A_t while in state S_t, which provides an optimal reward R_t. Next, we provide details about the derived parameters for node and link states in an SDN, the RL agent, and the RL-based routing algorithms.

    3.2.1 Node and Link States in SDN

For the QCAR protocol, we define a set of parameters that indicate the node and link status to be used by the RL agent. For a node, say node i, the parameters are as follows: the queue length of node i (QL_i^t), the hop count to the destination (H_i^t), and the retransmitted packet ratio (RPR_{i,j}^t) over a link between two adjacent nodes, i and j, at time t. Based on the measured values, the parameters are computed as follows:

Queue length: To measure the congestion level at an arbitrary node i, we periodically estimate the buffer occupancy based on the queue length of node i. The queue length at time t is computed according to Eq. (1). Let QL_i^t be the sum of the queue length of node i and that of the node two hops ahead at time t. By taking the queue length of the node two hops away into consideration through the look-ahead algorithm, a large value is assigned to a node whose neighbors are already congested. If there are at least two neighbors of node i, the minimum queue length among the neighbors is considered, where N_i denotes the set of neighbors of node i.
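From the description above, Eq. (1) plausibly takes the following form, where q_i^t denotes the instantaneous queue length of node i (the symbol q and the exact form are our reconstruction, not the paper's original notation):

$$QL_i^t = q_i^t + \min_{j \in N_i} q_j^t \qquad (1)$$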

Retransmitted packet ratio: In addition to the congestion parameters of a node, the reliability of an adjacent link affects congestion because a packet remains in the buffer until the receiver successfully receives it. To measure the link reliability, we consider the retransmitted packet ratio, which counts all packets retransmitted owing to propagation-related errors on a link. A link with a larger ratio of retransmitted packets is considered unable to satisfy the traffic demand and hence unreliable. To obtain the RPR of a link between nodes i and j during the past s seconds, we use the expression below:

where Packets_sent is the total number of packets sent by node i during the past s seconds from the current time t, and Packets_transmitted counts the total number of packets transmitted to neighbor j during the same period. Similar to the queue length, the link reliability also employs a look-ahead algorithm, as given in Eq. (3).
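Under one plausible reading of these definitions, Eq. (2) measures the fraction of transmission attempts that had to be repeated, and Eq. (3) adds a look-ahead term analogous to Eq. (1); the exact notation below is our reconstruction, not the paper's:

$$RPR_{i,j}^t = \frac{\text{Packets\_sent}_i^t - \text{Packets\_transmitted}_{i,j}^t}{\text{Packets\_sent}_i^t} \qquad (2)$$

$$\widehat{RPR}_{i,j}^t = RPR_{i,j}^t + \min_{k \in N_j} RPR_{j,k}^t \qquad (3)$$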

    3.2.2 RL-Agent

Typical RL problems are usually referred to as discrete-time Markov decision problems owing to the modeling of their solution, which is based on the 4-tuple (S, A, P, R). Here, S is the finite set of states, A is the set of actions, P is the state transition probability matrix, and R is the reward function that the system continuously seeks to maximize. The environment for the RL agent to act on is composed of data packets flowing in a network from a given source to the desired destination. The presence of a given packet p at node i defines the state of that packet at time t as S_t^i. An action A_t^{i,j} represents a decision made by the RL agent to forward the packet from node i to neighbor j, as adopted by the policy (π_t) controlling the state transition with a greedy exploration strategy at time t, as shown in Eq. (4). Upon this action being taken, the state of packet p moves from S_t^i to S_{t+1}^j, with a corresponding reward associated with this action.
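Given the cost-minimizing behavior described in the next paragraph, the policy of Eq. (4) plausibly amounts to a greedy selection over Q-values (our reconstruction):

$$\pi_t(S_t^i) = \arg\min_{A \in \mathcal{A}_i} Q_t(S_t^i, A) \qquad (4)$$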

This means that, instead of finding a path with the maximum reward, our proposed QCAR finds a path with the lowest cost by greedily selecting actions with the lowest rewards, provided that all available neighbors have a congestion level above the predetermined threshold. In addition, for each state transition (S_n → S_{n+1}), the Q-function value Q_n(S_n, A_n) is associated with a reward function R, computed as shown in the following subsection, to estimate the cost of forwarding a packet toward that particular neighbor.

In Q-learning, the agent learning phase consists of a sequence of stages, called epochs (0, 1, ..., n, ...). During the n-th epoch at time t, the RL agent selects an action A_t on a packet p at the current state S_t and receives a reward R_t as it moves to the next state, S_{t+1}. The action-value Q_{t+1}(S_t, A_t) is updated based on the following equation:
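A standard Q-learning update consistent with this description and with the cost-minimizing policy above would be (our reconstruction of Eq. (5)):

$$Q_{t+1}(S_t, A_t) = (1-\alpha)\,Q_t(S_t, A_t) + \alpha\left[R_t + \gamma \min_{A'} Q_t(S_{t+1}, A')\right] \qquad (5)$$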

where α is the learning rate that controls how fast the Q-table changes, and γ is the discount factor that determines the degree to which the agent considers future rewards when estimating new Q-values. The initial Q-values, Q_0(S_0, A_0), for all states and actions are set to zero before the RL-agent learning phase starts.

    3.2.3 Reward Function

The reward function used by the RL-agent is based on three measured parameters. The reward is proportional to the queue length, retransmitted packet ratio, and hop count, as defined in Eq. (6). To normalize these parameters, Q_max and H_max are introduced, denoting the maximum queue length and maximum allowed hop count, respectively. In addition, Eq. (6) is applied along with the respective tuning weights ω1, ω2, and ω3 ∈ [0, 1], where ω1 + ω2 + ω3 = 1.
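From this description, Eq. (6) plausibly takes a normalized weighted-sum form such as the following (our reconstruction):

$$R = \omega_1 \frac{QL_i^t}{Q_{max}} + \omega_2\,RPR_{i,j}^t + \omega_3 \frac{H_i^t}{H_{max}} \qquad (6)$$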

    3.3 QCAR Routing Decision

The general process of the proposed congestion-aware routing protocol is explained in Algorithm 1, which briefly shows how the different layers work together to find a better path for all pairs of nodes at the data plane. First, the RL-agent at the application plane is provided with processed link-state information from the control layer, together with the given inputs (i.e., the learning rate, discount factor, network size, training epochs, all (src, dst) pairs, the network graph, and the weights (ω1, ω2, ω3)). From these inputs, the RL agent continuously computes and updates the best paths for all pairs of nodes in the given network.

Algorithm 1: Q-value Update
Input: All (src, dst) pairs, network graph, link states
Output: Q_{t+1}(S_t^i, A_t^i)
1  Initialize α, γ, ω1, ω2, ω3, and Q: A × S → R, initialized with 0
2  For each (src, dst) pair do
3      current_state = src
4      While current_state is not the destination do
5          R_{t+1} ← R(S_t, A_t) with Eq. (6)
6          Update Q-table
7          S_t ← S_{t+1}
8      End While
9  End For

Algorithm 2: Selecting Next Hop at Node i
1  If (NID(i) == NID(dst))
2      Deliver the packet to the upper layer
3  Else
4      Flag ← False
5      NH_i ← ∅
6      For all nodes j in N_i
7          If (QL_j^t < Threshold)
8              NH_i ← NH_i ∪ {NID(j)}
9              Flag ← True
10         End If
11     End For
12     If (Flag == True)
13         Select a next hop in NH_i randomly
14     Else
15         Take the Q-table and find the path with the lowest Q-value
16     End If
17 End If

The algorithm execution process to find the optimal paths for all pairs of nodes starts by initializing the Q-values of the Q-table to zeros (Line 1). For a given packet at the source node, the first exploration epoch starts by initializing the state of the packet at the src node (Line 3), and from that state one action (A_t) is selected among all possible actions from the current state. With the selection of this action, the packet moves to the next state (S_{t+1}) (Line 7). Using Eq. (4), the minimum Q-value for this next state is obtained over all possible actions (Line 6), followed by setting the next state as the current state (Line 7). The state transition loop continues until the current state equals the final state, i.e., the packet reaches the dst node (Line 4). Once the final goal is reached, the training epoch ends and a new one starts until all epochs have run (Line 2). Based on the computed Q-values, the RL-agent computes the optimal routes to forward data packets between the given src-dst pairs and forwards them to the flow control module at the control plane.
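As a concrete illustration, the following minimal Python sketch mirrors Algorithm 1 on a toy topology; the graph, reward values, and hyperparameters are our illustrative assumptions, not artifacts of the paper.

```python
import random

# Minimal sketch of Algorithm 1 (Q-value update) on a toy topology.
# The graph, reward values, and hyperparameters are illustrative only.
ALPHA, GAMMA = 0.5, 0.8  # learning rate and discount factor
graph = {"s": ["a", "b"], "a": ["d"], "b": ["d"], "d": []}
reward = {("s", "a"): 0.6, ("s", "b"): 0.3, ("a", "d"): 0.2, ("b", "d"): 0.4}

# Q: (state, action) -> value, initialized with 0 (Line 1)
Q = {(u, v): 0.0 for u in graph for v in graph[u]}

def q_update(src, dst, epochs=200):
    for _ in range(epochs):                    # one training epoch per pass (Line 2)
        state = src                            # current_state = src (Line 3)
        while state != dst:                    # loop until the destination (Line 4)
            nxt = random.choice(graph[state])  # explore one action A_t
            r = reward[(state, nxt)]           # R(S_t, A_t), i.e., Eq. (6) (Line 5)
            # cost-minimizing Q-update, as in the Eq. (5) sketch (Line 6)
            future = min((Q[(nxt, k)] for k in graph[nxt]), default=0.0)
            Q[(state, nxt)] = (1 - ALPHA) * Q[(state, nxt)] + ALPHA * (r + GAMMA * future)
            state = nxt                        # S_t <- S_{t+1} (Line 7)

q_update("s", "d")
print(min(graph["s"], key=lambda v: Q[("s", v)]))  # lowest-cost next hop from "s"
```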

The routing algorithm of QCAR is described in Algorithm 2. Initially, if a node is the destination, determined by comparing the node identifiers, the packet is passed to the upper layer. Otherwise, the node chooses the next hop for the packet. Choosing the next hop depends on the neighbor nodes' congestion levels. In Lines 5 to 11, we construct a new neighbor subset (NH_i) of N_i containing only the nodes whose queue length is less than the predetermined threshold. After building NH_i, the node performs one of two operations: the former is to select a next hop from the NH_i set randomly to prevent node congestion by distributing packets across multiple nodes, whereas the latter is to set the next hop as the node along the path with the lowest Q-value. These actions are shown between Lines 12 and 16. That is, when the congestion levels of multiple neighbors are acceptable, the next hop is randomly selected among them. Otherwise, the best route according to the Q-values is chosen and set as the next hop for the given packet. A sketch of this rule follows.
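The next-hop rule of Algorithm 2 can likewise be sketched in a few lines of Python; the threshold value, data structures, and function name are our illustrative assumptions.

```python
import random

# Minimal sketch of Algorithm 2 (next-hop selection at node i).
# THRESHOLD and all data structures are illustrative placeholders.
THRESHOLD = 8  # queue-length threshold in packets (value assumed)

def select_next_hop(i, dst, neighbors, queue_len, Q):
    if i == dst:
        return None  # destination reached: deliver the packet to the upper layer
    # Build NH_i: neighbors whose queue length is below the threshold (Lines 5-11)
    nh = [j for j in neighbors[i] if queue_len[j] < THRESHOLD]
    if nh:
        # Distribute the load: pick an uncongested neighbor at random (Line 13)
        return random.choice(nh)
    # All neighbors congested: fall back to the lowest-Q (lowest-cost) action (Line 15)
    return min(neighbors[i], key=lambda j: Q[(i, j)])

neighbors = {"i": ["a", "b"]}
queue_len = {"a": 9, "b": 4}
Q = {("i", "a"): 0.7, ("i", "b"): 0.5}
print(select_next_hop("i", "d", neighbors, queue_len, Q))  # -> "b"
```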

    4 Performance Evaluation

This section presents an evaluation of the proposed QCAR protocol through simulations based on the network simulator ns-3. First, we describe our simulation settings, followed by a discussion of the impacts of different settings of the Q-learning-related parameters. In addition, the influence of the data flow rate, the number of traffic sources, the node density, and the maximum buffer size on the system performance is discussed, and the performance of our proposed QCAR is ultimately compared with the shortest path based on Dijkstra's algorithm and with traditional Q-learning without a look-ahead, represented as QL in the figures. We present the performance comparisons using two metrics: packet delivery ratio and end-to-end delay.

    4.1 Simulation Settings

To verify the QCAR-based routing mechanism, we deployed network topologies with uniformly distributed nodes. To observe how well the proposed approach reacts to different congestion levels, we perform several simulations with different data flow rates, varied numbers of traffic sources, varied node densities, and varied maximum buffer sizes. To avoid the formation of long routes between the given source and destination nodes, we limit the route length to a maximum of only 4 hops. A rate error model with a byte unit is applied to cause packet corruption; according to this probability, a packet is discarded if one of its bytes is corrupted. To best estimate the obtained results, we run each scenario 10 times with different seed values and average the results. For the specific parameter configuration, see Tab. 1 below.

Table 1: Simulation parameters
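For quick reference, the parameter values stated explicitly in the surrounding text can be collected as follows (values not stated in the prose are omitted):

```python
# Simulation parameters gathered from the prose of Section 4; this is a
# convenience summary, not a reproduction of Tab. 1.
SIM_PARAMS = {
    "simulator": "ns-3",
    "max_route_length_hops": 4,
    "max_buffer_size_packets": 10,               # default; varied in Section 4.6
    "data_flow_rates_pps": [5, 10, 15, 20, 25],  # packets per second (Section 4.3)
    "num_traffic_sources": [1, 3, 5, 7],         # Section 4.4
    "num_nodes": [10, 30, 50],                   # Section 4.5
    "runs_per_scenario": 10,                     # different seeds, results averaged
    "error_model": "rate error model (byte unit)",
}
```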

    4.2 Impact of Q-Learning Related Parameters on QCAR

In the proposed QCAR approach, the link state associated with each node is periodically updated based on the look-ahead method to determine the potential next hop(s). The updated link states offer the information necessary for next-hop selection, such as the available queue size and the measured link reliability of that particular node. The degree to which the information of a potential neighbor is considered important when selecting the next hop depends on the discount factor, which ranges between zero and one: the closer it is to one, the higher the impact, and vice versa. In addition, we discuss the impacts of different weight settings (ω1, ω2, ω3) that determine which of the three metrics (available buffer size ratio, link reliability, and hop count) is dominant when computing routes to the destination node.

As previously mentioned, the weights are added to comprehensively minimize the effects of the available buffer size, link reliability, and path length in the route selection. We randomly select the ratios and run the simulations to find a single weight set that gives the best results. We categorize three sets under different cases, each showing the effect of setting one of the parameters as dominant over the others. In Case 1 (ω1:ω2:ω3 = 2:1:7), the hop count parameter is the most dominant, whereby the shortest path to the destination is the most favored. Case 2 (ω1:ω2:ω3 = 2:7:1) favors the formation of a path based on the reliability of the links. Finally, in Case 3, with the ratio of (ω1:ω2:ω3 = 7:1:2), the nodes prefer the selection of next hops based on the degree of packet congestion. According to the simulation results shown in Figs. 3 and 4, assigning a relatively larger weight to the congestion metric causes more data packets to be delivered at an acceptably increased delay with the QCAR approach. Case 3 shows a better trade-off between the parameters by allowing the nodes to prefer the selection of less congested and shorter routes as much as possible. In Figs. 3 and 4, we use a single traffic source sending packets at a rate of 20 packets per second, which is expected to cause a buffer overflow on certain nodes after some time because the maximum buffer size is only 10 packets.

Figure 3: Packet delivery ratio vs. weight ratio

The results suggest that a large discount factor has a better impact on the performance of the QCAR algorithm because it allows nodes to give higher priority to neighbors whose own neighbors are less congested and closer to the destination node. In addition, we studied the impacts of different learning rates on both the QL and QCAR approaches and present the results in Figs. 5 and 6 below. The learning rate determines how fast nodes update the routing table based on newly computed route information. The higher the learning rate, the faster the nodes tend to find the optimal route information, and vice versa. However, this reaches its limit as the value approaches one. At that point, the nodes will almost always use the newly computed path without considering the effectiveness of the currently used path, which in some cases is better than the newly computed one.

Figure 4: End-to-end delay vs. weight ratio

Figure 5: Packet delivery ratio vs. learning rate

Figure 6: End-to-end delay vs. learning rate

    4.3 Effect of Data Flow Rate

To evaluate the effectiveness of the proposed QCAR algorithm in forming shorter, less congested paths, we conducted simulations at different data flow rates. In this particular set of experiments, a single source node was allowed to send packets at rates of 5, 10, 15, 20, and 25 packets per second toward a single destination. As can be seen from Fig. 7 below, at low data rates the network has sufficient resources to forward all data packets to the destination. Regarding the packet delivery ratio, all three approaches delivered almost all data packets successfully. Meanwhile, in Fig. 8, the shortest-path algorithm performs better in terms of delay because packets are delivered through a shorter and less congested path.

With a gradual increase in the data flow rate, the performance of the shortest-path approach falls sharply owing to the congestion experienced on the selected short path. Meanwhile, the QL and QCAR approaches adapt better to the increased packet flow rate, avoiding paths with congested neighbors and hence achieving a relatively higher delivery ratio. Our proposed QCAR approach exhibits a better performance compared to the typical Q-learning-based approach, delivering approximately 10% more packets with a slightly reduced delivery delay. This is because the selection of next hops considers the possible consequences that could occur two hops away if the current action is taken. Simply stated, QCAR allows the selection of neighbor nodes that may currently appear congested but will soon be viable next hops, unlike the QL method. As shown in Fig. 8, the delivery delay for both the QL and QCAR approaches increases in proportion to the increase in the data flow rate. This is caused by the tendency of nodes to create longer routes as they try to find less congested next hops. Regardless, the QCAR approach exhibits a delivery delay 10% shorter than that of traditional Q-learning. All approaches exhibit a sharp increase in delay when the data flow rate exceeds 10 packets per second because the maximum buffer size set for this experiment was 10 packets. Hence, it is at this rate that some nodes begin to experience congestion owing to buffer overflow, upon which our proposed QCAR method reacts through the random route selection algorithm, which prevents congestion at the intermediate nodes.

Figure 7: Packet delivery ratio vs. data rate

Figure 8: End-to-end delay vs. data rate

    4.4 Effects of Varied Number of Traffic Source Nodes

In this section, we discuss the impacts of using a varied number of traffic source nodes on all three approaches. We limit the maximum buffer size to 10 packets in a network of 10 nodes and observe how the different approaches react to 1, 3, 5, and 7 traffic sources. In this set of experiments, the intermediate nodes receive data packets from different sources directed toward different destinations at some point during the simulation time. We expect our proposed QCAR to react better than the shortest-path and QL approaches because nodes use the look-ahead method to detect the possible consequences of selecting a node as the next hop.

To conduct the experiments, each source node sends data packets at a constant flow rate of 10 packets per second toward a given destination. To create varied congestion levels on nodes, each link connecting two nodes is given a different bandwidth. As can be seen in Fig. 9, with a single traffic source, most of the data packets are successfully delivered to their respective destinations within a short time for all schemes because the paths are not congested. As the number of traffic source nodes increases, some intermediate nodes start to experience congestion caused by traffic bursts. The shortest-path approach experiences a sharp decline in delivery ratio caused by buffer overflow because the nodes use fixed routes to forward the data packets.

Figure 9: Packet delivery ratio vs. source nodes

Compared to the QL and shortest-path approaches, our proposed scheme can deliver more data packets regardless of the increased traffic flow owing to its ability to distribute traffic by randomly selecting next hops among the nodes with low congestion levels. In addition, periodic updates of the route information allow busy routes to rest temporarily, thereby allowing buffered packets to be forwarded without loss. The QL mechanism cannot do this as well as QCAR because a node will continue to forward data packets toward a neighbor as long as it can accept them, without considering what will happen shortly thereafter. The QCAR mechanism delivers approximately 13% more data packets than QL and 19% more than the shortest-path approach, at an acceptably increased delay (see Figs. 9 and 10) caused by the tendency of nodes to route packets along longer routes compared to the shortest-path approach.

Figure 10: End-to-end delay vs. source nodes

Figure 11: Packet delivery ratio vs. number of nodes

    4.5 Effect of Varied Number of Nodes

To observe the impact of increasing the number of nodes, we created three different topologies with 10, 30, and 50 nodes to represent small, intermediate, and large node-density topologies, respectively. Here, we use three traffic source nodes, each sending data packets at a rate of 10 packets per second toward different destinations during the entire simulation time. Similar to the previous set of experiments, we limit the maximum buffer size to 10 packets only and present the simulation results in Figs. 11 and 12 below to reflect the behaviors of the three approaches. As can be seen from Fig. 11, the shortest-path approach exhibits almost the same tendency, mostly maintaining the amount of data packets delivered across all node densities. This is because the shortest-path approach chooses the same short routes regardless of the presence of other nodes. However, the QL and QCAR approaches react differently: both show a linear increase in the packet delivery ratio. This is caused by the presence of multiple neighbors, which offers additional options to forward data packets without experiencing congestion.

At some point during the simulation, congestion at intermediate nodes is handled better by the QCAR approach because it guarantees better routing decisions by considering nodes up to two hops away. This offers more options for forwarding data packets compared to the plain QL approach. The QCAR approach delivers more data packets, at an increased ratio of almost 7% and nearly 20% compared to QL and the shortest path, respectively. Similar to the previous scenarios, the QL and QCAR approaches tend to show increased delivery delay owing to the tendency of nodes to select longer routes to forward data packets, as shown in Fig. 12.

Figure 12: End-to-end delay vs. number of nodes

Figure 13: Packet delivery ratio vs. buffer size

Figure 14: End-to-end delay vs. buffer size

    4.6 Effect of Varied Maximum Buffer Size

In this set of experiments, we observe the impact of varying the maximum buffer size of the nodes. We set up a network of 30 nodes with three traffic sources, all generating packets at a rate of 10 packets per second. The packet delivery ratio is expected to increase in proportion to the increase in buffer size. As shown in Fig. 13, all approaches exhibit a relatively linear increase in packet delivery ratio and a reduced delivery delay as the buffer size increases. With our proposed approach, increasing the buffer size means the nodes tend to have a relatively larger subset table of nodes with congestion levels lower than the predetermined threshold (see Algorithm 2). A larger table of nodes actively participating in the routing means that the nodes have more options for choosing next hops with far less congestion. The QCAR approach performs better, delivering nearly 10% more data packets with the smaller maximum buffer (10 packets) and 5% more with the larger one (30 packets), with a relatively shorter delay compared to the traditional Q-learning approach, as shown in Fig. 14.

    5 Conclusion

In this paper, we proposed a new congestion-aware routing protocol based on Q-learning over an SDN architecture. Topology information and periodically measured congestion values are used to compute the Q-value and select the best route to avoid congestion. The performance evaluation reveals that QCAR outperforms the existing scheme by more than 15% in terms of packet delivery ratio and reduced end-to-end delay at high traffic rates, large network densities, and varied buffer sizes. In addition to the selection of the best route, load balancing along multiple paths can contribute to congestion avoidance and stabilize the network performance. Based on this research, load balancing with a Q-value for each path and intelligent next-hop selection instead of random selection will be studied and evaluated.

Funding Statement: This work was supported by the Institute for Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2019-0-01343, Training Key Talents in Industrial Convergence Security) and by Research Cluster Project R20143 of the Zayed University Research Office.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
