
    Traffic Anomaly Detection Method Based on Improved GRU and EFMS-Kmeans Clustering


Yonghua Huo, Yi Cao, Zhihao Wang, Yu Yan, Zhongdi Ge and Yang Yang*

    1Science and Technology on Communication Networks Laboratory, The 54th Research Institute of CETC, Shijiazhuang, China

    2Department of Military Representative Office of General Military Equipment Development Shijiazhuang, Shijiazhuang, China

    3State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, Beijing, 100876, China

    ABSTRACT In recent years, with the continuous development of information technology and the rapid growth of network scale, network monitoring and management have become more and more important. Network traffic is an important indicator of network state. In order to ensure the normal operation of the network, improve its availability, find network faults in time, and deal with network attacks, it is necessary to detect abnormal traffic in the network. Abnormal traffic detection is therefore of great significance in practical network management. To improve the accuracy and efficiency of network traffic anomaly detection, this paper proposes a comprehensive anomaly detection method that cascades improved GRU traffic prediction with improved K-means clustering. Firstly, an improved highway-GRU algorithm, HS-GRU (an improved Gate Recurrent Unit neural network based on the highway network and the STL algorithm), is proposed, which combines the STL decomposition algorithm with a highway GRU neural network and uses this improved algorithm to predict traffic. Then, we propose the EFMS-Kmeans algorithm (an improved clustering algorithm that combines a Mean Shift algorithm based on electrostatic force with K-means clustering) to overcome the shortcoming of traditional K-means clustering, which cannot automatically determine the number of clusters. The sum of squared errors (SSE) method and the silhouette coefficient method are used to double-check the clustering effect. After the cluster centers are determined, the potential energy gradient is used directly for anomaly detection with a threshold method, which takes the local characteristics of the data into account and ensures the accuracy of anomaly detection. The simulation results show that the anomaly detection algorithm based on HS-GRU and EFMS-Kmeans clustering proposed in this paper can effectively improve the accuracy of traffic anomaly detection and has important application value.

    KEYWORDS Anomaly detection; gated recurrent unit; clustering; mean shift; K-means

    1 Introduction

    In recent years, with the development of Internet technology, the network structure has become increasingly complex and the scale of data has continued to expand. The complex network structure brings huge challenges to network security. Many criminals attack the network to obtain useful information and great wealth. Anomaly detection of network traffic can provide key auxiliary decisions for network management and security early warning, which has very important practical significance.

    Anomaly detection refers to the problem of finding patterns in data that do not conform to expected behavior [1]. For network anomaly detection, it means using statistical analysis, data mining, and machine learning methods to find information such as abnormal data groups and abnormal interactions in the network, in order to provide support and assistance for subsequent network management. Generally, network anomaly detection requires some data stream processing tools to collect a large amount of network data stream information, from which the data attributes that can be used for anomaly detection are extracted and filtered and the data records are classified; finally, whether the traffic is normal is determined by analyzing these data attributes.

    The existing methods of network anomaly detection can be mainly divided into the following categories: unsupervised learning methods, supervised learning methods and semi-supervised learning methods. Supervised anomaly detection methods mainly include neural networks, pattern prediction, probability statistics, incremental SVM and other methods; unsupervised anomaly detection methods mainly include AP clustering anomaly detection based on the kernel adaptive algorithm and K-means clustering anomaly detection; semi-supervised anomaly detection methods are mainly a combination of supervised and unsupervised anomaly detection methods, and the main approach is to improve the clustering-based anomaly detection method [2].

    The density-based clustering method is one of the common clustering algorithms. By calculating the density, the density-based clustering method can overcome the shortcoming that distance-based algorithms can only find spherical clusters, and has a better effect for anomaly detection [3]. However, the anomaly detection method based on density clustering has a high time complexity, usually adopts global parameters and ignores the local characteristics of the sample distribution, resulting in poor accuracy of anomaly detection. Therefore, density-based and distance-based clustering algorithms are combined in this paper. Firstly, the number and centers of the categories are automatically determined by using the improved Mean Shift algorithm based on electrostatic force, and then the number of clusters obtained is used as the K value of K-means for anomaly detection. This makes up for the shortcoming of the K-means algorithm, which requires the number of clusters K to be specified manually.

    Aiming at the problem that traditional methods have a poor detection effect on single traffic values in network time series data, a traffic prediction-clustering cascade anomaly detection method is proposed in this paper. Firstly, in view of the periodic characteristics of network traffic data, this paper proposes a traffic prediction algorithm, HS_GRU (an improved Gate Recurrent Unit neural network based on the highway network and the STL algorithm), which combines the GRU (Gate Recurrent Unit) neural network with the STL (Seasonal Trend decomposition using Loess) decomposition algorithm, applied after a Box-Cox transformation, and a highway network to predict the network traffic data. The STL decomposition algorithm extracts the periodic component of the network traffic data, and the remaining component is then input into the highway GRU neural network model for prediction. In this way, the model can effectively capture the periodic characteristics of network traffic data and improve the accuracy of traffic prediction. Secondly, this paper proposes an improved Mean Shift clustering method based on electrostatic force (EFMS). The concept of electrostatic force is introduced based on Coulomb’s law, and the K-nearest neighbor idea is used to reduce the computational complexity; meanwhile, the over-dependence of the Mean Shift clustering algorithm on the bandwidth is resolved. Finally, EFMS is combined with the classical K-means algorithm to obtain the improved K-means anomaly detection algorithm (EFMS-Kmeans). Based on the number of clusters and the cluster centers obtained by the EFMS algorithm, the K-means method is used for anomaly detection. A threshold method directly based on the potential energy gradient is adopted, which not only considers the local characteristics of the data but also ensures the accuracy of anomaly detection. The prediction results of HS_GRU are used as one of the inputs of the improved EFMS-Kmeans clustering algorithm to detect abnormal traffic. The improved anomaly detection algorithm proposed in this paper can effectively improve the detection efficiency and adapt to real network data.

    The remaining sections of this paper are organized as follows: Section 2 introduces the related work, Section 3 introduces the improved algorithm, Section 4 introduces the simulation experiments, and Section 5 summarizes the full text.

    2 Related Work

    Nowadays, information technology is in an era of rapid development. With the rapid development of Internet technology, the scale of the network is increasing day by day, its structure is becoming more and more complex, the probability of failures and abnormalities in the network is greatly increased, and network attacks are common. In order to ensure the normal operation of the network, improve the availability of the network, detect network failures in time, and respond to network attacks, it is necessary to detect abnormal traffic in the network. Anomaly detection is an important data analysis task that can be used to identify network intrusions and network failures [2]. Anomaly detection has been widely used in countless application fields, such as medical and public health, fraud detection, intrusion detection, industrial damage, image processing, sensor networks, robotic behavior and astronomical data [4].

    Abnormal traffic refers to a traffic pattern that differs from normal network operation, appearing when the network is not operating normally or there is a network intrusion. Due to various reasons such as equipment failure, data source conflict, abnormal behavior, human error and negligence, abnormal data will be generated in the network. Traffic data is one of the important kinds of data that reflect the network status, and statistical analysis of network traffic data can help manage the network effectively. An abnormal traffic value usually indicates the existence of abnormal conditions in the network, including network attacks, equipment failures, etc. Such anomalies need to be dealt with as soon as possible; otherwise, they will have a negative impact on the entire network. It can be seen that abnormal traffic detection is of great significance in actual network management.

    Research on anomaly detection is extensive; because anomaly detection has a very wide range of applications and practical value, many researchers study anomaly detection methods suited to the actual needs of their fields. Dias et al. [5] proposed an anomaly detection framework based on normalizing flows. The main application scenario of the framework is trajectory anomaly detection, and it can perform complex density estimation from data through neural networks. The main contribution of their paper is to compute an exact model likelihood value for each segment of a trajectory, which is an important feature of normalizing flows. They aggregate the likelihoods of the segments into a single coherent trajectory anomaly score. Such a strategy enables the processing of potentially large sequences of different lengths. Compared with traditional anomaly detection, this method achieves a better effect.

    Almohri et al. [6] applied clustering-based anomaly detection to IoT devices. Their paper uses binary classifiers and density-based clustering algorithms to model the behavior of benign IoT devices through a real test bed, aiming to capture normal behaviors in the IoT monitoring environment. The results on the IoT test platform show that the classifier and clustering algorithm are effective in detecting compromised IoT devices.

    Li et al. [7] applied clustering-based anomaly detection to road anomaly detection. They use spatial clustering methods to group multiple driving test results based on their spatial density patterns. Finally, optimized detection results are obtained by integrating the member points of each cluster, and the detection results are good.

    Cluster-based anomaly detection has a wealth of practical experience and accumulated theoretical research. Anomaly detection in many fields is based on clustering, and the most common basis is K-means. Nogues et al. [8] proposed an abnormal network traffic detection algorithm based on K-means, using similarity and dissimilarity to define the clustering criteria. However, this method only uses basic K-means clustering without improving it, and the effect of the algorithm is not ideal. In addition, the authors did not specify the detailed experimental procedures.

    Aljawarneh et al. [9] proposed an algorithm for incremental clustering dimensionality reduction and intrusion detection based on a distance metric. Intrusion detection is a sub-problem of anomaly detection. Their work uses an incrementally growing clustering feature pattern, which can improve the accuracy of intrusion detection.

    Hafeez et al. [10] proposed a combination of fuzzy C-means clustering and a fuzzy interpolation scheme for anomaly detection in the Internet of Things, to analyze network traffic and detect malicious network activities. Once malicious activity is detected, the anomaly detection system automatically enforces network access restrictions on the IoT device that generates this activity and prevents it from attacking other devices or services. This work uses a wealth of experimental data sets and achieves good simulation results.

    Kongshavn et al. [11] applied clustering techniques to diagnose distributed denial of service (DDoS) attacks. They use clustering techniques to create geographic clusters in areas that may contain legitimate traffic. Based on extensive experimental results, they argue that the use of clustering algorithms is a better solution.

    As a classic and very effective method, clustering is often combined with other algorithms for anomaly detection. Barletta et al. [12] studied the security of in-vehicle communication networks and proposed an intrusion detection method based on distance judgment. They first used the supervised XY Fusion Kohonen network (XYF), and then combined the XYF network with the K-means clustering algorithm (XYF-K) to improve network efficiency. The model was tested on an open source data set, and the proposed architecture showed high accuracy in attack message detection.

    Since the use of a single clustering method for anomaly detection is not always effective, many researchers have adopted a combination of two clustering methods. Parwez et al. [13] used unsupervised clustering techniques for anomaly detection, namely K-means clustering and hierarchical clustering. The combination of the two clustering methods can make up for each other’s defects, and the effect of anomaly detection is greatly improved. However, their work does not improve these two basic clustering algorithms, so the anomaly detection performance still leaves room for improvement.

    In recent years, with the rise of artificial intelligence and the in-depth study of neural networks, many neural network models have been applied to anomaly detection. Pandeeswari et al. [14] proposed a cloud anomaly detection system based on a fuzzy clustering neural network. They adopted a hybrid of the fuzzy C-means clustering algorithm and an artificial neural network (FCM-ANN) to improve the accuracy of the detection system. The proposed system was implemented and compared with the naive Bayes classifier and the classic ANN algorithm. The simulation results show that the system can detect anomalies with high detection accuracy and a low false alarm rate even under low-frequency attacks.

    In addition to being combined with neural networks, clustering algorithms can also be used as a sub-step in a multi-step anomaly detection algorithm. Bhuyan et al. [15] proposed a method based on multi-step outlier detection to find network traffic abnormalities. To support outlier-based network anomaly recognition, they use the following modules: mutual information and feature selection techniques based on generalized entropy to select relevant non-redundant feature subsets, and a tree-based clustering technique to identify anomalies.

    Alkasassbeh et al. [16] proposed a novel hybrid method to detect DDoS attacks by monitoring abnormal traffic in the network. This method reads the flow data and builds a predictive model from it, through which future data can be predicted and compared with the observed data in order to detect abnormal flow. The method combines two techniques, traffic prediction and change detection: it first obtains predicted values through traffic prediction, and then compares them with the real values against upper and lower boundaries to detect anomalies. This method achieves a high accuracy rate.

    Yaacob et al. [17] studied a new method of detecting potential attacks in the network by using the Autoregressive Integrated Moving Average (ARIMA) model. The solution can provide early warning through its predictive capabilities. Andrysiak et al. [18] proposed the use of the fractional ARIMA (ARFIMA or FARIMA) model for network anomaly detection. The authors select the model parameters based on a representation consistency criterion and its estimation error, and use the statistical relationship between the prediction and the original network traffic to determine whether the traffic to be detected is normal or under attack.

    In essence, unsupervised anomaly detection using deep learning is mainly hybrid. First, deep neural networks learn complex patterns of the data. Then, the representation from the output layer of the trained network is used as the input of traditional anomaly detection algorithms. Inspired by Davis et al. [19], we also adopt a combination of a neural network and unsupervised anomaly detection. The main idea of our method is an abnormal flow detection technique based on flow prediction.

    At present, there are many methods for flow prediction, which can be roughly divided into two categories: traditional flow prediction methods such as FARIMA [20], and artificial intelligence methods, mainly based on neural networks [21].

    Katris et al. [22] compared several forecasting model building procedures, such as ARIMA, FARIMA and Artificial Neural Network (ANN) models, on Internet traffic data. The experiments show that predictions can be further improved if the non-linear structure that sometimes exists in the time series is recognized with the help of ANNs.

    Feng et al. [23] proposed the Deep Traffic Predictor (DeepTP) to forecast traffic demands from spatially dependent and long-period cellular traffic. It consists of two components: a general feature extractor for modeling spatial dependencies and encoding external information, and a sequential module for modeling complicated temporal variations.

    Zhou et al. [24] proposed a new network traffic prediction method based on an improved echo state network to deal with the complex characteristics of network traffic, such as mutability, chaos, timeliness, and nonlinearity.

    In view of the temporal characteristics of traffic, RNNs and their variants are frequently used in traffic forecasting. Liu et al. [25] proposed several TM prediction methods based on Neural Networks (NN) and predicted the TM from three perspectives: predicting the overall TM directly, predicting each origin-destination (OD) flow separately, and predicting the overall TM combined with key element correction. The experimental results show that prediction methods based on Recurrent Neural Networks (RNN) can achieve better prediction accuracy than methods leveraging Convolutional Neural Networks (CNN) and Deep Belief Networks (DBN).

    Cao et al. [26] applied the gated recurrent unit (GRU) model and the interactive temporal recurrent convolution network (ITRCN) to single-service traffic prediction and interactive network traffic prediction, respectively. The experimental results based on Yahoo data sets show that the proposed method outperforms the conventional GRU and CNN methods.

    However, due to the complexity of traffic, a single traffic forecasting method can no longer meet the demand in some scenarios, and decomposition algorithms are gradually being applied in traffic forecasting.

    Xiang et al. [27] proposed a new hybrid network traffic prediction method based on the combination of covariation orthogonal prediction and artificial neural network prediction to capture the burstiness in network traffic, and the accuracy of the new prediction method was effectively improved.

    Wei et al. [28] introduced a prediction algorithm based on traffic decomposition. The complex correlation structure of the historical network traffic is decomposed according to different protocols, and each part is then predicted separately with a wavelet method. Simulation results show that the combined method achieves higher prediction accuracy than prediction without traffic decomposition.

    Jiang et al. [29] proposed a new prediction algorithm for network traffic in large-scale communication networks. They use signal analysis theory to transform network traffic from the time domain to the time-frequency domain and build prediction models for the low-frequency and high-frequency components. Simulation results indicate that the algorithm has a much lower prediction error than previous methods.

    Yang [30] proposed a network traffic prediction method based on the Short Time Fourier Transform (STFT) and traffic modeling, using STFT to decompose network traffic into high-frequency and low-frequency components. Simulations show that the proposed prediction method outperforms WABR and PCA.

    Network traffic forecasting technology predicts the future trend of network traffic. It can provide great help for various important network management functions such as network attack detection, resource allocation, and routing selection. The predicted values obtained from a traffic forecast can be used for outlier detection. Compared with other anomaly detection methods, anomaly detection based on traffic prediction is better in terms of real-time performance and speed. However, such methods usually judge abnormality only by comparing the actual value with the predicted value in combination with a threshold, so detection based on flow prediction alone still has insufficient performance. Nevertheless, the prediction-based approach can quickly screen out obvious abnormal points, which provides a reliable premise for cluster-based anomaly detection. In addition, since the result of traffic prediction is usually obtained by training on time series data, the prediction result can be considered to contain sequence information about past traffic. Adding it to the sample input of clustering detection adds time-series features to the clustering samples and achieves more accurate detection. Therefore, this article uses traffic prediction technology as a preprocessing step for anomaly detection.

    The anomaly detection method based on unsupervised clustering can adapt to unlabeled traffic scenarios and has good detection performance. However, cluster-based anomaly detection requires a large number of data samples for clustering, and the speed is slow. In addition, its detection principle is to find outliers in the data samples through clustering, which relies on the premise that anomalies occupy only a minority of the clustered data samples. Most current research directly treats this as a default condition without more rigorous discussion or processing to ensure that the condition holds. Moreover, because traffic data is a type of time series data, if cluster-based anomaly detection only clusters the one-dimensional time series of flow values, it will not be able to find some traffic anomalies related to the time series context, which affects the detection results. Furthermore, existing solutions still have problems such as insufficient detection accuracy and insufficient attention to time efficiency. In response to these problems, we first introduce the results of traffic prediction into the cluster samples to add temporal characteristics to them. Subsequently, based on Mean Shift clustering, the concept of electric potential energy is introduced to highlight local characteristics and improve the accuracy of anomaly detection.

    The anomaly detection method based on the improved GRU and EFMS-Kmeans clustering proposed in this paper first obtains the traffic prediction value through the improved GRU model, and then uses the anomaly detection method based on the improved Mean Shift clustering to detect traffic anomalies. The method proposed in this paper can predict the trend of future network traffic changes and effectively detect abnormalities in network traffic. This is significant for grasping the current and future traffic status of the entire network and for assisting in fault warning, routing planning, network monitoring, and network management, and it has important scientific research and practical value.

    3 Anomaly Detection Algorithm Based on HS_GRU and EFMS-Kmeans Clustering

    This section introduces the cascade anomaly detection method based on flow prediction and improved density clustering proposed in this paper. Section 3.1 mainly introduces the improved HS_GRU model, which is used to provide more accurate flow prediction results for the subsequent anomaly detection based on improved density clustering. Section 3.2 mainly introduces the improved EFMS-Kmeans clustering method. The traffic prediction results obtained with the HS_GRU model are added to the anomaly detection data as one-dimensional features, so as to add timing characteristics to the clustering samples and achieve more accurate detection.

    The cascade anomaly detection method based on flow prediction and improved density clustering proposed in this paper is shown in Fig. 1 below.

    Figure 1: A cascade detection method based on flow prediction and improved density clustering

    3.1 Traffic Prediction Method Based on the Improved GRU (HS_GRU)

    Considering the periodic characteristics of traffic data and the obstruction of gradient information backflow that may occur during RNN network training, we propose HS_GRU, which combines the GRU with the STL algorithm, applied after a Box-Cox transformation, and a highway network to predict network traffic.

    The overall flow of the network traffic prediction algorithm proposed in this paper, called HS_GRU, is as follows:

    Step 1: Process the original data set. The data is sorted according to the format to form an available data set.

    Step 2: Apply the Box-Cox transformation to the data to prepare for the next decomposition.

    Step 3: Use the STL decomposition algorithm to decompose the data set and extract its periodic components.

    Step 4: Save the periodic components. Collate and convert the remainder component into a sequence that can be used as supervised learning input.

    Step 5: Input the sequence collated in Step 4 into the highway GRU neural network model. Output the predicted values of the remainder component.

    Step 6: Add the predicted values of the remainder component to the periodic components saved in Step 4 to produce the final network traffic prediction value.

    In the above steps, Steps 1–3 are the data preprocessing part, which mainly uses the Box-Cox transformation and the STL decomposition algorithm to extract the periodic part of the network traffic sequence. Steps 4–6 are the network traffic prediction part, in which the highway GRU neural network is used to predict the remainder sequence and the Adam gradient descent algorithm is used to train the parameters. After the predicted values are output, the final predicted network traffic value is obtained.

    The flow of the algorithm is shown in Algorithm 1 and Fig.2.

    Algorithm 1: HS_GRU Algorithm
    1: Initialization: S_Vi, R_V, Ŷ_V
    2: Y_V ← Box-Cox(Y_V)
    3: t_i ← CalculatePeriod(Y_V)
    4: for i = 1:N do
    5:   while robustness ≥ robustness_weight do
    6:     S_Vi, R_V ← SeasonalDecompose(Y_V, t_i)
    7: end for
    8: for i = 1:N do
    9:   Ŝ_Vi ← HighwayGRU(S_Vi)
    10: end for
    11: for i = 1:N do
    12:   Ŷ_V ← R_V + Ŝ_Vi
    13: end for
    14: return Ŷ_V

    3.1.1 The Improvement with STL Algorithm

    In network traffic prediction, it is necessary to grasp the data characteristics of network traffic to improve the prediction accuracy. Since long-term network traffic has strong periodic characteristics, this paper decomposes the traffic and extracts its periodic components.

    Considering the good performance of ARIMA (Autoregressive Integrated Moving Average) in the prediction of periodic data, this paper adopts and improves the STL decomposition algorithm, applying the Box-Cox transformation first to capture more periodic features. The specific procedure is as follows: firstly, the network traffic data is decomposed into periodic components and a remainder component by the STL decomposition algorithm. Subsequently, the periodic components are retained and the remainder component is input into the HS_GRU neural network model, which outputs the predicted value. Finally, the model output of the previous step is added to the retained periodic components to obtain the final network traffic prediction value.

    Figure 2: The overview of HS_GRU

    Before STL decomposition, we apply the Box-Cox transformation to the data. The Box-Cox transformation is a popular transformation for stabilizing the variance of a time series.

    It is defined as:

    $$y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0, \\ \ln y, & \lambda = 0. \end{cases}$$

    Depending on the parameter $\lambda$, the transformation is essentially the identity, the logarithm, or a transformation somewhere in between. In this work, we assume that the transformed dependent variable is normally distributed and derive a likelihood function of $\lambda$; the value of $\lambda$ that maximizes this likelihood function is the value used here.
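    As a concrete illustration of this step, the following sketch estimates $\lambda$ by maximum likelihood with SciPy; the variable names and the synthetic series are illustrative only and are not taken from the paper's experiments.

```python
# Minimal sketch of the Box-Cox step: SciPy's boxcox chooses lambda by
# maximizing the log-likelihood, matching the selection rule described above.
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(0)
traffic = np.abs(rng.normal(5e6, 1e6, size=1000)) + 1.0   # Box-Cox requires strictly positive data

transformed, lam = stats.boxcox(traffic)                   # lambda from maximum likelihood
print(f"estimated lambda = {lam:.3f}")

restored = inv_boxcox(transformed, lam)                    # inverse transform for later predictions
```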

    Considering that long-span network traffic data has multiple periodicities, while the traditional STL (Seasonal Trend decomposition using Loess) algorithm can only extract a single period when extracting seasonal components, this paper improves the additive model of STL and decomposes the time series into a trend component, multiple seasonal components and a remainder component. The specific expression is as follows:

    $$Y_v = T_v + \sum_{i=1}^{n} S_{v,i} + R_v, \quad v = 1, 2, \ldots, N$$

    where $T_v$ represents the trend component, $S_{v,i}$ represents the $i$-th periodic component, $R_v$ represents the remainder component, and $v = 1, 2, \ldots, N$ indexes the time steps.
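    For readers who want to reproduce this decomposition step, the sketch below uses statsmodels' STL on a synthetic series with a single dominant period; the paper's multi-period extension is not reproduced here, and the series, period and variable names are assumptions for illustration.

```python
# Sketch of the decomposition step: extract the periodic component with STL,
# keep it aside, and feed the rest (trend + residual) to the prediction model.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

rng = np.random.default_rng(1)
t = np.arange(24 * 60)                                       # hypothetical hourly samples over 60 days
series = pd.Series(10 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size))

result = STL(series, period=24, robust=True).fit()           # robust Loess decomposition
seasonal = result.seasonal                                    # periodic component, saved for Step 6
remainder = result.trend + result.resid                       # sequence passed to the highway GRU

# After the remainder is predicted, the saved periodic component is added back:
# y_hat = seasonal + predicted_remainder
```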

    3.1.2 The HS_GRU Neural Network

    The Gate Recurrent Unit (GRU) neural network [31] is an improved model of the long short-term memory (LSTM) neural network. The LSTM neural network is a variant of the recurrent neural network that overcomes the long-term dependence problem of recurrent neural networks well and is widely used to predict time series data. However, the LSTM neural network still has some problems: its internal structure is complex, which can lead to long training and prediction times for time series data. The GRU neural network is an improvement of the LSTM neural network that addresses this problem.

    In order to further improve the neural network's ability to predict traffic, this article combines the GRU with a highway network. A highway network is an approach to optimizing networks that helps solve the problem of obstructed gradient information backflow during network training. Highway networks use learned gating mechanisms to regulate information flow, which allows neural networks to have paths along which information can flow across different layers.

    We input the periodic components decomposed by the STL decomposition algorithm into the neural network for prediction. The calculation performed by a neuron of the network can be expressed as follows:

    Here, $r_t$ denotes the reset gate, $z_t$ the update gate, and $W_z$ the weight of the update gate; $h_{t-1}$ is the output of the previous time step and $x_t$ the input of the current time step; $\tilde{h}_t$ is the candidate output of the original GRU unit and $h'_t$ the value transferred between the original GRU and the highway network; $m_t$ is the highway network gate parameter; $h_t$ is the output of the current unit; $\sigma$ is the activation function (the Sigmoid function is commonly used), and $\tanh$ is the hyperbolic tangent function.
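    To make the structure concrete, here is a minimal Keras sketch of a GRU layer wrapped with a highway-style gate. The exact way the gate $m_t$ mixes the GRU output with the layer input is an assumption on our part (one plausible reading of the description above), not the authors' released implementation.

```python
import tensorflow as tf

class HighwayGRU(tf.keras.layers.Layer):
    """Sketch of a highway-gated GRU block: y = m * GRU(x) + (1 - m) * proj(x)."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.gru = tf.keras.layers.GRU(units, return_sequences=True)
        self.proj = tf.keras.layers.Dense(units)                         # match the carry path to the GRU width
        self.gate = tf.keras.layers.Dense(units, activation="sigmoid")   # highway gate m_t

    def call(self, x):
        h = self.gru(x)            # transform path (GRU output)
        carry = self.proj(x)       # carry path (projected input)
        m = self.gate(x)           # gate values in (0, 1)
        return m * h + (1.0 - m) * carry

# Example usage on sequences of shape (batch, time, features):
model = tf.keras.Sequential([HighwayGRU(32), tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")   # Adam gradient descent, as used for training in the paper
```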

    3.2 Anomaly Detection Methods Based on Improved K-means Clustering EFMS-Kmeans

    When the cluster-based anomaly detection method is applied to flow time series data, if only the flow value, a single data feature, is used for clustering detection, the detection effect will be poor because the flow timing information is ignored. Therefore, a cascade detection method based on flow prediction and density clustering is proposed in this paper. Building on the flow prediction in the previous section, the prediction result, which implies the flow timing relationship, is added to the clustering input sample as another one-dimensional feature to improve the accuracy of anomaly detection.

    By calculating the density, the density-based clustering method can overcome the shortcoming that distance-based algorithms can only find spherical clusters, and has a better effect for anomaly detection. However, the anomaly detection method based on density clustering has a high time complexity, usually adopts global parameters and ignores the local characteristics of the sample distribution, resulting in poor accuracy of anomaly detection. Therefore, density-based and distance-based clustering algorithms are combined in this paper.

    The Mean Shift clustering algorithm can obtain non-spherical clustering results and can cluster different types of data distributions well. An outlier detection method based on Mean Shift clustering can reduce the influence of the clustering results on the anomaly detection results. However, the Mean Shift clustering algorithm still has some shortcomings: its bandwidth parameter needs to be chosen manually, which introduces a strong subjective influence on the clustering results. The bandwidth has a great influence on the traditional Mean Shift algorithm: if the bandwidth is chosen too small, convergence is slow, while a large bandwidth accelerates convergence but degrades the clustering effect. Therefore, an improved Mean Shift algorithm based on electrostatic force is proposed in this paper. By calculating the electrostatic force between any two points, an appropriate bandwidth is selected and the Mean Shift algorithm is optimized. The improved Mean Shift algorithm is used to automatically determine the number and centers of the clusters, and then the centroids obtained after Mean Shift clustering are used as the initial centers of the K-means clustering algorithm, which makes up for the shortcoming of the K-means algorithm that the number of clusters must be specified in advance.

    Therefore, this article puts forward an improved Mean Shift clustering method based on electrostatic force (EFMS). The concept of electrostatic force is introduced based on Coulomb’s law, and the K-nearest neighbor idea is used to reduce the computational complexity; meanwhile, the over-dependence of the Mean Shift clustering algorithm on the bandwidth is resolved. Finally, EFMS is combined with the classical K-means algorithm to obtain the improved K-means anomaly detection algorithm (EFMS-Kmeans). Based on the number of clusters and the cluster centers obtained by the EFMS algorithm, the K-means method is used for anomaly detection. A threshold method directly based on the potential energy gradient is adopted, which not only considers the local characteristics of the data but also ensures the accuracy of anomaly detection.

    The main flow of this method is as follows.

    3.2.1 EFMS Algorithm

    The traditional Mean Shift algorithm performs clustering by calculating the density and the mean shift vector of the sample points. Its core is the mean shift vector, which continuously moves toward the region of higher density. In order to calculate the shift vector more accurately, the points in the neighborhood of the centroid need to be given different weights, that is, a kernel function is added. With the improved kernel function, the mean shift vector is:

    where $S_h$ refers to a spherical region with radius $h$, defined as $S_h(x) = \{\, y : \| y - x \| \le h \,\}$, and the Gaussian kernel function is used as the kernel. With the kernel function, the mean shift moves towards the region where the probability density increases.

    where $h$ is the bandwidth; different bandwidths lead to different kernel function shapes. Aiming at this problem, this paper presents an improved Mean Shift clustering algorithm based on electrostatic force (EFMS), which calculates the electrostatic force between any two points and determines the bandwidth from the average magnitude of these forces.
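    For reference, the kernel-weighted update that both the traditional and the improved algorithm perform can be sketched as follows; the Gaussian weighting and the variable names are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def mean_shift_step(center, points, h):
    """One Gaussian-kernel mean-shift update: move the candidate center toward the
    kernel-weighted mean of the points that fall inside the bandwidth-h window."""
    dists = np.linalg.norm(points - center, axis=1)
    window = points[dists <= h]                       # points inside the spherical region S_h
    if window.shape[0] == 0:
        return center                                 # nothing nearby: keep the center unchanged
    w = np.exp(-0.5 * (np.linalg.norm(window - center, axis=1) / h) ** 2)
    return (w[:, None] * window).sum(axis=0) / w.sum()
```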

    According to Coulomb’s law, there is an electrostatic force between any two stationary charged objects:

    $$F = k\,\frac{q_1 q_2}{r^2}$$

    where $k$ is the electrostatic force constant, $q_1$ and $q_2$ are the charges of the two points, and $r$ is the distance between the two points.

    In physics, the energy of an object is related to force, and the electrostatic force between any two charges has a potential energy, which represents the potential of the position of the charge. To simplify the calculation, we abstract the electric field into an ideal electric field with the same electric potential everywhere. Therefore, the simplified electric potential energy is defined as follows:

    $$E = k\,\frac{q_1 q_2}{r}$$

    where the parameter definitions are consistent with those of the electrostatic force. For each sample point, we assume that its charge is equal, so $q_1$ and $q_2$ are simplified to 1, and $r$ represents the distance between two points. For simplicity, the electrostatic constant $k$ is also taken as 1. Then the electric potential energy between each sample point $i$ and sample point $j$ can be expressed as:

    $$E_{ij} = \frac{1}{r_{ij}}$$

    It can be seen that the electric potential energy gradually tends to 0 as the distance increases, which conforms to the physical convention that the electric potential energy is 0 at infinite distance. For the sample points, we put forward a new concept of average potential energy to help choose the bandwidth: we calculate the potential energy between each sample point and the current candidate centroid, average these values, and use the resulting average potential energy as the basis for the bandwidth.

    Definition: For a sample point $i$, the average potential energy is the sum of the potential energies between that point and all the other points, averaged over those points. Its calculation formula is as follows:

    $$E_i = \frac{1}{n-1}\sum_{j=1,\, j \neq i}^{n} \frac{1}{r_{ij}}$$

    where $n$ is the number of sample points.

    The bandwidth $h$ is obtained according to the average potential energy and is defined as follows:

    Thus, the improved mean shift vector is calculated as follows:
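    As a rough illustration of the bandwidth selection just described, the sketch below averages the simplified pairwise potential energies $E_{ij} = 1/r_{ij}$ and takes the reciprocal of that average as $h$ (a harmonic-mean distance scale). This particular mapping from average potential energy to bandwidth is our assumption; the paper's exact bandwidth formula is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import pdist

def efms_bandwidth(points, eps=1e-12):
    """Assumed EFMS-style bandwidth: reciprocal of the mean pairwise potential energy."""
    r = pdist(points)                      # all pairwise distances r_ij
    energy = 1.0 / np.maximum(r, eps)      # simplified potential energy with k = q1 = q2 = 1
    return 1.0 / energy.mean()
```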

    (1) EFMS algorithm steps

    Step 1: Randomly select one of the unmarked data points as the starting center point center;

    Step 2: Find all data points within the region centered at center with the bandwidth as the radius, record them as the set M, and consider these points to belong to the same cluster C. At the same time, the occurrence count of each of these data points in cluster C is increased by 1;

    Step 3: Taking center as the center point, calculate the vector from center to each element of the set M, and add these vectors to obtain the vector shift;

    Step 4: Set center = center + shift, so that the center moves in the direction of shift by a distance of ‖shift‖;

    Step 5: Repeat Steps 2–4 until shift is sufficiently small (i.e., the iteration converges), and record the final center. Note that the points encountered during this iteration should be categorized as cluster C;

    Step 6: If, at convergence, the distance between the center of the current cluster C and the center of an existing cluster C2 is less than the threshold, then C2 and C are merged and the occurrence counts of their data points are merged accordingly; otherwise, C is taken as a new cluster;

    Step 7: Repeat Steps 1–6 until all points are marked as visited;

    Step 8: Classification: for each point, according to its occurrence count in each cluster, assign the point to the cluster in which it occurred most frequently.
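    Apart from the bandwidth choice, these steps follow the standard Mean Shift procedure, so a quick way to prototype them is to plug the EFMS-style bandwidth into scikit-learn's MeanShift, as sketched below; the synthetic data and the bandwidth rule are assumptions for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.cluster import MeanShift

rng = np.random.default_rng(2)
points = np.vstack([rng.normal(0, 0.3, (100, 2)),   # two synthetic blobs standing in for traffic samples
                    rng.normal(3, 0.3, (100, 2))])

h = 1.0 / (1.0 / pdist(points)).mean()              # EFMS-style bandwidth (see the earlier sketch)
ms = MeanShift(bandwidth=h).fit(points)

centers = ms.cluster_centers_                        # initial centers handed to K-means later
k = centers.shape[0]                                 # cluster count used as the K value
print(k, centers)
```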

    3.2.2 EFMS-Kmeans Algorithm

    The K-means algorithm is a widely used clustering algorithm. It is an unsupervised algorithm focusing on similarity, taking distance as the criterion for measuring the similarity between data objects; that is, the smaller the distance between two data objects, the higher their similarity and the more likely they are to belong to the same cluster.

    In the traditional K-means algorithm, since the initial “cluster center” points are selected randomly, the final cluster division depends on the randomly selected centers; that is to say, different runs may produce different divisions into K clusters. This is because the K-means algorithm converges to a local minimum rather than the global minimum. To overcome this shortcoming, we use the cluster centroids obtained by the EFMS algorithm as the initial cluster centers of K-means, and the value of K is also obtained by the EFMS algorithm. In this way, the improved anomaly detection algorithm EFMS-Kmeans is obtained. The specific process of the algorithm is shown in Fig. 3.

    Figure 3: The flowchart of the EFMS-Kmeans algorithm

    After the EFMS-Kmeans algorithm produces the clustering result, in order to verify its rationality, this paper uses the elbow method (SSE, the sum of squared errors) and the silhouette coefficient to test the K value.

    (1) The elbow method (sum of squared errors, SSE):

    SSE uses the sum of squared errors to judge and select an appropriate $k$ value. In theory, the smaller the SSE value, the better the clustering effect. Through data testing, the SSE value gradually approaches a minimum value as $k$ increases. Its calculation formula is as follows:

    $$\mathrm{SSE} = \sum_{i=1}^{k}\sum_{p \in C_i}\left\| p - m_i \right\|^2$$

    where $C_i$ is the $i$-th cluster, $p$ is a sample point in $C_i$, $m_i$ is the centroid of $C_i$ (the mean of all samples in $C_i$), and SSE is the clustering error of all samples, representing the quality of the clustering.

    (2) Silhouette coefficient:

    EFMS-Kmeans is used to divide the data to be classified into $k$ clusters. For every vector in a cluster, its silhouette coefficient is calculated separately. The specific definition is as follows:

    Calculate the average distance $a_i$ between sample $i$ and the other sample points in the same cluster. The smaller $a_i$ is, the more sample $i$ should be assigned to that cluster. $a_i$ is called the intra-cluster dissimilarity of sample $i$. The mean of $a_i$ over all samples in cluster C is called the cluster dissimilarity of cluster C.

    The average distance $b_{ij}$ from sample $i$ to all samples of another cluster $C_j$ is calculated and is called the dissimilarity between sample $i$ and cluster $C_j$. The inter-cluster dissimilarity of sample $i$ is defined as $b_i = \min\{b_{i1}, b_{i2}, \ldots, b_{ik}\}$; the larger $b_i$ is, the less sample $i$ belongs to any other cluster.

    The silhouette coefficient of sample $i$ is defined according to the intra-cluster dissimilarity $a_i$ and the inter-cluster dissimilarity $b_i$, and its calculation formula is as follows:

    $$s_i = \frac{b_i - a_i}{\max\{a_i, b_i\}}$$

    The silhouette coefficient ranges over $[-1, 1]$; the larger the value, the more reasonable the clustering result. If $s_i$ is close to 1, the clustering of sample $i$ is reasonable; if $s_i$ is close to $-1$, sample $i$ should be classified into another cluster; if $s_i$ is approximately 0, sample $i$ lies on the boundary between two clusters.
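    The double check described above can be prototyped directly with scikit-learn, using the K-means inertia as the SSE and the built-in silhouette score; the range of candidate $k$ values below is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def evaluate_k(points, k_values=range(2, 9)):
    """Report SSE (elbow method) and the mean silhouette coefficient for each candidate k."""
    for k in k_values:
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(points)
        sse = km.inertia_                                    # sum of squared distances to the centroids
        sil = silhouette_score(points, km.labels_)           # mean silhouette coefficient in [-1, 1]
        print(f"k={k}: SSE={sse:.2f}, silhouette={sil:.3f}")
```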

    After the $k$ value has been confirmed by the elbow method (SSE) and the silhouette coefficient, K-means is used for anomaly detection. The steps are as follows:

    (3) Improved EFMS-Kmeans algorithm steps:

    Step 1: Pass in the number of clusters K obtained by the EFMS algorithm, and set the K cluster centers of the K-means algorithm according to the EFMS cluster centers;

    Step 2: Repeat the following process until the cluster centers no longer change:

    1) calculate the distance between each sample and each cluster center, and assign the sample to the nearest center;

    2) calculate the mean of all samples assigned to each category, and take this mean as the new cluster center of that class;

    Step 3: Based on the clustering result, calculate the distance between each point in a cluster and the center of that cluster;

    Step 4: Compare the distance with the threshold. If it is greater than the threshold, the point is considered abnormal; otherwise, it is normal.
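    A compact sketch of these four steps is given below, seeding scikit-learn's K-means with the EFMS centers and flagging points whose distance to their assigned centroid exceeds a threshold; the threshold value and variable names are placeholders, not values from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def efms_kmeans_detect(points, efms_centers, threshold):
    """Steps 1-4: K-means initialized with the EFMS centers, then a distance threshold."""
    km = KMeans(n_clusters=len(efms_centers), init=np.asarray(efms_centers), n_init=1).fit(points)
    # distance of every sample to the center of the cluster it was assigned to
    dists = np.linalg.norm(points - km.cluster_centers_[km.labels_], axis=1)
    return dists > threshold                     # True marks a point considered abnormal
```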

    The flow of the algorithm is shown in Algorithm 2 and Fig. 3.

    Algorithm 2: EFMS-Kmeans Algorithm
    1: Initialization: center, bandwidth, shift
    2: do while
    3:   M ← Calculate(point, center) ≤ bandwidth
    4:   for i = 1:M do
    5:     v_i = Calculate(center, M_i)
    6:     shift += v_i
    7:   while shift = 0
    8: end for
    9: get the centers of the clusters and the number of clusters C
    10: for i = 1:C do
    11:   Ĉ ← K-Means(C)
    12: end for
    13: for i = 1:Ĉ do
    14:   Outlier ← if CalculateDistance(point, Ĉ_j) > α
    15: end for
    16: return Outlier

    3.3 The Traffic Prediction-Clustering Cascade Anomaly Detection Method

    This paper proposes an anomaly detection method based on the HS_GRU algorithm and the EFMS-Kmeans algorithm (which automatically determines the cluster centers). First, to address the problems of the traditional model, the paper proposes the improved GRU model HS_GRU. We combine the STL decomposition algorithm, applied after the Box-Cox transformation, with the highway GRU neural network to predict the network traffic data: the STL decomposition algorithm extracts the periodic component of the network traffic data, and the remaining component is then input into the highway GRU neural network model for prediction. Then, aiming at the problem that traditional anomaly detection methods perform poorly when clustering a single traffic value in network time series data, this paper uses a traffic prediction-clustering cascade anomaly detection method, combining the result of traffic prediction with the original traffic value as the input sample. Given the shortcomings of traditional clustering algorithms, an improved EFMS-Kmeans algorithm is proposed, whose main idea is to combine density-based and distance-based clustering algorithms. The anomaly detection method proposed in this paper not only considers the local characteristics of the data but also ensures the accuracy of anomaly detection.

    The brief flow of the algorithm is shown in Algorithm 3.

    Algorithm 3: The Overall Algorithm
    1: Initialization
    2: Dataset ← Box-Cox(Dataset)
    3: Prediction ← HS_GRU(Dataset)
    4: Outlier ← EFMS-Kmeans(Dataset, Prediction)
    5: return Outlier
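    The glue between the two stages can be expressed in a few lines: the HS_GRU prediction is paired with the observed traffic so that the clustering stage sees a two-dimensional sample carrying timing information. The helper names below are placeholders standing in for the HS_GRU model and the EFMS-Kmeans detector, not functions defined in the paper.

```python
import numpy as np

def cascade_detect(traffic, predict_fn, detect_fn):
    """High-level sketch of Algorithm 3: predict, pair observation with prediction, then cluster."""
    prediction = predict_fn(traffic)                      # HS_GRU output, aligned with the input series
    samples = np.column_stack([traffic, prediction])      # (observed, predicted) pairs as cluster input
    return detect_fn(samples)                             # boolean anomaly mask from EFMS-Kmeans
```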

    4 Evaluation

    This section demonstrates the effectiveness of the improved traffic anomaly detection algorithm based on HS_GRU and EFMS-Kmeans through experiments. The Python language is used to carry out the simulation experiments on the network traffic prediction algorithm, and the running environment is Python 3.6 with TensorFlow. The results are described in detail below.

    4.1 Improved GRU Algorithm HS_GRU

    4.1.1 Data Set

    We use two real traffic datasets to compare the predictive performance of the HS_GRU algorithm. One is the aggregated traffic (in bits) on the United Kingdom academic network backbone from an ISP, shown in Fig. 4. The other is Internet traffic data (in bits) from a private ISP with centers in 11 European cities, shown in Fig. 5. Both datasets are available for download from the benchmark website.

    Figure 4: UK backbone network traffic prediction results

    Figure 5: Network traffic prediction results for 11 European cities

    In order to make the subsequent anomaly detection results more obvious and accurate, this paper centralizes the original data sets to form the data sets Traffic_UK_pre and Traffic_EU_pre. For network traffic prediction, the StatsModels tool is first used to extract the periodic component of the network traffic data; the remaining component is then processed into a supervised learning sequence and input to the improved neural network model; finally, the predicted values output by the model are added to the periodic component to restore the network traffic prediction.

    4.1.2 Predictive Indicators

    The root-mean-square error (RMSE) and the mean absolute error (MAE) are commonly used to measure the deviation between predicted values and true values, and they reflect the prediction accuracy of the model well. The RMSE is defined as:

    $$\mathrm{RMSE} = \sqrt{\frac{1}{m}\sum_{i=1}^{m}\left(y_i - \hat{y}_i\right)^2}$$

    where $m$ is the number of predicted values, $y_i$ is the actual value, and $\hat{y}_i$ is the predicted value given by the model.

    The mean absolute error is also often referred to as the mean absolute deviation; it is the average of the absolute deviations between the predicted values and the true values:

    $$\mathrm{MAE} = \frac{1}{m}\sum_{i=1}^{m}\left|y_i - \hat{y}_i\right|$$

    The smaller the root-mean-square error and the mean absolute error, the more accurate the predicted values are, and the better the trained neural network fits the actual model.
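    Both metrics are one-liners in the paper's Python environment; the sketch below is included for completeness.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between the actual and predicted traffic values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error (mean absolute deviation)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(np.abs(y_true - y_pred))
```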

    4.1.3 Comparison of Experimental Results and Performance

    In order to compare and evaluate the effect and performance of the HS_GRU model designed in this paper, we selected the clockwork RNN [21], the basic GRU, the basic LSTM and the ARIMA algorithm for analysis. The Adam gradient descent algorithm was used to update the parameters. The following figures show the prediction results of HS_GRU on the two data sets (Traffic_UK_pre and Traffic_EU_pre), together with a detailed view of part of the prediction results.

    In order to better demonstrate the prediction effect of each model, we calculated the evaluation indexes RMSE and MAE of each model on the two data sets, and show them in Tab. 1 and Figs. 6 and 7.

    Table 1: Comparison of network traffic prediction model performance

    From Tab. 1 above, we can see that the HS_GRU network traffic prediction model proposed in this paper performs better than the other four comparison models in both RMSE and MAE.

    Specifically, from the perspective of the RMSE evaluation index: on dataset Traffic_UK_pre, the RMSE of the HS_GRU based traffic prediction model proposed in this paper is 11.8% better than the CW-RNN model, 32.91% better than the GRU model, and 30.41% better than the LSTM model. On dataset Traffic_EU_pre, the RMSE is 13.88% better than the CW-RNN model, 29.75% better than the GRU model, 28.51% better than the LSTM model and 62.51% better than the ARIMA model.

    From the perspective of the MAE evaluation index, on dataset Traffic_UK_pre, the MAE of the HS_GRU based traffic prediction model is 17.84% better than the CW-RNN model, 39.78% better than the GRU model, 36.57% better than the LSTM model and 98.9% better than the ARIMA model. On dataset Traffic_EU_pre, the MAE is 15.78% better than the CW-RNN model, 33.21% better than the GRU model, 31.19% better than the LSTM model and 46.82% better than the ARIMA model. The experiments show that the HS_GRU based traffic prediction model proposed in this paper predicts network traffic data well.

    Figure 6: Comparison of the RMSE of each flow prediction model

    Figure 7: Comparison of the MAE of each flow prediction model

    4.2 EFMS-Kmeans Algorithm Comparison with Other Clustering Methods

    4.2.1 EFMS-Kmeans Algorithm Accuracy Analysis

    In order to verify the effectiveness of the EFMS-Kmeans method, we used it to perform clustering and anomaly detection on the Traffic_UK and Traffic_EU data sets. The experimental results are shown in the following figures.

    Fig. 8 shows the number of clusters in Traffic_UK’s traffic data set and the locations of the cluster centers, which is where the value of the EFMS method lies. The method automatically determines the number of clusters and the cluster centers of any data set, which effectively avoids the error caused by manually specifying the number of clusters.

    When the EFMS algorithm is used to confirm the number of clusters, the SSE and silhouette coefficient methods built into the approach evaluate the clustering effect. According to the evaluation results, the optimal number of clusters is determined. Below, we take the SSE evaluation as an example.

    Obviously, when the $k$ value is 5, the curvature of the SSE curve is the highest, so the optimal number of clusters for this data set should be 5. This is consistent with the number of clusters found by the EFMS algorithm, which shows its effectiveness.

    In Fig. 8, the Traffic_UK data set is automatically divided into 5 clusters. Fig. 9 confirms the correctness of the number of clusters via the SSE method. Anomaly detection is then carried out according to the clusters and their centers. In Fig. 10, the black points are the abnormal points.

    Figure 8: EFMS intelligently calculates the number of clusters of Traffic_UK

    Figure 9: SSE of Traffic_UK

    Figure 10: EFMS-Kmeans identifies Traffic_UK’s abnormal points (the black dots are the outliers)

    Fig. 11 shows the number of clusters in Traffic_EU’s traffic data set and the locations of the cluster centers, which again demonstrates the value of the EFMS method. The method automatically determines the number of clusters and the cluster centers of any data set, which effectively avoids the error caused by manually specifying the number of clusters.

    For the clustering results of this data set, the evaluation effect of SSE is as follows:

    Obviously, when the $k$ value is 4, the curvature of the SSE curve is the highest, so the optimal number of clusters for this data set should be 4. This is consistent with the number of clusters found by the EFMS algorithm, which shows its effectiveness.

    In Fig. 11, the Traffic_EU data set is automatically divided into 4 clusters. Fig. 12 confirms the correctness of the number of clusters via the SSE method. Anomaly detection is then carried out according to the clusters and their centers. In Fig. 13, the black points are the abnormal points.

    As can be seen from the figures above, EFMS-Kmeans has a good clustering and anomaly detection effect. It avoids the drawbacks of manually specifying the number of classes and the class centers, and it can effectively and accurately identify the number of clusters and the cluster centers, thus greatly improving the accuracy of anomaly detection.

    4.2.2 Data Set

    Firstly, we compare the EFMS-Kmeans algorithm with other clustering algorithms to prove the feasibility and superiority of this method. The specific data sets used are as follows:

    The CIC dataset is from the Canadian Institute for Cybersecurity; this paper uses the 2017 dataset provided by the institute.

    Figure 11: EFMS intelligently calculates the number of clusters of Traffic_EU

    Figure 12: SSE of Traffic_EU

    The KDD99 dataset was collected from a simulated USAF LAN environment; it contains network connection and system audit data gathered over 9 weeks, simulating various user types, kinds of network traffic and attack methods.

    Figure 13: EFMS-Kmeans identifies Traffic_EU’s abnormal points (the black dots are the outliers)

    The Honeypot dataset is a hacker attack dataset collected by HoneyNet, which reflects hacker attack patterns well. The dataset includes 11 months of Snort alarm data from April 2000 to February 2001, with about 60–3000 Snort alarm records per month.

    The Sigfox IoT dataset is a sample dataset containing the communication activity recorded from a real Internet-of-Things (IoT) network deployed by Sigfox. It can be used for anomaly detection in communication networks and other related tasks.

    The WIDE data set is derived from the traffic data repository project maintained by the MAWI Working Group. In this experiment, we used the traffic data from 2018 to 2019.

    These data sets are publicly available online. They all come from real networks and were collected over different periods, covering almost all kinds of anomaly types found in networks, which provides good data support for testing our algorithm.

    4.2.3 Evaluation Metrics

    Three popular metrics are used to evaluate the performance of the anomaly detection method: the recall (Re), the precision (Pr), and the F1 value. These indicators range from 0 to 1, and the higher the score, the more accurate the detection results.

    4.2.4 Compared Methods and Results

    Several existing clustering methods for anomaly detection are compared with our method.Specifically,traditional methods,including K-means,K-MEDOIDS,CLARANS,OPTICS,and DBSCAN are used for comparison.

    In the comparison experiment, we applied the above five methods and the algorithm proposed in this paper to the same data on five different data sets. The procedure is as follows: first, each data set is clustered with the original method; then, based on the clustering results, anomaly detection is performed with a distance threshold, i.e., the distance from each point to its cluster center is calculated and points beyond the threshold are flagged as anomalies. The purpose of comparing against the original K-means algorithm is to test the clustering effect of manually specifying the number of clusters and cluster centers, which highlights the benefit of our method. The other four clustering methods cover improved K-means variants, density-based clustering and hierarchical clustering, and comparing against these different families demonstrates the superiority of the method in this paper. To exclude chance effects, we ran each method 3 times on the same data. The specific results are as follows:
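
    The distance-threshold step used with the baseline K-means can be sketched as follows; since the paper only states that a threshold on the distance to the cluster center is used, the concrete rule here (mean plus three standard deviations of the within-cluster distances) and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def distance_threshold_outliers(X, n_clusters, n_sigma=3.0):
    """Flag points whose distance to their own cluster center exceeds a threshold."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    centers = km.cluster_centers_[km.labels_]        # center assigned to each point
    dist = np.linalg.norm(X - centers, axis=1)       # distance to own cluster center
    threshold = dist.mean() + n_sigma * dist.std()   # assumed thresholding rule
    return dist > threshold                          # boolean anomaly mask

# Usage on illustrative data: four dense clusters plus a few scattered outliers.
rng = np.random.default_rng(1)
clusters = [rng.normal(loc=c, scale=0.3, size=(200, 2))
            for c in ([0, 0], [3, 0], [0, 3], [3, 3])]
X = np.vstack(clusters + [rng.uniform(-5, 8, size=(5, 2))])
mask = distance_threshold_outliers(X, n_clusters=4)
print("flagged anomalies:", int(mask.sum()))
```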

    As can be seen from Tab. 2 below, among the compared clustering methods, EFMS-Kmeans shows clear advantages in recall rate, precision and F1. To compare the methods more intuitively, we plotted the Re, Pr and F1 values of the different methods on each data set.

    Table 2:The Anomaly detection results of the compared methods on the experimental datasets

    Fig.14a is a comparison of Re values of different algorithms on the CIC data set.After data analysis,Re value of EFMS-Kmeans algorithm is higher than that of any other method.Compared with K-means algorithm,Re value is increased by 26%.Compared with K-MEDOIDS algorithm,Re value is increased by 18%.Compared with CLARANS algorithm,Re value is increased by 15%.Compared with OPTICS algorithm,Re value is increased by 15%.Compared with DBSCAN algorithm,Re value is increased by 13%.

    Fig.14b is a comparison of Pr values of different algorithms on the CIC data set.Compared with K-means algorithm,Pr value is increased by 33%.Compared with K-MEDOIDS algorithm,Pr value is increased by 35%.Compared with CLARANS algorithm,Pr value is increased by 3%.Compared with OPTICS algorithm,Pr value is increased by 26%.Compared with DBSCAN algorithm,Pr value is increased by 14%.After data analysis,Pr value of EFMS-Kmeans algorithm is higher than that of any other method.

    Figure 14:Re(a),Pr(b),F1(c)of different methods on CIC dataset

    Fig.14c is a comparison of F1 values of different algorithms on the CIC data set.Compared with K-means algorithm,F1 value is increased by 29%.Compared with K-MEDOIDS algorithm,F1 value is increased by 26%.Compared with CLARANS algorithm,F1 value is increased by 10%.Compared with OPTICS algorithm,F1 value is increased by 20%.Compared with DBSCAN algorithm,F1 value is increased by 13%.After data analysis,F1 value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.15a is a comparison of Re values of different algorithms on the KDD99 data set.Compared with K-means algorithm,Re value is increased by 20%.Compared with K-MEDOIDS algorithm,Re value is increased by 20%.Compared with CLARANS algorithm,Re value is increased by 17%.Compared with OPTICS algorithm,Re value is increased by 19%.Compared with DBSCAN algorithm,Re value is increased by 12%.After data analysis,Re value of EFMS-Kmeans algorithm is higher than that of any other method.

    Figure 15:Re(a),Pr(b),F1(c)of different methods on KDD99 dataset

    Fig.15b is a comparison of Pr values of different algorithms on the KDD99 data set.Compared with K-means algorithm,Pr value is increased by 28%.Compared with K-MEDOIDS algorithm,Pr value is increased by 22%.Compared with CLARANS algorithm,Pr value is increased by 5%.Compared with OPTICS algorithm,Pr value is increased by 29%.Compared with DBSCAN algorithm,Pr value is increased by 12%.After data analysis,Pr value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.15c is a comparison of F1 values of different algorithms on the KDD99 data set.Compared with K-means algorithm,F1 value is increased by 40%.Compared with K-MEDOIDS algorithm,F1 value is increased by 21%.Compared with CLARANS algorithm,F1 value is increased by 12%.Compared with OPTICS algorithm,F1 value is increased by 24%.Compared with DBSCAN algorithm,F1 value is increased by 12%.After data analysis,F1 value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.16a is a comparison of Re values of different algorithms on the Honeypot data set.Compared with K-means algorithm,Re value is increased by 29%.Compared with K-MEDOIDS algorithm,Re value is increased by 20%.Compared with CLARANS algorithm,Re value is increased by 17%.Compared with OPTICS algorithm,Re value is increased by 19%.Compared with DBSCAN algorithm,Re value is increased by 15%.After data analysis,Re value of EFMS-Kmeans algorithm is higher than that of any other method.

    Figure 16:Re(a),Pr(b),F1(c)of different methods on Honeypot dataset

    Fig.16b is a comparison of Pr values of different algorithms on the Honeypot data set.Compared with K-means algorithm,Pr value is increased by 24%.Compared with K-MEDOIDS algorithm,Pr value is increased by 25%.Compared with CLARANS algorithm,Pr value is increased by 10%.Compared with OPTICS algorithm,Pr value is increased by 31%.Compared with DBSCAN algorithm,Pr value is increased by 15%.After data analysis,Pr value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.16c is a comparison of F1 values of different algorithms on the Honeypot data set.Compared with K-means algorithm,F1 value is increased by 27%.Compared with K-MEDOIDS algorithm,F1 value is increased by 23%.Compared with CLARANS algorithm,F1 value is increased by 14%.Compared with OPTICS algorithm,F1 value is increased by 25%.Compared with DBSCAN algorithm,F1 value is increased by 15%.After data analysis,F1 value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.17a is a comparison of Re values of different algorithms on the Sigfox IoT data set.Compared with K-means algorithm,Re value is increased by 25%.Compared with K-MEDOIDS algorithm,Re value is increased by 17%.Compared with CLARANS algorithm,Re value is increased by 14%.Compared with OPTICS algorithm,Re value is increased by 13%.Compared with DBSCAN algorithm,Re value is increased by 11%.After data analysis,Re value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.17b is a comparison of Pr values of different algorithms on the Sigfox IoT data set.Compared with K-means algorithm,Pr value is increased by 37%.Compared with K-MEDOIDS algorithm,Pr value is increased by 17%.Compared with CLARANS algorithm,Pr value is increased by 3%.Compared with OPTICS algorithm,Pr value is increased by 24%.Compared with DBSCAN algorithm,Pr value is increased by 13%.After data analysis,Pr value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.17c is a comparison of F1 values of different algorithms on the Sigfox IoT data set.Compared with K-means algorithm,F1 value is increased by 31%.Compared with K-MEDOIDS algorithm,F1 value is increased by 17%.Compared with CLARANS algorithm,F1 value is increased by 8%.Compared with OPTICS algorithm,F1 value is increased by 18%.Compared with DBSCAN algorithm,F1 value is increased by 11%.After data analysis,F1 value of EFMS-Kmeans algorithm is higher than that of any other method.


    Figure 17:Re(a),Pr(b),F1(c)of different methods on Sigfox IoT dataset

    Fig.18a is a comparison of Re values of different algorithms on the WIDE data set.Compared with K-means algorithm,Re value is increased by 30%.Compared with K-MEDOIDS algorithm,Re value is increased by 22%.Compared with CLARANS algorithm,Re value is increased by 19%.Compared with OPTICS algorithm,Re value is increased by 19%.Compared with DBSCAN algorithm,Re value is increased by 13%.After data analysis,Re value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.18b is a comparison of Pr values of different algorithms on the WIDE data set.Compared with K-means algorithm,Pr value is increased by 16%.Compared with K-MEDOIDS algorithm,Pr value is increased by 23%.Compared with CLARANS algorithm,Pr value is increased by 9%.Compared with OPTICS algorithm,Pr value is increased by 29%.Compared with DBSCAN algorithm,Pr value is increased by 17%.After data analysis,Pr value of EFMS-Kmeans algorithm is higher than that of any other method.

    Fig.18c is a comparison of F1 values of different algorithms on the WIDE data set.Compared with K-means algorithm,F1 value is increased by 33%.Compared with K-MEDOIDS algorithm,F1 value is increased by 23%.Compared with CLARANS algorithm,F1 value is increased by 15%.Compared with OPTICS algorithm,F1 value is increased by 24%.Compared with DBSCAN algorithm,F1 value is increased by 15%.After data analysis,F1 value of EFMS-Kmeans algorithm is higher than that of any other method.

    Figure 18:Re(a),Pr(b),F1(c)of different methods on WIDE dataset

    As can be seen from the figures above, even as a stand-alone clustering anomaly detection algorithm, EFMS-Kmeans shows clear advantages over the traditional anomaly detection algorithms on data sets of different dimensions and achieves a high anomaly detection performance.

    4.3 Combine HS_GRU with EFMS-Kmeans Algorithm

    4.3.1 Data Set

    For the experimental data sets, in order to validate the advantages, feasibility and performance of the proposed cascaded anomaly detection method that combines HS_GRU traffic prediction with EFMS-Kmeans clustering, the traffic prediction results from the previous section are carried into the anomaly detection stage of this section: the traffic values together with the prediction errors are used as the input of the clustering-based outlier detection. We manually injected 10 and 12 outliers into the two traffic data sets, mainly covering two types of anomalies: traffic values dropping to 0 or rising to abnormally high values, and large errors caused by the traffic deviating from its normal trend. In addition, parts of the NSL-KDD data set (containing 20% abnormal data) and the NAB data set (containing 15% abnormal data) are used in comparison experiments to prove the effectiveness of the EFMS-Kmeans clustering algorithm.
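
    A minimal sketch of how such clustering inputs and injected anomalies can be constructed is shown below; the helper functions, the synthetic `predicted`/`actual` series standing in for the HS_GRU forecast and the observed traffic, and the injection positions and spike factor are all illustrative assumptions rather than the exact procedure used in the paper.

```python
import numpy as np

def build_features(actual, predicted):
    """Stack traffic value and absolute prediction error as the 2-D clustering input."""
    error = np.abs(actual - predicted)
    return np.column_stack([actual, error])

def inject_outliers(series, n_zero, n_spike, spike_factor=5.0, seed=0):
    """Inject the two anomaly types described above: zero values and abnormal spikes."""
    rng = np.random.default_rng(seed)
    s = series.copy()
    idx = rng.choice(len(s), size=n_zero + n_spike, replace=False)
    s[idx[:n_zero]] = 0.0                      # traffic value drops to 0
    s[idx[n_zero:]] = s.max() * spike_factor   # abnormally high value
    return s, idx

# Usage with a synthetic periodic series standing in for the real traffic data.
t = np.arange(1000)
predicted = 100 + 20 * np.sin(2 * np.pi * t / 288)                  # assumed HS_GRU forecast
actual = predicted + np.random.default_rng(2).normal(0, 2, t.size)  # observed traffic
actual, injected_idx = inject_outliers(actual, n_zero=5, n_spike=5)
X = build_features(actual, predicted)                               # input to EFMS-Kmeans
print(X.shape, "outliers injected at indices", np.sort(injected_idx))
```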

    4.3.2 Experimental Indicators

    In order to quantify the experimental results and assess the algorithm more clearly, we adopted the common anomaly detection evaluation indexes: recall rate (Re), precision rate (Pr) and F1 value. Suppose TP is the number of true outliers detected correctly, FP the number of normal points incorrectly marked as outliers, FN the number of undetected outliers, and TN the number of normal points correctly identified as normal.

    The recall rate Re is the ratio of the detected true outliers to all the outliers contained in the data set, which reflects how completely the outliers are detected. It is defined as Re = TP/(TP + FN).

    The precision rate Pr is the ratio of the detected true outliers to all the points flagged as outliers, which reflects how reliable the detected outliers are. It is defined as Pr = TP/(TP + FP).

    The F1 value is the harmonic mean of the precision rate and the recall rate, serving as a comprehensive evaluation index of the two. It is defined as F1 = 2 · Pr · Re/(Pr + Re).

    The recall rate Re, precision rate Pr and F1 value all range between 0 and 1; the larger these indicators, the better the anomaly detection results.
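
    Following the definitions above, the three indicators can be computed directly from the TP/FP/FN counts; the sketch below assumes binary ground-truth and predicted labels (1 = outlier, 0 = normal).

```python
import numpy as np

def detection_metrics(y_true, y_pred):
    """Compute recall, precision and F1 from binary anomaly labels (1 = outlier)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    re = tp / (tp + fn) if tp + fn else 0.0
    pr = tp / (tp + fp) if tp + fp else 0.0
    f1 = 2 * pr * re / (pr + re) if pr + re else 0.0
    return re, pr, f1

# Usage with toy labels: TP = 2, FP = 1, FN = 1 -> Re = Pr = F1 = 0.667.
print(detection_metrics([1, 0, 0, 1, 1, 0], [1, 0, 1, 1, 0, 0]))
```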

    4.3.3 Experimental and Comparative Results

    In order to verify the feasibility and the performance gains of the EFMS-Kmeans algorithm proposed in this paper, we compared it with En-ABC, a cloud-environment anomaly detection scheme based on an ensemble artificial bee colony [32].

    (1)Clustering and anomaly detection results on commonly used clustering data sets.

    The NSL-KDD dataset is an updated version of the KDD99 dataset that removes the redundant records of KDD99; it can serve as an effective baseline data set to help researchers compare different intrusion detection methods. NAB is a benchmark for evaluating anomaly detection algorithms on streaming, real-time applications; it comprises over 50 labeled real-world and artificial time-series data files plus a scoring mechanism designed for real-time use.

    First, we used the classical NSL-KDD (containing 20% abnormal data) and NAB (containing 15% abnormal data) anomaly detection data sets for the experiment and obtained the following comparison results according to the commonly used anomaly detection evaluation indicators.

    From the data in Tab. 3 below, we can see the superiority of the EFMS-Kmeans algorithm. To make the comparison more intuitive, we also present the results as histograms.

    Table 3:Comparison of EFMS-Kmeans and En-ABC anomaly detection algorithms

    As can be seen from the Fig.19a,the EFMS-Kmeans-based detection method proposed in this paper has a higher abnormal recall rate than the En-ABC detection method on both data sets.In the above two data sets,the algorithm in this paper improves the recall rate by 10.1% and 2.1% respectively compared with En-ABC,with an average increase of 6.1%.

    Figure 19:Comparison of EFMS-Kmeans and En-ABC anomaly detection performance on NSLKDD and NAB data sets(a)recall rate,(b)accuracy,(c)F1

    As can be seen from Fig. 19b, the EFMS-Kmeans-based detection method proposed in this paper achieves higher precision than the En-ABC detection method on both data sets. On the above two data sets, compared with En-ABC, the precision of our algorithm is improved by 10.7% and 18.1% respectively, an average increase of 14.4%.

    As can be seen from Fig. 19c, comparing the F1 values of the two algorithms shows that the improved EFMS-Kmeans-based detection algorithm in this paper achieves consistently high scores, while the F1 values of the En-ABC-based method are lower on both data sets. On the two data sets, our algorithm improves the F1 value by 4.4% and 10.3% respectively compared with the En-ABC-based detection method, an average increase of 7.35%. Therefore, the EFMS-Kmeans-based anomaly detection method in this paper can be considered to have better overall performance.

    (2)Abnormal detection results in the flow data set.

    In addition, in order to validate the proposed EFMS-Kmeans-based anomaly detection as well as the feasibility and performance of the proposed traffic prediction-clustering cascade, we took the earlier HS_GRU prediction results on the 11-European-city traffic data set (Traffic_Euro) and the UK traffic data set (Traffic_UK) and fed them into the anomaly detection stage, using the traffic values together with the prediction errors as the input of the clustering-based outlier detection. We manually injected 10 and 12 outliers into the two data sets, mainly covering two types of anomalies: traffic values of 0 or abnormally high values, and large errors caused by the traffic deviating from its normal trend.

    En-ABC is essentially a clustering-based anomaly detection method, whereas this paper adopts a cascaded anomaly detection approach based on traffic prediction followed by improved density clustering. To test the algorithms more fairly, the CW-RNN method is used for traffic prediction in the baseline, and the predicted data is passed into the En-ABC method as one-dimensional input before anomaly detection is carried out. The output type of CW-RNN is the same as that of HS_GRU, and both En-ABC and EFMS-Kmeans are clustering algorithms in essence, so this combination is feasible. The experimental results are as follows:

    To demonstrate more clearly the feasibility and superiority of the traffic prediction-clustering anomaly detection model proposed in this paper, we used the CW-RNN model to predict the Traffic_Euro and Traffic_UK data sets and fed the predicted results into the En-ABC algorithm for anomaly detection. Tab. 4 shows, on these two predicted traffic data sets, the detection results of the traffic prediction-clustering cascaded anomaly detection method combining the improved HS_GRU and EFMS-Kmeans proposed in this paper, and of the cascaded method combining CW-RNN and En-ABC.

    As can be seen from Tab. 4 below, the traffic prediction-anomaly detection model based on HS_GRU and EFMS-Kmeans proposed in this paper achieves a better anomaly detection effect than the model based on CW-RNN and En-ABC. The histograms of each indicator allow a more intuitive comparison.

    Table 4:Comparison of network traffic prediction and anomaly detection model performance

    As can be seen from the Fig.20a,the abnormal recall rate of the traffic forecast-anomaly detection method proposed in this paper based on HS_GRU and EFMS-Kmeans is higher than the result of the traffic forecast-anomaly detection method based on CW-RNN and En-ABC in the two predicted traffic data sets.On the two data sets,the algorithm in this paper improves the recall rate by 14.3% and 3.6%,respectively,compared with the CW-RNN and En-ABC combined methods,with an average increase of 8.95%.

    Figure 20:Comparison of network traffic prediction and anomaly detection model performance(a)recall rate,(b)accuracy,(c)F1

    As can be seen from Fig. 20b, the combined HS_GRU and EFMS-Kmeans method proposed in this paper achieves higher precision than the combined CW-RNN and En-ABC method on both predicted traffic data sets. On the above two data sets, the precision of our algorithm is improved by 7.6% and 15.2% respectively, an average improvement of 11.4%.

    As can be seen from Fig. 20c, comparing the F1 values of the two approaches shows that the improved detection algorithm based on the combination of HS_GRU and EFMS-Kmeans achieves the highest scores. On the two data sets, compared with the detection method based on CW-RNN and En-ABC, our algorithm improves the F1 value by 8.7% and 9.2% respectively, an average increase of 8.95%. Therefore, the anomaly detection method based on the combination of HS_GRU and EFMS-Kmeans in this paper can be considered to have better overall performance.

    5 Conclusion

    Taking network traffic as the research object, this paper proposes an anomaly detection method based on the improved HS_GRU and EFMS-Kmeans clustering. After reviewing the state of network traffic anomaly detection and analyzing the problems of existing methods, a detection method that cascades traffic prediction and clustering is proposed. The method first introduces HS_GRU, an improved highway-GRU algorithm that combines the STL decomposition algorithm with a highway GRU neural network, and uses it to predict traffic. To address the deficiencies of the K-means clustering algorithm, the EFMS-Kmeans algorithm, which combines density clustering with distance clustering, is then proposed: the Mean Shift procedure determines the number of clusters and their centroids, which are passed to K-means, compensating for the need to set the K value manually. Finally, we verified the performance of the proposed anomaly detection algorithm on real traffic data sets from the British backbone network and 11 European cities, as well as on parts of the NSL-KDD and NAB data sets. The results show that HS_GRU effectively captures the periodicity of network traffic data and improves the accuracy of traffic prediction, and that the proposed anomaly detection method based on HS_GRU and EFMS-Kmeans improves the precision and recall of anomaly detection compared with traditional algorithms. It therefore has practical application value and significance for real networks.

    Funding Statement:This work was supported by National Key R&D Program of China(2019YFB2103202,2019YFB2103200),Open Subject Funds of Science and Technology on Communication Networks Laboratory(6142104200106).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
