
    Intrusion detection model based on deep belief nets


Gao Ni1, Gao Ling1, He Yiyue1,2, Gao Quanli1, Ren Jie1

(1School of Information Science and Technology, Northwest University, Xi’an 710127, China)

(2School of Economics and Management, Northwest University, Xi’an 710127, China)


This paper focuses on the intrusion classification of huge amounts of data in a network intrusion detection system. An intrusion detection model based on deep belief nets (DBN) is proposed to conduct intrusion detection, and the principles regarding DBN are discussed. The DBN is composed of multiple unsupervised restricted Boltzmann machines (RBM) and a supervised back propagation (BP) network. First, the DBN in the proposed model is pre-trained in a fast and greedy way, and each RBM is trained by the contrastive divergence algorithm. Secondly, the whole network is fine-tuned by the supervised BP algorithm, which simultaneously classifies the low-dimensional features of the intrusion data generated by the last RBM layer. The experimental results on the KDD CUP 1999 dataset demonstrate that the DBN using the RBM network with three or more layers outperforms the self-organizing maps (SOM) and neural network (NN) in intrusion classification. Therefore, the DBN is an efficient approach for intrusion detection in high-dimensional space.

intrusion detection; deep belief nets; restricted Boltzmann machine; deep learning

With the growth of network technologies and applications, computer network security has become a crucial issue that needs to be addressed. How to identify network attacks is a key problem. As an important and active security mechanism, intrusion detection (ID) has become a key technology of network security and has drawn the attention of scholars worldwide. Intrusion detection based on different machine learning methods is a major research topic in network security, which aims at identifying unusual access or attacks on internal networks [1].

In the literature, many machine learning methods have achieved great results in intrusion detection systems (IDS), such as the NN [2], support vector machine (SVM) [3] and SOM [4]. These methods have been introduced to the field of intrusion detection by previous researchers. Most traditional machine learning methods with shallow architectures have one hidden layer (e.g., BP) or no hidden layer (e.g., the maximum entropy model).

Owing to limited samples and computing cells, the expressive power of shallow learning methods for complex functions is limited, and their generalization ability for complex classification problems is subject to certain constraints [5]. The continuous collection of traffic data by the network leads to problems concerning the huge amounts of data in network intrusion detection and prediction. Therefore, how to develop an efficient intrusion detection model oriented towards huge amounts of data is a theoretical and practical problem that should be solved urgently.

The deep learning model used in large-scale data analysis has outstanding performance, which makes it a promising way of solving intrusion detection problems. The DBN with deep architectures was proposed by Hinton et al. [6]; it uses a learning algorithm that greedily trains one layer at a time, with an unsupervised learning algorithm employed for each layer [6]. Bengio et al. [7] followed up this research on deep learning, showing the strong capacity of deep models to learn the essential characteristics of data sets from a few samples. Although the DBN can be trained with unlabeled data, it is also successfully used to initialize deep feedforward neural networks.

In this paper, an intrusion detection model based on the greedy layer-wise DBN is presented. In order to improve the performance of the DBN, its parameters are explored, including the depth of the DBN, the number of nodes in the first hidden layer, the number of nodes in the output layer and so on. Finally, the efficiency of the DBN is evaluated on the KDD CUP 1999 dataset. The DBN outperforms SOM and NN in detection accuracy and false positive rate.

1 Proposed DBN Model for Intrusion Detection

The overall framework of the network intrusion detection model based on DBN is trained in three stages, as shown in Fig.1. First, the symbolic attribute features in the KDD CUP 1999 dataset [8] are numeralized and subsequently normalized. Then, a DBN is trained on the standardized dataset and the weights of the DBN are used to initialize a neural network. This pre-training procedure consists of learning a stack of RBMs with the unsupervised contrastive divergence algorithm. The learned feature activations of one RBM are used as the data for training the next RBM in the stack. The nonlinear high-dimensional input vector is sampled as its corresponding optimal low-dimensional output vector. Finally, a DBN is constructed by unrolling the RBMs and fine-tuned by using the BP algorithm with error derivatives according to the class labels of the input vectors. The obtained DBN can be used to recognize attacks.

    Fig.1 The pipeline for training IDS

    2 Deep Belief Network

The deep learning model attracts increasing attention from researchers at home and abroad due to its outstanding ability to learn from complex data [6]. A deep learning model containing multiple layers of hidden units gradually establishes the optimal abstract representation of the raw input at the penultimate layer, as shown in Fig.2. The representative deep learning model is the DBN, built from RBMs in a stack. Learning a DBN is a process of greedy layer-wise training of RBMs.

The DBN [6], which is a probabilistic generative model, is a deep neural network classifier combining multiple layers of an unsupervised learning network, the RBM, with a supervised learning network, the BP network [9]. In a DBN, the units in each layer are independent given the values of the units in the layer above. Fig.2 shows a multilayer generative model, in which the top two layers have symmetric undirected connections and the lower layers receive directed top-down connections from the layer above. The bottom layer is observable, and the multiple hidden layers are created by stacking multiple RBMs on top of each other. The upward arrows represent the recognition model, and the downward arrows represent the generative model. In the greedy initial learning, the recognition connections are tied to the generative connections. In the learning process of the generative model, once the weight parameter W is learned, the original data v can be mapped through W^T to infer factorial approximate posterior distributions over the states of the variables in the first hidden layer h1. A DBN with n layers can be represented as a graphical model. The joint distribution of the visible layer v and the hidden layers hk, for k = 1, …, n, is defined as

P(v, h1, …, hn) = P(v | h1)P(h1 | h2)⋯P(hn−2 | hn−1)P(hn−1, hn)

where P(hn−1, hn) is the undirected joint distribution of the top two layers.

    Fig.2 A graphical representation of a DBN and its parameters

In a bottom-up process, the recognition connections of the DBN can be used to infer a factorial representation in one layer from the binary activities in the layer below. In a top-down process, the generative connections of the DBN are used to map a state of the associative memory to the raw input. The DBN performs a non-linear transformation on its input vectors and produces low-dimensional output vectors. Using the greedy initial learning algorithm, the original data are well modeled and a better generative model can be acquired.

The procedure of training the DBN consists of two phases. In the first phase, a layer-wise greedy learning algorithm is applied to pre-train the DBN, and the RBM at each layer is trained by the CD algorithm [10]. To obtain an approximate representation of the input vector v, the following procedure is used. First, the hidden variables h1 are sampled from the posterior distribution P(h1 | v) of the first-level RBM, and the visible variables v are resampled from the posterior distribution P(v | h1). Subsequently, the hidden variables h1 are sampled again in the same way. This alternating Gibbs sampling is repeated k times until an equilibrium distribution is arbitrarily approached. The optimal representation h1 of the input vector v then becomes the input for learning the second-level RBM, from which a sample h2 is computed. The procedure is repeated to train each RBM in a bottom-up way until hn−1 is computed. Because the layer-wise greedy learning method can be applied recursively, millions of parameters can be learned efficiently. In practice, applying the CD algorithm avoids the prohibitive overall complexity of training the DBN exactly, so this algorithm is efficient.
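The greedy stacking procedure described above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' Matlab implementation; `train_rbm` stands for any single-RBM trainer (e.g. the CD algorithm of Section 4), and the mean-field activations P(h | v) are used as the training data for the next level.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_stack(X, hidden_sizes, train_rbm):
    """Greedy layer-wise pretraining sketch: train an RBM on the current data,
    map the data through P(h | v), and use those activations as the training
    data for the next RBM in the stack.  `train_rbm` is any single-RBM
    trainer returning (W, a, b) with a the hidden and b the visible bias."""
    params, data = [], X
    for n_hidden in hidden_sizes:
        W, a, b = train_rbm(data, n_hidden)   # pre-train one level
        data = sigmoid(a + data @ W)          # h^(k) feeds the next level
        params.append((W, a, b))
    return params, data                       # data approximates h^(n-1)
```

The recursion is what makes the scheme tractable: each level solves only a two-layer learning problem on the representation produced by the level below.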

In the second phase, the parameters of the whole DBN are fine-tuned. The weights on the undirected connections at the top-level RBM are learned by fitting the posterior distribution of the penultimate layer. Using the BP learning algorithm, exact gradient descent on a global supervised cost function between the actual output vector and the desired output vector can be performed in the DBN. This phase aims at obtaining the optimal parameters, which correspond to the minimized difference between the above two vectors.

3 Restricted Boltzmann Machine

3.1 Model structure

The core building blocks of the DBN are RBMs, each of which is a two-layer neural network consisting of a visible layer and a hidden layer. Each unit of the hidden layer connects to all the units of the visible layer, and the visible and hidden units form a bipartite graph with no visible-visible or hidden-hidden connections. The RBM is an energy-based undirected generative model that uses a layer of binary variables to explain its input data. The visible variables describe the characteristics of the input data, while the hidden variables, generated automatically through machine learning, often have no actual meaning. Each unit in the model is a binary stochastic neuron, meaning that it can be in one of two states: on or off. This model is called an RBM, and its probability distribution obeys the Boltzmann distribution.

    3.2 Inference of RBM parameters

The RBM is an energy-based undirected generative model, which is constructed from a set of visible variables v = {v_i} and a set of hidden variables h = {h_j}, as shown in Fig.3. Node i is in the visible layer, and node j is in the hidden layer. A property of the RBM is the lack of direct connections within nodes of the same layer, while there are connections between the visible layer and the hidden layer. The visible units are conditionally independent given the hidden unit states, and vice versa. Therefore, the posterior distributions P(h | v) and P(v | h) can be sampled and factorized as

P(h | v) = ∏_j P(h_j | v),  P(v | h) = ∏_i P(v_i | h)

    Fig.3 A graphical representation of RBM and its parameters

Given a visible vector v, a low-dimensional representation h can be sampled from the posterior distribution P(h | v). Likewise, given a hidden vector h, a new representation v can be sampled from the posterior distribution P(v | h). Since h_j ∈ {0, 1}, the binary hidden unit probabilities are given as

P(h_j = 1 | v) = σ(a_j + Σ_i v_i W_ij)

Since the RBM is completely symmetric, the binary visible unit probabilities are given as

P(v_i = 1 | h) = σ(b_i + Σ_j W_ij h_j)

where σ denotes the logistic sigmoid, σ(x) = 1/(1 + e^(−x)).
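The two conditional distributions translate directly into code. The following NumPy sketch (illustrative; not the authors' implementation) follows the bias convention of this section, with a the hidden bias and b the visible bias:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v, W, a):
    """P(h_j = 1 | v) = sigmoid(a_j + sum_i v_i W_ij); returns (probs, binary sample)."""
    p = sigmoid(a + v @ W)
    return p, (rng.random(p.shape) < p).astype(float)

def sample_visible(h, W, b):
    """P(v_i = 1 | h) = sigmoid(b_i + sum_j W_ij h_j); returns (probs, binary sample)."""
    p = sigmoid(b + h @ W.T)
    return p, (rng.random(p.shape) < p).astype(float)
```

Alternating these two calls is exactly one step of the Gibbs sampling used throughout Sections 3 and 4.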

The visible units and the hidden units are assumed to be binary stochastic units. In an RBM, the energy function for every configuration of visible and hidden variables is defined as

E(v, h) = −Σ_i b_i v_i − Σ_j a_j h_j − Σ_i Σ_j v_i W_ij h_j

where W is the weight matrix between the visible variables v and the hidden variables h; b is the visible variable bias; a is the hidden variable bias; and the parameters θ = {W, a, b} of the energy function are learned. The probability of any particular configuration of the visible and hidden units is given in terms of the energy function as

P(v, h) = exp(−E(v, h)) / Z

where Z = Σ_{v,h} exp(−E(v, h)) is the partition function.
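The energy function and the Boltzmann-form probability can be checked numerically for a tiny model, where the partition function Z can still be computed by brute-force enumeration. A hedged sketch (enumeration is obviously infeasible for the 122-dimensional inputs used later; this is for intuition only):

```python
import numpy as np

def energy(v, h, W, a, b):
    """E(v, h) = -sum_i b_i v_i - sum_j a_j h_j - sum_ij v_i W_ij h_j."""
    return -(b @ v) - (a @ h) - (v @ W @ h)

def joint_prob(v, h, W, a, b, configs):
    """P(v, h) = exp(-E(v, h)) / Z, with Z summed over the supplied list of
    (v, h) configurations -- feasible only for tiny models."""
    Z = sum(np.exp(-energy(vv, hh, W, a, b)) for vv, hh in configs)
    return np.exp(-energy(v, h, W, a, b)) / Z
```

With all parameters zero, every configuration has energy 0 and therefore equal probability, which is a quick sanity check on an implementation.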

4 Learning by Minimizing Contrastive Divergence

The optimal joint probability distribution can be acquired with the Markov chain method on the condition that the number of iterations approaches infinity. However, it is difficult to guarantee fast convergence and to determine the step size of the iteration. Maximizing the log likelihood is equivalent to minimizing the Kullback-Leibler (KL) divergence KL(P_0 ‖ P_θ^∞). Here, P_0 denotes the distribution of the data, and P_θ^∞ denotes the equilibrium distribution defined by the RBM. Contrastive divergence, proposed by Hinton [10], is a fast RBM training method. Instead of minimizing KL(P_0 ‖ P_θ^∞), the difference between KL(P_0 ‖ P_θ^∞) and KL(P_θ^n ‖ P_θ^∞) is minimized:

CD_n = KL(P_0 ‖ P_θ^∞) − KL(P_θ^n ‖ P_θ^∞)    (8)

where P_θ^n denotes the distribution of the reconstructed visible variables sampled by n steps of Gibbs sampling. The surprising empirical result is that even n = 1 often gives a good result. KL(P_0 ‖ P_θ^∞) exceeds KL(P_θ^n ‖ P_θ^∞) unless P_0 = P_θ^n, so the contrastive divergence is non-negative and equals zero only when the RBM model is at equilibrium.

The parameters of the model, θ = {W, a, b}, can be adjusted in proportion to the approximate derivative of the contrastive divergence:

ΔW_ij ∝ ⟨v_i h_j⟩_0 − ⟨v_i h_j⟩_n

where ⟨·⟩_0 denotes the expectation under the data distribution and ⟨·⟩_n the expectation after n steps of Gibbs sampling.

It works relatively well in practice to reconstruct the original data vector with only one Gibbs step [10]. The updated parameters are given as

ΔW_ij = ε(⟨v_i h_j⟩_0 − ⟨v_i h_j⟩_1)
Δa_j = ε(⟨h_j⟩_0 − ⟨h_j⟩_1)
Δb_i = ε(⟨v_i⟩_0 − ⟨v_i⟩_1)

where ε is the learning rate.

Using the alternating CD learning algorithm, the high-dimensional data stay close to their low-dimensional representation. The procedure of the fast CD-k learning algorithm is listed as follows.

Algorithm 1 TrainRBM(V, ε, M, N, W, a, b)

Input: V is a set of training samples for the RBM, V = {v_1, v_2, …, v_M}; ε is the learning rate for the stochastic gradient descent in CD; M is the number of RBM visible units; N is the number of RBM hidden units; W is the RBM weight matrix; a is the RBM offset vector for visible units; b is the RBM offset vector for hidden units. Initialize the parameters W_ij = a_i = b_j = 0, i = 1, 2, …, M, j = 1, 2, …, N; set the number of iterations T and the number of steps K of Gibbs sampling.
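Only the inputs of Algorithm 1 survive here, so the body below is a minimal CD-1 trainer consistent with the update rules of Section 4. It is a NumPy sketch, not the authors' Matlab code; the bias naming follows Section 3.2 (a for hidden units, b for visible units), and the mean-field reconstruction in place of a binary visible sample is a common simplification.

```python
import numpy as np

def train_rbm(V, n_hidden, eps=0.1, T=50, seed=0):
    """CD-1 sketch of Algorithm 1.  V is a (num_samples, M) binary matrix,
    eps the learning rate, T the number of iterations.  Returns (W, a, b)."""
    rng = np.random.default_rng(seed)
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    n, M = V.shape
    W = 0.01 * rng.standard_normal((M, n_hidden))
    a = np.zeros(n_hidden)   # hidden biases
    b = np.zeros(M)          # visible biases
    for _ in range(T):
        # positive phase: sample h0 ~ P(h | v) on the data
        ph0 = sigmoid(a + V @ W)
        h0 = (rng.random(ph0.shape) < ph0).astype(float)
        # one Gibbs step: reconstruct v (mean field), recompute hidden probs
        pv1 = sigmoid(b + h0 @ W.T)
        ph1 = sigmoid(a + pv1 @ W)
        # CD-1 updates: <v h>_0 - <v h>_1 etc., averaged over the batch
        W += eps * (V.T @ ph0 - pv1.T @ ph1) / n
        a += eps * (ph0 - ph1).mean(axis=0)
        b += eps * (V - pv1).mean(axis=0)
    return W, a, b
```

A single Gibbs step per update (k = 1) keeps the per-iteration cost linear in the number of connections, which is what makes training the 122-dimensional inputs used later practical.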

5 Fine-Tuning All Layers of DBN by Back-Propagation Algorithm

The weight matrices, which are pre-trained at each layer by contrastive divergence learning, are efficient but not optimal. Therefore, the unsupervised layer-by-layer training algorithm is performed for each RBM network, and a final supervised fine-tuning step is used to adjust all the parameters simultaneously. The BP algorithm for feed-forward multi-layered neural networks plays a key role in fine-tuning the weights of the connections in the DBN [9]. Fine-tuning a DBN based on the BP algorithm consists of two phases. In the first phase, in order to obtain better initialization parameters, the feed-forward DBN is trained by the RBM learning algorithm based on k-step contrastive divergence. In the second phase, a measure of the difference between the actual output vector of the DBN and the desired output vector is minimized, and the weight matrices are repeatedly adjusted during the down-pass. The down-pass, which propagates derivatives from the top layer back to the bottom layer, employs the top-down generative connections to activate each lower RBM layer in turn. The procedure of the fine-tuning learning algorithm based on the BP algorithm is listed as follows.

Algorithm 2 FineTuneDBN(example, l, numhid, ε_fine-tune, W, a, b)

Input: example is a training set of pairs ⟨v_i, t_i⟩ (i = 1, 2, …, m); l is the number of RBM layers; numhid is the set of hidden unit counts {numhid_1, numhid_2, …, numhid_l} at each RBM layer; ε_fine-tune is the learning rate for the DBN training; W_k is the weight matrix for the RBM at level k, for k from 1 to l; a_k is the offset vector for visible units for the RBM at level k, for k from 1 to l; b_k is the offset vector for hidden units for the RBM at level k, for k from 1 to l.
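The body of Algorithm 2 is not reproduced here, so the following is a hedged sketch of its spirit: the pretrained RBM stack is treated as a feed-forward network with a softmax output layer, and gradient descent on the cross-entropy between actual and desired outputs adjusts all parameters simultaneously. All names are illustrative, and per-layer biases are written generically as `biases` to avoid the a/b naming differences between sections.

```python
import numpy as np

def finetune_dbn(X, y_onehot, weights, biases, eps=0.5, epochs=200):
    """BP fine-tuning sketch: sigmoid hidden layers (initialized from the
    pretrained RBMs), softmax output, full-batch gradient descent on
    cross-entropy.  `weights`/`biases` are lists of per-layer arrays,
    modified in place."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    n = X.shape[0]
    for _ in range(epochs):
        # forward propagation through the sigmoid layers
        acts = [X]
        for W, c in zip(weights[:-1], biases[:-1]):
            acts.append(sigmoid(c + acts[-1] @ W))
        # softmax output layer (shift for numerical stability)
        z = biases[-1] + acts[-1] @ weights[-1]
        z = z - z.max(axis=1, keepdims=True)
        p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
        # backward pass: cross-entropy delta, propagated layer by layer
        delta = (p - y_onehot) / n
        for k in range(len(weights) - 1, -1, -1):
            grad_W = acts[k].T @ delta
            grad_c = delta.sum(axis=0)
            if k > 0:  # propagate before updating weights[k]
                delta = (delta @ weights[k].T) * acts[k] * (1.0 - acts[k])
            weights[k] -= eps * grad_W
            biases[k] -= eps * grad_c
    return weights, biases
```

Because the weights start from the pretrained values rather than random ones, this supervised pass only needs to refine an already reasonable feature hierarchy, which is the effect measured in Fig.8.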

    6 Experimental Results

6.1 Benchmark dataset description

The KDD Cup 1999 dataset [8], which is provided by the Defense Advanced Research Projects Agency (DARPA) and contains attack data collected over several weeks, is employed to assess the performance of various IDS.

The KDD Cup 1999 dataset contains 494 021 records in the training data and 11 850 records in the testing data. The data distribution of the dataset is shown in Fig.4.

    Fig.4 Attacks distribution in the KDD Cup 1999 dataset

    6.2 Data preprocessing

Each record of the KDD Cup 1999 dataset, which is labeled as either normal or one specific kind of attack, is described as a vector with 41 attribute values. Those attributes consist of 38 continuous or discrete numerical attributes and 3 categorical attributes. Deep belief networks require floating-point inputs whose values range from 0 to 1, so all features are preprocessed to fulfill this requirement. Preprocessing the data consists of the following two phases:

1) Numeralization of symbolic features. Three symbolic features, namely protocol type, service type and flag type, are converted into binary numeric features. For example, the protocol type "tcp" is converted into the binary feature vector {1,0,0}, "udp" into {0,1,0}, and "icmp" into {0,0,1}. Similarly, the service type is expanded into 70 binary features and the flag type into 11 binary features. Finally, the 41 attributes are numeralized as 122 attributes.

2) Normalization of numeric features. Each numerical value obtained in the first phase is normalized to the interval [0, 1] according to the following data smoothing method.
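The two preprocessing phases can be sketched as follows. This is illustrative code on toy values rather than the real 41-attribute KDD format, and since the exact smoothing formula is not reproduced above, plain min-max scaling to [0, 1] is assumed for phase 2.

```python
import numpy as np

# Phase 1: numeralization of a symbolic feature via one-hot encoding.
PROTOCOLS = ["tcp", "udp", "icmp"]  # 3 symbolic values -> 3 binary features

def one_hot(value, vocabulary):
    """Map a symbolic value to its binary feature vector."""
    vec = [0.0] * len(vocabulary)
    vec[vocabulary.index(value)] = 1.0
    return vec

# Phase 2: normalization of a numeric column to [0, 1] (assumed min-max scaling).
def min_max(column):
    col = np.asarray(column, dtype=float)
    lo, hi = col.min(), col.max()
    return (col - lo) / (hi - lo) if hi > lo else np.zeros_like(col)
```

Applying phase 1 to all three symbolic features (3 + 70 + 11 binary columns) and phase 2 to the 38 numeric ones yields the 122-dimensional [0, 1] vectors the DBN consumes.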

6.3 Evaluation measurement

An IDS requires high accuracy, a high detection rate and a low false alarm rate. In general, the performance of an IDS is evaluated in terms of the accuracy AC, the detection rate DR and the false alarm rate FA:

AC = (TP + TN) / (TP + TN + FP + FN)
DR = TP / (TP + FN)
FA = FP / (FP + TN)

where the true positives TP are the number of attack records correctly classified; the true negatives TN are the number of normal records correctly classified; the false positives FP are the number of normal records incorrectly classified; and the false negatives FN are the number of attack records incorrectly classified.
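These three measures follow directly from the four confusion-matrix counts; a small helper (illustrative, names our own) makes the definitions concrete:

```python
def ids_metrics(tp, tn, fp, fn):
    """Accuracy, detection rate and false alarm rate from confusion counts.
    tp/tn: attacks/normals correctly classified; fp: normals flagged as
    attacks; fn: attacks missed."""
    ac = (tp + tn) / (tp + tn + fp + fn)
    dr = tp / (tp + fn)
    fa = fp / (fp + tn)
    return ac, dr, fa
```

Note that DR is computed over the attack records only and FA over the normal records only, so a classifier cannot trade one for the other simply by shifting its decision threshold without the effect showing up in both.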

The squared reconstruction error of the raw input is often used to monitor performance. The squared reconstruction error is defined as

err = (1/n) Σ_{k=1}^{n} Σ_{i=1}^{122} (v_ki − v′_ki)²

where v_ki is the i-th component of the k-th sample vector; v′_ki is the i-th component of the k-th reconstructed sample vector; n is the total number of samples; and 122 is the number of attributes after data preprocessing.
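In code, this is the per-sample sum of squared attribute differences averaged over the batch (a sketch; the attribute count is taken from the array shape rather than hard-coded to 122):

```python
import numpy as np

def squared_reconstruction_error(V, V_rec):
    """Average over samples of the squared difference summed across all
    attributes: (1/n) * sum_k sum_i (v_ki - v'_ki)**2."""
    V = np.asarray(V, dtype=float)
    V_rec = np.asarray(V_rec, dtype=float)
    return float(((V - V_rec) ** 2).sum(axis=1).mean())
```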

    6.4 Experimental results and analysis

Experiments are designed and implemented, in which the KDD Cup 1999 dataset is used to evaluate the performance of the proposed model. All programs are coded in Matlab 7.0 and run on a personal computer with an Intel 1.86 GHz CPU and 2 GB of memory.

It is important to determine a proper iteration number. As the iteration number increases, the detection rate increases accordingly, as shown in Fig.5. A DBN is denoted as DBNi, where i is the number of RBM layers. The deep DBN4 is used to evaluate the detection rates for iteration numbers from 10 to 500. The detection rate curve shown in Fig.5 first increases and then stabilizes; when the iteration number is greater than 150, the curve becomes smoother.


    Fig.5 The iteration number of RBM training

Tab.1 compares the performances of DBN, SOM and NN on the KDD Cup 1999 dataset. Four different DBNs are selected: a shallow 122-5 DBN1, a shallow 122-60-5 DBN2, a deep 122-80-40-5 DBN3 and a deep 122-110-90-50-5 DBN4. According to the results in Tab.1, the detection rates of the shallow DBN1 and DBN2 are not better than that of SOM, but the detection rate of the deep DBN3, which adds one more RBM layer, is higher than those of SOM and NN. The AC of the deep 122-110-90-50-5 DBN4 is improved by 2.67% and 6.19%, respectively, compared with SOM and NN. The DR of the deep DBN4 is improved by 2.76% and 5.7%, respectively, and the FA of the deep DBN4 is improved by 0.4% and 0.55%, respectively. Therefore, the DBN using the RBM network with three or more layers outperforms SOM and NN in AC, DR and FA.

    Tab.1 Performance comparison of the six network structures %

Larochelle et al. [11] argued that the number of nodes in the first hidden layer has a significant influence on classification performance. Fig.6 compares the performances of the DBN with different numbers of nodes in the first hidden layer when a deep 122-110-90-50-5 DBN4 is set. According to the results in Fig.6, the classification accuracy is the best when the number of nodes in the first hidden layer is set to 110.

Fig.6 Performance comparison of DBN with different numbers of nodes in the first hidden layer

Another important exploration is to choose an optimal number of nodes in the output layer to improve the performance of intrusion detection. In Fig.7, a deep 122-110-90-50-5 DBN4 is selected, with the number of nodes in the output layer varied from 1 to 10. According to the results in Fig.7, the classification accuracy and detection rate are optimal when the number of nodes in the output layer is set to 5.

Fig.7 Performance comparison of DBN with different numbers of nodes in the output layer

The intrusion classification experiment is performed with different types of attacks in the KDD CUP 1999 dataset. Tab.2 compares the performances of DBN4, SOM and NN. In Tab.2, the accuracy rate of the DBN4 is improved by 2.54% and 5.525% on average, respectively, compared with SOM and NN, and the false alarm rate of the DBN4 is improved by 0.56% and 0.72%, respectively. The experimental results show that the deep DBN4 can effectively enhance the IDS detection rate and reduce the error rate.

Fig.8 compares the squared reconstruction errors of the pre-trained DBN4 and a randomly initialized DBN4, both of which have the same parameters. As shown in Fig.8, the DBN4 with pre-training makes the fine-tuning faster than that without pre-training. After 100 iterations of fine-tuning, the average squared reconstruction error per input vector of the pre-trained DBN4 is smaller than that of the randomly initialized DBN4. When the iteration number is greater than 450, the curve basically remains stable. The difference in the average squared reconstruction error between the pre-trained and the randomly initialized DBN4 is about 4.36. The experimental results show that using pre-training and fine-tuning can improve the performance of IDS classification.

Tab.2 Performance comparison of three classifiers on different types of attacks %

    Fig.8 Comparison of the squared reconstruction error

Another challenge in the classification of huge amounts of data using the proposed model is the real-time analysis and processing of data in a short period of time. Therefore, the scalability assessment of the proposed model is crucial. In order to increase the amount of experimental data, duplicate records are randomly added to the KDD CUP 1999 dataset, and the running time of our computer using the DBN4 model is recorded, as shown in Fig.9. The experimental results show that the running time increases approximately linearly as the number of records increases.

    Fig.9 The scalability of DBN4

    7 Conclusion

This paper aims at demonstrating that the DBN can be successfully applied in the field of intrusion detection. The DBN can not only extract features from high-dimensional representations but also efficiently perform classification tasks. This paper explores the idea of applying the DBN to classify attacks accurately. The performance of the DBN is evaluated by experiments, and the DBN is compared with other machine learning models, such as SOM and NN. Finally, the experimental results on the KDD CUP 1999 dataset show that a good generative model can be acquired by the DBN, which performs well on the intrusion recognition task. To some extent, the DBN, which can replace traditional shallow machine learning, provides a new design idea and method for future IDS research.

[1] Kuang F, Xu W, Zhang S. A novel hybrid KPCA and SVM with GA model for intrusion detection[J]. Applied Soft Computing, 2014, 18(4): 178-184.

[2] Beqiri E. Neural networks for intrusion detection systems[J]. Global Security, Safety & Sustainability, 2009, 45: 156-165.

[3] Ahmad I, Abdullah A, Alghamdi A, et al. Optimized intrusion detection mechanism using soft computing techniques[J]. Telecommunication Systems, 2013, 52(4): 2187-2195.

[4] Depren O, Topallar M, Anarim E, et al. An intelligent intrusion detection system (IDS) for anomaly and misuse detection in computer networks[J]. Expert Systems with Applications, 2005, 29(4): 713-722.

[5] Bengio Y. Learning deep architectures for AI[J]. Foundations and Trends in Machine Learning, 2009, 2(1): 1-127.

[6] Hinton G E, Osindero S, Teh Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7): 1527-1554.

[7] Bengio Y, Lamblin P, Popovici D, et al. Greedy layer-wise training of deep networks[C]//Advances in Neural Information Processing Systems. Vancouver, Canada, 2006: 153-160.

[8] Stolfo S J, Fan W, Lee W K, et al. Cost-based modeling for fraud and intrusion detection: results from the JAM project[EB/OL]. (1999-10-28)[2011-06-27]. http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html.

[9] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.

[10] Hinton G E. Training products of experts by minimizing contrastive divergence[J]. Neural Computation, 2002, 14(8): 1771-1800.

[11] Larochelle H, Bengio Y, Louradour J, et al. Exploring strategies for training deep neural networks[J]. Journal of Machine Learning Research, 2009, 10: 1-40.


    TP393.08

    10.3969/j.issn.1003-7985.2015.03.007

Received 2015-02-25.

Biographies: Gao Ni (1982—), female, graduate; Gao Ling (corresponding author), male, doctor, professor, gl@nwu.edu.cn.

Foundation items: The National Key Technology R&D Program during the 12th Five-Year Plan Period (No.2013BAK01B02), the National Natural Science Foundation of China (No.61373176), and the Scientific Research Projects of Shaanxi Educational Committee (No.14JK1693).

Citation: Gao Ni, Gao Ling, He Yiyue, et al. Intrusion detection model based on deep belief nets[J]. Journal of Southeast University (English Edition), 2015, 31(3): 339-346.

