
    Adversarial Active Learning for Named Entity Recognition in Cybersecurity

Computers, Materials & Continua, 2021, No. 1

Tao Li, Yongjin Hu*, Ankang Ju and Zhuoran Hu

1 Zhengzhou Institute of Information Science and Technology, Zhengzhou, 450001, China

2 College of Letters and Science, University of Wisconsin-Madison, Madison, 53706, USA

Abstract: Owing to the continuous barrage of cyber threats, there is a massive amount of cyber threat intelligence. However, a great deal of cyber threat intelligence comes from textual sources. For analysis of cyber threat intelligence, many security analysts rely on cumbersome and time-consuming manual efforts. A cybersecurity knowledge graph plays a significant role in the automatic analysis of cyber threat intelligence. As the foundation for constructing a cybersecurity knowledge graph, named entity recognition (NER) is required for identifying critical threat-related elements from textual cyber threat intelligence. Recently, deep neural network-based models have attained very good results in NER. However, the performance of these models relies heavily on the amount of labeled data. Since labeled data in cybersecurity is scarce, in this paper, we propose an adversarial active learning framework to effectively select informative samples for further annotation. In addition, leveraging the long short-term memory (LSTM) network and the bidirectional LSTM (BiLSTM) network, we propose a novel NER model by introducing a dynamic attention mechanism into the BiLSTM-LSTM encoder-decoder. With the selected informative samples annotated, the proposed NER model is retrained. As a result, the performance of the NER model is incrementally enhanced with low labeling cost. Experimental results show the effectiveness of the proposed method.

Keywords: Adversarial learning; active learning; named entity recognition; dynamic attention mechanism

    1 Introduction

Owing to the growing number of increasingly fierce attack-defense campaigns in cyberspace, the cybersecurity situation has become more and more severe, resulting in extensive cybersecurity incidents. By keeping abreast of and analyzing past cybersecurity incidents, security analysts can gain a deep understanding of cyber threats and the entire threat invasion process.

As cyber threat intelligence contains detailed descriptions of cyber threats, the analysis and sharing of cyber threat intelligence would be helpful for organizations to implement proactive cybersecurity defense. However, such cyber threat intelligence mainly comes from textual sources, like cybersecurity white papers, blogs, vendor bulletins and hacker forums. Textual cyber threat intelligence covers substantial threat-related information and is the primary source for constructing a cybersecurity knowledge base. However, processing such a massive amount of cyber threat intelligence may overwhelm security analysts, especially because discovering threat-related knowledge from cybersecurity texts manually is cumbersome and time-consuming. To obtain threat-related information automatically, information extraction is required, converting the textual cyber threat intelligence into structured linked data and constructing a cybersecurity knowledge graph, which can correlate the numerous seemingly unrelated pieces of cyber threat information.

In general, information extraction includes named entity recognition, relation extraction and event extraction, and information extraction in cybersecurity has attracted considerable attention. Task 8 of the SemEval-2018 workshop on semantic evaluation aims to implement semantic extraction from cybersecurity reports using natural language processing (SecureNLP) [1]. In the SecureNLP task, there are subtasks on the prediction of entities, relations and attributes in malware texts. Named entity recognition (NER) is the major task in information extraction and is the premise for the remaining two tasks. Moreover, NER is also the foundation of knowledge graph construction and natural language understanding. NER in cybersecurity focuses on specific and important threat-related classes, such as software, malware, vulnerability, tool and attack-pattern [2].

Compared with the earlier NER methods that required handcrafted features, deep learning-based architectures without manual feature engineering have achieved better performance [3]. However, such deep neural network-based models rely heavily on a profusion of labeled training samples [4]. Moreover, in a specific domain, the annotation budget is limited and it is difficult to obtain a large-scale labeled training set. There are two challenges in particular in the annotation of cybersecurity texts: only cybersecurity practitioners or cybersecurity experts can precisely annotate cybersecurity texts; and, compared with corpora of the general domain, cybersecurity texts contain more specific entity types, which requires more manual effort to complete the annotation.

To address the above issues, active learning is proposed for incrementally selecting more informative samples from the unlabeled data pool. The selected samples are annotated by oracles and then added to the training set. As a result, we can incrementally train the model to improve its performance with low labeling cost. However, most traditional active learning algorithms rely on the uncertainty sampling strategy, which is complex when applied to the NER task due to its abundant tag space. In this paper, instead of evaluating the prediction uncertainty of the model, we consider the similarity between labeled samples and unlabeled samples. Accordingly, we train a discriminator with adversarial learning to sample the informative sentences, and then propose an adversarial active learning scheme for entity extraction in cybersecurity. More specifically, with the features extracted from the input sentence, we train a discriminative model to judge the similarity between labeled and unlabeled sentences. In addition, leveraging the long short-term memory (LSTM) network and the bidirectional LSTM (BiLSTM) network, we propose a novel NER framework that introduces a dynamic attention mechanism into the BiLSTM-LSTM encoder-decoder. The two main contributions of this paper are:

1. To solve the lack of labeled corpus for training the NER model in cybersecurity, we combine adversarial learning with active learning and propose an adversarial active learning framework that incrementally selects informative samples to be labeled by oracles, which reduces the labeling cost of the training set.

2. For entity extraction from cybersecurity texts, we propose a novel dynamic attention-based BiLSTM-LSTM model for the NER task, in which a dynamic attention mechanism adaptively captures the dependency between two tokens and LSTM is used as the tag decoder. Our model outperforms the mainstream NER models used in cybersecurity.

    2 Related Work

By constructing a cybersecurity knowledge graph, security analysts can query and manage cybersecurity information intelligently. In addition, with the cybersecurity knowledge graph, security analysts can obtain a complete view of the cybersecurity situation and make more accurate predictions on threat development, which is significant for proactively improving cybersecurity. Since a cybersecurity knowledge graph consists of cybersecurity entities and their semantic relations, NER is the fundamental and crucial task.

The earlier NER methods were mainly based on statistical machine learning models, including the maximum entropy model (MEM), hidden Markov model (HMM), support vector machine (SVM), and conditional random field (CRF). However, such machine learning-based methods rely on considerable feature engineering, which results in poor robustness and generalization. With the success of deep learning in image recognition, speech recognition and natural language processing (NLP), there have been many deep neural network-based NER methods. Typically, by capturing contextual feature information, bidirectional long short-term memory (BiLSTM) neural networks [5] have achieved great success in entity extraction. Gasmi et al. [6] utilized BiLSTM-CRF to extract cybersecurity concepts and entities and achieved promising results. Long et al. [7] applied a BiLSTM-based model to identify indicators of compromise (IOCs). Satyapanich et al. [8] combined BiLSTM with the attention mechanism to classify cybersecurity event nugget types and event argument types. Dionísio et al. [9] introduced a BiLSTM-CRF model to identify named entities from cyber threat-related tweets, which was used to issue cybersecurity alerts and fill in IOCs. Pingle et al. [2] developed a deep neural network-based semantic relationship extraction system for cyber threat intelligence, aimed at obtaining semantic triples from open source cyber threat intelligence, which was combined with the security operation center to further enhance cybersecurity defense.

For most deep neural network-based NER methods, a chain CRF [10] acts as the tag decoder. However, recurrent neural networks (RNNs) can also be used for decoding tag sequences [11–13]. Shen et al. [14] employed LSTM as the tag decoder for the NER task. They found that LSTM not only yields performance comparable to CRF, but, with massive entity sets, it outperforms CRF at tag decoding and is faster to train.

Adversarial learning derives from generative adversarial networks (GANs) [15], which were originally designed to synthesize images for the purpose of data augmentation. The GAN architecture consists of a generator and a discriminator, which play a two-player minimax game. Through alternate training, the generator tries to fool the discriminator with generated data by capturing the data distribution of the real samples, while the discriminator judges whether the input data is real or fake, distinguishing the generated data from the real data. In the early phase, owing to their ability to generate diverse data in continuous space, GANs were used as generative models. In essence, GAN provides a framework within which originally poor data can be enhanced through adversarial training between the generator and discriminator; after the alternate training, a well-trained discriminator is obtained that can further be used as a classifier. In recent years, GANs have been extensively applied to fields other than computer vision, including speech recognition and natural language processing. Gui et al. [16] implemented part-of-speech tagging for Twitter through an adversarial discriminator to learn common features among out-of-domain labeled data, unlabeled in-domain data and labeled in-domain data. Zeng et al. [17] applied adversarial learning to distantly supervised relation extraction. They used a deep neural network as the generator to generate the negative class, and trained a piecewise convolutional neural network (PCNN) as the discriminator to efficiently classify the final relation.

Active learning aims to incrementally select samples for labeling, thus achieving better classification performance with lower labeling cost [18]. Current research on active learning includes the query-synthesizing method and the pool-based method. The query-synthesizing method belongs to the generative models, which generate informative samples to be labeled. Zhu et al. [19] first proposed a GAN-based active learning model to generate the samples to be labeled. However, due to difficulties in training the GAN model, the generated samples suffer from mode collapse and cannot capture the data distribution of the real samples [15]. In addition, it is also difficult for the oracle to label meaningless generated samples. Therefore, the performance of query-synthesizing algorithms relies on the quality and diversity of the generated samples.

Pool-based active learning selects the informative samples from the unlabeled sample pool and is the main research focus of active learning. The use of pool-based active learning algorithms has been explored in many tasks, like image classification, speech recognition, text classification, information retrieval, etc. The representative sampling strategies of pool-based active learning algorithms include uncertainty-based sampling [20], information-based sampling [21], ensemble-based sampling [22], expected model change-based sampling [23] and core set-based sampling [24]. Among typical uncertainty-based methods, Houlsby et al. [25] proposed a Bayesian active learning by disagreement (BALD) model, in which the sampling function is estimated by the mutual information of the training samples with respect to the model parameters. Gal et al. [26] measured the uncertainty in the predictions of neural networks by estimating the relationship between uncertainty and dropout, which was then applied to active learning. Sener et al. [24] proposed a core set-based active learning algorithm, which minimizes the Euclidean distance between the sampled and unsampled data points in the feature space when training the model. Kuo et al. [27] proposed an ensemble-based active learning model that selects samples by measuring uncertainty, but the model tends to cause redundant sampling. Shen et al. [14] investigated the application of active learning to the NER task, and compared the performance of three typical active learning models: the least confidence (LC)-based model, the BALD model and the maximum normalized log-probability (MNLP)-based model.
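For reference, the least-confidence strategy mentioned above can be sketched as follows. This is a minimal illustration that approximates the best-sequence probability by a product of per-token maxima; it is not any of the cited implementations, and the token probabilities are hypothetical model outputs:

```python
from math import prod

def least_confidence(token_probs):
    """LC score of a sentence: one minus the (approximate) probability of
    the most likely tag sequence, taken as the product of each token's
    highest tag probability. A higher score means a more uncertain
    sentence, which LC-based active learning would query first."""
    return 1.0 - prod(max(p) for p in token_probs)
```

A longer sentence tends to have a lower best-sequence probability, which is why MNLP normalizes by sentence length before ranking.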

    3 Proposed Method

In this section, we describe the proposed adversarial active learning framework for NER in cybersecurity. Our proposed model includes two components: the NER module and the adversarial active learning module. The detailed implementation of the proposed model is described below.

    3.1 Dynamic Attention-Based BiLSTM-LSTM for NER

Fig. 1 illustrates the architecture of the proposed novel NER model, depicting the embedding layer, the BiLSTM feature encoding layer, the dynamic attention layer and the LSTM tag decoding layer.

    3.1.1 Embedding Representation

The embedding layer converts a cybersecurity sentence into a low-dimensional dense vector, which is input into the encoder for feature extraction. To obtain a high-quality embedding representation, in this paper, we aggregate massive cybersecurity corpora from recent threat intelligence blogs, security news, vulnerability descriptions from Common Vulnerabilities and Exposures (CVE) and advanced persistent threat (APT) reports. Then we use the word2vec algorithm to obtain 100-dimensional word embeddings, which are looked up when converting the input sentence into its embedding representation. The obtained word embedding representation serves as the word-level feature of the input sentence.
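The lookup step described above can be sketched as follows. The 4-dimensional toy table stands in for the 100-dimensional word2vec vectors, and the `<unk>` fallback for out-of-vocabulary tokens is an assumption of this sketch, not stated in the paper:

```python
import numpy as np

def embed_sentence(tokens, table, unk="<unk>"):
    """Map each token to its embedding row in the lookup table;
    out-of-vocabulary tokens fall back to the <unk> vector.
    Returns a (len(tokens), dim) matrix for the encoder."""
    return np.stack([table.get(t, table[unk]) for t in tokens])
```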

In addition, to complete the embedding representation, we extract character-level features for each word of the input sentence. Since convolutional neural networks (CNNs) can extract local features of the input data and select the most representative local features through a pooling strategy, CNNs were exploited to obtain character features in [28–29] and achieved promising results in the corresponding tasks. In this paper, we use the same CNN structure as [28] to obtain the character-level feature, which is then concatenated with the word embedding. Finally, the concatenated vector is input into the feature encoder.
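A minimal NumPy sketch of this character-level extraction, as a 1-D convolution over character embeddings followed by max-over-time pooling. The dimensions are illustrative and the zero-padding rule for short words is an assumption, not the exact structure of [28]:

```python
import numpy as np

def char_cnn_features(char_embs, filters, k=3):
    """Max-over-time pooled CNN features for one word.

    char_embs: (L, d) character embeddings of the word.
    filters:   (n_filters, k * d) flattened width-k convolution kernels.
    Returns a (n_filters,) character-level feature vector.
    """
    L, d = char_embs.shape
    if L < k:  # pad short words so at least one window exists
        char_embs = np.vstack([char_embs, np.zeros((k - L, d))])
        L = k
    # slide a width-k window over the characters, flattening each window
    windows = np.stack([char_embs[i:i + k].ravel() for i in range(L - k + 1)])
    conv = np.tanh(windows @ filters.T)  # (n_windows, n_filters)
    return conv.max(axis=0)              # max pooling over positions
```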

Figure 1: Dynamic attention-based BiLSTM-LSTM model

    3.1.2 BiLSTM Layer

LSTM has been widely used in NLP tasks and has significant advantages in modeling sequences. LSTM integrates the past time-step information and the current input of the sequence to decide the current output, which solves the problem of long-distance dependency when modeling sequences. LSTM consists of the forget gate, input gate and output gate, which are used to control the information flow. The specific implementation of LSTM is

$$
\begin{aligned}
i_t &= \sigma\left(W_i\,[h_{t-1};\,x_t] + b_i\right)\\
f_t &= \sigma\left(W_f\,[h_{t-1};\,x_t] + b_f\right)\\
o_t &= \sigma\left(W_o\,[h_{t-1};\,x_t] + b_o\right)\\
\tilde{c}_t &= \tanh\left(W_c\,[h_{t-1};\,x_t] + b_c\right)\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t\\
h_t &= o_t \odot \tanh(c_t)
\end{aligned}
$$

where $i_t$, $f_t$, $o_t$ and $c_t$ respectively indicate the input gate, forget gate, output gate and cell vector; $\sigma$ and $\tanh$ are both nonlinear activation functions; $W_{(*)}$ are the weight matrices of the corresponding gates; and $b_{(*)}$ are the bias vectors. In addition, $h_t$ indicates the hidden state of the LSTM at time step $t$.
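As a concrete illustration, a single LSTM step with the gates described above can be sketched in NumPy. The stacked-weight layout (one matrix holding all four gates) is an implementation convenience of this sketch, not the paper's code:

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid, the gate activation."""
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM time step. W stacks the four gate weight matrices and
    maps the concatenated [h_prev; x_t] to the gate pre-activations."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([h_prev, x_t]) + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell state
    c = f * c_prev + i * g       # new cell vector
    h = o * np.tanh(c)           # new hidden state
    return h, c
```

A BiLSTM encoder runs two such recurrences, left-to-right and right-to-left, and concatenates the hidden states.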

Figure 2: Unit of the LSTM decoder

    3.2 Adversarial Active Learning

In the adversarial active learning module, we select informative samples by incrementally evaluating the similarity between labeled and unlabeled samples. Based on GANs, we combine adversarial learning with active learning. The architecture of adversarial active learning is depicted in Fig. 3.

Figure 3: Architecture of adversarial active learning

With the labeled sample set $S_L$ and unlabeled sample set $S_U$, we select a labeled sequence $s_L \sim S_L$ and an unlabeled sequence $s_U \sim S_U$ as input to the adversarial active learning module. After $s_L$ and $s_U$ are first transformed into embedding representations, the same BiLSTM structure as in the aforementioned NER module is exploited to obtain their feature encodings, yielding $H_L$ and $H_U$ in the learned latent space. As a result, the BiLSTM encoder and the discriminator form an adversarial network. Within this adversarial framework, the BiLSTM encoder tries to learn feature encodings that deceive the discriminator into predicting that all the learned features come from the labeled sample set, while the discriminator is trained to distinguish labeled samples from unlabeled samples. After the adversarial training, the discriminator outputs a similarity score. A high score indicates that the unlabeled sample carries informativeness similar to that already covered by the labeled set; a low score indicates a large difference in informativeness between the labeled and unlabeled samples. Therefore, the unlabeled samples with low similarity scores are annotated by the oracle.
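Once the discriminator is trained, the sampling step reduces to ranking the unlabeled pool by similarity score and taking the lowest-scoring sentences. A minimal sketch, where `budget` is a hypothetical per-round annotation budget:

```python
import numpy as np

def select_for_annotation(scores, budget):
    """Indices of the `budget` unlabeled samples with the lowest
    discriminator similarity scores, i.e., the samples least similar
    to the labeled set and therefore most informative to annotate."""
    return np.argsort(np.asarray(scores))[:budget]
```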

The objective of the BiLSTM encoder in the adversarial active learning module is defined as minimizing the following loss function

$$\mathcal{L}_{E} = -\,\mathbb{E}_{s_U \sim S_U}\!\left[\log D\!\left(H_U\right)\right]$$

The objective of the discriminator is defined as minimizing the following loss function

$$\mathcal{L}_{D} = -\,\mathbb{E}_{s_L \sim S_L}\!\left[\log D\!\left(H_L\right)\right] - \mathbb{E}_{s_U \sim S_U}\!\left[\log\!\left(1 - D\!\left(H_U\right)\right)\right]$$

We further comprehensively consider both objectives of the proposed NER module and of the BiLSTM encoder (the generator) in adversarial active learning, obtaining the overall objective of NER in the proposed model as below

$$\mathcal{L} = \mathcal{L}_{NER} + \lambda\,\mathcal{L}_{E}$$

    where λ is a hyperparameter used to balance the two parts of the above objective.

After the description of the proposed model, the operation process is depicted in Algorithm 1.

Algorithm 1: Cybersecurity entity recognition with adversarial active learning
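The algorithm body is not reproduced in this extract. Based on the description above, the overall loop can be sketched as follows, where `train_ner`, `train_discriminator` and `annotate` are hypothetical stand-ins for the paper's components:

```python
def adversarial_active_learning(labeled, unlabeled, budget, rounds,
                                train_ner, train_discriminator, annotate):
    """Sketch of the overall loop: alternate NER training, adversarial
    discriminator training, low-similarity sampling, and oracle
    annotation, until the query rounds are exhausted."""
    for _ in range(rounds):
        train_ner(labeled)                               # retrain NER on the current labeled set
        score = train_discriminator(labeled, unlabeled)  # adversarial training; returns a scorer
        # pick the `budget` unlabeled sentences least similar to the labeled set
        ranked = sorted(unlabeled, key=score)[:budget]
        for s in ranked:
            labeled.append(annotate(s))                  # oracle annotation
            unlabeled.remove(s)
    return train_ner(labeled)                            # final model on the enlarged labeled set
```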

    4 Experiment

    4.1 Experiment Setting

The data in the experiment was collected from two sources. From the cybersecurity corpora released in SemEval-2018 Task 8, we selected 500 sentences related to malware, which were annotated and used for initially training the proposed dynamic attention-based BiLSTM-LSTM model on the NER task before using the adversarial active learning algorithm; we then selected and annotated 1464 sentences as the testing set. In addition, we selected 9175 threat intelligence sentences from the AlienVault community, the WeLiveSecurity community, Amazon security-related blogs and APT reports of the past two years. We annotated these cybersecurity sentences and added them to the initial training set, which we used for evaluating the performance of the proposed adversarial active learning model.

Note that when annotating the cybersecurity entities in sentences, we referred to the entity types defined in the Unified Cybersecurity Ontology (UCO 2.0) [2] and implemented the annotation according to those entity types, including organization, location, software, malware, indicator, vulnerability, course-of-action, tool, attack-pattern, and campaign.

In our proposed model, the dimension of word embeddings is set to 100, and the dimension of char embeddings is set to 25. In addition, to obtain the char embedding, the number of CNN filters is set to 20, and the size of the CNN kernels is set to 3. In the feature encoding layer, the dimensions of both the forward LSTM and the reverse LSTM are set to 300, and we utilize dropout in the BiLSTM layer to mitigate overfitting. In the tag decoding layer, the dimension of the LSTM is set to 600. In model training, the number of epochs is set to 100 and the batch size is set to 128. Furthermore, we train the model by stochastic gradient descent with an initial learning rate of 0.001. The hyperparameters are shown in Tab. 1.

For evaluation, we adopted the conventional criteria for information extraction: precision (P), recall (R) and F1-score. The F1-score was used to evaluate the comprehensive performance of the model.
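These criteria follow directly from entity-level counts of true positives, false positives and false negatives, and can be computed as:

```python
def prf1(tp, fp, fn):
    """Entity-level precision, recall and F1 from true-positive,
    false-positive and false-negative counts. F1 is the harmonic
    mean of precision and recall."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1
```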

    4.2 NER Performance Comparison

We first evaluated the proposed novel NER module, called Dynamic-att-BiLSTM-LSTM, which was trained on the full training set and then compared with four mainstream NER models: CRF, BiLSTM-CRF, self-attention based BiLSTM-CRF, and self-attention based BiLSTM-LSTM. The results of the performance comparison are shown in Tab. 2.

Table 1: Hyperparameters of the proposed model

Table 2: Performance comparison of NER models

As can be seen in the results, when BiLSTM is combined with CRF, with BiLSTM as the feature encoder and CRF as the tag decoder, the combined model performs better than the single CRF model, which may be due to BiLSTM's handling of long-distance dependencies. Then, with char-level embeddings applied to NER, the performance of the BiLSTM-CRF model is enhanced, which shows the significance of character embeddings for identifying specific tokens. In addition, when we add the self-attention mechanism to the BiLSTM-CRF model, it turns out that self-attention contributes to the enhancement of entity recognition performance. To demonstrate the effectiveness of LSTM for tag decoding in the self-attention-BiLSTM framework, we compare the LSTM decoder with the conventional chain CRF; LSTM is slightly better for tag decoding. Our proposed Dynamic-att-BiLSTM-LSTM model outperforms the other NER models, achieving the best F1-score of 88.61%, which shows that the proposed dynamic attention mechanism captures more precise dependencies between two tokens.

    4.3 Performance of Adversarial Active Learning

We next evaluated the effectiveness of our proposed active learning algorithm. We used the proposed adversarial active learning to select samples, which were then annotated by the oracle. The NER model was then incrementally retrained on the selected set with 100 training epochs. The performance of the proposed NER model trained on the labeled set sampled by the adversarial active learning algorithm was compared with the performance obtained using the full labeled training set. The results according to the size of the labeled set are shown in Tab. 3. We can see that, as the number of labeled samples increases, the model's performance improves. When the sampled data size reaches 45% of the full labeled training set, we achieve a good F1-score of 88.27%, which is only 0.34% less than when the full training set is used, again showing the effectiveness of the proposed adversarial active learning.

Table 3: Performance comparison for different sizes of the labeled set

We also compared the proposed adversarial active learning model with three conventional uncertainty sampling-based active learning algorithms, based on LC, BALD and MNLP. By using these active learning algorithms to select the samples to be annotated, we were able to compare the performance of the proposed NER model for different sizes of labeled data. We recorded the comparison results until the size of the labeled set reached 45%. The performance comparison is shown in Fig. 4. From the results, we can see that our proposed adversarial active learning algorithm outperforms the other three methods. These results may be due to the complex computation required by the three methods in the sequence labeling task, which can lead to inaccurate sampling. In addition, both the MNLP and BALD methods perform better than LC-based active learning, with MNLP achieving slightly better results than BALD; in general, however, the performance of MNLP and BALD in active learning is quite close. The results show that our proposed adversarial active learning method can be used to incrementally improve the performance of the NER task with low labeling cost.

Figure 4: Performance comparison of the active learning algorithms on the NER task

    5 Conclusion

Named entity recognition (NER) in cybersecurity is a fundamental task for constructing a cybersecurity knowledge graph, which is significant for data-driven proactive cybersecurity. Representative deep learning-based NER models have recently achieved promising performance when there is a profusion of labeled data. However, it is difficult to obtain massive amounts of labeled data for NER in cybersecurity. In addition, traditional uncertainty-based active learning algorithms are complex when applied to sequence data. To address these issues, this paper proposes an adversarial active learning framework to incrementally improve the performance of the NER model with low labeling cost. Moreover, a novel dynamic attention-based BiLSTM-LSTM model is presented for the NER task. The model presents a dynamic attention mechanism to adaptively capture the dependency between two tokens, and employs an LSTM decoder for entity tag decoding. Finally, we evaluate our proposed model through a series of comparison experiments. The results show that the proposed NER model attains better performance, and the proposed adversarial active learning scheme is effective in incrementally selecting informative samples. In future work, we would like to introduce syntactic features into the model to further enhance its performance. In addition, we plan to build a specific-domain dictionary with expert knowledge to rectify the extracted cybersecurity entities.

Acknowledgement: The authors thank the editor and the reviewers for their hard work.

Funding Statement: This work was supported by the National Natural Science Foundation of China under grant 61501515.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
