
    Positive unlabeled named entity recognition with multi-granularity linguistic information


    Ouyang Xiaoye (歐陽小葉), Chen Shudong, Wang Rong

    (*Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, P.R. China)

    (**University of Chinese Academy of Sciences, Beijing 100049, P.R. China)

    (***Key Laboratory of Space Object Measurement Department, Beijing Institute of Tracking and Telecommunications Technology, Beijing 100094, P.R. China)

    Abstract

    Key words: named entity recognition (NER), deep learning, neural network, positive-unlabeled (PU) learning, label-few domain, multi-granularity

    0 Introduction

    Named entity recognition (NER) refers to the task of recognizing named entities in text and classifying them into specified types [1]. NER is also a foundational task in natural language processing (NLP), and supports downstream applications such as relation extraction [2], translation [3], and question answering [4]. At present, traditional methods based on supervised learning use a large amount of high-quality labeled data for NER [5]. However, neural NER typically requires a large amount of manually labeled training data, which is not always available in label-few domains such as the biological, medical, and military domains. Training neural NER with limited labeled data can be very challenging [6-7].

    Researchers have investigated a wide variety of methods and resources to boost the performance of label-few domain NER, e.g., annotation learning and reinforcement learning [8], domain-adaptive fine-tuning [9], a fully Bayesian approach to aggregating multiple sequential annotations [10], adversarial transfer networks [11], joint sentence and token learning [12], and weak supervision to bootstrap NER [13]. Most of these previous studies inject expert knowledge into the sequence labelling model, which is often critical when labeled data is scarce or non-existent. This work presents a positive unlabeled learning approach, positive unlabeled named entity recognition (PUNER), which uses some positive instances and multi-granularity linguistic information to automatically annotate all unlabeled instances. Positive unlabeled (PU) learning refers to learning a classifier from unlabeled instances and positive instances, classifying the unlabeled instances with this classifier, and finally turning all unlabeled instances into annotated instances [14-15]. PUNER solves the problem of a large amount of unlabeled data in the label-few domain through PU learning, and effectively parses rich semantic information to identify correct named entities through multi-granularity linguistic information.

    This paper makes the following three contributions. (1) A novel algorithm, PUNER, is designed, which continuously iterates over the unlabeled data with the PU learning method and combines a neural network-based PU classifier to identify all named entities and their types in the unlabeled data. (2) The PU classifier contains a multi-granularity linguistic information acquisition module, which integrates multi-granularity embeddings of characters, words, and sentences to obtain rich language semantics from the context and helps to understand the meaning of sentences. (3) The experimental results show that PUNER is 1.51% higher than the state-of-the-art AdaPU algorithm on three multilingual NER datasets, and that the performance of PUNER on SIGHAN Bakeoff 2006 is higher than on CoNLL 2003 and CoNLL 2002 due to the different sizes of the training sets.

    1 Related work

    1.1 Named entity recognition

    NER usually adopts a supervised learning approach that uses a labeled dataset to train the model. In recent years, neural networks have become the mainstream of NER [16-17], achieving state-of-the-art performance. Many works use the long short-term memory (LSTM) and conditional random field (CRF) architecture. Ref.[18] further extended it into a bidirectional LSTM-convolutional neural network (CNN)-CRF architecture, where the CRF module was added to optimize the output label sequence. Ref.[19] proposed a task-aware neural language model (LM) termed LM-LSTM-CRF, where character-aware neural language models were incorporated to extract character-level embeddings under a multi-task framework.

    1.2 Label-few domain NER

    The aim of label-few domain modelling is to reduce the need for hand-annotated data in supervised training. A popular method is distant supervision, which relies on external resources such as knowledge bases to automatically label documents with entities that are known to belong to a specific category. Ref.[8] utilized the data generated by distant supervision to perform new-type named entity recognition in new domains; the instance selector is based on reinforcement learning and obtains a feedback reward, aiming to choose positive sentences so as to reduce the effect of noisy annotation. Ref.[9] proposed domain-adaptive fine-tuning, where contextualized embeddings are first fine-tuned to both the source and target domains with a language modelling loss and subsequently fine-tuned on source-domain labelled data. Refs [20,21] generalized this approach with the Snorkel framework, which combines various supervision sources using a generative model to estimate the accuracy of each source. Ref.[22] presented a distant supervision approach to NER in the biomedical domain.

    1.3 Positive unlabeled learning NER

    PU learning is a distant supervision method that can be regarded as a special classification task: learning how to train a classifier with a small number of positive instances and many unlabeled instances. AdaSampling [23] first randomly selects a part of the unlabeled instances as negative instances for training; the process of sampling, modeling, and prediction is then repeated for each iteration, and the final predicted probability is the average over the T iterations. Ref.[24] proposed unbiased positive-unlabeled learning, and Ref.[25] adopted a bounded non-negative positive-unlabeled learning. Ref.[10] proposed a fully Bayesian approach to the problem of aggregating multiple sequential annotations, using a variational expectation-maximization (EM) algorithm to compute posterior distributions over the model parameters. Ref.[13] relied on a broad spectrum of labelling functions to automatically annotate texts from the target domain; these annotations are then merged using a hidden Markov model which captures the varying accuracies and confusions of the labelling functions.

    For label-few domain NER, the PU learning method can solve the problem of having only a small amount of labeled data and a large amount of unlabeled data. At the same time, implementing the PU classifier with a neural network model makes it possible to obtain multi-granularity sentence semantic information and identify named entities. Therefore, a novel PUNER algorithm is adopted, which applies PU learning with multi-granularity linguistic information to perform NER in the label-few domain.

    2 The proposed PUNER algorithm

    2.1 Problem formalization

    PU learning can be regarded as a special form of two-class (positive and negative) classification in which only a set of positive instances P and a set of unlabeled instances, containing both positive and negative instances, are given. This work uses a binary labeling mechanism for the NER task, rather than the mainstream B-begin I-inside O-outside (BIO) or B-begin I-inside O-outside E-end S-single (BIOES) mechanism. This is because the incompleteness of the positive instances affects the accuracy of BIO or BIOES labeling, and the binary labeling mechanism avoids this effect well. Therefore, the NER task here can be regarded as a binary classification task, one per entity type, as illustrated below.
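    To make the binary labeling concrete, the following minimal sketch (the BIO tag strings and function name are illustrative, not from the paper) converts a conventional BIO-tagged sentence into the per-type {0, 1} sequences used here, one sequence per entity type.

    def bio_to_binary(bio_tags, entity_type):
        """Map a BIO tag sequence to binary labels for one entity type:
        1 if the token belongs to an entity of that type, else 0."""
        return [1 if tag.endswith("-" + entity_type) else 0 for tag in bio_tags]

    tags = ["B-PER", "I-PER", "O", "B-LOC", "O"]
    print(bio_to_binary(tags, "PER"))  # [1, 1, 0, 0, 0]
    print(bio_to_binary(tags, "LOC"))  # [0, 0, 0, 1, 0]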

    2.2 Algorithm overview

    The novel PU learning procedure is shown in Algorithm 1, which is inspired by Ref.[26]. It is a two-step approach: first, reliable negative instances are selected from the unlabeled dataset U; then, the positive instances and reliable negative instances are used to train a classification model for predicting new instances.

    Algorithm 1 PUNER algorithm
    Data: positive dataset P and unlabeled dataset U
    Result: predicted classification of all instances y ∈ {0, 1}
    1.  T_0 ← P  // the initial positive training data
    2.  S_0 ← U  // treat all unlabeled instances in U as negative instances, giving S_0 as the initial negative training data
    3.  g_ner^1 ← PULeClassifier(P, S_0)  // train the PU learning classifier g_ner^1 using ⟨P, y=1⟩ ∪ ⟨S_0, y=0⟩ as the initial training dataset
    4.  UL_1 ← g_ner^1(U)  // use g_ner^1 to classify the unlabeled data U, obtaining the labeled data UL_1
    5.  S_1 ← extractNegatives(UL_1)  // get negative instances from the labeled data UL_1
    6.  RN_1 ← S_0  // the initial set of reliable negative instances
    7.  S_1 ← S_0
    8.  T_1 ← P
    9.  while |S_i| ≤ |S_{i-1}| and |P| ≤ |T_i| do:
    10.     i ← i + 1
    11.     g_ner^i ← PULeClassifier(P, RN_{i-1})
    12.     UL_i ← g_ner^i(U)
    13.     RN_i ← extractNegatives(UL_i)
    14.     T_i ← extractPositives(UL_i)
    15. return g_ner^i  // use g_ner^i as the final classifier
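    For illustration, the iteration in Algorithm 1 can be sketched in Python roughly as follows; train_classifier and classify are placeholder callables standing in for training and applying the neural PU classifier g_ner, not the paper's actual implementation.

    def puner_loop(P, U, train_classifier, classify, max_iters=10):
        """Two-step PU loop: train on P plus the current reliable
        negatives, relabel U, and stop when the negative set grows."""
        RN = list(U)                     # S_0 <- U: all unlabeled as negatives
        g = train_classifier(P, RN)      # initial classifier on <P,1> and <S_0,0>
        prev_neg = len(RN)
        for _ in range(max_iters):
            labels = classify(g, U)                              # label all of U
            new_RN = [x for x, y in zip(U, labels) if y == 0]    # extractNegatives
            T = [x for x, y in zip(U, labels) if y == 1]         # extractPositives
            if len(new_RN) > prev_neg:   # negative set grew: stop, as in line 9
                break
            prev_neg, RN = len(new_RN), new_RN
            g = train_classifier(P, RN)  # retrain on P and the reliable negatives
        return g                         # the final classifier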

    In this paper, the PU learning classifier g_ner is a neural-network-based architecture with multi-granularity linguistic information, used to recognize named entities and their types; the specific details of g_ner are given in the next section.

    2.3 PU learning classifier g_ner

    In this section, a neural-network-based architecture is adopted to implement the PU learning classifier g_ner; this architecture is shared by the different entity types, as shown in Fig. 1.

    2.3.1 Multi-granularity word processor

    In this module, the word processor semantically extracts a meaningful word representation at different granularities, i.e., the character-granularity representation e_c(w), the word-granularity representation e_w(w), and the sentence-granularity representation e_s(w).

    For the word w in the sentence s, a convolutional network [27] is used for the character-granularity representation e_c(w) of w, the fine-tuned Stanford GloVe word embedding tool [28] for the word-granularity representation e_w(w) of w, and the fine-tuned BERT embedding tool [17] for the sentence-granularity representation e_s(w) of w. The final word representation is obtained by concatenating these three parts of the embeddings:

    v_t = e_c(w) ⊕ e_w(w) ⊕ e_s(w)

    where ⊕ denotes the concatenation operation. Thus, a sequence of word vectors {v_t} is obtained.

    Fig. 1 Architecture of the PU learning classifier

    The word vector is obtained through the concatenation of multi-granularity linguistic information, that is, multi-granularity features of characters, words, and sentences that cooperate with the task model. Specifically, this work first uses a CNN to generate character-level embeddings, GloVe to generate word-level embeddings, and BERT to generate sentence-level embeddings, and then concatenates the three granular embeddings to obtain more comprehensive and rich language semantics from the context, which further helps in understanding the meaning of the sentence. This in turn makes cooperation with the upper sentence processor module more effective.
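    A hypothetical PyTorch sketch of this word processor is given below. The dimensionalities (30 + 100 + 170 = 300, chosen to match the word embedding size in Section 3.3) and module choices are illustrative assumptions, and the sentence-granularity vectors are assumed to be precomputed by a BERT-style encoder.

    import torch
    import torch.nn as nn

    class MultiGranularityEmbedder(nn.Module):
        def __init__(self, n_chars, n_words, char_dim=30, word_dim=100,
                     sent_dim=170):
            super().__init__()
            self.char_emb = nn.Embedding(n_chars, char_dim)
            # character CNN: convolve over the characters of each word
            self.char_cnn = nn.Conv1d(char_dim, char_dim, kernel_size=3, padding=1)
            self.word_emb = nn.Embedding(n_words, word_dim)  # e.g. GloVe-initialized

        def forward(self, char_ids, word_ids, sent_vecs):
            # char_ids: (seq_len, max_word_len), word_ids: (seq_len,),
            # sent_vecs: (seq_len, sent_dim) from a sentence encoder
            c = self.char_emb(char_ids).transpose(1, 2)     # (seq, char_dim, chars)
            e_c = self.char_cnn(c).max(dim=2).values        # max-pool over characters
            e_w = self.word_emb(word_ids)                   # (seq, word_dim)
            return torch.cat([e_c, e_w, sent_vecs], dim=-1) # v_t = e_c ⊕ e_w ⊕ e_s

    emb = MultiGranularityEmbedder(n_chars=100, n_words=5000)
    v = emb(torch.randint(100, (6, 10)),   # 6 words, 10 characters each
            torch.randint(5000, (6,)),     # 6 word ids
            torch.randn(6, 170))           # 6 precomputed sentence vectors
    # v.shape == (6, 300): the sequence of word vectors {v_t}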

    2.3.2 Sentence processor

    Based on the word vectors {v_t}, the sentence processor employs a layer of gated recurrent units (GRU) [29] to learn the contextual information of the sentence, using a hidden state vector {h_t} to remember important signals. At each step, a new hidden state is computed from the previous hidden state using the same function:

    z_t = σ(W_z v_t + U_z h_{t-1})
    r_t = σ(W_r v_t + U_r h_{t-1})
    h̃_t = tanh(W_h v_t + U_h (r_t ⊙ h_{t-1}))
    h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

    where z_t and r_t are an update gate and a reset gate, σ(·) is the sigmoid function, ⊙ denotes element-wise multiplication, and W_z, W_r, W_h, U_z, U_r and U_h are parameters. e(w_k | s) is the representation of w_k given s.
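    As a minimal sketch, the sentence processor can be realized with PyTorch's built-in GRU over the word vectors {v_t}; whether the paper's GRU is bidirectional is not stated, so the bidirectional setting below is an assumption.

    import torch
    import torch.nn as nn

    gru = nn.GRU(input_size=300, hidden_size=150,
                 batch_first=True, bidirectional=True)  # assumption: 150 units per direction
    v = torch.randn(1, 6, 300)   # one sentence of 6 word vectors {v_t}
    h, _ = gru(v)                # h: (1, 6, 300), the contextual hidden states {h_t}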

    2.3.3 Entity recognition classifier

    The sentence representation e(w | s) is taken as the entity recognition classifier's input, and the probability of the positive class, f(w | s), is defined as a sigmoid over a linear transformation of e(w | s):

    f(w | s) = σ(W_f e(w | s) + b_f)

    The cross-entropy loss function is minimized to learn a better f(w | s):

    ℓ = − Σ_i [ y_i log f(w_i | s) + (1 − y_i) log(1 − f(w_i | s)) ]

    After training, the PU classifier is used to perform label prediction. Since a distinct classifier is built for each entity type, the type with the highest prediction probability (evaluated by f(w | s)) is chosen, and the predictions of the other classifiers are reset to 0. For a sentence s = {w1, w2, w3, w4, w5, w6}, if the label sequence predicted by the classifier of a given type is L = {0, 1, 0, 0, 1, 1}, then {w2} and {w5, w6} are considered two entities of that type.
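    A minimal sketch of this classification-and-decoding step, under the assumption that the per-type classifier is a sigmoid over a 300-dimensional representation: contiguous runs of 1s in a type's predicted label sequence are read off as entity spans, reproducing the {w2} and {w5, w6} example above.

    import torch
    import torch.nn as nn

    classifier = nn.Sequential(nn.Linear(300, 1), nn.Sigmoid())  # assumed form of f(w|s)

    def decode_spans(labels):
        """Group consecutive 1s into (start, end) entity spans (inclusive)."""
        spans, start = [], None
        for i, y in enumerate(labels):
            if y == 1 and start is None:
                start = i                      # a run of 1s begins
            elif y != 1 and start is not None:
                spans.append((start, i - 1))   # the run just ended
                start = None
        if start is not None:
            spans.append((start, len(labels) - 1))
        return spans

    # the worked example from the text: L = {0, 1, 0, 0, 1, 1}
    print(decode_spans([0, 1, 0, 0, 1, 1]))    # [(1, 1), (4, 5)]: {w2} and {w5, w6}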

    3 Experiments

    In order to demonstrate the performance and adaptability of the algorithm, several methods are compared on three multilingual datasets, and details of the implementation and an analysis of the experimental results are given.

    3.1 Compared methods

    Six methods are chosen for comparison with the proposed PUNER. The first four are supervised learning methods: Stanford NER (MEMM) [30], which adds jump features between observation sequences; Stanford NER (CRF) [31], which uses global normalization; BiLSTM [32], which combines a forward LSTM and a backward LSTM; and BiLSTM+CRF [32], which uses BiLSTM as the baseline and learns an optimal path with a CRF in the last layer. The last two are applied to the label-few domain: Matching directly uses the constructed positive instances of named entities to label the testing set, and AdaPU [33] is an adapted PU learning algorithm for NER.

    In addition, parts of the PUNER structure are changed, and the performance of three variations of the proposed PUNER is compared. PUNER_ELMo uses ELMo [16] for sentence embedding instead of BERT; PUNER_biLSTM replaces the GRU with a BiLSTM neural network; and PUNER_att- implements the entity processor module without the attention mechanism.

    3.2 Data sets

    PUNER is evaluated on CoNLL 2003 [34], CoNLL 2002 [35] and SIGHAN Bakeoff 2006 [36]. The corpus statistics of the three datasets are shown in Table 1. These three datasets are labeled with four types, person (PER), location (LOC), organization (ORG), and miscellaneous (MISC), and the training set (TRAIN), development set (DEV) and testing set (TEST) are officially segmented.

    CoNLL 2003 is an English dataset collected from Reuters. There are four types of entities in this dataset, namely PER, LOC, ORG, and MISC. The official split training set is used for model training, testa is used for development and testb is used for testing in the experiments; these contain 23 407, 5918 and 5620 entities, respectively. Besides, there are about 45.6 k additional unlabeled entities.

    CoNLL 2002 is a Spanish NER dataset collected from the Spanish EFE News Agency. It is also annotated with the PER, LOC, ORG, and MISC types. The esp.train set is used for model training, esp.testa is used as the development set and esp.testb is used for testing in the experiments. The TRAIN, DEV and TEST sets contain 18 752, 4324 and 3551 entities, respectively.

    SIGHAN Bakeoff 2006 is a Chinese dataset that uses multiple data sets provided by different institutions for evaluation. This dataset is also labeled with the four types PER, LOC, ORG, and MISC. It has about 32 317 entities in the training set (ner.train), 3667 entities in the development set (ner.dev) and 7403 entities in the testing set (ner.test).

    To satisfy the label-few domain setting, each dataset's training set is used for training, but it should be noted that its annotation information is not used during training. The method of building a named entity dictionary given in Ref.[33] is used to construct positive instances. For CoNLL 2003, the most popular and common names of persons, locations and organizations from Wikipedia are collected to construct the dictionary. For CoNLL 2002, Google Translate is used to translate the English PER, LOC, ORG, and MISC dictionaries into Spanish. And for SIGHAN Bakeoff 2006, a dictionary is built based on Baidu Baike.
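    A simplified sketch of the dictionary-based construction of positive instances is shown below (greedy longest match; the dictionary contents and function name are illustrative, and the actual matching rules of Ref.[33] may differ).

    def dictionary_label(tokens, dictionary, max_len=5):
        """Return binary labels: 1 for tokens covered by a dictionary match."""
        labels = [0] * len(tokens)
        i = 0
        while i < len(tokens):
            for n in range(min(max_len, len(tokens) - i), 0, -1):  # longest match first
                if " ".join(tokens[i:i + n]) in dictionary:
                    labels[i:i + n] = [1] * n
                    i += n
                    break
            else:
                i += 1   # no match starting here; move on
        return labels

    person_dict = {"John Smith", "Mary"}
    print(dictionary_label("John Smith met Mary yesterday".split(), person_dict))
    # [1, 1, 0, 1, 0]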

    3.3 Implementation details

    If a compared method and the PUNER method were evaluated in an identical experimental environment, the published results are copied directly; otherwise, the method is reproduced in the setting of this paper.

    Table 1 Corpus statistics for the CoNLL (en), CoNLL (sp) and Bakeoff (ch) datasets

    The proposed algorithm is implemented using PyTorch. A random search [37] is used for hyperparameter optimization, and the best-performing setting is chosen as the final setting. In this experiment, the Adam optimizer with learning rate decay is applied: the learning rate starts at 0.001 and decays by a factor of 0.9. The batch size is set to 20. The word representation consists of three parts: pretrained GloVe word embeddings, BERT sentence embeddings, and a randomly initialized, trainable CNN encoder for character embeddings; the dimensionality of the word embedding is set to 300. In order to prevent over-fitting, the dropout rate of all GRU layers is set to 0.4. Besides, the positive instances in the PU learning algorithm are selected following previous work [33].
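    In PyTorch terms, the optimizer setup described above corresponds to roughly the following sketch; the model is a placeholder, and how often the decay step is applied is an assumption.

    import torch

    model = torch.nn.Linear(300, 1)  # placeholder for the PU classifier g_ner
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
    batch_size = 20
    # assumed schedule: calling scheduler.step() after each epoch multiplies
    # the learning rate by 0.9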

    3.4 Results

    Experimental results on the CoNLL 2003, CoNLL 2002 and SIGHAN Bakeoff 2006 datasets are shown in Table 2. As can be seen from Table 2(2), among the methods applied in the label-few domain, the proposed PUNER performs better than the others on all three datasets. PUNER achieves excellent results in the label-few domain.

    The last set of methods shown in Table 2(2) are the variations of the proposed PUNER. Using BERT for embedding instead of ELMo increases the F1 score by 1.4% on the CoNLL (en) dataset, 1.3% on the CoNLL (sp) dataset and 0.8% on the Bakeoff (ch) dataset. Choosing GRU to extract semantics instead of BiLSTM improves the F1 score by 0.8% on the CoNLL (en) dataset, 0.3% on the CoNLL (sp) dataset and 0.7% on the Bakeoff (ch) dataset. The attention mechanism improves the F1 score by 1.2% on the CoNLL (en) dataset, 0.8% on the CoNLL (sp) dataset and 1% on the Bakeoff (ch) dataset. The initial multi-granularity linguistic information in the word embedding has an important effect on the subsequent tasks, and the attention mechanism also significantly helps to extract important semantics.

    Table 2 F1 scores on the CoNLL (en), CoNLL (sp) and Bakeoff (ch) testing sets for NER

    Analyzing the performance across the three datasets, the F1 scores rank, from highest to lowest, Bakeoff (ch), CoNLL (en) and CoNLL (sp); the F1 score on the Chinese dataset is 0.54% higher than on the English dataset and 7.78% higher than on the Spanish dataset. Considering the dataset statistics provided in Table 1, the performance difference between the datasets is believed to be mainly caused by the difference in the number of sentences and entities. Specifically, the Bakeoff (ch) set is larger than CoNLL (en) and CoNLL (sp), and the amount of data directly affects the effect of model training. The F1 score on CoNLL (sp) is the worst; this may also be caused by the low quality of the positive instances of CoNLL (sp), because the Spanish positive instances are translated from the positive instances of CoNLL (en), and the translation process may produce noisy data, which affects accuracy.

    Moreover, compared with the previous AdaPU, the performance of the proposed method is improved, because the combined use of BERT, the GRU neural network and the attention mechanism improves the semantic understanding of context. However, as Table 2(1) shows, the performance of PUNER is still worse than that of supervised learning.

    Experiments are also conducted on the three datasets using different sizes of training set to train the model, to study the impact on the F1 score. On each dataset, 20%, 40%, 50%, 60%, 80%, and 100% of the training set are selected for training PUNER. Fig. 2 describes the results on the three datasets. It can be seen from Fig. 2 that as the size of the training set increases, the overall performance of the model also increases, although with fluctuations. Therefore, the amount of data has an impact on the performance of the model. Meanwhile, the performance of the supervised learning method BiLSTM+CRF in Fig. 2 shows that a gap remains between supervised and unsupervised learning, and that further research on unsupervised learning is very meaningful.

    Fig. 2 F1 scores of PUNER on the testing sets of the CoNLL (en), CoNLL (sp) and Bakeoff (ch) datasets when training on different fractions of the training set. The dotted line indicates the F1 score obtained by the supervised learning method BiLSTM+CRF

    4 Conclusion

    A novel PUNER algorithm for the label-few domain is proposed, which uses a PU learning algorithm combined with deep learning methods to obtain multi-granularity linguistic information for the NER task. In PUNER, PU learning uses the positive instances and many unlabeled instances to effectively solve the labeling problem. Meanwhile, a neural-network-based architecture is used to implement the PU learning classifier, which obtains multi-granularity linguistic information and facilitates named entity labeling. Experimental results show that PUNER achieves excellent results in the label-few domain on three multilingual datasets. In future research, graph convolutional networks will be considered to model richer sentence semantics.
