
Positive unlabeled named entity recognition with multi-granularity linguistic information

High Technology Letters, 2021, No. 4

Ouyang Xiaoye (歐陽小葉), Chen Shudong, Wang Rong

(*Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, P.R. China)

(**University of Chinese Academy of Sciences, Beijing 100049, P.R. China)

(***Key Laboratory of Space Object Measurement Department, Beijing Institute of Tracking and Telecommunications Technology, Beijing 100094, P.R. China)

    Abstract

Key words: named entity recognition (NER), deep learning, neural network, positive-unlabeled (PU) learning, label-few domain, multi-granularity

0 Introduction

Named entity recognition (NER) refers to the task of recognizing named entities in text and classifying them into specified types[1]. NER is also a foundational task in natural language processing (NLP), and supports downstream applications such as relation extraction[2], translation[3], and question answering[4]. At present, traditional methods based on supervised learning use a large amount of high-quality labeled data for NER[5]. However, neural NER typically requires a large amount of manually labeled training data, which is not always available in label-few domains such as the biological, medical, and military domains. Training neural NER with limited labeled data can be very challenging[6-7].

Researchers have investigated a wide variety of methods and resources to boost the performance of label-few domain NER, e.g., annotation learning and reinforcement learning[8], domain-adaptive fine-tuning[9], a fully Bayesian approach to aggregate multiple sequential annotations[10], adversarial transfer networks[11], joint sentence and token learning[12], and weak supervision to bootstrap NER[13]. Most of these previous studies inject expert knowledge into the sequence labelling model, which is often critical when labeled data is scarce or non-existent. This work presents a positive unlabeled learning approach, positive unlabeled named entity recognition (PUNER), which uses some positive instances together with multi-granularity linguistic information to automatically annotate all unlabeled instances. Positive unlabeled (PU) learning refers to learning a classifier from unlabeled instances and positive instances, classifying the unlabeled instances with this classifier, and finally turning all unlabeled instances into annotated instances[14-15]. PUNER solves the problem of a large amount of unlabeled data in the label-few domain by PU learning, and effectively parses rich semantic information to identify correct named entities through multi-granularity linguistic information.

This paper makes the following three contributions. (1) A novel algorithm, PUNER, is designed, which continuously iterates over the unlabeled data through the PU learning method and combines a neural-network-based PU classifier to identify all named entities and their types in the unlabeled data. (2) The PU classifier contains a multi-granularity language information acquisition module, which integrates multi-granularity embeddings of characters, words, and sentences to obtain rich language semantics from the context and helps to understand the meaning of sentences. (3) The experimental results show that PUNER is 1.51% higher than the most advanced AdaPU algorithm on three multilingual NER datasets, and the performance of PUNER on SIGHAN Bakeoff 2006 is higher than that on CoNLL 2003 and CoNLL 2002 due to the different sizes of the training sets.

    1 Related work

    1.1 Named entity recognition

NER usually adopts a supervised learning approach that uses a labeled dataset to train the model. In recent years, neural networks have become the mainstream approach to NER[16-17], achieving state-of-the-art performance. Many works use the long short-term memory (LSTM) and conditional random field (CRF) architecture. Ref.[18] further extended it into a bidirectional LSTM-convolutional neural network (CNN)-CRF architecture, where the CRF module was added to optimize the output label sequence. Ref.[19] proposed a task-aware neural language model (LM) termed LM-LSTM-CRF, where character-aware neural language models were incorporated to extract character-level embeddings under a multi-task framework.

    1.2 Label-few domain NER

The aim of label-few domain modelling is to reduce the need for hand-annotated data in supervised training. A popular method is distant supervision, which relies on external resources such as knowledge bases to automatically label documents with entities that are known to belong to a specific category. Ref.[8] utilized the data generated by distant supervision to perform named entity recognition for new types in new domains; its instance selector is based on reinforcement learning and obtains a feedback reward, aiming to choose positive sentences and reduce the effect of noisy annotation. Ref.[9] proposed domain-adaptive fine-tuning, where contextualized embeddings are first fine-tuned to both the source and target domains with a language modelling loss and subsequently fine-tuned to source-domain labelled data. Refs[20,21] generalized this approach with the Snorkel framework, which combines various supervision sources using a generative model to estimate the accuracy of each source. Ref.[22] presented a distant supervision approach to NER in the biomedical domain.

    1.3 Positive unlabeled learning NER

PU learning is a distant supervision method that can be regarded as a special classification task: learning how to train a classifier with a small number of positive instances and many unlabeled instances. AdaSampling[23] first randomly selects a part of the unlabeled instances as negative instances for training; the process of sampling, modeling, and prediction is then repeated in each iteration, and the final predicted probability is the average over the T iterations. Ref.[24] proposed unbiased positive-unlabeled learning, and Ref.[25] adopted a bounded non-negative positive-unlabeled learning. Ref.[10] proposed a fully Bayesian approach to the problem of aggregating multiple sequential annotations, using a variational expectation-maximization (EM) algorithm to compute posterior distributions over the model parameters. Ref.[13] relied on a broad spectrum of labelling functions to automatically annotate texts from the target domain; these annotations are then merged using a hidden Markov model which captures the varying accuracies and confusions of the labelling functions.

For label-few domain NER, the PU learning method can solve the problem of having only a small amount of labeled data and a large amount of unlabeled data. At the same time, implementing the PU classifier with a neural network model makes it possible to obtain multi-granularity sentence semantic information and identify named entities. Therefore, a novel PUNER algorithm is adopted, which applies PU learning with multi-granularity linguistic information to perform NER in the label-few domain.

    2 The proposed PUNER algorithm

    2.1 Problem formalization

PU learning can be regarded as a special form of two-class (positive and negative) classification in which only a set of positive instances P and a set of unlabeled instances containing both positive and negative instances are given. This work uses a binary labeling mechanism for the NER task, rather than the mainstream B-begin I-inside O-outside (BIO) or B-begin I-inside O-outside E-end S-single (BIOES) mechanisms, as illustrated in the sketch below. This is because incomplete positive instances degrade the accuracy of BIO or BIOES labeling, and the binary labeling mechanism avoids this effect well. Therefore, the NER task here can be regarded as a binary classification task.
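To make the difference concrete, the following minimal sketch contrasts the mainstream BIO scheme with the per-type binary scheme used here (the sentence and its labels are hypothetical illustrations, not drawn from the paper's datasets):

tokens = ["John", "Smith", "visited", "Paris"]   # hypothetical sentence

# Mainstream BIO scheme: a single tag sequence covers all entity types,
# so one missing positive label can corrupt a whole B/I span.
bio = ["B-PER", "I-PER", "O", "B-LOC"]

# Binary scheme used in this work: one {0, 1} sequence per entity type,
# turning NER into independent per-type binary word classification.
per_labels = [1, 1, 0, 0]   # targets for the PER classifier
loc_labels = [0, 0, 0, 1]   # targets for the LOC classifier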

    2.2 Algorithm overview

The novel PU learning algorithm is shown in Algorithm 1, which is inspired by Ref.[26]. It is a two-step approach: first, reliable negative instances are selected from the unlabeled dataset U; then the positive instances and reliable negative instances are used to train a classification model that predicts labels for new instances.

Algorithm 1 PUNER
Data: positive dataset P and unlabeled dataset U
Result: predicted classification of all instances y ∈ {0, 1}
1. T_0 ← P // the initial positive training data
2. S_0 ← U // treat all unlabeled instances in U as negative instances; S_0 is the initial negative training data
3. g_ner^1 ← PULeClassifier(P, S_0) // train the PU learning classifier g_ner^1 with ⟨P, y = 1⟩ ∪ ⟨S_0, y = 0⟩ as the initial training dataset
4. UL_1 ← g_ner^1(U) // use g_ner^1 to classify the unlabeled data U and obtain the labeled data UL_1
5. S_1 ← extractNegatives(UL_1) // get negative instances from the labeled data UL_1
6. RN_1 ← S_0 // get the initial set of reliable negative instances RN_1
7. S_1 ← S_0
8. T_1 ← P
9. while |S_i| ≤ |S_{i-1}| and |P| ≤ |T_i| do:
10.   i ← i + 1
11.   g_ner^i ← PULeClassifier(P, RN_{i-1})
12.   UL_i ← g_ner^i(U)
13.   RN_i ← extractNegatives(UL_i)
14.   T_i ← extractPositives(UL_i)
15. return g_ner^i // use g_ner^i as the final classifier
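The loop structure of Algorithm 1 can be sketched in plain Python as follows. This is a minimal sketch only: train_classifier stands in for the PULeClassifier step, instances are treated as hashable items, the max_rounds safeguard is added for the sketch, and the stopping test reflects the reading of step 9 given above, not the authors' released code.

def puner(P, U, train_classifier, max_rounds=50):
    # train_classifier(pos, neg) returns a callable g mapping an
    # instance to a {0, 1} label; P and U are collections of instances.
    RN = set(U)                               # steps 2/6: all of U as negatives
    g = train_classifier(pos=P, neg=RN)       # step 3: initial classifier
    prev_neg = len(RN)
    for _ in range(max_rounds):               # safeguard added for the sketch
        labeled = {x: g(x) for x in U}        # steps 4/12: relabel U
        neg = {x for x, y in labeled.items() if y == 0}   # extractNegatives
        pos = {x for x, y in labeled.items() if y == 1}   # extractPositives
        # Step 9 guard (as read above): stop once the negative set grows
        # or the predicted positives no longer cover |P|.
        if len(neg) > prev_neg or len(pos) < len(P):
            break
        prev_neg, RN = len(neg), neg
        g = train_classifier(pos=P, neg=RN)   # step 11: retrain on reliable negatives
    return g                                  # step 15: final classifier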

In this paper, the PU learning classifier g_ner is a neural-network-based architecture with multi-granularity linguistic information used to recognize named entities and their types, and the specific introduction of g_ner is given in the next section.

2.3 PU learning classifier g_ner

In this section, a neural-network-based architecture is adopted to implement the PU learning classifier g_ner, and this architecture is shared by different entity types, as shown in Fig.1.

    2.3.1 Multi-granularity word processor

In this module, the word processor extracts semantically meaningful word representations at different granularities, i.e., the character-granularity representation e_c(w), the word-granularity representation e_w(w), and the sentence-granularity representation e_s(w).

For the word w in the sentence s, a convolution network[27] is used for the char-granularity representation e_c(w) of w, the fine-tuned Stanford GloVe word embedding tool[28] for the word-granularity representation e_w(w) of w, and the fine-tuned Bert embedding tool[17] for the sentence-granularity representation e_s(w) of w. The final word representation is obtained by concatenating these three parts of embeddings:

v_t = e_c(w) ⊕ e_w(w) ⊕ e_s(w)

where ⊕ denotes the concatenation operation. Thus, a sequence of word vectors {v_t} is obtained.

    Fig.1 Architecture of PU learning classifier

The word vector is obtained through the concatenation of multi-granularity linguistic information, that is, multi-granularity features of characters, words, and sentences that cooperate with the task model. Specifically, this work first uses a CNN to generate char-level embeddings, GloVe to generate word-level embeddings, and Bert to generate sentence-level embeddings, and then concatenates the three granular embeddings to obtain more comprehensive and rich language semantics in context, which further helps understand the meaning of the sentence. Finally, this makes cooperation with the upper sentence processor module more effective.
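As a concrete sketch of this concatenation (the embedding dimensions 30/300/768 for char/GloVe/Bert are illustrative assumptions, not the paper's exact configuration):

import torch

def word_vector(e_c, e_w, e_s):
    # v_t = e_c(w) ⊕ e_w(w) ⊕ e_s(w): concatenate the three granularities
    return torch.cat([e_c, e_w, e_s], dim=-1)

v_t = word_vector(torch.randn(30), torch.randn(300), torch.randn(768))
print(v_t.shape)   # torch.Size([1098])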

    2.3.2 Sentence processor

Based on the word vectors {v_t}, the sentence processor employs a layer of gated recurrent units (GRU)[29] to learn the contextual information of the sentence, using a hidden state vector {h_t} to remember important signals. At each step, a new hidden state is computed from the previous hidden state using the same function:

z_t = σ(W_z v_t + U_z h_{t-1})
r_t = σ(W_r v_t + U_r h_{t-1})
h̃_t = tanh(W_h v_t + U_h (r_t ⊙ h_{t-1}))
h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t

where z_t and r_t are an update gate and a reset gate, σ(·) is the sigmoid function, ⊙ denotes element-wise multiplication, and W_z, W_r, W_h, U_z, U_r and U_h are parameters. e(w_k/s) is the representation of w_k given s.
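In PyTorch, the sentence processor reduces to a single GRU layer over the word vectors {v_t}. The sketch below assumes the 1098-dimensional word vectors from the previous sketch and an illustrative hidden size of 256; neither size is published by the paper.

import torch
import torch.nn as nn

gru = nn.GRU(input_size=1098, hidden_size=256, batch_first=True)
v = torch.randn(1, 12, 1098)   # one sentence of 12 word vectors v_t
h, _ = gru(v)                  # h[0, t] is the contextual hidden state h_t
print(h.shape)                 # torch.Size([1, 12, 256])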

    2.3.3 Entity recognition classifier

The sentence representation e(w/s) is taken as the entity detection classifier's input, and the probability of the positive class f(w/s) is defined as

f(w/s) = σ(W_f e(w/s) + b_f)

where W_f and b_f are classifier parameters. The cross-entropy loss used to learn a better f(w/s) is minimized and defined as

ℓ = −(1/n) Σ_{k=1}^{n} [ y_k log f(w_k/s) + (1 − y_k) log(1 − f(w_k/s)) ]

where y_k ∈ {0, 1} is the label of w_k and n is the number of words in the sentence.
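A minimal sketch of this classifier head and loss follows; the linear layer and its 256-dimensional input are assumptions carried over from the previous sketches, and BCEWithLogitsLoss fuses the sigmoid with the cross-entropy for numerical stability.

import torch
import torch.nn as nn

head = nn.Linear(256, 1)              # produces the logit of f(w/s)
criterion = nn.BCEWithLogitsLoss()    # sigmoid + cross-entropy, fused

e_ws = torch.randn(12, 256)                 # representations e(w/s)
y = torch.randint(0, 2, (12, 1)).float()    # binary word labels
loss = criterion(head(e_ws), y)             # the loss being minimized
loss.backward()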

After training, the PU classifier is used to perform label prediction. However, since a distinct classifier is established for each entity type, the type with the highest prediction probability (evaluated by f(w/s)) is chosen, and the predictions of the other classifiers are reset to 0. For the sentence s = {w1, w2, w3, w4, w5, w6}, if the label sequence predicted by the classifier of a given type is L = {0, 1, 0, 0, 1, 1}, then {w2} and {w5, w6} are considered two entities of that type, as decoded in the sketch below.
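The span-decoding step described above can be sketched as follows (0-indexed, so {w2} and {w5, w6} correspond to the spans (1, 1) and (4, 5)):

def decode_entities(labels):
    # Turn a per-type {0, 1} sequence into (start, end) entity spans.
    spans, start = [], None
    for i, y in enumerate(labels):
        if y == 1 and start is None:
            start = i                       # an entity begins at w_{i+1}
        elif y == 0 and start is not None:
            spans.append((start, i - 1))    # the entity ended at w_i
            start = None
    if start is not None:                   # entity running to sentence end
        spans.append((start, len(labels) - 1))
    return spans

print(decode_entities([0, 1, 0, 0, 1, 1]))   # [(1, 1), (4, 5)]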

    3 Experiments

In order to demonstrate the performance and adaptability of the algorithm, several methods are compared on three multilingual datasets, and details of the implementation and an analysis of the experimental results are given.

    3.1 Compared methods

Six methods are chosen for comparison with the proposed PUNER. The first four are supervised learning methods: Stanford NER (MEMM)[30], which adds jump features between observation sequences; Stanford NER (CRF)[31], which uses global normalization; BiLSTM[32], which combines a forward LSTM and a backward LSTM; and BiLSTM+CRF[32], which uses BiLSTM as the baseline and learns an optimal path with a CRF in the last layer. The last two are applied to the label-few domain: Matching, which directly uses the constructed named entity positive instances to label the testing set, and AdaPU[33], an adapted PU learning algorithm for NER.

In addition, the partial structure of PUNER is also varied, and the performance of three variants of the proposed PUNER is compared. PUNER_ELMo uses ELMo[16] for sentence embedding instead of Bert; PUNER_biLSTM replaces the GRU with a BiLSTM neural network; and PUNER_att- implements the entity processor module without the attention mechanism.

    3.2 Data sets

PUNER is evaluated on CoNLL 2003[34], CoNLL 2002[35] and SIGHAN Bakeoff 2006[36]. The corpora statistics of the three datasets are shown in Table 1. These three datasets are labeled with four types: person (PER), location (LOC), organization (ORG), and miscellaneous (MISC); the training set (TRAIN), development set (DEV) and testing set (TEST) are officially split.

CoNLL 2003 is an English dataset collected from Reuters. There are four types of entities in this dataset, namely PER, LOC, ORG, and MISC. The official split of the training set is used for model training, testa is used for development and testb is used for testing in the experiments; they contain 23 407, 5918 and 5620 entities, respectively. Besides, there are about 45.6 k additional unlabeled entities.

CoNLL 2002 is a Spanish NER dataset collected from the Spanish EFE News Agency. It is also annotated with the PER, LOC, ORG, and MISC types. The esp.train set is used for model training, esp.testa for development and esp.testb for testing in the experiments. The TRAIN, DEV and TEST sets contain 18 752, 4324 and 3551 entities, respectively.

SIGHAN Bakeoff 2006 is a Chinese dataset that uses multiple data sets provided by different institutions for evaluation. This dataset is also labeled with the four types PER, LOC, ORG, and MISC. It has about 32 317 entities in the training set (ner.train), 3667 entities in the development set (ner.dev) and 7403 entities in the testing set (ner.test).

To qualify as label-few domain NER, each dataset's training set is used for training without its annotation information. The method of building a named entity dictionary given in Ref.[33] is used to construct positive instances, as sketched below. For CoNLL 2003, the most popular and common names of persons, locations and organizations from Wikipedia are collected to construct the dictionary. For CoNLL 2002, Google Translator is used to translate the English PER, LOC, ORG, MISC dictionary into Spanish. And for SIGHAN Bakeoff 2006, a dictionary based on Baidu Baike is built.
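A sketch of this dictionary-based construction of positive instances follows; the dictionary entries are placeholders, not the actual Wikipedia or Baidu Baike resources, and the longest-match rule is an assumption in the spirit of Ref.[33].

PER_DICT = {("John", "Smith"), ("Alice",)}   # placeholder entity dictionary

def mark_positives(tokens, dictionary, max_len=4):
    labels = [0] * len(tokens)   # 0 means "unlabeled" here, not negative
    for i in range(len(tokens)):
        for n in range(min(max_len, len(tokens) - i), 0, -1):
            if tuple(tokens[i:i + n]) in dictionary:
                labels[i:i + n] = [1] * n    # longest dictionary match wins
                break
    return labels

print(mark_positives(["John", "Smith", "met", "Alice"], PER_DICT))   # [1, 1, 0, 1]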

    3.3 Implementation details

Where the published results of a compared method were obtained in an experimental environment identical to that of PUNER, those results are copied directly; otherwise the method is reproduced in the setting of this paper.

    Table 1 Corpora statistics for the CoNLL(en),CoNLL(sp)and Bakeoff(ch)datasets

The proposed algorithm is implemented using PyTorch. A random search[37] is used for hyperparameter optimization, and the best-performing setting is chosen as the final setting. In this experiment, the Adam optimizer with learning rate decay is applied: the learning rate starts from 0.001 and decays by a factor of 0.9, and the batch size is set to 20, as sketched below. The word representation consists of three parts: pretrained GloVe word embeddings, sentence Bert embeddings, and a randomly initialized CNN encoder trained for character embeddings. The dimensionality of the word embedding is set to 300. In order to prevent over-fitting, the dropout rate of all GRU layers is set to 0.4. Besides, the positive instances in the PU learning algorithm are selected following previous work[33].
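The optimization setup described above can be sketched as follows; the stand-in model and the once-per-epoch decay interval are assumptions, since the paper states only the initial rate, the 0.9 decay factor and the batch size.

import torch

model = torch.nn.Linear(1098, 1)   # stand-in for the PU classifier g_ner
opt = torch.optim.Adam(model.parameters(), lr=0.001)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9)

for epoch in range(3):
    x = torch.randn(20, 1098)          # batch size 20, as stated above
    loss = model(x).pow(2).mean()      # placeholder loss for illustration
    opt.zero_grad()
    loss.backward()
    opt.step()
    sched.step()                       # lr <- lr * 0.9 (assumed per-epoch)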

    3.4 Results

Experimental results on the CoNLL 2003, CoNLL 2002 and SIGHAN Bakeoff 2006 datasets are shown in Table 2. As can be seen from Table 2(2), among the methods applied to the label-few domain, the performance of the proposed PUNER is better than the others on all three datasets. PUNER achieves excellent results in the label-few domain.

The last set of methods shown in Table 2(2) are variants of the proposed PUNER. Using Bert for embedding instead of ELMo increases the F1 score by 1.4% on the CoNLL(en) dataset, 1.3% on the CoNLL(sp) dataset and 0.8% on the Bakeoff(ch) dataset. Choosing GRU to extract semantics instead of BiLSTM improves the F1 score by 0.8% on the CoNLL(en) dataset, 0.3% on the CoNLL(sp) dataset and 0.7% on the Bakeoff(ch) dataset. The attention mechanism improves the F1 score by 1.2% on the CoNLL(en) dataset, 0.8% on the CoNLL(sp) dataset and 1% on the Bakeoff(ch) dataset. The initial multi-granularity linguistic information in the word embedding has an important effect on subsequent tasks, and at the same time, the attention mechanism also significantly helps to extract important semantics.

    Table 2 F1 scores on CoNLL(en),CoNLL(sp)and Bakeoff(ch)testing set for NER

Analyzing the different performance results on these three datasets, the ranking of F1 values is Bakeoff(ch), CoNLL(en) and CoNLL(sp). The F1 score on the Chinese dataset is 0.54% higher than on the English dataset and 7.78% higher than on the Spanish dataset. Considering the dataset statistics provided in Table 1, it is believed that the performance difference between datasets is mainly caused by the difference in the number of sentences and entities. Specifically, the Bakeoff(ch) sets are larger than those of CoNLL(en) and CoNLL(sp), and the amount of data directly affects model training. From the experimental results, the F1 score on CoNLL(sp) is the worst. This may also be caused by the low quality of the positive instances of CoNLL(sp), because the Spanish positive instances are translated from those of CoNLL(en), and the translation process may produce noisy data, which affects accuracy.

Moreover, compared with the previous AdaPU, the performance of the proposed method is improved, because the combined use of Bert, the GRU neural network and the attention mechanism improves the semantic understanding of context. However, as Table 2(1) shows, the performance of PUNER is still worse than that of supervised learning.

Experiments are also conducted on the three datasets using different sizes of training sets to train the model and studying the impact on F1 values. On each dataset, 20%, 40%, 50%, 60%, 80%, and 100% of the training set are selected for training PUNER, respectively. Fig.2 describes the results of this study on the three datasets. It can be seen from Fig.2 that as the size of the training set increases, the overall performance of the model also increases, despite fluctuations. Therefore, the amount of data has an impact on the performance of the model. Meanwhile, the BiLSTM+CRF curve in Fig.2 shows that a gap remains between supervised and unsupervised learning, which makes further research on unsupervised learning very meaningful.

Fig.2 F1 of PUNER on the testing sets of the CoNLL(en), CoNLL(sp) and Bakeoff(ch) datasets when training with different fractions of the training dataset. The dotted line indicates the F1 value obtained by the supervised learning method BiLSTM+CRF

    4 Conclusion

A novel PUNER algorithm for the label-few domain is proposed, which uses a PU learning algorithm combined with deep learning methods to obtain multi-granularity language information for the NER task. In PUNER, PU learning uses the positive instances and many unlabeled instances to effectively solve the labeling problem. Meanwhile, a neural-network-based architecture is used to implement the PU learning classifier, which obtains multi-granularity linguistic information and facilitates named entity labeling. Experimental results show that PUNER achieves excellent results in the label-few domain on three multilingual datasets. In future research, graph convolutional networks will be considered to model richer sentence semantics.
