
    Distant Supervised Relation Extraction with Cost-Sensitive Loss

    2019-11-25 10:22:56 · Daojian Zeng, Yao Xiao, Jin Wang, Yuan Dai and Arun Kumar Sangaiah
    Computers, Materials & Continua, 2019, Issue 9

    Daojian Zeng, Yao Xiao, Jin Wang, Yuan Dai and Arun Kumar Sangaiah

    Abstract: Recently, many researchers have concentrated on distant supervision relation extraction (DSRE). DSRE alleviates the lack of data for supervised learning; however, the data automatically labeled by DSRE suffers from a serious class imbalance problem. The majority class obviously dominates the dataset; in this case, most neural network classifiers acquire a strong bias towards the majority class and cannot correctly classify the minority classes. Studies have shown that the degree of separability between classes largely determines performance on imbalanced data. Therefore, in this paper we propose a novel model that combines class-to-class separability with cost-sensitive learning to adjust the maximum reachable cost of misclassification, thus improving performance on imbalanced datasets under distant supervision. Experiments show that our method is more effective for DSRE than baseline methods.

    Keywords: Relation extraction, distant supervision, class imbalance, class separability, cost-sensitive learning.

    1 Introduction

    Relation extraction plays a core role in Natural Language Processing (NLP) and has long been a focus of research. Supervised methods often achieve good results in relation extraction [Kambhatla (2004); Zhou, Su, Zhang et al. (2005)], but they rely on a huge amount of labeled data and require good separation between classes [Raj, Magg and Wermter (2016)].

    To overcome the lack of labeled training data in the supervised paradigm, distant supervision has been proposed, which can automatically generate training data. Distant supervision converts massive unstructured data into labeled data by leveraging existing knowledge bases; the supervised model then uses these labeled data to create features [Mintz, Bills, Snow et al. (2009); Hoffmann, Zhang, Ling et al. (2011); Riedel, Yao and McCallum (2010); Surdeanu, Tibshirani, Nallapati et al. (2012)]. Recently, many researchers have combined distant supervision with deep neural networks to automatically learn features [Zeng, Liu, Lai et al. (2014); Zeng, Liu, Chen et al. (2015); Lin, Shen, Liu et al. (2016); Jiang, Wang, Li et al. (2016); Zeng, Dai, Li et al. (2018)], achieving a series of advances. Distant supervision assumes that if an entity pair expresses a relation in the knowledge base, then every sentence in the dataset containing that entity pair is considered to express the relation. Since this assumption is too strong, and real-world data has its own distribution, distant supervision still has the following shortcomings.

    First, there is a serious class imbalance problem in the data automatically labeled by distant supervision. Second, the automatically generated data has poor class-to-class (C2C) separability.

    Currently, there are two mainstream approaches to class imbalance: changing the dataset's distribution, or adjusting the learning algorithm. The first approach changes the distribution of the data by under-sampling or over-sampling. Under-sampling removes instances from the majority class so that the numbers of majority and minority samples become close; since many instances are discarded, the training set is smaller than the original one, which can cause under-fitting. Over-sampling is the opposite: it adds instances to the minority class to fill the quantity gap between imbalanced classes before learning; due to repeated sampling from the minority class, it is prone to over-fitting. In this paper, we propose a new algorithm to address class imbalance. To reduce the negative influence of artificial class noise in distant supervision, we use a ranking loss function. With the conventional ranking loss, because the cost function treats all individual errors as equally important, the classifier tends to classify all instances into the majority class [Murphey, Guo and Feldkamp (2004)]. To avoid this, we use a cost-sensitive ranking loss: misclassifying an instance from a minority class is punished more than misclassifying a majority instance, which is more conducive to correctly classifying the minority classes.
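    As a minimal illustration of the cost-sensitive idea (as opposed to resampling), the sketch below applies a hypothetical per-class cost to a standard log loss; the function name and toy numbers are ours, not from the paper:

```python
import numpy as np

def cost_sensitive_log_loss(scores, labels, class_costs):
    """Weighted negative log-likelihood: each error is scaled by the
    misclassification cost of its gold class."""
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    probs = e / e.sum(axis=1, keepdims=True)
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(class_costs[labels] * nll))

# Toy imbalanced setup: class 0 is the majority, class 1 the minority.
scores = np.array([[2.0, 0.5], [1.5, 0.2], [0.3, 0.4]])
labels = np.array([0, 0, 1])
uniform = cost_sensitive_log_loss(scores, labels, np.array([1.0, 1.0]))
weighted = cost_sensitive_log_loss(scores, labels, np.array([1.0, 5.0]))
print(weighted > uniform)  # the minority-class error now costs more: True
```

    Raising the cost of the minority class shifts the optimum towards classifying that class correctly, without discarding or duplicating any training data.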

    Using cost-sensitive ranking loss achieves excellent results in most cases. However, because some classes have poor C2C separability in the automatically labeled data, those classes cannot be correctly classified. We therefore use the Silhouette score [Rousseeuw (1987)] as the C2C separability measure and adjust the cost of misclassification accordingly: when C2C separability is good, correct classification is easier, so an error should cost more, and vice versa. Typically, researchers set the cost of misclassification based on the data distribution, and the cost remains unchanged during training. Unlike these works, by considering C2C separability we adjust the maximum reachable cost of misclassification, so that the cost can be learned automatically from the final objective; our method is thus more flexible.

    2 Related work

    Relation extraction automatically identifies the semantic relation between entities and is a very important task in NLP. Supervised learning methods generally yield high performance [Mooney and Bunescu (2006); Zelenko, Aone and Richardella (2003); Zhou, Su, Zhang et al. (2005)], but supervised relation extraction often faces a shortage of labeled training data. Mintz et al. [Mintz, Bills, Snow et al. (2009)] align Freebase, a prevalent knowledge base, with rich unstructured data for distant supervision, so that a large amount of labeled data can be obtained. However, distant supervision suffers from wrong labels. To address this problem, a relaxed distant supervision assumption was proposed for multi-instance learning [Riedel, Yao and McCallum (2010); Hoffmann, Zhang, Ling et al. (2011); Surdeanu, Tibshirani, Nallapati et al. (2012)]. Nguyen et al. [Nguyen and Moschitti (2011)] extend distant supervision by using relations in Wikipedia.

    The above methods are effective for DSRE but need high-quality handcrafted features. Recently, many researchers have used neural networks for DSRE instead of hand-crafted features. Zeng et al. [Zeng, Liu, Lai et al. (2014)] adopt CNNs to extract sentence-level and lexical-level features, making full use of the semantic information of sentences. Santos et al. [Santos, Xiang and Zhou (2015)] propose a pairwise ranking loss function to alleviate the impact of artificial classes. These methods train the classifier on sentence-level annotated data. However, since a fact may correspond to multiple sentences during data generation, much like data collection in indoor localization [Li, Chen, Gao et al. (2018)], these methods cannot be applied directly to DSRE. Therefore, Zeng et al. [Zeng, Liu, Chen et al. (2015)] propose the piecewise convolutional neural network (PCNN) model and incorporate multi-instance learning to solve this problem. Lin et al. [Lin, Shen, Liu et al. (2016)] use an attention mechanism to minimize the negative impact of wrong labels in DSRE. Jiang et al. [Jiang, Wang, Li et al. (2016)] use cross-sentence max-pooling to share information across sentences. Zeng et al. [Zeng, Zeng and Dai (2017)] combine ranking loss and cost-sensitive learning to address class imbalance in DSRE and reduce the impact of the artificial class.

    These works have greatly advanced the relation extraction task. However, [Zeng, Liu, Lai et al. (2014); Zeng, Liu, Chen et al. (2015); Jiang, Wang, Li et al. (2016); Lin, Shen, Liu et al. (2016)] do not address the class imbalance problem. Zeng et al. [Zeng, Zeng and Dai (2017)] propose a new cost-sensitive loss function that replaces the traditional cross-entropy loss, but their costs are predefined and fixed during training; when C2C separability is poor, the expected results are not achieved. Unlike these works, we treat C2C separability as one of the factors that affect the cost of misclassification and make the cost an automatically learnable parameter. Our method learns cost parameters from the final objective, so it is relatively more flexible.

    3 Methods

    The structure of our model is similar to Zeng et al. [Zeng, Zeng and Dai (2017)] and consists of two parts: the PCNNs feature extractor and the ranking-based classifier. As shown in Fig. 1, the PCNNs feature extractor adopts piecewise max pooling to extract the feature vector of each instance in a bag. Then, to select the most appropriate instance and predict its relation, we use the ranking-based classifier shown in Fig. 1(b).

    In the PCNNs feature extractor, we combine word embeddings, which are often used in NLP [Xiang, Yu, Yang et al. (2018)], with position embeddings as the vector representation. We denote the word embeddings as E and the position features as PFs. First, we initialize each word token with its corresponding pre-trained word embedding, trained with the method of Mikolov et al. [Mikolov, Chen, Corrado et al. (2013)]. Then, following Zeng et al. [Zeng, Liu, Lai et al. (2014); Zeng, Liu, Chen et al. (2015)], we obtain the positional features of each word token and also transform them into vectors. Finally, after convolution and piecewise max-pooling, we obtain the feature vector of the sentence.
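    The piecewise max-pooling step can be sketched as follows; the function and the chosen entity positions are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def piecewise_max_pool(conv_out, e1_pos, e2_pos):
    """Piecewise max pooling (PCNN): split the convolved sentence into
    three segments by the two entity positions, max-pool each segment,
    and concatenate the results.

    conv_out: (seq_len, n_filters) convolution output for one sentence
    e1_pos, e2_pos: token indices of the two entities (e1_pos < e2_pos)
    """
    segments = [conv_out[:e1_pos + 1],
                conv_out[e1_pos + 1:e2_pos + 1],
                conv_out[e2_pos + 1:]]
    pooled = [seg.max(axis=0) for seg in segments]
    return np.concatenate(pooled)  # (3 * n_filters,)

conv = np.arange(24, dtype=float).reshape(8, 3)  # 8 tokens, 3 filters
feat = piecewise_max_pool(conv, e1_pos=2, e2_pos=5)
print(feat.shape)  # (9,)
```

    Compared with a single max pool over the whole sentence, pooling each of the three segments separately preserves coarse information about where features occur relative to the two entities.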

    Figure 1: The architecture used in this work

    Each instance's feature vector is denoted as b. We then feed it to the ranking-based classifier. The network uses the dot product to calculate the score of the class label t_i:

    s_ti = b · w_ti                                                      (1)

    where w_ti is the embedding of class label t_i. After calculating the score of each instance, we select the instance with the highest score in the bag and use the corresponding label as the bag label.
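    The bag-level selection just described can be sketched as below; `select_instance` is a hypothetical helper, and the toy feature and label embeddings are ours:

```python
import numpy as np

def select_instance(bag_features, label_embeddings):
    """Score every instance in a bag against every relation label with a
    dot product, then keep the instance whose best score is highest and
    use that instance's best label as the bag label."""
    scores = bag_features @ label_embeddings.T   # (n_instances, n_labels)
    idx = int(scores.max(axis=1).argmax())       # best instance in the bag
    label = int(scores[idx].argmax())            # its predicted relation
    return idx, label

bag = np.array([[1.0, 0.0],   # instance 0
                [0.0, 2.0]])  # instance 1
W = np.array([[1.0, 0.0],     # w for relation 0
              [0.0, 1.0]])    # w for relation 1
idx, label = select_instance(bag, W)
print(idx, label)  # 1 1
```

    Only the selected instance's label propagates to the bag, which is how multi-instance learning tolerates noisy sentences inside a bag.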

    3.1 Cost-sensitive ranking loss

    To reduce the impact of the artificial class and to compare conveniently with baseline methods, we use a cost-sensitive ranking loss. Suppose our training set is composed of N bags; the i-th bag is represented as B_i, and its label is relation r_i. When the i-th bag, whose label is r_i = t_j, is fed into the network, using Eq. (1) we obtain the classification score of the current bag s_tj for class t_j, and the highest score among the negative classes in the current bag s_tk for class t_k. The cost-sensitive ranking loss is given by:

    where t_i represents the class label, t_k ≠ t_j (j, k ∈ {1, …, T}, with T equal to the number of relation types), λ is a constant term, and i indicates that the i-th bag is fed into the network. m_tj and m_tk are obtained by calculating the following Eq. (3); they represent class-specific margins for class t_i, i.e., cost-sensitive parameters:

    where γ is a constant term, and #t_j equals the number of samples corresponding to the relation label t_j.

    We can observe from Eq. (2) that as the score s_tj increases, the first term on the right side of the equation decreases, and as the score s_tk decreases, the second term on the right decreases. The goal of our model is to make the score of the correct class t_j greater than m_tj and the score of the incorrect class t_k smaller than m_tk. Thus, when a minority-class instance is misclassified, our model gives a larger penalty than for a majority-class instance.
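    A sketch of a ranking loss with class-dependent margins in the spirit of Eq. (2), following the pairwise form of Santos et al.; the exact functional form and constants in the paper may differ, and the numbers below are toy values of ours:

```python
import math

def cost_sensitive_ranking_loss(s_pos, s_neg, m_pos, m_neg, lam=2.0):
    """Pairwise ranking loss with class-dependent margins: push the
    correct-class score above m_pos and the best negative-class score
    down relative to m_neg. Larger margins mean costlier errors."""
    return (math.log1p(math.exp(lam * (m_pos - s_pos)))      # correct class
            + math.log1p(math.exp(lam * (m_neg + s_neg))))   # negative class

# The same scores, but a larger margin (e.g. for a minority class),
# yield a larger loss for the identical prediction.
easy = cost_sensitive_ranking_loss(s_pos=2.0, s_neg=-1.0, m_pos=1.0, m_neg=0.5)
hard = cost_sensitive_ranking_loss(s_pos=2.0, s_neg=-1.0, m_pos=3.0, m_neg=0.5)
print(hard > easy)  # True
```

    This is how class-specific margins act as costs: the gradient pressure on a class grows with its margin, so rare classes can be given more pull without resampling.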

    However, one drawback of this method is that it cannot find the optimal value for m_ti. Experiments show that if the cost is simply set according to the class proportions in the data distribution, the performance improvement is not obvious [Zeng, Zeng and Dai (2017)], especially when separability between classes is poor. We therefore treat m_ti as an adaptive parameter; experiments show that, compared to static values, the adaptive cost-sensitive parameter achieves better performance. In this paper, we use C2C separability to adjust the maximum reachable cost of misclassification, turning the originally fixed cost parameter into an optimizable one and updating the corresponding cost for each class.

    3.2 m_ti optimization

    We optimize the weight parameters and m_ti simultaneously during training, that is, we keep one parameter fixed while minimizing the loss with respect to the other [Jiang, Wang, Li et al. (2016)]. In our work, we optimize m_ti in Eq. (2) as follows:

    T is expressed by the following Eq. (5). H is derived from the imbalance ratio and is the maximum reachable cost of misclassification.

    Through the above steps, we obtain an m_ti between 1 and H that is a learnable parameter during training; we call it adaptive m_ti.

    3.3 Class-to-class separability

    Effective learning under imbalanced data depends on the degree of separability between classes [Murphey, Guo and Feldkamp (2004)]. Accordingly, when classes are well separated, a misclassification should be punished more; conversely, when C2C separability is poor, correct classification is hard to achieve, so errors should cost less.

    The Silhouette score is often used as a measure of C2C separability. Its value ranges from -1 to +1 and indicates how close each data point is to its own cluster. A value of +1 means a point lies well within its own cluster, -1 means the point lies entirely in the opposite cluster, and 0 means the point is on the boundary between two clusters. The degree of separability of two clusters is calculated as the sum of the Silhouette scores of all points in the two clusters. So, for class t_j and class t_k, the separability can be calculated by Eq. (6):

    where K(i) = minimum d(i, t_k), and d(i, t_k) is the average dissimilarity of object i in class t_j to all objects in class t_k; similarly, J(i) = minimum d(i, t_j), where d(i, t_j) is the average dissimilarity of object i in class t_j to all other objects in that class (as shown in Fig. 2).
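    A small sketch of a two-class Silhouette-based separability measure in the spirit of Eq. (6); here we average the per-point scores rather than sum them (a scale difference only), and the toy point clouds are ours:

```python
import numpy as np

def pairwise_silhouette(X_j, X_k):
    """Mean Silhouette score over the points of two classes, used as a
    class-to-class separability measure in [-1, 1]."""
    def mean_dist(a, B):
        return np.linalg.norm(B - a, axis=1).mean()
    scores = []
    for X_own, X_other in ((X_j, X_k), (X_k, X_j)):
        for i, x in enumerate(X_own):
            others = np.delete(X_own, i, axis=0)
            a = mean_dist(x, others)      # intra-class dissimilarity
            b = mean_dist(x, X_other)     # inter-class dissimilarity
            scores.append((b - a) / max(a, b))
    return float(np.mean(scores))

# Two far-apart point clouds are perfectly separable; two overlapping
# normal samples are not.
well_sep = pairwise_silhouette(np.zeros((5, 2)), 10.0 + np.zeros((5, 2)))
rng = np.random.default_rng(0)
overlap = pairwise_silhouette(rng.normal(size=(5, 2)), rng.normal(size=(5, 2)))
print(well_sep)            # 1.0
print(well_sep > overlap)  # True
```

    A near-zero or negative score for a pair of relation classes signals the poor C2C separability that motivates lowering their misclassification cost.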

    Figure 2: Relation of all elements included in the computation of S(i)

    We use S to represent the Silhouette score and define the imbalance ratio as in Eq. (7):

    Now, H is defined as follows:

    From Eq. (8) we can observe that if two classes are well separated (in this case |S| = 1), the maximum cost can reach twice IR. In this way, we adjust the maximum reachable cost of misclassification based on separability. The entire optimization process is shown in Algorithm 1.
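    Reading the "twice of IR" remark as H = IR · (1 + |S|) — our assumption about the form of Eq. (8), not the paper's verbatim formula — the separability-adjusted cost cap could be computed as:

```python
def max_reachable_cost(ir, silhouette):
    """Maximum reachable misclassification cost, scaled by separability.
    Assumed form H = IR * (1 + |S|): with |S| = 1 (well separated) the
    cap doubles IR; with S = 0 (inseparable) it stays at IR."""
    return ir * (1.0 + abs(silhouette))

print(max_reachable_cost(4.0, 1.0))  # 8.0
print(max_reachable_cost(4.0, 0.0))  # 4.0
```

    Under this reading, poorly separable class pairs keep a lower cost ceiling, so the learned m_ti cannot over-penalize errors that the features cannot avoid.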

    Algorithm 1: Learning optimal parameters
    Input: network parameters, training data, cost-sensitive parameters
    Output: learned optimal parameters w* and m_ti*
    1  randomly initialize the network parameters, and divide the given training samples into mini-batches
    2  initialize m_ti to 1
    3  while epoch ≠ max-epoch do
    4      for each mini-batch do
    5          forward propagation;
    6          calculate the error by Eq. (2);
    7          calculate the gradient of the error;
    8          update the network parameters
    9      end
    10     calculate the gradient of m_ti by Eq. (4);
    11     update m_ti
    12 end
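    Algorithm 1's alternating updates can be sketched as below; the toy objective, learning rates, and gradient interface are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def train(loss_and_grads, w, m, max_epoch=3, lr_w=0.1, lr_m=0.01, h_cap=8.0):
    """Alternate between updating the network weights w (m fixed) and the
    cost parameter m (w fixed); m starts at 1 and is clipped to [1, H]."""
    for _ in range(max_epoch):
        _, grad_w, _ = loss_and_grads(w, m)   # inner loop: update w
        w = w - lr_w * grad_w
        _, _, grad_m = loss_and_grads(w, m)   # outer step: update m
        m = float(np.clip(m - lr_m * grad_m, 1.0, h_cap))
    return w, m

# Toy differentiable objective just to exercise the alternating loop.
def toy_loss(w, m):
    loss = float((w ** 2).sum() + (m - 2.0) ** 2)
    return loss, 2.0 * w, 2.0 * (m - 2.0)

w, m = train(toy_loss, np.array([4.0]), m=1.0)
print(1.0 <= m <= 8.0)  # m stays within [1, H] while w descends
```

    Keeping one block of parameters fixed per step makes each update a plain gradient step, and the clip enforces the 1-to-H range of the adaptive m_ti from Section 3.2.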

    4 Experiment

    In this section, we first introduce the dataset and evaluation metrics used in our paper. Then, to determine the parameters used in the experiments, we use cross-validation to test several variables. Finally, we show the experimental results in charts and analyze them in detail.

    4.1 Dataset and evaluation metrics

    The dataset used in this paper has been widely used in distant supervision relation extraction; it was developed by Riedel et al. [Riedel, Yao and McCallum (2010)] and used, e.g., by Santos et al. [Santos, Xiang and Zhou (2015)]. It was generated by aligning the NYT corpus with Freebase. We use the 2005-2006 corpus for training and the 2007 corpus for testing.

    The goal of our method is to improve overall precision without sacrificing precision on either the majority or the minority classes. To compare with baseline methods and test the performance of our method, we evaluate the models via precision, recall and F1-score.

    4.2 Experiment settings

    We pretrain a skip-gram model to generate word embeddings. If an entity has multiple word tokens, we use the ## operator to connect the tokens. We randomly initialize the position features from a uniform distribution over [-1, 1]. The parameters of the PCNNs model are set as in Zeng et al. [Zeng, Zeng and Dai (2017)]. All parameters of our model are listed in Tab. 1.

    Table 1: All parameters used in our experiments

    4.3 Baseline

    Among our baseline methods, three use handcrafted features and the others use convolutional neural networks to extract features. Mintz, proposed by Mintz et al. [Mintz, Bills, Snow et al. (2009)], extracts features from all sentences; MultiR, proposed by Hoffmann et al. [Hoffmann, Zhang, Ling et al. (2011)], treats DSRE as a multi-instance learning task; MIML [Surdeanu, Tibshirani, Nallapati et al. (2012)] is a multi-instance, multi-label method for relation extraction; PCNNs+MIL, proposed by [Zeng, Liu, Chen et al. (2015)], extracts bag features using PCNNs and multi-instance learning; CrossMax, proposed by Jiang et al. [Jiang, Wang, Li et al. (2016)], selects features across different instances by combining cross-sentence max-pooling with PCNNs; R-L [Zeng, Zeng and Dai (2017)] uses cost-sensitive learning to address the class imbalance problem.

    4.4 Comparison with baseline methods

    In this part, we present our experimental results in charts and analyze them. In the following charts, we use Ours to denote the method that uses C2C separability.

    We use the class separability score S to adjust the maximum reachable cost of a misclassification, thereby changing the cost-sensitive ranking loss from a fixed cost into an adaptive cost. The precision/recall curves of our method and the baseline methods are shown in Fig. 3. Our method obtains the highest precision at all recall levels and can achieve a maximum recall level of approximately 39%. PCNNs+MIL can achieve a recall level of 36%, but its precision is too low. R-L reaches about 38% recall, but its precision is lower than that of our method. Considering precision and recall together, our method achieves better results.

    4.5 Effect of class separability score on cost-sensitive ranking loss

    Since our model degrades to R-L after removing the C2C separability measure, to verify the impact of C2C separability on the cost-sensitive ranking loss we compute the F1-score for several relations, comparing the R-L baseline with fixed cost against our method with adaptive cost. The results are in Tab. 2.

    Tab. 2 shows the advantage of incorporating the C2C separability measure. In particular, for the relation labels people/person/place_lived and people/person/place_of_birth, the F1-score is lower under the R-L baseline because of the poor separability of these two relation classes. Since our approach takes C2C separability into account, the classification performance on these two classes is greatly improved. In summary, incorporating the class separability measure into the cost-sensitive ranking loss effectively improves performance.

    Table 2: F1-score for some relations, verifying the impact of class separability

    5 Conclusions

    We concentrate on the class imbalance problem in DSRE. We use the Silhouette score to measure C2C separability and incorporate this measure into a cost-sensitive ranking loss to adjust the maximum applicable cost. Extensive experiments show that our method significantly improves the results: the class imbalance problem in DSRE can be effectively mitigated by incorporating the C2C separability measure into the cost-sensitive ranking loss. In future work, we want to further study the impact and differences of other loss functions and cost-sensitive strategies in DSRE.

    Acknowledgments: This research work is supported by the National Natural Science Foundation of China (Nos. 61602059, 61772454, 6171101570), the Hunan Provincial Natural Science Foundation of China (No. 2017JJ3334), the Research Foundation of the Education Bureau of Hunan Province, China (No. 16C0045), and the Open Project Program of the National Laboratory of Pattern Recognition (NLPR).
