
    Capturing semantic features to improve Chinese event detection


Xiaobo Ma | Yongbin Liu | Chunping Ouyang2

1 School of Computing, University of South China, Hengyang, China

2 Hunan Provincial Base for Scientific and Technological Innovation Cooperation, Hunan, China

Abstract Current Chinese event detection methods commonly use word embedding to capture semantic representation, but these methods find it difficult to capture the dependence relationship between trigger words and other words in the same sentence. A simple evaluation shows that a dependency parser can effectively capture dependency relationships and improve the accuracy of event categorisation. This study proposes a novel architecture that models a hybrid representation to summarise semantic and structural information from both characters and words. The model can capture rich semantic features for the event detection task by incorporating the semantic representation generated from the dependency parser. The authors evaluate different models on the KBP 2017 corpus. The experimental results show that the proposed method significantly improves performance in Chinese event detection.

KEYWORDS dependency parser, event detection, hybrid representation learning, semantic feature

    1|INTRODUCTION

Event Detection (ED) is a key step in event extraction, which aims to recognise event instances of a particular type in plain text. Specifically, given a sentence, ED is required to decide whether the sentence contains event triggers, and if so, to identify the specific event type. For example, in the sentence ‘He bought a plane ticket and arrived in Sydney on November 17th’, an event detection system should detect a ‘transaction’ event triggered by ‘bought’ and a ‘movement’ event triggered by ‘arrived’.

Chinese event extraction has made progress recently. To date, many methods [1-3] have been proposed and have obtained state-of-the-art performance. However, the existing event extraction methods find it challenging to capture sufficient semantic information from plain text, because a word may have different meanings in different sentences. For example, in Table 1, S1 is a sentence in which the word ‘下課’ is equivalent to resign, but in S2 the same word ‘下課’ expresses the meaning that the class is over. The word-trigger mismatch problem also exists, because triggers do not always exactly match a word. In S3 of Table 1, the event should be triggered by ‘落入法網’, which is a cross-word trigger; the word segmentation tool divides ‘落入法網’ into ‘落入’ and ‘法網’, which makes it impossible to extract the complete trigger. In S4, there is more than one trigger in the word ‘擊斃’ (shoot and kill): a ‘shoot’ event triggered by ‘擊’ and a ‘kill’ event triggered by ‘斃’. These triggers are called inside-word triggers.

To improve event detection quality, previous methods often captured additional information, such as syntactic features. The dependency parser is an effective way to capture syntactic features. Using a dependency parser, a sentence can be labelled with structured information that contains dependency relations. An arc represents a dependency relation that connects a dependent word to a headword in a sentence. For example, in Figure 1, an arc represents a dependency relation (dobj) that connects the dependent word ‘公司’ (company) to the headword ‘合并’ (merger). The dependency relation provides rich semantic features for models and performs reasonably well. For example, in Figure 1, ‘合并’ (merger) is a trigger word. From the dependency relation between ‘合并’ (merger) and its entity ‘公司’ (company), we can exploit the entity to assist the classification of the trigger and discern the event type ‘Merge-Org’. These related entities, called cue words, can provide information that assists trigger classification. However, with traditional word embedding it is difficult to take full advantage of these cue words, because they are scattered across the sentence. Therefore, we connect cue words to trigger candidates by adding dependency relations. In most cases, these additional features provide helpful information.
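As an illustration, cue words reachable through dependency arcs can be gathered with a few lines of code. This is a minimal sketch, not the authors' implementation: the arcs are hand-written (head, relation, dependent) triples matching the Figure 1 example, as a parser such as Stanford CoreNLP might produce them.

```python
# Hypothetical sketch: dependency arcs as (head, relation, dependent) triples
# for the Figure 1 sentence.
arcs = [
    ("宣布", "ccomp", "合并"),   # announced --ccomp--> merger
    ("合并", "dobj", "公司"),    # merger --dobj--> company
]

def cue_words(trigger, arcs):
    """Collect the words directly connected to a trigger candidate by any arc."""
    cues = []
    for head, rel, dep in arcs:
        if head == trigger:
            cues.append((dep, rel))
        elif dep == trigger:
            cues.append((head, rel))
    return cues

print(cue_words("合并", arcs))  # [('宣布', 'ccomp'), ('公司', 'dobj')]
```

Each cue word arrives together with its relation label, which is what the feature layers in Section 2.2 encode.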

    TABLE 1 Examples of word polysemy and word-trigger mismatch

    FIGURE 1 Example of a Merge‐Org event triggered by ‘合并’ (merger)

With this objective, we propose a hybrid representation method to learn information from words, characters, and dependency relations. More concretely, we first learn two separate character-level and word-level representations using token-level neural networks. Then, we obtain the dependency information from the dependency parser and generate its representation by one-hot encoding. Our analysis shows that the features generated from the dependency parser are beneficial for event detection. Finally, we design appropriate hybrid paradigms to capture the hybrid representation. Our model achieved micro-averaged F1 scores of 59.86 and 53.44 for trigger identification and trigger classification, respectively.

    2|METHODOLOGY

Our model can be divided into two stages, both of which process a hybrid representation via dynamic multi-pooling convolutional neural networks. The first stage is trigger identification, where we use a neural network to capture the potential trigger nuggets containing the character of concern by exploiting the triggers' character compositional structures. The second stage determines the event's specific type, given the potential trigger nuggets; this is called trigger type classification. In both stages, we use the dependency information extracted from a dependency parser to generate a feature representation via token-level neural networks. Then, we combine it with the word feature representation to obtain a hybrid representation.

Figure 2 depicts the architecture of our event detection approach, which involves the following four components: (a) representation of the input sequence; (b) feature representation based on the dependency parser; (c) hybrid representation; and (d) dynamic multi-pooling convolutional neural network.

    2.1|Representation of the input sequence

To better capture information at different levels, we propose using two levels of embedding, namely word-level embedding and character-level embedding. To further enhance performance, we use pre-trained weights to initialise the embeddings. Then, two token-level neural networks are used to obtain features from characters and words. The network architecture is similar to NPNs [4]. Let T = t1, t2, …, tn be the tokens in the sentence, where ti is the i-th token. Let xi be the concatenation of the embedding (word or character) of ti and ti's relative position to tc. A convolutional layer with window size s is introduced to capture compositional semantics as follows:
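The input construction can be sketched as follows; all dimensions, the vocabulary size, and the position-shifting scheme are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

# Minimal sketch (assumed dimensions): each token vector x_i is the token
# embedding concatenated with an embedding of its position relative to t_c.
rng = np.random.default_rng(0)
emb_dim, pos_dim, max_dist = 100, 5, 220
token_emb = rng.normal(size=(1000, emb_dim))       # hypothetical vocabulary
pos_emb = rng.normal(size=(2 * max_dist + 1, pos_dim))

def input_vector(token_id, i, c):
    """x_i = [embedding of t_i ; embedding of relative position i - c]."""
    rel = i - c + max_dist                          # shift into [0, 2*max_dist]
    return np.concatenate([token_emb[token_id], pos_emb[rel]])

x = input_vector(token_id=42, i=3, c=7)
print(x.shape)  # (105,)
```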

FIGURE 2 Architecture of event detection, where αTi and αTc are the gates described in Section 2.3

Equation (1) shows the convolutional process, where wi is the i-th filter of the convolutional layer, xi:i+s-1 is the concatenation of embeddings from xi to xi+s-1, and bi is a bias. Equation (2) captures the important signals of different parts of the sentence by dynamic multi-pooling, where pl refers to the pooling result to the left of tc and pr refers to the pooling result to the right of tc. Then, we concatenate pl and pr to obtain the word-level representation fword of tc. By applying the same procedure to character-level sequences, we can also obtain the character-level representation fchar.
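The left/right pooling of Equation (2) can be sketched as follows; the feature-map values and the convention of including the position of tc in both halves are illustrative assumptions.

```python
import numpy as np

# Sketch of the pooling in Equation (2): each feature map is max-pooled
# separately to the left and right of the candidate token t_c, and the two
# pooling results are concatenated (dimensions here are illustrative).
def dynamic_pool(feature_map, c):
    """feature_map: (n, m) array of n positions x m filters; c: index of t_c."""
    left = feature_map[: c + 1].max(axis=0)   # pooling result left of (and at) t_c
    right = feature_map[c:].max(axis=0)       # pooling result right of (and at) t_c
    return np.concatenate([left, right])

fmap = np.arange(12, dtype=float).reshape(4, 3)   # 4 positions, 3 filters
f_word = dynamic_pool(fmap, c=1)
print(f_word)  # left max over rows 0-1, right max over rows 1-3
```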

    2.2|Feature representation based on the dependency parser

Most work in dependency grammar centres on the dependency parser, which is based on the dependency relation. Syntactic dependencies can be used to obtain deep semantic information. We use syntactic dependencies in the neural network model by directly incorporating them into the embedding. In this work, we use three different feature abstraction layers to represent three features:

· POS: part of speech

· DR: dependency relation

· DIS: distance from the HEAD

These three features are based on the results of the dependency parser. Dependency parsing for training was done with the Stanford Neural Network dependency parser [5].

POS is a generalisation of the word that plays an important role in natural language processing tasks such as syntactic analysis, named entity recognition and event extraction. A noun or pronoun can act as a subject in a sentence, but an interjection cannot, because the grammatical component places a restriction on the part of speech. Therefore, POS is applicable as an abstract feature to express different characteristics of textual semantic information. We take POS as a feature to reinforce word-based features. There are 28 parts of speech in Chinese, so we use a 28-dimensional one-hot vector to represent them: the POS of each word in a sentence is represented as a 28-dimensional feature vector in which each dimension corresponds to one part of speech, the dimension for the word's POS is set to 1, and the remaining 27 dimensions are 0.
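A minimal sketch of the 28-dimensional POS one-hot feature; the tag inventory shown is a placeholder, not the exact Chinese tag set used in the paper.

```python
import numpy as np

# Sketch of the 28-dimensional POS one-hot feature (tag list is illustrative).
POS_TAGS = ["n", "v", "a", "d", "p", "r"]  # ... up to 28 tags in practice
N_POS = 28

def pos_onehot(tag):
    vec = np.zeros(N_POS)
    vec[POS_TAGS.index(tag)] = 1.0  # exactly one dimension is active
    return vec

v = pos_onehot("v")
print(v.sum(), v[1])  # 1.0 1.0
```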

The dependency relation expresses the semantic relationships between the components of a sentence. For the event extraction task, trigger words are usually predicate verbs. We find that, in the KBP corpus, the most common dependent grammatical role of trigger words is HEAD, accounting for 56%, while the verb-object role accounts for 19%. Therefore, we consider the dependency relation useful for improving trigger detection. On the feature layer of the dependency relation, the vector dimension is 18 (17 relationship types and one ‘other’ type): 17 kinds of dependency relation are frequently used in syntactic dependencies, and to reduce the complexity of the feature representation, we classify all other dependency relations as the ‘other’ type.
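The 18-dimensional relation feature can be sketched likewise; the list of frequent relations below is a hypothetical subset of the 17 used, with everything else falling into the ‘other’ bucket.

```python
import numpy as np

# Sketch of the 18-dimensional dependency-relation feature: 17 frequent
# relation types plus one 'other' slot (the frequent set shown is assumed).
FREQUENT_RELS = ["nsubj", "dobj", "ccomp", "advmod", "amod"]  # ... 17 in practice

def dr_onehot(rel, n_frequent=17):
    vec = np.zeros(n_frequent + 1)                  # last slot is 'other'
    idx = FREQUENT_RELS.index(rel) if rel in FREQUENT_RELS else n_frequent
    vec[idx] = 1.0
    return vec

print(dr_onehot("dobj").argmax(), dr_onehot("xcomp").argmax())  # 1 17
```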

The distance from the HEAD is the length of the dependency path. Concretely, if a word and the HEAD are directly related, the distance from the HEAD is 1. If the path includes one intermediate dependency, the distance from the HEAD is 2. For example, in the sentence of Figure 1, ‘宣布’ (announced) is the HEAD, and the dependency path of ‘公司’ (company) is as follows:

宣布 →ccomp 合并 →dobj 公司

Here, ‘宣布’ (announced) is the HEAD of the sentence, and ‘合并’ (merger) forms an intermediate dependency between the HEAD and ‘公司’ (company). Therefore, for the word ‘公司’ (company), the distance from the HEAD is 2. The HEAD is the root node of the syntactic dependencies and usually represents the core word of a sentence, while triggers represent the important content of events. Based on an analysis of the triggers in the KBP corpus, we find that HEADs and trigger words are similar, which means that the distance from the HEAD can, in a certain sense, measure whether a word is a trigger. On the DIS feature layer, we use a seven-dimensional vector to represent distances from 0 to 6; any distance greater than 6 is capped at 6.
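The DIS feature can be sketched by walking head pointers to the root, using the Figure 1 dependencies as data (distance 0 here denotes the HEAD itself, an assumption about the encoding):

```python
import numpy as np

# Sketch of the DIS feature: walk head pointers to the root and cap the
# distance at 6, giving a 7-dimensional one-hot vector.
def head_distance(word, head_of):
    dist = 0
    while word in head_of:          # the root/HEAD has no head pointer
        word = head_of[word]
        dist += 1
    return min(dist, 6)

def dis_onehot(word, head_of):
    vec = np.zeros(7)
    vec[head_distance(word, head_of)] = 1.0
    return vec

# Figure 1 example: 宣布 is the HEAD, 合并 depends on it, 公司 depends on 合并.
head_of = {"合并": "宣布", "公司": "合并"}
print(head_distance("公司", head_of))  # 2
```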

    2.3|Hybrid representation learning

For Chinese event detection, employing only a word-level or only a character-level representation cannot obtain sufficient information, because characters reveal the inner compositional structure of event triggers [6], while words provide more accurate and less ambiguous semantics than characters [7]. For example, at the character level, the word ‘逃離’ (flee) is a trigger consisting of ‘逃’ (escape) and ‘離’ (leave), while at the word level, the word-level sequence provides more explicit information to distinguish the semantics of ‘離’ (leave) in this context from the same character in other words such as ‘離婚’ (divorce).

After the embedding layer, we obtain a word-level feature representation fword, a character-level representation fchar and a feature representation generated from the dependency parser. In this work, we exploit the Task-specific Hybrid paradigm [4] to combine these representations. Specifically, we first learn two gates, αTi and αTc, to model the information flow for the trigger identifier and the event type classifier. Each gate is computed as α = s(W f + b), where s is the sigmoid function, W is a weight matrix, f is the concatenated input representation and b is the bias term.

Based on the three feature layers in Section 2.2, we obtain three feature representations. Then, we construct a 53-dimensional vector (28 + 18 + 7 dimensions) by concatenating these three feature representations. Finally, we concatenate this feature vector with the word representation to obtain a new, feature-enhanced word representation:

Using the gates introduced for the trigger nugget generator and the event type classifier, we obtain the final vectors used as input:

where fTi is the hybrid feature for trigger identification and fTc is the hybrid feature for the event type classifier. αTi and (1 − αTi) represent the importance of the character-level representation and the feature-enhanced word representation, respectively, in trigger identification; αTc plays a similar role in the event type classifier.
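A minimal numerical sketch of the gated fusion, with assumed dimensions and randomly initialised weights (the projection of the 53-dimensional parser features into the word representation is taken as already done):

```python
import numpy as np

# Sketch of a task-specific gate: a sigmoid gate alpha weighs the character
# representation against the feature-enhanced word representation.
rng = np.random.default_rng(1)
d = 8                                       # illustrative feature dimension
f_char = rng.normal(size=d)
f_word_dep = rng.normal(size=d)             # word features + parser features, assumed projected to d
W, b = rng.normal(size=(d, 2 * d)), rng.normal(size=d)

# alpha = sigmoid(W [f_char ; f_word_dep] + b), elementwise in [0, 1]
alpha = 1.0 / (1.0 + np.exp(-(W @ np.concatenate([f_char, f_word_dep]) + b)))
f_hybrid = alpha * f_char + (1.0 - alpha) * f_word_dep
print(f_hybrid.shape)  # (8,)
```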

    2.4|Dynamic multi‐pooling convolutional neural networks

Traditional convolutional neural networks use only one pooling layer, which implements a max operation. This means they capture only the most important information in the representation of a sentence. In event extraction, however, one sentence may contain two or more events, and an argument may play different roles with different triggers. A traditional convolutional neural network captures the most useful information of the entire sentence and loses other important information in the same sentence. To address this problem, Chen et al. [8] proposed the dynamic multi-pooling convolutional neural network (DMCNN), which can obtain more valuable information without losing the max-pooling value.

In this study, the neural network we use is similar to the DMCNN; we now describe it. Figure 3 shows the architecture of trigger identification.

First, the concatenation xi of the embedding and the relative position is obtained. We concatenate xi with the feature vector explained in Section 2.2, resulting in the lexical-level features.

Next, the lexical-level features are given as input to the convolution layer to capture compositional semantics and obtain feature maps. Concretely, a convolution operation produces a new feature by applying a filter (also called a kernel) to a window of h words. Let xi:i+j refer to the concatenation of words xi, xi+1, …, xi+j. Filters are applied to each window of h words in the sentence, x1:h, x2:h+1, …, xn−h+1:n, to produce a feature map ci, where the index i ranges from 1 to n − h + 1. In our case, one filter produces one feature for one position i: cij = σ(wj · xi:i+h−1 + bias), where σ is a nonlinearity (tanh in our case) and j ranges from 1 to m, with m the number of filters.

Then, the output feature map cj is divided into sections according to the number of triggers in the sentence. For example, if a sentence has one trigger, it is divided into two sections; if it has two triggers, the two triggers divide the sentence into three sections. Through dynamic multi-pooling, the output of one filter for section i is pji = max(cji); all pji are concatenated to form a vector P.
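Dynamic multi-pooling over trigger-delimited sections can be sketched as follows; the section-boundary convention (trigger position closes its section) is an illustrative assumption.

```python
import numpy as np

# Sketch of dynamic multi-pooling: trigger positions split each feature map
# into sections, and each section is max-pooled separately.
def dynamic_multi_pool(feature_map, trigger_positions):
    """feature_map: (n, m) array; one trigger splits the map into two sections."""
    bounds = [0] + [p + 1 for p in sorted(trigger_positions)] + [len(feature_map)]
    pooled = [feature_map[s:e].max(axis=0)
              for s, e in zip(bounds, bounds[1:]) if s < e]
    return np.concatenate(pooled)

fmap = np.arange(10, dtype=float).reshape(5, 2)   # 5 positions, 2 filters
P = dynamic_multi_pool(fmap, trigger_positions=[2])
print(P)  # two sections: rows 0-2 and rows 3-4, max-pooled per filter
```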

We concatenate the feature vectors above with the lexical feature into a single vector Fword. We adopt a similar method to obtain the character-level feature vector Fchar. Finally, we use the hybrid representation learning described in Section 2.3 to produce two hybrid representations for the dense layers of trigger identification and trigger type classification.

    FIGURE 3 Example of trigger identification and trigger type classification.‘NIL’ means this character is not in any trigger

    2.5|Training and classification

Following previous works, we treat event identification as a multi-class classification problem. The hybrid representation, learned from the architecture described in Section 2.3, is the input to the convolutional neural networks. The input is a tokenised sentence. We also use a dropout layer [9] to prevent over-fitting.

Then, convolution layers are used to capture semantics and generate feature maps with filters. Traditional convolutional neural networks usually obtain one max value per feature map by treating the whole feature map as a pool. In our case, we use dynamic multi-pooling to obtain multiple max values by splitting each feature map into multiple parts. By concatenating the sentence-level and lexical features, we get a single vector F, which is fed into a classifier. The final output can be expressed as follows:

where WT is the transformation matrix, b is a bias term and Z is the final output of the network.

Finally, we use the softmax activation function to predict the probabilistic score of each type, then pick the highest probability as the final result. We use two different classifiers: one predicts the span type in trigger detection and the other predicts the event type in trigger type classification. Figure 3 gives an example of how the classifiers find a trigger word and predict the event type. When the classifiers take a character as a potential trigger, one classifier extracts the best match among the possible trigger words containing this character; the other classifier then predicts the event type triggered by this trigger.
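The classification step (linear transform Z = WT F + b, softmax, argmax) can be sketched as follows, with hypothetical event types and random weights:

```python
import numpy as np

# Sketch of the final prediction step: linear layer, softmax over event
# types, argmax as the predicted type (all shapes are illustrative).
def predict(F, W_T, b, types):
    Z = W_T @ F + b
    probs = np.exp(Z - Z.max())      # numerically stable softmax
    probs /= probs.sum()
    return types[int(probs.argmax())], probs

types = ["NIL", "Transaction", "Movement"]   # hypothetical label set
rng = np.random.default_rng(2)
W_T, b = rng.normal(size=(3, 4)), np.zeros(3)
label, probs = predict(np.ones(4), W_T, b, types)
print(abs(probs.sum() - 1.0) < 1e-9)  # softmax scores sum to 1
```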

    3|EXPERIMENTS

    3.1|Dataset and evaluation metric

We evaluate the networks using the TAC KBP 2017 Event Nugget Detection Evaluation dataset [10], which is widely used for ED. The evaluation corpus contains 693 documents, half newswire and half texts from discussion forums. Table 2 shows the number of instances of each event type in the corpus. We used the Stanford CoreNLP toolkit [11] to preprocess all documents for sentence splitting and word segmentation; for grammatical analysis we adopted the Stanford Neural Network dependency parser.

Similar to [1, 4, 12, 13], we used the same training set of 506 documents and the same development set of 20 documents; the remaining 167 documents were used for testing.

We follow the standard evaluation procedure: a trigger is correct if its event subtype and offsets match those of a reference trigger. We use precision (P), recall (R) and the F-measure (F) to evaluate the results.
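The scoring procedure follows directly from this definition and can be sketched as:

```python
# Sketch of the evaluation: a predicted trigger counts as correct only when
# both its offsets and its event subtype match a reference trigger.
def prf(gold, pred):
    """gold, pred: sets of (start, end, subtype) tuples."""
    correct = len(gold & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

gold = {(0, 2, "transaction"), (10, 12, "movement")}
pred = {(0, 2, "transaction"), (5, 7, "attack")}
print(prf(gold, pred))  # (0.5, 0.5, 0.5)
```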

TABLE 2 Number of nuggets per event type

    3.2|Experimental setting

For training the classifier, we divide the training set into positive and negative instances: a character included in a trigger word is a positive instance, otherwise it is a negative instance. We set the ratio of positive to negative instances to 1:20. To train the neural networks efficiently, we limit the maximum sentence length to 220 in the character-level representation and to 130 in the word-level representation. The word embedding dimension and the character embedding dimension are both 100. For the activation function of the convolutional neural network, we use the same sigmoid function as the DMCNN [8]. For evaluation, we limit the length of triggers to 3, because most triggers in the labelled corpus are no longer than 3. We set the batch size to 128 and the dropout rate to 0.5. The parameters were initialised with a uniform distribution between −0.1 and 0.1.
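The positive/negative instance construction can be sketched as follows; the random-downsampling scheme is an assumption, since the paper only states the 1:20 ratio.

```python
import random

# Sketch of instance construction: characters inside gold trigger spans are
# positives; negatives are downsampled to a 1:20 positive:negative ratio.
def build_instances(chars, trigger_spans, ratio=20, seed=0):
    in_trigger = set()
    for s, e in trigger_spans:                 # spans are [start, end)
        in_trigger.update(range(s, e))
    pos = [i for i in range(len(chars)) if i in in_trigger]
    neg = [i for i in range(len(chars)) if i not in in_trigger]
    random.Random(seed).shuffle(neg)
    neg = neg[: len(pos) * ratio]              # keep at most 20 negatives per positive
    return pos, neg

pos, neg = build_instances(list("x" * 100), trigger_spans=[(3, 5)])
print(len(pos), len(neg))  # 2 40
```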

    3.3|Experiments and results

For our method, we use pre-trained word embeddings learned by Skip-Gram and BERT, respectively. We select five state-of-the-art methods for comparison:

FBRNN: Ghaeini et al. [1] used forward-backward recurrent neural networks to detect events that can be either words or phrases. This method is one of the first efforts to handle multi-word events and also the first attempt to use RNNs for event detection.

DMCNN: Chen et al. [8] proposed a dynamic multi-pooling convolutional neural network that applies a dynamic multi-pooling layer according to event triggers and arguments. This event extraction method aims to automatically extract lexical-level and sentence-level features without using complicated NLP tools.

CLIZH (KBP 2017 best): Makarov and Clematide [13] incorporated many heuristic features into an LSTM encoder, which achieved the best performance in the TAC KBP 2017 evaluation.

NPN (Task-specific): Lin et al. [4] proposed Nugget Proposal Networks for Chinese event detection, which use a hybrid representation to capture features, and designed three kinds of hybrid paradigms (Concat, General and Task-specific). In our study, we use the Task-specific result as a baseline for comparison with our method.

JMCEE: Xu et al. [14] proposed a Chinese multiple-event extraction framework that adopts a pre-trained BERT encoder and uses multiple sets of binary classifiers to determine the spans for event extraction.

As Table 3 shows, our method has competitive precision and recall in the task of trigger identification, resulting in the highest micro-averaged F1. Compared with the best previous results, our model achieves improvements of 0.49% and 1.1% in micro-averaged F1, respectively. In trigger identification, our method achieves the best performance in both precision and micro-averaged F1. On recall, our model performs worse than JMCEE, but we achieve substantially higher precision than the baseline and a higher micro-averaged F1, which shows that our model does not improve precision by sacrificing recall.

Table 3 also shows that employing the dependency parser yields better precision. We believe this is because the dependency parser captures rich relational and complementary information that assists trigger detection. Moreover, performance is better when BERT is used as the pre-training model to initialise the parameters, owing to BERT's effective architecture and large-scale pre-training.

    3.4|Ablation study

To illustrate the effectiveness of the various parts of our model, we conduct ablation experiments on the KBP 2017 corpus. We first compare against the model with the dependency features removed (Char and Word Emb). Then, on top of removing the dependency features, we separately remove the word-level features (Char Emb) and the character-level features (Word Emb).

    TABLE 3 Experiment results on TAC KBP 2017 dataset

According to the results in Table 4, neither the character-level model (Char Emb) nor the word-level model (Word Emb) achieves competitive results. Incorporating character-level and word-level features is beneficial for capturing the trigger's integrity. Furthermore, the dependency features, which take advantage of event information scattered across the sentence, help both trigger identification and trigger classification.

    3.5|Influence of mismatch triggers and polysemous triggers

To explore the influence of mismatch triggers and polysemous triggers, we counted them on the KBP 2017 dataset. First, we split the triggers into two parts: match and mismatch. Then, based on polysemy, we split the triggers into two parts: polysemous and univocal. Table 5 shows the statistics of mismatch triggers and polysemous triggers on the KBP 2017 dataset.

Based on the ablation study, we further analysed the influence of our model on trigger mismatch and polysemy. Table 6 shows the F1 scores for the different types of word-trigger match on the trigger identification task. Compared with the models that consider only the word feature or only the character feature, our method achieves better performance regardless of whether the triggers are mismatched. The word-based model reaches only a 22.77% F1 score, because mismatch triggers are difficult to separate as specific words in the preprocessing stage.

To further explore the effect of the character, word and dependency features, we split the KBP 2017 test set into two parts: polysemous and univocal. The F1 score of each part is shown in Table 6; our method achieves the best results on both parts. Without the dependency feature, the performance drops by 2.55% on the polysemous part. The results indicate that this additional information can alleviate some of the trigger misclassifications caused by polysemy. In contrast, the word-based and character-based models do not obtain enough semantic information and thus achieve lower results.

    TABLE 4 Ablation study for our method

    TABLE 5 Proportion of the mismatch triggers and polysemous triggers on KBP 2017

TABLE 6 F1 scores of mismatch triggers on the trigger identification task and of polysemous triggers on trigger classification

    4|RELATED WORK

    4.1|Feature engineering method

Event detection is a supervised multi-class classification task that can be divided into two steps: feature selection and the classification model. For event detection tasks, traditional classification models are mostly maximum entropy models or support vector machines. Chieu and Ng [15] first presented a maximum entropy approach to information extraction that outperformed past work using pattern-learning methods. Ahn [16] used a modular approach based on lexical features, syntactic features and external knowledge features to complete an event extraction task on the ACE English corpus.

In follow-up studies, researchers proposed more advanced features for event detection. Ji and Grishman [17] proposed a cross-document feature that extracts multiple results from a set of related documents; this approach propagates event arguments across sentences and documents. Inspired by the ‘early-update’ hypothesis, joint frameworks for extracting events were proposed in [18, 19]. Although these approaches perform well, the traditional classification model suffers from its heavy dependence on feature selection.

These methods are based on feature engineering, where feature extraction is usually a pre-processing module ahead of the prediction model. Principal component analysis or linear discriminant analysis is generally used for feature extraction, and the prediction model performs the classification task on the extracted features.

    4.2|Deep learning method

In recent years, deep learning has been widely used in event detection tasks based on structured prediction. We review studies on event extraction with neural networks. Nguyen and Grishman [20] exploited convolutional neural networks to model non-continuous skip-grams [21] and overcome the problem of complicated feature engineering. These methods achieve high performance in event detection, but data scarcity limits their improvement. Chen et al. [22] proposed an automatic labelling method for event extraction by detecting triggers and arguments for each event type. Liu et al. [2] leveraged supervised attention mechanisms to model argument information for event detection.

Recently, researchers have proposed novel models to solve different problems in event extraction related to our work. Zhang et al. [23] transformed the event recognition problem into semantic feature classification and proposed a deep belief network model to identify triggers. Chen et al. [8] proposed a dynamic multi-pooling convolutional neural network to capture sentence-level information for event extraction. Lin et al. [4] proposed a hybrid representation model to solve the problem of word-trigger mismatch. Tong et al. [24, 25] proposed a novel Enrichment Knowledge Distillation (EKD) model to solve the long-tail problem.

These neural network methods are essentially representation learning methods. Their difficulty lies in evaluating the contribution of representation learning to the final output of the system, and with these methods it is hard to capture the dependence relationship between trigger words and other words in the same sentence. This study uses a dependency parser, which effectively captures dependency relationships and improves the accuracy of event categorisation.

    5|CONCLUSION

This study proposes an effective Chinese event detection model. Due to the nature of the Chinese language, Chinese tokens usually contain rich internal structure, and every single character in a token may convey helpful semantic information. Our method uses a hybrid representation to capture features at both the word level and the character level, and concatenates the token features with semantic features generated from the dependency parser. In addition, we use a dynamic multi-pooling convolutional neural network to retain more useful information from the same sentence. The experimental results show that the dependency parser captures valuable features that improve our method's performance on event detection tasks.

    ACKNOWLEDGEMENTS

This work is supported by the 973 Program (No. 2014CB340504), the State Key Program of the National Natural Science Foundation of China (No. 61533018), the National Natural Science Foundation of China (No. 61402220), the Philosophy and Social Science Foundation of Hunan Province (No. 16YBA323), the Natural Science Foundation of Hunan Province (No. 2020JJ4525) and the Scientific Research Fund of Hunan Provincial Education Department (Nos. 18B279 and 19A439).

    CONFLICT OF INTERESTS

    The authors declare no conflict of interests.

    DATA AVAILABILITY STATEMENT

    Data available on request from the authors.

    ORCID

Yongbin Liu https://orcid.org/0000-0002-3369-3101

女人被狂操c到高潮| 国产精品久久久久久亚洲av鲁大| 欧美日韩福利视频一区二区| 亚洲精品国产区一区二| 国产欧美日韩一区二区精品| 999久久久精品免费观看国产| 久久精品成人免费网站| 欧美成人午夜精品| 黄片大片在线免费观看| 久热爱精品视频在线9| 亚洲成a人片在线一区二区| 亚洲全国av大片| 欧美色视频一区免费| 亚洲电影在线观看av| 欧美在线一区亚洲| 亚洲色图av天堂| 狠狠狠狠99中文字幕| 国产伦一二天堂av在线观看| 黑丝袜美女国产一区| 国产精品野战在线观看| 90打野战视频偷拍视频| 亚洲国产日韩欧美精品在线观看 | 国产精品美女特级片免费视频播放器 | 欧美成人性av电影在线观看| 日韩国内少妇激情av| 天天添夜夜摸| 中文字幕精品免费在线观看视频| 国产精品av久久久久免费| 黑人操中国人逼视频| 亚洲人成77777在线视频| 波多野结衣巨乳人妻| 99国产精品免费福利视频| xxx96com| 亚洲熟妇熟女久久| 亚洲在线自拍视频| 在线天堂中文资源库| 免费在线观看日本一区| 国产一区二区激情短视频| 午夜免费鲁丝| 久久久久久国产a免费观看| 午夜免费鲁丝| 久久久久久久午夜电影| 国产亚洲精品av在线| 啦啦啦 在线观看视频| 啦啦啦观看免费观看视频高清 | 亚洲一区二区三区不卡视频| 欧美激情高清一区二区三区| 欧美绝顶高潮抽搐喷水| 国产高清有码在线观看视频 | 国产成人啪精品午夜网站| 午夜a级毛片| 欧美中文综合在线视频| 国产私拍福利视频在线观看| av天堂久久9| 亚洲视频免费观看视频| 黄片播放在线免费| 国产黄a三级三级三级人| 国产99久久九九免费精品| 国产黄a三级三级三级人| 中文字幕av电影在线播放| 国产三级黄色录像| 色综合婷婷激情| 国产一区二区三区在线臀色熟女| 黑人操中国人逼视频| 亚洲九九香蕉| 日韩精品中文字幕看吧| 久久天躁狠狠躁夜夜2o2o| 国产精品一区二区精品视频观看| 久久久久国内视频| 久久久久久免费高清国产稀缺| 久久午夜亚洲精品久久| 亚洲全国av大片| 非洲黑人性xxxx精品又粗又长| 97碰自拍视频| 夜夜夜夜夜久久久久| 亚洲欧美日韩无卡精品| 欧洲精品卡2卡3卡4卡5卡区| 国产私拍福利视频在线观看| 高潮久久久久久久久久久不卡| 亚洲熟妇熟女久久| 精品卡一卡二卡四卡免费| 极品教师在线免费播放| 色老头精品视频在线观看| 亚洲国产欧美一区二区综合| 亚洲熟女毛片儿| 后天国语完整版免费观看| 老熟妇仑乱视频hdxx| 久99久视频精品免费| 国产精品久久久久久亚洲av鲁大| 在线av久久热| 精品人妻在线不人妻| 在线播放国产精品三级| 男女床上黄色一级片免费看| 一级毛片女人18水好多| 亚洲狠狠婷婷综合久久图片| 国产免费男女视频| a级毛片在线看网站| 中文亚洲av片在线观看爽| 国产色视频综合| 一进一出抽搐gif免费好疼| 美女 人体艺术 gogo| 午夜福利18| 亚洲欧美精品综合久久99| 亚洲片人在线观看| 精品久久久久久久久久免费视频| 国产亚洲欧美在线一区二区| 好看av亚洲va欧美ⅴa在| 国产高清有码在线观看视频 | 久久人妻av系列| 老汉色av国产亚洲站长工具| 亚洲中文日韩欧美视频| 国产伦人伦偷精品视频| 欧美精品亚洲一区二区| 好男人在线观看高清免费视频 | 国产精品精品国产色婷婷| 国产精品亚洲美女久久久| 叶爱在线成人免费视频播放| 真人做人爱边吃奶动态| av欧美777| 亚洲人成电影免费在线| 侵犯人妻中文字幕一二三四区| 成人免费观看视频高清| 国产日韩一区二区三区精品不卡| 丁香欧美五月| 久久久国产成人免费| 亚洲精品一卡2卡三卡4卡5卡| 国产伦一二天堂av在线观看| 国产区一区二久久| 亚洲三区欧美一区| 久久精品国产亚洲av高清一级| 看黄色毛片网站| 美女高潮喷水抽搐中文字幕| 日韩欧美在线二视频| 丁香欧美五月| 亚洲免费av在线视频| 午夜福利高清视频| 亚洲色图 男人天堂 中文字幕| 欧美精品亚洲一区二区| 狠狠狠狠99中文字幕| 国产熟女xx| 淫秽高清视频在线观看| 免费在线观看日本一区| 黄片小视频在线播放| 午夜成年电影在线免费观看| 中文字幕人成人乱码亚洲影| 国产97色在线日韩免费| 国产精品综合久久久久久久免费 | 可以在线观看的亚洲视频| 日韩精品青青久久久久久| 国产97色在线日韩免费| 男人舔女人下体高潮全视频| 午夜久久久久精精品| 亚洲熟妇熟女久久| 黑人操中国人逼视频| 一夜夜www| 美女国产高潮福利片在线看| 高清毛片免费观看视频网站| 99久久久亚洲精品蜜臀av| 午夜老司机福利片| 国产在线观看jvid| 
国产av一区在线观看免费| 国产私拍福利视频在线观看| 欧美一级毛片孕妇| 日本五十路高清| 亚洲欧美激情综合另类| 久久人人97超碰香蕉20202| 午夜精品在线福利| 精品不卡国产一区二区三区| 老鸭窝网址在线观看| 不卡av一区二区三区| 久久中文字幕一级| netflix在线观看网站| 亚洲精品在线观看二区| 麻豆国产av国片精品| 日韩中文字幕欧美一区二区| 亚洲精品久久国产高清桃花| 又黄又爽又免费观看的视频| 搡老妇女老女人老熟妇| 午夜日韩欧美国产| 欧美日韩中文字幕国产精品一区二区三区 | 夜夜夜夜夜久久久久| 日本免费一区二区三区高清不卡 | 国产亚洲精品久久久久久毛片| 久久精品成人免费网站| 亚洲精品一卡2卡三卡4卡5卡| 亚洲黑人精品在线| 不卡av一区二区三区| 国产伦人伦偷精品视频| 好看av亚洲va欧美ⅴa在| 丝袜美腿诱惑在线| 精品卡一卡二卡四卡免费| 长腿黑丝高跟| 久久午夜综合久久蜜桃| 男女下面进入的视频免费午夜 | 久久性视频一级片| 一区二区三区激情视频| 亚洲成av人片免费观看| 亚洲中文字幕一区二区三区有码在线看 | 一边摸一边抽搐一进一小说| 精品少妇一区二区三区视频日本电影| 色播亚洲综合网| 国产精品98久久久久久宅男小说| 亚洲精华国产精华精| 欧美不卡视频在线免费观看 | 国产真人三级小视频在线观看| 国产成人av激情在线播放| 精品国产乱码久久久久久男人| 亚洲国产高清在线一区二区三 | 亚洲精品国产色婷婷电影| 久久精品人人爽人人爽视色| 国产精品一区二区精品视频观看| 制服诱惑二区| 每晚都被弄得嗷嗷叫到高潮| 99在线视频只有这里精品首页| 国产高清视频在线播放一区| 美女 人体艺术 gogo| 免费看a级黄色片| 国产区一区二久久| 波多野结衣高清无吗| 99久久精品国产亚洲精品| 最好的美女福利视频网| e午夜精品久久久久久久| av中文乱码字幕在线| 女人精品久久久久毛片| 村上凉子中文字幕在线| 久久精品国产99精品国产亚洲性色 | 亚洲国产欧美日韩在线播放| 在线观看免费午夜福利视频| 久久久久亚洲av毛片大全| 欧美日韩亚洲国产一区二区在线观看| 国产一区二区在线av高清观看| 成年版毛片免费区| 99riav亚洲国产免费| 国产成人欧美| 国产精品秋霞免费鲁丝片| 国产精品久久久av美女十八| 亚洲国产中文字幕在线视频| av视频免费观看在线观看| 亚洲天堂国产精品一区在线| 欧美黑人欧美精品刺激| 啦啦啦韩国在线观看视频| 视频区欧美日本亚洲| 一本大道久久a久久精品| 欧美成人性av电影在线观看| 国产精品一区二区精品视频观看| 久久久久国内视频| 午夜精品国产一区二区电影| 国产激情久久老熟女| 久久久久久大精品| 亚洲欧美激情在线| 成人国语在线视频| 十八禁网站免费在线| 老熟妇乱子伦视频在线观看| 国产麻豆成人av免费视频| 国产精品久久久人人做人人爽| 国产亚洲精品综合一区在线观看 | 久久亚洲真实| 嫩草影视91久久| 91大片在线观看| 亚洲欧美一区二区三区黑人| 亚洲专区中文字幕在线| 午夜久久久久精精品| 制服丝袜大香蕉在线| 女人被躁到高潮嗷嗷叫费观| 国产精品久久电影中文字幕| 国产一区二区三区在线臀色熟女| 动漫黄色视频在线观看| 欧美日韩亚洲国产一区二区在线观看| 午夜福利免费观看在线| 精品国产亚洲在线| 熟女少妇亚洲综合色aaa.| videosex国产| 久久精品91蜜桃| 亚洲精品国产一区二区精华液| 亚洲人成77777在线视频| 久久久久久免费高清国产稀缺| 欧美另类亚洲清纯唯美| 日韩视频一区二区在线观看| 不卡一级毛片| 法律面前人人平等表现在哪些方面| 国产一区二区三区在线臀色熟女| 精品国产亚洲在线| 亚洲精品国产一区二区精华液| 免费高清在线观看日韩| 国产一级毛片七仙女欲春2 | 麻豆成人av在线观看| 少妇的丰满在线观看| 琪琪午夜伦伦电影理论片6080| 亚洲欧美激情在线| 国产成人欧美在线观看| 免费在线观看完整版高清| 精品久久久久久久人妻蜜臀av | 亚洲欧洲精品一区二区精品久久久| 黄色视频不卡| 国产成+人综合+亚洲专区| 一级,二级,三级黄色视频| 99久久久亚洲精品蜜臀av|