
    Joint Event Extraction Based on Global Event-Type Guidance and Attention Enhancement

Computers, Materials & Continua, September 2021

Daojian Zeng, Jian Tian, Ruoyao Peng, Jianhua Dai*, Hui Gao and Peng Peng

1 Hunan Normal University, Changsha, 410081, China

2 Changsha University of Science & Technology, Changsha, 410114, China

3 National University of Defense Technology, Changsha, 410073, China

4 University of Waterloo, Waterloo, N2L 3G1, Canada

Abstract: Event extraction is one of the most challenging tasks in information extraction. It is a common phenomenon for multiple events to exist in the same sentence, yet extracting multiple events is more difficult than extracting a single event. Existing event extraction methods based on sequence models ignore the interrelated information between events because the sequence is too long. In addition, current argument extraction relies on the results of syntactic dependency analysis, which is complicated and prone to error propagation. To solve these problems, a joint event extraction method based on global event-type guidance and attention enhancement was proposed in this work. Specifically, for multiple event detection, we propose a global-type guidance method that detects the event types in the candidate sequence in advance to enhance the correlation information between events. For argument extraction, we converted it into a table-filling problem and proposed a table-filling method with an attention mechanism, which is simple and enhances the correlation between trigger words and arguments. The experimental results on the ACE 2005 dataset showed that the proposed method achieved a 1.6% improvement in the event detection task and obtained state-of-the-art results in the argument extraction task, which proved the effectiveness of the method.

Keywords: Event extraction; event-type guidance; table filling; attention mechanisms

    1 Introduction

Event extraction (EE) is an essential yet challenging task for information extraction. It is widely used in natural language processing, especially in the fields of automatic expansion of large-scale knowledge bases, automatic summarization, and biomedicine [1]. Therefore, in recent years, much research has been conducted on event extraction, which aims to extract trigger words from unstructured natural text, determine the event type of each trigger word, extract the arguments related to the event, and determine the role each argument plays in the event. The ACE 2005 evaluation defined event extraction as two subtasks: event detection (identifying and classifying event triggers) and argument extraction (identifying arguments of event triggers and labeling their roles).

Traditional methods generally handle event extraction as a pipeline of two separate tasks: event detection and argument extraction. The pipeline method achieves good results, especially when deep learning techniques are used. The most successful pipelined method was proposed by Chen et al. [2], who used dynamic multi-pooling convolutional neural networks to automatically learn features from sentences and represented words with continuous representations [3-5]. However, as the pipeline method is divided into two subtasks, the interrelationship between the subtasks is ignored. Specifically, the result of event detection affects the subsequent argument extraction, and the result of argument extraction in turn promotes event detection [6]. Thus, researchers turned to joint extraction methods.

Li et al. [6] performed one of the most successful studies of the joint method, which is based on a structure-aware algorithm with sets of local and global features for EE. The interdependence between trigger words and arguments is captured by global features. This method alleviates the shortcomings of the pipeline method and achieves good results. However, its feature extraction relies on natural language processing tools (e.g., part-of-speech tagging) and has poor generalization capabilities for new words and unseen features. Therefore, Nguyen et al. [7] proposed joint event extraction based on the Recurrent Neural Network (RNN). They used recurrent neural networks to automatically learn rich contextual semantic representations. To capture the interdependence between trigger words and arguments, memory vectors and matrices are introduced to store prediction information during sentence labeling. To a certain extent, this method addresses the deficiencies of Li et al.'s [6] method, but it does not make full use of the syntactic dependencies between the components of the sentence. Sha et al. [8] used a dependency bridge based on a bi-directional RNN to learn the syntactic dependencies between the components of a sentence and introduced a tensor to learn the interdependence between arguments. However, all of the above methods share a common disadvantage: they ignore the interdependence of multiple events in the same sentence.

In actual event extraction scenarios, there will inevitably be multiple events in one sentence. Compared with single event extraction, it is more complicated to accurately extract multiple events. There is a strong correlation among events drawn from the same sentence. For example, as shown in Fig. 1, the Attack event helps us determine that the word died triggers the Die event rather than the Injure event. It is worth noting that multiple-event phenomena are ubiquitous in natural language. According to statistics, there are 3,978 event-related sentences in the ACE 2005 dataset, and 1,042 sentences contain multiple events, accounting for 26.6% of the entire event dataset. It is therefore common for multiple events in one sentence to require extraction. Liu et al. [9] conducted an in-depth study on multiple event extraction. They used a graph convolutional network to learn the dependency syntax relationships between the components of the sentence and tried to capture the correlation between events. However, owing to the complexity of the dependency syntax tree and the reliance on NLP tools for preprocessing, this method inevitably encounters the error propagation problem, and the interdependence between events is not fully resolved.

To solve the above problems, we proposed a joint event extraction method based on global event-type guidance and an attention enhancement mechanism. Recent studies on multi-task learning (MTL) in deep neural networks for NLP revealed that multi-task gains were more likely for target tasks that quickly plateaued with non-plateauing auxiliary tasks [10]. Because of the compelling benefits of MTL, we proposed a multi-task setup for identifying and classifying events and arguments. Specifically, we first use the BERT pre-training model to encode the sentence in order to obtain the context information of each token. Next, the event guidance layer is exploited to predict the candidate event types of the input sentence. At the same time, we introduce a CRF layer to identify the candidate arguments. Then, we feed the candidate event types and context features into the softmax layer for trigger word recognition and event classification. Finally, we enumerate the combinations of two tokens in the sentence. The corresponding context features, candidate argument features, trigger words, and event-type features are attentively considered for argument role classification in a table-filling [11-13] manner (see Fig. 2). From the above, we note that the event types predicted by the event guidance layer help guide event classification. With the injected events, the network is aware of all the events that exist in the sentence in advance. Thus, the interdependencies of events are taken into account. Moreover, we use an attention mechanism to comprehensively take all tokens into account for the role classification of any two tokens. Therefore, the correlation between trigger words and arguments is taken into account at the table-filling stage. The contributions of this work can be summarized as follows:

Figure 1: An example of multiple events. There are two events in the sentence: a Die event triggered by the word died, with four arguments in red, and an Attack event triggered by the word fired, with four arguments in blue

Figure 2: The Trigger-Argument role table for the example in Fig. 1. "Died" and "fired" are two trigger words. Place, Victim, Instrument and Target represent the argument roles. Blank cells indicate there is no argument role

    (1) We proposed a novel event-type guidance layer to predict the event types of the input sentence.The candidate event types are used to guide trigger word recognition and event detection,which can strengthen the complex interdependencies of events.

(2) The method converts argument extraction into a table-filling problem. An attention mechanism is introduced to involve the representations of multiple tokens, which automatically discovers useful contextual information for argument role classification.

(3) We conducted extensive experiments on the ACE 2005 dataset. The experimental results indicate that the proposed method outperforms several strong baselines.

    2 Related Work

Traditional event extraction methods usually exploit a pipelined approach in which arguments are identified by a classifier after event detection [14,15]. These methods have a fatal flaw: they ignore the underlying interdependencies between event detection and argument extraction and suffer from error propagation.

To address the above problem, a joint event extraction method based on the Markov logic network [16-18] was proposed. Afterward, the structured perceptron [6,19] and the dual decomposition method [20] were successively proposed for event extraction.

Recently, with the widespread application of neural networks in machine translation, text classification, steganography analysis [21,22], and other fields, researchers have also tried to use neural networks for event extraction. For example, Chen et al. [2] employed dynamic multi-pooling convolutional neural networks to automatically learn features, in which the input words are represented by pre-trained word embeddings [3-5]. Although it achieved promising results, the method still follows the pipelined framework. Nguyen et al. [7] proposed a joint approach named JRNN, in which recurrent neural networks are used to automatically learn rich contextual semantic representations of sentences. The relations between event triggers of specific subtypes and their corresponding arguments are captured by devised memory vectors and matrices. Similarly, Sha et al. [8] exploited dependency bridges to connect syntactically related words based on a bidirectional recurrent neural network. Moreover, a tensor layer was applied to each pair of candidate arguments, which enables intensive, argument-level information interaction. Liu et al. [9] conducted an in-depth study on multi-event extraction, which introduced a syntactic dependency tree and used graph convolutional neural networks to learn the syntactic dependency of each component in the sentence.

The above-mentioned joint extraction methods have achieved good results. However, these existing methods share a common disadvantage: they do not consider the situation where multiple events appear in one sentence at the same time. To solve this problem, we proposed an event-guided and attention-enhanced joint approach for event extraction. The pre-predicted event-type information allows for better event detection, and the attention mechanism is exploited to leverage the sentential context.

    3 Methodology

Our proposed joint model consists of six modules: (i) BERT, (ii) NER, (iii) Event-Types Proposal, (iv) Event Detection, (v) Token Pair Attention, and (vi) Table Filling, as illustrated in Fig. 3. Given a sentence as the model input, the model first generates a deep contextualized word representation for each token using BERT. Next, the event guidance layer is exploited to predict the candidate event types of the input sentence. At the same time, we introduce a CRF layer to identify the candidate arguments. Then, we feed the candidate event types and context features into the softmax layer for trigger word recognition and event classification. Finally, we enumerate the combinations of two tokens in the sentence and comprehensively consider the BERT output, NER labels, predicted event types, and attention results to fill the trigger word-argument role table. We explain the model details in the following subsections.

    Figure 3:The overall architecture of the global-event-type guidance and attention enhancement

    3.1 BERT

BERT's model architecture is a multi-layer bidirectional Transformer encoder. The encoder is a stack of identical blocks (BERT-Base stacks 12 blocks on top of each other). Each block is composed of a multi-head self-attention layer and a position-wise, fully connected feed-forward layer. Assuming the output sequence of the previous layer is packed together into a matrix H, the output matrix Z of a multi-head self-attention layer is computed as

Z = Concat(head_1, ..., head_h) W^O,  head_i = softmax((H W_i^Q)(H W_i^K)^T / sqrt(d_k)) (H W_i^V),

where h is the number of attention heads, d_k is the dimension of the queries and keys, and W^O, W_i^Q, W_i^K, W_i^V are the parameter matrices. Each layer in the encoder has a residual connection around it, followed by layer normalization.
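As a concrete illustration, the multi-head self-attention computation above can be sketched in NumPy with toy dimensions. The matrix names mirror the symbols in the text; this is a minimal sketch, not BERT's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(H, Wq, Wk, Wv, Wo):
    """H: (n, d_model); Wq/Wk/Wv: per-head projection matrices W_i^Q/W_i^K/W_i^V;
    Wo: output projection W^O of shape (h * d_k, d_model)."""
    heads = []
    for Wq_i, Wk_i, Wv_i in zip(Wq, Wk, Wv):
        Q, K, V = H @ Wq_i, H @ Wk_i, H @ Wv_i      # per-head projections
        d_k = Q.shape[-1]
        A = softmax(Q @ K.T / np.sqrt(d_k))          # (n, n) attention weights
        heads.append(A @ V)                          # per-head output, (n, d_k)
    return np.concatenate(heads, axis=-1) @ Wo       # Z: (n, d_model)
```

In BERT-Base, h = 12 and d_model = 768; the residual connection and layer normalization mentioned above would wrap this call.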

For a given token, the input representation of BERT is the sum of the corresponding token, segment, and position embeddings. BERT uses WordPiece embeddings as token embeddings. In addition, BERT adds a special token ([CLS]) as the first token to obtain an aggregate sequence representation for classification tasks and a special token ([SEP]) to distinguish between different sentences in the same input sequence. In particular, as the input sequence of joint event extraction is a single sentence, the special token ([SEP]) is not useful in the current task.

Given an input token sequence X = (x_0, x_1, ..., x_{n-1}, x_n), we denote the BERT contextual representation of each token as Z = (z_0, z_1, ..., z_{n-1}, z_n). Moreover, given that the WordPiece tokenizer might split a token into several sub-tokens, we use the hidden state corresponding to the first sub-token of a given token as its contextual representation.

    3.2 Named Entity Recognition

We formulate the NER task as a sequence-labeling problem and use the BIEO (Beginning, Inside, Ending, Outside) encoding scheme. A linear-chain CRF is employed to calculate the most probable tag for each token. Formally, we first derive the emission potential from the sentence encoder output. The score of each token x_i for each entity tag is calculated as follows:

s_i = V_1^T f(W_1 z_i + b_h) + b_s,    (3)

where f(.) is an element-wise activation function (e.g., relu), s_i ∈ R^d, d is the number of encoding-scheme tags, W_1 ∈ R^{l×2m} and V_1 ∈ R^{l×d} are the transformation matrices, b_h ∈ R^l and b_s ∈ R^d are bias vectors, l is the hidden size, and m is the output dimension of BERT. Given a sequence of tag predictions Y = (y_1, y_2, ..., y_{n-1}, y_n), the linear-chain CRF score is defined as

score(X, Y) = Σ_{i=1}^{n} s_{i,y_i} + Σ_{i=2}^{n} a_{y_{i-1},y_i},    (4)

where s_{i,y_i} is the score of tag y_i for token x_i, obtained by Eq. (3), and a_{y_{i-1},y_i} is the score of the transition from tag y_{i-1} to tag y_i. Using Eq. (4), we can get the score of a tag sequence y, which is further converted to a probability by the following softmax function:

p(Y | X) = exp(score(X, Y)) / Σ_{Y'} exp(score(X, Y')),    (5)

where T represents the training set and Y* is the gold standard for sequence x. During training, we minimize the negative log likelihood L_NER = -Σ_{(x,Y*)∈T} log p(Y* | x) of the gold standard. In the decoding process, the Viterbi algorithm is adopted to derive the optimal tag sequence. The tags are converted to embeddings by looking up an embedding layer. We then obtain the label-embedding sequence e_ner, where m′ is the dimension of the label embeddings.
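The linear-chain CRF score and the Viterbi decoding step described in this subsection can be sketched as follows. This is a minimal NumPy sketch with toy emission/transition matrices; function names and shapes are illustrative assumptions:

```python
import numpy as np

def crf_score(emissions, transitions, tags):
    """Sum of emission scores s_{i,y_i} and transition scores a_{y_{i-1},y_i}
    for a tag sequence, in the spirit of Eq. (4)."""
    score = emissions[0, tags[0]]
    for i in range(1, len(tags)):
        score += transitions[tags[i - 1], tags[i]] + emissions[i, tags[i]]
    return score

def viterbi_decode(emissions, transitions):
    """Dynamic program returning the highest-scoring tag sequence (decoding step)."""
    n, t = emissions.shape
    dp = emissions[0].copy()                # best score ending in each tag so far
    back = np.zeros((n, t), dtype=int)      # backpointers
    for i in range(1, n):
        cand = dp[:, None] + transitions + emissions[i][None, :]
        back[i] = cand.argmax(axis=0)       # best previous tag for each current tag
        dp = cand.max(axis=0)
    tags = [int(dp.argmax())]
    for i in range(n - 1, 0, -1):           # follow backpointers
        tags.append(int(back[i, tags[-1]]))
    return tags[::-1]
```

The probability in Eq. (5) would normalize `crf_score` over all tag sequences; at test time only `viterbi_decode` is needed.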

    3.3 Event-Types Proposal

Event-types proposal is an auxiliary task for event extraction. The task aims to predict the possible event types in the sentence regardless of which trigger word evokes them. The event-types proposal layer employs hard parameter sharing, the most common approach in multi-task learning, to share the same sentence encoder with NER. We use the first token of the BERT output and then a dense layer with non-linear activation to predict the event types in the sentence:

p = f(W_p z_0 + b_p),

where z_0 is the first token of the BERT output, W_p ∈ R^{|tp|×h} is the transformation matrix, b_p ∈ R^{|tp|} is the bias vector, |tp| is the number of predefined event types, and f(.) stands for the sigmoid function, which potentially allows multiple events to exist in the same sentence. We create a criterion that measures the binary cross-entropy between the target and the output. The loss function of the event-types proposal is:

L_ETP = -Σ_{x∈T} Σ_i [t_i log p_i + (1 - t_i) log(1 - p_i)],

where T represents the training set, t is the gold standard event-type set for sequence x, and t_i is its i-th element; p_i is calculated by applying the sigmoid function across event types. All event types are converted to embeddings by looking up an embedding layer, and average pooling over them yields e_tp.

    The predicted event types have two uses.On the one hand,the event-types proposal is a simple auxiliary task that can cooperate with the event detection task.On the other hand,it creates complex dependencies for the event types in a sentence.
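The event-types proposal head described above can be sketched as follows. This is a minimal NumPy sketch; the matrix shapes and the naive BCE implementation are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def event_types_proposal(z0, Wp, bp):
    """Per-type probabilities from the [CLS] representation z_0.
    The sigmoid (rather than a softmax) lets several event types fire at once."""
    return sigmoid(Wp @ z0 + bp)

def bce_loss(p, t):
    """Binary cross-entropy between predicted probabilities p and gold multi-hot vector t."""
    eps = 1e-12  # numerical guard against log(0)
    return -np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps))
```

A multi-hot gold vector t (one bit per predefined event type) encodes that, e.g., both an Attack and a Die event occur in the same sentence.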

    3.4 Event Detection

We consider an entire trigger candidate extracted when an O label appears after an I-Type label or a B-Type label. The event-type embedding is concatenated with the BERT contextual representation and fed through softmax, from which we obtain the label category of every token:

p(y_i | x_i) = softmax(W_c [z_i; expand(e_tp)] + b_c),

where W_c ∈ R^{m×d} is the parameter matrix, b_c ∈ R^m is the bias vector, and expand is the dimensional-extension function. According to the obtained label probability distribution, the event-type prediction label corresponding to each token can be obtained.

The loss function is the cross-entropy between the target and the output over tokens:

L_ED = -Σ_{x∈T} Σ_i log p(y_i* | x_i),

where T represents the training set. The tags are converted to embeddings by looking up an embedding layer. We obtain the label-embedding sequence e_ed, where m′ is the dimension of the label embeddings.

    3.5 Token Pair Attention

Vanilla table filling takes into account just the two candidate tokens to predict the argument role. We use the token-pair attention mechanism to capture information between trigger words and arguments. Specifically, the attention score of the token pair <x_i; x_j> for the k-th token in a given sentence is calculated by the following equation:

u^k_{ij} = q_{ij}^T W_q z_k,  q_{ij} = (V e_i + V e_j) / 2,

where q_{ij} is the average of V e_i and V e_j, W_q is the attention parameter, and the attention weight α^t_{ij} is set to 0 when t is equal to i or j. The main reason for this strategy is that we already consider the representations of the token pair <x_i; x_j> in the table-filling stage (see Eqs. (15) and (16)); thus, we directly mask the token pair themselves when performing attention calculations. The attentive result for the token pair <x_i; x_j> is computed by the following equation:

c_{ij} = Σ_{k≠i,j} α^k_{ij} z_k,  α^k_{ij} = softmax_k(u^k_{ij}).
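The pair-masked attention described in this subsection can be sketched as follows. This is a minimal NumPy sketch; the exact form of the query averaging and the matrix shapes are assumptions made for illustration:

```python
import numpy as np

def masked_softmax(scores, mask):
    """Softmax over positions where mask is True; masked positions get weight 0."""
    alpha = np.zeros_like(scores)
    s = scores[mask]
    e = np.exp(s - s.max())
    alpha[mask] = e / e.sum()
    return alpha

def token_pair_attention(Z, i, j, Wq, V):
    """Attentive context for the pair <x_i, x_j>: the query is the average of
    V z_i and V z_j, and the pair itself is masked out (alpha^t_{ij} = 0 for t in {i, j})."""
    q = 0.5 * (V @ Z[i] + V @ Z[j])     # pair query (assumed averaging form)
    scores = Z @ (Wq @ q)                # one attention score per token
    mask = np.ones(len(Z), dtype=bool)
    mask[[i, j]] = False                 # mask the token pair themselves
    alpha = masked_softmax(scores, mask)
    return alpha @ Z, alpha              # weighted context c_{ij} and weights
```

Masking positions i and j forces the context vector to summarize only the surrounding tokens, since the pair's own representations are fed into table filling separately.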

    3.6 Table Filling

The event embedding (e_ed) and NER embedding (e_ner) are concatenated with the BERT contextual representation to form the final feature representation. Let x_i and x_j be two words, Y(x_i, x_j) be all possible role relations, and s_{x_i,x_j,r} be a scoring function that assesses x_i and x_j for each existing role type r. We can further get the conditional probability of role type r given x_i and x_j through the softmax function:

Here, δ(.) is an element-wise non-linear activation function (e.g., tanh). Moreover, {W, U, V, M} are the transformation matrices.

Based on the probability distribution of table filling, the predicted role for each table cell is defined as

    The loss function is the cross entropy between the target and the output for all the table cells:

where x represents a sentence in the training set T, x_k is the k-th word of x, n is the sentence length, and R is the argument-role set between x_i and x_j.
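The pairwise role scoring can be sketched as follows. This is a minimal NumPy sketch of the scoring function with δ(.) instantiated as tanh; the dimensions and the exact combination of the four transformation matrices are assumptions for illustration:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def role_distribution(fi, fj, cij, W, U, V, M, b):
    """Score every role r for the pair <x_i, x_j> and normalize with softmax.
    fi/fj: final feature vectors of the two tokens (BERT + e_ed + e_ner);
    cij: attentive context from the token-pair attention."""
    hidden = np.tanh(W @ fi + U @ fj + V @ cij + b)  # delta(.) = tanh
    return softmax(M @ hidden)                        # distribution over role types (incl. NONE)
```

Filling the trigger-argument table then amounts to evaluating `role_distribution` for each enumerated token pair and taking the argmax role per cell.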

In our framework, there are two main tasks, event detection and table filling, and two auxiliary subtasks, NER and ETP. For joint event extraction, the loss function is the summation of these four task losses: L_ED + L_ETP + L_NER + L_TF. The loss is averaged over each shuffled minibatch, and the derivatives of each parameter can be computed via backpropagation.

    4 Experiments

    4.1 Experiment Settings

    4.1.1 Dataset,Resources and Evaluation Metric

We evaluated our framework on the ACE 2005 dataset. The ACE 2005 dataset annotates 33 event subtypes and 36 role classes; along with the NONE class and the BIO annotation schema, we divided each token into 67 categories in event detection and 37 categories in argument extraction. To be consistent with previous work, we used the same data split as in previous work [2,7-9]. This data split includes 40 newswire articles (881 sentences) for the test set, 30 other documents (1,087 sentences) for the development set, and the 529 remaining documents (21,090 sentences) for the training set.
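The category counts above follow directly from the annotation scheme, as a quick check shows:

```python
# Label-space arithmetic for ACE 2005 as described above:
# BIO tagging yields a B-Type and an I-Type tag per event subtype, plus one O tag;
# argument extraction adds a single NONE class to the role set.
event_subtypes, role_classes = 33, 36
detection_categories = 2 * event_subtypes + 1   # 67
argument_categories = role_classes + 1          # 37
print(detection_categories, argument_categories)
```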

    We then used precision,recall,and F1 score to evaluate the performance as done in previous work [2,8,9].

    4.1.2 Hyperparameter Setting

For all the experiments below, we used 200-dimensional tag embeddings in the ETP layer and 300-dimensional tag embeddings in event detection. We used a maximum sentence length of n = 120 by padding shorter sentences and cutting off longer ones. The batch size in our experiments was 64, the dropout rate was 0.5, and the learning rate was 0.05. Adam was used to optimize the neural networks. The experiments were run on an NVIDIA GTX 1080 Ti GPU.

    4.2 Baselines and Main Results

To evaluate the performance of the proposed method, we compared our model with four competitive baselines: 1) DMCNN [2], which uses dynamic multi-pooling to keep multiple event information; 2) JRNN [7], which uses a bi-directional RNN and manually designed features to jointly extract event triggers and arguments; 3) dbRNN [8], which adds dependency bridges over a bi-directional LSTM for event extraction; and 4) JMEE [9], which uses attention-based graph information aggregation for multiple event extraction.

Tab. 1 shows the results of comparing our model with the baseline methods. Our framework achieved the best F1 scores in trigger recognition and trigger classification; the scores were 1.6% higher than those of the best-reported models. However, it did not achieve the best F1 score for argument role classification. In summary, these results demonstrate the effectiveness of incorporating global event-type guidance and attention enhancement.

    4.3 Effect of ETP Layer for Extracting Multiple Events

To evaluate the effect of the ETP layer in alleviating the multiple-event phenomenon, we divided the test data into two parts (1/1 and 1/N) following [2,9,10] and then performed the evaluation separately. Here, 1/1 means that a sentence has only one trigger or one argument playing a role; otherwise, 1/N is used.

Table 1: Overall performance compared to the state-of-the-art methods with gold-standard entities

Tab. 2 illustrates the performance (F1 scores) of JMEE [9], JRNN [7], DMCNN [2], and our framework (with and without the ETP layer) on the trigger classification subtask and the argument role classification subtask. From the table, we can see that our framework with the ETP layer achieved the best F1 scores, which were 1.6% higher than those of the best-reported models. However, the F1 score decreased from 75.3% to 73.1% without the ETP layer. The results indicate that the proposed ETP layer is effective.

    Table 2:System performance on single event sentences (1/1) and multiple event sentences (1/N)

    4.4 Analysis of Attention Mechanism

As Tab. 3 shows, the results were not ideal when table filling alone was used for event argument extraction. The F1 value increased by 5.2% when the attention mechanism was added to table filling, indicating that the attention mechanism helps improve the extraction of event arguments.

We used the sentence "In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel" as an example to illustrate the features captured by our attention mechanism, visualizing the attention scores as the heat map in Fig. 4. There are two events in the sentence: an Attack event triggered by fired and a Die event triggered by died. Additionally, the entities Baghdad, Cameraman, Tank, and Palestine Hotel play important roles in the Die and Attack events.

    Table 3:Experimental comparison of table filling and table filling plus attention mechanism

As Fig. 4 shows, the trigger words "died" and "fired" have relatively strong connections with Baghdad, Cameraman, Tank, and Palestine Hotel in the Die and Attack events, which is likely because the attention mechanism captures the information between triggers and arguments.

    Figure 4:Illustration of token pair attention

    5 Conclusion

In this work, we proposed a global event-type guidance and attention enhancement method to improve event detection and argument extraction. The enhancement exploits pre-predicted event types to guide event detection, which strengthens the interdependencies among multiple events in a sentence. Moreover, for argument extraction, we use the table-filling method with an attention mechanism to obtain the correlation information between triggers and arguments. The experimental results on the ACE 2005 dataset indicate that our proposed model is effective and superior to several strong baseline methods.

    As the relationship between arguments among multiple events has not yet been considered,we will examine its influence on the extraction of arguments in the future.

Funding Statement: This work was supported by the Hunan Provincial Natural Science Foundation of China (Grant Nos. 2020JJ4624, 2019JJ50655), the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 19A020), and the National Social Science Fund of China (Grant No. 20&ZD047).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
