
    D‐BERT:Incorporating dependency‐based attention into BERT for relation extraction

2022-01-12

Yuan Huang | Zhixing Li | Wei Deng² | Guoyin Wang | Zhimin Lin

¹Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing, China

²Center of Statistical Research, Southwestern University of Finance and Economics, Chengdu, China

Abstract Relation extraction between entity pairs is an increasingly critical area in natural language processing. Recently, the pre-trained bidirectional encoder representation from transformer (BERT) has performed excellently on text classification and sequence labelling tasks. Here, high-level syntactic features that consider the dependency between each word and the target entities are incorporated into the pre-trained language model. Our model also utilizes the intermediate layers of BERT to acquire different levels of semantic information and designs multi-granularity features for the final relation classification. Our model offers a significant improvement over published methods for relation extraction on widely used data sets.

    1 | INTRODUCTION

Relation extraction between target entities is one of the most crucial steps in information extraction. It is also widely applied in various domains, for example, knowledge graph construction, question answering systems, knowledge engineering and so on. Relation extraction focuses on predicting the semantic relationship of a sentence based on the given entity pairs. For example, given the text 'The <e1>company</e1> fabricates plastic <e2>chairs</e2>', the head entity is 'company' and the tail entity is 'chairs'. In Table 1, the first entity is marked by <e1> and </e1>, and the second entity is marked by <e2> and </e2>.

Recently, many researchers have utilized variants of the convolutional neural network (CNN) and recurrent neural network (RNN) to implement relation extraction [1–5]. Some of these methods use high-level syntactic features derived from external natural language processing (NLP) tools, such as named entity recognizers, syntax parsers and dependency parsers. Inevitably, many irrelevant words are introduced when the entity pairs are far away from each other. The semantic modification relationship of each component in the sentence should be considered to obtain long-distance collocation information, which can be achieved by dependency parsers. Relation extraction differs from ordinary classification tasks in that it needs to pay attention not only to the sentence information but also to the target entities. It is indispensable to effectively highlight the target entities and consider the dependency between each element and the entity pairs in a sentence, which helps eliminate the influence of noisy words. Previous models based on syntax parsers or dependency parsers have achieved excellent results on the relation extraction task, which indicates that the introduction of syntactic information is beneficial to relation extraction. Besides this, with the development of the attention mechanism in visual tasks, many scholars have gradually applied attention mechanisms to a great number of NLP tasks and achieved state-of-the-art results. One purpose of this study is to grasp the most crucial semantic information by considering the dependency of each word on the target entities, and we present a dependency-based attention mechanism.

As far as we know, the pre-trained language model bidirectional encoder representation from transformer (BERT) [6] has proven to be advantageous for promoting the performance of NLP tasks [7–10], for example, question answering, text classification, sequence labelling problems and so on. BERT employs the transformer [11] encoder as its principal architecture and acquires contextualized word embeddings by pre-training on a broad set of unannotated data. Recently, Wu et al. [10] first applied the BERT model to relation classification and used the sequence vector represented by '[CLS]' to complete the classification task. Our study also aims to contribute to this growing area of research by exploring how to utilize the BERT intermediate layers to improve BERT fine-tuning. Different layers have different levels of feature representation for specific tasks. For instance, the low-level network of BERT learns phrase-level information representations, the middle-level network of BERT learns rich linguistic features and the high-level network of BERT learns rich semantic information features. Thus, we incorporate two pooling strategies for integrating the multi-layer representations of the classification token. Previous methods have primarily implemented relation extraction by taking single-granularity features as the input of the classifier. Here, we utilize multi-granularity features for classification instead of single-granularity features to capture rich semantic information.

TABLE 1 A sample of relation extraction

This study makes three fundamental contributions: (1) a dependency-based attention mechanism considering the dependency of each word on the target entities is applied to the relation extraction task, (2) we explore the intermediate layers of BERT and design multi-granularity features for the final relation classification, and (3) in experiments, our model offers a significant improvement over previous methods for relation extraction on widely used data sets.

    2 | RELATED WORK

    2.1 | Syntactic analysis in relation extraction

Relation extraction is a paramount link in natural language processing. Based on neural networks, many scholars have further studied the contribution of syntactic features to the relation extraction task, especially in supervised relation extraction. Socher et al. [1] applied RNNs to relation extraction for the first time. Each node in the parse tree is assigned a vector and a matrix, where the vector captures the inherent meaning of the component and the matrix captures how it changes the meaning of adjacent words or phrases. Zeng et al. [2] introduced position embeddings that consider the relative distance between each word and the target entities and took advantage of a convolutional deep neural network to learn lexical- and sentence-level features. Xu et al. [12] integrated the shortest path of the dependency analysis tree with word vector, part-of-speech and WordNet features based on a long short-term memory (LSTM) network. Xu et al. [13] developed a CNN model based on a dependency analysis tree to extract the relationship between the target entities, and proposed a negative sampling strategy to settle the problem of irrelevant information introduced by the dependency analysis tree when entity pairs are far away from each other. One dominant challenge with the previous approaches was the inadequacy of annotated data supporting model training. To enlarge the training data set, Mintz et al. [14] adopted an external domain-independent entity knowledge base (KB) to perform distant supervision. Nevertheless, the wrong-labelling problem caused by the hypothesis underlying distant supervision is unavoidable. To alleviate the problem, multi-instance learning for distant supervision relation extraction was developed by [15–17]. Zeng et al. [18] proposed a variant of the CNN, the piecewise CNN (PCNN), which can automatically capture the features of semantic structure for distantly supervised relation extraction, and incorporated multi-instance learning into the PCNN to reduce the impact of noise in the training data. Cai et al. [19] presented a bidirectional recursive CNN based on the shortest dependency path, which combined the CNN with a dual-channel recursive neural network based on LSTM.

    2.2 | Attention mechanism in relation extraction

Some researchers have recently applied attention mechanisms to relation extraction to select more significant sentences or words and capture the most crucial semantic information from them. Based on the advanced features acquired by a Bi-LSTM, Zhou et al. [20] introduced an attention-based Bi-LSTM (Att-BLSTM), in which an attention mechanism focusing on the different weights of words was proposed to capture the most decisive word in the sentence. Wang et al. [21] developed an innovative CNN architecture that relied on a two-level attention mechanism, namely an attention mechanism for the target entities and an attention mechanism for the relationships. Lin et al. [22] adopted an attention mechanism focusing on the different weights of sentences to exploit the informative sentences in a bag and further reduce the noise generated by wrongly annotated sentences. Ji et al. [23] added entity descriptions to a model based on PCNN and sentence-level attention to assist the learning of entity representations, thereby effectively improving the accuracy. Lee et al. [5] developed an end-to-end RNN model, which applied entity-aware attention after a Bi-LSTM and incorporated latent entity types for relation extraction. Owing to the effectiveness of the attention mechanism, it is increasingly being used to address the mislabelling problem introduced by distant supervision.

    2.3 | Pre‐trained language models in relation extraction

Recently, the pre-trained language model BERT has driven considerable advances in various NLP tasks. The first systematic study applying the pre-trained BERT model to relation extraction was reported by [10]. The proposed model incorporated information from the target entities and appended special symbols to mark the positions of the entity pairs in order to highlight the target entities. Soares et al. [24] investigated the effects of different input and output modes of the pre-trained BERT model on the results of relation extraction.

    3 | METHODOLOGY

    3.1 | Task description

Given a set of sentences {x1, x2, …, xn} and two corresponding entities e1 and e2, the goal of our model is to identify the semantic relationship between the head entity and the tail entity.

    3.2 | Pre‐trained model of BERT

The innovation of BERT [6] is that it utilizes the bidirectional transformer [11] for language modelling and pre-trains on a large amount of unannotated corpus for text classification or sentence prediction tasks. The input of the BERT model can be a single text or a text pair, and the representation of each word is the sum of three embeddings, namely the token embedding, segment embedding and position embedding. For the text classification task, the BERT model inserts a [CLS] symbol in front of the text and takes the corresponding output hidden vector of this symbol as the semantic representation of the whole text, which is used for text classification.
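As an aside for readers reproducing this setup, the following minimal sketch (not from the paper; it assumes the HuggingFace transformers library and an illustrative bert-large-uncased checkpoint) shows how the '[CLS]' hidden vector and the per-layer hidden states could be obtained:

```python
# Minimal sketch: obtaining the [CLS] hidden vector from a pre-trained BERT.
# Assumes the HuggingFace `transformers` library; the checkpoint name is
# illustrative, not taken from the paper.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertModel.from_pretrained("bert-large-uncased", output_hidden_states=True)

sentence = "The company fabricates plastic chairs"
inputs = tokenizer(sentence, return_tensors="pt")   # adds [CLS]/[SEP] automatically

with torch.no_grad():
    outputs = model(**inputs)

cls_vector = outputs.last_hidden_state[:, 0, :]   # hidden state of the [CLS] token
all_layers = outputs.hidden_states                # tuple: embeddings + every encoder layer
print(cls_vector.shape)                           # (1, hidden_size)
```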

    The optimization process of BERT is to gradually adjust the model parameters so that the semantic text representation of the model output can depict the nature of the language and facilitate the subsequent fine-tuning for specific NLP tasks.

    3.3 | Model architecture

The pre-training part of our model ultimately adopts the BERT model, and the input is a single sentence, so '[CLS]' is added at the beginning of the sentence and there is no need to add '[SEP]'. To locate the positional information of the two target entities, a special token is also added at the beginning and end of each entity: the head entity is marked by '$' and the tail entity by '#'. For example, consider a sentence with two marked entities, 'company' and 'chairs': 'The company fabricates plastic chairs'. After adding the two special tokens, the original sentence becomes: '[CLS] The $ company $ fabricates plastic # chairs #'.
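A preprocessing step along these lines might look as follows; the helper function and its word-index spans are our own illustration, not the authors' code:

```python
# Illustrative sketch of the entity-marking step described above:
# wrap the head entity with '$' and the tail entity with '#', then prepend '[CLS]'.
def mark_entities(tokens, head_span, tail_span):
    """tokens: list of words; head_span/tail_span: (start, end) word indices, end exclusive."""
    marked = []
    for i, tok in enumerate(tokens):
        if i == head_span[0]:
            marked.append("$")
        if i == tail_span[0]:
            marked.append("#")
        marked.append(tok)
        if i == head_span[1] - 1:
            marked.append("$")
        if i == tail_span[1] - 1:
            marked.append("#")
    return ["[CLS]"] + marked

print(" ".join(mark_entities(
    ["The", "company", "fabricates", "plastic", "chairs"],
    head_span=(1, 2), tail_span=(4, 5))))
# -> [CLS] The $ company $ fabricates plastic # chairs #
```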

The network architecture is detailed in Figure 1. Our model principally comprises the following three modules: (1) dependency-based attention, which considers the dependency of each word on the target entities; (2) utilization of the intermediate layers of BERT, which captures different levels of features; and (3) integration of features, which fuses features of different granularity for the final relation classification.

    3.3.1 | Dependency-based attention

Given a head entity e1, a tail entity e2 and a sentence s, let T_m to T_n denote the BERT input tokens of entity e1, and T_i to T_j denote the BERT input tokens of entity e2. Let h = {h_1, h_2, …, h_n} be the final hidden states produced by the BERT model. The dependencies between each word and the two target entities are obtained using Stanford CoreNLP [25]. Then, a randomly initialized embedding matrix is adopted to map the two dependencies to a first real-valued vector d_i1 and a second real-valued vector d_i2. This embedding matrix is continuously updated during training. Different dependencies (object of a preposition, nominal subject, indirect object etc.) contribute to different degrees to the relation classification, and the attention operation can automatically learn the contribution of each h_i in a sentence. Thus, we utilize an attention module to combine the hidden states of all tokens dynamically:

where d_j ∈ R^(l×d), h ∈ R^(d_w×l), W_a ∈ R^(d×d) and V_a ∈ R^(d×1); d_w is the hidden state size from BERT, d is the size of the dependency embeddings and l is the length of the sentence. W_a, V_a and b_a are learnable weights and V_a^T is the transpose of V_a. d_j represents the dependency vector between each word and the two target entities, and S_j represents the vector representation of the whole sentence based on the dependency between each word and the two target entities.

Finally, we pass S_j through an activation operation and then a fully connected layer, where W_d ∈ R^(d_w×d_w) is the weight of this layer.
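To make the shape bookkeeping concrete, the sketch below gives one plausible PyTorch reading of the dependency-based attention under the dimensions listed above; the additive tanh-then-project form is our assumption, since the exact equations are not reproduced here:

```python
# Hedged sketch of a dependency-based attention layer matching the dimensions above:
# d_j in R^{l x d} (dependency embeddings), h in R^{d_w x l} (BERT hidden states).
# The additive-attention form is an assumption, not the authors' exact equation.
import torch
import torch.nn as nn

class DependencyAttention(nn.Module):
    def __init__(self, dep_dim, hidden_dim):
        super().__init__()
        self.W_a = nn.Linear(dep_dim, dep_dim)        # W_a in R^{d x d}, bias plays the role of b_a
        self.V_a = nn.Linear(dep_dim, 1, bias=False)  # V_a in R^{d x 1}
        self.W_d = nn.Linear(hidden_dim, hidden_dim)  # fully connected layer, W_d in R^{d_w x d_w}

    def forward(self, h, d_j):
        # h:   (batch, seq_len, hidden_dim)  -- final BERT hidden states
        # d_j: (batch, seq_len, dep_dim)     -- embedding of each word's dependency on e1/e2
        scores = self.V_a(torch.tanh(self.W_a(d_j))).squeeze(-1)   # (batch, seq_len)
        alpha = torch.softmax(scores, dim=-1)                      # attention weights
        s_j = torch.bmm(alpha.unsqueeze(1), h).squeeze(1)          # (batch, hidden_dim)
        return self.W_d(torch.tanh(s_j))                           # sentence vector S'_j

# e.g. DependencyAttention(dep_dim=50, hidden_dim=1024)(h, d_j)
```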

    FIGURE 1 Overview of the proposed D-BERT model

3.3.2 | Utilizing intermediate layers of BERT

We adopt two pooling strategies to integrate the intermediate-layer representations of the '[CLS]' token, h_CLS: Concat-Pooling and Attention-Pooling. The corresponding models are named BERT-attention and BERT-concat.

BERT-attention: The representation of '[CLS]' reflects the semantic information of the whole sequence, and different layers of BERT focus on different information about the sequence. Since the attention mechanism can dynamically learn the contribution of each h_CLS^i to the final classification, we utilize a dot-product attention module to combine all informative features in the given intermediate layers effectively. H_CLS represents the final vector representation of '[CLS]'. After an activation operation, a fully connected layer is also applied to H_CLS. The process can be expressed as follows:

where h_CLS ∈ R^(d_w×L), V_CLS ∈ R^(d_w×1) and W_0 ∈ R^(d_w×d_w); L is the number of intermediate layers used and d_w is the hidden state size from BERT. W_0, V_CLS and b_0 are learnable weights.

BERT-concat: We apply a concatenation operation to connect all intermediate representations of '[CLS]'. Then, we also pass H_CLS through an activation operation and a fully connected layer, where W_0 ∈ R^(d_w×Ld_w) (d_w is the hidden state size from BERT).
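Both pooling strategies can be sketched as follows; the module names and the exact attention form are our own illustrative choices, assuming the per-layer '[CLS]' vectors have already been stacked into a (batch, L, d_w) tensor:

```python
# Sketch of the two pooling strategies over intermediate-layer [CLS] vectors.
# h_cls: (batch, L, d_w), where L is the number of intermediate layers used.
# Names and the exact attention parameterization are illustrative assumptions.
import torch
import torch.nn as nn

class ClsAttentionPooling(nn.Module):        # "BERT-attention"
    def __init__(self, hidden_dim):
        super().__init__()
        self.V = nn.Linear(hidden_dim, 1, bias=False)   # learned scoring vector V_CLS
        self.W0 = nn.Linear(hidden_dim, hidden_dim)     # W_0 in R^{d_w x d_w}

    def forward(self, h_cls):                # (batch, L, d_w)
        alpha = torch.softmax(self.V(h_cls).squeeze(-1), dim=-1)   # (batch, L)
        pooled = torch.bmm(alpha.unsqueeze(1), h_cls).squeeze(1)   # (batch, d_w)
        return self.W0(torch.tanh(pooled))

class ClsConcatPooling(nn.Module):           # "BERT-concat"
    def __init__(self, hidden_dim, num_layers):
        super().__init__()
        self.W0 = nn.Linear(num_layers * hidden_dim, hidden_dim)   # W_0 in R^{d_w x L*d_w}

    def forward(self, h_cls):                # (batch, L, d_w)
        flat = h_cls.reshape(h_cls.size(0), -1)                    # concatenate over layers
        return self.W0(torch.tanh(flat))
```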

    3.3.3 | Integration of features

Given the final hidden states from BERT, h = {h_CLS, h_1, h_2, …, h_n}, suppose the hidden states h_m to h_n correspond to entity e1 and h_i to h_j correspond to entity e2. An average operation is used to obtain the vector representation of each of the two target entities. After an activation operation, a fully connected layer is also applied to the two averaged feature vectors; the corresponding outputs are H′_1 and H′_2:

where W_1 ∈ R^(d_w×d_w) and W_2 ∈ R^(d_w×d_w).

H′_CLS represents the sentence-level feature vector of the whole sequence, which gathers the semantic information of all words and belongs to the coarse-grained features. S′_1 and S′_2 are combinatorial features of different syntactic components, which take into account the dependencies with the entity pairs and belong to the fine-grained features. H′_CLS, H′_1, H′_2, S′_1 and S′_2 are concatenated as the final output vector. Finally, the softmax operation is applied over all relation types:

where W_3 ∈ R^(r×5d_w) (r is the defined number of relation types) and p(r|X; θ) is the probability that the input sentence belongs to relation r. During optimization, we adopt the cross-entropy loss function. When the concat method, which yields a higher F1 score than attention, is used to synthesize the intermediate layers of BERT, we call our model D-BERT.
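A compact sketch of this feature-integration and classification step is given below; the concatenation width 5·d_w and the cross-entropy objective follow the text, while the specific layer forms are our assumption:

```python
# Sketch of the final feature integration: concatenate H'_CLS, H'_1, H'_2, S'_1, S'_2
# and classify over r relation types; nn.CrossEntropyLoss applies the softmax internally.
import torch
import torch.nn as nn

class RelationHead(nn.Module):
    def __init__(self, hidden_dim, num_relations):
        super().__init__()
        self.W1 = nn.Linear(hidden_dim, hidden_dim)          # projection of entity 1
        self.W2 = nn.Linear(hidden_dim, hidden_dim)          # projection of entity 2
        self.W3 = nn.Linear(5 * hidden_dim, num_relations)   # W_3 in R^{r x 5*d_w}
        self.loss_fn = nn.CrossEntropyLoss()

    def forward(self, h_cls, h_e1, h_e2, s1, s2, labels=None):
        # h_e1, h_e2: averaged hidden states of the entity tokens, shape (batch, d_w)
        # s1, s2: dependency-based sentence features S'_1 and S'_2, shape (batch, d_w)
        h1 = self.W1(torch.tanh(h_e1))
        h2 = self.W2(torch.tanh(h_e2))
        features = torch.cat([h_cls, h1, h2, s1, s2], dim=-1)   # (batch, 5*d_w)
        logits = self.W3(features)
        if labels is not None:
            return logits, self.loss_fn(logits, labels)
        return logits
```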

    4 | EXPERIMENTS

    4.1 | Data sets

We evaluate our model on the widely used data sets SemEval-2010 Task 8 [26] and KBP37 [27]. The SemEval-2010 Task 8 data set [26] has 18 directional relations and an 'Other' class. It contains 8000 instances in the training set and 2717 instances in the test set. When the directionality of the relationship is considered, each relationship type is divided into two subtypes, namely the forward relationship and the reverse relationship. For instance, Member-Collection contains Member-Collection (e1, e2) and Collection-Member (e2, e1). The KBP37 data set includes 18 semantic relations and a 'no relation' class. Similar to SemEval-2010 Task 8, the relationships are directional, so the actual number of relation types is 37. It contains 17,641 training instances and 3405 test instances. During training, the F1 score was utilized as the evaluation metric for D-BERT.
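For reference, a macro-averaged F1 over the relation classes, excluding the 'Other'/'no relation' class as is common practice on these benchmarks, could be computed roughly as follows (this sketch uses scikit-learn and is not the official task scorer):

```python
# Illustrative F1 computation for relation extraction (not the official scorer).
# Excluding the "Other" class from the macro average mirrors common practice.
from sklearn.metrics import f1_score

def relation_f1(y_true, y_pred, other_label="Other"):
    labels = sorted({l for l in y_true + y_pred if l != other_label})
    return f1_score(y_true, y_pred, labels=labels, average="macro")

gold = ["Member-Collection(e1,e2)", "Other", "Cause-Effect(e2,e1)"]
pred = ["Member-Collection(e1,e2)", "Cause-Effect(e2,e1)", "Cause-Effect(e2,e1)"]
print(round(relation_f1(gold, pred), 3))
```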

    4.2 | Parameter settings

The prominent hyper-parameters involved in the experiment are listed in Table 2. We conduct all experiments with an uncased large BERT model [6] with different weights. Moreover, the dropout operation is applied before each add-norm layer.

    5 | RESULTS

To measure the validity of our method, the following published methods are used as comparison baselines: RNN, MVRNN, CNN + Softmax, CR-CNN, BiLSTM-CNN, Attention-CNN, Position-aware Self-attention, Entity Attention Bi-LSTM, R-BERT and Matching the Blanks.

TABLE 2 Parameter settings

RNN: Zhang et al. [28] utilized a bidirectional recurrent neural network architecture for the relation extraction task.

MVRNN: Socher et al. [1] proposed an RNN model in which each node in the parse tree is assigned a vector and a matrix to grasp the combined feature vectors of phrases and sentences.

CNN + Softmax: Zeng et al. [2] introduced position embeddings of each word relative to the entity pairs. The model concatenated the sentence-level and lexical-level features and fed them into a softmax layer for prediction.

CR-CNN: Nogueira dos Santos et al. [3] mainly improved the CNN model of [2] and designed a margin-based ranking loss, which can effectively diminish the influence of artificial classes.

BiLSTM-CNN: Zhang et al. [29] proposed a model that combined the advantages of the CNN and LSTM, taking the higher-level feature representations obtained by the LSTM as the input of the CNN.

Attention-CNN: Shen et al. [4] made full use of word embeddings, part-of-speech tag embeddings and position embeddings. Attention-CNN introduced an attention mechanism into the CNN to pick up the words that were conducive to the sentence meaning.

Position-aware Self-attention: Bilan et al. [30] applied a self-attention encoder layer and an additional position-aware attention layer to the relation extraction task.

Entity Attention Bi-LSTM: Lee et al. [5] introduced a self-attention that considered contextual information to boost the learning competence of the word representations. At the same time, they added entity-aware attention after the Bi-LSTM to incorporate position features and entity features with the latent entity type.

R-BERT: Wu et al. [10] both located the target entities and incorporated the information from the target entities based on the hidden vectors of BERT's last layer for the relation extraction task.

Matching the Blanks: Soares et al. [24] investigated the effects of different input and output modes of BERT on the results of relation extraction.

The F1 scores of the above methods are presented in Table 3. Results on rows where the model name is marked with a * symbol are reported as published; the row in bold represents the best model and the corresponding F1 score. We can observe that D-BERT significantly outperforms previous baseline methods on SemEval-2010 Task 8. On the KBP37 data set, the performance of the D-BERT model is highly consistent with the Matching the Blanks model, and the F1 score of our model is far higher than those of the other models.

    5.1 | Ablation studies

We have verified that our approach achieves reliable empirical results. On this basis, we design three variants of the model to examine the specific effect of each ingredient on the accuracy of the model.

1. BERT-INTER: We feed the preprocessed sentences into the BERT model and utilize the intermediate layers of BERT to obtain the final hidden vectors of '[CLS]'. The hidden vectors of the target entities and the final hidden vectors of '[CLS]' are concatenated and fed into the softmax layer for classification. When utilizing the intermediate layers of BERT, the model corresponding to the BERT-attention mode is called BERT-INTER-Attention and the model corresponding to the BERT-concat mode is called BERT-INTER-Concat.

TABLE 3 Results for supervised relation extraction tasks

2. BERT-DEPEND: We feed the preprocessed sentences into the BERT model to get all hidden vectors of the last layer and use a dependency-based attention module to combine the hidden vectors of all tokens dynamically. The hidden vectors of '[CLS]' and the target entities, together with the hidden vectors based on dependencies, are concatenated and fed into the softmax layer for classification.

3. BERT-BASELINE: We feed the preprocessed sentences into the BERT model to get the hidden vectors of '[CLS]'. The hidden vectors of '[CLS]' and the hidden vectors of the target entities are concatenated and fed into the softmax layer for classification.

We report the performance of the above three variants in Table 3. We find that D-BERT achieves the highest F1 score among all the methods we consider, while the BERT-BASELINE model performs worst. The results corroborate that both the dependency-based attention and the intermediate layers of BERT make essential contributions to our approach. To further study the effect of the dependency-based attention on relation extraction, we empirically show the F1 score over different epochs on SemEval-2010 Task 8 and KBP37, as indicated in Figure 2. According to Figure 2(a), as training progresses, the F1 score of BERT-DEPEND is strikingly higher than that of BERT-BASELINE on the SemEval-2010 Task 8 data set. According to Figure 2(b), on the KBP37 data set, the F1 score of BERT-DEPEND improves to 0.684 compared with the baseline of 0.672. The study indicates that the proposed dependency-based attention is beneficial: it can effectively filter out meaningless words and learn fine-grained features with pre-trained language models.

Furthermore, we evaluated the effectiveness of utilizing the intermediate layers of BERT, as detailed in Figure 3. According to Figure 3(a), on SemEval-2010 Task 8, the F1 scores of the BERT-INTER-Concat and BERT-INTER-Attention models are improved remarkably compared with BERT-BASELINE. In particular, after adding the intermediate layers of BERT to BERT-DEPEND, the F1 score reaches 0.901, an excellent improvement over the BERT-BASELINE score of 0.892. On the KBP37 data set, utilizing the intermediate layers also improves the performance of BERT-BASELINE, as shown in Figure 3(b): compared with the BERT-BASELINE score of 0.672, the F1 score increases to 0.692. Moreover, both the BERT-INTER-Concat and BERT-INTER-Attention pooling strategies help improve the performance for relation extraction, and their results are extremely close.

The relation extraction task differs from the ordinary text classification task in that it also needs to focus on the two entities. The semantic knowledge of the sentence is essential for relation prediction. Nevertheless, we find that a sentence in the supervised relation extraction data sets contains not only the target entities but also other irrelevant entities. On the one hand, identifying the words associated with the entity pairs in a sentence is a significant breakthrough. The reason the dependency-based attention module enhances the classification accuracy of the model is that it reduces the noise caused by unnecessary words by considering the dependency of each word on the entity pairs. On the other hand, it is not enough to use only the feature vectors of BERT's last layer for classification. Relation extraction needs to focus on both linguistic-level features and semantic-level features, which help to make a more accurate prediction. Our experiments also reveal that features of different granularity are beneficial for excavating the relationships between entity pairs.

    5.2 | Effect of number of intermediate layers

    FIGURE 2 The effect of the dependency-based attention on the SemEval-2010 Task 8 (a) and KBP37 (b)

    FIGURE 3 The effect of utilizing intermediate layers of BERT on the SemEval-2010 Task 8 (a) and KBP37 (b)

TABLE 4 F1 scores of BERT-INTER-Concat and BERT-INTER-Attention with different numbers of intermediate layers of BERT

We analysed the effect of varying the number of intermediate layers of BERT used to capture the sentence-level feature vector of the whole sequence. Table 4 reports the P@4, P@6, P@8, P@10 and P@12 results for the BERT-INTER-Concat and BERT-INTER-Attention models; the row in bold represents the F1 score of the best model. From the table, we can observe that (1) BERT-INTER-Concat performs slightly better than BERT-INTER-Attention, and the difference between their performance is imperceptible. Both BERT-INTER-Concat and BERT-INTER-Attention can integrate the multi-layer representations of the classification token and acquire rich linguistic and semantic features. (2) The experimental performance is best at P@8 and P@10 for the BERT-INTER-Concat and BERT-INTER-Attention models. However, when the number of intermediate layers of BERT grows further, the performance of BERT-INTER-Attention and BERT-INTER-Concat shows almost no improvement; it even drops gradually at P@12 as the number of intermediate layers increases. It could be speculated that the features captured by the deeper intermediate layers of BERT are more basic and abstract, and such features apply to most classification tasks. The results further support our argument that different levels of feature representation from the intermediate layers of BERT are beneficial for capturing the sentence-level feature vector.
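As an illustration of what varying the number of layers means in practice, the fragment below (our own sketch, reusing HuggingFace-style outputs as in the earlier example) collects the '[CLS]' vectors of the top L layers before pooling; taking the top L layers is our assumption about how the layer count is varied:

```python
# Sketch: collect the [CLS] vectors of the top-L layers from a model run with
# output_hidden_states=True (see the earlier example). The "top L layers" choice
# is our assumption about how the layer count is varied in these experiments.
import torch

def top_layer_cls(hidden_states, num_layers):
    # hidden_states: tuple of (batch, seq_len, d_w) tensors, one per layer (plus embeddings)
    cls_per_layer = [layer[:, 0, :] for layer in hidden_states[-num_layers:]]
    return torch.stack(cls_per_layer, dim=1)   # (batch, L, d_w)

# e.g. h_cls = top_layer_cls(outputs.hidden_states, num_layers=8)  # the P@8 setting
```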

    6 | CONCLUSION

Here, a dependency-based attention mechanism is introduced into the BERT architecture, which can learn high-level syntactic features that consider the dependency between each word and the target entities. Besides, we explore the potential of utilizing the BERT intermediate layers to acquire different levels of semantic information and design multi-granularity features for the final relation classification. The experimental data reveal that D-BERT offers a significant advancement over published methods on the widely used data sets.

Future studies will include: (i) extending the D-BERT model to joint entity and relation extraction, question answering systems and so on, and (ii) enriching the representations of the target entities by leveraging the relation triples in a knowledge graph to obtain more background information about the target entities.

    ACKNOWLEDGEMENTS

This study is supported by the National Key Research and Development Programme of China (Grant no. 2016YFB1000905) and the State Key Programme of the National Natural Science Foundation of China (Grant no. 61936001). The authors thank our tutors for their careful guidance, the various scholars and monographs cited here for their heuristic ideas, and the laboratory team for their helpful comments.
