
    A Knowledge-Enriched and Span-Based Network for Joint Entity and Relation Extraction

2021-12-14 09:57:50
    Computers, Materials & Continua, 2021, Issue 7

Kun Ding, Shanshan Liu, Yuhao Zhang, Hui Zhang, Xiaoxiong Zhang*, Tongtong Wu³ and Xiaolei Zhou

¹The Sixty-Third Research Institute, National University of Defense Technology, Nanjing, 210007, China

    ²School of Computer Science and Technology, Southeast University, Nanjing, 211189, China

    ³Faculty of Information Technology, Monash University, Melbourne, 3800, Australia

Abstract: The joint extraction of entities and their relations from text plays a significant role in many natural language processing applications. For entity and relation extraction in a specific domain, we propose a hybrid neural framework consisting of two parts: a span-based model and a graph-based model. The span-based model can tackle overlapping problems, unlike BILOU-based methods, whereas the graph-based model treats relation prediction as graph classification. Our main contribution is to incorporate external lexical and syntactic knowledge of a specific domain, such as domain dictionaries and dependency structures from texts, into end-to-end neural models. We conducted extensive experiments on a Chinese military entity and relation extraction corpus. The results show that the proposed framework outperforms the baselines in terms of entity and relation prediction. The proposed method provides insight into the problem of jointly extracting entities and their relations.

    Keywords: Entity recognition; relation extraction; dependency parsing

    1 Introduction

The extraction of entities and their interrelations is an essential issue in understanding text corpora. Determining the token spans in texts that compose entities and assigning types to these spans (i.e., named entity recognition, NER) [1], as well as assigning relations between each pair of entity mentions (i.e., relation classification, RC) [2,3], are critical steps in obtaining knowledge from texts for further possible applications, such as knowledge graph construction [4], knowledge-based question answering [5], and sentiment analysis [6].

Currently, pre-trained language models such as BERT [7] have achieved outstanding performance in various natural language processing (NLP) tasks, including entity and relation extraction. However, these BERT-based models are not as effective on Chinese specific-domain corpora as they are on English datasets. We argue that this is mainly due to two reasons. First, terminology is common in a specific domain, for example, weapons and equipment nomenclature in the military domain; as a result, important terminological entities that never appear in the open domain are treated as unregistered words and are thus difficult to identify. Second, when used with Chinese corpora, BERT generates embeddings at the character level. However, Chinese words contain more semantic information than characters. As a result, it is difficult for BERT-based models to extract dependency relations between Chinese words, which are important in relation extraction. As in the example shown in Fig. 1, dependency parsing can mine the two pairs of relationships, between "Chinese" and "J-20" and between "American" and "F-22", which contributes to relation extraction.

To address the above-mentioned issues, we propose a novel architecture for joint entity and relation extraction. The key insight of our proposed model is to leverage external lexical and syntactic knowledge to overcome the limitations that BERT-based models encounter during Chinese specific-domain joint extraction. Specifically, lexical knowledge refers to domain dictionaries, such as weapons and equipment nomenclature, which help to improve NER performance on terminological entities. Moreover, syntactic knowledge refers to the dependency structure of the text under consideration. As the result of dependency parsing can be transformed into a tree structure, we further adopt a graph-based model to handle the dependency tree and capture the interaction between relations, which compensates for the inability of existing BERT-based models to extract dependencies and, in turn, contributes to relation extraction.

Figure 1: Examples of dependency trees in the Chinese specific-domain corpus. The dependency parsing results are generated by HanLP [8]. Red arrows indicate dependencies that can contribute to relation extraction. Blue blocks show that domain dictionaries can help recognize terminological entities

Our proposed model incorporates the joint modeling of span-based and graph-based components, taking advantage of the two different structures. More specifically, the span-based component performs entity recognition and relation classification by making full use of a localized, marker-free context representation [9]. As an extension of the previous work in [9], we incorporate a graph-based component into our model using graph neural networks (GNNs) in the relation classification component. This allows the simultaneous classification of entities and relations with higher accuracy. The main contributions of this study are summarized as follows:

    —For the extraction of entities and relations in specific domains, a hybrid framework based on a knowledge-enriched and span-based network is proposed.

    —The dependency structure is incorporated in our model, which can leverage external lexical and syntactic knowledge effectively in Chinese specific-domain joint extraction.

—Comparative experimental results demonstrate that our model outperforms the state-of-the-art model and achieves an absolute 4.13% improvement in the F1 score for relation extraction.

The remainder of this paper is structured into five sections. Related works on entity and relation extraction are presented in Section 2. Section 3 defines the problem and provides details on the architecture of our proposed model for the joint extraction of entities and relations. Our experiments are presented in Section 4, and comparative experiments are implemented in Section 5. Finally, the conclusions are presented in Section 6.

    2 Related Work

Most systems adopt a two-stage pipeline framework for the extraction of entities and relations. First, the entities in a given sentence are recognized using NER. Then, certain classification models are used to test each entity pair [10–12]. This method is easy to implement, and each component can be more flexible, but it lacks interaction between the different tasks, leading to error propagation [13,14].

Unlike pipelined methods, the joint extraction framework focuses on extracting entities together with relations using a single model. Its advantage is that jointly extracting entities and relations captures the inherent linguistic dependencies between relations and entity arguments and mitigates error propagation. Most initial joint models are feature-based structured systems that require complicated feature engineering. For example, Roth et al. [15] investigated a joint inference framework based on integer linear programming to extract entities and relations. Li et al. [16] proposed a transition-based model for simultaneous entity recognition and relation classification. Ren et al. [17] investigated a new domain-independent framework centered on a data-driven text segmentation algorithm for the extraction of entities and relations.

To reduce the manual work in the extraction process, models based on various neural networks, characterized by automatic feature extraction, have been proposed. These models adopt low-dimensional dense embeddings to represent features. Gupta et al. [18] proposed a table-filling multi-task recurrent neural network (RNN) for the joint extraction of entities and relations. Adel et al. [19] introduced globally normalized convolutional neural networks for entity and relation classification. Katiyar et al. [20] presented a novel attention-based long short-term memory (LSTM) network for the joint extraction of entity mentions and relations.

Recently, Hong et al. [21] presented an end-to-end neural model based on graph convolutional networks for jointly extracting entities and relations. Kok et al. [22] provided a brief introduction to named entity (NE) extraction experiments performed on datasets of open-source intelligence and post-training mission analytics. Wang et al. [23] investigated a relation extraction method combining a bidirectional LSTM (Bi-LSTM) neural network, character embeddings, and an attention mechanism to solve military named entity relation extraction. Takanobu et al. [24] later proposed a hierarchical reinforcement learning (HRL) framework to enhance the interaction between entity and relation types. Trisedya et al. [25] adopted an N-gram attention mechanism with an encoder-decoder model for the completion of knowledge bases using distant supervised data.

Despite the great efforts in this field, an open question remains: how to efficiently capture the semantic information in texts, especially in a Chinese specific domain. In this study, a novel framework with a knowledge-enriched and span-based network is proposed for jointly extracting entities and their relations. Compared with other state-of-the-art models, this model improves the F1 score of entity and relation recognition in a specific domain.

    3 Methodology

This section provides details on the implementation of our knowledge-enriched and span-based BERT (KSBERT) network; its overall framework is presented in Fig. 2.

Figure 2: The framework of our KSBERT network

    3.1 Problem Definition

This paper focuses on extracting entities and relations jointly. The input is a sentence $S$ of $N$ tokens, $S = \{t_1, t_2, \ldots, t_N\}$, and the output contains two structures: entity types $E$ and relations $R$. Specifically, we consider two subtasks. First, entity recognition extracts all possible entity spans $s_i = (t_i, t_{i+1}, \ldots, t_{i+k})$ from sentence $S$ and predicts the best entity type $e_i$ for each. Second, relation classification focuses on predicting the relation type $r_{ij}$ between each ordered pair of entities $(s_i, s_j)$.
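To make the task definition concrete, the following is a minimal illustrative instance; the sentence and labels are hypothetical, with entity and relation types taken from the dataset described in Section 4.

```python
# Hypothetical instance of the task: a sentence as a token list,
# entity spans with types, and typed relations between span pairs.
sentence = ["中", "国", "部", "署", "歼", "2", "0"]          # S = {t1, ..., tN}
entities = [((0, 1), "location"), ((4, 6), "equipment")]     # spans s_i with types e_i
relations = [((0, 1), (4, 6), "deploy")]                     # relation r_ij between spans
```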

    3.2 KSBERT

We developed a knowledge-enriched and span-based network, KSBERT, to recognize entities and classify relations jointly from given sentences. Because of the outstanding performance of BERT [7] in natural language processing tasks, the pre-trained BERT model is applied to encode each character into an embedding, with a special classifier token extracting information from the whole sentence. The character embeddings of the sentence are then fed into two models: the span-based model and the graph-based model. The span-based model takes a selected candidate span as input and judges whether or not the span is an entity; if it is, the model predicts its type; otherwise, it filters the span out. The graph-based model incorporates knowledge of a specific domain, and its core is GNNs. Fed with the BERT embeddings, the graph-based model first converts the dependency parsing tree of the sentence into an adjacency matrix, taking embeddings as node labels, dependency relations between words as edge labels, and the relation types of the sentence as graph labels. The adjacency matrix is then fed into the GNNs to predict the classes of the graph, which are the relation types of the given sentences. Finally, three losses, namely the entity recognition loss and relation classification loss of the span-based model and the graph classification loss of the graph-based model, are trained jointly to predict the entity and relation types.

    Next, we introduce the main components of our KSBERT in detail.

Embeddings. By taking advantage of a multi-layer bidirectional Transformer architecture, BERT can encode both left-to-right and right-to-left contexts for word representations, extracting semantic and syntactic information. Given a sentence of $N$ words, the BERT encoder generates an embedding sequence of length $N+1$, $(w_{cls}, w_1, w_2, \ldots, w_N)$, where $w_{cls}$ denotes the special classification embedding capturing information about the whole sentence.
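As a minimal sketch of this step, the snippet below encodes a sentence with a pre-trained Chinese BERT via the HuggingFace transformers library; the bert-base-chinese checkpoint is an assumption, as the paper does not name the exact pre-trained model.

```python
# Minimal sketch: character-level BERT embeddings plus the [CLS]
# (w_cls) embedding; the checkpoint choice is an assumption.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
encoder = BertModel.from_pretrained("bert-base-chinese")

inputs = tokenizer("中国部署歼20", return_tensors="pt")
with torch.no_grad():
    hidden = encoder(**inputs).last_hidden_state  # (1, N+2, 768): [CLS], chars, [SEP]

w_cls = hidden[:, 0]               # special classification embedding
char_embeddings = hidden[:, 1:-1]  # per-character embeddings w_1 .. w_N
```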

Span-based Model. As presented in Fig. 2, the blue block is the span-based model, containing two elements: span classification and span filtering. In contrast to the previous model [9], which takes spans of arbitrary length as input, our model proposes a novel negative sampling method to generate candidate spans for a specific domain. We first build a set $C$ containing as many entities from the given dataset as possible. We then segment the sentences in the dataset using the Jieba toolkit, filtering out words other than nouns. The similarities between the remaining nouns and the entities in set $C$ are computed, and the top $N_e$ most similar ones are selected as negative samples.
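A sketch of this negative-sampling step is shown below, assuming Jieba's part-of-speech mode for noun filtering; the paper does not specify its similarity measure, so plain string similarity from difflib is used here as a stand-in.

```python
# Sketch of noun-based negative sampling; the similarity measure
# (difflib string similarity) is a stand-in assumption.
import difflib
import jieba.posseg as pseg

def candidate_negatives(sentence, entity_set, n_e):
    """Keep nouns from the segmented sentence, rank them by their
    maximum similarity to any entity in set C, return the top n_e."""
    nouns = [p.word for p in pseg.cut(sentence) if p.flag.startswith("n")]
    scored = sorted(
        nouns,
        key=lambda w: max(difflib.SequenceMatcher(None, w, e).ratio()
                          for e in entity_set),
        reverse=True,
    )
    return scored[:n_e]
```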

Using the above method, we obtain the candidate span $(w_i, w_{i+1}, \ldots, w_{i+k})$ and feed it to the span-based model. The BERT embeddings are then combined using max pooling, denoted as $f(w_i, w_{i+1}, \ldots, w_{i+k})$. The output is concatenated with the width embedding $w_{width}$, learned by back-propagation, and the special classification embedding $w_{cls}$.

Then, the concatenation is fed into a softmax function to obtain the logits of the entity type $y^e_i$.
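The span classification step can be sketched in PyTorch as follows; the hidden size, width-embedding dimension, and number of entity types (the seven dataset types plus a none class) are assumptions.

```python
# Sketch of the span classifier: max-pooled span embedding f(...),
# concatenated with the learned width embedding and w_cls, then a
# softmax over entity types. All dimensions are assumptions.
import torch
import torch.nn as nn

class SpanClassifier(nn.Module):
    def __init__(self, hidden=768, width_dim=25, max_width=10, n_types=8):
        super().__init__()
        self.width_emb = nn.Embedding(max_width, width_dim)  # learned by back-propagation
        self.linear = nn.Linear(2 * hidden + width_dim, n_types)

    def forward(self, span_chars, w_cls, width):
        # span_chars: (k, hidden) BERT embeddings of the span's characters
        # w_cls: (hidden,) sentence embedding; width: scalar LongTensor
        pooled = span_chars.max(dim=0).values                 # max pooling f(w_i .. w_{i+k})
        x = torch.cat([pooled, self.width_emb(width), w_cls])
        return torch.softmax(self.linear(x), dim=-1)          # entity-type distribution y_e
```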

Moreover, our span-based model includes a span-filtering function: as given in Eq. (2), if a candidate span is not an entity, our model classifies it as the none type.

The embeddings of entity spans $s_i$ and $s_j$ extracted by the span classifier are then combined with the context $c_{s_i,s_j}$, which ranges from the end of entity $s_i$ to the beginning of entity $s_j$, to obtain the relation representation.

    Generally, as relations are asymmetric, two relation representations exist between two entities.

    Relation representations are fed to a fully connected layer and activated by a sigmoid function to perform relation classification.
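A corresponding sketch of the relation classifier, assuming the span and context embeddings share BERT's hidden size:

```python
# Sketch of relation classification: span embeddings s_i and s_j
# concatenated with the context c_{si,sj} between them, then a fully
# connected layer with a sigmoid. Sizes are assumptions.
import torch
import torch.nn as nn

class RelationClassifier(nn.Module):
    def __init__(self, hidden=768, n_relations=3):
        super().__init__()
        self.fc = nn.Linear(3 * hidden, n_relations)

    def forward(self, s_i, context, s_j):
        # Relations are asymmetric, so this is called once per ordered
        # pair (i, j) and once per (j, i).
        x = torch.cat([s_i, context, s_j])
        return torch.sigmoid(self.fc(x))
```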

Graph-based Model. As the previous models cannot address challenges in specific domains, we add the graph-based model to our KSBERT to introduce external knowledge, particularly dependencies, which play an important role in capturing relations among words. In the graph-based model (the orange block in Fig. 2), we treat the relation classification task as graph classification of dependency trees; that is, given a set of graphs (dependency trees) $\{G_1, G_2, \ldots, G_n\}$ and their labels (relation types) $\{y_1, y_2, \ldots, y_n\}$, we aim to learn a representation vector $h_G$ to predict the label of each graph. Given a sentence $S_i$, we use HanLP [8] to obtain its dependency tree and then convert the tree into an adjacency matrix to obtain the input graph $G_i$. The graph is then fed into a Graph Isomorphism Network (GIN) implemented in CogDL. GIN is a variant of GNN proposed by Xu et al. [26] that is good at representing different multisets. The representation vector of the entire graph, $h_G$, is learned by recursively aggregating the feature vectors of node neighbors.
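The graph construction and a single GIN-style layer can be sketched as below; rather than assuming CogDL's exact API, the aggregation is written directly in PyTorch, with the dependency tree given as one head index per token, as a parser such as HanLP would provide.

```python
# Sketch: dependency tree -> adjacency matrix -> one GIN-style layer.
# The actual model uses the CogDL GIN implementation; this is a
# simplified single-layer illustration.
import torch
import torch.nn as nn

def tree_to_adjacency(heads):
    """heads[i] is the index of token i's head word (-1 for the root)."""
    n = len(heads)
    adj = torch.zeros(n, n)
    for i, h in enumerate(heads):
        if h >= 0:
            adj[i, h] = adj[h, i] = 1.0
    return adj

class GINLayer(nn.Module):
    def __init__(self, hidden=768):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon from GIN
        self.mlp = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))

    def forward(self, x, adj):
        # GIN update: h' = MLP((1 + eps) * h + sum of neighbor features)
        return self.mlp((1 + self.eps) * x + adj @ x)

# h_G can then be obtained by sum-pooling node features over the graph.
```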

Similar to the span classifier, the representation vector $h_{G_i}$ is fed into a softmax layer:

Final Prediction. The final prediction is a joint training process, and the loss function can be given as

$$\mathcal{L} = f(\gamma_e, \gamma_r, \gamma_g)$$

where $\gamma_e$ denotes the entity classification loss from the span-based component, $\gamma_r$ refers to the relation classification loss from the span-based model, and $\gamma_g$ represents the graph classification loss from the graph-based model. $f(\cdot)$ is a linear function.
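A minimal sketch of the joint objective, with equal weights as an assumption since the paper does not report the coefficients of the linear function:

```python
# Sketch of the linear combination f(gamma_e, gamma_r, gamma_g);
# the weights are assumptions.
def joint_loss(gamma_e, gamma_r, gamma_g, weights=(1.0, 1.0, 1.0)):
    """Linear function f(.) over the three component losses."""
    return (weights[0] * gamma_e
            + weights[1] * gamma_r
            + weights[2] * gamma_g)
```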

    4 Experiments

This section describes our experiments in detail. First, the dataset is briefly introduced. Next, the metrics used in this study are presented. Then, the experimental settings are tabulated. Three baselines are compared with the proposed model to illustrate its superiority. Finally, we analyze the results in detail.

    4.1 Dataset

As artificial construction features are insufficient in a specific domain, such as the military field, and Chinese word segmentation errors are inevitable, we focus on jointly extracting entities and relations in the Chinese military field. Owing to the lack of a dataset, we built one ourselves. After crawling several representative military news sites, we obtained 840,000 articles. Based on military-related keywords, we filtered out articles that were not closely related to the military or from which military relations could not be extracted. Ultimately, 85,000 articles were obtained in total. Because labeling a dataset in a specific domain is time-consuming and requires rich domain knowledge, we invited military experts to label the entity locations, entity types, and relation types for approximately 300 articles. Fig. 3 presents an example of this human-labeled data. When labeling, each entity span in a given article is assigned a unique ID. Experts then mark the beginning and end positions of the entity span in the article and judge its entity type. The relation between two entities is also labeled with a relation type, head entity ID, and tail entity ID. There are seven entity types (equipment, person, organization, location, military activity, title, and engineering for preparedness against war) as well as three relation types: deploy, have, and locate.

For articles that were not manually labeled, we designed regular templates together with experts to label the dataset automatically; the steps are shown in Fig. 4. After analyzing 100 articles randomly selected from the corpus, we designed regular expressions for the predefined entities and relations. The regular expressions were then tested on the human-labeled dataset. If the accuracy of the matched entities and relations exceeded a threshold, the regular template was considered credible and was applied to the automatic data labeling process. In the end, all labeled data were randomly split into a training set and a test set at a ratio of 10:1.
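As an illustration of this template-based labeling, the sketch below applies a single hypothetical regular expression for the deploy relation; it is not one of the paper's actual templates.

```python
# Hypothetical regular template for the "deploy" relation; the
# pattern is illustrative only.
import re

DEPLOY = re.compile(r"(?P<org>\S+?部队)在(?P<loc>\S+?)部署了(?P<eqp>\S+?)。")

def label_with_template(text):
    """Return (head entity, relation, tail entity) triples."""
    triples = []
    for m in DEPLOY.finditer(text):
        triples.append((m.group("org"), "deploy", m.group("eqp")))
        triples.append((m.group("eqp"), "locate", m.group("loc")))
    return triples
```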

Figure 3: An example of the dataset

Figure 4: The steps of designing regular templates to label the military corpus

    4.2 Metrics

    In this study, three commonly used metrics, precision, recall, and F1 score, are adopted to evaluate model performance.

First, with true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN), precision, recall, and F1 score can be computed as follows:

$$P = \frac{TP}{TP + FP}, \qquad R = \frac{TP}{TP + FN}, \qquad F1 = \frac{2 \times P \times R}{P + R}$$

where $P$ stands for precision and $R$ for recall. Note that F1 is the harmonic mean of the other two metrics.

    In our experiments, a relation is considered correct only if the relation type and the two related entities are correctly predicted.

    4.3 Experiment Settings

The parameter settings of the training process are listed in Tab. 1, along with the parameter names and their descriptions. The determination of these parameters was not a major concern in this study. In realistic applications, however, these values can be estimated based on historical data, expert elicitation, or experiments.

    4.4 Baselines

    To illustrate the superiority of our proposed method, we compared the model to three other competitive baselines, as follows:

SpERT: A span-based model for the joint extraction of entities and relations proposed by Eberts et al. [9]. In contrast to BILOU-based models, SpERT searches over all spans in the given sentences with span filtering and a localized context representation and can identify overlapping entities efficiently. Note that BILOU is a common token-tagging scheme in NLP.

NN_GS: A joint model extracting entities and relations based on a novel tagging scheme proposed by Zheng et al. [27]. This model converts the joint extraction task into a tagging problem. Thus, neural networks can easily be used to model the joint extraction task without complicated feature engineering.

DYGIE: A joint framework proposed by David et al. [28]. This model extracts entities and relations by enumerating, refining, and scoring text spans designed to capture local (within-sentence) and global (cross-sentence) contexts.

    4.5 Results

Tab. 2 presents the evaluation results of the models on our dataset. The first column is the index, and the second column is the model name. The third column presents the accuracy scores of the graph-based models when performing graph classification. The fourth and fifth columns show the results of entity prediction and relation classification, respectively.

Table 1: Parameter settings used in the KSBERT model

Our model was compared with three other models. More specifically, the model in Row 1 is a span-based model without a graph-based component, whereas Rows 2 and 3 refer to two novel hybrid models. As shown in Tab. 2, KSBERT performs best in relation extraction in terms of the F1 score, followed by DYGIE, SpERT, and NN_GS. Although the NN_GS model performs well in entity extraction compared with the other models, it cannot improve the relation prediction F1 score. Comparing the results of Rows 1 and 4, we can see that our KSBERT model, which applies GIN [26] in the graph-based component, performs well in both entity recognition and relation classification, indicating that incorporating external knowledge through the graph-based component can contribute to entity and relation extraction.

Table 2: The evaluation results of different models on our dataset

    5 Discussion

    5.1 Ablation Study

    To evaluate the performance of the different components of our proposed model, we further conducted ablation studies.

As shown in Tab. 3, the results of our proposed model are presented in the first row, and its ablations are listed below. It is clear that both the graph classification and domain dictionary components contribute to the model's performance. More specifically, removing graph classification results in a performance drop of 0.32% and 4.13% in F1 score for entity and relation extraction, respectively. Removing the domain dictionary results in a relatively smaller drop of 0.28% and 2.56% in F1 score for entity and relation extraction, respectively. In other words, graph classification contributes more to the performance improvement than the domain dictionary, but both components are indispensable.

Table 3: Ablation analysis results

    5.2 Comparison of Joint Training Methods

To determine the most efficient method of jointly training the span-based and graph-based models, we conduct a comparative analysis in this section. There are three candidate approaches: adding the relation classification loss of the span-based model $\gamma_r$ and the graph classification loss $\gamma_g$, multiplying $\gamma_r$ and $\gamma_g$, or using a linear function to combine $\gamma_r$ and $\gamma_g$. The results are shown in Tab. 4; the add and linear-function methods can perform joint training correctly, whereas the multiply method cannot. Moreover, with the linear function, the model obtains higher F1 scores of 76.07 and 60.56 in entity recognition and relation classification, respectively. In other words, the linear-function method outperforms the other two methods in terms of F1 score for both entity and relation extraction. Therefore, the linear-function method was adopted in our KSBERT model.

Table 4: The evaluation results of different joint training methods

    5.3 Comparison of Aggregation of BERT Character Embeddings

A pre-trained BERT encoder can generate only character embeddings in Chinese, but Chinese words may contain more information than characters. To obtain word embeddings from BERT-encoded character embeddings, we compared two methods: sum and average. For each word, the sum method adds all embeddings of the characters in the word to form the word embedding, whereas the average method divides this sum by the number of characters in the word.
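Both aggregation strategies can be sketched in a few lines, assuming the character embeddings of one word are stacked into a single tensor:

```python
# Sketch of the two aggregation methods compared in Tab. 5.
import torch

def word_embedding(char_embs: torch.Tensor, method: str = "average"):
    """char_embs: (num_chars, hidden) BERT embeddings of one word."""
    summed = char_embs.sum(dim=0)          # sum method
    if method == "sum":
        return summed
    return summed / char_embs.size(0)      # average: sum / character count
```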

Tab. 5 presents the evaluation results of these two methods. As can be seen, the sum method achieves higher precision. However, the average method performs better than the sum method on all other metrics, especially F1 score, indicating its efficiency in fully extracting the information of each character and representing the word.

Table 5: The evaluation results of different BERT character embedding aggregation approaches

    6 Conclusion

In this paper, we propose a hybrid framework based on a knowledge-enriched and span-based network for the joint extraction of entities and their relations in a specific domain. With our KSBERT network, dependency relations and domain dictionaries, as external syntactic and lexical knowledge, can be incorporated into relation prediction, which is essential for improving performance. Extensive experiments were conducted on a military entity and relation extraction corpus. The results show that our proposed model outperforms other state-of-the-art approaches in terms of F1 score and may be a promising approach for future research. It should be further noted that our proposed model can be applied to other domains with slight modifications.

In the future, our model can easily be extended by allowing for richer assumptions. As for future research directions, integrating other knowledge, such as part-of-speech tags, into our framework can be considered. Addressing these and other challenges will help extend this method to other problems.

Funding Statement: The research was supported by the Jiangsu Province "333" project BRA2020418, the NSFC under Grant Number 71901215, the National University of Defense Technology Research Project ZK20-46, the Outstanding Young Talents Program of National University of Defense Technology, and the National University of Defense Technology Youth Innovation Project.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
