
    Word Sense Disambiguation Model with a Cache-Like Memory Module


    LIN Qian(林 倩), LIU Xin(劉 鑫), XIN Chunlei(辛春蕾), ZHANG Haiying(張海英), ZENG Hualin(曾華琳), ZHANG Tonghui(張同輝), SU Jinsong(蘇勁松)

    School of Informatics, Xiamen University, Xiamen 361005, China

Abstract: Word sense disambiguation (WSD), identifying the specific sense of the target word given its context, is a fundamental task in natural language processing. Recently, researchers have shown promising results using long short term memory (LSTM), which is able to better capture sequential and syntactic features of text. However, this method neglects the dependencies among instances, such as their context semantic similarities. To solve this problem, we propose a novel WSD model that introduces a cache-like memory module to capture the semantic dependencies among instances. Extensive evaluations on standard datasets demonstrate the superiority of the proposed model over various baselines.

    Key words: word sense disambiguation (WSD); memory module; semantic dependencies

    Introduction

    Word sense disambiguation (WSD) aims to accurately identify the specific meaning of an ambiguous word according to particular context. As a fundamental task in natural language processing (NLP), it is beneficial to the studies of many other NLP tasks, such as neural machine translation (NMT), question answering (QA) and sentiment analysis. Therefore, how to construct a high-quality WSD model has attracted much attention in academia and industry.

To achieve this goal, previous studies usually resorted to artificial features containing linguistic and other information. Generally, these models can be grouped into four categories: unsupervised[1-3], supervised[4-5], semi-supervised[6-8] and knowledge-based[9-10] approaches. Recently, with the rapid development of deep learning, the studies of WSD have evolved from conventional feature-engineering based models into neural network architectures. In this line of work, the common practice is to use word embeddings. For example, word embeddings have been leveraged as WSD features in different ways[11]. In addition, recurrent neural networks (RNN), which effectively exploit word order, have been proven to be effective. Some researchers[12-13] mainly focused on long short term memory (LSTM) based WSD models, which can capture the sequential and syntactic patterns of the given sentence and thus achieve competitive performance in this task. Despite their success, previous studies conducted WSD in isolation, while neglecting the semantic dependencies among instances: target words with similar contexts should have the same sense, an assumption that has been adopted in many NLP tasks, such as entity linking[14-15]. As shown in Fig. 1, for the target word dykes, the same word sense appears in similar contexts.

Instance 1: Assuming that magnetizations in the South Mountains granodiorite, Telegraph Pass granite and felsic dykes were acquired before and during ductile extensional deformation, we interpret these data as demonstrating that the South Mountains footwall has not been significantly tilted after mylonitic deformation. Sense id: dyke%1:06:00::

    Instance 2: Similarly, assuming that magnetizations in the microdiorite dykes were acquired during initial stages of brittle deformation, we interpret these data as demonstrating that the South Mountains footwall has not been significantly tilted after the brittle stages of deformation. Sense id: dyke%1:06:00::

    Fig. 1 Two instances of the target word "dykes" sharing the same sense in similar contexts

In this paper, we propose a novel WSD model with a cache-like memory module. It is a significant extension of the conventional LSTM-based WSD model[13]. The introduced cache-like memory module memorizes the sense disambiguation results of other instances of the same target word, and thus provides helpful information for the current WSD. We design this module based on the observation that instances with similar contexts tend to share the same sense disambiguation result. Besides, since the memory can be traced back to training examples, it may help explain the decisions the model makes and thus improve its interpretability, and the memorized values can help improve model accuracy, as verified in studies on other NLP tasks[16-17]. It is worth mentioning that the introduced cache-like memory is composed of key-value pairs: the keys denote the semantic representations of instances, and the values are the corresponding sense disambiguation results. We compute the dot product similarities between the current hidden state and the stored keys in memory. Then, according to these dot product similarities, we summarize the memorized sense disambiguation results as the weighted sum of the values. This summarized vector can be incorporated into the conventional decoder to refine the sense disambiguation result of the current instance. In this way, our proposed model is able to fully exploit the semantic similarities among instances to refine the conventional WSD model. To investigate the effectiveness of our proposed WSD model, we carry out multiple groups of experiments on benchmark datasets. Experimental results and in-depth analysis show that our model outperforms previous WSD models.

The related work mainly includes WSD and memory neural networks. WSD has been one of the hot research topics in the NLP community. More specifically, previous studies on WSD can be roughly classified into the following categories: unsupervised WSD, supervised WSD, semi-supervised WSD, and knowledge-based WSD. Unsupervised WSD is based on the assumption that similar word senses appear in similar contexts. Therefore, studies on unsupervised WSD mainly focus on how to automatically learn the sense tags of target words from unlabeled data. Typical approaches treat sense disambiguation as a clustering problem that aims to group together examples with similar contexts[1-3, 18-19].

Different from unsupervised WSD, supervised WSD mainly uses manually sense-annotated corpora to train a classifier for WSD. Zhong and Ng[4] used a linear kernel support vector machine as the classifier. Shen et al.[5] also trained a multiclass classifier to distinguish sense categories. Experimental results on many datasets demonstrate that these approaches can achieve satisfying performance in this task.

Apparently, it is costly to obtain sense-annotated corpora, so it is hard to extend supervised WSD to new domains. To deal with this problem, many researchers have paid attention to semi-supervised WSD, which can simultaneously exploit both labeled and unlabeled datasets[6-8, 20-25].

Unlike the above-mentioned approaches, knowledge-based WSD mainly resorts to external knowledge resources, such as knowledge bases, semantic networks and dictionaries, to identify the senses of target words[9, 19, 26-33]. However, knowledge-based WSD cannot be widely used because external knowledge resources are scarce for many languages and domains.

Recently, with the rapid development of deep learning, neural network based WSD has attracted increasing attention and such models have become dominant in this task[9, 11, 34-36]. Compared with traditional methods, neural network based models can automatically learn features that are beneficial to WSD. Particularly, some researchers use LSTM networks to capture the relationship between the context and word meaning by modeling the sequence of words surrounding the target word[12-13, 37]. However, all the above work conducts WSD in isolation.

Recently, owing to its ability to store previous results and capture useful history, the memory neural network has been widely used in many NLP tasks, such as language modeling[38-40], QA[41] and NMT[16-17, 42]. To the best of our knowledge, introducing a memory module into WSD is a meaningful attempt: it directly utilizes memorized information from similar examples, and thus makes better use of the semantic dependencies between instances.

The remainder of this paper is organized as follows. Section 1 describes our proposed model, including details of the model architecture and the objective function. Experimental results are presented and analyzed in Section 2, followed by conclusions in Section 3.

    1 Proposed Model

In this section, we describe our proposed WSD model in detail. Our model is a significant extension of the conventional LSTM-based WSD model[13]. However, it is worth noting that the introduced cache-like memory is also applicable to other neural network based WSD models.

    Figure 2 illustrates the model architecture, which is composed of a conventional LSTM-based WSD model and a cache-like memory module.

    1.1 LSTM-based WSD model

    Fig. 2 Architecture of our WSD model

Given a target word xi and its contextual hidden state hi, we introduce a softmax layer to predict the probability distribution over its candidate senses. Formally, we produce the probability distribution of candidate senses as:

    (1)

    (2)
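A minimal PyTorch-style sketch of this prediction step, assuming a BiLSTM encoder as in Ref. [13]; the parameter names and the exact network sizes are illustrative assumptions, not the authors' reported configuration:

```python
import torch
import torch.nn as nn

class BiLSTMWSD(nn.Module):
    """Sketch: encode the context with a BiLSTM and predict a distribution
    over the candidate senses of the target word via a softmax layer."""
    def __init__(self, vocab_size, emb_dim, hidden_dim, num_senses):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               bidirectional=True, batch_first=True)
        # projection from the target word's hidden state to sense scores
        self.sense_proj = nn.Linear(2 * hidden_dim, num_senses)

    def forward(self, token_ids, target_pos):
        # token_ids: (batch, seq_len); target_pos: (batch,) index of x_i
        states, _ = self.encoder(self.embedding(token_ids))
        h_i = states[torch.arange(states.size(0)), target_pos]  # hidden state h_i
        return torch.softmax(self.sense_proj(h_i), dim=-1)      # p(sense | x_i, context)
```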

    1.2 Cache-like memory module

In order to better identify the sense of a target word, we explicitly model the semantic dependencies among instances to refine the neural WSD model. To this end, we introduce a cache-like memory module that memorizes the sense disambiguation results of other instances as an array of key-value pairs. Our basic intuition is that the more similar the context of the current instance is to that of other instances in memory, the closer their word sense disambiguation results should be.

To exploit the cache-like memory information, we summarize the memorized sense disambiguation results as a memory vector mi. Formally, mi is defined as the sum over the values vt weighted by the normalized similarities {st}:

    (3)
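A hedged sketch of this memory read, assuming the keys are the stored hidden states of other instances and the values are their stored sense distributions (the function name and shapes are our own illustration):

```python
import torch

def read_memory(h_i, keys, values):
    """Summarize memorized disambiguation results for the current instance.

    h_i:    (hidden_dim,)              hidden state of the current instance
    keys:   (num_entries, hidden_dim)  stored hidden states of other instances
    values: (num_entries, num_senses)  their stored sense disambiguation results
    Returns m_i, the values weighted by the normalized dot-product similarities.
    """
    scores = keys @ h_i               # dot-product similarity with each stored key
    s = torch.softmax(scores, dim=0)  # normalized similarities {s_t}
    m_i = s @ values                  # m_i = sum_t s_t * v_t
    return m_i
```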

Then we incorporate the memory vector mi into the final output as

    (4)

    (5)

where σ is the sigmoid function, the dynamic weight λ is used to control the effect of the cache-like memory module, and W(3), W(4), and W(5) are learnable parameter matrices. The basic idea behind our strategy is that the same target word in different instances requires different amounts of context to be disambiguated. For one considered instance, if our model is able to retrieve another instance with a similar context from the cache-like memory, it is more reasonable for our model to exploit the disambiguation result of that instance, and vice versa.
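The exact forms of Eqs. (4) and (5) are not reproduced above; a plausible sketch of the gated combination, under the assumption that λ interpolates between the base prediction and a projection of the memory vector (W3 and W4 are treated as vectors here to produce a scalar gate, whereas the paper's W(3)-W(5) are parameter matrices):

```python
import torch

def combine_with_memory(base_logits, h_i, m_i, W3, W4, W5):
    """Hedged sketch of Eqs. (4)-(5), not the paper's exact parameterization:
    a sigmoid gate lambda, computed from the current hidden state and the
    memory vector, controls how much the memory summary m_i contributes to
    the final sense distribution of the current instance."""
    lam = torch.sigmoid(W3 @ h_i + W4 @ m_i)               # dynamic weight lambda
    refined = (1.0 - lam) * base_logits + lam * (W5 @ m_i)  # memory-refined scores
    return torch.softmax(refined, dim=-1)
```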

    1.3 Training objective function

Given a training corpus D, we train the model according to the following cross-entropy objective with parameters θ:

    (6)

where S(xi) is the sense set of the target word xi, and tj(xi) is the jth element of the sense distribution t(xi) for xi. We will describe training details in Section 2.1.2.
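Equation (6) is not reproduced above; a standard cross-entropy objective consistent with this description would read as follows (our reconstruction, with yj(xi) denoting the model's predicted probability of the jth candidate sense):

```latex
J(\theta) = -\sum_{x_i \in D} \; \sum_{j \in S(x_i)} t_j(x_i)\,\log y_j(x_i)
```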

    2 Experiments

    2.1 Setup

2.1.1 Datasets

To evaluate our proposed model, we carry out WSD experiments on the lexical sample tasks of SensEval2[43] and SensEval3[44].

    Table 1 provides the details of experimental data sets, including training set and testing set. Our baseline is a BiLSTM-based WSD model, proposed in Ref. [13].

    Table 1 Details of experimental data sets

We train the proposed WSD model in two steps: pre-training and fine-tuning. In the pre-training step, following Ref. [13], we train a BiLSTM-based WSD model with the hyper-parameter settings presented in Table 2. Please note that we also train the baseline model under the same hyper-parameters to ensure a fair comparison.

Table 2 Hyper-parameter settings at the pre-training stage

2.1.2 Training details

Since our training datasets are not large-scale, we employ dropout to prevent the model from over-fitting. Specifically, we set the dropout rates of both the embeddings and the hidden states to 0.5. Besides, we add Gaussian noise ~N(0, 0.2σi) to the word embeddings of input sentences, where σi is the standard deviation of the ith dimension of the word embedding matrix. In addition, we randomly discard input words with a rate of 0.1 to further alleviate over-fitting, and we use the GloVe vectors to initialize the word embeddings. For the out-of-vocabulary (OOV) words not appearing in the GloVe vocabulary, we directly initialize their embeddings according to the uniform distribution ~U(-0.1, 0.1).
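A sketch of this input-side regularization (our own illustrative implementation; the function and argument names are assumptions):

```python
import torch

def perturb_embeddings(emb, col_std, noise_scale=0.2, word_drop=0.1, training=True):
    """Additive Gaussian noise scaled by each embedding dimension's standard
    deviation sigma_i, plus random word dropping with rate 0.1, as described
    in the training details above."""
    if not training:
        return emb
    # emb: (batch, seq_len, emb_dim); col_std: (emb_dim,) per-dimension std sigma_i
    emb = emb + torch.randn_like(emb) * (noise_scale * col_std)
    # randomly discard some input words by zeroing their embeddings
    keep = (torch.rand(emb.shape[:2], device=emb.device) > word_drop).float()
    return emb * keep.unsqueeze(-1)
```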

We apply the stochastic gradient descent (SGD) algorithm to optimize model training. To balance performance and training speed, we first use a large learning rate at the early stage so that the model can descend quickly along the gradient direction, and then adopt a smaller learning rate at the later stage so that the parameters change slowly toward the optimum. The learning rate is decayed by a factor of 0.96 every 75 steps.
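The decay schedule amounts to a simple step-wise exponential decay; a minimal sketch (the base learning rate shown is an illustrative value, not one reported in the paper):

```python
def learning_rate(step, base_lr=0.1, decay=0.96, decay_every=75):
    """Step-wise exponential decay: multiply the learning rate by 0.96
    every 75 training steps."""
    return base_lr * (decay ** (step // decay_every))
```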

At the fine-tuning stage, we add the cache-like memory module into our WSD model. Note that before fine-tuning, we store the hidden states and sense disambiguation results of all training instances as key-value pairs in the cache-like memory, and these key-value pairs remain fixed during fine-tuning. The hyper-parameters of the cache-like memory module are shown in Table 3. To avoid slow training caused by a small learning rate, we limit the learning rate with a threshold. In addition, we clip the gradients to deal with the gradient explosion problem.
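A sketch of this memory-construction step, assuming the pre-trained model exposes a helper that returns the target word's hidden state and predicted sense distribution (encode_and_predict is a hypothetical name, not from the paper):

```python
import torch

@torch.no_grad()
def build_memory(model, training_instances):
    """Run the pre-trained model over all training instances and store
    (hidden state, sense distribution) key-value pairs; these entries stay
    fixed during fine-tuning."""
    keys, values = [], []
    for token_ids, target_pos in training_instances:
        h_i, probs = model.encode_and_predict(token_ids, target_pos)  # assumed helper
        keys.append(h_i)
        values.append(probs)
    return torch.stack(keys), torch.stack(values)
```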

Table 3 Hyper-parameter settings of the cache-like memory module

2.1.3 Baselines

    We refer to our model as MEM-BiLSTM and compare it with the following baselines.

    (1) 100JHU(R)[45]. It exploits a rich set of features for WSD.

    (2) IMS+adapted CW[34]. It uses a feedforward neural network to incorporate word embeddings into WSD model.

    (3) BiLSTM[13]. It is a commonly-used WSD model, which is based on bi-directional LSTM.

    2.2 Experimental results

2.2.1 Performance

The results of different models measured in terms of F1 score are given in Table 4. Compared with the previous models, our reimplemented baseline achieves better or similar performance on the two datasets, which demonstrates that it is competitive. Furthermore, when the baseline is equipped with our cache-like memory module, our WSD model achieves the best scores on SensEval2 and SensEval3 with varying degrees of improvement. Specifically, on the two datasets, our WSD model outperforms the reimplemented BiLSTM baseline by 0.4 and 0.3 F1 points, respectively, which strongly demonstrates that adding the memory module helps the WSD model.

Table 4 Results for SensEval 2 and SensEval 3 on the English lexical sample task

2.2.2 Generality

To verify the generality of our proposed model, we also train different models using different proportions of the training corpora: 10%, 25%, 50%, 75% and 100%, and report the performance in Table 5. We can observe that as the amount of training data increases, the performance gap between the baseline and our model becomes larger. The underlying reason is that with more training data, our model is able to exploit more similar instances to refine WSD.

    Table 5 Results for SensEval 2 and SensEval 3 on the English lexical sample task

    2.3 Case study

To analyze why our model outperforms the baseline, we compare the WSD results of different models. Figures 3-5 show three examples. We can observe that, in comparison to BiLSTM, our proposed model is able to make correct predictions with the help of the semantically related instances retrieved from the memory. Moreover, we provide the three most similar instances for the target words "argument", "activate" and "hearth" in the last three rows of Figs. 3-5, respectively.

Instance: ... When it fell to Dukes to introduce the second stage of the Bill empowering the referendum, he was forced to address himself specifically to the bishops' arguments in their letter (text, Irish Times, 15 May 1986).
    Reference: argument%1:10:02::
    BiLSTM: argument%1:10:03::
    MEM-BiLSTM: argument%1:10:02::
    The most similar instance 1: ... This has some affinity with the Marxist position. In a published argument between Scholes and Hirsch, the former made the following statement, on the assumption that the conservative Hirsch would disagree with it ...
    The most similar instance 2: ... he accepted China's offer of a seat on the Basic Law Drafting Committee, helping to write Hong Kong SAR China's post-1997 mini-constitution, and was embroiled in more unsuccessful arguments for direct elections, opposed by mainland communists and Hong Kong conservatives ...
    The most similar instance 3: ... Although the banks will begin to present their arguments today, Mr Scrivener said: this court is not concerned with private rights ...

    Fig. 3 The first example

    Fig. 4 The second example

    3 Conclusions

In this paper, we proposed a novel WSD model with a cache-like memory module. As an improvement of the conventional LSTM-based WSD model, our model incorporates a cache-like memory module composed of key-value pairs, where the keys denote the semantic representations of instances and the values are the corresponding sense disambiguation results. We first compute the dot product similarities between the current hidden state and the stored keys in memory. Then, the memory values are summarized into a memory vector according to these similarities, and the induced memory vector is exploited to refine the WSD result of the current instance. Extensive experiments validate the effectiveness of our proposed model.

In the future, we plan to design more effective architectures to better exploit semantic dependencies between instances for WSD. Besides, how to introduce graph neural networks into WSD is also one of our focuses in future research.
