
    Convolutional Multi-Head Self-Attention on Memory for Aspect Sentiment Classification

2020-08-05 09:40:04
IEEE/CAA Journal of Automatica Sinica, 2020, Issue 4

Yaojie Zhang, Bing Xu, and Tiejun Zhao

Abstract—This paper presents a method for aspect-based sentiment classification tasks, named the convolutional multi-head self-attention memory network (CMA-MemNet). It is an improved model based on memory networks, and makes it possible to extract richer and more complex semantic information from sequences and aspects. To fix the memory network's inability to capture context-related information at the word level, we propose utilizing convolution to capture n-gram grammatical information. We use multi-head self-attention to compensate for the memory network ignoring the semantic information of the sequence itself. Meanwhile, unlike most recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) models, we retain the parallelism of the network. We experiment on the open datasets SemEval-2014 Task 4 and SemEval-2016 Task 6. Compared with several popular baseline methods, our model performs excellently.

I. Introduction

Aspect-based sentiment analysis (ABSA) [1]–[3] is a detailed sentiment analysis task which aims to analyze the sentiment polarity (positive, negative, or neutral) expressed towards different aspects of the same text. In many cases, we need to focus not only on the overall sentiment in product reviews, as in ordinary sentiment analysis (SA) tasks, but also on more detailed and in-depth sentiment expressions. The sentiment expressed towards different aspects of a sentence may differ. For example, in the sentence "Good performance, but too little battery power.", there is a positive attitude towards "performance" but a negative attitude towards "battery". This task is important and challenging, and many shared-task studies have been conducted in recent years, such as SemEval-2014 Task 4 [3], SemEval-2015 Task 12 [4], and SemEval-2016 Task 5 [5]. ABSA tasks are generally divided into aspect extraction (AE) subtasks [6] and aspect sentiment classification (ASC) subtasks [7]. With the development of a series of related studies, the task definition of ABSA has become more complete. It is divided into three parts [8]: opinion target extraction (OTE), aspect category detection, and sentiment polarity (SP). This paper mainly studies the SP task; that is, given a sentence with some aspects, how one analyzes the sentiment polarity of the aspects in the sentence. SP/ASC can be divided into two types: aspect-category sentiment analysis (ACSA) and aspect-term sentiment analysis (ATSA) [9]. The main difference between them is that ACSA groups the many kinds of targets to be analyzed into several categories and identifies the sentiment polarity of each aspect category in the sentence, whereas ATSA directly identifies the sentiment polarity of the targets being analyzed, whose categories are uncertain. This paper studies both tasks.

Early research used traditional methods based on rules [10] or statistics [11]. A support vector machine (SVM) with external resources [12] is one of the most successful methods, but its performance depends heavily on the construction of artificial features. Target-dependent (TD)-LSTM (long short-term memory) and target-connection (TC)-LSTM [13] take the prediction target as the central word and build two LSTMs, one running left to right and one right to left. Considering that using only an LSTM causes information loss when processing long sequences, attention-based LSTM with aspect embedding (ATAE-LSTM) [7] uses an aspect-related attention mechanism. However, these LSTM-based methods always find it difficult to integrate statements whose important features are dispersed across the sequence. For example, in the sentence "Everything except memory is terrible.", "except" and "terrible" have a positive effect on the word "memory". Reference [14] first applied memory networks to ABSA and achieved good results. The memory network has strong aspect-sequence modeling ability, but it loses context-related information beyond the word level and lacks the modeling of complex semantic expression. Although multilayer attention can alleviate this defect, it only focuses on the semantic relationship between the aspect and the sequence, and ignores the semantic relationships among the words of the sequence itself. There have been many subsequent memory-based improvements in ABSA tasks [15]–[18], and they have all achieved good results, but most lose network parallelism.

To solve the aforementioned problems, we propose using convolution to integrate text features of words and multi-words, and a multi-head self-attention transformer [19] encoder, instead of a recurrent neural network (RNN), to extract semantic information from the sequence. The output of the encoder is then used as memory. Convolutional multi-head self-attention was first proposed in the hierarchical convolutional attention network (HCAN) [20], a hierarchical feature extraction method for document-level text classification. Finally, we classify the aspect's sentiment polarity with the help of an aspect-oriented memory network. In this way, the model considers long-term dependence between aspect words and the sequence through aspect attention, context-related information beyond the word level through convolution, and the semantic relations of the sequence itself through self-attention. It is an improved model based on memory networks, and makes it possible to extract more complex and richer semantic information from sequences and aspects. The whole model retains the parallelism of network computing. Each component is differentiable and can be trained end-to-end with gradient descent. We evaluate our approach on four typical datasets: three from SemEval-2014's laptop and restaurant review datasets [3], and one from SemEval-2016's tweets dataset [21]. We apply the datasets to ACSA and ATSA tasks respectively. The experimental results show that our model performs well on different types of data for both tasks.

The rest of this paper is organized as follows. Section II introduces our method in detail. Section III presents our experimental results and analysis on open datasets. Section IV summarizes our work and outlines future directions.

II. Method

In this section, we introduce our method for the ACSA and ATSA tasks. The ACSA task is defined as: given a sentence and an aspect category, the model predicts the sentiment polarity (positive, negative, or neutral) of the sentence towards that aspect category. The ATSA task is defined as: given a sentence and an aspect (usually one or more words) that appears in the sentence, the model predicts the sentiment polarity of the sentence towards that aspect. The overall structure of the model is shown in Fig. 1.
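The difference between the two task settings can be illustrated with toy instances (the field names and the ACSA sentence are ours, chosen for illustration; only the ATSA sentence comes from the paper):

```python
# ATSA: the aspect is a term that literally appears in the sentence.
atsa_example = {
    "sentence": "Good performance, but too little battery power.",
    "aspect": "battery",          # a term occurring in the sentence
    "polarity": "negative",
}

# ACSA: the aspect is a predefined category that need not appear verbatim.
acsa_example = {
    "sentence": "The fish was fresh but the waiter was rude.",
    "aspect": "service",          # one of a fixed set of categories
    "polarity": "negative",
}
```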

    A. Embedding
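Section III-B states that the model uses 300-dimension pre-trained GloVe word embeddings (not fine-tuned) together with randomly initialized position embeddings. A minimal numpy sketch, assuming the two embeddings are simply summed (the paper's exact combination may differ):

```python
import numpy as np

rng = np.random.default_rng(0)

vocab_size, max_len, dim = 1000, 50, 300
# Stand-in for the pre-trained GloVe table; frozen during training.
word_emb = rng.normal(size=(vocab_size, dim)) * 0.1
# Randomly initialized position embeddings, one per position.
pos_emb = rng.normal(size=(max_len, dim)) * 0.1

def embed(token_ids):
    """Map a token-id sequence to word-plus-position vectors."""
    n = len(token_ids)
    return word_emb[token_ids] + pos_emb[:n]

x = embed([5, 17, 42])
print(x.shape)  # (3, 300)
```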

B. Convolution Operation
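The convolution described in Section I slides a window over the embedded sequence to collect n-gram features (Section III-B uses 300 filters and discusses window sizes of 1–3). A numpy sketch, assuming valid-mode convolution with a ReLU activation (the paper's padding and activation details may differ):

```python
import numpy as np

def ngram_conv(X, W, b):
    """Valid-mode 1-D convolution over a sequence of embeddings.
    X: (seq_len, dim), W: (window, dim, filters), b: (filters,).
    Returns (seq_len - window + 1, filters) n-gram feature vectors."""
    window = W.shape[0]
    out = []
    for i in range(X.shape[0] - window + 1):
        patch = X[i:i + window]                    # the i-th n-gram of embeddings
        out.append(np.einsum("wd,wdf->f", patch, W) + b)
    return np.maximum(np.stack(out), 0.0)          # ReLU is our assumption

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 300))                     # 10 tokens, 300-d embeddings
W = rng.normal(size=(3, 300, 300)) * 0.01          # window 3, 300 filters
feats = ngram_conv(X, W, np.zeros(300))
print(feats.shape)  # (8, 300)
```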

    C. Multi-Head Self-Attention
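The multi-head self-attention follows the transformer encoder of [19]. A numpy sketch of standard scaled dot-product multi-head self-attention (the head count and parameter shapes here are illustrative, not the paper's reported configuration):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(X, Wq, Wk, Wv, Wo, heads):
    """Transformer-style multi-head self-attention over X: (seq_len, dim)."""
    n, d = X.shape
    dk = d // heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Split the model dimension into heads: (heads, seq_len, dk).
    split = lambda M: M.reshape(n, heads, dk).transpose(1, 0, 2)
    Qh, Kh, Vh = split(Q), split(K), split(V)
    scores = softmax(Qh @ Kh.transpose(0, 2, 1) / np.sqrt(dk))  # (heads, n, n)
    # Concatenate heads back and apply the output projection.
    out = (scores @ Vh).transpose(1, 0, 2).reshape(n, d)
    return out @ Wo

rng = np.random.default_rng(2)
d, n = 300, 8
Ws = [rng.normal(size=(d, d)) * 0.05 for _ in range(4)]
Y = multi_head_self_attention(rng.normal(size=(n, d)), *Ws, heads=6)
print(Y.shape)  # (8, 300)
```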

III. Experiments

    A. Datasets

Our experiments used four open datasets, two for aspect-category sentiment analysis (ACSA) tasks and two for aspect-term sentiment analysis (ATSA) tasks. Table I shows the statistics of the datasets. Res-ACSA, Res-ATSA, and Lap-ATSA are customer comments on restaurants and laptops provided by SemEval-2014 Task 4 (http://alt.qcri.org/semeval2014/task4/) [3], and Tweet-ACSA is tweets provided by SemEval-2016 Task 6 (http://alt.qcri.org/semeval2016/task6/) [21].

The Res-ACSA dataset contains customer evaluations of five aspect categories, namely "misc", "food", "service", "price", and "ambience". Res-ATSA is the same as Res-ACSA, but each sentence contains the customer's evaluation of specific terms. Lap-ATSA consists of customers' evaluations of specific terms of laptops. Some existing work [9] on the three SemEval-2014 datasets removed "conflict" labels. Tweet-ACSA contains users' sentiment expressions on five topics: "feminist movement", "hillary clinton", "climate change is a real concern", "legalization of abortion", and "atheism". We divide the sentiment of the four datasets into three categories: "positive", "negative", and "neutral".

TABLE I Statistics of the Datasets

    B. Experimental Setting

In our experiments, we use 300-dimension word embedding vectors pre-trained by GloVe (http://nlp.stanford.edu/projects/glove/) [22], which are trained from web data with a vocabulary size of 1.9M. Word embedding vectors are not fine-tuned during training. Position embedding vectors are randomly initialized. The number of convolution filters is 300. We set the learning rate to 7×10⁻⁵ and the L2 regularization coefficient to 1×10⁻⁵. We set dropout to 0.2. We discuss the window size and the number of hops in detail later. In order to learn semantic information from easy to difficult and to reduce zero-padding, we sort the training data by sentence length and let the network learn short sentences before long ones. The batch size is 20 instances and the maximal number of epochs is 40. We randomly sampled 20% of the training data as the dev set, saved the parameters of the best-performing model on the dev set, and then calculated the evaluation on the test set.
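The data pipeline just described — a random dev split followed by length-sorted training batches to reduce padding — can be sketched as follows (function and field names are ours; the paper samples 20% for the dev set and uses batches of 20):

```python
import random

def make_batches(data, batch_size=20, dev_ratio=0.2, seed=0):
    """Split off a random dev set, then sort the remaining training
    examples short-to-long and cut them into fixed-size batches."""
    rng = random.Random(seed)
    data = data[:]
    rng.shuffle(data)
    n_dev = int(len(data) * dev_ratio)
    dev, train = data[:n_dev], data[n_dev:]
    # Short sentences first, so early batches are easy and lightly padded.
    train.sort(key=lambda ex: len(ex["sentence"].split()))
    batches = [train[i:i + batch_size] for i in range(0, len(train), batch_size)]
    return batches, dev

toy = [{"sentence": "word " * (i % 7 + 1)} for i in range(100)]
batches, dev = make_batches(toy)
print(len(batches), len(dev))  # 4 20
```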

    C. Baselines

    In experiments, we compare our proposed model with the following models:

1) Feature+SVM: A feature-based SVM shows good performance on aspect sentiment classification. The system uses n-gram, parse, and lexicon features [12].

2) LSTM: A standard LSTM [23] encodes a sentence from the first word to the last, and the average of all hidden states is regarded as the final representation. For different aspects in a sentence, the model gives the same sentiment polarity.

3) TD-LSTM: It uses two LSTMs running from the left and the right towards the term words respectively [13]. It then takes the hidden states of the LSTMs at the last time step to represent the features for prediction.

4) ATAE-LSTM: An aspect sentiment classification method using attention-based LSTM [7]. The model concatenates the aspect embedding with the embedding of each word, feeds them to an LSTM, and then passes the result through an attention layer.

5) IAN: The interactive attention network (IAN) [24] runs two LSTMs, one over the aspect embeddings and one over the word embeddings, and uses the average-pooled result of each as the query vector for the other's attention.

6) MemNet: This applies attention multiple times on word embeddings and feeds the last attention layer's output to a softmax for prediction [14].

7) GCAE: The gated convolutional network [9] is an efficient CNN-based model. It applies two convolutions with different activation functions to the embeddings and uses their results to construct gated Tanh-ReLU units.
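The gated Tanh-ReLU unit that GCAE builds from its two convolution branches can be sketched elementwise as follows (a simplified illustration of the gating idea, not the authors' code; in GCAE the ReLU branch additionally receives the aspect vector):

```python
import numpy as np

def gated_tanh_relu(sent_conv, aspect_conv):
    """A tanh branch carries sentiment features; a ReLU branch,
    computed from the aspect-aware convolution, gates them elementwise.
    Wherever the gate is zero, the sentiment feature is suppressed."""
    return np.tanh(sent_conv) * np.maximum(aspect_conv, 0.0)

s = np.array([1.0, -2.0, 0.5])   # sentiment-branch convolution outputs
a = np.array([0.5, 1.0, -3.0])   # aspect-branch convolution outputs
gated = gated_tanh_relu(s, a)
print(gated.shape)  # (3,)
```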

    D. Main Result

For model comparability, we evaluate our model's accuracy [9], [14], [24] and macro-averaged F-score. CMA-MemNet achieves the best performance compared with the baselines on all four datasets. Conv-MemNet only uses convolution, while MA-MemNet only uses multi-head self-attention on the embeddings. The results for the ATSA task are shown in Table II, and those for ACSA in Table III.

TABLE II Experimental Results for ATSA. Models Marked "1" Are Provided by [18], "2" by [15], "3" by [9], and "4" by [25]

TABLE III Experimental Results for ACSA (Without TD-LSTM and IAN). The Meaning of the Markup Is the Same as in Table II

As can be seen from Tables II and III, SVM provides a relatively strong machine learning baseline with outstanding performance on ABSA tasks. However, its performance depends strongly on feature engineering and an effective vocabulary, and it falls behind neural networks when there are not enough features. LSTM networks have advantages over most networks in sequence modeling and do not need manually extracted features to generate effective feature representations. Among all LSTM-based methods, the standard LSTM is the worst, mainly because it ignores aspect information. ATAE-LSTM pays close attention to the expression of sentiment towards the aspect in the sequence and achieves a significant improvement, especially on the Res-ATSA dataset, where accuracy improves by 2.95%. IAN is the best LSTM-based method for ATSA tasks, mainly because it utilizes the strong sequence modeling ability of LSTM and combines the information of the aspect influencing the sequence and of the sequence influencing the aspect. It is 5.64% more accurate than LSTM on the Lap-ATSA dataset.

MemNet is an excellent network for ACSA tasks. It beats all baselines on the Res-ACSA and Tweet-ACSA datasets, and its accuracy on the Res-ATSA dataset is only 0.44% lower than that of IAN. Compared with MemNet, Conv-MemNet additionally collects context information and MA-MemNet collects the semantic relevance of the sequence itself, and both are improvements. This proves that this kind of semantic information is effective in improving performance. We can conclude that MemNet has a strong aspect-sequence modeling capability, but lacks context and sequence information, which limits its performance. CMA-MemNet combines this information well while retaining the original information.

    E. Effects of Window Size and Hops

As shown in Table IV, we take the Lap-ATSA dataset as an example to illustrate the effect of the convolution window size and the number of memory network hops on model performance. The window size affects the length of the contextual semantic information extracted by the network. The number of hops is the number of aspect-attention layers, which affects the abstraction of the semantic information captured by the network. The experimental results show that the impact of the window size and the number of hops on network performance is not monotonic. The optimal values often differ across datasets.

TABLE IV Effect of the Convolution Window Size and the Number of Hops on Network Accuracy for Lap-ATSA

We find that accuracy is highest on the Lap-ATSA dataset when the window size is 3 and the number of hops is 2. Regarding why more than one hop is needed, [14] explains that it is necessary for extracting deeper semantic information. In the experiments of MemNet, the model works best with 7 hops. Our memory is not built directly on word embeddings: the model has already extracted deep semantic information through convolutional multi-head self-attention, so fewer hops are needed. When the window size is 1, it is equivalent to attending only to word-level information. When the window size is too large, the network is easily affected by unrelated noise within the same window.
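The multi-hop aspect attention whose depth is tuned here can be sketched as an iterated query update over the memory (an illustrative memory-network-style loop; the paper's exact update and scoring function, e.g. its (9), may differ):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def multi_hop_attention(memory, aspect_vec, hops, W):
    """At each hop, the aspect query attends over the memory slots,
    and the attended summary plus a linear transform updates the query."""
    q = aspect_vec
    for _ in range(hops):
        scores = softmax(memory @ q)      # relevance of each memory slot to q
        q = memory.T @ scores + W @ q     # attended summary + query update
    return q

rng = np.random.default_rng(3)
mem = rng.normal(size=(8, 300))           # 8 memory slots from the CMA encoder
out = multi_hop_attention(mem, rng.normal(size=300), hops=2, W=np.eye(300))
print(out.shape)  # (300,)
```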

TABLE V Cases in the Lap-ATSA Dataset. Bold Marks the Aspect and the Subscript Is the Label

Fig. 4. Comparison of attention on word-level memory and CMA memory. The attention score from (9) is used for color-coding; a deep color means a high attention score.

We use the same method to find the best values on the other datasets. On the Res-ATSA dataset, the window size is 3 and the number of hops is 5. On the Res-ACSA dataset, the window size is 2 and the number of hops is 2. On the Tweet-ACSA dataset, the window size is 2 and the number of hops is 2.

F. Case Study

In this section, we analyze some cases in the Lap-ATSA dataset, as shown in Table V and Fig. 4, to illustrate the effectiveness of the mechanism.

There are three types of examples that most methods find difficult to identify. The first is implicit sentiment expression. In Case 1, the reviewer conveys liking the "gestures" without any obvious sentiment words, yet our system recognizes such examples completely correctly. This is another important research direction in SA. The second is the complex expression of important information. The aspect-sequence attention in MemNet can capture information useful for the aspect, but often not accurately enough, and it does not recognize all aspects correctly, as in Case 2. As shown in Fig. 4(a), "beats windows easily" for the aspect "speed" implies a negative polarity towards "windows". But it is hard for word-level mechanisms to capture information such as "A beats B". Convolution can combine related and important features, and self-attention attends to the semantics of the sequence itself, so the network can better understand the relationships between important words. The third is context expressions such as negation, comparison, and condition. The comparative expression in Case 3 is a difficult problem for word-level mechanisms. As shown in Fig. 4(b), if a model lacks sequence semantics, it may only see "price" and "higher" in the sentence when analyzing "PC", and is then likely to judge both "PC" and "Mac" as negative. Convolution and self-attention can better understand this kind of contextual information and enable the model to focus on the word "compared".

IV. Conclusion

In this paper, we propose a highly parallel convolutional multi-head self-attention based memory network. Compared with an embedding-based memory network, CMA-MemNet can better capture complex contextual semantic information and give more attention to the semantic relations between the words of the sequence itself. We show the performance of the model on four datasets for the ATSA and ACSA tasks and prove its effectiveness. In the future, we would like to consider more types of memory modules for semantic information representation, and synthetically analyze aspects according to the scores output by the different memory modules.
