
    Sentiment Classification Based on Piecewise Pooling Convolutional Neural Network

    2018-09-11 05:13:46
    Computers, Materials & Continua, 2018, No. 8

    Yuhong Zhang, Qinqin Wang, Yuling Li and Xindong Wu

    Abstract: Recently, the effectiveness of neural networks, especially convolutional neural networks, has been validated in the field of natural language processing, in which sentiment classification for online reviews is an important and challenging task. Existing convolutional neural networks extract the most important features of sentences while discarding local features and the feature order. Thus, these models do not perform well, especially on transition sentences. To this end, we propose a Piecewise Pooling Convolutional Neural Network (PPCNN) for sentiment classification. Firstly, with a sentence represented by word vectors, a convolution operation is applied to obtain the convolution feature map vectors. Secondly, these vectors are segmented according to the positions of transition words in the sentence. Thirdly, the most significant feature of each local segment is extracted using the max pooling mechanism, so that features of different aspects can be extracted; notably, the relative order of these features is preserved. Finally, after being processed by the dropout algorithm, a softmax classifier is trained for sentiment classification. Experimental results show that the proposed PPCNN is effective and superior to other baseline methods, especially on datasets with transition sentences.

    Keywords: Sentiment classification, convolutional neural network, piecewise pooling, feature extraction.

    1 Introduction

    Sentiment classification, also called sentiment analysis or opinion mining, studies people’s opinions, sentiments, evaluations and attitudes in text and reviews [Liu and Zhang (2012)], and is an important task in natural language processing (NLP). With the successful application of deep learning in visual and speech recognition, researchers have applied deep learning models such as recurrent neural networks (RNN) [Yoav (2016); Socher, Pennington, Huang et al. (2011); Sutskever, Vinyals and Le (2014); McCann, Bradbury, Xiong et al. (2017); Li, Luong, Jurafsky et al. (2015); Socher, Perelygin, Wu et al. (2013)] and convolutional neural networks (CNN) [Kim (2014); Kalchbrenner, Grefenstette and Blunsom (2014); Zeng, Liu, Lai et al. (2014); Johnson and Zhang (2015); Yin and Schütze (2016); Wang, Xu, Xu et al. (2015); Soujanya, Erik and Alexander (2016)] to address the data sparseness in sentiment classification and achieve better performance. Compared with RNNs, CNNs have attracted more attention because they capture semantics better and are easier to train, with fewer tags, fewer connections and fewer parameters.

    Recently, CNNs have been shown to be effective in capturing the syntax and semantics of words in sentences. CNNs [Kim (2014); Zeng, Liu, Lai et al. (2014); Wang, Xu, Xu et al. (2015); Hu, Lu, Li et al. (2014)] usually take a max pooling mechanism to capture the most useful feature of a sentence. Dynamic CNNs [Kalchbrenner, Grefenstette and Blunsom (2014); Yin and Schütze (2016)] use a dynamic k-max pooling operation for the semantic modeling of sentences, which can extract the k most useful features for sentiment classification. However, in practice, people are accustomed to expressing both positive and negative opinions connected by transition words [Tang, Qin and Liu (2016); Vasileios and Kathleen (1997)]. In fact, transition sentences account for a large proportion, about 40%, of several review benchmark datasets. Therefore, the classification of transition sentences has a great impact on the overall classification accuracy.
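    The two pooling schemes above can be sketched in a few lines of NumPy. The feature-map values below are hypothetical activations of a single convolution filter, not values from the paper; note also that in the DCNN the value of k is itself chosen dynamically, while here it is fixed for simplicity.

```python
import numpy as np

def max_pool(feature_map):
    """Standard max pooling: keep only the single strongest activation."""
    return np.max(feature_map)

def k_max_pool(feature_map, k):
    """k-max pooling: keep the k strongest activations,
    preserving their original order in the sentence."""
    top_k_positions = np.sort(np.argsort(feature_map)[-k:])
    return feature_map[top_k_positions]

# Hypothetical activations of one filter sliding over a six-window sentence.
fmap = np.array([0.2, 0.9, 0.1, 0.7, 0.3, 0.8])
print(max_pool(fmap))       # 0.9
print(k_max_pool(fmap, 3))  # [0.9 0.7 0.8]
```

    Both schemes keep only globally strongest activations, which is exactly why a single negative segment of a transition sentence can be lost.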

    Most existing convolutional neural networks adopt max pooling or k-max pooling to deal with transition sentences. However, this makes it difficult to capture both the positive and the negative features. Consider, for example, “beautifully filmed, talented actor and acted well, but admittedly problematic in its narrative specifics.” Max pooling based CNN models extract only one feature, “well”, on the acting, while omitting the feature “problematic” on the script, which determines the sentiment orientation of this sentence. In contrast, CNN models based on k-max pooling can extract features of three aspects: “well”, “talented” and “beautifully”. However, all three extracted features concern the positive aspects of the filming and the actor’s performance, while the negative information on the screenplay is still absent.

    In this paper, a piecewise pooling technique is introduced into the CNN, forming our Piecewise Pooling Convolutional Neural Network (PPCNN). More specifically, with a transition word database, the feature map vector is segmented, and then the most significant feature of each local segment is extracted using the max pooling mechanism. This not only extracts locally significant features with different sentiment polarities, but also preserves the relative order of these features.
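    As a minimal sketch of the idea (assuming NumPy and hypothetical activation values), piecewise pooling splits the feature map at the transition-word position and max-pools each segment separately:

```python
import numpy as np

def piecewise_max_pool(feature_map, cut_points):
    """Split the feature map at the transition-word positions and max-pool
    each segment, preserving the relative order of the pooled features."""
    segments = np.split(np.asarray(feature_map), cut_points)
    return np.array([seg.max() for seg in segments if seg.size > 0])

# Hypothetical activations for "... acted well, but ... problematic ..."
fmap = np.array([0.8, 0.3, 0.6, 0.9, 0.2, 0.7, 0.4])
# Suppose the transition word "but" falls at position 4 of the feature map.
print(piecewise_max_pool(fmap, [4]))  # [0.9 0.7]: one feature per segment
```

    Unlike plain max pooling, which would return only 0.9, the output keeps one salient feature from each side of the transition word, in order.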

    The contributions of this paper are as follows:

    1. The text is represented with word embeddings as the input of the CNN, which does not require complicated NLP preprocessing.

    2. A piecewise pooling mechanism in CNN is proposed for sentiment classification on transition sentences, which extracts multiple features with different sentiment polarities and also maintains the relative order of words in a sentence.

    The remainder of this paper is organized as follows. Section 2 briefly reviews related work on RNNs and CNNs. Section 3 gives the details of our proposed PPCNN method. Section 4 demonstrates the effectiveness of our proposed method experimentally. Section 5 concludes the paper.

    2 Related work

    Deep learning models have been successfully applied in the fields of computer vision [Krizhevsky, Sutskever and Hinton (2012)] and speech recognition [Graves, Mohamed and Hinton (2013); Kim, Hori and Watanabe (2017)]. In the field of sentiment analysis, researchers have adopted deep learning models to learn better feature representations. These models fall into two categories: sequence-based recursive neural network models [Socher, Pennington, Huang et al. (2011); Sutskever, Vinyals and Le (2014); McCann, Bradbury, Xiong et al. (2017); Li, Luong, Jurafsky et al. (2015)] and convolutional neural network models [Kalchbrenner, Grefenstette and Blunsom (2014); Zeng, Liu, Lai et al. (2014); Johnson and Zhang (2015); Yin and Schütze (2016); Wang, Xu, Xu et al. (2015); Soujanya, Erik and Alexander (2016)].

    2.1 Recursive neural network model

    Based on the RNN model, Socher et al. [Socher, Pennington, Huang et al. (2011)] and Sutskever et al. [Sutskever, Vinyals and Le (2014)] proposed a semi-supervised recursive autoencoder and a recursive neural tensor network, respectively, to analyze the sentiment of sentences. Ramy et al. [Ramy, Hazem, Nizar et al. (2017)] created an Arabic Sentiment Treebank (ARSENTB) to explore different morphological and orthographic features at multiple levels of abstraction. Kai et al. [Kai, Socher and Christopher (2015)] combined LSTM networks, with their strong retention of time-series information, to construct a tree-structured LSTM network model. This model outperformed other LSTM baselines on predicting the semantic relevance between two sentences and on sentiment classification. Generally, RNNs require a large amount of manually tagged words, phrases and sentences.

    2.2 Convolutional neural network model

    Compared with RNNs, CNNs are easier to train and require fewer parameters and only sentence-level tags. The standard CNN usually consists of an input layer, a convolution layer, a pooling layer and an output layer. In the input layer, each word is represented by a real-valued vector. The convolution layer learns and extracts features. The pooling layer selects the features most relevant to the task. The output layer performs classification, usually with a softmax classifier.
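    The convolution layer in such a network can be sketched as follows (a NumPy illustration with random toy values, not the authors’ implementation): a filter of window size h slides over the n×d sentence matrix and produces a feature map of length n−h+1.

```python
import numpy as np

def conv_feature_map(X, W, b):
    """Apply one convolution filter W (window h x embedding dim d) to the
    sentence matrix X (n x d), yielding a feature map of length n - h + 1."""
    h = W.shape[0]
    return np.array([np.tanh(np.sum(W * X[i:i + h]) + b)
                     for i in range(X.shape[0] - h + 1)])

rng = np.random.default_rng(0)
X = rng.standard_normal((7, 5))  # toy sentence: 7 words, 5-dim embeddings
W = rng.standard_normal((3, 5))  # one filter with window size 3
fmap = conv_feature_map(X, W, b=0.1)
print(fmap.shape)  # (5,)
```

    Each entry of the feature map is one filter response over one window of consecutive words; the pooling layer then reduces this map to a fixed-size representation.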

    Kim [Kim (2014)] proposed a simple and improved CNN whose input layer took both task-specific and static word vectors for sentiment analysis and classification. Kalchbrenner et al. [Kalchbrenner, Grefenstette and Blunsom (2014)] introduced a dynamic convolutional neural network (DCNN), in which a dynamic k-max pooling operation serves as a nonlinear sampling function to dynamically adjust the number of extracted important features (k values), accomplishing sentiment classification without the need for parsers or other external features. Zeng et al. [Zeng, Liu, Lai et al. (2014)] established a deep convolutional neural network (DNN) that extracts vocabulary-level and sentence-level features for classification; the DNN also introduces the relation label of the noun pair as a position feature into the network. Yin et al. [Yin and Schütze (2016)] proposed a multichannel variable-size convolutional neural network (MVCNN) for sentence-level sentiment classification and subjectivity classification, where “MV” indicates that texts are initialized with five word-vector training methods such as word2vec and GloVe, and variable-size convolution filters are applied to extract features of sentences over various ranges.

    3 Our proposed approach PPCNN

    Aiming to improve sentiment classification for the large number of transition sentences, this paper proposes a novel piecewise pooling convolutional neural network, namely PPCNN. In this model, firstly, a sentence is represented with word embeddings, and a convolution operation is applied to obtain a feature map vector. Then this vector is segmented according to the positions of transition words in the sentence, so that an important local feature is extracted from each fragment to capture the sentiment of the sentence. Finally, the features captured from all segments are used to train a classifier. Fig. 1 shows the architecture of our piecewise pooling neural network for text sentiment classification. Generally, the whole framework includes four parts: data representation, convolution operation, piecewise pooling, and softmax output. We describe these components in detail below.

    3.1 Representation for input data

    At present, there are many works on word embeddings [Mikolov, Sutskever, Chen et al. (2013); Thang, Richard and Christopher (2013)]. These works point out that word vectors learned on a large-scale unsupervised corpus capture more semantic information about words. In this paper, Google’s pre-trained word embeddings, GoogleNews-vectors-negative300, trained on a corpus containing about 100 billion words, are adopted to initialize the word vectors. Given the training data, each sentence can be represented as a two-dimensional matrix of its word vectors, as shown in Eq. (1).
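    A sketch of this representation step, with a tiny hypothetical vocabulary and 4-dimensional random vectors standing in for the 300-dimensional GoogleNews embeddings:

```python
import numpy as np

# Toy embedding table; in the paper the rows come from GoogleNews-vectors-negative300.
vocab = {"the": 0, "film": 1, "was": 2, "great": 3, "<unk>": 4}
emb = np.random.default_rng(1).standard_normal((len(vocab), 4))

def sentence_matrix(tokens):
    """Stack each token's embedding row-wise into the n x d matrix of Eq. (1)."""
    return np.stack([emb[vocab.get(t, vocab["<unk>"])] for t in tokens])

X = sentence_matrix("the film was great".split())
print(X.shape)  # (4, 4): 4 words, each a 4-dim vector
```

    Out-of-vocabulary tokens here map to a single `<unk>` row; this fallback is an assumption for the sketch, not a detail specified in the paper.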

    3.2 Convolution operation

    3.3 Piecewise pooling

    Traditional convolutional neural networks [Kim (2014); Wang, Xu, Xu et al. (2015)] take the max pooling mechanism to map input sentences of variable lengths into representations of the same dimension. In order to express the meanings of the text better, some

    It is necessary to mention that different sizes of convolution kernels lead to different cutting points. In this paper, instead of selecting the single most reasonable point, we use different sizes of convolution kernels to capture richer pooling features, which benefits the classification.

    Finally, the feature maps of all convolution kernels are respectively segmented and pooled to obtain the final output, denoted as

    In this way, we can extract the most important information in each segment based on the positions of the transition words in a sample, and finally the feature vectors can be obtained. In order to avoid over-fitting and improve the prediction accuracy, the dropout algorithm [Kim (2014)] is applied to randomly set elements of the input to 0 with a certain probability, and only the preserved elements are passed through the network to train the softmax classifier.
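    The dropout step can be sketched as inverted dropout in NumPy. This is one common formulation, assumed here for illustration; the paper itself only states that it follows Kim (2014). Each pooled feature is zeroed with probability p during training, and the survivors are rescaled so the expected activation is unchanged.

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    """Inverted dropout: zero each element with probability p and rescale
    the surviving elements by 1 / (1 - p)."""
    rng = rng or np.random.default_rng(42)
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

features = np.array([0.9, 0.7, 0.4, 0.8])  # pooled features from all segments
print(dropout(features, p=0.5))  # each entry is either 0 or doubled
```

    At test time dropout is disabled and the features pass through unchanged, which the inverted-scaling variant makes possible without extra rescaling.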

    In fact, the positions of transition words vary across sentences, which means that a segmentation cannot keep the two parts of a sentence balanced. Our proposed PPCNN method extracts one important feature from each segment regardless of the segment’s length. Therefore, the positions of transition words do not influence the performance of our approach. Moreover, when there is no transition word, our approach performs the same as the one proposed in Socher et al. [Socher, Perelygin, Wu et al. (2013)].

    4 Experimental results

    4.1 Data sets

    In this section, we compare our proposed PPCNN with 10 relevant algorithms on 6 benchmark datasets to demonstrate its superiority. The details of these 6 datasets are as follows, with a statistical summary shown in Tab. 1.

    MR: In this dataset of movie reviews, there is one sentence per review. The classification task involves detecting positive/negative reviews. There are 10662 reviews in total, with positive and negative classes equally weighted. The dataset contains 4647 transition samples, and the average sample length is 20. It is available at: https://www.cs.cornell.edu/people/pabo/movie-review-data/.

    Table 1: Details of data sets

    SST-1: SST-1, also called the Stanford Sentiment Treebank, is an extension of MR with five labels: very positive, positive, neutral, negative, and very negative. SST-1 can be obtained from: http://nlp.stanford.edu/sentiment/.

    SST-2:SST-2 is the same as SST-1 but with neutral reviews removed and all reviews are converted to binary labels.

    Subj: Subj is a dataset in which the task is to classify a sentence as subjective or objective. There are 10000 samples in this dataset, including 4411 transition samples, and the average sample length is 23. For more details, refer to Wang et al. [Wang and Christopher (2012)].

    CR: CR is a dataset of customer reviews of various products, including cameras, MP3s, etc., and the task is to predict positive/negative reviews. There are 1489 transition samples in this dataset. For more details, please refer to Wang et al. [Wang and Christopher (2012)].

    MPQA:MPQA is a dataset for opinion polarity detection, and there are 10606 samples including 461 transition samples. For more details, please refer to http://www.cs.pitt.edu/mpqa/.

    4.2 Baselines and parameters

    To demonstrate the effectiveness of our proposed model, four categories of methods, comprising 10 algorithms, are used as baselines. Their details are as follows.

    1) Traditional classifiers with bag of words: NB and SVM are traditional classifiers using the bag-of-words method.

    2) Traditional classifiers with unigrams and bigrams: BiNB, NBSVM and MNB also use the traditional classifiers NB and SVM. Specifically, BiNB trains an NB classifier with unigram and bigram features, while NBSVM and MNB train a Naive Bayes SVM and a Multinomial Naive Bayes with uni- and bigrams [Wang and Christopher (2012)].

    3) RNNs: RAE, MV-RNN and RNTN are models based on the RNN that use a fully labeled parser to learn vector representations of phrases and complete sentences over parse trees. RAE [Socher, Pennington, Huang et al. (2011)] adopts recursive autoencoders with pre-trained word vectors from Wikipedia. MV-RNN [Richard, Brody, Christopher et al. (2012)] is a matrix-vector recursive neural network with parse trees. In contrast, RNTN [Socher, Perelygin, Wu et al. (2013)] adopts a recursive neural tensor network with tensor-based feature functions and parse trees.

    4) CNNs: Both DCNN and CNN are based on the CNN model. CNN [Kim (2014)] is a convolutional neural network with max pooling, while DCNN [Kalchbrenner, Grefenstette and Blunsom (2014)] is a dynamic convolutional neural network with k-max pooling.

    Since the advantages of multiple sizes of convolution kernels have been demonstrated in existing works, we adopt three sizes of filter windows, 3×|V|, 4×|V|, and 5×|V| (|V|=300), with 100 kernels per size, following the settings in these works [Kim (2014); Kalchbrenner, Grefenstette and Blunsom (2014)]. Meanwhile, the other parameters are kept the same as those in Kim [Kim (2014)]: the dropout rate is set to 0.5, the L2 constraint to 3, and the mini-batch size to 50. The classification accuracy averaged over 10-fold cross-validation is reported in the following subsection.
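    Collected in one place, the hyperparameter settings listed above are:

```python
# Experimental settings, following Kim (2014) as stated in the text.
EMBEDDING_DIM = 300           # |V|, dimension of the pre-trained word vectors
FILTER_WINDOWS = (3, 4, 5)    # filter sizes 3x|V|, 4x|V|, 5x|V|
KERNELS_PER_SIZE = 100        # number of kernels for each window size
DROPOUT_RATE = 0.5
L2_CONSTRAINT = 3
MINI_BATCH_SIZE = 50
N_FOLDS = 10                  # accuracy averaged over 10-fold cross-validation
```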

    The word vectors trained on Google’s news corpus are utilized to initialize the experimental data in this paper.

    We incorporate Smart Words (http://www.smart-words.org/linking-words/transitionwords.html) and MSU (https://msu.edu/user/jdowell/135/transw.html) to build a transition word corpus, which includes 179 transition words in total. This transition word corpus is utilized to locate probable transition words in each sentence.
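    A minimal sketch of this lookup step; the word set below is a small illustrative subset of the 179-word lexicon, not the full list:

```python
# Illustrative subset of the transition word corpus built from Smart Words and MSU.
TRANSITION_WORDS = {"but", "however", "although", "yet", "nevertheless", "whereas"}

def transition_positions(tokens):
    """Return the token indices of transition words; these indices become
    the cut points for piecewise pooling."""
    return [i for i, t in enumerate(tokens) if t.lower() in TRANSITION_WORDS]

tokens = "talented actor and acted well but problematic in its narrative".split()
print(transition_positions(tokens))  # [5]
```

    Sentences with an empty result are handled by the no-transition fallback described in Section 3.3, where the whole feature map is pooled as a single segment.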

    4.3 Classification performance

    We compare our proposed PPCNN with the baselines, and the classification accuracies of all methods are shown in Tab. 2. Note that there are some missing values in Tab. 2. On the one hand, for the Subj and CR datasets, the accuracies of the RNN models are not included because these datasets have no phrase tag information. On the other hand, the remaining missing values in Tab. 2 are due to datasets that could not be run with the open source code.

    Compared with traditional methods (such as NB, SVM, etc.), the classification performance of the neural-network-based methods (including RAE, MV-RNN, RNTN, DCNN, CNN, and PPCNN) improves by a range of [3.4%, 6.7%]. This indicates that neural network models can obtain more valuable context information, relieve data sparseness and explore the semantic information of texts more effectively.

    Table 2: Classification accuracy of all algorithms (%)

    Compared with the RNNs (including RAE, MV-RNN, and RNTN), the convolution-based models (including CNN, DCNN and PPCNN) are more suitable for representing the semantics of texts, and the classification accuracies of the CNNs are improved by [2.8%, 4.2%] on the 6 datasets. This is because CNNs can select richer and more important features in the pooling layer and capture contextual information in the convolution layer. In contrast, RNNs can only capture contextual information through semantic composition over constructed text trees, which depends heavily on the quality of tree construction. In addition, RNNs cost O(n^2) time to represent a sentence, whereas CNNs cost only O(n), where n is the length of the text.

    Compared with the other CNNs, our proposed PPCNN improves the classification accuracies by [0.6%, 1.6%] on the MR, SST-1, SST-2 and CR datasets, because both positive and negative sentiments, connected by transition words, exist in these four kinds of review texts. Traditional CNN and DCNN ignore locally important features, which may lead to an incorrect final sentiment label. In contrast, our PPCNN algorithm extracts multiple features with different sentiments from multiple segments, and all of these features are useful for sentiment classification.

    As for the MPQA and Subj datasets, our proposed PPCNN method is superior to RAE and to the traditional bag-of-words based methods in accuracy, but it is not superior to the other baselines. This is because sentences in the MPQA dataset are very short (the average sentence length is 3), so the advantage of dividing a sentence at transition words cannot be exploited. As a result, our method performs similarly to the other CNN methods. In addition, the Subj task is to determine whether a text is a subjective evaluation or an objective factual summary, so the results have little to do with transition words.

    4.4 Effectiveness of piecewise pooling

    Table 3: Examples of captured features by CNN, DCNN and PPCNN on MR dataset

    In this subsection, we validate the effectiveness of piecewise pooling from two aspects: the quality of the extracted features and the classification accuracy. Firstly, we examine the completeness of the features captured by piecewise pooling. Tab. 3 compares the features extracted by CNN, DCNN and PPCNN on the MR dataset.

    In sentences with a transition word, the sentiment polarity turns from positive (negative) to negative (positive). However, CNN can extract only one feature, while DCNN can extract k features according to frequency or position. Taking the first review in Tab. 3 as an example, CNN extracts only one feature, “cleverly”, and DCNN extracts two positive features, “cleverly” and “clean”, and a negative feature, “peep”. Both may predict the label incorrectly. Our proposed PPCNN extracts “cleverly” from one segment and “hollow” from the other according to the transition word, and thus has a larger probability of predicting the label correctly.

    It can also be seen that the positions of transition words vary: some are balanced (such as examples 1, 3 and 6 in Tab. 3) and some are not (such as examples 2, 4 and 5 in Tab. 3). It can be concluded that the positions of transition words do not influence the performance of our proposed PPCNN. In particular, when the transition word is not centered, the baselines may neglect the smaller part, while our algorithm does not. As a result, our PPCNN extracts richer sentiment features.

    Table 4: Comparison results on a Transition subset and a non-Transition subset (%)

    In addition, we show the effectiveness of piecewise pooling in terms of accuracy. With each dataset divided into two subsets according to whether a sample contains transition words, we obtain a transition subset (Transi in Tab. 4) and a non-transition subset (NoTransi in Tab. 4). For example, the SST-1 dataset is divided into a transition subset (of size 4762) and a non-transition subset (of size 7093). In Tab. 4, 41.1% is the accuracy of DCNN trained and tested on the transition subset using 10-fold cross-validation.

    As shown in Tab. 4, the classification accuracies of the three methods on the Transi subsets are significantly lower than those on the NoTransi subsets, which reveals that the classification of transition sentences is challenging. On the three subsets with transition words, our proposed PPCNN performs best, with an improvement of [0.6%, 3.1%] over DCNN and CNN. This shows that piecewise pooling can capture more representative features for transition sentences. Additionally, when there is no transition word in a sentence, PPCNN does not divide the sentence; therefore, PPCNN performs the same as CNN and DCNN, as shown on the NoTransi subsets in Tab. 4.

    5 Conclusion

    Transition sentences in real applications make sentiment classification a challenging and attractive task. This paper focuses on transition sentences and proposes a piecewise pooling convolutional neural network (PPCNN). For common texts with transitional semantics, it captures important local features from multiple segments of a sentence. Experimental results show that the proposed model is superior to current convolutional neural network models on four public customer review datasets. In the near future, we intend to represent the input data with chunk vectors [Yan, Zheng, Zhang et al. (2017)] to improve the efficiency of sentiment classification.

    Acknowledgement: This work is supported in part by the Natural Science Foundation of China under grants 61503112, 61673152 and 61503116.
