
    Sentiment Analysis of Short Texts Based on Parallel DenseNet

2021-12-10 · Luqi Yan, Jin Han, Yishi Yue, Liu Zhang and Yannan Qian
    Computers, Materials & Continua, 2021, Issue 10

Luqi Yan, Jin Han*, Yishi Yue, Liu Zhang and Yannan Qian

1 Nanjing University of Information Science & Technology, Nanjing, 210044, China

    2 State Grid Hunan Electric Power Company Limited Research Institute, Changsha, 410007, China

    3 Waterford Institute of Technology, Waterford, X91 K0EK, Ireland

Abstract: Text sentiment analysis is a common problem in the field of natural language processing that is often addressed with convolutional neural networks (CNNs). However, most of these CNN models focus only on learning local features while ignoring global features. In this paper, based on the traditional densely connected convolutional network (DenseNet), a parallel DenseNet is proposed to realize sentiment analysis of short texts. First, this paper proposes two novel feature extraction blocks based on DenseNet and a multi-scale convolutional neural network. Second, this paper solves the problem of ignoring global features in traditional CNN models by combining the original features with the features extracted by the parallel feature extraction blocks, and then sending the combined features into the final classifier. Last, this paper proposes a model based on parallel DenseNet that simultaneously learns both local and global features of short texts and shows better performance than other basic models on six different datasets.

Keywords: Sentiment analysis; short texts; parallel DenseNet

    1 Introduction

With the development of computers and networks, people are increasingly using these networks to communicate. As shown in a recent survey, there were more than 904 million Internet users in China by April 2020, and the penetration rate of Internet technology had reached 64.5% [1]. A variety of topics and comments on social media are spreading on the Internet, influencing every aspect of daily life. Applying natural language processing technology to social media has become an important way for enterprises to monitor public opinion, making it simple to analyze comments and understand the overall sentiment expressed by people.

Text sentiment analysis, also known as opinion mining, usually refers to processing short texts and deducing the overall sentiment expressed through modern information technology. At present, there are three major approaches to sentiment analysis of short texts: those based on a dictionary, on traditional machine learning, and on deep learning. The dictionary-based method obtains the sentimental tendency of a text by counting and weighting the sentiment scores of the words that carry sentimental information [2]. The method based on traditional machine learning does not depend on a dictionary and can learn the sentimental characteristics of texts by itself [2]. The method based on deep learning can learn more advanced, hard-to-describe sentimental features of texts; the features it extracts are therefore abstract and difficult to express explicitly.

Popular text sentiment analysis models learn text representation features by using convolutional neural networks (CNNs) [3], recurrent neural networks (RNNs) [4], and graph convolutional neural networks (GCNs) [5]. CNN and RNN models learn the local features of sentences well but ignore global features, because they give priority to location and order. The GCN model analyzes the relationships between words by constructing a graph. Although GCNs perform excellently on long texts, their performance on short texts is not ideal. Therefore, this paper attempts to build a network better adapted to sentiment analysis of short texts.

In this paper, we propose a parallel DenseNet for sentiment analysis of short texts based on the traditional densely connected convolutional network (DenseNet). A newly defined convolutional feature extraction block is proposed that differs from the dense block proposed by Huang et al. [6]. First, each proposed convolutional feature extraction block extracts features from the original text using a different feature extraction method. Second, we merge the output features of all convolutional feature extraction blocks with the original text. Finally, we classify these merged features using the classifier.

Briefly, the innovation of this paper is as follows: a newly defined convolutional feature extraction block is proposed that can learn both the global and local features of texts and can use different kernels to convolve the sentence to obtain features. Additionally, the network can capture long-distance dependencies in the text by merging the output features of all convolutional feature extraction blocks. Compared with other CNN models, this model has a shorter convergence time and does not require multiple iterations of training.

The experimental results show that the proposed method outperforms recent text sentiment analysis methods. The method performs well on both small and large training datasets. In addition, the method is equally effective for multi-class tasks, such as three-, five-, and ten-class classification.

The rest of this paper is organized as follows: Section 2 introduces the current state of text sentiment analysis. Section 3 introduces the original definition of blocks. Section 4 introduces a parallel DenseNet for sentiment analysis of short texts. Section 5 introduces the data and schemes used in the experiments and gives the results of comparison and evaluation of the model's performance. Concluding remarks are given in Section 6.

    2 Related Work

With regard to feature extraction in text sentiment analysis, there are three major methods: the bag-of-words model, the word embedding model, and the graph network model. The bag-of-words model is a very simple eigenvector representation model that has achieved many research results in text analysis tasks. The word embedding model was developed on the basis of the bag-of-words model; it can contain more semantic information and is the most important feature extraction method in deep learning for text. The graph network model, developed in recent years, can analyze the sentiment of a text by constructing a network of the relationships between words. As shown in Fig. 1, this section summarizes text sentiment analysis according to these three feature extraction methods.

The principle of text sentiment analysis based on a bag-of-words model is to place all words into one bag, the so-called word bag. When a word appears in a sentence, the position of this word in the vector is 1, and the positions of the other words are 0. In this case, the words in the sentence are unordered. Therefore, the bag-of-words model has been further developed into feature extraction methods such as part-of-speech (POS) tagging and n-gram phrase tagging. Part-of-speech tagging, also known as grammatical tagging, is the process of marking words in a text (corpus) as corresponding to specific parts of speech. N-gram phrase tagging is based on the fact that one word depends on several other words; when marking a word, that word is usually combined with the previous word. Chenlo et al. [7] and Priyanka et al. [8] combined POS and n-gram features, and their experimental results show that this method can improve classification accuracy. However, in the task of sentiment analysis of short texts, Kouloumpis et al. [9] found that this method could not achieve satisfactory accuracy. This is because such texts are short, similar to Weibo comments, and extremely casually composed, making it difficult to achieve satisfactory accuracy using part-of-speech tagging [10]. In terms of sentence division, Tang et al. [11] designed a classification framework that can identify words that cause sentimental polarity to shift. Khan et al. [12] used a classification algorithm based on SentiWordNet sentiment scores and achieved a significant performance improvement on six evaluation datasets. SentiWordNet is a lexical resource for opinion mining that assigns three sentiment scores to each synset of WordNet: positivity, negativity, and objectivity. Some studies have shown that querying the sentiment value of a word in SentiWordNet and adding it as a feature can improve the accuracy of sentiment analysis [8,13]. The above shows that text classification based on a bag-of-words model can achieve good results when the word features are properly obtained. However, the bag-of-words model also has shortcomings: it abandons the order between words, cannot convey deep-seated semantic features, and is unable to express semantic composition.
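The two representations above can be illustrated in a few lines of Python; the vocabulary and sentence below are toy examples, not drawn from the paper's datasets:

```python
def bag_of_words(sentence, vocab):
    """One position per vocabulary word: 1 if present, 0 otherwise (word order is lost)."""
    tokens = set(sentence.lower().split())
    return [1 if w in tokens else 0 for w in vocab]

def bigrams(sentence):
    """n-gram tagging with n=2: each word is paired with the previous word."""
    tokens = sentence.lower().split()
    return list(zip(tokens, tokens[1:]))

vocab = ["the", "movie", "was", "great", "terrible"]
print(bag_of_words("The movie was great", vocab))  # [1, 1, 1, 1, 0]
print(bigrams("The movie was great"))  # [('the', 'movie'), ('movie', 'was'), ('was', 'great')]
```

Note how the bag-of-words vector cannot distinguish "great movie" from "movie great", which is exactly the ordering information the n-gram features recover.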

Text sentiment analysis based on a word embedding model solves the problem of the high-dimensional word vectors of a bag-of-words model. The most frequently used word embedding model is word2vec. The word embedding model is based on the principle of "distance similarity" and has a smoothing effect. Another advantage of a word embedding model is that it is an unsupervised learning method. It has been proved that the word embedding model can capture more semantic and grammatical features than the bag-of-words model [2]. This advantage enables the word embedding model to achieve very good results in a variety of natural language processing tasks. Tsvetkov et al. [14] designed a measurement method, QVEC, to evaluate the feature representation performance of various text analysis models; their experimental results show that for 300-dimensional word vectors, the QVEC score of the text sentiment analysis method based on a word embedding model is higher than that of other models. In recent years, more and more text analysis methods have combined word embedding models with deep learning and achieved better performance. Kombrink et al. [15] designed a word embedding learning algorithm that combines word vectors with an RNN and can be applied to speech recognition. Cheng et al. [16] and Sundermeyer et al. [17] combined word vectors with long short-term memory (LSTM) to achieve better efficiency. Although the text CNN designed by Kim [3] has only one convolutional layer, its classification performance is significantly better than that of ordinary machine learning classification algorithms. However, this method cannot capture long-distance dependencies in the text through convolution. Johnson et al. [18] extracted long-distance text dependencies in 2017 by deepening the network and using residual connections, but the performance was not satisfactory when the training dataset was small. Wang et al. [19] introduced a structure similar to DenseNet in 2018, using a shortcut between the upper and lower convolutional blocks so that larger-scale features could be obtained from combinations of smaller-scale features. However, that model used a convolutional kernel of one specific size that slid from the beginning of the text to the end, producing a single feature map. Yan et al. [20] introduced few-shot learning into text classification to address poor classification performance on small sample sizes and achieved good results, though text classification is generally adequate with normal sample sizes. Xiang et al. [21] and Yang et al. [22] designed text steganography models by combining text with information hiding and achieved favorable results. Xiang et al. [23] achieved good results in spam detection by using LSTM-based multi-entity temporal features, though the results are only average in sentiment analysis.
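The "distance similarity" principle can be illustrated with a toy example: semantically related words receive nearby vectors, often compared by cosine similarity. The 3-dimensional vectors below are invented for illustration; real word2vec vectors are typically 300-dimensional, as used later in this paper.

```python
import math

def cosine(u, v):
    """Cosine similarity: dot product divided by the product of vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Made-up embeddings: "good" and "great" point in similar directions,
# "terrible" points the opposite way.
good = [0.9, 0.8, 0.1]
great = [0.85, 0.75, 0.2]
terrible = [-0.8, -0.7, 0.1]

assert cosine(good, great) > cosine(good, terrible)
```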

Text sentiment analysis based on graph network models improves on prior models by constructing a relationship network between words. In 2019, Yao et al. [5] classified texts by composing unstructured text data using the co-occurrence information of words and articles, term frequency-inverse document frequency (TF-IDF) weights, and mutual information weights, and used a GCN to capture the document–word, word–word, and document–document relationships in the graph. The experimental results showed that the classification performance of their model was excellent on long regular documents, but the classification effect and graph composition were not ideal on short texts.

In this paper, we propose a parallel DenseNet for sentiment analysis of short texts. A newly defined convolutional feature extraction block is designed in this network. Like the network designed by Kim [3], the convolutional feature extraction block can use different kernels to convolve sentences in one dimension to obtain features. Furthermore, the network can capture long-distance dependencies in the text by merging the output features of all convolutional feature extraction blocks. The experimental results show that this method is equally effective on both small and large training datasets.

    3 Background

In this section, we briefly introduce the definition and components of the dense block proposed by Huang et al. [6] in 2017. We then briefly describe how the block in this paper differs from the block proposed by Huang.

As shown in Fig. 2, the input of each layer in the dense block proposed by Huang is the concatenation of the outputs of all previous layers. At the same time, all dense blocks in DenseNet have the same structure, meaning that the number of internal layers and the sizes of the convolutional kernels are exactly the same.

Figure 2: The structure of the dense block proposed in 2017

The block proposed in this paper, called the convolutional feature extraction block, has a unique internal structure. For example, the two feature extraction blocks, consisting of a densely connected convolutional feature extraction block and a multi-scale convolutional feature extraction block, have completely different internal structures. The densely connected convolutional feature extraction block is similar to the dense block proposed by Huang: the input to each layer in the block comes from the concatenation of the outputs of all previous layers. The multi-scale convolutional feature extraction block is completely different from the dense block proposed by Huang: the layers in the block are parallel, and each layer uses a different window size, similar to the n-gram method, for feature extraction. In this paper, an independent feature extraction block can be called a block and does not necessarily need to have the structure of the dense block proposed by Huang.

    4 Method

    4.1 Overview

Our goal is to improve the performance of short text classification through a parallel DenseNet. The overall model of this paper is shown in Fig. 3. At the beginning of the model, a text x of length m is entered; here x ∈ R^(m×d), where d is the dimension of the pre-trained word vectors. Then, the text is input into two convolutional feature extraction blocks, a densely connected convolutional feature extraction block and a multi-scale convolutional feature extraction block, and features are extracted in parallel. The total feature X is then obtained by combining the features X1 and X2 extracted by the two convolutional blocks with the feature X3 extracted from the text through a maximum pool block with a size of 50; here X = Cat(X1, X2, X3) and n = n1 + n2 + n3, where n and n1, n2, n3 denote the feature dimensions of X and X1, X2, X3, respectively. Finally, the total feature is pooled through global average pooling to obtain the average total feature, which is then classified.

Figure 3: Framework of the parallel densely connected convolutional neural network

    4.2 Densely Connected Convolutional Feature Extraction Block

As shown in Fig. 4, let xi ∈ R^d be the d-dimensional pre-trained word vector of the i-th word in the text, so that the original input text matrix is x0 ∈ R^(m×d). Here, m is the number of words in the text, and d is the dimension of the pre-trained word vector of each word.

Figure 4: The model of the densely connected convolutional feature extraction block

Then, the original input text matrix is input into a convolutional layer with a kernel size of 5 × d for feature extraction (i.e., using a combination of five words for feature extraction).

That is, y1 = f5×d(x0). Here, x0 is the original input text matrix, f5×d is the convolutional transformation with a convolutional kernel size of 5 × d, and y1 is the feature matrix after one convolutional transformation.

Then, the original input text matrix is combined with the feature matrix produced by the convolutional transformation, and a new input text matrix is obtained.

That is, x1 = Cat(x0, y1). Here, x0 is the original input text matrix, y1 is the feature matrix after one convolutional transformation, and Cat refers to the splicing and merging of multiple matrices in the last dimension.

Then, the new input text matrix is input into a convolutional layer with a kernel size of 5 × d for feature extraction (again using a combination of five words).

That is, y2 = f5×d(x1). Here, x1 is the new input text matrix, y2 is the feature matrix after the second convolutional transformation, and f5×d is a convolutional transformation with a convolutional kernel size of 5 × d.

Then, the original input text matrix, the feature matrix after the first convolutional transformation, and the feature matrix after the second convolutional transformation are combined to obtain a new feature matrix.

That is, x2 = Cat(x0, y1, y2). Here, x0 is the original input text matrix, y1 is the feature matrix after the first convolutional transformation, y2 is the feature matrix after the second convolutional transformation, and Cat refers to the splicing and merging of multiple matrices in the last dimension.

Finally, the new feature matrix is input into the maximum pool layer with a size of 46, and the output feature matrix of the densely connected convolutional feature extraction block is obtained.

That is, X1 = h46(x2). Here, x2 is the new feature matrix, and h46 is the maximum pool transformation of size 46.
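The steps above can be sketched in NumPy. The number of filters (64 per layer), the "same" padding (so that Cat along the last dimension lines up), and the ReLU activation are assumptions not stated in the text; the 5 × d kernels, the two concatenations, and the final max pool of size 46 follow the description, with m = 150 and d = 300 as in Section 5.2.

```python
import numpy as np

def conv1d_same(x, W):
    """'Same'-padded 1-D convolution over the word axis, followed by ReLU.
    x: (m, c_in) feature matrix; W: (k, c_in, c_out) kernel -> (m, c_out)."""
    k = W.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))
    out = np.empty((x.shape[0], W.shape[2]))
    for i in range(x.shape[0]):
        out[i] = np.tensordot(xp[i:i + k], W, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def max_pool(x, size):
    """Non-overlapping max pooling of window `size` along the word axis
    (rows beyond the last full window are dropped)."""
    m = (x.shape[0] // size) * size
    return x[:m].reshape(-1, size, x.shape[1]).max(axis=1)

rng = np.random.default_rng(0)
m, d, n_f = 150, 300, 64            # 150 words, 300-d word vectors (Sec. 5.2)
x0 = rng.standard_normal((m, d))

y1 = conv1d_same(x0, rng.standard_normal((5, d, n_f)) * 0.01)       # y1 = f5xd(x0)
x1 = np.concatenate([x0, y1], axis=-1)                              # x1 = Cat(x0, y1)
y2 = conv1d_same(x1, rng.standard_normal((5, d + n_f, n_f)) * 0.01) # y2 = f5xd(x1)
x2 = np.concatenate([x0, y1, y2], axis=-1)                          # x2 = Cat(x0, y1, y2)
X1 = max_pool(x2, 46)                                               # X1 = h46(x2)
print(X1.shape)  # (3, 428): 428 = 300 + 64 + 64 channels
```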

    4.3 Multi-Scale Convolutional Feature Extraction Block

As shown in Fig. 5, as in the densely connected convolutional feature extraction block, let xi ∈ R^d be the d-dimensional pre-trained word vector of the i-th word in the text, so that x0 is the original input text matrix. Then, x0 is input simultaneously into convolutional layers with kernel sizes of 5 × d, 4 × d, 3 × d, and 2 × d for feature extraction (i.e., using combinations of five, four, three, and two words, respectively).

That is, y1 = f5×d(x0), y2 = f4×d(x0), y3 = f3×d(x0), and y4 = f2×d(x0). Here, x0 is the original input text matrix, and y1, y2, y3, and y4 represent the feature matrices after convolutional transformations with convolutional kernel sizes of 5 × d, 4 × d, 3 × d, and 2 × d, respectively.

Then, the outputs of the convolutional transformations with kernel sizes of 5 × d, 4 × d, 3 × d, and 2 × d are passed through maximum pool layers with sizes of 46, 47, 48, and 49, respectively, to obtain new feature matrices.

That is, x1 = h46(y1), x2 = h47(y2), x3 = h48(y3), and x4 = h49(y4). Here, y1, y2, y3, and y4 represent the feature matrices after convolutional transformations with convolutional kernel sizes of 5 × d, 4 × d, 3 × d, and 2 × d; h46, h47, h48, and h49 are the maximum pool transformations of sizes 46, 47, 48, and 49; and x1, x2, x3, and x4 are the new feature matrices.

Figure 5: The model of the multi-scale convolutional feature extraction block

Finally, the new feature matrices are combined to obtain the output feature matrix of the multi-scale convolutional feature extraction block.

That is, X2 = Cat(x1, x2, x3, x4). Here, Cat refers to the splicing and merging of multiple matrices in the last dimension, and x1, x2, x3, and x4 are the new feature matrices.
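The multi-scale block can be sketched similarly in NumPy. Valid (unpadded) convolution and the filter count of 64 are assumptions; the kernel widths 5, 4, 3, 2 and pool sizes 46, 47, 48, 49 follow the text. With m = 150, a valid convolution of width k leaves 150 − k + 1 positions, and each pool window then yields exactly three pooled rows (146//46 = 147//47 = 148//48 = 149//49 = 3), so the Cat in the last dimension lines up.

```python
import numpy as np

def conv1d_valid(x, W):
    """Valid (unpadded) 1-D convolution over the word axis, followed by ReLU.
    x: (m, c_in); W: (k, c_in, c_out) -> (m - k + 1, c_out)."""
    k = W.shape[0]
    out = np.empty((x.shape[0] - k + 1, W.shape[2]))
    for i in range(out.shape[0]):
        out[i] = np.tensordot(x[i:i + k], W, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def max_pool(x, size):
    """Non-overlapping max pooling of window `size` along the word axis."""
    m = (x.shape[0] // size) * size
    return x[:m].reshape(-1, size, x.shape[1]).max(axis=1)

rng = np.random.default_rng(0)
m, d, n_f = 150, 300, 64
x0 = rng.standard_normal((m, d))

# y_k = f_{k x d}(x0) for k = 5, 4, 3, 2, then pooled with windows 46, 47, 48, 49.
pooled = []
for k, p in [(5, 46), (4, 47), (3, 48), (2, 49)]:
    y = conv1d_valid(x0, rng.standard_normal((k, d, n_f)) * 0.01)
    pooled.append(max_pool(y, p))

X2 = np.concatenate(pooled, axis=-1)  # X2 = Cat(x1, x2, x3, x4)
print(X2.shape)  # (3, 256): four branches of 64 channels each
```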

    4.4 Text Classification

As shown in Fig. 6, the total feature matrix X is obtained by combining the feature matrices X1 and X2, produced by the parallel feature extraction of the original input text matrix through the two convolutional feature extraction blocks described above (the densely connected convolutional feature extraction block and the multi-scale convolutional feature extraction block), with the feature matrix X3, which is extracted from the original input text matrix through the maximum pool block with a size of 50.

Figure 6: The model of text classification

That is, X = Cat(X1, X2, X3). Here, X1, X2, and X3 represent the feature matrices obtained by the above feature extraction, and Cat refers to the stitching and merging of multiple matrices in the last dimension.

Then, X is pooled by one-dimensional global average pooling to obtain the final feature matrix.

That is, x̄ = g(X). Here, X denotes the total feature matrix, x̄ represents the final feature matrix obtained by one-dimensional global average pooling of X, and g represents one-dimensional global average pooling.

Finally, the final feature matrix is input into the classification layer for text classification.
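The combination and classification steps can be sketched in NumPy. The matrix widths below are illustrative stand-ins, the classifier weights are random, and a single softmax layer is an assumption about the classification layer, so this shows shapes and data flow rather than a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the three feature matrices; any widths work provided the
# row counts (here 3) agree so that Cat in the last dimension is valid.
X1 = rng.standard_normal((3, 428))   # densely connected block output
X2 = rng.standard_normal((3, 256))   # multi-scale block output
X3 = rng.standard_normal((3, 300))   # original text through a max pool of size 50

X = np.concatenate([X1, X2, X3], axis=-1)  # X = Cat(X1, X2, X3) -> (3, 984)
x_avg = X.mean(axis=0)                     # 1-D global average pooling -> (984,)

# Classification layer: a softmax over the class logits.
n_classes = 3
W = rng.standard_normal((984, n_classes)) * 0.01
b = np.zeros(n_classes)
logits = x_avg @ W + b
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (3,): one probability per sentiment class, summing to 1
```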

    5 Experiments

In this section, the experimental setup is introduced in detail, and the results of the experiments are analyzed and discussed.

    5.1 Experimental Setup

To verify the rationality and validity of the model, six widely used benchmark corpora were selected and tested: the GameMultiTweet dataset, SemEval dataset, SS-Tweet dataset, AG News dataset, R8 dataset, and Yahoo! Answers dataset.

• The GameMultiTweet dataset is built by searching game data and other game themes. In this dataset, 12780 pieces of data are separated into three categories, and the proportion of the categories is 3952:915:7913.

    • The SemEval dataset consists of 20K data items created for the Twitter sentiment analysis task. In this dataset, 7967 pieces of data are separated into three categories, and the proportion of the categories is 2964:1151:3852.

    • The SS-Tweet dataset is the sentiment strength Twitter dataset. In this dataset, 4242 pieces of data are separated into three categories, and the proportion of the categories is 1953:1336:953.

    • The AG News dataset is a collection of more than 1 million news articles from more than 2000 different news sources, gathered over more than a year of effort by ComeToMyHead. In this dataset, 127600 pieces of data are separated into four categories, and the proportion of the categories is 31900:31900:31900:31900.

    • The R8 dataset is a collection of approximately 20000 newsgroup documents. In this dataset, 4203 pieces of data are separated into eight categories, and the proportion of the categories is 1392:241:2166:20:162:0:72:150.

    • The Yahoo! Answers dataset comprises the 10 main categories of the Yahoo! Answers Comprehensive Questions and Answers 1.0 dataset. In this dataset, 350000 pieces of data are separated into 10 categories, and the proportion of the categories is 23726:35447:31492:35252:35546:25787:81571:23961:28706:28482.

All datasets were randomly divided into three parts: 70% training set, 15% verification set, and 15% test set. The specific dataset statistics are shown in Tab. 1 below.
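The random 70/15/15 split described above can be sketched in a few lines of Python; `samples` here is a toy list standing in for a dataset.

```python
import random

def split_dataset(samples, seed=42):
    """Shuffle, then slice into 70% train, 15% validation, 15% test."""
    data = list(samples)
    random.Random(seed).shuffle(data)   # seeded for a reproducible split
    n = len(data)
    n_train = int(n * 0.70)
    n_val = int(n * 0.15)
    return data[:n_train], data[n_train:n_train + n_val], data[n_train + n_val:]

train, val, test = split_dataset(range(1000))
print(len(train), len(val), len(test))  # 700 150 150
```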

Table 1: Summary statistics of datasets

In this paper, the proposed method is compared with the following benchmark models:

• CNN: A CNN model composed of three one-dimensional convolutional layers, with convolutional kernels of the same size in each layer.

    • TextCNN: A method proposed by Kim [3] in 2014 that applies CNNs to text classification tasks. This method extracts key information from the text using convolutional kernels of different sizes (whose function is similar to n-grams of different sizes), so as to better capture the local features of the text.

    • FastText: A simple and efficient text classification method proposed by Joulin et al. [24] in 2017. The core idea of this method is to obtain the text vector by averaging the word vectors of the whole text together with superimposed n-gram vectors, and then to classify the vector with softmax.

    • DPCNN: A deep CNN proposed by Johnson et al. [18] in 2017. The core idea of this method is to take the word vector of each word of the text as input and extract features through the network for classification. Each convolutional block in the network consists of two convolutional layers. The convolutional blocks are connected by skip connections, and the input of each convolutional block is the sum of the output of the previous convolutional block and an identity mapping. The sampling blocks downsample with a stride of 2 to scale the feature maps. Several convolutional blocks and sampling blocks are stacked to form a scale pyramid that achieves dimension scaling. Finally, the output is spliced into vectors through the hidden layer and softmax layer as the classification output.
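The FastText idea described above, taking the average of the word vectors as the text feature vector, can be sketched with made-up 4-dimensional toy vectors (real embeddings would be much larger, and the n-gram vectors are omitted here):

```python
import numpy as np

# Toy word embeddings; real FastText would also include n-gram vectors.
word_vecs = {
    "the":   np.array([0.1, 0.0, 0.2, 0.1]),
    "movie": np.array([0.5, 0.3, 0.1, 0.0]),
    "was":   np.array([0.0, 0.1, 0.0, 0.2]),
    "great": np.array([0.9, 0.8, 0.4, 0.1]),
}

def text_vector(sentence):
    """The text feature vector is simply the mean of its word vectors."""
    vecs = [word_vecs[w] for w in sentence.lower().split() if w in word_vecs]
    return np.mean(vecs, axis=0)

v = text_vector("The movie was great")  # averages to [0.375, 0.3, 0.175, 0.1]
```

This averaged vector would then be fed to a softmax classifier; despite its simplicity, the averaging captures a global summary of the whole text.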

    5.2 Implementation Details

In this study, the dimension of each word in the text was set to 300, and the maximum number of words in each sentence was set to 150. Each sentence was transformed into a 150 × 300 matrix by word2vec. The parameters were set as follows: the Adam optimizer [25] was used, with the learning rate set to 0.001, the dropout rate to 0.2, and the L2 loss weight to 10^-8. The model batch size was 50 and the number of epochs was 5. If the loss was not reduced in 10 consecutive epochs, training was stopped.
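The training configuration above can be collected in one place, together with a minimal early-stopping check for the "stop if the loss has not decreased for 10 consecutive periods" rule. This is a pure-Python sketch; the paper's actual training loop is not shown.

```python
CONFIG = {
    "embedding_dim": 300,   # word2vec dimension
    "max_words": 150,       # maximum words per sentence
    "optimizer": "adam",
    "learning_rate": 1e-3,
    "dropout": 0.2,
    "l2_weight": 1e-8,
    "batch_size": 50,
    "epochs": 5,
    "patience": 10,         # stop after 10 consecutive non-improving periods
}

class EarlyStopping:
    def __init__(self, patience):
        self.patience = patience
        self.best = float("inf")
        self.bad_steps = 0

    def step(self, loss):
        """Record one period's loss; return True when training should stop."""
        if loss < self.best:
            self.best = loss
            self.bad_steps = 0
        else:
            self.bad_steps += 1
        return self.bad_steps >= self.patience

stopper = EarlyStopping(CONFIG["patience"])
losses = [1.0, 0.8, 0.7] + [0.7] * 10   # loss plateaus after the third value
stops = [stopper.step(l) for l in losses]
print(stops.index(True))  # 12: fires on the 10th consecutive non-improving step
```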

For the benchmark models, we used the same parameter settings as in the original articles. For the pre-trained word embedding model, 300-dimensional word2vec word embeddings were used.

    5.3 Experimental Results

Tab. 2 shows the results of the proposed model and the benchmark models. From the results, we can see that the model in this paper achieves better accuracy than its competitors.

Table 2: Test accuracy on several text classification datasets

As can be seen from the results, on both large datasets (AG News and Yahoo! Answers) and small datasets (GameMultiTweet, SemEval, SS-Tweet, and R8), the model in this paper is more accurate than traditional models such as CNN, TextCNN, FastText, and DPCNN. Both the model in this paper and the benchmark models use stop-word filtering and part-of-speech tagging in feature extraction. Although TextCNN contains only one convolutional layer, it performs much better at text classification than the CNN with three one-dimensional convolutional layers. Therefore, the text features extracted by convolutional kernels of different sizes in TextCNN better reflect the local features of the text, which is more conducive to the task of text classification. Although FastText is a very simple linear model that takes the average of word vectors and n-gram vectors as its text feature vector, it outperforms TextCNN, which uses a single convolutional layer to obtain a multi-scale combination of maximum feature vectors, in the text classification task. Therefore, averaging the superposition of multi-scale vector features and word vectors better extracts the global features of the text. DPCNN uses skip connections between convolutional blocks and sampling blocks that downsample with a stride of 2 to capture long-distance features. Although its performance is excellent in the original article, it is not ideal on the experimental datasets of this paper. The model in this paper combines the advantages of TextCNN and FastText: local features can be extracted by convolutional kernels of different sizes, and global features can be obtained by averaging. At the same time, compared with other deep CNN models, this model converges within very few epochs and does not need multiple iterations of training.

The number of samples has an obvious effect on the performance of the models. For the SemEval, SS-Tweet, and R8 datasets, because of their small sample sizes, the accuracy of the DPCNN model differs significantly from that of the other models. It can therefore be seen that deep neural network models such as DPCNN are not effective in text classification tasks when training data are scarce. However, although the model presented in this paper also uses a deep neural network, it classifies small-sample datasets well.

The length of the sample texts also has an obvious effect on model performance. For the SS-Tweet and R8 datasets, the sample sizes are similar but the average text lengths differ, and the classification accuracies on the two datasets are very different. Nevertheless, the proposed model achieves the best classification results on both datasets.

    5.4 Tuning of Hyperparameters

Epoch size: Through experiments, this paper shows the influence of epoch size on the performance of the model. Epoch size was parameterized within the range {3, 4, 5, 6, 7}. As shown in Fig. 7a, epoch size has a great influence on the model. Therefore, we adjusted the epoch size several times before converging on the best result.

Network depth: This paper assesses how the depth of the network affects performance by changing the size of the densely connected convolutional feature extraction block. Network depth was parameterized within the range {1, 2, 3, 4, 5}. As shown in Fig. 7b, network depth has a small impact on the model. Although increasing the network depth in the densely connected convolutional feature extraction block makes the features extracted by the block more distinct, the improvement for the whole model is not obvious.

Figure 7: Accuracy with different epoch sizes and network depths. (a) Epoch size. (b) Network depth

    6 Conclusion

In this paper, we propose a parallel DenseNet for sentiment analysis of short texts, in which a novel convolutional feature extraction block is defined. The model extracts features using the convolutional feature extraction blocks and then performs classification by merging these features with the original text features. Compared with other deep CNN models, this model has a shorter convergence time and does not require multiple iterations of training. The model demonstrates competitive performance on six datasets. Our analysis shows that the model can extract both global and local features and obtains the best performance compared with its peers.

Funding Statement: This work was supported by the National Key R&D Program of China under Grant Number 2018YFB1003205; by the National Natural Science Foundation of China under Grant Numbers U1836208, U1536206, U1836110, 61602253, and 61672294; by the Startup Foundation for Introducing Talent of NUIST (1441102001002); by the Jiangsu Basic Research Programs-Natural Science Foundation under Grant Number BK20181407; by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; and by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.

2018国产大陆天天弄谢| 日日摸夜夜添夜夜添av毛片| 久久午夜福利片| 91久久精品电影网| 青春草国产在线视频| 欧美日韩视频精品一区| 高清在线视频一区二区三区| 男女高潮啪啪啪动态图| 日韩熟女老妇一区二区性免费视频| 热99国产精品久久久久久7| 欧美日韩成人在线一区二区| 国产乱人偷精品视频| 多毛熟女@视频| 看免费成人av毛片| 色婷婷久久久亚洲欧美| 欧美日韩视频精品一区| 亚洲国产毛片av蜜桃av| 久久综合国产亚洲精品| 全区人妻精品视频| 亚洲五月色婷婷综合| 成人国产麻豆网| 国产淫语在线视频| 黄色怎么调成土黄色| 中文字幕人妻丝袜制服| 欧美丝袜亚洲另类| 亚洲精品乱久久久久久| 国产成人一区二区在线| 能在线免费看毛片的网站| 亚洲一区二区三区欧美精品| 老熟女久久久| 男女高潮啪啪啪动态图| 国产成人精品在线电影| 少妇 在线观看| 久久精品国产自在天天线| 黄色欧美视频在线观看| 久久精品久久精品一区二区三区| 满18在线观看网站| 成人二区视频| 亚洲av福利一区| 日韩成人伦理影院| 最近手机中文字幕大全| 国产精品成人在线| 国产精品麻豆人妻色哟哟久久| 日韩强制内射视频| 波野结衣二区三区在线| 老司机影院成人| 亚洲天堂av无毛| 欧美一级a爱片免费观看看| 视频中文字幕在线观看| 国产成人精品久久久久久| 国产精品一二三区在线看| 女人精品久久久久毛片| 国产成人一区二区在线| 国产精品久久久久成人av| 免费高清在线观看视频在线观看| 丁香六月天网| 欧美日韩视频高清一区二区三区二| 肉色欧美久久久久久久蜜桃| 亚洲av综合色区一区| 精品久久蜜臀av无| 亚洲久久久国产精品| 国产在视频线精品| 国产精品一国产av| videos熟女内射| av天堂久久9| 国产熟女欧美一区二区| 国产无遮挡羞羞视频在线观看| 亚洲av.av天堂| 国产不卡av网站在线观看| 精品久久久精品久久久| 国产午夜精品久久久久久一区二区三区| 又黄又爽又刺激的免费视频.| 国产成人av激情在线播放 | 久久韩国三级中文字幕| 亚洲精品视频女| 精品亚洲成a人片在线观看| 成人影院久久| 中文字幕人妻丝袜制服| 香蕉精品网在线| 九色成人免费人妻av| 久久久国产精品麻豆| 99热6这里只有精品| 我的女老师完整版在线观看| 欧美激情国产日韩精品一区| 国产片特级美女逼逼视频| 成人国产麻豆网| 国产成人午夜福利电影在线观看| 午夜福利影视在线免费观看| 日本色播在线视频| 久久这里有精品视频免费| 好男人视频免费观看在线| 久久99精品国语久久久| 夜夜爽夜夜爽视频| 男男h啪啪无遮挡| 国产精品国产三级专区第一集| 丝袜喷水一区| 亚洲,一卡二卡三卡| 高清黄色对白视频在线免费看| 最近的中文字幕免费完整| 中文欧美无线码| av在线app专区| 国产精品免费大片| 亚洲精华国产精华液的使用体验| 美女脱内裤让男人舔精品视频| 亚洲欧美清纯卡通| 99视频精品全部免费 在线| 亚洲婷婷狠狠爱综合网| 久久99一区二区三区| 熟女人妻精品中文字幕| 国产午夜精品久久久久久一区二区三区| 国产深夜福利视频在线观看| 菩萨蛮人人尽说江南好唐韦庄| 国产色婷婷99| √禁漫天堂资源中文www| 97超碰精品成人国产| 久久久精品94久久精品| 国产午夜精品一二区理论片| 亚洲精品久久午夜乱码| 国产高清不卡午夜福利| 欧美xxxx性猛交bbbb| 日韩不卡一区二区三区视频在线| 亚洲三级黄色毛片| 久久97久久精品| 边亲边吃奶的免费视频| 亚洲成人一二三区av| 久久久久视频综合| 欧美激情国产日韩精品一区| videossex国产| 在线天堂最新版资源| 久久 成人 亚洲| 日本黄大片高清| 亚洲欧美中文字幕日韩二区| 久久99精品国语久久久| 亚洲精品乱码久久久v下载方式| 五月玫瑰六月丁香| 国产精品国产av在线观看| 日日摸夜夜添夜夜爱|