
    Tibetan Sentiment Classification Method Based on Semi-Supervised Recursive Autoencoders

    2019-08-13 05:55:04
    Computers, Materials & Continua, 2019, No. 8

    Xiaodong Yan, Wei Song, Xiaobing Zhao and Anti Wang

    Abstract: We apply the semi-supervised recursive autoencoder (RAE) model to the sentiment classification task for Tibetan short text and obtain good classification results. The input of the semi-supervised RAE model is a sequence of word vectors. We crawled a large amount of Tibetan text from the Internet, obtained Tibetan word vectors using Word2vec, and verified their validity through simple experiments. The value of the parameter α and the word vector dimension are important to the model's performance. The experimental results indicate that the model works best when α is 0.3 and the word vector dimension is 60. Our experiments also show the effectiveness of the semi-supervised RAE model for the Tibetan sentiment classification task and suggest the validity of the Tibetan word vectors we trained.

    Keywords: Recursive autoencoders (RAE), sentiment classification, word vector.

    1 Introduction

    With the rapid development of Web 2.0, users participate in creating website content. Consequently, a large number of valuable user-generated comments on people, events, products and so on appear on the Internet. By analyzing this information, potential users can mine people's views and opinions to support business decisions, political decisions, and so on. Dealing with such massive amounts of data manually is hard work, so how to help users quickly analyze and process these web texts automatically and extract useful emotional information by computer has become the focus of many researchers. Text sentiment analysis is the process of analyzing, processing, summarizing and reasoning about emotionally colored words, sentences and texts. At present, research on sentiment classification of Chinese and English texts is relatively mature. However, because Tibetan information processing started late, the study of Tibetan sentiment tendencies lags behind. With the increasing volume of network content such as Tibetan web pages and Tibetan digital libraries, more and more Tibetan speakers express their views and opinions in Tibetan on the Internet, so the sentiment analysis of Tibetan texts has become an urgent research issue. On the basis of sentence-level sentiment analysis, it is convenient to analyze the sentiment orientation of a whole text, and even to obtain the overall tendencies of massive information. Therefore, sentence-level sentiment classification has important research value and is also the research focus of this paper.

    2 Related work

    Sentiment classification is one of the hot issues in natural language processing, and there has been much research on text sentiment classification at home and abroad. In general, the methods can be divided into machine learning based methods and sentiment dictionary based methods. The basic idea of the machine learning method is to estimate the dependence between the input and output of a system from known training samples, so that it can make the most accurate prediction of unknown outputs. In 2002, Pang et al. [Pang, Lee and Vaithyanathan (2002)] used common machine learning techniques to make polarity judgments, comparing support vector machines (SVM), naive Bayes (NB), and maximum entropy; the SVM achieved the best classification effect. Another study of news text classification used the naive Bayesian method and the maximum entropy method to divide news texts into positive and negative emotions, using word frequency and binary values as feature weights, and achieved a good classification effect, with the highest classification accuracy exceeding 90%. Methods based on a sentiment dictionary or knowledge system use an existing semantic dictionary to judge the semantic tendency of the sentiment words in a sentence, and then, according to the syntactic structure and other information, indirectly obtain the semantic tendency of the sentence. Riloff et al. [Riloff and Shepherd (1997)] proposed a corpus-based approach to constructing a sentiment dictionary for emotional classification. Later, Riloff et al. [Riloff, Wiebe and Phillips (2005)] used a Bootstrapping algorithm that takes pronouns, verbs, adjectives and adverbs in the text as features, treating sentences differently according to their position in the paragraph, to classify corpus data as objective or subjective. Zhu et al. [Zhu, Min, Zhou et al. (2006)] manually constructed word sets of positive and negative seed sentiment words, and then used HowNet to calculate the semantic similarity between candidate words and seed sentiment words to determine their emotional polarity.

    In terms of sentiment classification of Tibetan texts, research at home and abroad is not yet mature, and the relevant literature is very limited. One study used a Tibetan three-level segmentation system for word segmentation and part-of-speech tagging, used a hand-built Tibetan sentiment analysis vocabulary together with existing feature selection methods to extract emotional features, and applied a similarity classification algorithm to classify the sentiment of Tibetan texts. Another study carried out sentiment analysis of Tibetan Weibo based on a combination of statistical and dictionary-based methods; its accuracy was significantly higher than that of TF-IDF-based Tibetan microblog sentiment analysis.

    Based on the above related work, this paper applies the semi-supervised recursive autoencoder (RAE) model to the sentiment classification task for Tibetan short text. Through extensive training of the word vectors and selection of their dimension, we obtain good classification results.

    3 Sentiment classification method based on semi-supervised RAE

    3.1 Tibetan word vector training

    The input of the semi-supervised RAE Tibetan sentiment classification model is a sequence of word vectors of Tibetan text. There are two methods for initializing the word vectors. In the first method, we simply initialize the vector of each word x ∈ R^n to a value (sample) drawn from the Gaussian distribution x ~ N(0, σ²), and then collect the word vectors into a matrix L ∈ R^(n×|V|), where |V| is the size of the vocabulary. This initialization method works well in an unsupervised neural network, which can optimize these word vectors by capturing valid information in the training data. The second method is to obtain word vectors through an unsupervised neural language model [Bengio, Ducharme, Vincent et al. (2003); Collobert and Weston (2008)]. In training word vectors, a neural language model captures the grammatical and semantic information in the training corpus by computing co-occurrence statistics of words, and then transforms this information into a vector space, so that after training, the semantic similarity of two words in the corpus can be predicted.
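A minimal sketch of the first (Gaussian) initialization method; the vocabulary, dimension and standard deviation below are illustrative placeholders, since the paper does not specify σ:

```python
import random

random.seed(42)

def init_embeddings(vocab, n, sigma=0.01):
    # Each word vector x in R^n is sampled from N(0, sigma^2); together
    # the vectors form the embedding matrix L of shape n x |V|.
    return {w: [random.gauss(0.0, sigma) for _ in range(n)] for w in vocab}

L = init_embeddings(["word_a", "word_b", "word_c"], n=5)
print(len(L), len(L["word_a"]))  # 3 words, 5 dimensions each
```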

    This paper uses the Word2vec tool to train a neural language model for Tibetan. Word2vec is a tool that Google developed and open-sourced in 2013 to represent words as real-valued vectors. Its main idea is to compute, over a large amount of text, context-relevant co-occurrence statistics for each word, and then use these statistics to represent each word appearing in the text as a K-dimensional vector, where K is not very large. After obtaining the vectorized representation of words, we can measure the semantic similarity of text by word vector operations. Word vectors trained with Word2vec can be used for much research in natural language processing, such as text clustering, text categorization, sentiment analysis, and so on. If we regard words as features, then Word2vec expresses the features of the text in a K-dimensional vector space, and this representation with semantic information is a deeper feature representation. The corpus used to train the Tibetan Word2vec model includes the Tibetan version of Wikipedia, primary and secondary school textbooks, news, and Sina Weibo, totaling 253 MB of Tibetan text.

    Evaluation methods for word vectors can be mainly divided into two types. The first is to apply the word vectors to an existing system and compare the system's results before and after adding them; the second is to evaluate the word vectors directly from a linguistic perspective, for example via text similarity or semantic offset. To test the effect and quality of the trained Tibetan word vectors, we first test the semantic similarity of some words. For example, for the input phrase (Qinghai Provincial People's Congress Standing Committee), we find which words are similar in the training corpus (see Fig. 1). The number on the right side of the figure measures the degree of similarity between the word and the input target word; its value ranges over [0, 1], and the larger the value, the higher the similarity.
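The similarity scores of this kind are typically cosine similarities between word vectors; a minimal nearest-word lookup might look as follows (toy two-dimensional vectors with English placeholder words, not the trained Tibetan embeddings):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot(u, v) / (||u|| * ||v||), in [-1, 1].
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar(target, vectors, topn=3):
    # Rank all other words by cosine similarity to the target word.
    q = vectors[target]
    scores = [(w, cosine(q, v)) for w, v in vectors.items() if w != target]
    return sorted(scores, key=lambda x: x[1], reverse=True)[:topn]

toy = {"happy": [0.9, 0.1], "joyful": [0.8, 0.2], "sad": [-0.7, 0.1]}
print(most_similar("happy", toy, topn=2))  # "joyful" ranks above "sad"
```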

    Figure 1: Tibetan word vector test (1)

    We also find the words similar to ???? (happiness) in the training corpus (see Fig. 2).

    From the above test results, it can be seen that the similar candidate words computed with the Tibetan word vectors trained in this paper have a large, or at least a certain, degree of semantic similarity to the query words. Therefore, we believe that the Tibetan word vectors trained in this paper are of good quality. In this paper, the Tibetan word vectors trained by the Word2vec tool are used as the input of the semi-supervised RAE model. For the small number of words not found in the Word2vec vocabulary, we use the first initialization method described above.

    Figure 2: Tibetan word vector test (2)

    3.2 Semi-supervised RAE model for Tibetan sentiment classification

    The Tibetan sentiment classification based on the semi-supervised RAE model is shown in Fig. 3.

    Figure 3: Semi-supervised RAE sentiment classification model

    Given the Tibetan word vectors, the unsupervised RAE method can already obtain a distributed feature vector representation of a text sentence without a given text structure tree. In order to apply it to Tibetan sentiment classification, it needs to be extended to a semi-supervised RAE. The basic idea is to add a classifier on top of the RAE and supervise it with labeled sample data. To do this, we add a simple softmax layer to the root node of the tree structure representing the sentence for classification, as defined in formula (1):

        d(p; θ) = softmax(W_label · p)    (1)

    where d ∈ R^K is a K-dimensional multinomial distribution and K is the number of sentiment labels. This paper focuses on the negative and positive categories, that is, K = 2.

    The output of the softmax layer represents the conditional probability that the current text belongs to each category, so the category of the text can be predicted. The cross-entropy error is calculated as in formula (2):

        E_cE(p, t; θ) = −Σ_k t_k log d_k(p; θ)    (2)

    where t is the target label distribution.
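The root-node prediction and its cross-entropy error can be sketched as follows (toy root vector and a hypothetical label weight matrix; in the real model W_label is learned during training):

```python
import math

def softmax(z):
    # Numerically stable softmax over the label scores.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(d, t):
    # E_cE = -sum_k t_k * log d_k, with t the target label distribution.
    return -sum(tk * math.log(dk) for tk, dk in zip(t, d))

# Toy root-node vector p and a hypothetical 2 x 3 label weight matrix.
p = [0.4, -0.2, 0.1]
W_label = [[1.0, 0.5, -0.3], [-0.8, 0.2, 0.6]]
z = [sum(w * x for w, x in zip(row, p)) for row in W_label]
d = softmax(z)  # predicted distribution over K = 2 labels
print([round(x, 3) for x in d], round(cross_entropy(d, [1.0, 0.0]), 3))
```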

    After adding the softmax layer, the training process of the semi-supervised RAE model needs to consider not only the reconstruction error of the parent nodes in the sentence structure tree, but also the cross-entropy error of the softmax layer, in order to learn both the semantic and the sentiment classification information in the text. Fig. 4 shows the root node RAE unit of the sentence structure tree.

    Figure 4: Root node RAE unit
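A single RAE unit of this kind encodes two child vectors into a parent and measures how well the parent reconstructs them; a minimal sketch (random toy weights and tanh activations, biases omitted for brevity) is:

```python
import math
import random

random.seed(0)
n = 4  # word vector dimension (toy value)

def tanh_vec(v):
    return [math.tanh(x) for x in v]

def matvec(W, v):
    # Multiply matrix W (list of rows) by vector v.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Encoder W1 maps [c1; c2] (length 2n) to the parent p (length n);
# decoder W2 maps p back to a reconstruction of length 2n.
W1 = [[random.uniform(-0.1, 0.1) for _ in range(2 * n)] for _ in range(n)]
W2 = [[random.uniform(-0.1, 0.1) for _ in range(n)] for _ in range(2 * n)]

c1 = [random.uniform(-1, 1) for _ in range(n)]
c2 = [random.uniform(-1, 1) for _ in range(n)]

p = tanh_vec(matvec(W1, c1 + c2))   # parent vector
rec = tanh_vec(matvec(W2, p))       # reconstruction [c1'; c2']
e_rec = sum((a - b) ** 2 for a, b in zip(c1 + c2, rec))  # squared error
print(len(p), round(e_rec, 4))
```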

    Therefore, the optimization objective function of the semi-supervised RAE method on the labeled training data set can be expressed as Eq. (3):

        J = (1/N) Σ_(x,t) E(x, t; θ) + (λ/2) ||θ||²    (3)

    where (x, t) represents a sample in the training corpus, that is, a (text sentence, label) pair, and E(x, t; θ) denotes the error of a sample.

    The error of a text sentence is the sum of the reconstruction errors and cross-entropy errors over all non-terminal nodes of the sentence's tree structure, so it can be expressed as in formula (4):

        E(x, t; θ) = Σ_(s ∈ T(x)) E([c1; c2]_s, p_s, t, θ)    (4)

    where s ranges over the non-terminal nodes of the sentence tree structure T(x).

    For the root node, the reconstruction error and the cross-entropy error need to be considered at the same time. The error term in formula (4) can then be written as in formula (5):

        E([c1; c2]_s, p_s, t, θ) = α · E_rec([c1; c2]_s; θ) + (1 − α) · E_cE(p_s, t; θ)    (5)

    where α is a parameter that balances the weights of the reconstruction error and the cross-entropy error.
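The weighting between the two error terms can be illustrated directly (toy error values; in training the two terms come from the RAE unit and the softmax layer):

```python
def combined_error(e_rec, e_ce, alpha):
    # E = alpha * E_rec + (1 - alpha) * E_cE, as in formula (5).
    return alpha * e_rec + (1 - alpha) * e_ce

# With the paper's best alpha = 0.3 the cross-entropy term dominates.
e = combined_error(2.0, 0.5, alpha=0.3)
print(round(e, 2))  # 0.3*2.0 + 0.7*0.5 = 0.95
```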

    When the value of α is adjusted, the change propagates back and affects the parameters of the RAE model as well as the vector representation of the text. In this paper, we adjust this parameter in subsequent experiments to study its impact on the classification results.

    4 Experiment and result analysis

    4.1 Experimental data set

    We crawled Tibetan microblogs and Tibetan comments from Sina Weibo with a web crawler and saved the crawled text corpus in txt format. After pre-processing, we manually tagged and verified these Tibetan texts to obtain a Tibetan sentiment corpus. The labeling rules are as follows: positive text entries are marked with the label ‘+1’, negative text entries with ‘-1’, neutral text entries with ‘0’, and useless text entries that were not removed during preprocessing with ‘2’. The final labeling result is shown in Tab. 1.

    Table 1: Marked corpus statistical result

    For a better comparison of the experimental results, the data set for this experiment consists of all 3717 negative samples and 4000 samples randomly selected from the positive samples. From this data set, 400 positive and negative samples were randomly selected as the test set, and the remaining samples were used as the training set.

    4.2 Parameter setting

    The semi-supervised RAE implementation used in this paper is an open source project on GitHub by Sanjeev Satheesh of Stanford University; the source code is a Java re-implementation based on Richard Socher's documentation and MATLAB source code. For the hardware platform, the server runs Ubuntu Linux 14.04 with 128 GB of memory and a 16-core processor.

    We use the default values for the parameters of the softmax layer classifier, and run two sets of experiments to find the optimal values of the hyperparameter α, which balances the cross-entropy error and the reconstruction error, and of the Word2vec word vector dimension. The model needs multiple iterations to converge, and the number of iterations required differs across parameter settings. To ensure the best possible result at every parameter setting, we observed the iteration counts over multiple runs and found that they never exceeded 1000, so we set the number of iterations to 1000.
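The parameter search described here amounts to a simple grid search; its skeleton might look as follows, where train_and_evaluate is a hypothetical stand-in for one full 1000-iteration training run:

```python
def grid_search(alphas, train_and_evaluate):
    # Try each alpha, keep the one with the highest overall accuracy.
    best_alpha, best_acc = None, -1.0
    for alpha in alphas:
        acc = train_and_evaluate(alpha, max_iterations=1000)
        if acc > best_acc:
            best_alpha, best_acc = alpha, acc
    return best_alpha, best_acc

# Toy stand-in whose accuracy peaks at alpha = 0.3, mirroring Tab. 2.
toy = lambda alpha, max_iterations: 1.0 - abs(alpha - 0.3)
best = grid_search([0.1, 0.2, 0.3, 0.4, 0.5], toy)
print(best)  # (0.3, 1.0)
```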

    4.3 The value of α selection experiment and result analysis

    The value of the parameter α determines how much attention the semi-supervised RAE model pays to the cross-entropy error versus the reconstruction error during training. When α is larger, the model pays more attention to the reconstruction error, and the sentence vectors obtained by training contain more syntactic information and less emotional information; when α is smaller, the opposite is true. Therefore, this paper studies the influence of α on the training process and finds the optimal value by setting a series of different values of α across training runs. In addition, we fix the dimension of the word vector at a moderate value of 50 to avoid its impact on the results. The experimental results are shown in Tab. 2.

    Table 2: Experimental results with different α values

    In order to observe and analyze more intuitively the influence of α on the experimental results, Fig. 5 shows the trend lines of the negative polarity, positive polarity and overall classification accuracy under different values of α.

    Figure 5: Trend of classification effect under different values of α

    From Tab. 2 and Fig. 5 it can be seen that as α increases from 0.1 to 0.3, the classification effect of the model improves, and at α = 0.3 the whole model performs best. When α increases from 0.3 to 0.4, the negative polarity classification continues to improve, but the positive polarity classification drops sharply. When α is greater than 0.4, the classification effect for both polarities decreases greatly as α increases, indicating that the model over-represents the syntactic information of the text and cannot capture the emotional information of the corpus.

    In our experiments the whole model achieves its best effect at α = 0.3, while the optimal value of α was 0.2 in the comparative experiment of Pu et al. [Pu, Hou, Liu et al. (2017)]. Thus the optimal value of α in the semi-supervised RAE method differs across corpora even within the same language; however, the best value is always small. Therefore, the cross-entropy error of the softmax layer should receive more attention when training the model.

    4.4 Word vector dimension selection experiment and result analysis

    When using the Word2vec tool to train Tibetan word vectors, the dimension (length) of the word vectors can be set. The vector dimension has a significant impact on both the accuracy of the semi-supervised RAE model and the training efficiency. If the word vector is too short, it cannot effectively encode the semantic information of the word; if it is too long, not only does the training data become sparse, but the subsequent model training also becomes inefficient, wasting time and computing resources. To find the best word vector dimension, we run a group of experiments in which the dimension differs across training runs while the parameter α is fixed at its optimal value 0.3. The results are shown in Tab. 3.

    Table 3: Word vector dimension comparison test result

    Dimension   Negative (P / R / F1)      Positive (P / R / F1)      Accuracy
    50          83.91 / 79.50 / 81.64      80.52 / 84.75 / 82.58      82.13
    60          84.68 / 78.75 / 81.61      80.14 / 85.75 / 82.85      82.25
    70          83.96 / 78.50 / 81.13      79.81 / 85.00 / 82.32      81.75
    80          84.30 / 76.50 / 80.21      78.49 / 85.75 / 81.96      81.13
    90          84.53 / 76.50 / 80.31      78.54 / 86.00 / 82.10      81.25
    100         84.34 / 76.75 / 80.37      78.67 / 85.75 / 82.06      81.25

    To observe and analyze more intuitively the influence of the word vector dimension on the classification effect, Fig. 6 shows the trend of the negative polarity, positive polarity and overall classification accuracy under different vector dimensions.

    Figure 6: Trend of classification effect under different feature dimensions

    From Tab. 3 and Fig. 6 we can see that when the word vector dimension is 10, the classification effect of the model is particularly poor; the reason may be that the vectors are too short to represent the text information well. When the dimension is increased from 10 to 20, the classification effect improves greatly. From 20 to 60 the classification effect still improves with the dimension, but the growth slows and its amplitude shrinks; the overall effect of the model is best at a dimension of 60. When the dimension exceeds 60, the overall classification effect decreases and fluctuates slightly, which indicates that further increasing the word vector dimension not only fails to better express the text information, but also introduces noise into the model and thus harms the classification effect.

    In Pu et al. [Pu, Hou, Liu et al. (2017)], the optimal word vector dimension is 110, and with a corpus of 10,000 samples the overall classification accuracy of the model reaches 86.2%, higher than the best classification result of this paper. In theory, because the word vectors used in this paper are obtained through training, the final classification effect should be better than with randomly initialized word vectors. It is very likely that the Tibetan sentiment corpus collected in this paper covers a wider range of domains, and the number of samples in some domains is insufficient, so the model cannot learn the emotional characteristics of those domains well. Therefore, this comparison is not scientifically rigorous. We hope that, with the continuing informatization of Tibetan and other minority languages, relevant research institutions will launch evaluation platforms enabling evaluation and comparison on the same corpora; this would better promote progress in sentiment analysis for minority languages.

    4.5 Comparison and analysis of experimental results

    In this paper, SVM experiments based on manually extracted features, SVM Tibetan sentiment classification experiments based on algorithmically extracted features, SVM Tibetan sentiment classification experiments based on multi-feature fusion, and Tibetan sentiment classification based on the semi-supervised RAE model are carried out on the same dataset. A comparison of the results obtained by the four sets of experiments with their optimal parameters is shown in Fig. 7.

    Figure 7: Comparison of experimental results

    It can be seen from Fig. 7 that the classification effect of the Tibetan sentiment classification model based on the semi-supervised RAE is better than that of the other models in this paper, with an overall classification accuracy of 82.25%. The reason is that SVM is a statistical machine learning method which can only learn probability statistics over words, while the semi-supervised RAE model used in this paper builds distributed vector representations of text sentences. These vectors contain not only the statistical distribution information of the feature words in the text, but also the contextual structure information of the sentences, which allows a better understanding of the text and thus a better sentiment classification result.

    5 Conclusion

    This paper applies the semi-supervised RAE model to the sentiment classification task for Tibetan short texts and achieves a good classification effect. The method is compared with SVM experiments based on manually extracted features, SVM Tibetan sentiment classification experiments based on algorithmically extracted features, and SVM Tibetan sentiment classification experiments based on multi-feature fusion. The results show that the proposed method is superior to the other three classification methods.

    Acknowledgment: The work in this paper is supported by the National Natural Science Foundation of China project “Research on special video recognition based on deep learning and Markov logic network” (61503424).
