
    Multi-Class Sentiment Analysis of Social Media Data with Machine Learning Algorithms

Computers, Materials & Continua, 2021, Issue 10

Galimkair Mutanov, Vladislav Karyukin and Zhanl Mamykova

Al-Farabi Kazakh National University, Almaty, 050040, Kazakhstan

Abstract: The volume of social media data on the Internet is constantly growing. This has created a substantial research field for data analysts. The diversity of articles, posts, and comments on news websites and social networks astonishes the imagination. Nevertheless, most researchers focus on posts on Twitter, which have a specific format and length restriction, and the majority of them are written in the English language. As relatively few works have paid attention to sentiment analysis in the Russian and Kazakh languages, this article thoroughly analyzes news posts in the Kazakhstan media space. The amassed datasets include texts labeled according to three sentiment classes: positive, negative, and neutral. The datasets are highly imbalanced, with a significant predominance of the positive class. Three resampling techniques (undersampling, oversampling, and synthetic minority oversampling (SMOTE)) are used to resample the datasets to deal with this issue. Subsequently, the texts are vectorized with the TF-IDF metric and classified with seven machine learning (ML) algorithms: naïve Bayes, support vector machine, logistic regression, k-nearest neighbors, decision tree, random forest, and XGBoost. Experimental results reveal that oversampling and SMOTE with logistic regression, decision tree, and random forest achieve the best classification scores. These models are effectively employed in the developed social analytics platform.

Keywords: Social media; sentiment analysis; imbalanced classes; machine learning; oversampling; undersampling; SMOTE; Russian; Kazakh

    1 Introduction

It has become common practice for people to actively share their thoughts and opinions about local and global events through social media. As new events happen almost every day, and their relevance varies remarkably, it is imperative to monitor the most critical topics in different spheres of life (i.e., politics, economics, civil society, education, healthcare, ecology, culture, and sports). The volume of facts and opinions about them shared on social media renders such tracking impracticable without automated methods, and this has made analytical platforms indispensable. Generally, the core element of these platforms is the sentiment analysis tool. Sentiment analysis [1] has been extensively explored since the early works by Mantyla et al. [2]. Analytical platforms [3] have been developed to automate and increase the speed of social media processing. They are customarily targeted at monitoring current social and political situations [3,4], using social networks under governmental control [5], quantitatively analyzing unstructured data [6], forming analytical material [7], and extracting pertinent information from texts [8].

In this paper, we present the OMSystem, the first automatic tool developed to analyze Kazakh users’ opinions expressed through social media and over-the-top (OTT) platforms. This system enables monitoring of web resources and social networks with subsystems for modeling “social well-being,” estimating the sentiment of users’ messages and comments, supporting sentiment dictionaries of the Russian and Kazakh languages, and machine learning (ML) algorithms. The OMSystem covers Kazakhstan’s leading news portals, the most popular social networks, such as Facebook, VKontakte, Instagram, Twitter, and YouTube, and the accounts of famous bloggers. The system’s chief objectives are prompt monitoring of the information space and social networks on the most relevant themes: unambiguously defining the scope of a problem, determining public opinions and explaining them quickly, analyzing the dynamics of mentions of commercial brands, events, and activities, and, in turn, evaluating the extent of “social well-being.”

The architecture of the OMSystem, schematically illustrated in Fig. 1, includes the following components:

Figure 1: The architecture of the OMSystem

• Data sources: They include news portals, blogs, and social networks.

• Connector module: It is used to configure the connection to sources and the API of the target data sources.

• Linguistic constructor module: It is used to create sentiment dictionaries comprising words belonging to one of three classes: positive, negative, and neutral.

• Data analysis and processing module: It is based on sentiment dictionaries and deploys ML algorithms for sentiment analysis. Furthermore, it builds social analytics that reveals the sentiment concerning momentous events and people’s attitude toward and interest in them.

• Results module: It encompasses a newly formed relational database of texts and comments, sentiment analysis models, social analytics, and visualized reports of “social well-being.”

The core element of the OMSystem is the sentiment analysis tool capable of identifying three sentiment categories (positive, neutral, and negative) of parsed texts.

There are several approaches to defining sentiment:

• Lexicon-based [9]

• ML-based [10]

• Deep learning (DL)-based [11]

The lexicon-based approach [12,13] relies on assigning sentiment categories to words. Words are typically labeled in two categories (positive and negative), three categories (positive, neutral, and negative), or five categories (very positive, positive, neutral, negative, and very negative). The effectiveness of the lexicon-based approach [13] depends on the high quality of sentiment dictionaries containing a large corpus of words labeled in the categories mentioned earlier. A notable drawback [14] of this approach is the need to include a large number of linguistic resources to find the essential words for sentiment analysis.

The ML-based approach [15] includes supervised and unsupervised learning methods [16]. In the former, whole texts, rather than individual words, are labeled with sentiment categories. This is an intricate, time-consuming, and error-prone process, which requires meticulously designed guidelines. Therefore, the elaboration of semi-automatic methods using sentiment dictionaries is a reasonable solution for accelerating text labeling and enhancing its quality. After labeling, the dataset is divided into training and testing portions. In the next step, the TF-IDF measure is used to extract features from the texts. Subsequently, the texts are classified with ML algorithms (naïve Bayes (NB), logistic regression (LR), support vector machine (SVM), k-nearest neighbors (k-NN), decision tree (DT), random forest (RF), XGBoost, CatBoost, etc.). Unsupervised learning [16] does not use any labeled training data and therefore does not require human participation. The most commonly employed unsupervised method is k-means clustering [17], which groups similar data points around centroids representing the clusters’ centers and discovers their mutual features. Although clustering-based approaches do not require a preliminary stage of dataset preparation by human experts, they are sensitive to the position of the centroids. Moreover, the clustering method groups instances together based on criteria that are not explicitly evident.
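As a purely illustrative sketch (not part of the OMSystem), the snippet below shows how such an unsupervised grouping could look with scikit-learn: texts are vectorized with TF-IDF and clustered with k-means. The toy texts and the choice of three clusters are assumptions made for the example.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus; in practice these would be crawled social media texts.
texts = [
    "great news for the economy",
    "the new policy is a disaster",
    "the meeting is scheduled for Monday",
    "wonderful results in education",
]

# Convert texts to TF-IDF vectors.
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)

# Group the documents into three clusters (assumed number of sentiment groups).
kmeans = KMeans(n_clusters=3, random_state=42, n_init=10)
labels = kmeans.fit_predict(X)

for text, label in zip(texts, labels):
    print(label, text)
```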

A number of recent studies have been devoted to a DL-based approach [18–20] that focuses on enhancing text classification performance owing to its superior accuracy when trained with a considerable amount of data. To this end, the use of deep neural networks (DNNs) [20], recurrent neural networks (RNNs) [21,22], and convolutional neural networks (CNNs) [23] is well documented in the literature. A DNN is a type of neural network (NN) that includes several layers: an input layer processing a representation of the input data, hidden layers abstracting from this representation, and an output layer that predicts a class based on the inner abstraction. A CNN is a DNN composed of convolutional [23] and pooling [24] layers. While convolutional layers filter inputs to extract features, pooling layers reduce the dimension of the features. A final layer reduces the vector dimension to the length of the categorical representation of the class. An RNN is an NN where connections between neurons create a directed cycle that forms feedback loops. This type of NN can remember previous computation steps and reuse that information for the following input sequence.

This paper focuses on the supervised ML-based approach, which is computationally fast and exhibits promising classification results. The rest of the paper is organized as follows. Section 2 provides an overview of the related works pertinent to the theme of this paper. In Section 3, we introduce the benchmark datasets, preprocessing steps, and ML algorithms used for sentiment classification. In Section 4, we discuss our experimental setting, providing an extensive analysis of our results. Finally, in Section 5, we briefly summarize all the steps taken, suggest the best ML models for use in the OMSystem, and outline directions for future research.

    2 Related Works

This section reviews the literature devoted to sentiment classification approaches. Research in sentiment analysis has been reflected in a large number of works in the last couple of years. As the emotional aspect of texts is generally difficult to determine unambiguously, lexicon-, ML-, and DL-based approaches have been explored in diversified ways.

A number of recent works [12–14] have presented extensive studies on the usage of lexicons and have introduced various labeling schemes for lexicon generation and news classification. Reference [25] experimented with several categories: politics, business, sports, entertainment, and technology. A lexicon dictionary was used to find the positive and negative words in a document, and the whole document’s sentiment score was computed by considering the sentiment values of all its words. Although the assignment of a document’s sentiment with a lexicon dictionary is well defined, a few studies [14,25] did not discuss the manual check of the quality of the lexicon-based labeling by human annotators. This step is vital for sentiment analysis and is elucidated in Section 3 of this paper.

In the framework of ML-based approaches, a number of works focused on comments from the Twitter platform [10,15,18], releasing or exploiting existing large-scale datasets available for building their classifiers. The classification [26,27] of tweets with NB, k-NN, and SVM classifiers has been explored in [28–31], revealing fairly satisfactory and expeditious results despite the simplicity of their implementation. Preprocessing techniques and the Bernoulli NB, SVM, and LR algorithms were used to improve the efficacy of sentiment classification [29]. Stemming and the removal of redundant symbols and stop words helped to increase the accuracy of the classification results.

DNNs have also been used, among other works, in [32,33]. CNNs have been implemented for sentiment classification of Chinese text in [34]. Results computed on the Chinese datasets indicated that the accuracy was comparable with traditional ML methods. Focusing on Arabic sentiment classification, Reference [35] explored both CNNs and long short-term memory networks (LSTMs) for binary sentiment classification. Experimental results manifested an outstanding performance, with an accuracy of 88% and 85% for the CNN and LSTM, respectively. Combinations of CNN with LSTM and gated recurrent unit models were implemented in [36]. Binary classification was applied to five review and three Twitter datasets. In the experiments, an average accuracy of 90% was attained.

Most of the mentioned works focused on processing the English language, which has numerous available and accessible resources. This paper addresses the sentiment analysis of texts in the Russian and Kazakh languages, which has heretofore received minimal attention. Reference [37] explored the sentiments of Russian tweets using LR, XGBoost, and CNNs. Reference [38] focused on ML algorithms for classifying Russian texts but did not provide a detailed comparison of the previously employed algorithms. Reference [39] implemented a dictionary for sentiment analysis of Kazakh texts. In [40], sentiment analysis was performed by formalizing rules for defining the sentiment of phrases in texts. These works neither conducted a thorough study of sentiment classification with various lexicon- and ML-based approaches nor presented a comparison with the results attained by previous similar works. Thus, this paper delivers a more comprehensive sentiment analysis of Russian and Kazakh texts with seven extensively deployed ML algorithms.

    3 Methodology

This section describes the principal steps of text preprocessing [41,42], class resampling [43–45], feature selection, and text classification with ML algorithms [46]. These steps and the underlying logic are graphically represented in Fig. 2.

Figure 2: Stages of classification with ML algorithms

    3.1 Datasets

The texts used to build our training and testing datasets were collected with the web-crawler provided by the OMSystem. The primary sources were the leading news portals of Kazakhstan, namely: “Nur” (https://www.nur.kz/), “Informburo” (https://www.informburo.kz/), “Today” (http://www.today.kz/), “Kazinform” (https://www.kazinform.kz/), “KazTag” (https://www.kaztag.kz/ru/), “Holanews” (https://www.holanews.kz/), “Forbes” (https://www.forbes.kz/), “Zakon” (https://www.zakon.kz/), “Time” (https://www.time.kz/), “Vlast” (https://www.vlast.kz/), “Tengrinews” (https://www.tengrinews.kz/), “Kapital” (https://www.kapital.kz/), and “The Village” (https://www.the-village-kz.com/).

The downloaded texts were labeled according to three sentiment classes: positive, negative, and neutral. The initial labeling was realized with a sentiment dictionary. Subsequently, the labeled texts were manually examined and corrected by Master’s and Ph.D. students in political science. Each text was reviewed by three annotators separately, and the final label was assigned by majority vote. The total number of manually revised, sentiment-labeled texts is 80,873 in Russian and 15,933 in Kazakh. Tab. 1 provides the distribution of the downloaded texts over the three classes.

Table 1: Distribution of texts over classes

    3.2 Data Processing

The retrieved texts need to be preprocessed prior to the subsequent steps. First, all words were transformed to lowercase. Afterward, punctuation marks, digits, special symbols, and links were removed, as they do not carry any pertinent information in most instances [37]. Additionally, it was necessary to remove extremely frequent words (i.e., stop words).

Furthermore, stemming or lemmatization has to be performed to reduce the number of word forms with similar emotional meaning [37,38]. The difference between these approaches is that the latter obtains the infinitive (dictionary) form of a word, whereas the former eliminates affixes and endings to obtain its root. In this paper, stemming was used because there is no well-designed lemmatizer for the Kazakh language, and its complete development would be overly taxing. The “SnowballStemmer” from the Python NLTK library was applied to words in the Russian language, and our own “KazakhStemmer,” based on a full set of affixes and endings, was designed to process words in the Kazakh language.
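A minimal preprocessing sketch along these lines is shown below, using NLTK’s SnowballStemmer for Russian; the regular expression, the stop-word handling, and the sample sentence are illustrative assumptions, and the authors’ KazakhStemmer is not reproduced here.

```python
import re

from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer

# Requires: nltk.download("stopwords")
russian_stopwords = set(stopwords.words("russian"))
stemmer = SnowballStemmer("russian")

def preprocess(text):
    # Lowercase, then drop links, digits, punctuation, and special symbols.
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"[^а-яёa-z\s]", " ", text)
    # Tokenize, remove stop words, and stem the remaining tokens.
    tokens = [t for t in text.split() if t not in russian_stopwords]
    return [stemmer.stem(t) for t in tokens]

print(preprocess("Отличные новости: экономика растёт! https://example.com"))
```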

    3.3 Class Resampling

Imbalanced classes pose a notable challenge when training a good classifier, both for binary and multi-class classification tasks [43–45]. As the classes are highly imbalanced, a majority classifier would yield fairly accurate results by labeling all instances with the most represented class. However, by failing on all the items belonging to the other two classes, it would perform poorly in terms of precision, recall, and F1-score, which are our primary evaluation metrics. Class resampling techniques provide different alternative solutions to this problem. Among them, for our experiments, we chose three widely used techniques (random undersampling, random oversampling [43–45], and synthetic minority oversampling (SMOTE)), leaving the exploration of alternative approaches for future research (Fig. 3).

The undersampling method eliminates part of the training dataset belonging to the majority class to make it close to or equal in size to the minority class. The drawback of this solution is that when the minority class is very small, reducing the other two classes to its size discards a large part of pertinent and valuable information. The oversampling method performs the opposite operation: the minority class is increased in size to match the majority class by copying its instances multiple times until the desired size is reached. This solution has the advantage of preserving all the valuable information in the dataset.

SMOTE is another prevalent oversampling technique wherein new points are synthesized between the existing ones. The procedure is typically conceived as a hypercube spanned between each point of the minority class and its k nearest neighbor points; new artificial points are created inside the hypercube. This solution has the conspicuous advantage of preserving the useful information and even increasing its amount.
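A brief sketch of the three resampling strategies with the Imbalanced-learn package is given below; the synthetic feature matrix stands in for the vectorized training data and is an assumption for the example.

```python
from collections import Counter

from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

# Placeholder imbalanced dataset standing in for the TF-IDF training matrix.
X, y = make_classification(n_samples=3000, n_classes=3, n_informative=5,
                           weights=[0.7, 0.2, 0.1], random_state=42)

for sampler in (RandomUnderSampler(random_state=42),
                RandomOverSampler(random_state=42),
                SMOTE(random_state=42)):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))
```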

Figure 3: Class resampling—(a) random undersampling and (b) random oversampling

    3.4 Text Vectorization

The text vectorization step aims at transforming texts into a numeric vector representation on which ML algorithms can be readily applied. “Bag of words” is a simple vectorization approach wherein every text from the dataset is represented as a vector whose length equals the size of the dataset’s vocabulary. In this encoding model, a vector is filled with the frequency of each word that appears in the text. Despite the simplicity of this approach, the vectors are generally very long, with lots of zeros. Besides, it does not consider the importance of the words. Along this direction, a valid alternative is represented by TF-IDF [47]. It is calculated as

TF-IDF(w, d) = TF(w, d) × IDF(w).

TF is the ratio of a word’s occurrences in a document to the number of words in the document:

TF(w, d) = count(w, d) / count(N, d),

where count(w, d) is the frequency of a word w in a document d and count(N, d) is the number of words N in the document d.

IDF provides the weight of each word based on its frequency in the corpus D:

IDF(w) = log(count(D) / count(w, D)),

where count(D) is the number of documents and count(w, D) is the number of documents containing the word w.
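As a sketch of this vectorization step, the snippet below uses scikit-learn’s TfidfVectorizer (which implements a smoothed variant of the IDF formula above); the toy corpus is an assumption.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "economic growth exceeded expectations",
    "the team lost the final match",
    "parliament discussed the new budget",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)   # sparse matrix: documents x vocabulary

print(vectorizer.get_feature_names_out())
print(X.toarray().round(2))
```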

    3.5 Classification with ML Algorithms

In the k-NN classifier, the similarity of two documents is typically measured with the Euclidean distance between their term-weight vectors:

d(x, y) = sqrt( Σ_{i=1..N} (a_ix − a_iy)² ),

where d(x, y) is the distance between two documents; a_ix and a_iy are the weights of the i-th terms in the documents x and y, respectively; and N is the number of unique words in the list of documents.
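A small sketch of k-NN classification over TF-IDF features with scikit-learn is shown below; the labeled toy data and the choice of k = 3 are assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

train_texts = ["great event", "terrible accident", "regular schedule update",
               "wonderful victory", "awful service", "weather forecast issued"]
train_labels = ["positive", "negative", "neutral",
                "positive", "negative", "neutral"]

vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)

# k-NN with Euclidean distance over the TF-IDF term-weight vectors.
knn = KNeighborsClassifier(n_neighbors=3, metric="euclidean")
knn.fit(X_train, train_labels)

X_test = vectorizer.transform(["great victory for the team"])
print(knn.predict(X_test))
```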

A DT [45] is a structure with N nodes. In the first step, a word is chosen, and all documents containing it are placed on one side, while documents not containing it are put on the other side. This way, two separate sets are created. Subsequently, a new word is selected in each of these sets, and the previous steps are repeated. The entire procedure continues until sets are obtained in which all documents are assigned to the same class. In the RF classifier [48], a collection of independent trees is built. Every document is classified by each tree independently, and the class of the document is defined by the largest number of votes across all trees.

XGBoost [45] is one of the most extensively used ML algorithms. It has good performance and solves most regression and classification problems. Boosting is an ensemble technique in which the errors of previous models are corrected by a new model. The deviations of the trained ensemble’s predictions are computed on the training set at each iteration. Thereby, the optimization is done by adding the new tree’s predictions to the ensemble, decreasing the model’s mean deviation. This procedure continues until the required error level is reached or the “early stopping” criterion is triggered.
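A condensed sketch of how several of these classifiers could be trained and compared is given below; the synthetic data standing in for TF-IDF vectors and the parameter choices are assumptions made for the example, and XGBoost would be added analogously via the xgboost package.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder feature matrix standing in for TF-IDF vectors of the news texts.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=6,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=42),
    "RF": RandomForestClassifier(n_estimators=200, random_state=42),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(f1_score(y_te, model.predict(X_te), average="weighted"), 3))
```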

    4 Experiments and Discussion

In this study, Python 3.8 with the NLTK, Scikit-learn, Imbalanced-learn, Matplotlib, and Seaborn libraries was used for the experiments. Tokenization, removal of stop words, and stemming were performed with NLTK. Python’s Imbalanced-learn package was utilized for class resampling. Vectorization and classification were accomplished with Scikit-learn. Matplotlib and Seaborn plots were used to visualize the experimental results. The required steps were taken in the following order: texts were preprocessed, resampled with three techniques (undersampling, oversampling, and SMOTE), vectorized with TF-IDF, and classified with the ML algorithms described in Section 3.5. Different metrics were used, depending on the classification task, to measure the performance of the classifiers.
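The overall experimental flow could be sketched as follows; the toy labeled corpus, the choice of LR as the classifier, and the SMOTE parameters are assumptions, not the authors’ exact configuration.

```python
from imblearn.over_sampling import SMOTE
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Toy labeled corpus standing in for the Russian/Kazakh news datasets.
texts = ["great success", "huge failure", "scheduled meeting",
         "wonderful achievement", "serious problem", "weekly report"] * 20
labels = ["positive", "negative", "neutral"] * 40

# 70/30 split, as in the experiments.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.3, random_state=42, stratify=labels)

# Vectorize, resample the training set only, and train the classifier.
# On the real imbalanced data, SMOTE would synthesize minority-class samples.
vectorizer = TfidfVectorizer()
X_train_vec = vectorizer.fit_transform(X_train)
X_res, y_res = SMOTE(random_state=42, k_neighbors=3).fit_resample(X_train_vec, y_train)

clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(vectorizer.transform(X_test))))
```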

Binary classification (into positive/negative). The following metrics were used:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1-score = 2 × Precision × Recall / (Precision + Recall)

where TP (true positive) indicates a test instance correctly classified with the positive sentiment class; TN (true negative) indicates a test instance correctly classified with the negative sentiment class; FP (false positive) indicates a test instance wrongly classified with the positive sentiment class; FN (false negative) indicates a test instance wrongly classified with the negative sentiment class.
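For illustration, the snippet below computes these binary metrics with scikit-learn on a small made-up set of predictions.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = ["positive", "positive", "negative", "negative", "positive", "negative"]
y_pred = ["positive", "negative", "negative", "positive", "positive", "negative"]

print("Accuracy: ", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred, pos_label="positive"))
print("Recall:   ", recall_score(y_true, y_pred, pos_label="positive"))
print("F1-score: ", f1_score(y_true, y_pred, pos_label="positive"))
```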

Multi-class classification into positive/negative/neutral. The following metrics were implemented: precision-macro, precision-micro, precision-weighted, recall-macro, recall-micro, recall-weighted, F1-score-macro, F1-score-micro, and F1-score-weighted. Precision-macro is the arithmetic mean of the precision scores of all classes. Precision-micro is the sum of the true positives of all classes divided by the total number of positive predictions.

The weighted average is computed like the macro average; however, each class has a weight proportional to the number of entries that belong to it. Weighted precision and recall are calculated in the following way:

Precision-weighted = w1 × Precision1 + w2 × Precision2 + w3 × Precision3
Recall-weighted = w1 × Recall1 + w2 × Recall2 + w3 × Recall3

where w1, w2, and w3 are the weights of the corresponding classes, and Precision_i and Recall_i are the per-class precision and recall scores.
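These averaging strategies correspond to the `average` parameter of scikit-learn’s metric functions, as sketched below on made-up three-class predictions.

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = ["positive", "negative", "neutral", "positive", "positive", "neutral"]
y_pred = ["positive", "neutral",  "neutral", "positive", "negative", "neutral"]

for avg in ("macro", "micro", "weighted"):
    print(avg,
          round(precision_score(y_true, y_pred, average=avg, zero_division=0), 3),
          round(recall_score(y_true, y_pred, average=avg, zero_division=0), 3),
          round(f1_score(y_true, y_pred, average=avg, zero_division=0), 3))
```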

In the experiments, each dataset was randomly divided into training (70%) and testing (30%) sets. The seven ML algorithms were then applied to the texts, and the corresponding results were obtained. The classification results computed on the imbalanced Russian- and Kazakh-language datasets are shown in Tab. 2. Multi-class classification metrics and a confusion matrix of texts in the Russian language with LR are shown in Fig. 4.

The results of the classification of the oversampled datasets are encapsulated in Tab. 3. Multi-class classification metrics and a confusion matrix of Russian texts with LR are shown in Fig. 5.

The results of the classification of the SMOTE datasets are shown in Tab. 4. Multi-class classification metrics and a confusion matrix of Russian texts with LR are shown in Fig. 6.

The results of the classification of the undersampled datasets are shown in Tab. 5. Multi-class classification metrics and a confusion matrix of Russian texts with LR are shown in Fig. 7.

Table 2: Classification of imbalanced datasets

Figure 4: Classification metrics of imbalanced Russian texts

Table 3: Classification of oversampled datasets

Figure 5: Classification metrics of oversampled Russian texts

Table 4: Classification of SMOTE datasets

Figure 6: Classification metrics of SMOTE Russian texts

Table 5: Classification of undersampled datasets

Figure 7: Classification metrics of undersampled Russian texts

The results of the different classification models reveal that the models trained on imbalanced data achieve the lowest performance. Data undersampling obtains medium results, possibly owing to the fact that the resulting models cannot take full advantage of the whole training material available. As expected, the oversampled and SMOTE models, which make better use of the available data, achieve the best results. Among the various ML models tested, LR, DT, and RF yielded the best performances. Although the NB classifier performs well, it is worth remarking that the algorithm suffers from the known limitations associated with the assumption that all its features are mutually independent. Despite its simplicity, k-NN attains satisfactory results on datasets of small size; nevertheless, it tends to be slower and less accurate on larger corpora. As the RF classifier uses a number of independent DTs, it is apparent that its performance is superior to that of a single DT. In a previous study, singular value decomposition [49] was applied to texts that were then classified with SVM and XGBoost; this was done to speed up the algorithms’ training, which is one reason these classifiers under-perform compared to the others. The classification results for the Russian and Kazakh languages are comparable, with slightly better performance for the latter on the oversampled and SMOTE datasets, which have a smaller testing size. In summary, large balanced datasets obtained with the oversampling and SMOTE approaches yield the best results and are preferable for use in social analytics platforms.

    5 Conclusion

We described the OMSystem, an advanced analytical system for monitoring Kazakhstan’s most popular news portals and social networks, and focused on its sentiment analysis component for automatic text labeling. We described its core functionalities, processing steps, and algorithms (NB, SVM, LR, k-NN, DT, RF, and XGBoost), discussing their strengths and weaknesses for our text classification task. Before applying these ML algorithms, the texts were preprocessed to remove punctuation, extra symbols, and stop words, stemmed, and resampled to account for the highly imbalanced data the system has to be trained on. Specifically, resampling was performed with random undersampling, random oversampling, and SMOTE. As far as the features are concerned, we concentrated on feeding our models with word frequency information supplied in the form of TF-IDF values. Classification performance was measured with different metrics (accuracy, precision, recall, and F1-score), taking into account the various data conditions (imbalanced and balanced through resampling). Besides, the corresponding histograms were built to visualize the classification metrics. The analysis of our results reveals that LR, DT, and RF with random oversampling and SMOTE are the most suitable models for addressing this task.

Based on this research, the best ML classification models for estimating social mood are included in the OMSystem for evaluating people’s attitude toward significant events in society and their level of interest and involvement in different topics. The social mood on specific topics is determined by finding the largest number of texts belonging to one of the three sentiment categories. As the corpora of labeled texts and the base word thesaurus used to understand their content are constantly growing, our ML models are periodically retrained to improve their sentiment classification performance. Moreover, future work will include strengthening these ML models by applying CNNs, RNNs, and bidirectional encoder representations from transformers (BERT).

Acknowledgement: We would like to thank the Center for data analysis and processing of Al-Farabi Kazakh National University for providing the datasets obtained with the OMSystem.

Funding Statement: The authors received no specific funding for this study.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
