
    Suggestion Mining from Opinionated Text of Big Social Media Data

Computers, Materials & Continua, September 2021

Youseef Alotaibi, Muhammad Noman Malik, Huma Hayat Khan, Anab Batool, Saif ul Islam, Abdulmajeed Alsufyani and Saleh Alghamdi

1Department of Computer Science, College of Computer and Information Systems, Umm Al-Qura University, Saudi Arabia

2Department of Computer Science, Faculty of Engineering and Computer Sciences, National University of Modern Languages, Islamabad, Pakistan

3Department of Software Engineering, Faculty of Engineering and Computer Sciences, National University of Modern Languages, Islamabad, Pakistan

4Department of Computer Sciences, Institute of Space Technology, Islamabad, Pakistan

5Department of Computer Science, College of Computers and Information Technology, Taif University, Taif, 21944, Saudi Arabia

6Department of Information Technology, College of Computers and Information Technology, Taif University, Taif, Saudi Arabia

Abstract: Social media data are rapidly increasing and constitute a source of user opinions and tips on a wide range of products and services. The increasing availability of such big data on biased reviews and blogs creates challenges for customers and businesses in reviewing all content in their decision-making process. To overcome this challenge, extracting suggestions from opinionated text is a possible solution. In this study, the characteristics of suggestions are analyzed and a suggestion mining extraction process is presented for classifying suggestive sentences from online customers' reviews. A classification using a word-embedding approach is performed via the XGBoost classifier. The two datasets used in this experiment relate to online hotel reviews and Microsoft Windows App Studio discussion reviews. F1, precision, recall, and accuracy scores are calculated. The results demonstrate that the XGBoost classifier outperforms the others, with an accuracy of more than 80%. Moreover, the results reveal that suggestion keywords and phrases are the predominant features for suggestion extraction. Thus, this study contributes to knowledge and practice by comparing feature extraction classifiers and identifying XGBoost as the better suggestion mining process for identifying suggestions in online reviews.

Keywords: Suggestion mining; word embedding; Naïve Bayes; Random Forest; XGBoost; dataset

    1 Introduction

Online texts of reviews and blogs are continuously increasing and constitute public opinions regarding products, services, individuals, organizations, or events. The sentences in such online text can express sentiments and emotions [1] and are generally referred to as opinions, recommendations, instructions, advice, and tips for others regarding any entity. Such opinions can be collectively termed suggestions [2].

Studies have described suggestion mining as sentence classification, which is based on predicting opinionated text into the binary forms of suggestions and non-suggestions [3-5]. The literature has generally defined suggestion mining as the "extraction of suggestions from the opinionated text, where the suggestions keyword denotes the recommendation, advice, and tips" [3]. These suggestions are valuable to customers and business organizations [6] if extracted comprehensively from opinionated text [7]. Suggestions must be extracted using computers because online reviews, blogs, and forums that contain suggestions are continuously increasing, resulting in large datasets [6]. The high data volume makes it challenging to extract suggestions [8]; therefore, automatic suggestion mining has emerged as a new research area [1].

Suggestion mining is an approach that largely emphasizes analyzing and identifying sentences to explicitly explore the suggestions they contain [2]. Identifying opinions about products and services that are discussed on social media is useful to organizations' management and to consumers. These opinions offer suggestions that assist management in deciding on improvements to products and services [6]. In addition, consumers can benefit from these suggestions by using them to decide whether to buy a particular product or service. This growing body of opinionated text constitutes the major dataset in the majority of recent research [9-11]. Some studies have focused on product reviews [4,5,12], on reviews related to tourism (e.g., hotel services) [10,11], and on social media data (e.g., Twitter) [13].

Moreover, several challenges in suggestion mining approaches relate to analyzing the sentiments of a sentence, identifying the relationships between suggestions, and selecting annotators for supervised and unsupervised learning [14]. Suggestion mining is a recent research area, and thus, studies on extracting suggestions involving different classifiers and algorithms are relatively limited [15]. Studies related to support vector machines (SVMs) [16], long short-term memory (LSTM) [8], hidden Markov models [17], Random Forest [18,19], Naïve Bayes [20,21], and other areas [22] have also contributed to improvements in suggestion mining.

Thus, the present study is among the few such studies that aim to improve suggestion mining results by experimenting with the word-embedding approach and the XGBoost classifier. This study aims to capture a word's context and its similarity to other words. Furthermore, this study contributes by improving classifier performance through the XGBoost classifier, as compared with Naïve Bayes and Random Forest. Moreover, variations in the proposed suggestion mining extraction process yield improved suggestion mining results. The remainder of the paper is structured as follows. Section 2 describes related work regarding suggestion mining and Section 3 explains the proposed suggestion mining extraction process. Section 4 describes the detailed experiment results and Section 5 presents a results analysis and discussion. Last, Section 6 describes the conclusion and future work.

    2 Related Works

Prior approaches to suggestion mining focused on linguistic rules and supervised machine learning with manually identified features. The key supervised learning algorithms used in these studies were the hidden Markov model, the conditional random field (CRF) [9], factorization machines [4], and SVM [2]. Further, these studies used training datasets that had fewer than 8,000 sentences and an exceedingly imbalanced distribution of classes. Importantly, only a few of these datasets are publicly available. In all these datasets, the suggestion class is in the minority, ranging from 8% to 27% of the dataset's sentences.

"Suggestion" can be defined in two ways. First, a generic definition [11,12] is "a sentence made by a person, usually as a suggestion or an action guide and/or conduct relayed in a particular context." Second, an application-specific definition defines a suggestion as "sentences where the commenter wishes for a change in an existing product or service" [15]. Although the generic definition applies to all domains, the existing research has only evaluated suggestion mining on a single domain.

Various studies [23,24] have performed mining on weblogs and forums of what they denote as sentences that reveal advice; this mining is performed using learning methods. Recently, neural networks and learning algorithms have been utilized for suggestion mining [13]. Tao et al. [13] used pretrained word embeddings with a gold-standard training dataset. In addition, diverse classifiers were compared. These classifiers included manually expressed guidelines, SVM (with a diversity of manually reported features related to lexical, syntactic, and sentiment analysis), convolutional neural networks, and LSTM networks.

Similarly, the authors of a study conducted in 2021 [4] engaged supervised learning and achieved suggestion detection on tweets; these suggestions regarded the phone that was launched by Microsoft. Zucco et al. [14] did not define the suggestions in their work; rather, they reported the objective of collecting suggestions, which was to progress and improve the quality and functionality of the product, organization, and service. The authors in [25] delivered an algorithm, GloVe, for training word embeddings that performs highly on several benchmark tasks and datasets. The GloVe algorithm has outperformed various other algorithms, such as skip-grams and the continuous bag of words, which are variations of the word2vec model. Therefore, there is a strong basis for using pretrained GloVe embeddings [25] to evaluate the performance of the embedding theory on the present study's dataset.

Training task-based embeddings has been verified as beneficial for tasks regarding short-text classification (e.g., sentiment analysis). In this regard, the authors in [26] trained sentiment-related word embeddings using supervised learning on a large dataset of Twitter sentiments, which were characterized through the emotions displayed in the tweets. Recently, studies have focused on suggestion mining as a sentence classification problem and experimented with various statistical classifiers and their features [27]. However, improving the classifiers' accuracy and datasets is a serious concern in achieving the desired complete results [28]. Thus, the existing algorithms need to be significantly improved to address this gap, because such text classification is an emerging and novel problem. Although existing studies have specified the feature extraction classifiers and their accuracies for suggestion mining, it is concluded that none have used the XGBoost classifier to identify suggestions from customer reviews.

Further, earlier studies have also not compared XGBoost with other classifiers to determine the better approach for identifying suggestions from reviews. Therefore, this study defines suggestion classification and presents a better suggestion mining extraction process to identify suggestions from social media data regarding online customer reviews of the hotel industry. The next section presents the proposed suggestion mining extraction process for the opinionated text of online customer reviews.

    3 Methodology

This study presents a novel approach to the suggestion mining extraction process, which aims to extract useful features to train the classifier for improved results. Fig. 1 illustrates the suggestion mining steps used in this study and Algorithm 1 demonstrates the steps in training a model to predict a review as either a suggestion or non-suggestion.

Figure 1: Suggestion mining extraction steps

    3.1 Preprocessing

First, this study preprocesses the text, which involves two sub-steps—data cleansing and data processing—to clean the data for further processing. Algorithm 2 describes the details of the preprocessing component.

    3.1.1 Data Cleansing

The primary reason for using the data cleansing approach is to clean unusable data [23]. Generally, online reviews consist of rich information, such as usernames, blank spaces, special characters, and URLs. Removing such unnecessary information can assist in extracting suggestions from the cleaned opinionated text [1]. Therefore, this study performs data cleansing by removing text that is unusable in suggestion mining. The following information is removed from the dataset, using regular expressions, to ensure a clean dataset ready for further processing; a short cleansing sketch is shown after the list.

Algorithm 1: Training a model
Input: Review dataset (reviews, labels), where label = 1 for suggestion and label = 0 for non-suggestion
Output: Trained model that predicts a review as either a suggestion or non-suggestion
for each review in dataset do
    tokenizedReviews[] ← preprocessing(review)
end for
for each tokenizedReview in dataset do
    // word features in form of unigram, bigram, trigram, or all
    wordFeatures[] ← featureExtraction(tokenizedReview)
end for
while accuracy is not improved do
    trainClassifier(wordFeatures)
end while

Algorithm 2: Data preprocessing
Input: Review dataset
Output: Tokenized arrays of words
for each review in dataset do
    dataCleansing(review)
    split review into array of words
    for each word in review do
        lowercase(word)
        stemming(word)
    end for
end for

    · usernames in the sentences (e.g.,@xyz)

    · empty fields

    · unnecessary numbers

    · special characters used by customers and users in their reviews

    · URLs
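The paper does not list its exact regular expressions, so the following minimal Python sketch is an assumption-based illustration of this cleansing step; the function name and the individual patterns are illustrative, not the authors' implementation.

import re

def clean_review(text):
    # Illustrative patterns covering the items listed above
    text = re.sub(r"@\w+", "", text)                   # usernames such as @xyz
    text = re.sub(r"https?://\S+|www\.\S+", "", text)  # URLs
    text = re.sub(r"\d+", "", text)                    # unnecessary numbers
    text = re.sub(r"[^A-Za-z\s]", "", text)            # special characters
    return re.sub(r"\s+", " ", text).strip()           # collapse blank spaces

print(clean_review("@xyz I would suggest a pool!! See https://example.com 123"))
# -> "I would suggest a pool See"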

    3.1.2 Data Processing

After data cleansing, the following data processing steps are undertaken. First, the tokenization process is applied, which helps decompose the whole sentence stream into portions of words or meaningful elements [23]. These elements are referred to as tokens; for example, words such as "suggest," "recommend," and "please" are usually used to express an opinion. Meaningful features lead to classification success. In this study, all words in the reviews were tokenized using a pretrained version of the Punkt Sentence Tokenizer from the Natural Language Toolkit (NLTK) library. Tab. 1 presents some of the tokens used in this study, which were useful for further data processing. Second, each token is transformed into lower case, to eliminate the repetition of words and terms and to place the entire text in a uniform structure. Third, the stemming process is used to unify the words across the entire document and to highlight the uniqueness of words through their stems; for example, "computational," "compute," and "computing" reduce to the stem "comput." During the feature extraction phase, this process helps to avoid duplications. This study used the Porter stemming algorithm, included in the Python NLTK library, to create stems for the tokens.

Table 1: Sample of preprocessed tokens from the two datasets (hotel reviews [HR] and Microsoft Windows App Studio reviews [MSWASR])
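As a concrete illustration of these processing steps, the following minimal sketch combines the NLTK Punkt tokenizer and Porter stemmer as described above; the sample sentence and printed output are illustrative assumptions, not taken from the study's datasets.

import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt")  # fetch the pretrained Punkt models used by word_tokenize

stemmer = PorterStemmer()

def preprocess(review):
    # Tokenize, lowercase, and stem each token, mirroring Algorithm 2
    return [stemmer.stem(token.lower()) for token in word_tokenize(review)]

print(preprocess("I would recommend computing the computational cost"))
# e.g. ['i', 'would', 'recommend', 'comput', 'the', 'comput', 'cost']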

    3.2 Feature Extraction

Almost all supervised machine learning algorithms classify data in the form of integer or floating-point vectors [29]. Feature extraction is the process of converting input data into this vector form for use in training classifiers. Machine learning classifiers do not work on raw text data; rather, they attempt to understand and extract data patterns for classification [27,30]. Feature extraction and selection play a primary role in classification accuracy. Using irrelevant features limits the classifiers' performance. The proposed suggestion mining extraction process experimented with four different feature sets.

Reviews are converted into vectors containing Boolean values (i.e., 0 or 1) that correspond to unigrams, bigrams, trigrams, and the uni/bi/trigram combination. The translated review is given to the classifiers to extract suggestions and non-suggestions. Tab. 2 depicts the vector size for each review using these feature extraction techniques. Algorithm 3 describes review vectorization against unigram features. In the unigram feature extraction process, all words are extracted from the preprocessed dataset and a bag of unique words is created. Next, a vector is created for each review by assigning 1 if the word exists in the review, and 0 otherwise. It is common for words such as "suggest," "recommend," and "please" to occur in suggestive text.

Table 2: Feature extraction techniques and size

Algorithm 4 describes the bigram feature model. In the bigram feature extraction process, all pairs of words are extracted from the dataset and a bag of bigrams is created. For each review, (1, 0) vectors are created, depending on whether the bigram exists. Bigram features are used to cater to suggestive phrases, such as "would like," "would love," and "instead of." Similarly, example trigram phrases are "should come with" and "would be nice." Last, the sets of unigrams, bigrams, and trigrams are combined and the vector is created. The more meaningful and relevant the input features, the better the classifier's learning and prediction accuracy. A short vectorization sketch is shown after Algorithm 4.

Algorithm 3: Unigram modelling algorithm
Input: Preprocessed reviews, bag of unigrams
Output: Unigram features vector
for each review in preprocessed reviews do
    for each word in bag of unigrams do
        if word exists in review then
            vector[review, word] = 1
        else
            vector[review, word] = 0
        end if
    end for
end for

Algorithm 4: Bigram modelling algorithm
Input: Preprocessed reviews, bag of bigrams
Output: Bigram features vector
for each review in preprocessed reviews do
    for each bigram in bag of bigrams do
        if bigram exists in review then
            vector[review, bigram] = 1
        else
            vector[review, bigram] = 0
        end if
    end for
end for
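In practice, the Boolean unigram, bigram, trigram, and combined vectors of Algorithms 3 and 4 can be produced with a standard vectorizer. The following scikit-learn sketch is a stand-in for the paper's own implementation; the two sample reviews are illustrative.

from sklearn.feature_extraction.text import CountVectorizer

reviews = ["would love a bigger room", "the staff should come with a smile"]

# binary=True yields the 0/1 vectors described in Algorithms 3 and 4
for name, ngram_range in [("unigram", (1, 1)), ("bigram", (2, 2)),
                          ("trigram", (3, 3)), ("uni/bi/trigram", (1, 3))]:
    vectorizer = CountVectorizer(binary=True, ngram_range=ngram_range)
    X = vectorizer.fit_transform(reviews)
    print(name, X.shape)  # vector size grows with the n-gram setting (cf. Tab. 2)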

Tab. 3 shows an example association of words using the unigram word feature. The "class label" column shows whether the review is a suggestion (i.e., 1) or non-suggestion (i.e., 0). Further, in this table, 1 refers to a found association, whereas 0 denotes that there is no association with the word in the given sentence.

    3.2.1 Classification

After the feature extraction process, the reviews are ready for classification. The proposed suggestion mining system used the XGBoost classifier and compared the results with the Naïve Bayes and Random Forest algorithms. The XGBoost classifier is a relatively new machine learning algorithm that is based on decision trees and boosting. It was used in this study because it is highly scalable and provides improved statistics and better results.
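A minimal sketch of this three-classifier comparison follows. The toy reviews, labels, split ratio, and default hyperparameters below are assumptions for illustration, not the paper's actual configuration.

from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from xgboost import XGBClassifier

# Toy reviews and labels (1 = suggestion, 0 = non-suggestion)
reviews = ["i would suggest adding a pool", "the room was clean",
           "please provide faster wifi", "we enjoyed our stay",
           "you should offer late checkout", "breakfast was tasty",
           "i recommend softer pillows", "great location overall"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Boolean uni/bi/trigram features, as in the feature extraction step
X = CountVectorizer(binary=True, ngram_range=(1, 3)).fit_transform(reviews)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

for name, clf in [("Naive Bayes", MultinomialNB()),
                  ("Random Forest", RandomForestClassifier()),
                  ("XGBoost", XGBClassifier())]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", clf.score(X_test, y_test))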

    3.2.2 Experiment

This study used two datasets from the hotel industry as well as the MSWASR dataset in relation to customer reviews (see Tab. 4). These reviews contain opinionated text with sentences that explicitly express suggestions and non-suggestions. To perform the experiments, a random data subset was created to assess the overall performance of the algorithms.

Table 3: Example association of words using the unigram word feature

Table 4: Datasets used in the experiment

Tab. 4 consists of five columns. First, "dataset" refers to the nature of the dataset. Second, "data source" describes the source from which the dataset was retrieved. Third, "N" refers to the total number of data collection instances from the data source. Fourth, "S" denotes the subset volume of the dataset that was randomly selected for the experiment. Last, "purpose" describes the tasks that need to be executed in this experiment.

This experiment used 42,000 online reviews from the hotel industry datasets and 9,000 reviews from the MSWASR dataset. All datasets comprised opinionated text (e.g., opinions, advice, suggestions, or tips), from which the experiment aimed to extract suggestions. In this experiment, the hotel industry Datafiniti dataset contained 34,000 data instances for training purposes, of which a subset of 10,500 instances was used for testing. Similarly, the hotel industry Code Source Competition dataset contained 8,500 data instances for training purposes, of which a subset of 2,200 instances was used for evaluation. Further, the MSWASR GitHub dataset contained 9,000 data instances for training purposes, of which a subset of 2,700 instances was used for testing.

As previously specified, the XGBoost classifier was used to classify suggestions. Initially, data cleansing was performed, followed by the tokenization process. The word2vec approach was used to generate word vectors, which continuously improve each time the classifier is executed. Therefore, training the classifier with a training set is important because it can assist in building the vocabulary for the test set. This study used an online hotel review dataset to train the classifier. Next, the hotel industry testing datasets and MSWASR's total dataset were used to determine the performance of three classifiers—XGBoost, Naïve Bayes, and Random Forest. To obtain the best performance, the semantic inclusion approach was utilized through a bag-of-words technique. Therefore, unique words were listed through a bag of words, which generated the vectors.
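The paper does not report its word2vec settings, so the following gensim sketch is only an assumption-based illustration of training such word vectors from tokenized reviews; vector_size, window, min_count, and epochs are illustrative values.

from gensim.models import Word2Vec

# Tokenized reviews from the preprocessing step (toy examples here)
sentences = [["would", "recommend", "this", "hotel"],
             ["please", "add", "a", "dark", "theme"],
             ["staff", "should", "offer", "late", "checkout"]]

# Hyperparameters below are assumed, not the paper's
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, epochs=20)

print(model.wv["recommend"][:5])           # first components of one word vector
print(model.wv.most_similar("recommend"))  # words used in similar contexts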

    4 Results

The performance measurement results were identified based on precision, recall, F1 score, and accuracy. Precision measures the proportion of positive identifications that are correct; for example, a precision score of 0.80 indicates that the classifier's predictions of suggestive reviews are correct 80% of the time. Next, recall generally refers to the completeness of the classifier on a given dataset. It describes the proportion of actual positives that are identified, which means how many suggestions are identified correctly. Further, the F1 score is the harmonic mean of precision and recall; its best value is 1 and its worst value is 0. Last, accuracy demonstrates the ratio of correctly predicted observations and explains the classifier's ability to predict accurately. Moreover, the average accuracy is calculated to cross-validate the results.

Further, positive and negative scores are categorized into true positive, false positive, true negative, and false negative. True positive means that the output class of a review is a suggestion and that it is correctly classed as a suggestion. Conversely, true negative means that the output class of a review is a non-suggestion and it is correctly classed as a non-suggestion. Next, false positive means that the output class of a review is a non-suggestion but it is falsely classed as a suggestion. Conversely, false negative means that the output class of a review is a suggestion but it is falsely classed as a non-suggestion. In addition, the results and analysis are reported based on the unigram, bigram, and trigram models. Moreover, comparative statistics are also reported for all three models.
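These scores can be computed directly from the predicted labels; the short scikit-learn sketch below uses toy labels for illustration (1 = suggestion, 0 = non-suggestion).

from sklearn.metrics import (accuracy_score, confusion_matrix,
                             f1_score, precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

# ravel() flattens the 2x2 confusion matrix into TN, FP, FN, TP
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TP, FP, TN, FN:", tp, fp, tn, fn)
print("precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("recall:   ", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1:       ", f1_score(y_true, y_pred))         # harmonic mean of the two
print("accuracy: ", accuracy_score(y_true, y_pred))   # (TP + TN) / all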

Tab. 5 reports statistics regarding the performance measurement of feature identification using the unigram model. Tab. 5 comprises two main columns, the "hotel industry dataset" and the "MSWASR dataset," which are further split into three sub-columns of classifiers—Naïve Bayes, Random Forest, and XGBoost. Suggestions are reported against each classifier in regard to F1, precision, recall, and accuracy.

Table 5: Performance measurement of features using the unigram model

The results for the unigram model reveal the lowest scores for Naïve Bayes for F1, precision, recall, and accuracy. The highest scores are observed for the Random Forest and XGBoost classifiers. However, the experimental results indicate that XGBoost scored higher than Random Forest.

Tab. 6 reports statistics regarding the performance measurement of feature identification using the bigram model. Tab. 6 comprises two main columns that represent both datasets, which are further split into sub-columns that represent the three classifiers. Again, suggestions are reported against each classifier in regard to F1, precision, recall, and accuracy.

Table 6: Performance measurement of features using the bigram model

The results indicate that all scores are higher for the XGBoost classifier. Random Forest outperformed Naïve Bayes in all categories except for precision.

Tab. 7 reports statistics regarding the performance measurement of feature identification using the trigram model. Tab. 7 comprises two main columns that represent both datasets, which are further split into sub-columns that represent the three classifiers. Suggestions are once again reported against each classifier in regard to F1, precision, recall, and accuracy.

Table 7: Performance measurement of features using the trigram model

The results demonstrate that Naïve Bayes has the lowest scores for F1, precision, recall, and accuracy. The highest scores are obtained by the Random Forest and XGBoost classifiers. However, the results indicate that XGBoost scored higher than Random Forest.

In addition, a combined performance evaluation is presented. Tab. 8 reports the comparative statistics of the unigram, bigram, and trigram models. Tab. 8 comprises two main columns that represent both datasets, which are further split into sub-columns that represent the three classifiers. Suggestions are reported against each classifier in regard to F1, precision, recall, and accuracy.

Table 8: Performance measurement of features using the uni/bi/trigram combination model

When the unigram, bigram, and trigram models are executed together, the results vary for Naïve Bayes and Random Forest. Specifically, Random Forest had the lowest scores for F1, precision, recall, and accuracy. Interestingly, Naïve Bayes performed better in this scenario than in the previous scenarios, in which the models were not executed simultaneously. However, XGBoost once again displayed the highest results.

    5 Discussion

Based on the experiments conducted in this study, it can be observed that the XGBoost classifier outperformed the other two classifiers. The findings of the experiments are shown in Figs. 2-5, in which the results for the F1, precision, recall, and accuracy of the three classifiers are reported.

Figure 2: (a) Unigram model scores for the hotel dataset. (b) Unigram model scores for the MSWASR dataset

Figure 3: (a) Bigram model scores for the hotel dataset. (b) Bigram model scores for the MSWASR dataset

Figure 4: (a) Trigram model scores for the hotel dataset. (b) Trigram model scores for the MSWASR dataset

Figure 5: (a) Uni/bi/trigram model scores for the hotel dataset. (b) Uni/bi/trigram model scores for the MSWASR dataset

Further, an accuracy comparison among the Naïve Bayes, Random Forest, and XGBoost classifiers was conducted for the hotel industry and MSWASR datasets. A detailed illustration of the accuracy comparison of the three classifiers is shown in Fig. 6.

Figure 6: Accuracy comparison of Naïve Bayes, Random Forest, and XGBoost classifiers for the hotel industry and MSWASR datasets

As demonstrated in Fig. 6a, Random Forest performed better than Naïve Bayes in terms of accuracy; however, its results varied among the unigram, bigram, trigram, and uni/bi/trigram combination models (0.64, 0.68, 0.68, and 0.64, respectively). Interestingly, the accuracy results for XGBoost were better than those for Random Forest in all models (0.84, 0.81, 0.81, and 0.84, respectively). As shown in Fig. 6b, similar results were found for the MSWASR dataset, in which Random Forest outperformed Naïve Bayes in terms of accuracy, but again had varied results among the unigram, bigram, trigram, and uni/bi/trigram combination (0.82, 0.81, 0.78, and 0.77, respectively). Once again, the accuracy results for XGBoost were better than those for Random Forest in all models (0.87, 0.89, 0.87, and 0.82, respectively). Based on these findings, the XGBoost classifier performed better than the others on the given online review datasets. The Random Forest method is less stable because its accuracy values were more dispersed than those of the other classifiers.

Further, average accuracies were also analyzed on the given data for the three classifiers on unigram, bigram, trigram, and uni/bi/trigram modelling (see Figs. 7a and 7b). Fig. 7a demonstrates that the lowest average accuracy value (0.63) was found in the bigram model of Naïve Bayes and the highest value (0.82) was found in the uni/bi/trigram combination for XGBoost. Likewise, Fig. 7b shows that the lowest average accuracy value (0.77) was found in the trigram model of Naïve Bayes and the highest value (0.87) was found in the uni/bi/trigram combination for XGBoost. Although Random Forest achieved better average accuracy results than Naïve Bayes, the difference is not significant. Conversely, the average accuracy scores for XGBoost were stable and showed less dispersion on the given data across the unigram, bigram, trigram, and uni/bi/trigram combination modelling.

The authors attempted to conduct this study in such a way that the results could be generalized. This was made possible by selecting datasets from two different domains (the hotel and software industries), on which the various classifiers were executed. The authors note that the results would be more generalizable and reliable if they were statistically evaluated through non-parametric tests. Because of the lack of statistical proof, the scope of the analysis is limited.

Figure 7: (a) Average accuracy comparison of Naïve Bayes, Random Forest, and XGBoost classifiers for the hotel industry dataset. (b) Average accuracy comparison of Naïve Bayes, Random Forest, and XGBoost classifiers for the MSWASR dataset

    6 Conclusion and Future Work

The availability of opinionated text in social media data is increasing, and such text can assist in decision-making if extracted and analyzed carefully. The extracted suggestions, tips, and advice must be carefully analyzed to improve the business and subsequently benefit customers. Recent studies have explored suggestions from online reviews through different classifiers, such as Random Forest and Naïve Bayes. The results of these studies are not mature enough and require further improvements. Therefore, this study proposed a suggestion mining process to improve the results further.

To this end, the authors used various techniques, such as word embedding, bag of words, and word2vec. In addition, the XGBoost classifier was trained on the datasets. The results revealed that the XGBoost classifier outperformed the others, with an accuracy above 0.8. Moreover, the results also indicated that suggestion keywords and phrases are the predominant features for suggestion extraction. This study contributes a methodological approach to suggestion mining through the XGBoost classifier that can be replicated on other datasets. It contributes to the state of knowledge and practice by comparing feature extraction classifiers. In addition, it presents XGBoost as a better suggestion mining extraction process for social media data about online customer reviews of the hotel industry.

Nevertheless, the present study has some limitations. First, although this study used more than 8,500 online hotel reviews, it is suggested that further results can be obtained by using a larger dataset. Second, the test dataset was manually analyzed for its suggestion class, which could introduce bias. However, this limitation was mitigated by involving other researchers in this task. Future research is needed to improve the proposed suggestion mining extraction process using the XGBoost classifier on larger review datasets. These datasets could be related to products, shopping sites, or services. Another promising research area could be extending the results of the XGBoost classifier by providing training beyond a single domain to test its versatility.

Acknowledgement: We deeply acknowledge Taif University for supporting this study through Taif University Researchers Supporting Project Number (TURSP-2020/115), Taif University, Taif, Saudi Arabia.

Funding Statement: This research is funded by Taif University, TURSP-2020/115.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
