
    An Optimized Deep Learning Model for Emotion Classification in Tweets

Computers, Materials & Continua, 2022, Issue 3

Chinu Singla, Fahd N. Al-Wesabi, Yash Singh Pathania, Badria Sulaiman Alfurhood, Anwer Mustafa Hilal, Mohammed Rizwanullah, Manar Ahmed Hamza and Mohammad Mahzari

1Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology, Patiala, India

2Department of Computer Science, King Khalid University, Muhayel Aseer, Kingdom of Saudi Arabia

3Faculty of Computer and IT, Sana'a University, Sana'a, Yemen

4Department of Computer Science, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Saudi Arabia

5Department of Computer and Self Development, Preparatory Year Deanship, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia

6Department of English, College of Science & Humanities, Prince Sattam bin Abdulaziz University, Alkharj, Saudi Arabia

Abstract: The task of automatically analyzing sentiment in tweets is more relevant now than ever, given the spectrum of emotions expressed online by everyone from national leaders to the average person, and analyzing this data can be critical for any organization. Sentiments are expressed with different intensities and on different topics, which can provide great insight into how an event affects society. Sentiment analysis on Twitter mitigates the various issues of analyzing tweets in terms of the views expressed, and several approaches have already been proposed for it; resources used for analyzing tweet emotions are briefly presented in the literature survey section. In this paper, a hybrid of two models, LSTM-CNN, is proposed, where LSTM is a Long Short-Term Memory network and CNN is a Convolutional Neural Network. A further contribution of our work is a comparison of various deep learning and machine learning models, categorized by the techniques they use. The main drawback of LSTM is that it is time-consuming, whereas CNN does not capture context information accurately; our proposed hybrid technique therefore improves the precision rate and helps achieve better results. The first step of the technique is to preprocess the data, removing stop words and unnecessary content to improve efficiency in terms of both time and accuracy; the resulting model shows optimal results when compared with existing approaches.

    Keywords: Meta level features; lexical mistakes; sentiment analysis; count vector; natural language processing; deep learning; machine learning; naive bayes

    1 Introduction

Sentiments can be expressed in various ways: verbally, in writing, or over the internet. Natural Language Processing tools in Python make it possible to strip away lexical noise and focus on the actual context, in order to successfully predict the sentiment being expressed.

Twitter is the world's largest micro-blogging site and has become ubiquitous across the world. It allows short posts, primarily text up to 140 characters long, referred to as "tweets" [1]. Among all social media platforms, Twitter in particular has widespread adoption and a rapid communication volume: it has 313 million users active within a given month and 100 million users actively tweeting. It has a wide range of applications in commerce, public health [2,3], detection of opinions about political tendencies [4,5] and stock market monitoring [6]. Emotion analysis is the process of identifying the attitude towards a target or topic. The attitude can be a polarity (positive or negative) or an emotional state such as joy, anger, or sadness [7]. At this scale, manual classification of posts and opinion mining becomes an infeasible option, and the very subjectivity of the data keeps this an open problem in the field.

Natural Language Processing is well established for traditional text genres such as news data and long summaries of books and papers. Twitter, however, poses an entirely different challenge: tweets are short and carry "hashtags #" (a type of tagging for Twitter messages). The language used is also highly informal, with creative spelling, slang, new words, URLs and genre-specific abbreviations such as RT for "Re-Tweet." This paper aims to overcome these hurdles in the pre-processing phase in order to boost the results of the different models.

For this study, a proper benchmark of the different models is set up, and the dataset used is from the SemEval datasets [8], an ongoing semantic evaluation of computational semantics that has been widely used for benchmarking sentiment analysis. In this paper, CNN and LSTM are used together because each individually has shortcomings that can be mitigated by combining them in a hybrid model: the CNN is used to create a pooled feature layer that is then passed to the LSTM model further down the pipeline. The main contributions of our hybrid LSTM-CNN approach can be summarized as follows:

— Most work in this field aims to extract the essence of a particular trend from large volumes of data, using machine learning models to avoid overhead issues; our model follows the same goal despite being a neural network.

— Most existing work focuses on word embeddings built from important words extracted from tweets, which fails to capture phenomena such as sarcasm and irony; this can be resolved by further dividing the word embedding into regions and using a convolution layer to extract additional features.

— Most research papers do not lay emphasis on emoticons, which are essential on social media platforms; sentiment analysis that is biased against them is therefore not accurate.

— This framework thus provides a more robust, scalable and functional approach that can be tailored to special needs, such as understanding public sentiment about various situations more accurately, and can thereby help in decision making and social engineering.

Another hurdle this paper addresses is the extensive use of emojis, which has drawn growing research attention. Emojis carry important information, and previous work has shown that pre-training a deep neural network on an emoji prediction task helps in predicting emotion and sarcasm with greater accuracy [9]. In our work an "opinion lexicon" has been used in a lexicon-based approach [10] that treats punctuation and emoticons as key flags for the determination. Previous literature does not account for this complexity and variety of emojis, so applying emoji embeddings to older models could boost their accuracy and give better overall results. Our work compares various deep learning and machine learning models, categorized by the techniques they use. Furthermore, this paper proposes a hybrid LSTM-CNN model to achieve greater accuracy and efficiency by reducing latency.

The remainder of the article is structured as follows. Section 2 presents the related work already done in this field. The proposed model is explained in Section 3. Implementation and simulation are covered in Section 4. Comparison and discussion are provided in Section 5, results analysis in Section 6, and finally, we conclude the article in Section 7.

    2 Related Work

Various machine learning and deep learning methods have been introduced for sentiment analysis of tweets. State-of-the-art systems [11,12] integrate different models and apply feature vectors including semantic features, syntactic features, and word embeddings to represent tweets. Researchers have explored many ways to classify Reddit comments as "depressed" or "not depressed" [13]; one such paper uses a BERT-based model [14] and a neural network with a word-embedding (CNN) model for classification, and their results showed that the CNN without embedding performed better than the BERT-based model. Lexical corrections and intensive preprocessing combined with an SVM-based system [15] have been used to formulate a model with 5% greater accuracy than the traditional sentiment analysis method, although that work clearly notes in its conclusion that the result is subject to large changes depending on the dataset used.

A recent study [16] focused primarily on Twitter uses an unsupervised approach for graphical representation of opinions in real time, applied over a large-scale dataset covering the years 2014-2016. SONDY [17] is an open-source Java-based social dynamics analyzer whose main focus is user influence and event detection. Lambda architecture, a software architecture, is also used alongside machine learning to analyze large data streams such as Twitter's. Lexical resources exist as well: Wilson et al. [18] labeled a list of English words into positive and negative categories, and the ANEW application for Twitter [19] uses the AFINN lexicon, a list of words rated from -5 to 5 according to how positive or negative they are. Several research papers have extensively used the Stanford NLP library, which produces great results for abstraction and removal of data. Different deep learning models have been utilized to develop end-to-end systems in many tasks, including text classification, speech recognition and image classification; results show that these types of systems automatically extract high-level features from raw data [20,21].
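For illustration, AFINN-style scoring can be reproduced with the public afinn Python package; the following is a minimal sketch (the package choice and example sentences are our assumptions, not tooling used in the cited works):

```python
# Minimal sketch of AFINN-style lexicon scoring using the public
# `afinn` package (pip install afinn); not part of the cited works' code.
from afinn import Afinn

afinn = Afinn()  # loads the default English AFINN word list

# Each word carries an integer valence in [-5, 5]; a tweet's score is
# the sum of the valences of the words it contains.
print(afinn.score("This movie is great"))      # positive total
print(afinn.score("What a horrible failure"))  # negative total
```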

2.1 Bidirectional Encoder Representations from Transformers (BERT)

This model is drawn from [22]. BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of NLP tasks [23]. Fig. 1 depicts the architectural layout of the BERT model.

    Figure 1: Architecture of BERT model

BERT is pre-trained on a large corpus of unlabeled text, including the whole of Wikipedia and the Book Corpus, which has made it a well-known NLP model for tasks including Twitter sentiment analysis. The BERT model implemented here uses the BERT tokenizer for text classification in TensorFlow 2.0, with a batch size of 30, a dropout rate of 0.2 and 10 epochs; the model obtained an accuracy of 87%.
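As a rough illustration of such a setup, the sketch below fine-tunes BERT for binary classification in TensorFlow 2.x with the Hugging Face transformers library. The library choice and the placeholder variables (train_texts, train_labels) are our assumptions; the batch size, dropout and epoch count follow the values quoted above.

```python
# Sketch of fine-tuning BERT for binary tweet classification in
# TensorFlow 2.x with Hugging Face `transformers` (assumed tooling;
# the paper specifies only batch size 30, dropout 0.2 and 10 epochs).
import tensorflow as tf
from transformers import BertTokenizer, TFBertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = TFBertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,
    hidden_dropout_prob=0.2,  # dropout rate quoted in the text
)

# train_texts / train_labels are placeholders for the preprocessed split.
enc = tokenizer(train_texts, padding=True, truncation=True,
                max_length=64, return_tensors="tf")

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(dict(enc), tf.constant(train_labels), batch_size=30, epochs=10)
```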

2.2 Convolutional Neural Networks (CNN)

The architecture of this convolutional network is shown in Fig. 2 [24]. The input to the model is treated as a sequence of words, from which a sentence matrix is built in which each column represents the word embedding at the corresponding position.

    Figure 2: Architecture of CNN

Because a CNN is heavily dependent on computational power, it takes the largest amount of time and resources. In the CNN model, the layers closer to the input of the ConvNet help classify basic, rudimentary features, such as positive lexical indicators like "good" and "bad"; the next layers perform a more detailed evaluation, and the top layers finally combine these into complex features and make a prediction, as can be observed in Fig. 2.
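A minimal Keras sketch of such a text CNN is shown below; the layer sizes are illustrative assumptions, not the exact configuration of [24]:

```python
# Minimal Keras sketch of a sentence-matrix CNN: an embedding layer
# feeds a 1-D convolution whose filters act as n-gram feature detectors
# (vocabulary size and layer widths are illustrative).
from tensorflow.keras import layers, models

vocab_size, embed_dim, max_len = 20000, 100, 50  # assumed values
model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim, input_length=max_len),
    layers.Conv1D(128, 5, activation="relu"),   # low-level lexical cues
    layers.GlobalMaxPooling1D(),                # strongest feature per filter
    layers.Dense(64, activation="relu"),        # higher-level combinations
    layers.Dense(1, activation="sigmoid"),      # positive/negative prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```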

2.3 Long Short-Term Memory (LSTM)

LSTM is an artificial Recurrent Neural Network (RNN) architecture used in deep learning; it is well suited to sentiment analysis because of its ability to grasp long-term dependencies in discrete text sequences.

The first step is to embed the words into vectors, as shown in Fig. 3. In step 2 the RNN receives this sequence of vectors as input and considers the order of the vectors while generating the sentiment. A basic LSTM model usually comprises a single hidden LSTM layer followed by a feed-forward output layer. This means fixed-length reviews are encoded as integers, converted to embedded vectors, pushed through the LSTM in a recursive fashion, and the last prediction is taken as the output sentiment.

    Figure 3: LSTM architecture

It was observed that when words were encoded to integers randomly the accuracy was about 60-65%, but when the encoding was based on word frequency the accuracy jumped to 83-87%. The word-to-vector transformation was carried out using the word2vec library.
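The two encoding steps compared above can be sketched as follows, assuming the Keras Tokenizer for frequency-ranked integer encoding and gensim for word2vec (both are assumptions consistent with, but not stated by, the text):

```python
# Sketch of the two encoding steps discussed above (assumed tooling).
from tensorflow.keras.preprocessing.text import Tokenizer
from gensim.models import Word2Vec

tweets = ["great day today", "horrible service never again"]  # toy data

# The Keras Tokenizer assigns integer ids by descending word frequency,
# the encoding that produced the higher (83-87%) accuracy.
tok = Tokenizer(num_words=20000)
tok.fit_on_texts(tweets)
sequences = tok.texts_to_sequences(tweets)

# word2vec turns each word into a dense vector (gensim >= 4 API).
w2v = Word2Vec([t.split() for t in tweets], vector_size=100,
               window=5, min_count=1)
vector = w2v.wv["great"]  # 100-dimensional embedding
```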

    3 System Model

After analyzing the different research papers described in Section 2, it is clear that deep-learning-based text classification using NLP is a popular topic nowadays. Identifying positive and negative emotions on social media is a very popular and challenging research topic, and since many people share their emotions through social media, huge amounts of data can easily be collected for research. In this paper, we propose models to classify emotions from the tweets of Twitter users. For this purpose, data preprocessing is an essential step after collecting the dataset.

    The main contribution of the proposed model can be summarized as follows:

— Comparing various deep learning models and traditional machine learning models. This has been done so that this paper can serve as a basis for further research into fine-tuning the various methods into a more custom fit for a particular task.

— Preprocessing the data with various lexical corrections, ranging from general cleaning of tweets to fixing slang to a certain degree. Twitter, being a micro-blogging site, is filled with noise and various non-standard ways of writing text; our robust preprocessing system aims to correct this and provide clean, noise-free data. This ensures better results even with models that are highly sensitive to noise, such as SVM.

— Proposing a hybrid model using LSTM and CNN to improve both the accuracy of sentiment analysis and the time taken over huge datasets. The model has been heavily customized with an initial layer and a final layer; the initial layer comprises pre-trained word embeddings from Google News word2vec, with a word-vector vocabulary of more than 3 million words.

— In our LSTM + CNN technique, the CNN creates the initial classification layers, which are then fed to the LSTM model; this greatly reduces time and increases accuracy. It can serve as a great way to quickly analyze Twitter sentiment without taking up too much time, and it has proven to be better than many standard models such as BERT.

This research proposes both baseline models (traditional machine learning models) and deep learning models (stacked LSTM, stacked LSTM with 1D convolution, CNN with pre-trained word embeddings, and a BERT-based model). For the baseline models, TF-IDF (term frequency-inverse document frequency) and count-vector features have been used separately as input to Multinomial Naive Bayes, a Support Vector Machine (SVM) and Logistic Regression. TF-IDF is a weighting scheme that assigns each term in a document a weight based on its term frequency (TF) and inverse document frequency (IDF); the terms with higher weight scores are considered more important [25]. A count vector works on term frequency, i.e., counting the occurrences of tokens and building a sparse documents × tokens matrix [26]. For all deep learning models, the epoch count is kept at 3 because of the large size of the dataset, but batch size, dropout probability, activation and optimization vary from model to model. The main target of the proposed method is to obtain the best model for sentiment analysis, one that can then easily be fine-tuned to match particular topics or genres, as shown in Fig. 4. This is achieved through complete testing of different machine learning and deep learning models on the Sentiment140 dataset, preprocessed for noise and the various adulterations found on micro-blogging sites (slang, re-tweets, etc.). We also propose a hybrid model that combines an LSTM with a convolutional neural network to develop a flexible yet fast model that can be adapted to any situation. This is done by adding a one-dimensional CNN and a max-pooling layer after the embedding layer, which then feeds the consolidated features to the LSTM; this is explained in detail in Section 4.4.4.
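A minimal scikit-learn sketch of this baseline grid (both vectorizers crossed with the three classifiers) might look as follows. The variables X_train, y_train, X_test and y_test are placeholders for the preprocessed split, and LinearSVC stands in for the SVM variant here purely for illustration:

```python
# Sketch of the baseline grid: TF-IDF and count-vector features feeding
# three classical models (hyperparameters illustrative).
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

vectorizers = [TfidfVectorizer(), CountVectorizer()]
classifiers = [MultinomialNB(), LinearSVC(), LogisticRegression(max_iter=1000)]

for vec in vectorizers:
    for clf in classifiers:
        pipe = make_pipeline(vec, clf)
        pipe.fit(X_train, y_train)  # refits vectorizer and classifier each time
        print(type(vec).__name__, type(clf).__name__,
              pipe.score(X_test, y_test))
```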

    4 Implementation

    4.1 Conformance

An Intel Core i7-7700 CPU was used for this research work. The computer has 16 GB of RAM and runs a Windows operating system, paired with a GeForce GTX 1050 Ti GPU. The language used to obtain the results is Python, and the IDE used to run the code is Visual Studio Code. All the models mentioned in Section 4 have been tested on the same hardware to keep the results uniform and compensate for the technological advancements of the past few years.

    Figure 4: Overview of proposed work

    4.2 Standard Dataset

The training and testing data have been extracted from Sentiment140 [27], which comprises 1,048,576 rows and 6 columns. For the purposes of this study, only two columns have been extracted:

    — The polarity of the Tweet

    — The text of the tweet

For the first column, the data is binary: 0 denotes a negative tweet while 4 denotes a positive-emotion tweet, as can be seen in Fig. 5 along with the percentages.

This dataset was extracted using Twitter's own application programming interface, and thus consists of real-world tweets rather than prefabricated, hand-chosen tweets that could skew the results in any way. The dominant topics from which the tweets are drawn are shown in Tab. 1.

The dataset has been used repeatedly for sentiment analysis research because it contains opinions from a whole spectrum of topics, leaving no particular bias in the dataset.

The dataset consists of 6 columns, but only two, the text and the sentiment, were required for the scope of this research and have therefore been used.
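Loading these two columns might look as follows in pandas; the file name and column names are the ones commonly used for the Sentiment140 CSV and are assumptions, since the paper does not list them:

```python
# Sketch of loading the two required columns from Sentiment140
# (file and column names assumed from the commonly distributed CSV).
import pandas as pd

cols = ["polarity", "id", "date", "flag", "user", "text"]
df = pd.read_csv("training.1600000.processed.noemoticon.csv",
                 encoding="latin-1", names=cols)

df = df[["polarity", "text"]]                      # keep only the two columns used
df["polarity"] = df["polarity"].map({0: 0, 4: 1})  # 0 = negative, 4 = positive
```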

    Figure 5: Classification of dataset

    Table 1: Primary topics in Dataset

    4.3 Preprocessing

The next step after collecting the dataset is to preprocess it, removing all the garbage values and unnecessary data that are present because this is a real-life dataset; this filters out some of the values from the huge dataset. Since Twitter is a micro-blogging website where people often use hashtags, URLs, slang, acronyms and other lexical complexities, it becomes very important to regulate these, which was done by broadly implementing the following steps:

    — Tokenizing words.

    — Removing hashtags and URLs.

    — Changing uppercase to lowercase.

— Removing reserved words (RT, FAV).

— Removing unnecessary and repeated punctuation.

    — Removing stop words.

Most of this was achieved with the tweet-preprocessor 0.6.0 package [28], a library widely used to clean tweets of the known noise present in them in the form of the above-mentioned occurrences.
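A minimal usage sketch of tweet-preprocessor follows; the option flags shown are from the package's public API, and the example tweet is invented:

```python
# Sketch of cleaning a tweet with tweet-preprocessor, which is
# imported under the name `preprocessor`.
import preprocessor as p

# Strip URLs, hashtags, @-mentions and reserved words (RT, FAV).
p.set_options(p.OPT.URL, p.OPT.HASHTAG, p.OPT.MENTION, p.OPT.RESERVED)

raw = "RT @user Loving this!! #happy http://t.co/xyz"
print(p.clean(raw))  # -> "Loving this!!"
```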

The next preprocessing step was lexical normalization of the text [29], changing words with repeated syllables into more standard forms, as depicted in Fig. 6; the latter form is far more easily recognized by different lexicons than the former, which makes this normalization necessary.
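This normalization step can be sketched with a simple regular expression that collapses runs of three or more identical characters down to two; the exact rule is our assumption, as the paper does not spell it out:

```python
# Sketch of lexical normalization: collapse characters repeated three or
# more times down to two, so "haaaappy" becomes "haappy", a form that
# standard lexicons are far more likely to recognize.
import re

def normalize_elongation(word: str) -> str:
    return re.sub(r"(.)\1{2,}", r"\1\1", word)

print(normalize_elongation("soooo goooood"))  # -> "soo good"
```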

    Figure 6: Example showing lexical correction

    4.4 Generic Models

Three generic models have been used as a baseline against which to compare the deep learning models mentioned above; these models serve as a yardstick to analyze how much better each deep learning model performs. The generic models used were as follows.

4.4.1 Naïve Bayes

Bayes' rule, shown in Eq. (1), gives the probability of an event A (positive) given that an event B (negative) has already occurred, based on prior knowledge [30]:

P(A|B) = P(B|A) P(A) / P(B)    (1)

Applying Laplace smoothing to Naïve Bayes gives Eq. (2) for the probability of a feature given the positive class:

P(w|positive) = (count(w) + a) / (N + aK)    (2)

where a: smoothing parameter, K: number of features, N: number of reviews with target positive.

The product of these probability ratios can produce numerically unwieldy values; this is resolved by taking logarithms on either side, comparing the log prior against the log likelihood. This was carried out on 1/4 of the allocated test split of the dataset to get a baseline accuracy. The model was created with Python scikit-learn's MultinomialNB.
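A minimal scikit-learn sketch of this baseline follows; alpha corresponds to the smoothing parameter a in Eq. (2), MultinomialNB works in log space internally (avoiding the numerical issue noted above), and X_train/y_train/X_test/y_test are placeholders for the split:

```python
# Sketch of the Naive Bayes baseline with Laplace smoothing (alpha = a).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

nb = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
nb.fit(X_train, y_train)          # placeholder training split
print(nb.score(X_test, y_test))   # baseline accuracy on the test split
```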

4.4.2 Support Vector Machine (SVM)

The SVM implemented is a linear one-vs-rest Support Vector Machine classifier [31] that takes into account each unique word present in the sentence as well as all consecutive words. To make this representation useful, each document was converted into a vector of the same length as our vocabulary, i.e., the list of all words observed in the training data, with each word corresponding to one entry of the vector: if a particular word is present, that entry is 1, otherwise 0. This was done using the count vectorizer present in sklearn.

The model was built using Python scikit-learn's SGDClassifier with the attributes max_iter = 12 and alpha = 0.001.
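A minimal sketch with the quoted parameters follows; SGDClassifier's default hinge loss yields the linear SVM behavior described above, and the binary count vectorizer produces the 1/0 presence vectors (training data placeholders assumed):

```python
# Sketch of the SVM baseline: SGDClassifier with its default hinge loss
# behaves as a linear SVM; parameters are the ones quoted in the text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

svm = make_pipeline(CountVectorizer(binary=True),  # 1/0 presence vectors
                    SGDClassifier(max_iter=12, alpha=0.001))
svm.fit(X_train, y_train)  # placeholder training split
```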

    4.4.3 Regression Models

As a further baseline, the logistic regression model from Python's scikit-learn has been used; the sigmoid function used in logistic regression is shown in Eq. (3):

h(x) = 1 / (1 + e^(-θ·x))    (3)

where θ: weight parameter, and x_i: an input parameter.
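Written out directly, Eq. (3) is a one-liner; the sketch below uses NumPy with toy values for θ and x:

```python
# The sigmoid of Eq. (3), computed directly (toy values, illustrative).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.array([0.5, -1.2])   # weight parameters
x = np.array([1.0, 0.3])        # input features
print(sigmoid(theta @ x))       # probability of the positive class
```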

    4.4.4 LSTM and CNN

A stacked residual LSTM model inspired by [24] is also used, since adding more layers to the LSTM, up to an 8-layer neural network, has shown better performance than lexicon-based or conventional NN-based techniques [24]. Hence a word2seq CNN + LSTM model is implemented with 1 embedding layer, 2 convolution layers with max pooling, 1 LSTM layer and 2 fully connected layers. Ten epochs are used, and the total runtime was 80 minutes.

Using model checkpoints and callbacks, this model saves its weights when the validation accuracy is at its maximum; the pooling layers use the standard pool length of 2 to halve the feature-map size (Fig. 7). The CNN initially benefits from pre-trained word embeddings, obtained from Google News word2vec, which includes word vectors for a vocabulary of 3 million words and phrases trained over 100 billion words. The training and validation accuracy and the validation loss with these pre-trained word embeddings for the CNN are shown in Fig. 8. The CNN with word embeddings is used to extract complex features from sentences, while the LSTM is used as a classifier.

Figure 7: Overview of the LSTM-CNN hybrid

The hybrid structure shown in Fig. 7 applies a CNN over regions r, denoting subsets of the dataset, to find structure in the two embedding layers; these are then fed into a pooling layer, which increases the speed of the LSTM model. This is done after embedding the word vectors, which was seen to have a severe impact on the CNN training time, as shown in Fig. 8.
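A minimal Keras sketch consistent with this description is given below: one embedding layer, two Conv1D + max-pooling blocks, one LSTM and two dense layers, with a checkpoint on validation accuracy. The filter counts, the reduced vocabulary size and the placeholder inputs (X_seq, y) are illustrative assumptions, not the paper's exact values:

```python
# Sketch of the hybrid: embedding -> 2x (Conv1D + MaxPooling1D) -> LSTM
# -> 2 dense layers, with weights saved at the best validation accuracy.
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint

vocab_size, embed_dim, max_len = 20000, 300, 50  # reduced for illustration;
# the paper uses the 3M-word Google News word2vec vocabulary here.

model = models.Sequential([
    layers.Embedding(vocab_size, embed_dim, input_length=max_len),
    layers.Conv1D(64, 3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),   # pool length 2 halves the feature map
    layers.Conv1D(64, 3, activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(100),                   # classifier over the pooled features
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

ckpt = ModelCheckpoint("best.weights.h5", monitor="val_accuracy",
                       save_best_only=True, save_weights_only=True)
# X_seq: padded integer sequences, y: 0/1 labels (placeholders).
model.fit(X_seq, y, validation_split=0.2, epochs=10, callbacks=[ckpt])
```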

Figure 8: Training and validation accuracy and validation loss for CNN with word embeddings

    5 Comparison and Discussion

In this section, a comparison of the models used is presented. It can be clearly observed from Figs. 9 and 10 that, among all the rudimentary baseline models, the SVM with TF-IDF features outperforms the rest.

    Figure 9: Resulting benchmarks (TF-IDF)

    Figure 10: Resulting benchmarks (count vectors)

While the SVM otherwise remains consistent, it takes a huge hit during the initial stages, when the pre-processed data is used without the lexical corrections and other Twitter-specific cleaning strategies mentioned in Section 4.3; this can primarily be attributed to the SVM model's sensitivity to noise. Once these corrections are applied, the SVM does not show much variation between count-vector and TF-IDF features in comparison to the other models.

Fig. 11 compares the accuracy of the different models when count vectors are used versus TF-IDF features.

    Figure 11: Comparison of different models

The various benchmarking parameters for the different deep learning models are shown in Fig. 12. The results are judged on accuracy, precision, sensitivity and F1-score. All the deep learning models performed very well across the board; the high sensitivity across the models falls in line with the fact that they were trained on a dataset with more positive values than negative ones. Runtime is not taken into consideration when comparing the models, but the LSTM-CNN performed much faster than all the other deep learning models thanks to max pooling. The simulation results show that the deep learning models outperform traditional machine learning approaches by a significant margin.

    Figure 12: Resulting benchmarks (Deep Learning Models)

    6 Result Analysis

The results obtained after implementing all the mentioned methods give much greater insight into how they work and which are better for sentiment analysis. Since our dataset contains only positive and negative emotions, the theoretical chance level is around 50%, yet all the models, especially the deep learning models, performed far better than this theoretical expectation.

Almost all of the deep learning models performed better than the traditional machine learning models, signifying how important it is for a model to be adaptive when it comes to Natural Language Processing. Among the baseline models, logistic regression performed best with 83% accuracy, followed by Naïve Bayes and the SVM; on further inspection this can be attributed to the fact that the inconsistent lingo on social media websites creates a lot of noise, to which the SVM is very sensitive [32].

Among the deep learning models, the best performer is the LSTM-CNN combination, with an F1-score of 0.877 and a precision of 90%.

Twitter users use a lot of emoticons to express how they feel, and these have been taken into account by the models. In the future, the model could also be scaled to accommodate bilingual dialects, which are not yet well enough documented or worked up for us to scale the project to at this moment.

    7 Conclusion

This paper serves as a baseline comparison of which models work better for modern-day language processing problems that require modern-day solutions. It is clear that, with proper word preprocessing and embedding, deep learning models perform very well across the board, achieving greater than 80% accuracy, and they can then be specialized for a particular trend to get even better results.

No particular stop words or specialized texts are used in this research, both of which can increase accuracy when building a model with a particular event in focus. The models are used in a very rudimentary form with little to no specialization, which could be added but, in our case, would have skewed the results.

The project demonstrates a working, functional LSTM + CNN hybrid model that can be scaled up quickly and performs better overall than existing solutions. It could be incorporated into previous research [33] that favors machine learning models over neural networks because of the overhead the latter bring.

Lexical correction and preprocessing are seen to play a major role in determining sentiment in non-uniform streams of textual data. This had a heavy impact on the SVM models, making noise correction essential.

The hybrid solution provided in this research paper thus enables fast deployment and extensive scaling: the CNN front end leverages the speed of a convolutional network, while the LSTM at the end of the pipeline provides high accuracy.

Funding Statement: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number (RGP.2/23/42), www.kku.edu.sa. This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-Track Path of Research Funding Program.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
