
    Arabic Fake News Detection Using Deep Learning

    Computers, Materials & Continua, 2022

    Khaled M. Fouad, Sahar F. Sabbeh and Walaa Medhat

    1Faculty of Computers & Artificial Intelligence, Benha University, Egypt

    2University of Jeddah, College of Computer Science and Engineering, Jeddah, 21493, Saudi Arabia

    3Information Technology and Computer Science, Nile University, Egypt

    Abstract: Nowadays, an unprecedented number of users interact through social media platforms and generate a massive amount of content due to the explosion of online communication. However, because user-generated content is unregulated, it may contain offensive content such as fake news, insults, and harassment phrases. The identification of fake news and rumors and of their dissemination on social media has become a critical requirement, as they have adverse effects on users, businesses, enterprises, and even political regimes and governments. The state of the art has tackled news in the English language and used feature-based algorithms. This paper proposes a model architecture to detect fake news in the Arabic language using only textual features. Both machine learning and deep learning algorithms were used. The deep learning models are based on convolutional neural networks (CNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), CNN+LSTM, and CNN+BiLSTM. Three datasets were used in the experiments, each containing the textual content of Arabic news articles; one of them is real-life data. The results indicate that the BiLSTM model outperforms the other models in terms of accuracy when both the simple data split and the recursive training modes are used in the training process.

    Keywords: Fake news detection; deep learning; machine learning; natural language processing

    1 Introduction

    The rise of social networks has considerably changed the way users around the world communicate. Social networks and user-generated content (UGC) platforms allow users to generate, share, and exchange their thoughts and opinions via posts, tweets, and comments. Thus, social media platforms (e.g., Twitter, Facebook) are considered powerful tools through which news and information can be rapidly transmitted and propagated, and they have become an essential source of information and news for individuals on the WWW [1]. However, social media and UGC platforms are a double-edged sword. On the one hand, they allow users to share their experiences, which enriches the web content. On the other hand, the absence of content supervision may lead to the spread of false information, intentionally or unintentionally [2], threatening the reliability of information and news on such platforms.

    False information can be classified as intention-based or knowledge-based [3]. Intention-based false information can be further classified into misinformation and disinformation. Misinformation is an unintentional share of false information based on the user's beliefs, thoughts, and point of view, whereas the intentional spread of false information to deceive, mislead, and harm users is called disinformation. Fake news is considered disinformation, as it includes news articles that are confirmed to be false/deceptive and are published intentionally to mislead people. Another categorization of false information [4] was based on the severity of its impact on users. Based on this study, false information is classified as a) fake news, b) biased/inaccurate news, and c) misleading/ambiguous news. Fake news has the highest impact and uses tools such as content fabrication, propaganda, and conspiracy theories [5]. Biased content is considered less dangerous and mainly uses hoaxes and fallacies. The last group is misleading news, which has a minor impact on users; misleading content usually comes in the form of rumors, clickbait, and satire/sarcasm news. Disinformation results in biased, deceptive, and decontextualized information, based upon which users make emotional decisions, react impulsively, or stop actions in progress. Disinformation thus negatively impacts users' experience and decisions, for example in online shopping and stock markets [6].

    The bulk of research on fake news detection is based on machine learning techniques [7]. Those techniques are feature-based, as they require identifying and selecting features that can help determine the fakeness of a piece of information/text. The selected features are then fed into the chosen machine learning model for classification. In various languages, deep learning models [8] have recently proven efficient in text classification tasks and fake news detection [9]. They have the advantage that they can automatically adjust their internal parameters until they identify, on their own, the best features to differentiate between labels. However, as far as we know from the literature, no research uses deep learning models [10] for fake news detection in the Arabic language.

    Fake news can have harmful consequences on social and political life. Detecting fake news is very challenging, particularly in languages other than English. The Arabic language is one of the most spoken languages around the globe, and there are many sources of news in Arabic, including official news websites. These sources are considered the primary source of Arabic datasets. Our goal is to detect rumors and measure the effect of fake news detection in the Middle East region. We have evaluated many algorithms to achieve the best results.

    The work's main objective is to explore and evaluate the performance of different deep learning models in improving fake news detection for the Arabic language, and to compare deep learning performance with traditional machine learning techniques. Eight machine learning algorithms with cross-fold validation are evaluated, including probabilistic and vector space algorithms. We have also tested five combinations of deep learning algorithms, including CNN and LSTM.

    The paper is organized as follows. Section 2 reviews the literature in some detail. The proposed model architecture is presented in Section 3. Section 4 presents the experiments and the results with discussion. The paper is concluded in Section 5.

    2 Literature Review

    There are many methods used for fake news detection and rumor detection. These methods include machine learning and deep learning algorithms, as described in the following subsections.

    2.1 Fake News Detection Methods

    Fake news detection has been investigated from different perspectives, each utilizing different features for information classification. These features include linguistic, visual, user, post, and network-based features [5,11]. The linguistic-based methods try to find irregular styles within text based on a set of features such as the number of words, word length, word frequencies, unique word count, psycho-linguistic features, and syntactic features (i.e., TF-IDF, question marks, exclamation marks, hash-tags, etc.) to discriminate between real and fake news [11]. Visual-based systems attempt to identify and extract visual elements from fabricated images and videos [12] by using deep learning approaches. The user-based methods analyze user-level features to identify likely fake accounts, since fake news can probably be created and shared by fake accounts or automatic bots created for this purpose. User-based features are used to evaluate source/author credibility; those features include, among others, the number of tweets, tweet repetition, number of followers, account age, account verifiability, user photo, demographics, user sentiment, topical relevancy, and physical proximity [13,14]. The post-based methods analyze users' feedback, opinions, and reactions as indicators of fake news; these features include comments, opinions, sentiment, user rating, tagging, likes, and emotional reactions [14]. The network-based methods build on the fact that social networks enable the rapid spread of fake news; these methods try to construct and analyze networks from different perspectives. Friendship networks, for instance, explore the user-follower relationship, whereas stance networks represent post-to-post similarities. Another type is the co-occurrence network, which evaluates user-topic relevancy [14].

    Many methods treat rumor or fake news detection as a classification problem. These methods aim to assign labels, such as rumor or non-rumor, true or false, or fake or genuine, to a specific piece of text. Researchers have utilized machine-learning methods, achieving promising results. Other researchers utilized methods based on data mining techniques; they depend on extrinsic resources, such as knowledge bases, to forecast the class of social media content or to examine its truthfulness. Many rumor detection methods have concentrated on utilizing content features for classification, while a few have depended on social context. Otherwise, rumor detection and verification methods predominantly utilize a combination of content and context features; this combination is used because the social context of rumors may significantly enhance detection performance [14]. Some of the method categories that may be considered in analyses of rumor detection work are shown in Tab. 1. These methods can be categorized into classification methods and other methods.

    Table 1: Different attributes that participate in the start and growth of rumors on social media

    2.2 Machine Learning-Based Fake News Detection

    Most fake news detection works formulate the problem as a binary classification problem. The literature falls under the umbrella of three main classes [15]: feature-based machine learning approaches, networking approaches, and deep learning approaches. Feature-based machine learning approaches aim at learning features from data using feature-engineering techniques before classification. Textual, visual, and user features are extracted and fed into a classifier, which is then evaluated to identify the best performance given those sets of features. The most widely used supervised learning techniques include logistic regression (LR) [16], ensemble learning techniques such as random forest (RF) and adaptive boosting (Adaboost) [16–18], decision trees [18], artificial neural networks [18], support vector machines (SVM) [16,18], naïve Bayesian (NB) [16,18,19], k-nearest neighbor (KNN) [16–18], and linear discriminant analysis (LDA) [16,20,21]. However, feature-based machine learning models suffer from requiring cumbersome feature engineering ahead of classification. Networking approaches evaluate user/author credibility by extracting features such as the number of followers, comment/reply content, and timestamps, and by using graph-based algorithms to analyze structural social networks [22,23].

    2.3 Deep Learning for Fake News Detection

    Deep learning (DL) [9,15] approaches use deep learning algorithms to automatically learn features from data during training, without manual feature engineering. Deep learning models have shown a substantial performance improvement and eliminated the need for the feature extraction process: they overcome the burden of the feature engineering step and use the training data to identify discriminatory features for fake news detection on their own. Deep learning models have shown remarkable performance in general text classification tasks [24–27] and have been widely used for fake news detection [28]. Due to their efficiency for text classification tasks, deep learning models have been applied for fake news detection from the NLP perspective using only text. For instance, in [29], deep neural networks were used to predict real/fake news using only the news's textual content. Different machine and deep learning algorithms were applied in this work, and the results showed the superiority of gated recurrent units (GRUs) over the other tested methods.

    In [30], text content was preprocessed and input to recurrent neural networks (RNN): GRU, vanilla RNN, and long short-term memory (LSTM) networks, for classifying fake news. Their results showed that GRU performed best, followed by LSTM and finally the vanilla RNN. Text-only classification was performed in [31] using tanh-RNNs, an LSTM with a single hidden layer, and a GRU with one hidden layer, enhanced with an extra GRU hidden layer. In [32], bidirectional LSTMs together with convolutional neural networks (CNN) were used for classification; the bi-directional LSTMs considered contextual information in both directions, forward and backward in the text. In [33], the feasibility of applying a deep learning architecture of CNN with LSTM cells for text-only classification was examined. RNNs, LSTMs, and GRUs were used in [34] for text-based classification. A deep CNN with multiple hidden layers was used in [35].

    Other attempts used text together with one or more additional features. In [36], Arjun et al. utilized ensemble-based CNN and BiLSTM models for multi-label classification of fake news based on textual content and features related to the speaker's behavior. In their model, fake news is assigned to one of six classes (pants-fire, false, barely-true, half-true, mostly-true, and true). In [37], a neural ensemble architecture (RNN, GRU, and CNN) used content-based and author-based features to detect rumors on Twitter. Textual and propagation-based features were used in [38,39] for classification: the former used RNN, CNN, and recursive neural networks (RvNN), where CNN and RNN used only the text and the sentiment polarity of tweets' responses and the RvNN model was used on the text and the propagation; the latter study constructed RvNNs based on top-down and bottom-up tree structures. These models were compared to traditional state-of-the-art models such as decision trees (tree-based ranking DTR, decision-tree classifier DTC), RFC, different variations of SVM, and GRU-RNN. Their results showed that TD-RvNN gave the best performance.

    Post-based features, together with user-based features, were used for fake news prediction in [40]. The authors applied RNNs, LSTMs, and GRUs and found that LSTMs outperformed the other two models. In [41], textual, user-based, content-based, and signal features were used for the prediction task using a hierarchical recurrent convolutional neural network. Their experiments included tree-based ranking (DTR), a decision-tree classifier (DTC), SVM, GRUs, and BiLSTMs. Tab. 2 summarizes the surveyed works in the literature.

    Table 2: Summary of the related works


    2.4 Arabic Rumor Detection

    The Arabic language has a complex structure that imposes challenges, in addition to the lack of datasets. Thus, studies on rumor detection in Arabic social media are few and require more attention and effort to achieve optimal results. The studies that focus on Arabic rumor detection are summarized in Tab. 3.

    Table 3: Research concerning Arabic rumor detection

    3 Proposed Architecture

    The proposed methodology investigates the most famous state-of-the-art deep learning algorithms for Arabic text classification; deep learning techniques have the advantage of automatically capturing semantic features from textual data [47]. Five different deep neural network configurations, namely CNN, LSTM, BiLSTM, CNN+LSTM, and CNN+BiLSTM, have been used for classification. The proposed methodology is shown in Fig. 1.

    3.1 Datasets

    The first dataset consists of news and tweets that were manually collected and annotated as rumor/non-rumor. This real-life dataset was collected from Arabic news portals such as Youm7, Akhbarelyom, and Ahram. The fake news items had been publicly announced as fake, to make people aware that circulating news is not necessarily true; this effort is the responsibility of the Information and Decision Support Center of the Egyptian Cabinet.

    Figure 1: System architecture

    The second dataset is a benchmark dataset published in [48]. The two datasets are then merged into one large combined dataset to test deep learning performance on a larger dataset. The details of each dataset are shown in Tab. 4.

    Table 4: The description of the datasets

    Tab. 5 shows samples of the real-life dataset collected from Arabic news websites.

    3.2 Preprocessing

    Preprocessing the text before it is fed into the classifier is very important and impacts the overall performance of the classification model. In this step, the text is cleaned using filters to remove punctuation and all non-Unicode characters. Afterward, stop words are removed, sentences are tokenized, and tokens are stemmed. The resulting sentences are then encoded as numerical sequences, and the number of unique tokens and the maximum sentence length are calculated. This maximum length is used to pad all sentences to the same size, equal to the maximum length. Labels are then encoded using one-hot encoding.
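
    As a concrete illustration of this step, the following is a minimal sketch of the preprocessing pipeline, assuming NLTK's Arabic stop-word list and ISRI stemmer together with the Keras text utilities; the paper does not name the exact cleaning tools, so these are illustrative choices.

    import re
    from nltk.corpus import stopwords
    from nltk.stem.isri import ISRIStemmer
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.utils import to_categorical

    arabic_stops = set(stopwords.words('arabic'))
    stemmer = ISRIStemmer()

    def clean(text):
        # keep Arabic letters only; drop punctuation and other non-Arabic characters
        text = re.sub(r'[^\u0600-\u06FF\s]', ' ', text)
        tokens = [stemmer.stem(t) for t in text.split() if t not in arabic_stops]
        return ' '.join(tokens)

    def encode(texts, labels):
        cleaned = [clean(t) for t in texts]
        tokenizer = Tokenizer()
        tokenizer.fit_on_texts(cleaned)
        seqs = tokenizer.texts_to_sequences(cleaned)
        max_len = max(len(s) for s in seqs)      # maximum sentence length
        x = pad_sequences(seqs, maxlen=max_len)  # pad all sentences to the same size
        y = to_categorical(labels)               # one-hot encode the labels
        return x, y, tokenizer, max_len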

    Table 5: Samples of the real dataset

    3.3 Word Embedding

    Recently, word embeddings have proven to outperform traditional text representation techniques. A word embedding represents each word as a real-valued vector in a dimensional space while preserving the semantic relationships between words, as the vectors of words with similar meanings are placed close to each other. Word embeddings can be learned from the text while fitting a deep neural model on text data. For our work, the TensorFlow Keras embedding layer was used. It takes the numerically encoded text as input and is implemented as the first hidden layer of the deep neural network, where the word embeddings are learned while training the network. The embedding layer stores a lookup table that maps the words, represented by numeric indexes, to their dense vector representations.
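
    A minimal sketch of this layer is shown below; the embedding dimensionality is an assumed hyperparameter, as the paper does not report the vector size, and tokenizer and max_len come from the preprocessing sketch above.

    from tensorflow.keras.layers import Embedding

    vocab_size = len(tokenizer.word_index) + 1   # number of unique tokens (+1 for the padding index)
    embedding_dim = 100                          # assumed embedding dimensionality
    embedding_layer = Embedding(input_dim=vocab_size,
                                output_dim=embedding_dim,
                                input_length=max_len)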

    3.4 Proposed Models

    Our system explores the usage of three deep neural networks, namely CNN, LSTM, and BiLSTM, and two combinations, CNN+LSTM and CNN+BiLSTM, as illustrated in Fig. 2.

    The CNN model consists of one convolutional layer, which learns to extract features from sequences represented using a word embedding and derives meaningful and useful sub-structures for the overall prediction task. It is implemented with 64 filters (parallel fields for processing words) with a rectified linear unit ('relu') activation function. The second layer is a pooling layer that reduces the output of the convolutional layer by half. The output from the CNN part of the model is flattened into one long vector representing the 'features' extracted by the CNN. Finally, two dense layers are used to scale, rotate, and transform the vector by matrix-vector multiplication.
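
    A minimal sketch of this CNN variant follows, reusing vocab_size, embedding_dim, and max_len from the sketches above; the kernel size is an assumed value, as the paper reports only the number of filters.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Conv1D, MaxPooling1D, Flatten, Dense

    cnn_model = Sequential([
        Embedding(vocab_size, embedding_dim, input_length=max_len),
        Conv1D(filters=64, kernel_size=3, activation='relu'),  # 64 parallel filters
        MaxPooling1D(pool_size=2),       # halves the convolutional output
        Flatten(),                       # one long feature vector
        Dense(64, activation='relu'),
        Dense(2, activation='softmax'),  # two classes: fake / real
    ])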

    Figure 2: Deep learning model

    In the LSTM model, the output of the word embedding layer is fed into one LSTM layer with 128 memory units. The output of the LSTM is fed into a dense layer of size 64, which is used to increase the modeling capacity on top of the LSTM's output. The output activation function is the natural choice for binary classification; since this is a binary classification problem, binary cross-entropy is used as the loss function.
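
    A minimal sketch of this LSTM variant, under the same assumptions as above; a single sigmoid output unit is assumed here to match the binary cross-entropy loss, as the paper does not name the output activation explicitly.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    lstm_model = Sequential([
        Embedding(vocab_size, embedding_dim, input_length=max_len),
        LSTM(128),                       # 128 memory units, last state only
        Dense(64, activation='relu'),    # dense layer of size 64
        Dense(1, activation='sigmoid'),  # assumed output for binary classification
    ])
    lstm_model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])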

    The third model combines CNN with LSTM, where two convolutional layers are added with max-pooling and dropout layers. The convolutional layers act as feature extractors for the LSTM on the input data. The first CNN layer uses the output of the word embedding layer; afterward, the pooling layer reduces the features extracted by the CNN layer, and a dropout layer is added to help prevent the network from overfitting. An LSTM layer with a hidden size of 128 follows; we use one LSTM layer with a state output of size 128. Note that, since the return-sequence option is False by default, only one output is produced, i.e., that of the last state of the LSTM. The output of the LSTM is connected to a dense layer of size 64 to produce the final class label by calculating the probability of the LSTM output. The softmax activation function is used to generate the final classification.
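
    A minimal sketch of the CNN+LSTM combination, with assumed kernel sizes and filter counts for the two convolutional blocks.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import (Embedding, Conv1D, MaxPooling1D,
                                         Dropout, LSTM, Dense)

    cnn_lstm_model = Sequential([
        Embedding(vocab_size, embedding_dim, input_length=max_len),
        Conv1D(64, 3, activation='relu'),
        MaxPooling1D(2),
        Dropout(0.5),
        Conv1D(64, 3, activation='relu'),
        MaxPooling1D(2),
        Dropout(0.5),
        LSTM(128),                      # return_sequences defaults to False: last state only
        Dense(64, activation='relu'),
        Dense(2, activation='softmax'),
    ])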

    The BiLSTM model uses bidirectional recurrent cells for learning. The output from the word embedding layer is fed into a bi-directional LSTM. Afterward, dense layers are used to find the most suitable class based on probability.
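
    The corresponding sketch for the BiLSTM variant simply wraps the recurrent layer in a Bidirectional wrapper.

    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

    bilstm_model = Sequential([
        Embedding(vocab_size, embedding_dim, input_length=max_len),
        Bidirectional(LSTM(128)),        # reads the sequence forward and backward
        Dense(64, activation='relu'),
        Dense(2, activation='softmax'),
    ])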

    The BiLSTM-CNN model architecture uses a combination of convolutional and recurrent neurons for learning. The output of the embedding layer is fed into two convolutional layers that learn features for the BiLSTM layer. The features extracted by the CNN layers are max-pooled and concatenated. A fully connected dense layer predicts the probability of each class label.
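
    A minimal functional-API sketch of this model, assuming two parallel convolutional branches with assumed kernel sizes whose max-pooled outputs are concatenated before the BiLSTM.

    from tensorflow.keras.models import Model
    from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                         Concatenate, Bidirectional, LSTM, Dense)

    inp = Input(shape=(max_len,))
    emb = Embedding(vocab_size, embedding_dim)(inp)
    branch_a = MaxPooling1D(2)(Conv1D(64, 3, activation='relu', padding='same')(emb))
    branch_b = MaxPooling1D(2)(Conv1D(64, 5, activation='relu', padding='same')(emb))
    features = Concatenate()([branch_a, branch_b])   # concatenate the max-pooled CNN features
    hidden = Bidirectional(LSTM(128))(features)
    out = Dense(2, activation='softmax')(hidden)     # fully connected class probabilities
    cnn_bilstm_model = Model(inp, out)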

    For training the deep learning models, the Adam optimizer is used with a 0.01 learning rate, a weight decay of 0.0005, and a batch size of 128. A dropout value of 0.5 is used to avoid overfitting and speed up learning. The output layer uses a softmax activation function.
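
    The stated training configuration could be wired up as follows; weight decay is applied through AdamW here, which assumes a TensorFlow version that ships tf.keras.optimizers.AdamW, and the epoch count is an assumed value.

    import tensorflow as tf

    optimizer = tf.keras.optimizers.AdamW(learning_rate=0.01, weight_decay=0.0005)
    bilstm_model.compile(optimizer=optimizer,
                         loss='categorical_crossentropy',  # one-hot labels, softmax output
                         metrics=['accuracy', tf.keras.metrics.AUC(name='auc')])
    history = bilstm_model.fit(x, y, batch_size=128, epochs=10, validation_split=0.2)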

    The experiments used the Python programming language with the TensorFlow and Keras libraries for the machine learning and deep learning models. A Windows 10-based machine with a Core i7 processor and 16 GB of RAM was used.

    4 Results and Discussion

    Two experiments have been performed on three different datasets. The first experiment utilizes the proposed deep learning algorithms. The second experiment utilizes machine-learning algorithms using n-gram feature extraction and compares their results with the deep learning algorithms.

    The experiments included two phases. First, the most famous machine learning algorithms were applied for classification with different n-grams; the machine learning techniques were evaluated using accuracy, F1-measure, and AUC (area under the curve). The second phase applied the deep learning models for classification. The deep learning algorithms were first trained using a simple data split with 80% training and 20% testing; the same algorithms were then trained using 5-fold cross-validation [49].
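
    A minimal sketch of the two training modes is given below; scikit-learn utilities are assumed, and build_model is a hypothetical factory standing in for any of the Keras constructors sketched in Section 3.4.

    import numpy as np
    from sklearn.model_selection import train_test_split, StratifiedKFold

    # simple split: 80% training / 20% testing
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=42)

    # 5-fold cross-validation over the same data
    labels = np.argmax(y, axis=1)   # back to integer labels for stratification
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for fold, (tr, te) in enumerate(skf.split(x, labels)):
        model = build_model()       # hypothetical factory, e.g. the BiLSTM sketched above
        model.fit(x[tr], y[tr], batch_size=128, epochs=10, verbose=0)
        print(f'fold {fold}:', model.evaluate(x[te], y[te], verbose=0))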

    4.1 Machine Learning Experiments

    The experiments are conducted using several machine learning algorithms, including LinearSVC, SVC, MultinomialNB, BernoulliNB, stochastic gradient descent (SGD), decision tree, random forest, and k-neighbors. Each algorithm is evaluated using accuracy, F-score, and area under the curve (AUC). The results of the first dataset experiment are shown in Tab. 6. The table shows that the SGD classifier gives the best results, while SVC, decision tree, and random forest give lower performance than the other algorithms.
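
    For reference, this machine-learning phase can be reproduced with a pipeline along these lines; TF-IDF n-gram features are an assumed representation choice, texts/labels stand for the cleaned article texts and their integer labels, and only a subset of the classifiers listed above is shown.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.linear_model import SGDClassifier
    from sklearn.naive_bayes import MultinomialNB, BernoulliNB
    from sklearn.model_selection import cross_validate

    classifiers = {
        'LinearSVC': LinearSVC(),
        'SGD': SGDClassifier(),
        'MultinomialNB': MultinomialNB(),
        'BernoulliNB': BernoulliNB(),
    }
    for name, clf in classifiers.items():
        pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
        scores = cross_validate(pipe, texts, labels,
                                scoring=['accuracy', 'f1_macro', 'roc_auc'], cv=5)
        print(name, {k: v.mean() for k, v in scores.items() if k.startswith('test_')})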

    Table 6: The results of the first dataset experiment

    The results of the second dataset experiment are shown in Tab. 7. Each algorithm is evaluated using accuracy, F-score, and area under the curve (AUC). Tab. 7 shows that the LinearSVC classifier gives the best results, while SVC, decision tree, and random forest give lower performance than the other algorithms.

    Table 7: The results of the second dataset experiment

    The results of the third dataset experiment are shown in Tab. 8. Each algorithm is evaluated using accuracy, F-score, and area under the curve (AUC). The table shows that the LinearSVC classifier gives the best F-score and AUC, while MultinomialNB gives the best accuracy. SVC, decision tree, and random forest give lower performance than the other algorithms.

    Table 8: The results of the third dataset experiment

    It can be concluded that SVC, decision tree, and random forest are not suitable for this problem. The graphs depicted in Figs. 3a–3h show the performance of each ML algorithm applied to each dataset. Fig. 3a shows that BernoulliNB gives its best performance on the first dataset. Fig. 3b shows that MultinomialNB gives its lowest performance on the third dataset. Fig. 3c shows that k-neighbors gives its best performance on the first dataset. Fig. 3d shows that random forest gives its best performance on the first dataset. Fig. 3e shows that the decision tree performs best on the first dataset. Fig. 3f shows that the SGD classifier gives an almost equivalent performance across all datasets. Fig. 3g shows that SVC gives its best performance on the first dataset. Fig. 3h shows that LinearSVC gives almost equivalent performance across all datasets. The first dataset is the manually collected and annotated data, i.e., the real-life data; therefore, the machine learning algorithms give excellent performance on real-life data.

    Figure 3: The performance of each ML algorithm with each dataset

    4.2 Deep Learning Algorithms

    Several deep learning models are evaluated: CNN, LSTM, CNN+LSTM, BiLSTM, and CNN+BiLSTM. The evaluation metrics of accuracy, loss, and AUC are used.

    Tab. 9 shows the performance of each algorithm applied to the first dataset. It shows that BiLSTM gives the lowest loss, the best accuracy, and the best AUC. Thus, BiLSTM gives good performance with a reasonable loss compared to the other algorithms, which are close to each other in performance.

    Table 9: The performance of each algorithm applied to the first dataset

    Tab. 10 shows the performance of each algorithm on the second dataset. The results on the second dataset also show that BiLSTM gives the lowest loss, the best accuracy, and the best AUC. Additionally, CNN gives a poor performance with a significant loss and the lowest accuracy. The other algorithms give almost similar performances.

    Table 10: The performance of each algorithm applied to the second dataset

    Tab. 11 shows the performance of each algorithm on the third dataset. CNN gives the lowest loss, while BiLSTM gives the best accuracy and the best AUC. LSTM and CNN+BiLSTM incur a significant loss, while their accuracies and AUC are almost similar to the other algorithms.

    Table 11: The performance of each algorithm applied to the third dataset

    The graphs depicted in Fig. 4 show the performance of each deep learning algorithm with each dataset. The BiLSTM method is more suitable for the first and second datasets because it incurs a significant loss on the third dataset, as shown in Fig. 4a. Fig. 4b shows that CNN is not suitable for this problem, as it gives low performance for all datasets and a considerable loss on the third dataset. Fig. 4c shows that CNN+BiLSTM is more suitable for the first and second datasets, as it incurs a significant loss on the third dataset. Fig. 4d shows that CNN+LSTM is more suitable for the first and second datasets, as it gives a significant loss on the third dataset. Fig. 4e shows that LSTM provides acceptable performance for the first and second datasets but encounters a significant loss on the third dataset. Therefore, the deep learning algorithms give better performance when combined with real-life data.

    Figure 4: The performance of each deep learning algorithm with each dataset

    4.3 Cross-Validation

    To verify the experiments performed with the deep learning algorithms, five-fold cross-validation was carried out on the three datasets. The results for each dataset are shown in Tabs. 12–14. The results show that BiLSTM and BiLSTM+CNN give the highest accuracy and the lowest loss for all three datasets. On the other hand, CNN achieved the worst performance among all experimented models.

    Table 12: The five-fold cross-validation results for the first dataset

    Table 13: The five-fold cross-validation results for the second dataset

    Table 14: The five-fold cross-validation results for the third dataset

    5 Conclusions and Future Works

    This paper investigates machine learning and deep learning models for content-based Arabic fake news classification. A series of experiments were conducted to evaluate the task-specific deep learning models. Three datasets were used in the experiments to assess the most well-known models in the literature. Our findings indicate that both machine learning and deep learning approaches can identify fake news using text-based linguistic features. In terms of machine learning algorithms, no single model performed optimally across all datasets. On the other hand, our results show that the BiLSTM model achieves the highest accuracy among all models assessed across all datasets.

    As part of our future work, we intend to thoroughly examine the existing architectures by combining various layers, and to examine the effect of various pre-trained word embeddings on the performance of the deep learning models.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
