
Fake News Classification: Past, Current, and Future

Computers, Materials & Continua, 2023, Issue 11

Muhammad Usman Ghani Khan, Abid Mehmood, Mourad Elhadef and Shehzad Ashraf Chaudhry3,★

1Department of Computer Science, University of Engineering and Technology, Lahore, 54890, Pakistan

2Department of Computer Science & Information Technology, Abu Dhabi University, Abu Dhabi, 59911, United Arab Emirates

3Department of Software Engineering, Faculty of Engineering and Architecture, Nisantasi University, Istanbul, Turkey

ABSTRACT The proliferation of misleading content such as fake news and fake reviews on news blogs, online publications, and e-commerce apps has been aided by the availability of the web, cell phones, and social media. Individuals can quickly fabricate comments and news on social media. The most difficult challenge is determining whether news is real or fake. Accordingly, finding automated techniques to recognize fake news online is imperative. With an emphasis on false news, this study presents the evolution of artificial intelligence techniques for detecting spurious social media content. It covers past, current, and possible future methods for fake news classification. Two different publicly available datasets containing political news are utilized for performing experiments. Sixteen supervised learning algorithms are used, and their results show that conventional Machine Learning (ML) algorithms that were used in the past perform better on shorter text classification. In contrast, the currently used Recurrent Neural Network (RNN) and transformer-based algorithms perform better on longer text. Additionally, a brief comparison of all these techniques is provided, and it is concluded that transformers have the potential to revolutionize Natural Language Processing (NLP) methods in the near future.

KEYWORDS Supervised learning algorithms; fake news classification; online disinformation; transformers; recurrent neural network (RNN)

    1 Introduction

Recent internet advancements have had a considerable impact on social communication and interaction. Social media platforms are used more and more frequently to obtain information, and people express themselves through a variety of social media sites. Speedy access to information, low cost, and quick information transmission are just a few of social media's many advantages. These advantages have led many people to choose social media over conventional news sources, such as television or newspapers, as their preferred method of news consumption; as a result, social media is quickly replacing traditional news outlets. A further reason social networks are favored for news access is that they allow easy commenting on and sharing of material with other social media users. However, the nature of social media can be exploited to accomplish different goals [1]. The large volume of internet news data necessitates the development of automated analysis technologies.

Moreover, during the recent coronavirus lockdown, the spread of fake news on social networking sites increased sharply, amplifying a terrible epidemic worldwide. Fig. 1 shows some of the fake news stories circulated on social media during the lockdown (see, e.g., https://timesofindia.indiatimes.com/times-fact-check/news/fake-alert-no-russia-does-not-have-lions-roaming-the-streets-to-keep-people-indoors/articleshow/74768135.cms): emissions from Chinese crematoriums could be visible from space; 500 lions were released into the streets of Russia to keep people indoors; doctors are being mugged in London; the condition can be cured with snake oil or vitamins; inhaling a hairdryer's heated air, or gargling with warmed garlic water, can cure the infection.

Figure 1: Examples of fake news spread on social media

False information harms people, society, corporations, and governments. The spread of fake news, particularly low-quality news, negatively affects personal and societal beliefs. Spammers or malicious users may distribute false and misleading information that can be very harmful. As a result, identifying fake news has become an essential area of study. Manually identifying and removing fake news or fraudulent reviews from social media takes substantial effort, money, and time. According to certain prior studies, people perform worse than automated systems when it comes to distinguishing real news from fake news [2].

ML technologies have focused on automatically distinguishing between fake and authentic news for the last few years. Following the 2016 presidential election in the United States, several important social media platforms, including Twitter, Facebook, and Google, focused on developing ML and NLP-based methods to identify and prevent fake news. The extraordinary progress of supervised ML models cleared the path for developing expert systems to detect fake news in English, Portuguese, and other languages [2]. Different ML models can produce different results on the same classification problem, which is a serious issue [3]. Their performance can be affected by corpus features such as the size of the corpus and the distribution of instances into classes [3]. The performance of K-Nearest Neighbor (KNN), for example, is determined by the value of k. Similarly, when handling optimization problems, the Support Vector Machine (SVM) can experience numerical instability [4].

Various ML algorithms have been utilized in the past to classify fake news. These algorithms are compared against state-of-the-art techniques such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), which are currently in use. Transformer models are also experimented with, as they are expected to be employed in future fake news classification tasks. This approach enables the evaluation of past techniques, an understanding of current trends in fake news classification, and a glimpse into potential future developments in the field. A two-phase detection algorithm is suggested in this study to detect fake and bogus news on social networking sites. The proposed model is a hybrid of ML algorithms and NLP techniques. In the first phase, text mining methods are applied to the internet news dataset; text analysis tools and procedures are designed to extract structured information from raw news data. In the second phase, supervised learning algorithms (BayesNet, Logistic Model Tree (LMT), Stochastic Gradient Descent (SGD), decision stump, linear SVM, kernel SVM, Logistic Regression, Decision Tree, and Gaussian Discriminant Analysis) are applied to the two publicly available Random and Buzzfeed Political News datasets [5]. Individual studies employing only a few of these algorithms have been published in the literature.

Furthermore, they are primarily implemented on a single dataset. In contrast to previous papers, the challenge of detecting fake and fraudulent news is treated here as a classification problem, and a wide range of supervised learning algorithms is applied to two publicly available datasets comprising titles and bodies. The contributions of this research paper are:

• We compared the performance of sixteen supervised learning algorithms.

• We provide a pipeline for the utilization of transformers on two different datasets.

• We analyzed and presented the past, current, and future trends of NLP techniques.

The following is a breakdown of the paper's structure. The related work is briefly described in Section 2. Details of some of the ML and Deep Learning (DL) algorithms are described in Section 3. Section 4 contains the details of the methodology and how text preprocessing techniques are applied before utilizing artificial intelligence methods. Section 5 covers the datasets, the experimental evaluations produced by the sixteen supervised artificial intelligence algorithms on the two datasets, and the results and discussion. In Section 6, conclusions and future research directions are examined.

    2 Related Works

In recent years, detecting rumors and fake news, evaluating web content trustworthiness, and developing fact-checking systems have all been hot subjects. Data preprocessing can be utilized for the estimation and recovery of various text forms. This includes preprocessing the text using NLP, for example, stemming or lemmatization, normalization, and tokenization, followed by Term Frequency-Inverse Document Frequency (TF-IDF) [6] for turning words into vectors. Honnibal et al. [7] utilized spaCy for changing words into vectors. Similarly, Mikolov et al. [8] and Pennington et al. [9] used word2vec and GloVe for word embeddings.

Even though the fake news identification problem is relatively new, it has drawn much attention. Different researchers have proposed different methodologies to distinguish fake news in many data types. Reference [10] divided the difficulty of detecting fake news into three categories, i.e., serious fabrication, humorous fake news, and large-scale deception. In [11], Conroy et al. utilized a hybrid technique and proposed a novel detector for fake news. Their proposed methodology [11] incorporates different linguistic cueing and network analysis techniques. In addition, they used the vector space model to verify news [12]. In the methodology of [13], TF-IDF and SVM were used to categorize news into different groups. In [14], humorous cues were employed to detect false or deceptive news; the authors proposed an SVM-based model and used 360 news articles to evaluate it. To verify stories, reference [15] mined different perspectives on social media and then tested their model against real data sets. Reference [16] employed ML classifiers such as Decision Tree, K-Nearest Neighbor, Naive Bayes, SVM, and Logistic Regression to classify fake news from online media sources. An ensemble classification model is suggested in [17] for identifying fake news that outperformed the state-of-the-art in terms of accuracy. The recommended approach identified vital characteristics from the datasets; the retrieved characteristics were then categorized using an ensemble of three well-known ML models: Decision Tree, Random Forest, and Extra Tree Classifier.

Two classification models were presented in [18] to address the problem of identifying fake news: one is a Boolean crowd-sourcing approach, while the other is a Logistic Regression model. Aside from this, the preprocessing methods for the problem of false news detection and the creation of assessment measures for data sets have been thoroughly documented [19]. Reference [20] employed ML classification techniques and n-gram analysis for classifying spam and fake news; the authors assessed their methods throughout on publicly accessible datasets. Gradient boosting, SGD, Random Forests, SVM, and limited Decision Trees were used as classification methods [21]. Reference [22] developed CSI, an algorithm comprising different characteristics for classifying fake news. Three characteristics were merged in their strategy for a more accurate forecast: capture, score, and integrate. Reference [23] introduced a tri-relationship false news detection model that considers news stance, publisher bias, and user interactions. They evaluated their approach using public datasets for detecting fake news. To classify fake news, the authors of [24] suggested a novel hybrid DL model that integrated CNN and RNN. The algorithm was evaluated on two false news datasets, and detection performance was notably superior to previous non-hybrid baseline techniques.

Reference [25] developed a novel hybrid algorithm based on attention-based LSTM networks for the fake news identification challenge; the method's performance was evaluated on benchmark false news detection data sets. In early 2017, reference [26] investigated the state of fake news at the time, provided a remedy for fake news, and described two opposing approaches. Janze et al. [27] developed a technique for spotting fake news and evaluated their models on Facebook news during the 2016 presidential election in the United States. Reference [28] developed another automated algorithm, providing a classification model based on semantic, syntactic, and lexical information. Reference [29] offered an automated technique for detecting false news in popular Twitter discussions; this approach was tested on three existing data sets. Reference [30] researched the statistical features of misinformation, fraud, and unverified assertions in online social networks.

Reference [31] developed a competitive model to mitigate the impact of misleading information, focusing mainly on the interaction between original erroneous and updated information. Reference [32] developed a new algorithm for detecting fake news that considers consumer trust. Reference [33] approached the problem by using crowd signals; the authors presented a novel detection method that uses Bayesian inference and learns the accuracy of users' flagging over time. Reference [34] suggested a content-based false news detection approach, developing a semi-supervised method for detecting fake news. Reference [35] looked at the different types of social networks and advocated using them to identify and counteract false news on social media. Reference [36] created a model that can identify the truthfulness of Arabic news or claims using a Deep Neural Network (DNN) approach, specifically Convolutional Neural Networks (CNN). The aim was to tackle the fact-checking problem, determining whether a news text claim is authentic or fake. The model achieved an accuracy of 91%, surpassing the performance of previous methods when applied to the same Arabic dataset. Reference [37] discussed the use of Deep Learning (DL) models to detect fake news written in Slovak, using a dataset collected from various local online news sources associated with the COVID-19 epidemic. The DL models were trained and evaluated using this dataset; a bidirectional LSTM network combined with one-dimensional convolutional layers achieved an average macro F1-score of 94% on an independent test set.

For accurately identifying misleading information using text sentiment analysis, [38] presented "emoratio," a sentiment scoring algorithm that employs the psychological and linguistic capabilities of the Linguistic Inquiry and Word Count (LIWC) tool. Reference [39] proposed a thorough comparative examination of different DL algorithms, including ensemble methods, CNN, LSTM, and attention mechanisms, for fake news identification. A CNN ensembled with a bidirectional LSTM using the attention mechanism was found to have the highest accuracy of 88.75%. Another tricky aspect of false news classification is the circulation of intentionally generated phony photographs and altered images on social media platforms. Reference [40] examined a dataset of 36,302 image replies, utilizing both traditional and deep image forgery techniques to distinguish fraudulent pictures produced via image-to-image translation based on the Generative Adversarial Network (GAN) model, a DNN for identifying fake news in its early stages. The use of temporal and attack signals for veracity classification [41,42] and the style analysis of hyperpartisan news [43] are also worth highlighting as pioneering research in credibility analysis on social networks.

Bidirectional Encoder Representations from Transformers (BERT) and VGG19, combined in a multimodal supervised framework named 'SpotFake' [44], classify genuine and fictitious articles by utilizing the capacities of encoders and decoders. Moreover, reference [45] used adversarial training to classify news articles. The purpose of [46] was to develop a model for identifying fake news using a content-based classification approach focusing on news titles. The model employed a BERT model combined with an LSTM layer. The proposed model was compared to other base classification models, and a standard BERT model was also trained on the same dataset to measure the impact of adding an LSTM layer. Results indicated that the proposed model slightly improved accuracy on the datasets compared to the standard BERT model. References [47,48] utilized linear discriminant analysis and KNN for the detection of fake news, even in a vehicular network. A summary of the related work is shown in Table 1.

Several datasets have been used for fake news detection research. Some of the datasets used in the past are the LIAR, FNC-1, and FakeNewsNet datasets. On the other hand, the GossipCop, PolitiFact, and Fake News Challenge (FNC) datasets are widely used in the current era. It is important to note that many datasets are created to serve a specific research problem; therefore, they might not generalize to other scenarios and might differ in size, type of data, quality, and time coverage. Thus, it is essential to consider these factors when selecting a dataset for a specific task.

This work uses transformers, RNNs, and conventional ML algorithms to classify fake news and provides an in-depth comparison of all these models. Results show that ML algorithms perform better than complex DL-based models on shorter text, while for longer text, transformers outperform the other algorithms.

    3 Machine Learning and Deep Learning

This section briefly describes the algorithms used in this study's experiments. It is further divided into ML and DL methods.

    3.1 Supervised ML Algorithms

    3.1.1 Linear SVM

SVM, one of the most well-known supervised learning methods, is used to tackle classification and regression problems. "Linearly separable data" refers to data that can be split into two groups by a single straight line. Linear SVM is used to classify such data, and the classifier employed is referred to as a linear SVM classifier.

    3.1.2 Kernel SVM

When the collection of samples cannot be divided linearly, SVM can be extended to address nonlinear classification challenges. The data are mapped onto a high-dimensional feature space by applying kernel functions, where linear separation becomes possible.
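As a minimal sketch of the idea, the Gaussian (RBF) kernel below computes the similarity that a kernel SVM would use in place of an explicit high-dimensional mapping; the `gamma` value is an illustrative hyperparameter, not one taken from this paper.

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel: implicitly compares points in a
    high-dimensional feature space where linear separation
    may become possible."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical points have maximal similarity (1.0); similarity
# decays toward 0 as the points move apart.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # -> 1.0
print(rbf_kernel([0.0, 0.0], [3.0, 4.0], gamma=0.1))
```

A kernel SVM replaces every inner product in the linear formulation with such a kernel evaluation, which is why no explicit feature mapping is ever computed.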

    3.1.3 Logistic Regression

In contrast to Linear Regression, Logistic Regression is used as a classification technique. Logistic Regression predicts the outcome by utilizing the values of different independent variables. It is undoubtedly one of the most utilized ML techniques. Rather than giving a continuous value, it provides the result as a binary outcome, e.g., valid or invalid, fake or real, yes or no. Its probabilistic output ranges between 0 and 1.
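The probability-then-threshold behavior described above can be sketched as follows; the weights and features are made-up numbers for illustration, not values from this study.

```python
import math

def sigmoid(z):
    """Squashes any real value into the (0, 1) probability range."""
    return 1.0 / (1.0 + math.exp(-z))

def predict(features, weights, bias=0.0, threshold=0.5):
    """Logistic Regression: a weighted sum of features is passed
    through the sigmoid, and the resulting probability is
    thresholded into a binary label (e.g., 1 = fake, 0 = real)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    p = sigmoid(z)
    return p, int(p >= threshold)

# z = 0.8*2.0 + 0.3*(-1.0) = 1.3, so p is about 0.786 -> label 1
p, label = predict([2.0, -1.0], [0.8, 0.3])
print(round(p, 3), label)  # -> 0.786 1
```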

    3.1.4 Naive Bayes

Naive Bayes is a supervised ML algorithm based on Bayes' theorem for classification tasks. This classifier posits that the features in a class are independent of each other. This type of classifier is relatively easy to construct and is especially suitable for massive datasets. Despite its simplicity, Naive Bayes can rival far more advanced classification systems.
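To make the independence assumption concrete, here is a from-scratch multinomial Naive Bayes sketch for fake/real headlines; the toy headlines and labels are invented for illustration, not drawn from the paper's datasets.

```python
import math
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes: the class-conditional probability
    of a document factorizes over its words because words are
    assumed independent given the class."""
    classes = set(labels)
    priors, counts, totals = {}, {}, {}
    vocab = {w for d in docs for w in d.split()}
    for c in classes:
        cdocs = [d for d, l in zip(docs, labels) if l == c]
        priors[c] = len(cdocs) / len(docs)
        counts[c] = Counter(w for d in cdocs for w in d.split())
        totals[c] = sum(counts[c].values())
    return priors, counts, totals, vocab

def predict_nb(model, doc):
    priors, counts, totals, vocab = model
    scores = {}
    for c in priors:
        # Work in log space with add-one (Laplace) smoothing.
        score = math.log(priors[c])
        for w in doc.split():
            score += math.log((counts[c][w] + 1) / (totals[c] + len(vocab)))
        scores[c] = score
    return max(scores, key=scores.get)

model = train_nb(["shocking miracle cure", "lions roam streets",
                  "senate passes budget", "court rules on appeal"],
                 ["fake", "fake", "real", "real"])
print(predict_nb(model, "miracle cure for lions"))  # -> fake
```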

    3.1.5 Decision Tree

The Decision Tree is a supervised, rule-based algorithm. It is a non-parametric method used for both classification and regression. In a Decision Tree, every node applies a rule and passes its output to another node, where further rule-based testing is applied.

    3.1.6 Random Forest

Random Forest is a supervised algorithm that combines Decision Trees built on different samples and produces its result by averaging the output of each Decision Tree. It is a flexible algorithm that can produce good classification results even without tuning.

    3.1.7 Gaussian Discriminant Analysis-Linear

ML algorithms that directly predict the class from the training set are known as discriminant algorithms. One of the discriminant algorithms applied in our study is Gaussian Discriminant Analysis, which fits a Gaussian distribution to each class of data separately to capture that class's distribution. The probability is high if the predicted value lies at the center of the contour of one of the classes in the training dataset. Linear Discriminant Analysis is a special case of Quadratic Discriminant Analysis in which the classes share a covariance matrix, yielding linear decision boundaries.

    3.1.8 Gaussian Discriminant Analysis-Quadratic

As with the linear variant, Quadratic Gaussian Discriminant Analysis fits a Gaussian distribution to each class separately, and the probability is high if the predicted value lies at the center of the contour of one of the classes. In the quadratic variant, however, each class has its own covariance matrix, which yields quadratic decision boundaries between classes.

    3.1.9 KNN

KNN is one of the most well-known and widely utilized supervised learning methods. It works by computing the distance between a new data point and the existing points and selecting the K closest ones, where K is provided as input; the new point is assigned to the class most represented among these neighbors. Euclidean distance is one of the distance functions used in KNN.

    3.1.10 Weighted KNN

Weighted KNN is a specially modified version of KNN. In contrast to traditional KNN, it assigns higher weights to points that are near and lower weights to those that are far away. Its performance varies with the hyperparameter K; weighted KNN may be sensitive to outliers if the value of K is too small.
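The distance-weighted vote can be sketched in a few lines; the 1/distance weighting scheme and the toy 2-D points are illustrative assumptions (real runs would use TF-IDF vectors as features).

```python
import math
from collections import defaultdict

def weighted_knn(train, query, k=3):
    """Weighted KNN: each of the k nearest neighbors votes with
    weight 1/distance, so close points influence the label far
    more than distant ones."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = defaultdict(float)
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d + 1e-9)  # epsilon avoids division by zero
    return max(votes, key=votes.get)

train = [((0.0, 0.0), "real"), ((0.1, 0.1), "real"),
         ((1.0, 1.0), "fake"), ((1.1, 0.9), "fake")]
print(weighted_knn(train, (0.2, 0.2), k=3))  # -> real
```

With k=3 the query's neighborhood contains two "real" points and one "fake" point, and the two nearby "real" points also carry the largest weights, so the vote is unambiguous.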

    3.2 RNN-Based Algorithms

    3.2.1 Gated Recurrent Units(GRU)

GRU comprises two gates, i.e., the update gate and the reset gate. The update gate combines the functions of an input gate and a forget gate, deciding which data will be discarded and which will be stored. The reset gate, on the other hand, regulates how much past information must be discarded, erasing previous information and thereby helping to prevent gradient explosions.

    3.2.2 Long Short-Term Memory(LSTM)

Each LSTM network has three gates that govern data flow and cells that store data. The cell state transmits data from the beginning to the end of the sequence of time steps. In LSTM, the forget gate determines whether data must be pushed forward or omitted. Once the relevant data has been decided upon, it is sent to the input gate, which carries the data onto the cell state, causing it to be updated; this amounts to storing and changing the weights. After the information has been transferred via the input gate, the output gate is triggered. The output gate produces the hidden states, and the current condition of the cells is carried forward to the next step.
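The gate interactions described above are conventionally written as the standard LSTM update equations (a general formulation, not taken from this paper), where σ is the sigmoid function and ⊙ denotes element-wise multiplication:

```latex
\begin{aligned}
f_t &= \sigma(W_f x_t + U_f h_{t-1} + b_f) &\text{(forget gate)}\\
i_t &= \sigma(W_i x_t + U_i h_{t-1} + b_i) &\text{(input gate)}\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + b_o) &\text{(output gate)}\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c) &\text{(candidate cell state)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t &\text{(cell state update)}\\
h_t &= o_t \odot \tanh(c_t) &\text{(hidden state)}
\end{aligned}
```

The forget gate f_t scales the previous cell state, the input gate i_t admits new candidate content, and the output gate o_t controls what portion of the cell state becomes the hidden state passed to the next step.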

    3.3 Transformers

    3.3.1 Bidirectional Encoder Representations from Transformers(BERT)

BERT is an excellent addition to the transformer family, especially for dealing with longer text. It is a bidirectional encoder-based transformer proposed by Google. BERT currently comes in two versions, BERT-base and BERT-large. As input, BERT takes sequences of up to 512 tokens at once. BERT combines three input embedding types: position embeddings, segment embeddings, and token embeddings.
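The three embedding types are summed element-wise to form BERT's input representation for each position; in standard notation (a general formulation, not taken from this paper):

```latex
e_i = E_{\mathrm{token}}(w_i) + E_{\mathrm{segment}}(s_i) + E_{\mathrm{position}}(i),
\qquad 0 \le i < 512
```

Here w_i is the i-th token, s_i marks which of the two input segments the token belongs to, and the position index i is bounded by the 512-token maximum sequence length mentioned above.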

    3.3.2 ALBERT

ALBERT is a lite variant of BERT with a more efficient training speed and fewer parameters than BERT. Because ALBERT uses absolute position embeddings, it is best to pad the inputs on the right rather than the left. The computation cost remains similar to BERT's because it has the same number of hidden layers.

    3.3.3 DeBERTa

Decoding-enhanced BERT with disentangled attention (DeBERTa) is a Transformer-based neural language model that trains on enormous amounts of raw text corpora using self-supervised learning. DeBERTa is built to accumulate universal language representations that can be adapted to numerous downstream NLP tasks. DeBERTa surpasses the previous state-of-the-art BERT by utilizing three unique strategies:

• A disentangled attention mechanism.

• An enhanced mask decoder.

• A virtual adversarial training method for fine-tuning.

    3.3.4 RoBERTa

The architecture of RoBERTa is similar to that of BERT, but it employs a different pre-training strategy and a byte-level BPE tokenizer. It extends BERT by changing crucial hyperparameters, such as removing the next-sentence prediction pre-training objective and training with considerably bigger mini-batches and learning rates.

    4 Methodology

This section provides the details of our methodology for fake news classification; each step is discussed in sequence. First, duplicated words and unwanted characters, such as numbers, stopwords, dates, times, etc., are removed from the dataset. Then, feature extraction is performed on the fake news dataset to reduce the feature space: each word's frequency is calculated for the construction of a document-term matrix. In the final step, the sixteen supervised algorithms are applied to the two political news datasets. Fig. 2 shows the whole methodology, and Table 2 shows the specifications of the datasets utilized in it.

    Table 2: Stats for the dataset

    4.1 Preprocessing for ML Algorithms

• Tokenization

As the word suggests, tokenization is used to split text into smaller chunks, or tokens. Punctuation marks are also removed from the corpus. Moreover, a number filter is used to remove words that contain numeric values, followed by a case-converter filter that converts all text from upper to lower case. Lastly, a filter is used to remove dates and times from the textual data.

Figure 2: The overall process flow of the methodology

• Stopwords and line removal

Stopwords, usually short words, join phrases and finish sentences. They are common language terms that do not convey information; pronouns, prepositions, and conjunctions are all examples of stopwords. The English language has between 400 and 500 stopwords [49], including that, does, a, an, where, on, too, above, I, until, but, again, what, all, and when.

• Stemming

Stemming is a technique to identify the fundamental forms of words with similar semantics but diverse word forms. This process converts grammatical variants such as a word's verb, adjective, noun, and adverb forms into their root form. The words "collects," "collections," and "collecting," for example, all reduce to the root "collect."
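The preprocessing steps above can be sketched as a single pipeline; the stopword list is a tiny illustrative subset, and the suffix-stripping stemmer is a deliberately naive stand-in for a real stemmer such as Porter's.

```python
import re

# Illustrative subset of English stopwords (real lists have 400-500).
STOPWORDS = {"a", "an", "the", "is", "on", "in", "and", "but",
             "what", "all", "when", "too", "i", "until", "so"}

def preprocess(text):
    """Tokenize, drop punctuation and numeric tokens, lowercase,
    remove stopwords, then apply a naive suffix-stripping stemmer."""
    tokens = re.findall(r"[A-Za-z]+", text)   # tokenization + number/date filter
    tokens = [t.lower() for t in tokens]      # case conversion
    tokens = [t for t in tokens if t not in STOPWORDS]
    stemmed = []
    for t in tokens:
        for suffix in ("ing", "ions", "ion", "s"):  # crude stemming rules
            if t.endswith(suffix) and len(t) > len(suffix) + 2:
                t = t[: -len(suffix)]
                break
        stemmed.append(t)
    return stemmed

print(preprocess("Collecting 500 collections on 2020-03-15!"))
# -> ['collect', 'collect']
```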

The specifics of the preprocessing steps are displayed in Table 3.

    Table 3: Steps for preprocessing data

    4.2 Feature Engineering

Managing high-dimensional data is the most challenging part of text mining. To increase performance, unrelated and inconsequential attributes should be discarded from the model. Data preprocessing here consists of extracting features from high-dimensional unstructured data. In this work, stem phrases in the data sets with a frequency above a threshold are identified using a feature selection method. Following this technique, each record is transformed into a vector of term weights by weighing the terms in its data set. The Vector Space Model (VSM) is the most direct basic representation: VSM assigns a value to each word that indicates the word's weight in the text. Term frequency is one approach for calculating these weights; Inverse Document Frequency (IDF) and Term Frequency-Inverse Document Frequency (TF-IDF) are the two most well-known methods. In this paper, the TF-IDF approach is applied. TF-IDF weighs a phrase in any piece of text based on the number of times it appears in the document, and it also considers the term's significance across the entire document collection, known as the corpus.
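A minimal from-scratch TF-IDF sketch follows; it uses raw term frequency and an unsmoothed logarithmic IDF, one of several common variants (libraries often add smoothing and normalization on top).

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF: weight = (term frequency within the document) times
    log(N / document frequency of the term), so terms common to
    the whole corpus are down-weighted."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter()                      # in how many documents each term occurs
    for toks in tokenized:
        df.update(set(toks))
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: (tf[w] / len(toks)) * math.log(n / df[w])
                        for w in tf})
    return vectors

vecs = tf_idf(["fake news spreads fast", "real news travels slow"])
# "news" appears in every document, so its IDF (and weight) is 0.
print(vecs[0]["news"])  # -> 0.0
```

This is exactly the down-weighting the paragraph describes: a word like "news" that occurs in every article carries no discriminative weight, while words unique to one article score highest.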

    4.3 Evaluation Measures

The performance of our model is evaluated using precision, accuracy, F1-score, and recall, represented in Eqs. (1)-(4), respectively:

Precision = TPN / (TPN + FPN)   (1)

Accuracy = (TPN + TNN) / (TPN + TNN + FPN + FNN)   (2)

F1-score = 2 × (Precision × Recall) / (Precision + Recall)   (3)

Recall = TPN / (TPN + FNN)   (4)

where TPN stands for True Positive News: news that is real and predicted by the model to be real. TNN stands for True Negative News: fake news predicted to be fake by the model. FPN stands for False Positive News: fake news that the model incorrectly predicted to be real. FNN stands for False Negative News: real news predicted to be fake by the model.
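The four measures can be computed directly from the confusion counts (TPN, TNN, FPN, FNN) defined above; the counts in the example are made-up numbers for illustration.

```python
def metrics(tpn, tnn, fpn, fnn):
    """Evaluation measures from confusion counts: TPN/TNN are the
    correctly classified real/fake items, FPN/FNN the errors."""
    accuracy = (tpn + tnn) / (tpn + tnn + fpn + fnn)
    precision = tpn / (tpn + fpn)
    recall = tpn / (tpn + fnn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(tpn=40, tnn=35, fpn=10, fnn=15)
print(round(acc, 2), round(prec, 2), round(rec, 3), round(f1, 2))
# -> 0.75 0.8 0.727 0.76
```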

    5 Results and Discussion

In this section, dataset and training details are provided, and the comparison results of the RNN-, transformer-, and ML-based algorithms are discussed.

    5.1 Experimental Settings

In this work, two publicly available datasets from the political domain [5] are used. As discussed above, the sixteen (RNN, transformer, and conventional ML) algorithms are applied to the title and body text of the datasets. Before applying the algorithms, each dataset is split with a ratio of 70% to 30% for training and testing, respectively. TF-IDF is used to form the word-weight matrix for feature extraction for the conventional ML algorithms, while for the RNN-based algorithms, GloVe vectors are utilized.
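The 70/30 split can be sketched as below; the fixed random seed is an illustrative choice for reproducibility, not a detail reported in the paper.

```python
import random

def split_dataset(examples, train_ratio=0.7, seed=42):
    """Shuffle with a fixed seed for reproducibility, then cut the
    list into a 70% training and 30% testing portion."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

data = list(range(100))          # stand-in for 100 labeled articles
train_set, test_set = split_dataset(data)
print(len(train_set), len(test_set))  # -> 70 30
```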

    5.1.1 Dataset

The dataset [5] described in Table 2 is used for the tests. It comprises two news datasets: "Buzzfeed News Data" and "Random News Data". "Buzzfeed News Data" includes 48 examples of fake news and 53 instances of real news. The "Random News Data" collection, on the other hand, contains 75 instances each of satire, real news, and fake news. Both real and fake news data are used in this study. Both datasets include the headline and the story's content, which are utilized separately to classify the dataset. A few examples from these datasets are shown in Table 4.

Table 4: Instances from the Buzzfeed and Random political news datasets

Table 5: Results on title (Buzzfeed political news dataset)

Table 6: Results on body (Buzzfeed political news dataset)

Table 8: Results on title (Random political news dataset)

    5.1.2 Hyperparameters

For the DL-based methods, i.e., LSTM and GRU, a GloVe embedding matrix of 300 dimensions and 60 epochs with a batch size of 16 are used. The number of hidden units, i.e., the number of neurons in the hidden layer, is set to 256; it is chosen based on the task's complexity and the dataset's size. A dropout rate of 0.3 is used during training, chosen to strike a balance between preventing overfitting and maintaining the model's ability to capture the relevant information in the data. The optimization algorithm used for training is Stochastic Gradient Descent (SGD), a widely used optimizer for neural networks. To further prevent overfitting, an early stopping strategy is implemented. Moreover, the learning rate, which determines the optimizer's step size in finding the model's optimal parameters, is set to 0.0001.

For both datasets, the experiments with conventional ML algorithms are run 10 times because there is noticeable variation between the outcomes due to random data selection. After running each traditional ML algorithm 10 times, the mean value of each evaluation measure, i.e., accuracy, precision, recall, and F1-score, is taken.

These hyperparameters were chosen through a combination of literature review and experimental tuning, which showed that they provided optimal performance for the task. Finally, in addition to the RNNs, the transformer models are trained using BERT embeddings with a dropout rate of 0.2, applied to the embeddings during fine-tuning to prevent overfitting.

    5.2 Dataset 1:Buzzfeed Political News Dataset

    The supervised ML, RNN, and transformer-based algorithms described above are applied to the Buzzfeed Political News dataset to determine whether a news item is real. The features are extracted from the dataset using TF-IDF. Tables 5 and 6 compare the effectiveness of the various supervised ML algorithms on the title and body of the Buzzfeed Political News dataset, respectively. They show that, in terms of precision, kernel SVM and quadratic Gradient Discriminant Analysis (GDA) perform worst on the title and body, respectively, while linear GDA and Random Forest perform best on the title and body text. Regarding recall and F1-score, linear GDA, Logistic Regression, and Random Forest are among the worst performers on title and body text, while kernel SVM and BERT perform best on the title, and kernel SVM and RoBERTa perform best on the body text. Regarding accuracy, kernel SVM performs worst on both title and body, while BERT and RoBERTa perform best on the title and body text, respectively. Figs. 3 and 4 give a graphical illustration of the algorithms’ performance in terms of accuracy, precision, recall, and F-measure, while Fig. 5 compares the loss on the title and body of the Buzzfeed Political News dataset.
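The TF-IDF feature-extraction step feeding the conventional ML classifiers can be sketched with scikit-learn. The headlines and labels below are invented toy examples, not items from the Buzzfeed dataset, and the RBF-kernel SVM is just one of the sixteen classifiers as an example consumer of the features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

# Toy stand-ins for news titles (1 = real, 0 = fake)
titles = [
    "Senator unveils new infrastructure bill in press briefing",
    "Shocking secret cure doctors do not want you to know",
    "Election results certified by state officials on Monday",
    "Celebrity reveals aliens control the government, insiders say",
]
labels = [1, 0, 1, 0]

# TF-IDF turns each title into a sparse vector of term weights,
# down-weighting words that appear in many documents.
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(titles)

# Any of the supervised classifiers can consume these features;
# a kernel (RBF) SVM is shown here.
clf = SVC(kernel="rbf").fit(X, labels)
preds = clf.predict(X)
```

The same vectorizer is fit once and reused to transform held-out text, so that test-time features live in the training vocabulary’s space.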

    Figure 3: Comparison of RNN, transformers, and ML-based algorithms on the title text of the Buzzfeed political news dataset

    Figure 4: Comparison of RNN, transformers, and ML-based algorithms on the body text of the Buzzfeed political news dataset

    Figure 5: Comparison of loss on title and body of the Buzzfeed political news dataset

    5.3 Dataset 2:Random Political News Dataset

    This section provides the results of the applied artificial intelligence algorithms, in terms of the evaluation measures, on the Random Political News dataset. Tables 7 and 8 show the outcomes of the various supervised ML algorithms on the title and body of the dataset, respectively. Figs. 6 and 7 visually compare the outputs of the sixteen supervised learning algorithms, and Fig. 8 compares the loss on the title and body of the Random Political News dataset.

    Tables 7 and 8 show that, for the Random Political News dataset, kernel SVM performs worst in terms of precision on both title and body text, while Decision Tree and linear SVM perform best of all. For recall, Decision Tree performs worst on both title and body, whereas kernel SVM performs best.

    Figure 6: Comparison of RNN, transformers, and ML-based algorithms on the title text of the random political news dataset

    Figure 7: Comparison of RNN, transformers, and ML-based algorithms on the body text of the random political news dataset

    Figure 8: Comparison of loss on title and body of the random political news dataset

    For F1-score and accuracy, kernel SVM remains the worst performer on both title and body text, while DeBERTa performs best on both.

    The above results and analysis show that, on the title text of both datasets, conventional ML algorithms outperform the RNN and transformer-based algorithms in terms of both computation and evaluation measures. For the longer text, i.e., the body of both datasets, transformers outperform the remaining applied algorithms.

    Other than this, Table 9 compares the different algorithms used for the detection of fake news in recent surveys.

    Table 9: Comparison of the different algorithms used in recent studies for fake news detection

    6 Conclusion

    This paper compares supervised learning models for detecting fake news on social media based on NLP techniques, covering supervised RNN, transformer, and conventional ML algorithms. The accuracy, recall, precision, and F1-measure values of the supervised artificial intelligence algorithms are examined, and two datasets are used to determine their average performance. The obtained results make clear that ML algorithms perform better on short text classification: when the text is only one or two lines, an ML algorithm is preferable, and ML algorithms are also computationally efficient. In contrast, on longer text, transformers outperform the other algorithms.

    In the future, this work could be improved with advances in transformers, existing hybridization techniques, and intelligent optimization algorithms. In addition, multi-modal data (images, videos, audio) will be explored for detecting fake news. Experiments will be undertaken on a multi-modal dataset to better understand the aspects of fake news identification and how to employ ML algorithms more effectively.

    Acknowledgement: ADU authors acknowledge financial support from Abu Dhabi University’s Office of Research and Grant programs.

    Funding Statement: Abu Dhabi University’s Office of Sponsored Programs in the United Arab Emirates (Grant Number: 19300752) funded this work.

    Author Contributions:All the authors contributed equally.

    Availability of Data and Materials: https://github.com/rpitrust/fakenewsdata1.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
