
    Fake News Classification Using a Fuzzy Convolutional Recurrent Neural Network

    2022-08-23 · Dheeraj Kumar Dixit, Amit Bhagat and Dharmendra Dangi
    Computers, Materials & Continua, 2022, Issue 6

    Dheeraj Kumar Dixit, Amit Bhagat and Dharmendra Dangi

    Department of Mathematics and Computer Applications, Maulana Azad National Institute of Technology (MANIT), Bhopal, Madhya Pradesh, India

    Abstract: In recent years, social media platforms have gained immense popularity, leading to a tremendous increase in the content they host. This content can relate to an individual's sentiments, thoughts, stories, advertisements, and news, among many other content types. With this growth in online content, the importance of distinguishing fake from real news has increased. Although much work exists on detecting fake news, a fuzzy CRNN has not yet been explored in this direction. In this work, a system is designed to classify fake and real news using fuzzy logic. Initial feature extraction is performed using a convolutional recurrent neural network (CRNN). After feature extraction, word indexing is done with high dimensionality. Then, based on the indexing measures, a ranking process identifies whether news is fake or real. The fuzzy CRNN model is trained to yield outstanding results, with 99.99 ± 0.01% accuracy. This work utilizes three different datasets (LIAR, LIAR-PLUS, and ISOT) to find the most accurate model.

    Keywords: Fake news detection; text classification; convolutional recurrent neural network; fuzzy convolutional recurrent neural network

    1 Introduction

    The term 'fake news' was not heard frequently until a few years ago. However, as society entered the digital age and people began using social media more frequently, significant problems arose regarding how people receive news. In the digital age, fake news, information bubbles, news manipulation, and lack of trust are considered emerging issues in this regard [1]. Fake news can be identified and understood based on its intent and origin. Based on this understanding, various techniques, such as machine learning, natural language processing, and artificial intelligence (AI), have been designed to overcome the problems of fake news and related issues. The issue of fake news has been addressed over the past years using different techniques, and various explanations have been proposed.

    Recently, the New York Times Magazine described fake news as "a story created for a lie." Fake news is thus defined by its intent rather than by opposing metrics. The creation of fake news erodes trust in journalism in its true form; it is also intended to generate profits from broadcasting [2].

    In today's society, the extensive spread of fake news has become more problematic, as information is shared without any limits, making fake news easy to spread. Developments in AI enable fake news to be spread automatically without manual support. This circumstance has had horrific effects, as many people blindly trust anything they read on social media. Recreational and new users of digital media also appear to be easily fooled by fake news. Faking news is a form of deception, comparable to sending fraudulent spam texts or emails. This issue is worth solving since it can foster crime, political instability, and misery [3].

    Fake news is characterized by glaring contradictions and inaccurate data; in this way, its dispersion resembles that of spam texts and emails. Fake news is often spread through three noteworthy mechanisms. The first is societal reach: spam messages are generally found in personal messages or on specific review sites, so they affect only small, local audiences. Meanwhile, the effect of fake news in online social networks can be enormous because of their substantial numbers of international users, and it is amplified by the extensive data sharing and propagation these users engage in.

    The second mechanism is audience drive. Rather than receiving spam messages passively, users of online social networks seek out, obtain, and share news without considering its accuracy or validity.

    The third mechanism is identification difficulty. Spam messages are relatively easy to recognize by comparison with the abundant normal messages (in emails or on review sites). In contrast, identifying fake news containing erroneous data is extremely challenging, since it requires both intensive evidence-gathering and fact-checking, given the absence of other similar news stories [4].

    These mechanisms of fake news present new difficulties for the identification task. Beyond distinguishing fake news stories, recognizing fake newsmakers and subjects will become more significant, which will help prevent fake news producers from entering online social communities [5]. Text mining could help with this process, as it makes it relatively easy to convert large amounts of text data into small chunks based on specific problems [6].

    This paper aims to improve upon the accuracy of fake news identification currently possible with existing techniques. The technique studied here differentiates fake news from real news with maximum accuracy. Fake news identification with the help of AI has become a very important field that has attracted the attention of researchers throughout the world [7]. Despite its recent prominence in research, the accuracy of fake news detection has not improved significantly, because detection relies on ineffective representations of news content.

    The proposed model has been compared with several existing techniques in terms of accuracy. It performs better than the existing methods when classifying news spread via social media as fake or real. The classification is based on incorporating a fuzzy approach with feature estimation and word ranking. The sentiments of texts also play a vital role in fake news identification [8].

    This paper exhibits the following aspects, which might be useful for further advancing research in this direction:

    1. The proposed method applies fuzzy convolutional recurrent neural networks (CRNNs) to three different datasets, showing better results than previous methods.

    2.This paper will serve as a reference for scholars working in the field of news detection.

    3.This paper mathematically defines the fuzzy concept for CRNN algorithm training.

    4. This paper proposes a model that is more accurate than previous models at news classification on the ISOT dataset.

    This paper is organized as follows. This section (Section I) presented a general description of fake news classification. In the next section (Section II), the existing literature related to fake news classification is presented. Section III presents the operation of a CRNN and fuzzy CRNN for classifying fake news on social media. In Section IV, the dataset information and comparison are presented. In Section V, the overall operation of the proposed fuzzy CRNN is described, followed by simulation results and a comparative analysis. Finally, Section VI provides the overall conclusions and suggestions for future research.

    2 Related Works

    Recent techniques for detecting fake news mainly concentrate on the content of the news and the information within a social context [9,10]. When classifying fake news, the features of the news content are primarily obtained from the text and visuals. These features are also used to detect specific writing styles [11] and the emotions that usually accompany fake news content [12,13]. Many researchers have studied identification techniques at an information level based on content and context [14].

    Moreover, text representations have been designed using tensor factorization techniques and deep neural networks that can accurately detect fake news [15]. Visual features (namely, images and videos) have been developed to express different characteristics of fake news. These techniques are used within a basic social context according to correlations among three types of features: (i) user-based features, (ii) post-based features, and (iii) network-based features. User-based features are obtained by measuring users' characteristics [16]. Post-based features emphasize users' involvement on social media, considering their perspectives as well as their credibility [17,18].

    Considering the extensive spread of fake news on social media, additional studies have considered using social networks to identify fake news. Examples include early fake news detection using social learning and user relationships, detection through semi-supervised techniques, detection through unsupervised techniques, and the use of meta features [19]. In correlation with context, researchers have also studied fake news using the Kaggle fake news dataset; specifically, that research uses various machine learning techniques with term frequency-inverse document frequency to extract features when identifying fake news [20].

    In classical sentiment analysis, the statistical correlation between words is considered while the dependency between aspect and sentiment words is ignored [21]. Other research simulated a classification technique to detect fake news using a linear regression (LR)-based unigram model, obtaining an accuracy rate of 89.00%. When a linear support vector machine was used as the classifier, an accuracy rate of 92% was obtained. Afterward, convolutional neural networks (CNNs) were used to detect fake news, obtaining an accuracy rate of 92.10% [22].

    In [23], deep learning techniques were used to detect fake news, achieving an accuracy rate of 93.50%. Research in [24] classified fake news using a hybrid model in which the relationships among users were considered a significant feature for detecting fake news; this method yielded an accuracy rate of 89.20%. Another author used various machine learning techniques to classify fake news using a linguistic analysis and word count-based approach, attaining an accuracy rate of 87.00% with a support vector machine as the classifier [25]. Another author [26] proposed a technique for detecting fake news using a deep CNN known as FNDNet, which correctly detected fake news 98.26% of the time.

    Bangyal et al. (2019) proposed an improved Bat algorithm by enhancing its exploitation capability and its ability to escape local minima. They demonstrated the efficiency of the improved Bat algorithm over gradient descent and other population-based techniques with a feed-forward neural network on UCI datasets, observing better optimization [27]. Tuan-Linh Nguyen et al. proposed a fuzzy CNN model for sentiment analysis; using this network, the authors extracted high-level emotional features (approx. 78.85%) from Twitter data [28]. Khattak et al. (2021) built expanded ontological relations to classify the sentiments of user reviews. In [29], an extended set of linguistic rules for concept-feature pair extraction with an enhanced set of ontological relations was proposed; a machine learning technique was then used for sentiment classification, with an accuracy of 87.5%. Tab. 1 lists several important works related to this one, along with their accuracy rates.

    Table 1: Fake news detection accuracies attained in previous works

    3 System Overview

    The design proposed in the present study is a fake news detection architecture comprising various stages (Fig. 2). Initially, the dataset is preprocessed to generate tokenized words: the original news statements in real-world language format are converted into lists of integers representing the words in the same sequence as in the input statement. Then, these tokenized sentences are replaced with word embeddings, resulting in a matrix with the same number of columns as the sentence length. The number of rows equals the dimension size of the word embeddings.

    At first, training for news classification is done on these word embeddings with the help of CRNNs. Then, the CRNNs are stripped of their last dense layers, and these layers are replaced with a fuzzy c-means classifier, which is again trained with the dataset; however, this time, the CRNN layers are not updated during training.

    The final resultant model combines fuzzy c-means and CRNN architecture, where the CRNN encodes the sentence into an n-dimensional feature vector, which works as a reduced representation of the input statement. Then, this n-dimensional feature vector is passed on to the fuzzy c-means, which classifies the input statement as either real or fake.

    Pseudo-Code for the Proposed Architecture

    Input: Training data
    Output: Trained fuzzy CRNN model
    1. Design a CRNN model with the help of 1-dimensional convolution layers followed by multiple dense layers.
    2. Add a softmax output layer to the end of the CRNN, with as many nodes as classification classes.
    3. Train the CRNN with the training data until convergence occurs.
    4. Replace the dense and softmax layers of the trained CRNN model with an FCM.
    5. Train the final fuzzy CRNN architecture again with the training data.
    6. Return the trained fuzzy CRNN.

    3.1 One-Dimensional Convolution Layer

    In general, CNNs are artificial neural networks that specialize in recognizing patterns in images while sharing parameters via their kernels. This allows the model to encode image-specific features in an understandable format while reducing the number of trainable parameters. These convolution operations employ 2-dimensional convolutional layers that operate on 2-dimensional inputs (i.e., inputs with height and width).

    While 2-dimensional convolutional layers are perfect for 2-dimensional data, they are not helpful for 1-dimensional data such as time series, text, audio, and signal data. Such data comprise single values or fixed-length segments in a sequential format. Due to this sequential one-dimensional format, 1-dimensional convolution layers work perfectly while retaining all the advantages of 2-dimensional convolutions. A convolutional operation (with the kernel moving along the timestamps) is shown in Fig. 1.

    1-dimensional convolution layers process a finite number of data points, equal to the length of the kernel, in a sequential format (sometimes referred to as the temporal axis). The kernel is then shifted along the temporal axis by a finite number of steps defined by its stride value. These layers work the same as 2-dimensional convolution layers; the key difference is that the kernel shifts along only one axis.
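The sliding-kernel operation described above can be illustrated with a minimal NumPy sketch (an illustration only, not the paper's implementation; deep learning libraries provide trainable versions of this layer):

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid 1-D convolution: slide the kernel along the temporal axis,
    taking a dot product over `len(kernel)` points at each step."""
    k = len(kernel)
    n_out = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(n_out)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])
print(conv1d(signal, kernel))           # kernel shifted one step at a time
print(conv1d(signal, kernel, stride=2)) # kernel shifted two steps at a time
```

With stride 1 the kernel visits every position; with stride 2 it skips every other position, halving the output length, exactly as described for the temporal axis above.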

    Figure 1:Convolution filter operation on 1-dimensional data

    Figure 2:Flow of data through proposed architecture

    3.2 Fuzzy C-Means Clustering

    Traditional clustering algorithms work by defining centroid points; data points are then classified based on their closeness to one of the centroids. One drawback of this approach is that a single element can belong to only one class at a time. This hard clustering could lead to incorrect classification of points that are approximately equidistant from two centroids.

    Fuzzy clustering is a soft clustering technique in which each data sample can belong to more than one centroid at a time; every sample has a degree of membership in every centroid. With these values, the user can categorize the data points into different classes and obtain more detailed correlations with all the centroids [30].

    Fuzzy c-means clustering is the most widely used fuzzy clustering algorithm. It is very similar to the K-means algorithm, which is commonly used in machine learning research.

    Consider a given set of data samples X = {x1, x2, x3, ..., xn}, where each sample xi ∈ R^m and m is the number of feature dimensions used to describe each sample. The task is to generate a set of c unique cluster centers, denoted C = {c1, c2, c3, ..., cc}, and then determine the relationships of the data samples with the cluster centers.

    The squared distance between the corresponding data sample and cluster node is represented in Eq.(1).

    Since the data samples are in vectorized format, the above equation can be modified into a vectorized A-norm form as in Eq.(2).

    The partitions and the degrees of association for all data samples and the corresponding cluster centers can be collectively represented as a partition matrix W. This is a real-valued n×c matrix, where the rows represent the data samples, indexed {1, ..., n}, and the columns represent the cluster centers, indexed {1, ..., c}. This partition matrix collectively represents all the data samples and their degrees of association with the cluster centers.

    When multiplied with the distance values calculated earlier,this weighted partition matrix gives a squared error value between the data sample and the corresponding cluster.This value is weighted by the partition matrix for each data sample with each cluster center,which is represented by

    Summation is performed to generalize the squared errors for all cluster centers for each corresponding data sample.The outcome is an n-dimensional real-valued vector with a sum of the squared error values of each data sample,and it is represented as

    The above equation can be termed the generalized overall weighted sum of squared errors for the complete dataset. A cost function (see Eq. (3)) can be defined for a given set of data samples and clusters with the help of the above equation. Eq. (3) calculates the overall loss of the given data sample and cluster association system.

    For a given data sample to belong to a specified cluster,the degree of membership should be high.In other words,the data point should be closer to the center of the cluster it is a member of than to any other cluster center.To achieve this,a minimization operation is performed on the given cost function Eq.(4):

    where

    x_i → i-th data sample, 1 ≤ i ≤ n

    c_j → j-th cluster center, 1 ≤ j ≤ c

    m → weighting exponent, 1 < m < ∞

    w_ij → partition matrix entry for sample i and cluster j

    The cost function is minimized by updating the partition matrix and the cluster center vectors with each iterative step.This reduces the overall cost for the given dataset and the corresponding cluster centers.

    With each minimization step,all cluster centers are updated using Eq.(5).

    The partition matrix defines how much of each data sample's weight belongs to a corresponding cluster. It can be calculated using Eq. (6).
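The distances, cost, and update rules described above follow the standard fuzzy c-means formulation; assuming the usual form of Eqs. (1)-(6), they can be sketched as:

```latex
% Squared A-norm distance between sample x_i and center c_j (cf. Eqs. (1)-(2)):
d_{ij}^{2} = \lVert x_i - c_j \rVert_A^{2}

% Weighted cost over all samples and clusters, with memberships summing
% to one per sample (cf. Eqs. (3)-(4)):
J = \sum_{i=1}^{n} \sum_{j=1}^{c} w_{ij}^{\,m}\, \lVert x_i - c_j \rVert^{2},
\qquad \sum_{j=1}^{c} w_{ij} = 1

% Iterative updates of centers and partition matrix (cf. Eqs. (5)-(6)):
c_j = \frac{\sum_{i=1}^{n} w_{ij}^{\,m}\, x_i}{\sum_{i=1}^{n} w_{ij}^{\,m}},
\qquad
w_{ij} = \left[ \sum_{k=1}^{c}
  \left( \frac{\lVert x_i - c_j \rVert}{\lVert x_i - c_k \rVert} \right)^{\frac{2}{m-1}}
\right]^{-1}
```

Each minimization step applies the two updates in turn, which monotonically reduces the cost J for the given dataset and cluster centers.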

    3.3 Model Architecture

    The overall model architecture is divided into two parts: the CRNN and the FCM. The CRNN handles the model's input (processing it in sequential format with the help of 1-dimensional convolution layers) and generates an f-dimensional feature vector: the input statement encoded into an f-dimensional feature space. This f-dimensional vector is then passed to the FCM for classification into two categories: real news and fake news.

    Since there are only two classification categories, the FCM generates two values for each data sample, each corresponding to the degree to which that sample belongs to the corresponding class. These values are evaluated, and the final class (either real news or fake news) is obtained. Fig. 2 is a flowchart illustrating the process of detecting fake news.

    4 Datasets

    4.1 LIAR

    LIAR is a publicly available dataset for fake news detection [31]. It contains 12,800 manually labeled short statements collected from various domains on POLITIFACT.COM. Each statement was evaluated in two stages: first by POLITIFACT.COM's editors and then through an analysis of 200 randomly sampled instances by journalists. The statements were recorded between 2007 and 2016.

    The complete dataset is classified into six groups based on truthfulness ratings: pants-fire, false, barely-true, half-true, mostly-true, and true. For this research, the main task was simply to classify statements as real or fake. This was achieved by merging the six labels of the LIAR dataset into two categories as follows:

    Fake category → {pants-fire, false, barely-true}

    Real category → {half-true, mostly-true, true}
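This label merge can be sketched as a simple lookup (a minimal sketch; the label strings follow the LIAR dataset's naming):

```python
# Collapse LIAR's six truthfulness labels into a binary fake/real scheme.
LABEL_MAP = {
    "pants-fire": "fake", "false": "fake", "barely-true": "fake",
    "half-true": "real", "mostly-true": "real", "true": "real",
}

def binarize(label: str) -> str:
    """Map one of the six original LIAR labels to the binary category."""
    return LABEL_MAP[label]

print(binarize("mostly-true"))  # real
print(binarize("pants-fire"))   # fake
```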

    4.2 LIAR-PLUS

    The LIAR-PLUS dataset is an extension of the original LIAR dataset [32]. In the LIAR-PLUS dataset, the justifications for the statements were added by automatically extracting the claims provided in the original articles. The justifications were extracted from the conclusion or summary sections of the texts, which were filtered based on the use of 'verdict' and related words. These extended justifications were added as support statements for the fact-checking claims. They also provide more detailed information about each data sample.

    4.3 ISOT

    The ISOT fake news dataset [33] contains several thousand fake news and truthful articles obtained from various legitimate sources. The truthful articles were broadly obtained by legally crawling articles from Reuters.com, while the fake news articles were collected from multiple sources validated by POLITIFACT.COM. The dataset contains 44,898 sample statements and their corresponding body texts. Of these statements, 21,417 belong to the real news category and 23,481 belong to the fake news category (Tab. 2).

    Table 2: Samples of data in each dataset

    5 Results

    Multiple models were trained on the above-specified datasets with the proposed techniques. The complete experiment configuration, training steps, and results are stated below.

    5.1 Experimental Setup

    The complete simulation of the proposed fuzzy CRNN was carried out in Python. The architecture described above was implemented, and the experiment was carried out using a PC with Windows 10, 4 GB RAM, and an Intel i5 processor.

    The key tasks performed for the complete training of a fuzzy CRNN model comprise initial CRNN training, transferring knowledge to the fuzzy c-means, and training the combined fuzzy CRNN architecture. More detailed descriptions of the configuration and steps are provided in the following sections.

    5.2 Data Pre-Processing

    This section describes the general experimental configurations and all the steps taken to prepare the dataset. The process starts with the initial unprocessed news statements in paragraph format and ends with a clean, structured sequential embedding format that can be understood by the proposed architecture (Fig. 3).

    The main sequential tasks performed on all the statements in all the datasets during data cleaning are listed below:

    · Removing URLs

    · Removing numbers

    · Removing punctuation

    · Converting all characters into lowercase

    · Splitting words at spaces and converting them into a list format

    · Removing words containing one or two letters

    · Rejoining the list into a string without losing the sequential information
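The cleaning steps above can be sketched as a single function (a minimal sketch using regular expressions; the exact patterns are illustrative assumptions, not the paper's code):

```python
import re

def clean_statement(text: str) -> str:
    """Apply the cleaning steps in order; the word sequence is preserved."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)   # remove URLs
    text = re.sub(r"\d+", " ", text)                     # remove numbers
    text = re.sub(r"[^\w\s]|_", " ", text)               # remove punctuation
    text = text.lower()                                  # lowercase
    words = text.split()                                 # split at spaces
    words = [w for w in words if len(w) > 2]             # drop 1-2 letter words
    return " ".join(words)                               # rejoin sequentially

print(clean_statement("Visit https://example.com NOW!! 24 top, if hoaxes spread..."))
```

Splitting on whitespace after the substitutions also collapses the redundant spaces left behind by each removal step.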

    Figure 3:The complete data preprocessing pipeline

    After the data cleaning process, the retrieved data are standardized, with no punctuation, special symbols, or redundant spaces.

    Even after the data are cleaned and standardized, they are not machine-understandable, because machine learning models work with numbers. Therefore, the textual information needs to be converted into numerical form without losing any information. Word tokenization is employed to achieve this.

    Even if the dataset is sufficiently large, the model may encounter a word that is not in the tokenization dictionary, meaning that no corresponding token is generated. Such words are called out-of-vocabulary tokens and are assigned index 0 as their token index.

    After the tokenization dictionary is generated, the statements are converted into lists of words split at spaces. These words are then replaced by their corresponding token values, yielding a numeric representation of the textual data without losing any information. Statements comprising real-world textual data need not be of equal length. This results in variable statement lengths in the dataset, which could lead the model to overfit on sentence lengths.

    Input padding is employed to overcome this variable-length input problem. It involves utilizing a predefined fixed input length: statements shorter than this length are padded with out-of-vocabulary tokens, while longer statements are truncated (from the end) until the predefined input length is attained. While truncation could lead to the loss of information, the fixed input length is determined from the overall distribution of sample lengths in the dataset, so the amount of information lost is minimized across all data samples.
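Tokenization with an out-of-vocabulary index of 0, together with fixed-length padding and end-truncation, can be sketched as follows (a minimal pure-Python sketch; the helper names are illustrative):

```python
OOV = 0  # shared index for out-of-vocabulary words and padding

def build_vocab(corpus):
    """Assign indices 1..V in first-seen order; index 0 is reserved for OOV."""
    vocab = {}
    for sentence in corpus:
        for word in sentence.split():
            vocab.setdefault(word, len(vocab) + 1)
    return vocab

def encode(sentence, vocab, max_len):
    """Tokenize, truncate from the end, then pad to the fixed input length."""
    tokens = [vocab.get(w, OOV) for w in sentence.split()]
    tokens = tokens[:max_len]
    return tokens + [OOV] * (max_len - len(tokens))

vocab = build_vocab(["fake news spreads fast", "real news matters"])
print(encode("real news spreads lies", vocab, max_len=6))  # [5, 2, 3, 0, 0, 0]
```

Note that "lies" is unseen and maps to 0, just like the padding positions, matching the out-of-vocabulary convention described above.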

    Even after the words are tokenized into numbers, a large corpus of textual data is needed for the model to understand the real contexts and meanings of the tokenized words. Pre-trained word embeddings can be used to overcome this problem. Word embeddings are vectorized, encoded forms of individual words. For this research, Global Vectors for Word Representation (GloVe) pre-trained embeddings were used. These were trained on a vast corpus of Wikipedia 2014 textual data.
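Loading pretrained GloVe vectors into an embedding matrix indexed by the token dictionary can be sketched as below (a hedged sketch: `load_glove` is an illustrative helper, and the tiny in-memory "file" stands in for the real GloVe distribution files, whose lines have the form `word v1 v2 ...`):

```python
import numpy as np

def load_glove(lines, vocab, dim):
    """Build a (V+1) x dim embedding matrix; row 0 (OOV/padding) stays zero,
    as do rows for vocabulary words missing from the pretrained file."""
    matrix = np.zeros((len(vocab) + 1, dim))
    for line in lines:
        parts = line.rstrip().split(" ")
        word, vec = parts[0], parts[1:]
        if word in vocab:
            matrix[vocab[word]] = np.asarray(vec, dtype=float)
    return matrix

# In practice `lines` would iterate over the GloVe text file; here a tiny
# fake two-dimensional embedding illustrates the layout.
fake_glove = ["news 0.1 0.2", "fake -0.3 0.4"]
emb = load_glove(fake_glove, {"fake": 1, "news": 2}, dim=2)
print(emb.shape)  # (3, 2)
```

The resulting matrix can then be used to replace token indices with their embedding vectors, producing the per-sentence embedding matrix described in Section 3.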

    Fig. 3 depicts all the data preprocessing steps. First, the input data go through the data cleaning process. Then, the sentences are tokenized. After this, word embedding is performed.

    5.3 Model Training

    The proposed model requires a sentence encoder (in the form of a convolutional recurrent neural network) to process the input in a sequential format and produce encoded sentences. Neural network architectures can work on high-dimensional data; when the encoded form of the input is generated, only the important information from this high-dimensional data is extracted and used. This process reduces the dimensionality of the input while maintaining its unique representational value. The encoded input data are then passed to the fuzzy c-means for classification. Reducing the dimensionality of the high-dimensional data is essential because FCM works efficiently with low-dimensional data.

    Since a CRNN (Fig. 4) is employed for input data encoding (dimensionality reduction) and an FCM is used for classification, the proposed approach combines the positive aspects of each. As a result, an efficient and accurate classification architecture is provided, and the overall complexity and computational needs are reduced. The word embedding is passed through convolutional and dense layers, eventually leading to classification as real or fake news.

    Figure 4:CRNN model for encoder training

    The CRNN is trained in a supervised way on input statements in the form of sequential word embeddings, classifying them into the corresponding categories (fake or real). After a considerable number of training iterations, training is terminated and the CRNN is truncated at its last layers; only the 1-dimensional convolution layers are kept, and they serve as the encoder for the FCM. The convolutional layers extract features from the data; the recurrent layer is a single layer that also acts as a 1-D convolutional layer. After the data pass through these layers, the fuzzy system classifies the result in place of neuron units.

    After the CRNN is trained as the encoder, its layers are frozen against further updates. The FCM is attached to the end of the encoder so that the encoded low-dimensional output of the CRNN is relayed into the FCM. This combined fuzzy CRNN architecture (Fig. 5) is then trained with the dataset. However, since the CRNN layers are frozen, they work only as encoders, and training applies only to the FCM section of the architecture.

    The FCM is trained until a considerably low loss is achieved; the final architecture is called a fuzzy CRNN. A fuzzy CRNN can categorize temporal or sequential input into finite categories while allowing an input sample to belong to more than one category. This kind of categorization is not possible with traditional K-means algorithms.
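The fuzzy c-means stage that replaces the dense layers can be sketched with the alternating update rules of Eqs. (5) and (6) (a minimal NumPy sketch on toy 2-D feature vectors, not the trained 64-dimensional encoder output):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: alternate the center and membership updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((len(X), c))
    W /= W.sum(axis=1, keepdims=True)        # memberships sum to 1 per sample
    for _ in range(n_iter):
        Wm = W ** m
        centers = (Wm.T @ X) / Wm.sum(axis=0)[:, None]           # cf. Eq. (5)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        W = inv / inv.sum(axis=1, keepdims=True)                 # cf. Eq. (6)
    return centers, W

# Two tight pairs of points: each sample gets a membership degree per cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers, W = fuzzy_c_means(X)
print(W.argmax(axis=1))  # hard labels taken from the soft memberships
```

In the proposed architecture, the rows of `W` play the role of the two per-sample degree values (fake/real), and the final class is the larger of the two, as described in Section 3.3.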

    Figure 5:Fuzzy CRNN connected with frozen CRNN layers

    5.3.1 Experimental Evaluation

    The proposed fuzzy CRNN is evaluated based on precision, recall, accuracy, and F1 score. All parameters were evaluated using the classification report of the best model. The classification report provides details about the accuracy, recall, precision, and F1 score, and some data samples were evaluated to support the evaluation [34]. The formulas used to calculate accuracy, recall, precision, and F1 score are as follows.

    Accuracy: measures the closeness of the classifier's detections to the true labels and is determined by Eq. (7).

    Recall: measures the proportion of positive samples detected by the proposed fuzzy CRNN, as represented in Eq. (8).

    Precision: the ratio of true positive (TP) values to the total predicted positive values, expressed by Eq. (9).

    F1 score: the harmonic mean of precision and recall, expressed by Eq. (10).

    A true positive (TP) is a sample that is actually positive and is predicted positive by the AI model. A false positive (FP) is a sample that is actually negative but predicted positive. A true negative (TN) is a sample that is actually negative and predicted negative. Finally, a false negative (FN) is a sample that is actually positive but predicted negative by the AI model.
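Eqs. (7)-(10) reduce to simple arithmetic on these four counts (a minimal sketch; the example counts are illustrative, not results from the paper):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, recall, precision and F1 from the confusion counts,
    following Eqs. (7)-(10)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, recall, precision, f1

# Illustrative counts for a balanced binary problem of 100 samples.
print(classification_metrics(tp=40, fp=10, tn=45, fn=5))
```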

    5.3.2 Implementation Results

    The proposed architecture was trained and tested on three datasets: the LIAR, LIAR-PLUS, and ISOT datasets. Details of these datasets are provided in previous sections.

    LIAR Dataset

    For the LIAR dataset, the target classes or labels were divided into six sections based on the severity or truthfulness of the statements. However, for binary classification (i.e., fake/real classification), these six labels were merged into two labels. Specifically, the false, barely-true, and pants-fire labels were grouped into the fake news category, and the true, mostly-true, and half-true labels were grouped into the real news category.

    With these label groupings, a GloVe embedding of 300 dimensions was used for word embedding, while the CRNN reduced the input data to 64-dimensional encoded vectors. These vectors were used as input to the FCM substructure for classification. The classification report for the LIAR dataset is shown in Fig. 6.

    Figure 6:Classification report for the proposed architecture on the LIAR dataset

    The overall accuracy rate is 65%, which is higher than that reported for the same binary classification problem in [31]. The proposed model achieved 66% precision in the real news category and 64% in the fake news category. Similarly, the proposed method yielded 68% recall in the real news category and 62% recall in the fake news category. The support values also differed (668 in the real news class and 616 in the fake news class).

    LIAR-PLUS Dataset

    In the case of the LIAR-PLUS dataset, the labels were in the same format as in the LIAR dataset; the added information comprised extended justification statements. While the labels were grouped as in the LIAR dataset, the extended justification statements were also concatenated onto the news statements, making a combined input with both the news statement and its justification.

    With this label grouping and combined statement-and-justification input, a GloVe embedding of 200 dimensions was used for word embedding. Meanwhile, the CRNN reduced the input data into 64-dimensional encoded vectors, which were used as input to the FCM substructure for classification. The classification report for the LIAR-PLUS dataset is shown in Fig. 7.

    The proposed architecture was able to predict fake news with more accuracy and confidence than real news. The overall architecture also yielded high performance at a lower computational cost than traditional neural network approaches. The LIAR-PLUS dataset yielded an accuracy of 70% (75% in the real news category and 66% in the fake news category). This difference occurred because the support values for each class differ (668 in the real news class and 616 in the fake news class).

    Figure 7:Classification report for the proposed architecture on the LIAR-PLUS dataset

    ISOT Dataset

    The ISOT dataset also contained a news title and statement, making it similar to the LIAR-PLUS dataset. For this dataset, the title and statement were likewise combined to form a single complete news statement.

    With the combined news title and statement input, a GloVe embedding of 200 dimensions was used for word embedding, and the CRNN reduced the input data to 64-dimensional encoded vectors. These vectors were used as input to the FCM substructure for classification. The classification report for the ISOT dataset is shown in Fig. 8.

    Figure 8:Classification report for the proposed architecture on the ISOT dataset

    The proposed architecture yielded 99.99±0.01% accuracy on the validation dataset while incurring lower computational costs than the other implementations. The precision, recall, and F1 score were likewise 99.99±0.01%, with 4283 support instances in each class.

    Fig.9 below shows the confusion matrix for the classification.

    The confusion matrix reveals important results related to several factors, such as accuracy, precision, recall, and F1 score. The accuracy of the model can be calculated from the confusion matrix as follows:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)

    This implies that the proposed model achieved 99.99% accuracy. When the model was executed thirty different times, 99.99±0.01% accuracy was achieved.
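The accuracy computation from confusion-matrix counts, together with the related precision, recall, and F1 metrics reported in the classification figures, can be sketched as follows (an illustrative implementation with made-up counts, not the paper's numbers):

```python
# Derive accuracy, precision, recall, and F1 from confusion-matrix counts.
def metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for demonstration only.
acc, prec, rec, f1 = metrics(tp=90, fp=10, tn=85, fn=15)
print(round(acc, 3))  # 0.875
```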

    Tab. 3 compares the proposed architecture with other supervised classification models when used for the same binary classification problem on the respective datasets. The table also contains performance results from [32] for the LIAR and LIAR-PLUS datasets and from [33] for the ISOT dataset.

    Figure 9:Confusion matrix for the proposed architecture on the ISOT dataset

    Table 3: Results of different models using various datasets


    6 Conclusion

    This paper described the development of an algorithm combining a CRNN with the fuzzy c-means algorithm. This combination brings together, in a single architecture, high generalization, the CRNN's ability to process high-dimensional data, and the fuzzy c-means' ability to let a data sample belong to more than one class simultaneously. The proposed approach was examined using the LIAR, LIAR-PLUS, and ISOT datasets, yielding 65%, 70%, and 99.99±0.01% accuracy, respectively. Although nearly 100% accuracy was achieved in this work, various other methods have already produced almost the same results. The importance of this work therefore lies in exploring the fuzzy CRNN method for fake news classification and testing its accuracy on three different datasets, which had not been done before. In summary, this research explores the possibility of enhancing efficiency and provides supporting evidence for the FCRNN in fake news classification.

    The key advantages of the proposed approach are as follows:

    · It can handle variable-length inputs.

    · Fuzzy c-means can replace the dense layers of the traditional network,thereby reducing overall computation and memory costs without hindering the model’s accuracy.

    · The same proposed architecture can be employed on other natural language processing tasks.

    · The initial encoder can be replaced with pre-trained sentence embedding generators.

    · Unsupervised training can be employed for CRNN encoder training.

    · The encoder can be easily replaced by another encoder, thereby adding to the robustness of modifications in the architecture.
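The fuzzy c-means membership rule underlying these advantages, which allows a sample to belong to several classes at once, can be sketched as follows (an illustrative implementation of the standard FCM membership formula with fuzzifier m = 2 and hypothetical class prototypes, not the paper's trained parameters):

```python
import numpy as np

# Standard fuzzy c-means membership rule:
# u_j = 1 / sum_k (||x - c_j|| / ||x - c_k||)^(2/(m-1))
def memberships(x, centers, m=2.0, eps=1e-12):
    """Return the membership of sample x in each cluster center."""
    d = np.linalg.norm(centers - x, axis=1) + eps   # distance to each center
    ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
    return 1.0 / ratio.sum(axis=1)

# Hypothetical 2-d prototypes standing in for the "fake" and "real" classes.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
u = memberships(np.array([0.2, 0.2]), centers)
print(u)  # memberships sum to 1; the sample belongs mostly to the first class
```

Note that, unlike a hard classifier, both memberships are nonzero: the sample partially belongs to both classes, with the final label taken as the class of highest membership.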

    There is room for advancement of this work in the following ways:

    · Better sentence encoding approaches can be employed to more efficiently encode the input statements for FCM input.

    · The CRNN and FCM can be trained in a parallel fashion in a single training session.

    The most noteworthy limitations of this work are as follows:

    · The FCM depends heavily on the CRNN’s encoding ability for its final predictions.

    · The FCM does not perform well with high-dimensional input data.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
