
    Multimodal Social Media Fake News Detection Based on Similarity Inference and Adversarial Networks

2024-05-25 14:40:08
Computers, Materials & Continua, 2024, Issue 4

Fangfang Shan, Huifang Sun and Mengyi Wang

1 College of Computer, Zhongyuan University of Technology, Zhengzhou 450007, China

2 Henan Key Laboratory of Cyberspace Situation Awareness, Zhengzhou 450001, China

ABSTRACT As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news that relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, while utilizing the pre-trained Visual Geometry Group 19-layer (VGG-19) network to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that our proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. On the Weibo dataset, our model likewise surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances the effectiveness of multimodal fake news detection in this paper. However, the current research is limited to the fusion of text and image modalities only. Future research should aim to integrate features from additional modalities to comprehensively represent the multifaceted information of fake news.

KEYWORDS Fake news detection; attention mechanism; image-text similarity; multimodal feature fusion

    1 Introduction

With the rapid development of mobile Internet technology, the primary platform for accessing news has shifted from traditional paper-based media, such as newspapers, to social media platforms represented by Twitter and Weibo [1]. The real-time and convenient nature of social media enables people to quickly access and disseminate information. However, in the absence of effective supervision, the openness and low entry barriers of social media also facilitate the spread of fake news [2]. Fake news is characterized by its low cost, rapid dissemination, and easy accessibility, which can lead to serious social issues, spark public opinion storms, and even manipulate public events, thereby undermining the credibility of social media. The rapid expansion of social media has become a breeding ground for the dissemination of fake news, where various kinds of fake news spread widely. Fake news not only has the potential to mislead the public but can also cause harm to individuals, organizations, and society. For instance, in 2019, Reuters reported that Hong Kong's Chief Executive, Carrie Lam, had submitted a report to Beijing recommending the consideration of the 'five major demands' of Hong Kong's protesters, a report that was allegedly rejected by Beijing. This fabricated news aimed to incite radical demonstrators, escalating the turmoil in Hong Kong, undermining the government's relationship with the people, and disrupting social harmony in China. Additionally, in 2020, the global COVID-19 pandemic became one of the primary subjects of fake news [3].

To foster a harmonious online environment and mitigate the negative impact of fake news, there is an urgent need for reliable methods and technologies to address the issue of false information dissemination. Consequently, the detection of multimodal fake news on social media has emerged as a prominent research focus in recent years. Generally, there is no single accepted definition of fake news. The Merriam-Webster Online Dictionary defines fake news as "a news report that is intentionally false or misleading". Shu et al. [4] defined fake news as "false information that is intentionally misleading to readers and can be verified". Ajao et al. [5] defined fake news as "anything that is circulated, shared, or spread that cannot be authenticated". In academic research, however, fake news is usually defined as unverified or unconfirmed news. In this study, fake news is defined as intentionally misleading information that has been confirmed as false [6,7].

At present, fake news detection methods are primarily divided into two main directions: unimodal detection and multimodal detection. Unimodal methods rely solely on text or image features for fake news detection. However, a news story may embody falseness in both its text and its images, limiting the effectiveness of this approach in capturing the diverse features of fake news. In contrast, multimodal detection integrates features from various modalities, such as text and images, allowing for a more comprehensive understanding of fake news content.

Current multimodal fake news detection methods typically concatenate textual and visual features to obtain a unified multimodal feature representation. Nevertheless, these methods have yet to fully explore the similarity relationships between multimodal information, which is crucial for accurate fake news detection. Some fake news stories, aiming to garner clicks and widespread dissemination, employ provocative image information that deviates significantly from the actual news text. For instance, Fig. 1 illustrates a fake news story about the U.S. government purchasing 30,000 guillotines, where the accompanying image features a historical painting depicting the beheading of Queen Marie Antoinette, creating a discordance with the textual content.

Addressing the semantic bias between textual and visual content, this paper introduces a multimodal social media fake news detection approach grounded in similarity reasoning and adversarial networks. Specifically tailored to bridge the research gap in understanding the similarity relationships within multimodal information, our method aims to comprehensively and accurately unveil the characteristics of fake news. By delving deeply into the correlations between text and images, our approach provides a nuanced and precise perspective for fake news detection. The method comprises five modules: (1) a multimodal feature extractor; (2) similarity representation learning and reasoning; (3) multimodal feature fusion; (4) a fake news detector; and (5) an event classifier. It is designed to discern the falseness of news articles in terms of text, image, or text-image "mismatch". Our main contributions are summarized as follows:

(1) We conduct a comprehensive consideration of both local and global features in text and images. To extract text features, we integrate BERT and Text-CNN, introducing an attention mechanism after Text-CNN to capture global text features. Simultaneously, leveraging VGG-19 pre-trained on ImageNet, we extract local features from images. Following VGG-19, an attention mechanism is incorporated to capture global image features.

(2) We employ similarity representation learning and inference to infer the similarity between images and text, thereby recognizing more intricate matching patterns.

(3) By integrating event-based adversarial networks with multimodal networks, we not only capture features specific to particular events but also learn the associations between modal features and events.

(4) We conduct extensive experiments on publicly available multimodal datasets. The results demonstrate the outstanding performance of the proposed model in fake news detection tasks, particularly on the Twitter dataset. In comparison to traditional detection models, this model consistently achieves superior results across multiple metrics, effectively enhancing the accuracy and performance of fake news identification.

    Figure 1: Example of inconsistent graphic content of fake news

    2 Related Work

In this section, we present the research related to the proposed model for detecting fake news. Fake news detection has been a widely researched area due to its significance in maintaining the accuracy and reliability of public information. Existing fake news detection methods can be classified into two categories: unimodal-based and multimodal-based fake news detection.

    2.1 Unimodal-Based Fake News Detection

In the domain of unimodal-based fake news detection, the prevailing strategy revolves around leveraging textual information to ascertain the authenticity of news articles. This approach involves evaluating text content, syntactic structures, themes, and other factors to establish the veracity of the news. In the study conducted by Liu [8], the TF-IDF algorithm is applied to extract text features, and these features are utilized as inputs for a Support Vector Machine (SVM) to distinguish the authenticity of news. In contrast to certain intricate deep learning models, the TF-IDF algorithm exhibits superior computational efficiency, particularly when handling extensive textual data. Nevertheless, it falls short in capturing the contextual relationships between words, a crucial aspect for precise determinations of information authenticity in fake news detection. Amid the advancements in deep learning technology, models based on neural networks can now acquire more profound and abstract features, enabling end-to-end learning. In the context of fake news detection, Ma et al. [9] employed Recurrent Neural Networks (RNNs), inputting all news texts associated with a specific event. The final hidden state of the RNN was subsequently leveraged to discern fake news at the event level. Although RNNs demonstrate proficiency in capturing contextual information within text sequences, they confront challenges in capturing long-distance dependencies when handling extensive sequences. Addressing the early detection challenge in fake news, Yu et al. [10] presented a Convolutional Neural Network (CNN)-based approach. This method initially groups news about the same event, transforming the textual content of each group into a document vector. Subsequently, a CNN extracts text features from multiple document vectors for fake news detection. In comparison to models like RNNs, which necessitate the consideration of sequence information, CNNs are notably efficient in text processing because they do not rely on sequence information. However, CNNs are typically employed to capture local features, with limited capacity for processing global information. Ma et al. [11] further improved model performance by introducing generative adversarial networks (GANs) to enhance the learning of textual representations in fake news detection. GANs enable enhanced and nuanced learning of text representations: the interplay between the generator and discriminator allows the model to acquire text representations that are both distinctive and abstract. It is noteworthy, however, that the application of GANs often necessitates considerable computing resources, such as high-performance GPUs, and entails prolonged training time. While the success of text feature-based fake news detection is evident in certain aspects, the consideration of textual features alone lacks comprehensiveness, as fake news may be accompanied by misleading images or charts.

With the continuous advancement of image processing technology, the decreasing difficulty of forging false images poses greater challenges for the general public in discerning the authenticity of news and, consequently, presents a more significant challenge for fake news detection [12]. Therefore, scholars have increasingly focused on the detection of fake images. Mahmood et al. [13] proposed a method that combines the smooth wavelet transform and the Discrete Cosine Transform (DCT) to detect and locate copy-move operations in images. This method comprehensively captures image features in the frequency domain, aiding in the more precise detection of copy-move operations. However, it encounters challenges in effectively addressing intricate textures or semantic information. Farooq et al. [14], combining Local Binary Pattern (LBP) features and texture features, introduced a method using a universal algorithm based on LBP to detect passive image forgery. While this method adeptly captures local texture information within images, its comprehension of global structure and context is constrained; in situations involving intricate forgery techniques, it lacks the requisite discriminative capacity. Peng et al. [15] identified forged images by examining resampling traces in the images. In contrast to approaches that require a reference image for comparison, this method does not necessitate obtaining a reference image beforehand, making it practical for real-world applications. However, it is typically employed for the overall assessment of whether an image is manipulated and does not offer detailed localization of forged regions. Zeng et al. [16] employed a hybrid deep-learning model to detect steganographic operations in JPEG images. The model utilizes techniques such as quantization and truncation to enhance its robustness and generalization capabilities. Quan et al. [17] developed a convolutional neural network-based universal model capable of classifying images into natural and computer-generated categories, making it suitable for various fake image detection scenarios.

Despite the promising results that unimodal methods can offer to some extent, data in social networks often involve multimodal information such as text and images. Unimodal methods fall short of adequately capturing and processing this diversity and complexity. As a result, researchers have begun to explore the combination of text and images to address the limitations of unimodal methods.

    2.2 Multimodal-Based Fake News Detection

Currently, deep neural networks (DNNs) excel in nonlinear representation [18], making them a prominent choice for many multimodal representation learning methods aimed at enhancing the capability of fake news detection. Jin et al. [19] proposed a deep learning-based approach capable of learning multimodal content and social information from news posts. They introduced an attention mechanism to fuse this information and obtain multimodal features, enhancing the model's focus on crucial information and improving the weight allocation for different modal data, allowing for more effective utilization of multimodal information. In EANN, Wang et al. [20] employed an adversarial network with a multimodal feature extractor to learn invariant features of events, acquiring multimodal features for each news article to facilitate fake news detection. Learning invariant features of events through adversarial networks enhances the model's generalization, yielding favorable results across diverse events. In MVAE, Khattar et al. [21] utilized a multimodal variational autoencoder for fake news identification, inputting the various modal features of posts into a bimodal variational autoencoder to obtain multimodal feature representations. The introduction of variational autoencoders aids in learning latent representations of data, thereby enhancing the model's expressive and generalization capabilities. Cui et al. [22] introduced an end-to-end deep embedding framework (SAME) for fake news detection. In this model, the emotions of post publishers serve as a basis for discerning fake news. By embedding emotional features with other characteristics through deep learning, the model distinguishes between real and fake news. Leveraging post publishers' emotions as a basis for judging fake news provides additional information, contributing to a more comprehensive understanding of the authenticity of news. SpotFake [23] utilizes a pre-trained BERT [24] model for text feature extraction from news posts and employs a VGG-19 model pre-trained on ImageNet [25] for image feature extraction. The use of pre-trained BERT models enables the learning of rich text representations. SpotFake+ [26], an enhanced version of SpotFake, utilizes an improved BERT variant, XLNet [27], for text feature extraction.

Despite the current technological advancements propelling the development of multimodal fake news detection, there remains limited exploration and utilization of the relationships between different modalities. This paper aims to address this gap by introducing similarity representation learning and inference, filling the research void in understanding the relationships between news text and visual information. By exploring multimodal information and similarity relationships, this study seeks to comprehensively understand and learn the representations of news articles. Additionally, the introduction of adversarial networks for learning invariant features of events aims to advance the frontier of research in multimodal fake news detection.

    3 Method

    3.1 Model Overview

This paper introduces a multimodal social media fake news detection model based on similarity reasoning and adversarial networks. The model comprises a multimodal feature extractor, similarity representation learning and inference, multimodal feature fusion, a fake news detector, and an event classifier. Initially, the model independently preprocesses text and images, subsequently extracting feature representations. Textual features are extracted using the BERT and Text-CNN models, and an attention mechanism is introduced after Text-CNN to capture the global features of the text. For image feature representation, the pre-trained VGG-19 model is employed to acquire local image features, followed by applying a Self-Attention mechanism to these local features to derive global image feature representations. The model learns local similarity representations between text and image from the local text features of Text-CNN and the local image features of VGG-19, and it acquires global similarity representations from the global features extracted from text and images, respectively. All the local and global similarity representations serve as nodes within a graph, and we calculate the edges connecting these nodes. The graph undergoes similarity reasoning, which involves updating the nodes and edges iteratively over N reasoning steps. The output of the global node from the final step is used as the inferred similarity representation, which is then passed through a fully connected layer to generate the ultimate similarity score. The textual features extracted by the BERT model are concatenated with the local text features extracted by the Text-CNN model to form the textual feature representation. Feature fusion is then performed by concatenating this textual feature representation with the local image features extracted by VGG-19 and the similarity representation resulting from text-image inference. The event classifier is a neural network consisting of two fully connected layers, each equipped with a corresponding activation function. It employs clustering algorithms to categorize newly emerged news items into specific event classes. The computed event loss is indicative of the similarity of the event distributions, with a larger loss denoting greater similarity. The fake news detector utilizes the Softmax function as the activation function of the output layer. This function transforms the output of the fully connected layer into activation values representing the probability of fake news. Through this mechanism, the model can classify input features, discerning whether the news is genuine or fake. The framework of the multimodal social media fake news detection model based on similarity reasoning and adversarial networks (EANBS) is illustrated in Fig. 2.

    Figure 2: EANBS model structural framework

    3.2 Multimodal Feature Extractor

    3.2.1 Textual Feature Extractor

This paper utilizes two critical text feature extractors: the Text-CNN and the BERT model. Text-CNN allows for a focused analysis of localized perspectives and fine-grained features within the text, while the BERT model excels at extracting deep-seated semantic characteristics from the text. The synergy between these two approaches enables a more efficient extraction of textual semantic features.

In Text-CNN, a convolutional layer is utilized to extract features at a local level. By applying convolutional operations, the model can capture nuanced characteristics within the text. This, in turn, facilitates a more comprehensive understanding of the localized information present in the text, ultimately refining the representation of text features. The standard configuration of the Text-CNN model consists of an embedding layer, a convolutional layer, a pooling layer, and a fully connected layer. The arrangement of the Text-CNN model is visually depicted in Fig. 3.

    Figure 3: The structure of Text-CNN model

Local textual feature: (a) The embedding layer: Words or phrases can be depicted as continuous, low-dimensional vectors. In the embedding layer, a sentence X consisting of m words can be represented as shown in Eq. (1):

$X_{1:m} = X_1 \oplus X_2 \oplus \cdots \oplus X_m$ (1)

where $X_i$ represents the i-th word of the current text and $\oplus$ represents the vector splicing operation.

(b) The convolutional layer: After converting text into word vectors using Word2Vec, we employ CNN's convolutional operation to capture local features within the text. This is achieved by configuring various convolution kernel sizes s to process textual fragments of different lengths. The convolutional operation can be represented as shown in Eq. (2):

$h_i = f\left(W \cdot X_{i:i+s-1}\right)$ (2)

where W represents the weight matrix of the convolution kernel, f is the activation function, and the vector composed of the $h_i$ is the feature vector extracted by the convolutional layer, i.e., $h = \{h_1, h_2, \cdots, h_n\}$, which is taken as the input to the pooling layer.

(c) The pooling layer: Following the convolutional layer, Text-CNN typically employs a max-pooling operation to reduce the dimensionality of the output generated by the convolutional operation. This helps in extracting essential features from the text effectively, as illustrated in Eq. (3):

$\hat{h} = \max\{h_1, h_2, \cdots, h_n\}$ (3)

(d) The fully connected layer: Finally, the vector representations obtained from the preceding pooling layer are combined through the fully connected layer to yield the local text representation H.
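To make steps (a)–(d) concrete, the following is a minimal PyTorch sketch of the Text-CNN branch. The embedding dimension, filter count, and output size are illustrative assumptions rather than the paper's exact settings; only the kernel sizes [1, 2, 3, 4] follow the configuration described in Section 4.2.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextCNN(nn.Module):
    """Embedding -> parallel 1-D convolutions -> max-pooling -> fully connected (Eqs. (1)-(3))."""
    def __init__(self, vocab_size, embed_dim=300, num_filters=32,
                 kernel_sizes=(1, 2, 3, 4), out_dim=32):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)             # (a) embedding layer
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes   # (b) kernels of size s
        )
        self.fc = nn.Linear(num_filters * len(kernel_sizes), out_dim)    # (d) fully connected

    def forward(self, token_ids):                        # token_ids: (batch, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)    # (batch, embed_dim, seq_len)
        feats = [F.relu(conv(x)) for conv in self.convs]
        pooled = [f.max(dim=2).values for f in feats]    # (c) max-pooling over time
        return self.fc(torch.cat(pooled, dim=1))         # local text representation H

# Usage: H = TextCNN(vocab_size=50000)(torch.randint(0, 50000, (8, 120)))
```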

Global textual feature: Incorporating the attention mechanism can help the neural network model focus more on essential words. This allows the model to prioritize information from these keywords while ignoring less significant segments. As a result, the influence of unnecessary data is reduced, enhancing the model's ability to extract crucial information. This, in turn, boosts the model's efficiency and accuracy.

The Self-Attention mechanism, a specific case of the general attention mechanism, stands out for its ability to learn intrinsic textual correlations. When combined with CNN or RNN, it significantly enhances the model's learning capabilities and improves the interpretability of the neural network. The computational steps of the Self-Attention mechanism can be represented as Eqs. (4)–(7):

$Q = H W_Q$ (4)

$K = H W_K$ (5)

$V = H W_V$ (6)

$\tilde{H} = \mathrm{softmax}\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$ (7)

where H represents the local textual features, Q is the query vector, K is the "key" vector, V is the value vector, $W_Q$, $W_K$, $W_V$ are the corresponding weight matrices, and $\tilde{H}$ represents the global textual features.
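As a minimal sketch, Eqs. (4)–(7) can be implemented as follows, assuming the local features are arranged as a sequence; the same module can be reused for the image regions in Section 3.2.2.

```python
import math
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """Scaled dot-product self-attention (Eqs. (4)-(7)) over a sequence of local features."""
    def __init__(self, dim):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)   # Eq. (4): Q = H W_Q
        self.W_k = nn.Linear(dim, dim, bias=False)   # Eq. (5): K = H W_K
        self.W_v = nn.Linear(dim, dim, bias=False)   # Eq. (6): V = H W_V

    def forward(self, H):                            # H: (batch, n, dim)
        Q, K, V = self.W_q(H), self.W_k(H), self.W_v(H)
        scores = Q @ K.transpose(1, 2) / math.sqrt(H.size(-1))
        return torch.softmax(scores, dim=-1) @ V     # Eq. (7): global features
```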

BERT, a pre-trained language representation model, is constructed with stacked bidirectional Transformer [28] encoder structures. In contrast to traditional unidirectional models, BERT comprehensively captures contextual nuances in text, proving particularly advantageous for the intricate textual scenarios encountered in tasks such as fake news detection. Leveraging extensive unsupervised pre-training, BERT acquires rich language representations encompassing a broad spectrum of semantic knowledge. The incorporation of these pre-trained weights into our task endows the model with the ability to benefit from a wealth of prior knowledge, thereby enhancing its capacity to articulate text features. Through a self-attention mechanism, BERT adeptly learns global contextual information, facilitating a deeper understanding of semantic relationships across the entire text. This results in more comprehensive word vector representations. Consequently, we integrate Text-CNN and BERT as the two core modules for text feature extraction. First, the input text is tokenized, and the tokens are converted into word embeddings and positional embeddings. These embeddings are then input into the BERT model. The feature extraction process by BERT is outlined in Eqs. (8), (9):

$O_B = \mathrm{BERT}(X)$ (8)

$R_B = \sigma\left(W_1 \cdot O_B\right)$ (9)

where X represents the input text and $W_1$ is the weight matrix of the fully connected layer in the corresponding pre-trained model.
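A hedged sketch of the BERT branch in Eqs. (8), (9) using the Hugging Face transformers library; the checkpoint name (a common choice for Weibo text), the maximum length, and the 32-dimensional projection are illustrative assumptions, not the paper's reported settings.

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
bert = BertModel.from_pretrained("bert-base-chinese")
fc = torch.nn.Linear(bert.config.hidden_size, 32)               # W1: projection layer

text = "..."  # placeholder news text
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    outputs = bert(**inputs)                    # Eq. (8): contextual token representations
pooled = outputs.last_hidden_state[:, 0]        # [CLS] vector as the sentence summary
r_b = torch.sigmoid(fc(pooled))                 # Eq. (9): projection through W1
```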

To synergistically leverage the advantages of both the Text-CNN and BERT models, this study adopts a concatenation approach, as illustrated in Eq. (10):

$T = R_B \oplus H$ (10)

Specifically, the text features extracted by BERT are concatenated with the local features extracted by Text-CNN to form the final representation of text features T. This fusion strategy aims to integrate global and local information, enabling a more comprehensive and multidimensional expression of the features of news text.

In the context of fake news detection tasks, the significance of this fusion strategy lies in its ability to enhance the model's comprehension of complex, multilayered information. By integrating both global and local features, the model becomes adept at distinguishing between authentic and fake news, as it can more comprehensively capture the semantic and structural information embedded in news text.

    3.2.2 Visual Feature Extractor

In general, the brain learns and comprehends visual information much faster than textual information. Based on this insight, we also consider the visual features of news. Integrating the extracted image features with textual features enhances the feature representation, leading to a more comprehensive understanding and assessment of fake news. VGG-19 is a classical convolutional neural network architecture that exhibits exceptional performance in image classification tasks. The hierarchical structure of VGG-19, with its relative depth, facilitates the extraction of abstract features from images, demonstrating excellent adaptability to the intricate patterns and structures potentially present in images associated with fake news. Pre-training on large-scale image datasets endows VGG-19 with a rich feature representation. Leveraging these pre-trained weights allows the model to learn universal visual features from diverse images, a significant advantage in the context of fake news image detection. Therefore, this paper employs the pre-trained VGG-19 model to extract image features. To better preserve image information, the input image is resized to 256 × 256 pixels and then center-cropped to 224 × 224 pixels. The cropped image is preprocessed and fed into the VGG-19 model. A fully connected layer is added at the end, adjusting the final image feature dimension to c and serving as the representation of the image's local regions $I = \{I_1, I_2, \cdots, I_n\}$, with $I_i \in \mathbb{R}^c$. The above can be represented in Eqs. (11), (12):

$V_g = \mathrm{VGG19}(g)$ (11)

$I = \sigma\left(W_2 \cdot V_g\right)$ (12)

where g represents the input image and $W_2$ is the weight matrix of the fully connected layer in the corresponding pre-trained model. The self-attention mechanism is then utilized to derive the global image region features by Eqs. (13)–(16):

$Q = I W_Q$ (13)

$K = I W_K$ (14)

$V = I W_V$ (15)

$\tilde{I} = \mathrm{softmax}\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V$ (16)

where I represents the local visual features, Q is the query vector, K is the "key" vector, V is the value vector, $W_Q$, $W_K$, $W_V$ are the corresponding weight matrices, and $\tilde{I}$ represents the global visual features. The choice of Text-CNN, BERT, and VGG-19 as feature extractors is driven by their outstanding performance in their respective domains and their complementary characteristics. This selection aims to enhance the performance of fake news detection.
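The visual pipeline of Eqs. (11), (12) can be sketched as follows with torchvision; the output dimension c = 32 and the use of the 4096-dimensional penultimate VGG-19 layer are assumptions for illustration, and the input filename is hypothetical.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),     # resize to 256 x 256
    transforms.CenterCrop(224),        # center crop to 224 x 224
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)  # pre-trained on ImageNet
vgg.eval()
fc = nn.Linear(4096, 32)               # W2: adjusts the feature dimension to c (assumed c = 32)

img = Image.open("example.jpg").convert("RGB")   # hypothetical input image
x = preprocess(img).unsqueeze(0)
with torch.no_grad():
    feats = vgg.features(x)                      # Eq. (11): convolutional feature maps
    feats = vgg.avgpool(feats).flatten(1)
    feats = vgg.classifier[:-1](feats)           # 4096-d penultimate representation
local_visual = torch.sigmoid(fc(feats))          # Eq. (12): local visual features I
```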

    3.3 Similarity Representation Learning and Reasoning

In the realm of fake news detection, traditional methodologies often focus solely on extracting text and image features, disregarding the inherent parallels between the two. However, comprehending this intrinsic similarity is paramount for accurate fake news prediction. Thus, the introduction of similarity representation learning and inference emerges as a crucial avenue. This approach facilitates the absorption of shared semantic nuances between text and image, consequently elevating the accuracy and robustness of fake news detection.

    3.3.1 Similarity Representation Learning

Traditional methods use the cosine or Euclidean distance to represent the similarity between two feature vectors, which captures relevance to a certain degree but lacks detailed correspondence. In this paper, we follow [29] to compute a similarity representation, which is a similarity vector instead of a similarity scalar, to capture more detailed associations between feature representations from different modalities. The similarity function can be represented by Eqs. (17), (18):

$\delta = |x - y|^{2}$ (17)

$\mathrm{sim}(x, y) = \frac{W \delta}{\left\| \delta \right\|_{2}}$ (18)

where $x \in \mathbb{R}^{d}$, $y \in \mathbb{R}^{d}$, $|\cdot|^{2}$ is the element-wise square, $\|\cdot\|_{2}$ is the $l_2$ normalization, and $W \in \mathbb{R}^{q \times d}$ is a learnable parameter matrix used to obtain the q-dimensional similarity vector $\mathrm{sim}(x, y)$.
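A minimal sketch of the similarity vector of Eqs. (17), (18); the module name is ours, and only the formula itself comes from the text above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityVector(nn.Module):
    """Similarity *vector* between two d-dim features (Eqs. (17), (18))."""
    def __init__(self, d, q):
        super().__init__()
        self.W = nn.Linear(d, q, bias=False)     # W in R^{q x d}

    def forward(self, x, y):                     # x, y: (..., d)
        diff = (x - y) ** 2                      # Eq. (17): element-wise squared difference
        diff = F.normalize(diff, p=2, dim=-1)    # l2 normalization
        return self.W(diff)                      # Eq. (18): q-dimensional similarity vector
```

Each such q-dimensional vector (e.g., between a word feature and its attended visual feature) becomes one node of the reasoning graph in Section 3.3.2.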

Local Similarity Representation. To calculate the local similarity representations between the local features of the visual and textual modalities, we apply textual-to-visual attention [30]. First, the cosine similarity between the region feature $I_i$ and the word feature $H_j$ is calculated by Eq. (19):

$c_{ij} = \frac{I_i^{\top} H_j}{\|I_i\| \, \|H_j\|}$ (19)

The cosine similarity matrix is then normalized, as represented in Eq. (20):

$\bar{c}_{ij} = \frac{\left[c_{ij}\right]_{+}}{\sqrt{\sum_{i} \left[c_{ij}\right]_{+}^{2}}}$ (20)

where $[x]_{+} = \max(x, 0)$. Next, we calculate the similarity between the word feature $H_j$ and the entire set of visual features. We compute an attention weight for each region and then generate the attended visual feature $a_j^{I}$ with respect to the j-th word by Eqs. (21), (22):

$\alpha_{ij} = \frac{\exp\left(\lambda \bar{c}_{ij}\right)}{\sum_{i} \exp\left(\lambda \bar{c}_{ij}\right)}$ (21)

$a_j^{I} = \sum_{i} \alpha_{ij} I_i$ (22)

where $\alpha_{ij}$ represents the attention weight of each region and λ is a temperature parameter.
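A sketch of the textual-to-visual attention of Eqs. (19)–(22); the temperature value lam is an assumption, as the paper does not report it.

```python
import torch
import torch.nn.functional as F

def text_to_visual_attention(I, H, lam=9.0):
    """SCAN-style textual-to-visual attention (Eqs. (19)-(22)).
    I: (n_regions, c) local visual features; H: (n_words, c) local textual features.
    lam is a temperature whose value here is an assumption."""
    c = F.normalize(I, dim=-1) @ F.normalize(H, dim=-1).t()    # Eq. (19): cosine c_ij
    c = F.relu(c)                                              # [c_ij]_+
    c = c / (c.pow(2).sum(dim=0, keepdim=True).sqrt() + 1e-8)  # Eq. (20): normalize per word
    alpha = torch.softmax(lam * c, dim=0)                      # Eq. (21): region weights
    a_I = alpha.t() @ I                                        # Eq. (22): attended visual per word
    return a_I                                                 # (n_words, c)
```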

    3.3.2 Similarity Reasoning

All local and global similarity representations serve as nodes $s_i$ of a graph, and the edge between nodes i and j is computed from their affinity, as in Eq. (23):

$e_{ij} = \mathrm{softmax}_{j}\left(\left(F_{in}\, s_i\right)^{\top} \left(F_{out}\, s_j\right)\right)$ (23)

where $F_{in} \in \mathbb{R}^{q \times q}$ and $F_{out} \in \mathbb{R}^{q \times q}$ represent the linear transformations for incoming and outgoing nodes, respectively.

In this paper, the similarity is inferred through similarity propagation, a linear transformation, and a non-linear activation function. Specifically, the similarity is first propagated as in Eq. (24):

$\hat{s}_i = \sum_{j} e_{ij}\, s_j$ (24)

and each node is then updated through a linear transformation followed by a non-linear activation, as in Eq. (25):

$s_i \leftarrow \mathrm{ReLU}\left(W_t\, \hat{s}_i\right)$ (25)

where $W_t \in \mathbb{R}^{q \times q}$ is a learnable parameter.

We iterate the reasoning for N steps, and the output of the global node at the last step of iterative reasoning is utilized as the inferred similarity representation; it is input into a fully connected layer to obtain the final similarity score, as in Eq. (26):

$S_f = W_s\, s_g^{(N)}$ (26)

where $s_g^{(N)}$ denotes the global node after the N-th reasoning step and $W_s$ is the weight matrix of the fully connected layer.
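A hedged sketch of the N-step graph reasoning of Eqs. (23)–(26), following the similarity-graph-reasoning formulation of reference [29]; the step count, module layout, and placement of the global node are assumptions.

```python
import torch
import torch.nn as nn

class SimilarityReasoning(nn.Module):
    """N-step graph reasoning over similarity nodes (Eqs. (23)-(26)), a sketch."""
    def __init__(self, q, n_steps=3):
        super().__init__()
        self.f_in = nn.Linear(q, q, bias=False)    # F_in
        self.f_out = nn.Linear(q, q, bias=False)   # F_out
        self.w_t = nn.Linear(q, q)                 # W_t
        self.w_score = nn.Linear(q, 1)             # final fully connected layer W_s
        self.n_steps = n_steps                     # N reasoning steps (value assumed)

    def forward(self, nodes):     # nodes: (batch, n_nodes, q); global node assumed last
        s = nodes
        for _ in range(self.n_steps):
            e = self.f_in(s) @ self.f_out(s).transpose(1, 2)  # Eq. (23): pairwise affinities
            e = torch.softmax(e, dim=-1)
            s = torch.relu(self.w_t(e @ s))        # Eqs. (24), (25): propagate and transform
        return self.w_score(s[:, -1])              # Eq. (26): score from final global node
```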

    3.4 Multimodal Feature Fusion

The representations of textual features, visual features, and the similarity representation between text and image are combined to create a multimodal feature representation $R_F$, as in Eq. (27):

$R_F = T \oplus I \oplus S_f$ (27)

where T is the concatenation of the textual features extracted by BERT and the local features extracted by Text-CNN, I represents the local visual features extracted by VGG-19, and $S_f$ represents the similarity representation between the textual features extracted by Text-CNN and the visual features extracted by VGG-19. This concatenation method enables the retention of distinctive features from each modality, facilitating a better understanding of the interactions between text and images and thereby improving performance in fake news detection tasks. The multimodal feature extractor is represented as $F(M; \theta_f)$, where M usually refers to a set of textual and visual posts and $\theta_f$ represents the parameters to be learned.

    3.5 Fake News Detector

This module implements a neural network for detecting fake news, built upon the multimodal feature extractor. Textual features, visual features, and their similarity representation are combined to create the multimodal feature representation $R_F$. This representation is then used as input to the network, which deploys a fully connected layer with SoftMax for classification to predict whether the post is fake news. The fake news detector is denoted as $C(F; \theta_C)$. The probability that a post is fake is shown in Eq. (28):

$\hat{y} = \mathrm{softmax}\left(W_C\, R_F + b_C\right)$ (28)

where $\theta_C$ represents all the parameters of this network and $\hat{y}$ represents the probability that the current post is fake news. Real news is labeled as 0 and fake news as 1. Y is used to denote the true labels of news events, and the detection loss is computed using sigmoid cross-entropy, as shown in Eq. (29):

$L_C\left(\theta_f, \theta_C\right) = -\mathbb{E}_{(m, y) \sim (M, Y)}\left[y \log \hat{y} + (1 - y) \log\left(1 - \hat{y}\right)\right]$ (29)

where M usually refers to a set of textual and visual posts. We seek to minimize the loss in classifying fake news by searching for optimal parameters, and this process can be represented in Eq. (30):

$\left(\hat{\theta}_f, \hat{\theta}_C\right) = \arg\min_{\theta_f, \theta_C} L_C\left(\theta_f, \theta_C\right)$ (30)
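A minimal sketch of the detector of Eqs. (28)–(30); the fused feature dimension and batch data are arbitrary stand-ins for the output of the multimodal feature extractor.

```python
import torch
import torch.nn as nn

detector = nn.Linear(96, 2)                     # theta_C: maps R_F to {real, fake} logits
criterion = nn.CrossEntropyLoss()               # cross-entropy loss as in Eq. (29)

R_F = torch.randn(8, 96)                        # stand-in multimodal features F(M; theta_f)
labels = torch.randint(0, 2, (8,))              # 0 = real, 1 = fake
probs = torch.softmax(detector(R_F), dim=-1)    # Eq. (28): column 1 is P(fake)
loss_C = criterion(detector(R_F), labels)       # Eq. (29); minimized over theta_f, theta_C
```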

    3.6 Event Classifier

The event classifier consists of two fully connected layers, and its role is to classify the K events. It evaluates the performance and robustness of the feature extractor by comparing the differences between the feature representations and the original events. This approach eliminates the strict dependence on specific events in the collected dataset and enables better generalization to unseen events, which in turn guides the training of the feature extractor. The event classifier is denoted as $D(F; \theta_D)$, where $\theta_D$ represents its parameters. The loss of the event classifier is defined by cross-entropy, and the label set of events is denoted by $Y_e$. This process can be represented in Eq. (31):

$L_D\left(\theta_f, \theta_D\right) = -\mathbb{E}_{(m, y_e) \sim (M, Y_e)}\left[\sum_{k=1}^{K} \mathbb{1}_{[k = y_e]} \log D\left(F\left(m; \theta_f\right); \theta_D\right)\right]$ (31)

The minimization of the loss function is expressed as Eq. (32):

$\hat{\theta}_D = \arg\min_{\theta_D} L_D\left(\theta_f, \theta_D\right)$ (32)

By using the loss function $L_D$ to measure the similarities and differences between events, a larger value of the loss function indicates that the distributions of different events are more similar. Therefore, the feature extractor searches for the optimal parameters $\theta_f$ that maximize the loss function $L_D$, allowing for better distinction between events and fake news and discovering the association between them.

    3.7 Model Integration

During training, minimizing the loss $L_C$ is crucial to enhance the model's ability to discern fake information and improve classification accuracy. To ensure that the model can effectively acquire shared event features, it is necessary to maximize the loss $L_D$ of the event classifier with respect to the feature extractor, while the event classifier itself strives to minimize $L_D$ to extract event-specific information from the multimodal feature representations. Consequently, the overall loss can be represented in Eq. (33):

$L_{final}\left(\theta_f, \theta_C, \theta_D\right) = L_C\left(\theta_f, \theta_C\right) - \lambda L_D\left(\theta_f, \theta_D\right)$ (33)

where the coefficient λ ∈ R is employed to strike a balance between the objective functions of the fake news detector and the event classifier. For this minimax game, this paper utilizes a Gradient Reversal Layer (GRL). During the forward pass, the Gradient Reversal Layer acts as an identity function, whereas during the backward pass, the GRL multiplies the gradients by −λ and propagates them to the preceding layer. The parameter optimization process can be represented in Eq. (34):

$\theta_f \leftarrow \theta_f - \eta \left(\frac{\partial L_C}{\partial \theta_f} - \lambda \frac{\partial L_D}{\partial \theta_f}\right)$ (34)

where η is the learning rate.
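The Gradient Reversal Layer can be implemented in a few lines of PyTorch; this is the standard GRL construction, shown here as a sketch.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Gradient Reversal Layer: identity on the forward pass,
    gradients multiplied by -lambda on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # multiply gradients by -lambda

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: event_logits = event_classifier(grad_reverse(R_F, lam)), so minimizing
# L_D w.r.t. the event classifier simultaneously maximizes it w.r.t. the extractor.
```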

    4 Experiment

    4.1 Dataset

Weibo Dataset: The Weibo dataset, as presented by Jin et al. [32], has been extensively employed in numerous studies focused on multimodal fake news. This dataset spans confirmed fake news from May 2012 to January 2016, officially verified by Weibo, and real news validated by China's authoritative news source, Xinhua News Agency. During the data preprocessing phase, a meticulous multi-step approach was applied to ensure dataset quality. Initially, duplicate images were removed to alleviate redundancy in the data. Subsequently, low-quality images were filtered out to ensure that all images in the dataset maintained high clarity and usability standards. Only data samples featuring both textual and image modalities were utilized, to prevent distributional biases in the unimodal and multimodal experiments and thereby enhance the persuasiveness and credibility of the results. The dataset was partitioned into training, validation, and testing sets in a 7:1:2 ratio.

Twitter Dataset: The Twitter dataset utilized in this study is sourced from the MediaEval 2015 dataset [33], encompassing both a training set and a test set. Each news item in the dataset consists of supplementary images/videos, text, and labels. In the data preprocessing stage, punctuation, numbers, special characters, and short words were removed from the tweets. Given the emphasis of our work on textual and image information, tweets with videos were excluded. Examples of images and corresponding text in the dataset are illustrated in Figs. 4 and 5. The specific distribution of each dataset is presented in Table 1.

    Table 1: Distribution of each dataset

    Figure 4: Instances of fake news in the Twitter dataset

    Figure 5: Instances of real news in the Twitter dataset

    4.2 Experimental Details

The specific model parameters for this experiment are detailed in Table 2. Throughout this work, several experiments were conducted to optimize these parameter settings; by making repeated adjustments and conducting multiple experiments, we were able to identify the optimal parameter configurations. The Text-CNN filter windows use a fixed set of sizes [1, 2, 3, 4], which serves to reduce the hyperparameter search space of the model. This simplification alleviates the complexity and tedium associated with the hyperparameter tuning process. By capturing features at various levels, the model becomes better suited to adapt to text inputs of differing lengths and complexities. This adaptation contributes to the enhancement of classification accuracy and robustness across various scenarios.

    Table 2: Model parameters

    4.3 Evaluation Metrics

We used the traditional performance metrics, namely accuracy, recall, precision, and F1 score, to evaluate the proposed model. Here is a brief explanation of these metrics:

where True Positive (TP): fake news forecasted as fake; True Negative (TN): real news forecasted as real; False Positive (FP): real news forecasted as fake; False Negative (FN): fake news forecasted as real.
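From these four counts, the metrics take their standard forms:

```latex
\mathrm{Accuracy}  = \frac{TP + TN}{TP + TN + FP + FN}, \qquad
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall}    = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```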

    4.4 Baseline

To validate the effectiveness of the proposed model, this study selects two categories of baseline models: unimodal models and multimodal models.

    4.4.1 Unimodal Models

The unimodal models employ either textual or visual information alone to detect fake news. Thus, this paper adopts the following two baselines:

• Textual. The model exclusively relies on the textual content within the post for post classification, utilizing a pre-trained 300-dimensional Word2Vec model from Sogou Labs to represent word vectors. First, Text-CNN is employed for text feature extraction, transforming the textual information into a feature representation. Subsequently, the extracted text features are processed through a 32-dimensional fully connected layer to accomplish the task of detecting fake news.

• Visual. The model relies exclusively on the image information within the post for post classification, starting by extracting the image feature, denoted as $F_v$, using a pre-trained VGG-19 model. Subsequently, the acquired image feature $F_v$ is input into a 32-dimensional fully connected layer to make predictions regarding fake news.

    4.4.2 Multimodal Models

    Multimodal approaches utilize information from multiple modalities to classify fake news.

• VQA [34]. Visual Question Answering (VQA) is a system that provides answers to questions based on a given image. While the original VQA model was designed for a multi-class classification task, the primary focus of this paper is on binary classification.

• NeuralTalk [35]. NeuralTalk is a model designed to generate captions for a given image. It obtains latent representations of the text sequence by averaging the output of the RNN (Recurrent Neural Network) at each time step. These latent representations are then passed to a fully connected layer for the prediction of fake news. Both the LSTM and the fully connected layer have a hidden size of 32.

• att-RNN [32]. att-RNN employs an attention mechanism that combines textual, visual, and social contextual features. It utilizes Long Short-Term Memory (LSTM) networks to extract textual features and integrates them with visual features through a cross-modal attention mechanism.

• EANN [20]. The Event Adversarial Neural Networks (EANN) consist of three components: the multimodal feature extractor, the fake news detector, and the event discriminator. The multimodal feature extractor extracts textual and visual features from posts and collaborates with the fake news detector to learn distinctive representations for fake news detection. The event discriminator is responsible for removing event-specific features. All parameters used in training this model remain consistent with those of the original model.

• MVAE [21]. This methodology employs an encoding-decoding paradigm to capture shared representations encompassing both visual and textual modalities in order to detect fake news. Through the training of a multimodal variational autoencoder, the approach involves the concatenation of text and visual features to derive multimodal representations. These representations undergo decoding, guided by a reconstruction loss, to revert them to their original modalities. The resulting multimodal representations are strategically harnessed for the discrimination of fake news.

• BDANN [36]. Textual features in BDANN are extracted using a pre-trained BERT model, while visual features are obtained through a pre-trained VGG-19 model. Dependency on specific events is mitigated by incorporating a domain classifier.

• Roberta+CNN [37]. This framework incorporates a dedicated convolutional neural network model for image analysis and a sentence transformer for textual analysis. Features extracted from the visual and textual modalities are embedded through dense layers, ultimately converging to predict deceptive imagery.

• MEAN [38]. This approach comprises two integral components: a multimodal generator and a dual discriminator. The multimodal generator is instrumental in extracting latent discriminative feature representations for both text and image modalities. For each modality, a decoder is employed to mitigate information loss during the generation process. The dual discriminator consists of a modality discriminator and an event discriminator. These discriminators are designed to classify features based on either modality or event, with network training guided by an adversarial scheme.

This paper employs conventional evaluation metrics for binary classification to assess the model's performance. These metrics include accuracy, precision, recall, and F1 score. The experimental comparison results are presented in Table 3 and Fig. 6.

    Table 3: Comparison of accuracy,precision,recall,and F1 score for different baselines

    Figure 6: Comparison of experimental results

According to Table 3, on the Weibo dataset, the textual modality demonstrates a significant advantage in the task of detecting fake news compared to the visual modality. Text is more effective in conveying the core content of events, and its embedded semantic information directly facilitates the identification of fake news. In contrast, although the visual modality provides some visual information, its expressive capability is relatively limited, making it challenging to offer semantic information as rich as that of text. Therefore, in the fake news detection task, the textual modality proves more effective than the visual modality, better distinguishing between genuine and fake news information. While the unimodal models exhibit some effectiveness in fake news detection, their performance remains inadequate compared to multimodal models, further confirming the strength of multimodal fake news detection methods. Multimodal fake news detection methods can fully leverage diverse information sources, such as text and images, to obtain more comprehensive and enriched feature representations. Text and images complement each other in expressing information, and through the effective fusion of these modalities' features, the model can enhance its ability to identify fake news.

Among the multimodal fake news detection models, MVAE exhibits strong performance by leveraging a multimodal variational autoencoder, outperforming models such as VQA, NeuralTalk, and att-RNN. The EANN and BDANN models introduce event adversarial neural networks, achieving superior results in fake news detection. Through the incorporation of event adversarial neural networks, the EANN and BDANN models excel in learning and applying features common among events, thereby demonstrating robust performance in fake news detection tasks. Event adversarial networks contribute to the model's ability to learn more general and transferable feature representations, mitigating reliance on specific events and enhancing both robustness and generalization. In comparison to the EANN and BDANN models, MEAN exhibits outstanding performance by learning both modality-invariant and event-invariant features through dual discriminators. On the Twitter dataset, the models' relative performance is similar to that on the Weibo dataset.

The proposed EANBS fake news detection model demonstrates superior performance across various metrics on both the Weibo and Twitter datasets compared to the contrastive models. On the Twitter dataset, the accuracy and precision of fake news detection surpass the best results of the comparative methods by 0.07% and 2.9%, respectively. On the Weibo dataset, the recall and precision of fake news detection surpass the best results of the comparative methods by 4% and 1%, respectively. Leveraging similarity representation learning to capture relationships between different modalities, the model comprehensively understands news content, thereby improving the accuracy and robustness of fake news detection. The introduction of adversarial networks further enhances the model's performance, aiding in learning more robust feature representations and eliminating dependence on specific events. This improvement increases the model's generalization capability towards unseen events, effectively identifying fake news and enhancing the overall quality of news on social media.

    4.5 Analysis of Ablation Experiments

To verify the significance of the model components outlined in this paper, we created several model variants. These variants primarily fall into three types: EANBS-SIM, which eliminates similarity representation learning and reasoning; EANBS-BERT, which removes the BERT component; and EANBS-GAN, which excludes the adversarial neural networks.

(1) Eliminate Similarity Representation Learning and Reasoning. The textual modality employs the Text-CNN and BERT models to extract textual features, while the pre-trained VGG-19 model is used to extract visual features for the visual modality. Subsequently, the resulting feature vectors from both modalities are merged and used as input to both the event classifier and the fake news detector.

(2) Eliminate the BERT Component. The Text-CNN model is employed for the textual modality, while the pre-trained VGG-19 model is used for the visual modality. The similarity representation learning and reasoning module calculates the similarity between the extracted textual and visual features. Finally, the textual features, visual features, and their similarity are concatenated as input for both the event classifier and the fake news detector.

(3) Eliminate the Adversarial Neural Networks. For the textual modality, both the Text-CNN and BERT models are employed to extract textual features, while the pre-trained VGG-19 model is used for visual feature extraction. Subsequently, the similarity representation learning and reasoning module computes the similarity between the textual features obtained from Text-CNN and the visual features obtained from the pre-trained VGG-19. The resulting textual features, visual features, and their similarity are then combined and input into a fully connected layer with SoftMax for classification.

The results of the ablation experiments are presented in Table 4, indicating that the removal of any component of the model results in a noticeable decrease in classification accuracy. This underscores the effectiveness of each model component in the experiments. The introduction of BERT provides our model with enhanced semantic understanding and more robust feature representation capabilities, aiding in capturing crucial features within fake news text. Utilizing the BERT model allows for more accurate comprehension of semantic relationships in text, enabling the identification of hidden information and effective discrimination between fake and genuine news. The similarity representation learning and inference model proves effective in capturing the relationship, i.e., the similarity, between news text and image information in fake news detection. Through this model, common features between text and images are learned, facilitating inference regarding their degree of similarity. Such a model enables a more comprehensive understanding of news events, leading to more accurate assessments of news authenticity.

    Table 4: Comparison of results of ablation experiments

While the role of the adversarial neural network model in fake news detection may be relatively weaker, comprehensive experiments combining BERT, the similarity representation learning and inference model, and the adversarial neural network model demonstrate superior performance compared to the ablation experiment results. This suggests crucial complementary and synergistic effects among these components in enhancing fake news detection. By synergistically leveraging BERT's semantic understanding, the association-capturing capabilities of the similarity representation learning and inference model, and the feature fusion optimization of the adversarial neural network model, we achieve outstanding results in fake news detection on two datasets. This multimodal combination approach offers new perspectives and technical means for fake news detection research, enhancing model performance and reliability.

    4.6 Visualization Analysis

To further assess the efficacy of the event classifier, we visualize the textual feature representations obtained by the model in this paper, both with and without the adversarial neural networks, using the Weibo test dataset. Fig. 7 depicts the visualization of textual representations, where red dots represent labeled features of fake news and blue dots represent labeled features of real news. Based on the feature distribution, it can be observed that the model without the adversarial neural networks can learn distinguishable features. However, the learned features are still intertwined when compared to the feature representations learned by the model in this paper. This also indicates that during the training phase, the event classifier endeavors to eliminate dependencies between feature representations and specific events. Through the minimax game, the multimodal feature extractor can acquire invariant feature representations of different events, enabling the learned generic features to be employed for transfer learning to discern the authenticity of breaking fake news in sudden events. This enhances the model's transferability and its ability to generalize to new events, ultimately improving the performance of fake news detection.
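The paper does not name its projection method; a common way to produce such a 2-D scatter of high-dimensional features is t-SNE, sketched here under that assumption with stand-in data.

```python
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# feats: (n_samples, dim) textual features from the model; labels: 0 = real, 1 = fake.
feats = torch.randn(500, 32).numpy()            # stand-in features for illustration
labels = torch.randint(0, 2, (500,)).numpy()

emb = TSNE(n_components=2, random_state=0).fit_transform(feats)  # project to 2-D
plt.scatter(emb[labels == 0, 0], emb[labels == 0, 1], c="blue", s=5, label="real")
plt.scatter(emb[labels == 1, 0], emb[labels == 1, 1], c="red", s=5, label="fake")
plt.legend()
plt.show()
```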

    Figure 7: Visualization of textual feature representations learned on the Weibo test dataset

    4.7 Parameter Analysis

To examine the influence of model parameters on performance, we conducted several experimental sets on the two datasets, Weibo and Twitter, followed by a comparative analysis of their outcomes.

These experiments primarily focused on variations in the learning rate and different settings of the optimizer. Figs. 8 and 9 demonstrate the comparison of the experimental results.

(1) Effect of Learning Rate: Fig. 8 illustrates the impact of various learning rate values on the performance of the proposed model on the two datasets, Weibo and Twitter. As shown in the figure, the model achieved its highest accuracy and F1 score on both datasets when the learning rate was set to 0.001. Therefore, in our experiments, we chose to set the learning rate to 0.001.

(2) Impact of Optimizers: Fig. 9 illustrates the impact of different optimizers on the performance of the proposed model on the Weibo dataset. In terms of model prediction accuracy, the use of the SGD optimizer results in a classification accuracy of around 0.85 after 50 rounds of training, while the Adam optimizer achieves a classification accuracy close to 1.0. Therefore, in our experiments, we opted to use the Adam optimizer for model training. The optimizer guides the various parameters of the loss function during backpropagation to update in the correct direction with an appropriate magnitude, continuously approaching the global minimum. In deep learning, selecting an appropriate optimizer can significantly improve both training efficiency and accuracy.

    Figure 9: Effects of different optimizers on the results

    4.8 Convergence Analysis

To verify the convergence of the proposed model, we selected the parameters α = 10 and β = 0.75 for the experiments. Fig. 10 illustrates the changes in the loss function on the two datasets, Weibo and Twitter. In the initial stages of training, the loss rapidly decreases, indicating that the choice of learning rate is appropriate and the model has entered the gradient descent process. As training progresses, the loss function gradually stabilizes, indicating that the model has reached a certain state of equilibrium and has undergone effective learning.

    Figure 10: Change of loss

    4.9 Fault Case Study

To further illustrate the performance of our proposed method, we collected and analyzed some failure cases. Figs. 11a and 11b depict two instances where our proposed approach failed to detect fake news. In Fig. 11a, the excessive use of exclamation marks in the post's text, along with a presentation format not typical of traditional news media, is likely to impact the model's discriminative ability on the data. In Fig. 11b, the post's text is exceptionally short, resulting in suboptimal performance of the proposed similarity reasoning and adversarial network models. Additionally, to explore enhanced performance, we intend to address these limitations in future work.

    Figure 11: Certain fake news that cannot be correctly classified by the proposed EANBS

    5 Conclusion

The spread of fake news not only damages the reputation of news organizations but also pollutes the online information landscape, posing a significant threat to the growth of social media. This paper aims to address the issue of detecting fake news in social media by constructing a multimodal social media detection model that utilizes similarity reasoning and adversarial networks. This is achieved by examining the similarities between the textual and visual components of fake news content. Through the analysis of the results from various feature experiments, we have discovered that the similarity features between the textual and visual aspects are highly effective in distinguishing between fake and real news. By implementing a game between the feature extractor and classifier, the model can acquire event-invariant representations by eliminating specific event-related features, thereby reducing the strong dependence on specific events in fake news. Moreover, the model can detect shared event characteristics in fake news, which improves its ability for feature transfer and the identification of fake news in emerging events. We have conducted numerous experiments on two datasets to demonstrate the model's effectiveness. However, it is worth noting that we have primarily focused on the correlation between text and image in our model. As a result, we intend to explore the integration of video information features in our future research.

Acknowledgement: The authors would like to thank the anonymous referees for their valuable comments and helpful suggestions.

Funding Statement: This paper is supported by the National Natural Science Foundation of China (No. 62302540), with author F.F.S. For more information, please visit their website at https://www.nsfc.gov.cn/. Additionally, it is also funded by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022020), where F.F.S is an author. Further details can be found at http://xt.hnkjt.gov.cn/data/pingtai/. The research is also supported by the Natural Science Foundation of Henan Province Youth Science Fund Project (No. 232300420422); for more information, you can visit https://kjt.henan.gov.cn/2022/09-02/2599082.html. Lastly, it receives funding from the Natural Science Foundation of Zhongyuan University of Technology (No. K2023QN018), where F.F.S is an author. You can find more information at https://www.zut.edu.cn/.

    Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Fangfang Shan, Huifang Sun; data collection: Huifang Sun, Mengyi Wang; analysis and interpretation of results: Huifang Sun; draft manuscript preparation: Huifang Sun. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The Weibo data used to support the findings of this study have been deposited at https://drive.google.com/file/d/14VQ7EWPiFeGzxp3XC2DeEHi-BEisDINn/view?usp=sharing. The Twitter data used to support the findings of this study have been deposited at https://github.com/MKLab-ITI/image-verification-corpus.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
