
    Feature-Based Augmentation in Sarcasm Detection Using Reverse Generative Adversarial Network

    Computers, Materials & Continua, December 2023

    Derwin Suhartono, Alif Tri Handoyo and Franz Adeta Junior

    1 Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta, 11480, Indonesia

    2 Cyber Security Program, Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta, 11480, Indonesia

    ABSTRACT Sarcasm detection in text data is an increasingly vital area of research due to the prevalence of sarcastic content in online communication. This study addresses challenges associated with small datasets and class imbalance in sarcasm detection by employing comprehensive data pre-processing and Generative Adversarial Network (GAN)-based augmentation on diverse datasets, including iSarcasm, SemEval-18, and Ghosh. This research offers a novel pipeline for augmenting sarcasm data with a Reverse Generative Adversarial Network (RGAN). The proposed RGAN method works by inverting the labels of original and synthetic data during the training process. This label inversion provides feedback to the generator for generating high-quality data that closely resembles the original distribution. Notably, the proposed RGAN model exhibits performance on par with a standard GAN, showcasing its robust efficacy in augmenting text data. The exploration of various datasets highlights the nuanced impact of augmentation on model performance, with cautionary insights into maintaining a delicate balance between synthetic and original data. The methodological framework encompasses comprehensive data pre-processing and GAN-based augmentation, with a meticulous comparison against Natural Language Processing Augmentation (NLPAug) as an alternative augmentation technique. Overall, the F1-score of our proposed technique outperforms that of the synonym replacement augmentation technique using NLPAug. The increase in F1-score in experiments using RGAN ranged from 0.066% to 1.054%, and the use of a standard GAN resulted in a 2.88% increase in F1-score. The proposed RGAN model outperformed the NLPAug method and demonstrated comparable performance to the standard GAN, emphasizing its efficacy in text data augmentation.

    KEYWORDS Data augmentation; Generative Adversarial Network (GAN); Reverse GAN (RGAN); sarcasm detection

    1 Introduction

    Interpretation of a statement is crucial in determining the outcome of an analysis, and a proper, data-grounded analysis leads to the right action. Currently, there is an abundance of information being shared on social media platforms in the form of statements, thoughts, or comments. These expressions encompass both positive and negative sentiments. However, within this spectrum of statements, negative sentiments are occasionally veiled through the use of sarcasm. Sarcastic remarks, by nature, carry an implied message, rendering them more challenging to decipher.

    Sarcasm, as defined, is a form of negative sentiment concealed within seemingly pleasant sentences [1]. Recent studies have further categorized sarcasm as an aggressive variant of irony used to convey unfavorable messages [2]. It is often intertwined with various forms of irony [3]. Sarcasm can manifest through both verbal and textual communication. Verbal sarcasm carries distinct characteristics such as volume, speaking tempo, tone of voice, and accompanying gestures, making it relatively discernible [1]. Conversely, textual sarcasm, commonly encountered on social media and in product/service reviews, presents a more formidable challenge due to the absence of these contextual cues [4].

    Over the past five to ten years, the research landscape has witnessed a notable surge in studies pertaining to sarcasm detection [5]. This surge underscores the pivotal role sarcasm detection plays in facilitating well-informed decision-making through the interpretation of sarcastic expressions. Fig. 1 provides an overview of the trends in sarcasm detection research spanning from 2010 to 2022.

    Figure 1: Sarcasm detection research trend from 2010 to 2022

    While previous research efforts have employed a spectrum of methodologies, these endeavors predominantly fall within two overarching domains: machine learning and deep learning. Machine learning-based approaches have been explored utilizing techniques such as the Support Vector Machine (SVM) [6], lexical influence [7], and an ensemble of SVM, K-Nearest Neighbor (KNN), and decision tree classifiers [8].

    However, traditional machine learning approaches have exhibited limitations when confronted with sarcastic statements carrying implicit messages, as they struggle to contextualize the sentence as a whole. This necessitated a transition towards deep learning methods. Subsequently, research has embraced the deep learning paradigm for sarcasm classification, incorporating techniques such as multi-layer perceptrons [9] and hybrid neural networks that combine Convolutional Neural Networks (CNN) and bidirectional Long Short-Term Memory (LSTM) architectures [10]. While these endeavors have primarily focused on model development, this research aims to bridge the gap by exploring and developing augmentation techniques tailored specifically for sarcasm data.

    Apart from advancing deep learning model methodologies, this research acknowledges the significance of data augmentation in enhancing a model's classification capability. Existing research has explored a range of data augmentation techniques to improve model performance in sarcasm detection. However, one relatively uncharted avenue within the realm of sarcasm text augmentation is the application of Generative Adversarial Networks (GANs). GAN-based augmentation has yielded satisfactory results in image-processing domains such as medical imaging [11], face detection [12], and agriculture [13]. Nevertheless, its potential in sarcasm text augmentation remains underexplored.

    Inspired by the success of GANs in augmenting datasets, this research introduces a novel framework employing the Reverse Generative Adversarial Network (RGAN) technique. This framework aims to enhance the accuracy of deep learning models in sarcasm detection. The fundamental premise of RGAN involves reversing the labels of genuine and synthetic data. This reversal encourages the generator to produce data closely resembling real data while challenging the discriminator to develop a more comprehensive understanding of subtle distinctions between authentic and synthetic data.
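    The label reversal at the heart of RGAN can be sketched in a few lines. The following is a minimal illustration of the discriminator's target assignment and loss; the function names and tiny example scores are ours, not the paper's implementation:

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy averaged over the batch."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred)))

def discriminator_loss(d_real, d_fake, reverse=False):
    """Discriminator loss over a mixed batch of real and generated samples.

    Standard GAN targets: real -> 1, generated -> 0.
    RGAN targets (labels swapped): real -> 0, generated -> 1.
    """
    real_label, fake_label = (0.0, 1.0) if reverse else (1.0, 0.0)
    preds = np.concatenate([d_real, d_fake])
    targets = np.concatenate([np.full(len(d_real), real_label),
                              np.full(len(d_fake), fake_label)])
    return bce(preds, targets)

# Example discriminator scores: confident "real" on real data, "fake" on fakes.
d_real = np.array([0.9, 0.8])
d_fake = np.array([0.2, 0.1])
standard = discriminator_loss(d_real, d_fake)            # small loss
rgan = discriminator_loss(d_real, d_fake, reverse=True)  # large loss
print(round(standard, 3), round(rgan, 3))  # -> 0.164 1.956
```

    Under the reversed labels, the same confident discriminator receives a large loss, which is precisely the feedback that pushes the generator toward the real distribution.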

    In summary,the contributions of this research encompass:

    • The author’s proposed framework introduces a novel approach for enhancing sarcastic data through the utilization of a Reverse Generative Adversarial Network (RGAN). The purpose of reversing the labels of real and fake data is to encourage the generator to produce data that closely resembles real data while simultaneously pushing the discriminator to develop a more comprehensive understanding of the subtle differences between real and fake data.

    • The research involved the execution of tests and subsequent analysis to provide evidence supporting the effectiveness of data augmentation through the use of RGAN in enhancing the model’s ability to differentiate between sarcastic and non-sarcastic texts. The results were compared against the alternative methods of synonym replacement with NLPAug and the traditional GAN method.

    • RGAN testing was performed on balanced and unbalanced datasets. Tests were conducted with four augmentation scenarios on each dataset: additions of 15%, 30%, and 45% of the sarcasm class, plus a scenario that raises the minority class to the size of the largest class. To analyze GAN’s efficiency further, this research also analyzes the distribution of the data generated by the GAN-based models.

    The remainder of this paper is structured as follows: Section 2 reviews previous research on sarcasm detection and augmentation techniques for sarcasm datasets. Section 3 covers the datasets used, pre-processing techniques, proposed models, and experimental methods. Section 4 explains the data generated by the GAN as well as the experimental results. Finally, in Section 5, the conclusions of this research are discussed.

    2 Related Works

    In this section, we discuss previous research on detecting sarcasm and on augmentations used for text data. The summary of previous research shown in Table 1 indicates that sarcasm research has tended to explore data augmentation less.

    Table 1: Summary of previous sarcasm detection research

    2.1 Sarcasm Detection with Machine Learning

    Previous studies have explored the detection of sarcasm through the utilization of multiple machine learning models combined with ensemble learning techniques [8]. The dataset utilized in this study was sourced from the Twitter social media platform, comprising instances that were classified as either sarcasm or non-sarcasm. Ensemble learning encompasses various combinations of models. In general, an ensemble learning approach that incorporates Support Vector Machines (SVM), Logistic Regression (LR), and Decision Trees (DT), utilizing a voting system to determine class prediction, demonstrates superior average accuracy when evaluated on five distinct datasets, surpassing alternative ensemble models. The Principal Component Analysis (PCA) algorithm is utilized for dimension reduction in order to represent numerous features through a reduced feature dimension in the experiments. The ensemble of Support Vector Machines (SVM), Linear Discriminant Analysis (LD), and Decision Trees (DT) achieved an accuracy of 98.37% on the evaluated dataset. While SVM, LR, and DT are capable of identifying the relationship between words in a phrase, it should be noted that typical LR models are not specifically designed to handle sequential data such as text. Traditional machine learning (ML) methods also exhibit inadequate capability in comprehending context, resulting in a failure to grasp semantic links between words. This can result in misclassification or the occurrence of false positives.

    In the same year, a study conducted by Godara et al. yielded findings consistent with the prior research. Nevertheless, that study did not employ an ensemble learning methodology and conducted the classification procedure separately for each model [14]. The dataset was obtained by utilizing an Application Programming Interface (API) provided by Twitter. Specifically, comments containing the hashtag #sarcasm were selected, resulting in a dataset comprising a total of 76,799 tweets. The experimental findings indicate that the Decision Tree algorithm achieves the highest level of accuracy, specifically 91.84%. This outcome is attributed to the utilization of the sarcastic feature set, which comprises various linguistic elements associated with sarcasm, including question marks, exclamation marks, and repeated ellipses. The feature set for sarcasm detection includes both positive and negative sentiment data as additional evidence for identifying sarcastic sentences.

    Previous studies have employed a rule-based approach to identify ironic statements, a specific form of sarcasm [15]. The dataset employed in this study is derived from SemEval 2018-T3, which consists of phrases that exhibit irony. The SVM, Naive Bayes, Decision Tree, and Random Forest algorithms are enhanced by the utilization of rule-based lexical and semantic techniques. These techniques serve to eliminate irrelevant words and assess the level of sarcasm, thus improving the ability to recognize contextual information within a phrase. The Random Forest algorithm yields the most accurate results.

    2.2 Sarcasm Detection with Deep Learning

    There have been studies aimed at detecting sarcasm in texts. Recognizing the form of sarcasm in a text is very useful for analyzing customer satisfaction and taking the right steps in business decision-making. However, detecting sarcasm remains difficult, particularly in understanding the context of a sarcastic sentence. Some researchers use multi-head attention on a bidirectional LSTM to detect sarcasm [17]. The private dataset utilized is a collection of comments that include quotation marks, exclamation points, and a mix of question marks and exclamation points. These characteristics are thought to aid the model in identifying the context of sarcasm. Meanwhile, the bidirectional LSTM has forward and backward passes to capture all of the information in a sentence. With multi-head attention, which gives each word a different weight, it is possible to understand the relationships between complex words [21]. Compared to SVM and a bidirectional LSTM without multi-head attention, multi-head attention combined with a bidirectional LSTM performs better in this study because it can capture word context. Despite the use of an attention mechanism, the model has an accuracy of less than 80%. Even though the dataset used has features such as the number of quotes, exclamation marks, question marks, ellipses, and interjections, this research does not investigate dataset augmentation.

    Using C-Net, there is research on how to categorize sarcasm and non-sarcasm [16]. C-Net is composed of several Bidirectional Encoder Representations from Transformers (BERT) models that are trained independently on the response data, the last sentence of the context set, the second-to-last sentence of the context, and the first sentence of the context, and are then integrated at the fusion layer. BERT [22] is a highly effective natural language understanding model. BERT can read sentences from left to right and vice versa in order to better understand the context of the sentence as a whole. Some words from the dataset are masked during the tokenization process, and the model attempts to guess these words based on the unmasked ones. Aside from that, BERT can predict subsequent sentences. The C-Net model experiment makes use of dialog-sentence-formatted datasets from Twitter and Reddit. Sentences in the text are marked sequentially using timestamps. Overall, this study contrasts traditional machine learning with a transformer-based approach. According to the results, the transformer model outperforms all traditional machine learning methods, with F1-scores of 75% and 66.3% on the Twitter and Reddit datasets, respectively. Similar to the previous study [17], this research did not investigate dataset augmentation. The dataset used is quite small, with fewer than 10,000 data points each for Twitter and Reddit.

    Research on sarcasm detection using a hybrid neural network consisting of a CNN and a Bidirectional LSTM with an attention module has been carried out [10]. The CNN can benefit from input encoded by the LSTM by spotting n-gram word patterns. Due to the weighting provided by the attention module, the model can then better understand the context of a word. Using the attention module, the hybrid CNN and Bidirectional LSTM architecture can detect incongruity in a sentence. The test accuracies obtained by the baseline model and the proposed method were 84.88% and 89.7%, respectively. Model development can significantly improve accuracy, but the quality of the model is not solely determined by the architecture. This research does not show the pre-processing side of the dataset used, and no dataset augmentation is explored.

    2.3 Data Augmentation for Sarcasm Detection

    Prior studies on the detection of sarcastic sentences have employed various augmentation techniques, such as enlarging existing sarcastic datasets through the incorporation of external datasets, utilizing word embedding methodologies, and employing sentence repetition [18]. The external data utilized is sourced from the Twitter social media network as well as the SemEval-18 and ArSarcasm-v2 datasets. In contrast, the primary dataset in this study is a proprietary dataset comprising sarcastic statements in English and Arabic, with a total of 6570 and 4502 instances, respectively. External dataset augmentation involves merging the original dataset with an external dataset in order to introduce a significant level of variability. Word embedding is a technique employed to substitute words within a sentence with synonymous alternatives. In order to achieve a balanced distribution of sarcastic and non-sarcastic classes, the technique of repeating sentences is employed to replicate instances of the same content. Conducting experiments on these three distinct augmentation techniques produced a significant disparity in validation and test accuracy. Among the numerous experiments undertaken, the BERT model, when augmented with word repetition, achieved a validation accuracy of 0.92. Additionally, when the model was tested using pre-processing techniques that involved converting emojis to strings, a test accuracy of 0.87 was obtained. Nevertheless, symptoms of overfitting remain present in the model, possibly because the augmentation is excessively emphasized as a result of the frequent repetition of phrases. In contrast, the external dataset augmentation performed notably poorly, achieving validation and test accuracy scores of only 0.41 and 0.07, respectively. Ultimately, the synonym replacement augmentation demonstrated superior performance, achieving scores of 0.86 and 0.84, respectively. Excessive variance in the external dataset, as well as an overemphasis on certain elements such as repeated words, might lead to overfitting of the model or a decline in its performance.

    The model’s capacity to recognize sarcasm may be influenced by the data augmentation of sarcastic sentences [23]. The Generative Adversarial Network (GAN) is a potential method for augmenting data. Common applications of GANs in the field of image augmentation include the generation of synthetic data with high levels of similarity to the original data. In this approach, synthetic data can be utilized to expand the range of the original dataset [24]. The GAN technique paired with BERT is another method for performing data augmentation on text datasets [20]. Both labeled and unlabeled data are sent to BERT as input for vectorization. Meanwhile, the GAN generator reproduces fake data derived from random distribution noise. The discriminator’s job is to distinguish between authentic and fake data. Training continues until the discriminator is unable to distinguish between genuine data and the fake data reproduced by the generator. GAN-BERT was tested on two datasets: the Stanford Sentiment Treebank with 5 classes (SST-5) for sentiment analysis and Multi-Genre Natural Language Inference (MNLI) for natural language inference. GAN-BERT’s results improve accuracy by 8.2% on the SST-5 sentiment analysis dataset. There is evidence that using a smaller proportion of labeled data is more beneficial when using GAN-BERT. However, no tests on a fully labeled dataset in cases of sarcasm detection, with its more complex characteristics, have been conducted.

    Inverting class labels is another GAN technique [25]. Image data was used in that research. Typically, a GAN trains the discriminator to distinguish between real and fake data and requires the generator to produce data that is as close to the original data as possible [26]. Reference [25], however, attempted to reverse the labels so that the discriminator can be viewed as a classifier that learns features from the original data. GANs that perform label inversion can learn more than just the difference between real and fake data. The research succeeded in demonstrating another point of view through the use of GANs, but this technique still produces unstable results and has a chance of success only in certain cases.

    Other research on data augmentation in sarcasm detection, using the synonym replacement and duplication methods with NLPAug, was done by [19]. The F1-score was evaluated using BERT, the Robustly Optimized BERT Approach (RoBERTa) [27], and DistilBERT [28]. RoBERTa is a BERT-based model that drops BERT’s next-sentence prediction (NSP) objective. Meanwhile, DistilBERT is a BERT model with fewer parameters that is faster than BERT but has lower classification performance. The duplication augmentation technique improves performance on the iSarcasm [29], Ghosh et al. [30], and SemEval-18 [31] datasets. The results obtained, however, demonstrate that the augmented data enhances model accuracy mainly for non-sarcastic detection, as shown by an increase in true negatives.
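    As a stdlib-only illustration of what synonym-replacement augmentation does, the sketch below replaces words using a tiny synonym table; the table stands in for NLPAug's WordNet lookup and is purely hypothetical:

```python
import random

# Toy synonym table standing in for a WordNet lookup (illustrative only).
SYNONYMS = {"great": ["wonderful", "fantastic"], "love": ["adore"]}

def synonym_replace(sentence, rng):
    """Replace every word that has a synonym entry with a randomly chosen
    synonym, mimicking NLPAug-style synonym replacement: the label is kept,
    only the surface form of the sentence changes."""
    out = []
    for word in sentence.split():
        choices = SYNONYMS.get(word.lower())
        out.append(rng.choice(choices) if choices else word)
    return " ".join(out)

aug = synonym_replace("I love waiting in great long queues", random.Random(0))
print(aug)  # e.g. "I adore waiting in wonderful long queues"
```

    The sarcastic meaning is preserved while the wording varies, which is why this method adds lexical diversity but, unlike a GAN, cannot increase the amount of distributional variety in the data.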

    According to previous works, performing augmentation on sarcasm data is challenging due to the unique complexity of the data. Meanwhile, in text data augmentation research, the GAN-based approach appears to be more promising than synonym replacement using NLPAug, word repetition, and external dataset augmentation, and there has been no text data research that used RGAN as a method of data augmentation. The performance of a model in detecting sarcasm is determined by the dataset, appropriate hyper-parameters, and an appropriate model architecture. However, the main aim of this research is to propose a novel augmentation strategy for enhancing sarcastic datasets through the utilization of Reverse Generative Adversarial Networks (RGAN). The characteristics of the data and the results of the sarcasm detection will be investigated thoroughly.

    3 Methodology

    In this section, the methodological framework for investigating the effects of using Generative Adversarial Networks (GANs) for data augmentation in sarcasm detection is outlined. The methodology serves as the foundation upon which the selection of datasets and the subsequent analysis are based.

    3.1 Dataset

    This research utilizes three distinct datasets to support its theories and conduct an in-depth analysis of the effects of using a GAN for data augmentation. The datasets are divided into two categories: small (fewer than 10,000 sentences) and large (more than 30,000 sentences). This research uses iSarcasm [29] and SemEval-18 [31] as small datasets. Each dataset has unique characteristics; for example, iSarcasm is a dataset obtained from Twitter via an online survey. Survey participants provided the sarcastic sentences and their labels; this avoids manual labeling, which cannot accurately determine sarcastic intent from the perspective of the sentence’s author. In addition, this dataset has an unbalanced number of sarcastic and non-sarcastic sentences. SemEval-18, by contrast, has a relatively balanced combination of sarcastic and non-sarcastic sentences. SemEval-18 is also a Twitter-sourced dataset. Its data is labeled manually using a fine-grained annotation scheme [32]. The annotators are three linguistics students who speak English as a second language.

    Ghosh et al. [30] provided the large-category dataset used in this research scenario. Ghosh is a Twitter dataset that contains sarcastic and non-sarcastic sentences. Sarcasm examples are collected by searching for the hashtags #sarcasm and #not. One example of a sarcastic sentence obtained by removing ‘#not’ is “I #not love when people start rumors about me,” which becomes “I love when people start rumors about me.” Meanwhile, when a sentence lacks a positive marker, it is classified as non-sarcastic. The obtained sentences are not in the form of lengthy conversations. Table 2 details the size and class distribution of each dataset used.
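    The hashtag-removal step described above can be sketched with a small regular expression; this is a simplified illustration, not the dataset authors' exact script:

```python
import re

def strip_sarcasm_markers(tweet):
    """Remove the '#sarcasm' and '#not' markers used to harvest the Ghosh
    dataset, leaving the surface sentence the classifier actually sees."""
    cleaned = re.sub(r"#(?:not|sarcasm)\b", "", tweet, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", cleaned).strip()

print(strip_sarcasm_markers("I #not love when people start rumors about me."))
# -> "I love when people start rumors about me."
```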

    Table 2: The number of data points for each class in the datasets used

    3.2 Data Pre-Processing and Augmentation

    Fig. 2 shows the data pre-processing scheme up to the point where the data is input for GAN augmentation. All collected datasets undergo a cleaning process that removes URL links, hashtags, foreign-language text, stop words, non-English (non-ASCII) characters, and emojis. After cleaning, an 80:20 train/validation split was performed for each dataset. Only the sarcasm class is used as input to the RGAN model. The main reasons for augmenting only the sarcasm class are that sarcasm data is difficult to obtain [30] and that augmenting the non-sarcastic class would only increase the data inequality between classes. Furthermore, the GAN is used to balance unbalanced sarcasm data. The augmentation process begins with embedding via DistilBERT, which is fed a dataset containing only the sarcasm class. The generator then generates data from random-distribution noise. The discriminator uses the word embeddings and the fake data from the generator to distinguish between real and fake data. As a result, the discriminator loss can be fed back to the generator to generate data that is as close to the original data as possible. The data produced by the generator takes the form of features whose distribution is close to that of the original data.
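    A minimal sketch of the cleaning and splitting steps just described follows; the regex patterns and the seed are illustrative choices rather than the authors' exact pipeline, and stop-word removal is omitted for brevity:

```python
import random
import re

def clean_text(text):
    """Drop URL links, hashtags, and non-ASCII characters (which also
    removes emojis), then normalize whitespace."""
    text = re.sub(r"https?://\S+", "", text)        # URL links
    text = re.sub(r"#\w+", "", text)                # hashtags
    text = text.encode("ascii", "ignore").decode()  # non-ASCII / emoji
    return re.sub(r"\s+", " ", text).strip()

def train_val_split(rows, ratio=0.8, seed=200):
    """Shuffle and split the rows 80:20 into train and validation sets."""
    rows = rows[:]
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * ratio)
    return rows[:cut], rows[cut:]

print(clean_text("Nice job 🙄 #sarcasm https://t.co/xyz"))  # -> "Nice job"
train, val = train_val_split(list(range(100)))
print(len(train), len(val))  # -> 80 20
```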

    3.3 Proposed GAN Model

    This research proposes a novel pipeline that uses a GAN for data augmentation based on the Reverse GAN (RGAN) [25], which is commonly used with image data. A similar concept is used here, with several changes made to accommodate the sarcasm data. Fig. 3 depicts the stages of the proposed novel RGAN pipeline in more detail. Sarcasm data that has been pre-processed is used as input for embedding with DistilBERT. The data used for embedding contains only fully labeled sarcasm examples. The main reasons for using only one class are: (1) sarcasm data is difficult to obtain due to its high level of complexity, and (2) it allows balancing the sarcastic and non-sarcastic classes in each dataset used. The embedding process utilizes pre-trained DistilBERT, resulting in high-quality embeddings because the pre-trained DistilBERT was trained on a large corpus. The result of the embedding is feature data, which is labeled as fake data. The original data is marked as fake in the RGAN concept, and vice versa. As a result, the noise data generated by the generator is labeled as original data, so the discriminator must be convinced that the data produced by the generator is real, while the original data tagged as fake pushes the generator to produce data that is similar to, but not an exact duplicate of, the original data. The discriminator and generator losses are used as feedback for the generator to produce good data quality, with the indication that the lower the generator and discriminator losses, the better the quality of the resulting data. The hyperparameters used are a learning rate of 0.001, obtained through hyperparameter tuning of the generator and discriminator models, and a batch size of 16. The RGAN model also employs the Adam optimizer. The activation function used is the Rectified Linear Unit (ReLU). Besides RGAN, we also employ a standard GAN scheme that does not swap the real and fake data labels. In the standard GAN, there are different indicators of a good model. For example, if the discriminator loss value is greater, it can indicate that the data generated by the generator is similar to the original data and the discriminator is unsure whether the data is real or fake.

    Figure 2: Data pre-processing stage

    Figure 3: Proposed Reverse GAN (RGAN)

    3.4 Experimental Framework

    The generator’s new data is divided into three percentage scenarios, with data augmentation scales of 15%, 30%, and 45%. The generator’s synthetic data is combined with the original data as input for model training. The outcomes of each scenario are compared to determine the GAN’s ability to improve the detection of sarcastic sentences. More detailed scenarios can be seen in Fig. 4.
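    The number of synthetic samples per scenario can be computed as below. The class counts here are made up for illustration, and the 'balanced' entry corresponds to the additional scenario that raises the sarcasm class to the size of the largest class:

```python
def augmentation_targets(n_sarcasm, n_non_sarcasm):
    """Number of synthetic sarcasm samples to generate per scenario:
    fixed percentages of the original sarcasm class, plus a balancing
    scenario that tops the minority class up to the majority-class size."""
    scenarios = {f"{p}%": n_sarcasm * p // 100 for p in (15, 30, 45)}
    scenarios["balanced"] = max(n_non_sarcasm - n_sarcasm, 0)
    return scenarios

# Illustrative class sizes (not the actual dataset counts)
print(augmentation_targets(1000, 3100))
# -> {'15%': 150, '30%': 300, '45%': 450, 'balanced': 2100}
```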

    Figure 4: Model training with the data generated by the generator combined with the original data

    The final evaluation results compare the proposed RGAN model to the standard GAN model. The analysis covers the quality of the generated data, the resulting data distribution, and the effects observed when training with the original data using the MLP model. Visualization is performed to analyze the data generated by RGAN by reducing the dimensions of the data. Because the characteristics of the RGAN data are quite complex, the t-Distributed Stochastic Neighbor Embedding (t-SNE) [33] algorithm was used to interpret the data visually. The t-SNE algorithm employs the following equations (reproduced here in their standard form from [33]):

    $$p_{j|i} = \frac{\exp\left(-\|x_i - x_j\|^2 / 2\sigma_i^2\right)}{\sum_{k \neq i} \exp\left(-\|x_i - x_k\|^2 / 2\sigma_i^2\right)} \tag{1}$$

    $$q_{ij} = \frac{\left(1 + \|y_i - y_j\|^2\right)^{-1}}{\sum_{k \neq l} \left(1 + \|y_k - y_l\|^2\right)^{-1}} \tag{2}$$

    Eq. (1) is used to determine the pairwise similarity of data points in the high-dimensional space. It assigns a probability density to each pair of data points based on their Euclidean distance using a Gaussian kernel. Eq. (2) is then used to determine the similarity of data points in the low-dimensional space. It assigns a probability density to each pair of data points using a Student’s t-distribution. Gradient descent is used iteratively to minimize the Kullback–Leibler (KL) divergence $C = \mathrm{KL}(P \| Q)$ by adjusting the positions of the embedded points according to Eq. (3):

    $$\frac{\partial C}{\partial y_i} = 4 \sum_{j} \left(p_{ij} - q_{ij}\right)\left(y_i - y_j\right)\left(1 + \|y_i - y_j\|^2\right)^{-1} \tag{3}$$
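    As a small numerical check of the Student's t similarities in Eq. (2), the quantity can be computed directly in NumPy; the 2-D points below are made up for illustration:

```python
import numpy as np

def student_t_similarities(Y):
    """Pairwise low-dimensional similarities q_ij of Eq. (2): a Student's
    t kernel with one degree of freedom, normalized over all pairs."""
    sq_dists = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    inv = 1.0 / (1.0 + sq_dists)
    np.fill_diagonal(inv, 0.0)  # q_ii is defined to be 0
    return inv / inv.sum()

Y = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
Q = student_t_similarities(Y)
print(round(float(Q.sum()), 6))  # -> 1.0 (a valid probability distribution)
```

    The heavy-tailed t kernel is what lets t-SNE place moderately dissimilar points far apart in the embedding, which is useful here for separating original from synthetic feature clusters.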

    Meanwhile, the Multi-Layer Perceptron (MLP) model’s F1-score and loss are used to validate the quality of the data generated by the RGAN. The hyperparameters used in the MLP model are a learning rate of 0.0001 with the Adam optimizer, a batch size of 16, 100 epochs, early stopping with a patience of 10, and a seed value of 200. The RGAN augmentation method is compared with augmentation using the original GAN, where the labels of the original and synthetic data are not reversed, and with NLPAug, one of the popular augmentation frameworks for text data [34].
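    Since the F1-score on the sarcasm class is the main evaluation metric, its computation is sketched below; this is a plain-Python illustration of the standard definition, not the authors' evaluation code:

```python
def f1_score(y_true, y_pred):
    """F1 for the positive (sarcasm) class: the harmonic mean of
    precision and recall over binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# tp=2, fp=1, fn=1 -> precision = recall = 2/3 -> F1 = 2/3
print(round(f1_score([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]), 4))  # -> 0.6667
```

    Unlike plain accuracy, F1 is sensitive to the minority class, which is why it is the appropriate metric for the imbalanced sarcasm datasets used here.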

    4 Result and Discussion

    In this section, the results of the experiments are presented and discussed, focusing on the application of Generative Adversarial Networks (GANs) for data augmentation in sarcasm detection. The primary objective is to evaluate the impact of GAN-based augmentation on model performance, emphasizing the quality of the augmented data and its effect on classification accuracy. The analysis begins with an evaluation of data quality following the GAN augmentation process. This evaluation employs the loss values of the generator and discriminator as well as dimensionality reduction techniques to visualize the differences between the real and synthetic data distributions. Subsequently, the discussion delves into the experimental results, comparing GAN augmentation with alternative techniques such as NLPAug and with the unaugmented data. These experiments provide insights into the benefits and limitations of GAN augmentation, particularly in scenarios involving small datasets and class imbalance.

    4.1 Augmented Data Quality Evaluation

    Multidimensional features are utilized to store the information generated by the GAN generator. The evaluation of data quality takes place after data generation. Loss values from both the generator and discriminator serve as valuable indicators for assessing data quality. Furthermore, to gain deeper insight into the disparities between the distributions of real and synthetic data, the t-SNE technique is employed for dimensionality reduction. The results of this visualization are presented in Figs. 5–7.

    Figure 5: Generated data for the iSarcasm dataset

    Figure 6: Generated data for the SemEval-18 dataset

    Figure 7:Generated data for Ghosh dataset

    The resulting pattern shows the difference in data distribution between the standard GAN and the RGAN. The standard GAN tends to follow the original dataset's pattern and has a more defined data center point, whereas the RGAN has a more varied pattern. Both types of GAN produce data close enough to the original data to be considered similar. However, when the generated data outnumbers the original data, the RGAN produces a high level of outliers. For example, in Fig. 5b, the generated data far outnumbers the original data, whereas the data distribution in Fig. 5a is more consistent. The magnitude of the learning rate affects the distance of the data distribution produced by the Reverse Generative Adversarial Network (RGAN). A lower learning rate leads to a more realistic representation of the original data, which can introduce noise due to the duplication of several features. Conversely, an excessively high learning rate makes the resulting feature distance significantly greater, increasing data variance in a way that can result in overfitting [35]. Therefore, a learning rate of 0.001 is commonly employed in the Adam optimizer [36] to keep the data distribution suitably balanced relative to the original features.

    4.2 Experimental Result

    In this section, the focus shifts to the discussion of experimental results and the ramifications of employing GAN augmentation. It is important to note that the dataset augmentation was exclusively applied to the sarcasm class. A comparative analysis is conducted, contrasting the outcomes of GAN augmentation with those of NLPAug and the original dataset. The data generated by the generator in GAN takes the form of features that approximate the distribution of sarcasm-class features in the DistilBERT-encoded space. Meanwhile, NLPAug replaces adjectives in the sarcasm class with synonyms and does not increase the amount of data. Figs. 8–10 show the class distributions in each dataset, with changes in the amount of data in the sarcasm class after augmentation with GAN. Table 3 shows the results of experiments on the iSarcasm dataset. The obtained results show that the standard GAN with an augmentation percentage of 45% has the highest F1-score. Compared to the RGAN, the standard GAN has a more stable data distribution. Experiments with a balanced sarcasm class, on the other hand, show that the RGAN achieves a much higher score than the standard GAN. In the balanced sarcasm class experiment, data augmentation produced nearly 2.1 times as much synthetic data as the original data. The data generated by the standard GAN shows indications of noise, which reduces accuracy. The RGAN, on the other hand, succeeded in producing more varied data; even though some data points lie further from the original data points, they do not corrupt the data. However, to obtain a low loss value and a high F1-score, the data generated by the RGAN must be adjusted to the original dataset. Compared to NLPAug, all augmentation configurations using GAN achieve higher scores, implying that GAN augmentation is appropriate for datasets with small amounts of data and unbalanced classes.
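    The distinction between the standard GAN and the RGAN used here comes down to how the discriminator's targets are assigned during training: in the RGAN, the labels of the original and synthetic data are reversed. A minimal sketch of that label construction (the helper name is hypothetical, not from the paper's code):

```python
# Illustration of the label inversion that distinguishes RGAN from a standard GAN.
import numpy as np

def discriminator_targets(n_real, n_fake, reverse=False):
    """Return per-sample discriminator targets.

    Standard GAN: real -> 1, synthetic -> 0.
    RGAN: labels of original and synthetic data are swapped (real -> 0,
    synthetic -> 1), which changes the feedback driving the generator
    toward the original distribution.
    """
    real = np.ones(n_real)
    fake = np.zeros(n_fake)
    if reverse:
        real, fake = 1.0 - real, 1.0 - fake
    return np.concatenate([real, fake])

print(discriminator_targets(3, 3))                # [1. 1. 1. 0. 0. 0.]
print(discriminator_targets(3, 3, reverse=True))  # [0. 0. 0. 1. 1. 1.]
```

    The rest of the adversarial training loop (alternating generator and discriminator updates on the DistilBERT features) is unchanged between the two variants.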

    Table 3: Experimental results on the iSarcasm dataset

    Figure 8: The difference in the amount of data in the sarcasm class in the iSarcasm dataset, based on the percentage of augmentation

    Figure 9: The difference in the amount of data in the sarcasm class in the SemEval-18 dataset, based on the percentage of augmentation

    Figure 10: The difference in the amount of data in the sarcasm class in the Ghosh dataset, based on the percentage of augmentation

    Experiments with the SemEval-18 dataset yielded consistent results, shown in Table 4. The standard GAN has the highest F1-score and the lowest validation loss, implying that GAN augmentation is also suitable for small datasets with balanced classes. However, if the data generated by the GAN causes an imbalance between classes, accuracy tends to decrease, as in the case of the 45% augmentation, where the sarcastic data far outnumbers the non-sarcastic data. Unlike iSarcasm, the SemEval-18 dataset shows better standard GAN performance under balanced sarcasm class conditions because the generated data remains within reasonable limits, below 15%. Meanwhile, data augmentation with NLPAug did not increase the F1-score and tended to decrease it, indicating that NLPAug introduced noise into the data.

    Table 4: Experimental results on the SemEval-18 dataset

    Table 5 shows that augmentation with the RGAN has the highest F1-score, at 69.01%, based on experiments on the Ghosh dataset. The balanced-class configuration contains 22,725 sarcastic and 22,725 non-sarcastic samples. With a relatively balanced distribution of data for each class in the SemEval-18 and Ghosh datasets, a consistent pattern emerges: if the augmented sarcastic class far exceeds the non-sarcastic class, accuracy tends to decrease. Under balanced class conditions, the RGAN has a better F1-score, showing the same indication as in the iSarcasm dataset. Augmentation with the standard GAN yields results similar to augmentation with the RGAN. NLPAug also produces results that are consistently lower than those of GAN augmentation.

    Table 5: Experimental results on the Ghosh dataset

    In general, the proposed data augmentation in this research differs from GAN-BERT. In GAN-BERT, the data generated by the generator is only used as input to the discriminator, so it is unclear whether the GAN itself contributes to changes in sarcasm classification accuracy. In this research, by contrast, the features produced by the generator, with data points similar to the original data, are fed into the classification model as input, with the goal of evaluating the quality of the generated data. GAN-BERT uses unlabeled data as support for unbalanced classes on the generator to balance the data. However, using an excessive amount of unlabeled data may cause the generator to produce data for an already balanced class, making the data even more unbalanced. Consequently, the approach here is to train the GAN model with data from the unbalanced classes. All experiments showed that GAN-based augmentation improved the model's ability to classify sarcasm compared to NLPAug. Data augmented with NLPAug does not add new samples, so the imbalance between classes remains. NLPAug is better suited to adding variety to balanced datasets such as Ghosh and provides no improvement on unbalanced datasets such as iSarcasm.
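    The trade-off discussed above, between the augmentation percentage and class balance, can be made concrete with a back-of-the-envelope helper. This function is an assumption for illustration, not the paper's code; the example numbers use the Ghosh balanced-class setting of 22,725 samples per class.

```python
# Hypothetical helper: how many synthetic sarcasm samples a given augmentation
# percentage adds, and whether the augmented sarcasm class then overtakes the
# non-sarcastic class (the condition the paper links to reduced accuracy).
def augment_counts(n_sarcasm, n_non_sarcasm, pct):
    n_synthetic = int(n_sarcasm * pct / 100)
    n_sarcasm_aug = n_sarcasm + n_synthetic
    overshoots = n_sarcasm_aug > n_non_sarcasm
    return n_synthetic, n_sarcasm_aug, overshoots

# 45% augmentation on Ghosh's balanced classes pushes sarcasm past non-sarcasm.
print(augment_counts(22725, 22725, 45))  # (10226, 32951, True)
# A minority sarcasm class can absorb the same percentage without overshooting.
print(augment_counts(100, 1000, 45))     # (45, 145, False)
```

    This mirrors the paper's observation: on already balanced datasets, large augmentation percentages flip the imbalance toward the sarcasm class, which tends to hurt accuracy.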

    A comparison and discussion between this research and other works implementing data augmentation can be seen in Table 6.

    Table 6: Comparison between this research and other works

    In comparison to the studies presented in Table 6, this research offers a distinct and superior approach to sarcasm detection through Generative Adversarial Networks (GANs) for data augmentation. While the mentioned studies have primarily focused on external datasets, synonym replacement, or simple augmentation techniques, our research introduces a novel framework utilizing Reverse GAN (RGAN) in the context of sarcasm detection. The results of our experiments on datasets such as iSarcasm, SemEval-18, and Ghosh showcase the efficacy of GAN-based augmentation, particularly when the volume of synthetic data closely aligns with that of the original data. Notably, our approach outperforms NLPAug in scenarios with small datasets and class imbalances. Moreover, we demonstrate that RGAN, a less common technique, can achieve performance comparable to that of the standard GAN. The ability to generate synthetic text data whose features closely match those of the original data sets our research apart, offering a more balanced and effective approach to augmenting text data for sarcasm detection. This research emphasizes the versatility and effectiveness of the RGAN technique, providing a robust solution for improving sarcasm detection accuracy in diverse augmentation scenarios.

    5 Conclusion

    1. The proposed novel framework for enhancing text data through the incorporation of additional data features successfully improved the model's performance in identifying sarcasm within specific augmentation scenarios.

    2. Due to the different characteristics of each dataset, GAN-based augmentation can have a different impact on performance for each dataset. Overall, the analysis shows that as long as the synthetic data does not exceed the amount of original data, GAN-based augmentation can improve performance significantly compared to NLPAug.

    3. The utilization of the Reverse GAN technique, although not commonly practiced, delivered performance outcomes in sarcasm detection on par with those achieved using the standard GAN.

    In conclusion, this study has introduced a novel framework for enhancing text data through the incorporation of additional data features, demonstrating its success in improving model performance in sarcasm detection within specific augmentation scenarios. This study revealed that the impact of GAN-based augmentation on performance varies across datasets, with a consistent finding that GAN-based augmentation outperforms NLPAug when synthetic data does not significantly exceed the volume of original data. One of the key contributions of this research is the utilization of the Reverse GAN (RGAN) technique, a less common approach, which yielded performance results in sarcasm detection comparable to those achieved using the standard GAN. This suggests the effectiveness and versatility of RGAN in enhancing text data.

    Augmentation with GAN in the sarcasm class tends to lose accuracy when the generated data far exceeds that of the non-sarcastic class. Meanwhile, if augmentation in the sarcasm class produces data many times the size of the original data, using an RGAN, as in the iSarcasm dataset, is more advantageous. The SemEval-18 dataset yields the opposite result, demonstrating that the standard GAN is more advantageous when the added data reaches a balance point. However, the Ghosh dataset demonstrates that a relatively balanced dataset does not necessitate a large amount of synthetic data, as Tables 3 and 4 show that the best augmentation results were obtained in experiments with less than 45% generated data. Given the relatively high difficulty of the task, there remains broad potential for future research on augmenting text datasets. Producing synthetic text data in the form of text (rather than features) is a challenging task; currently, the Reverse Generative Adversarial Network (RGAN) lacks the capability to reconstruct features into textual representations. Another challenge is the development of transformer models capable of reading input in the form of features; only a few state-of-the-art models can currently receive input data in this form.

    Acknowledgement: None.

    Funding Statement: The authors received no specific funding for this study.

    Author Contributions: Study conception and design: Derwin Suhartono; data collection: Derwin Suhartono, Alif Tri Handoyo; analysis and interpretation of results: Franz Adeta Junior, Alif Tri Handoyo; draft manuscript preparation: Franz Adeta Junior. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: All datasets in this paper are publicly available in GitHub repositories: iSarcasm (https://anonymous.4open.science/r/24639225-ac0e-4057-b2d4-16e7e50570d0/README.md), SemEval-2018 (https://github.com/Cyvhee/SemEval2018-Task3), and Ghosh (https://github.com/MirunaPislar/Sarcasm-Detection).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
