
    Feature-Based Augmentation in Sarcasm Detection Using Reverse Generative Adversarial Network

    Computers, Materials & Continua, 2023, Issue 12 (published online 2024-01-12)

    Derwin Suhartono, Alif Tri Handoyo and Franz Adeta Junior

    1Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta, 11480, Indonesia

    2Cyber Security Program, Computer Science Department, School of Computer Science, Bina Nusantara University, Jakarta, 11480, Indonesia

    ABSTRACT Sarcasm detection in text data is an increasingly vital area of research due to the prevalence of sarcastic content in online communication. This study addresses the challenges associated with small datasets and class imbalance in sarcasm detection by employing comprehensive data pre-processing and Generative Adversarial Network (GAN)-based augmentation on diverse datasets, including iSarcasm, SemEval-18, and Ghosh. This research offers a novel pipeline for augmenting sarcasm data with a Reverse Generative Adversarial Network (RGAN). The proposed RGAN method works by inverting the labels of original and synthetic data during the training process. This label inversion provides feedback to the generator for generating high-quality data that closely resembles the original distribution. Notably, the proposed RGAN model exhibits performance on par with a standard GAN, showcasing its robust efficacy in augmenting text data. The exploration of various datasets highlights the nuanced impact of augmentation on model performance, with cautionary insights into maintaining a delicate balance between synthetic and original data. The methodological framework encompasses comprehensive data pre-processing and GAN-based augmentation, with a meticulous comparison against Natural Language Processing Augmentation (NLPAug) as an alternative augmentation technique. Overall, the F1-score of our proposed technique outperforms that of the synonym replacement augmentation technique using NLPAug. The increase in F1-score in experiments using RGAN ranged from 0.066% to 1.054%, and the use of a standard GAN resulted in a 2.88% increase in F1-score. The proposed RGAN model outperformed the NLPAug method and demonstrated comparable performance to the standard GAN, emphasizing its efficacy in text data augmentation.

    KEYWORDS Data augmentation; Generative Adversarial Network (GAN); Reverse GAN (RGAN); sarcasm detection

    1 Introduction

    How a statement is interpreted is crucial to the outcome of an analysis, and proper analysis of the data in turn leads to the right action. Currently, there is an abundance of information being shared on social media platforms in the form of statements, thoughts, or comments. These expressions encompass both positive and negative sentiments. However, it is within this spectrum of statements that negative sentiments are occasionally veiled through the use of sarcasm. Sarcastic remarks, by nature, carry an implied message, rendering them more challenging to decipher.

    Sarcasm, as defined, is a form of negative sentiment concealed within seemingly pleasant sentences [1]. Recent studies have further categorized sarcasm as an aggressive variant of irony used to convey unfavorable messages [2]. It is often intertwined with various forms of irony [3]. Sarcasm can manifest through both verbal and textual communication. Verbal sarcasm carries distinct characteristics such as volume, speaking tempo, tone of voice, and accompanying gestures, making it relatively discernible [1]. Conversely, textual sarcasm, commonly encountered on social media and in product/service reviews, presents a more formidable challenge due to the absence of these contextual cues [4].

    Over the past five to ten years, the research landscape has witnessed a notable surge in studies pertaining to sarcasm detection [5]. This surge underscores the pivotal role sarcasm detection plays in facilitating well-informed decision-making through the interpretation of sarcastic expressions. Fig. 1 provides an overview of the trends in sarcasm detection research spanning from 2010 to 2022.

    Figure 1: Trends in sarcasm detection research from 2010 to 2022

    While previous research efforts have employed a spectrum of methodologies, these endeavors predominantly fall within two overarching domains: machine learning and deep learning. Machine learning-based approaches have been explored utilizing techniques such as the Support Vector Machine (SVM) [6], lexical influence [7], and the ensemble method of SVM, K-Nearest Neighbor (KNN), and decision tree [8].

    However, traditional machine learning approaches have exhibited limitations when confronted with sarcastic statements carrying implicit messages, as they struggle to contextualize the sentence as a whole. This necessitated a transition towards deep learning methods. Subsequently, research has embraced the deep learning paradigm for sarcasm classification, incorporating techniques such as multi-layer perceptrons [9] and hybrid neural networks that combine Convolutional Neural Networks (CNN) and bidirectional Long Short-Term Memory (LSTM) architectures [10]. While these endeavors have primarily focused on model development, this research aims to bridge the gap by exploring and developing augmentation techniques tailored specifically for sarcasm data.

    Apart from advancing deep learning model methodologies, this research acknowledges the significance of data augmentation in enhancing a model’s classification capability. Existing research has explored a range of data augmentation techniques to improve model performance in sarcasm detection. However, one relatively uncharted avenue within the realm of sarcasm text augmentation is the application of Generative Adversarial Networks (GANs). GAN-based augmentation has yielded satisfactory results in image-processing domains such as medical imaging [11], face detection [12], and agriculture [13]. Nevertheless, its potential in sarcasm text augmentation remains underexplored.

    Inspired by the success of GANs in augmenting datasets, this research introduces a novel framework employing the Reverse Generative Adversarial Network (RGAN) technique. This framework aims to enhance the accuracy of deep learning models in sarcasm detection. The fundamental premise of RGAN involves reversing the labels of genuine and synthetic data. This reversal encourages the generator to produce data closely resembling real data while challenging the discriminator to develop a more comprehensive understanding of subtle distinctions between authentic and synthetic data.

    In summary, the contributions of this research encompass:

    • The authors’ proposed framework introduces a novel approach for enhancing sarcastic data through the utilization of a Reverse Generative Adversarial Network (RGAN). The purpose of reversing the labels of real and fake data is to encourage the generator to produce data that closely resembles real data while simultaneously pushing the discriminator to develop a more comprehensive understanding of the subtle differences between real and fake data.

    • The research involved the execution of tests and subsequent analysis to provide evidence supporting the effectiveness of data augmentation through the use of RGAN in enhancing the model’s ability to differentiate between sarcastic and non-sarcastic texts. This was compared to the alternative methods of synonym replacement in NLPAug and the traditional GAN method.

    • Performed RGAN testing on balanced and unbalanced datasets. Tests were conducted with four augmentation scenarios on each dataset: augmentation by 15%, 30%, and 45%, and adjusting the number of added samples to match the majority class. To analyze GAN’s efficiency further, this research also analyses the distribution of data generated from GAN-based models.

    The remainder of this paper is structured as follows: Section 2 reviews previous research on sarcasm detection and augmentation techniques for sarcasm datasets. Section 3 covers the datasets used, pre-processing techniques, proposed models, and experimental methods. Section 4 explains the data generated by GAN as well as the experimental results. Finally, in Section 5, the conclusions of this research are discussed.

    2 Related Works

    In this section, we discuss previous research on detecting sarcasm and on augmentations used for text data. The summary of previous research shown in Table 1 indicates that research on sarcastic sentences has explored data augmentation relatively little.

    Table 1: Summary of previous sarcasm detection research

    2.1 Sarcasm Detection with Machine Learning

    Previous studies have explored the detection of sarcasm through the utilization of multiple machine learning models combined with ensemble learning techniques [8]. The dataset utilized in this study was sourced from the Twitter social media platform, comprising instances classified as either sarcasm or non-sarcasm. Ensemble learning encompasses various combinations of models. In general, an ensemble learning approach that incorporates Support Vector Machines (SVM), Logistic Regression (LR), and Decision Trees (DT), utilizing a voting system to determine the class prediction, demonstrates superior average accuracy when evaluated on five distinct datasets, surpassing alternative ensemble models. The Principal Component Analysis (PCA) algorithm is utilized for dimension reduction in the experiments, in order to represent numerous features through a reduced feature dimension. The ensemble of Support Vector Machines (SVM), Linear Discriminant Analysis (LDA), and Decision Trees (DT) achieved an accuracy of 98.37% on the evaluated dataset. While SVM, LR, and DT are capable of identifying the relationship between words in a phrase, it should be noted that typical LR models are not specifically designed to handle sequential data such as text. Traditional machine learning (ML) methods also exhibit inadequate capability in comprehending context, resulting in a failure to grasp semantic links between words. This can result in misclassification or the occurrence of false positives.
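The voting system described above can be sketched in a few lines. This is a minimal illustration of hard majority voting over per-model predictions, not the cited authors' implementation; the model names and prediction values are hypothetical.

```python
from collections import Counter

def majority_vote(predictions):
    """Hard-voting ensemble: each model casts one vote per sample.

    `predictions` is a list of per-model prediction lists, e.g. the
    outputs of SVM, LR, and DT over the same samples.
    """
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(model[i] for model in predictions)
        voted.append(votes.most_common(1)[0][0])
    return voted

# Hypothetical per-model predictions (1 = sarcasm, 0 = non-sarcasm)
svm_pred = [1, 0, 1, 1]
lr_pred  = [1, 0, 0, 1]
dt_pred  = [0, 0, 1, 1]
print(majority_vote([svm_pred, lr_pred, dt_pred]))  # -> [1, 0, 1, 1]
```

Each sample's final class is simply the label chosen by the majority of the base models, which is why such ensembles can outperform any single constituent classifier.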

    In the same year, a study conducted by Godara et al. [8] yielded findings that were consistent with the prior research. Nevertheless, that study does not employ an ensemble learning methodology and conducts the classification procedure separately for each model [14]. The dataset was obtained by utilizing an Application Programming Interface (API) provided by Twitter. Specifically, comments containing the hashtag #sarcasm were selected, resulting in a dataset comprising a total of 76,799 tweets. The experimental findings indicate that the Decision Tree algorithm achieves the highest level of accuracy, specifically 91.84%. This outcome is attributed to the utilization of the sarcastic feature set, which comprises various linguistic elements associated with sarcasm, including question marks, exclamation marks, and repeated ellipses. The feature set for sarcasm detection includes both positive and negative sentiment data as additional evidence for identifying sarcastic sentences.

    Previous studies have employed a rule-based approach to identify ironic statements, a specific form of sarcasm [15]. The dataset employed in this study is derived from SemEval 2018-T3, which consists of phrases that exhibit irony. The SVM, Naive Bayes, Decision Tree, and Random Forest algorithms are enhanced by the utilization of rule-based lexical and semantic techniques. These techniques serve to eliminate irrelevant words and assess the level of sarcasm, thus improving the ability to recognize contextual information within a phrase. The Random Forest algorithm yields the most accurate results.

    2.2 Sarcasm Detection with Deep Learning

    There have been studies to detect sarcasm in texts. Recognizing the form of sarcasm in a text is very useful for analyzing customer satisfaction and taking the right steps in business decision-making. However, detecting sarcasm remains difficult, particularly in understanding the context of a sarcastic sentence. Some researchers use multi-head attention on a bidirectional LSTM to detect sarcasm [17]. The private dataset utilized is a collection of comments that include quotation marks, exclamation points, and a mix of question marks and exclamation points. These characteristics are thought to aid the model in identifying the context of sarcasm. Meanwhile, the bidirectional LSTM has forward and backward passes to capture all of the information in a sentence. With multi-head attention, which gives each word a different weight, it is possible to understand the relationships between complex words [21]. Compared to SVM and a bidirectional LSTM without multi-head attention, multi-head attention combined with bidirectional LSTM performs better in this study because it can capture word context. Despite the use of an attention mechanism, the model has an accuracy of less than 80%. Even though the dataset used has features such as the number of quotes, exclamation marks, question marks, ellipses, and interjections, this research does not investigate dataset augmentation.

    Using C-Net, there is research on how to categorize sarcasm and non-sarcasm [16]. C-Net is composed of several Bidirectional Encoder Representations from Transformers (BERT) models that are trained independently on the response data, the last sentence of the context set, the second-to-last sentence of the context, and the first sentence of the context, and are then integrated at the fusion layer. BERT [22] is a highly effective natural language understanding model. BERT can read sentences from left to right and vice versa in order to better understand the context of the sentence as a whole. Some words from the dataset are masked during the tokenization process, and the model attempts to guess these words based on the unmasked words. Aside from that, BERT can predict subsequent sentences. The C-Net model experiment makes use of dialog-sentence-formatted datasets from Twitter and Reddit. Sentences in the text are marked sequentially using timestamping. Overall, this study contrasts traditional machine learning with a transformer-based approach. According to the results, the transformer model outperforms all traditional machine learning methods, with F1-scores of 75% and 66.3% on the Twitter and Reddit datasets, respectively. Similar to the previous study [17], this research did not investigate dataset augmentation. The dataset used is quite small, with fewer than 10,000 data points each for Twitter and Reddit.

    Research on sarcasm detection using a hybrid neural network consisting of CNN and bidirectional LSTM with an attention module has been carried out [10]. The CNN can benefit from input encoded by the LSTM by spotting n-gram word patterns. Due to the weighting provided by the attention module, the model can then better understand the context of a word. Using the attention module, the hybrid CNN and bidirectional LSTM architecture can detect incongruity in a sentence. The test accuracies obtained by the baseline model and the proposed method were 84.88% and 89.7%, respectively. Model development can significantly improve accuracy, but the quality of the model is not solely determined by the architecture. This research does not show the pre-processing side of the dataset used, and no dataset augmentation is explored.

    2.3 Data Augmentation for Sarcasm Detection

    Prior studies on the detection of sarcastic sentences have employed various techniques, such as augmenting existing sarcastic datasets through the incorporation of external datasets, utilizing word embedding methodologies, and employing sentence repetition [18]. The external data utilized is sourced from the Twitter social media network, as well as the SemEval-18 and ArSarcasm-v2 datasets. In contrast, the primary dataset included in this study is a proprietary dataset comprising sarcastic statements in both English and Arabic, with a total of 6570 and 4502 instances, respectively. External dataset augmentation involves merging the original dataset with an external dataset in order to introduce a significant level of variability. Word embedding is a technique employed to substitute words within a sentence with synonymous alternatives. In order to achieve a balanced distribution of sarcastic and non-sarcastic classes, the technique of repeating sentences is employed to replicate instances of the same content. The validation and test accuracies showed a significant disparity across the experiments on these three distinct augmentation techniques. Among the numerous experiments undertaken, the BERT model, when augmented with the repetition of words, achieved a validation accuracy of 0.92. Additionally, when the model was tested using pre-processing techniques that involved converting emojis to strings, a test accuracy of 0.87 was obtained. Nevertheless, symptoms of overfitting were still present in the model, possibly because the augmentation technique was overemphasized as a result of the frequent repetition of phrases. In contrast, the external dataset augmentation achieved validation and test accuracy scores of only 0.41 and 0.07, respectively. Ultimately, the technique of synonym replacement augmentation demonstrated superior performance, achieving scores of 0.86 and 0.84, respectively. Excessive variance in the external dataset, as well as an overemphasis on certain elements, such as repeated words, might lead to overfitting of the model or a decline in its performance.

    The model’s capacity to recognize sarcasm may be influenced by data augmentation of sarcastic sentences [23]. The Generative Adversarial Network (GAN) is a potential method for augmenting data. Common applications of GANs in the field of image augmentation include the generation of synthetic data with high levels of similarity to the original data. In this approach, synthetic data can be utilized to expand the range of the original dataset [24]. The GAN technique paired with BERT is another method for performing data augmentation on text datasets [20]. Both labeled and unlabeled data are sent to BERT as input for vectorization. Meanwhile, the GAN generator produces fake data derived from random-distribution noise. The discriminator’s job is to distinguish between authentic and fake data. Training continues until the discriminator is unable to distinguish between genuine data and the fake data produced by the generator. GAN-BERT was tested on two datasets: the Stanford Sentiment Treebank with 5 classes (SST-5) for sentiment analysis and Multi-Genre Natural Language Inference (MNLI) for natural language inference. GAN-BERT’s results improve accuracy by 8.2% on the SST-5 sentiment analysis dataset. There is evidence that using a smaller proportion of labeled data is more beneficial when using GAN-BERT. However, no tests on a fully labeled dataset in cases of sarcasm detection with more complex characteristics have been conducted.

    Inverting class labels is another GAN technique [25]. Image data was used in that research. Typically, a GAN trains the discriminator to distinguish between real and fake data and requires the generator to produce data that is as close to the original data as possible [26]. Reference [25], however, attempted to reverse the labels so that the discriminator can be viewed as a classifier that learns features from the original data. GANs that perform label inversion can learn more than just the difference between real and fake data. The research succeeded in demonstrating another point of view through the use of GANs, but this technique still produces unstable results and has a chance of success only in certain cases.

    Other research on data augmentation in sarcasm detection, using the synonym replacement and duplication methods with NLPAug, was done by [19]. The F1-score was evaluated using BERT, the Robustly Optimized BERT Approach (RoBERTa) [27], and DistilBERT [28]. RoBERTa is a BERT-based model that omits the next-sentence prediction (NSP) objective. Meanwhile, DistilBERT is a BERT model with fewer parameters that is faster than BERT but has lower classification performance. The duplication augmentation technique improves performance on the iSarcasm [29], Ghosh et al. [30], and SemEval-18 [31] datasets. The results obtained, however, demonstrate that the augmented data enhances model accuracy mainly for non-sarcastic detection, as shown by an increase in true negatives.

    According to previous works, performing augmentation on sarcasm data is challenging due to the unique complexity of the data. Meanwhile, in text data augmentation research, the GAN-based approach appears to be more promising than synonym replacement using NLPAug, repeating words, and external dataset augmentation, and there has been no text data research that used RGAN as a method of data augmentation. The performance of the model in detecting sarcasm is determined by the dataset, appropriate hyper-parameters, and an appropriate model architecture. However, the main aim of this research is to propose a novel augmentation strategy for enhancing the sarcastic dataset through the utilization of Reverse Generative Adversarial Networks (RGAN). The characteristics of the data and the results of the sarcasm detection will be investigated thoroughly.

    3 Methodology

    In this section, the methodological framework for investigating the effects of using Generative Adversarial Networks (GANs) for data augmentation in sarcasm detection is outlined. The methodology serves as the foundation upon which the selection of datasets and the subsequent analysis are based.

    3.1 Dataset

    This research utilizes four unique datasets to support its hypotheses and conduct an in-depth analysis of the effects of using GAN as a data augmentation method. The datasets are divided into two categories: small (fewer than 10,000 sentences) and large (more than 30,000 sentences). This research uses iSarcasm [29] and SemEval-18 [31] as small datasets. Each dataset has unique characteristics; for example, iSarcasm is a dataset obtained from Twitter via an online survey. Survey participants provided the sarcastic sentences along with their labels; this allows manual labeling techniques to be avoided, since external annotators cannot accurately determine sarcastic intent from the perspective of the sentence’s author. In addition, this dataset has an unbalanced number of sarcastic and non-sarcastic instances. SemEval-18, in contrast, has a relatively balanced combination of sarcastic and non-sarcastic sentences. SemEval-18 is also a Twitter-sourced dataset. The data is labeled manually using a fine-grained annotation scheme [32]. The annotators are three linguistics students who speak English as a second language.

    Ghosh et al. [30] provided the dataset from the large category that was used in this research scenario. Ghosh is a Twitter dataset that contains sarcastic and non-sarcastic sentences. Sarcasm instances are collected by searching for the hashtags #sarcasm and #not. One example of a sarcastic sentence obtained by removing ‘#not’ is “I #not love when people start rumors about me.”, which becomes “I love when people start rumors about me.” Meanwhile, when a sentence lacks a positive marker, it is classified as non-sarcastic. The obtained sentences are not in the form of lengthy conversations. Table 2 illustrates the class distribution of each dataset in detail.
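The hashtag handling described above can be sketched as follows. This is an illustrative helper, not the dataset authors' code; the function name and the regex are assumptions about how such distant-supervision markers might be stripped so the model cannot simply learn the hashtag itself.

```python
import re

def strip_sarcasm_hashtags(tweet):
    """Remove #sarcasm / #not markers used only for distant labeling.

    The hashtags identify the class, but leaving them in the training
    text would let the model classify on the marker alone.
    """
    cleaned = re.sub(r"#\s*(sarcasm|not)\b", "", tweet, flags=re.IGNORECASE)
    return re.sub(r"\s+", " ", cleaned).strip()

print(strip_sarcasm_hashtags("I #not love when people start rumors about me."))
# -> "I love when people start rumors about me."
```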

    Table 2: The quantity of data for each class in the datasets used

    3.2 Data Pre-Processing and Augmentation

    Fig. 2 shows the data pre-processing scheme up to the point where the data is input for GAN augmentation. All collected datasets undergo a cleaning process that removes URL links, hashtags, foreign-language text, stop words, non-ASCII characters, and emojis. After cleaning, each dataset is split 80:20 into training and validation sets. Only the sarcasm class is used as input to the RGAN model. The main reason for augmenting the sarcasm class is that sarcasm data is difficult to obtain [30], and augmenting the non-sarcastic class would only increase the data inequality between classes. Furthermore, GAN is used to balance the unbalanced sarcasm data. The augmentation process begins with an embedding step using DistilBERT, which is fed only the sarcasm class as input. The generator then generates data from random-distribution noise. The discriminator uses the word embeddings and the fake data from the generator to distinguish between real and fake data. As a result, the discriminator loss can be fed back to the generator to generate data that is as close to the original data as possible. The data generated by the generator takes the form of features whose distribution is close to that of the original data.
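A minimal sketch of the cleaning steps might look like the following. This is not the authors' pipeline: the regexes, the tiny stop-word set, and the use of ASCII encoding to drop emojis are all illustrative stand-ins for the cleaning operations named above.

```python
import re

def clean_text(text, stop_words=frozenset({"the", "a", "an", "is"})):
    """Illustrative cleaning: URLs, hashtags, non-ASCII characters
    (which covers emojis), and stop words. The stop-word set here is
    a tiny stand-in for a real list."""
    text = re.sub(r"https?://\S+", "", text)        # URL links
    text = re.sub(r"#\w+", "", text)                # hashtags
    text = text.encode("ascii", "ignore").decode()  # non-ASCII / emojis
    tokens = [t for t in text.lower().split() if t not in stop_words]
    return " ".join(tokens)

print(clean_text("Wow, this is GREAT 😒 #sarcasm https://t.co/xyz"))
# -> "wow, this great"
```

After cleaning, the usual next step would be the 80:20 train/validation split described above, applied per dataset.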

    3.3 Proposed GAN Model

    This research proposes a novel pipeline that uses GAN for data augmentation, based on the Reverse GAN (RGAN) [25], which is commonly used with image data. We adopt a similar concept and make several changes to accommodate the sarcasm data. Fig. 3 depicts the proposed RGAN pipeline in more detail. Sarcasm data that has been pre-processed is used as input for embedding with DistilBERT. The class used for embedding contains only fully labeled sarcasm instances. The main reasons for using only one class are: (1) sarcasm data is difficult to obtain due to its high level of complexity, and (2) it balances the sarcastic and non-sarcastic classes in each dataset used. The embedding step utilizes pre-trained DistilBERT, resulting in high-quality embeddings because the pre-trained DistilBERT was trained on a large corpus. The result of the embedding is feature data, which is labeled as fake. In the RGAN concept, the original data is marked as fake data, and vice versa. As a result, the noise data generated by the generator is labeled as original data, so the discriminator must be convinced that the data generated by the generator is real, while the original data tagged as fake pushes the generator to produce data that is similar to, but not an exact duplicate of, the original data. The discriminator and generator losses serve as feedback for the generator to produce good data quality; the lower the generator and discriminator losses, the better the quality of the resulting data. The hyperparameters used are a learning rate of 0.001, obtained from hyperparameter tuning of the generator and discriminator models, and a batch size of 16. This RGAN model also employs the Adam optimizer. The activation function used is the Rectified Linear Unit (ReLU). Alongside RGAN, we also employ a standard GAN scheme that does not swap the real and fake data labels. In the standard GAN, there are various indicators of a good model. For example, if the discriminator loss is larger, it can indicate that the data generated by the generator is similar to the original data, and the discriminator is unsure whether the data is real or fake.
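The label inversion at the heart of RGAN can be isolated in a small sketch. This is only an illustration of how discriminator targets differ between the standard GAN and RGAN training described above; the function name is hypothetical and the surrounding training loop is omitted.

```python
def gan_labels(n_real, n_fake, reverse=False):
    """Build discriminator targets for one training batch.

    Standard GAN: real -> 1, generated -> 0.
    RGAN (label inversion): real -> 0, generated -> 1, which pressures
    the generator toward the real distribution while the discriminator
    learns finer distinctions between authentic and synthetic samples.
    """
    real_label, fake_label = (0, 1) if reverse else (1, 0)
    return [real_label] * n_real + [fake_label] * n_fake

print(gan_labels(2, 2))                # standard GAN -> [1, 1, 0, 0]
print(gan_labels(2, 2, reverse=True))  # RGAN         -> [0, 0, 1, 1]
```

In a full implementation these targets would feed a binary cross-entropy loss for the discriminator, whose value is then used as feedback for the generator as described above.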

    Figure 2: Data pre-processing stage

    Figure 3: Proposed Reverse GAN (RGAN)

    3.4 Experimental Framework

    The generator’s new data is divided into three scenarios with data augmentation scales of 15%, 30%, and 45%. The generator’s synthetic data is combined with the original data as input for model training. The outcomes of each scenario will be compared to determine the GAN’s ability to improve sarcasm sentence detection. More detailed scenarios can be seen in Fig. 4.
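The number of synthetic samples per scenario can be computed directly. This sketch assumes, per the contributions listed in Section 1, that the percentages apply to the sarcasm class and that a fourth "balanced" scenario tops the sarcasm class up to the majority class size; the class counts shown are hypothetical.

```python
def synthetic_counts(n_sarcasm, n_majority):
    """Synthetic sarcasm samples per augmentation scenario:
    fixed 15/30/45% of the sarcasm class, plus a 'balanced' scenario
    that tops the sarcasm class up to the majority class size."""
    scenarios = {f"{p}%": round(n_sarcasm * p / 100) for p in (15, 30, 45)}
    scenarios["balanced"] = max(n_majority - n_sarcasm, 0)
    return scenarios

# Hypothetical class sizes resembling an imbalanced dataset
print(synthetic_counts(n_sarcasm=800, n_majority=2600))
# -> {'15%': 120, '30%': 240, '45%': 360, 'balanced': 1800}
```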

    Figure 4: Model training with the data generated by the generator combined with the original data

    The final evaluation compares the proposed RGAN model to the standard GAN model. The analysis covers the quality of the generated data, the resulting data distribution, and the effects observed when training the MLP model together with the original data. Visualization is performed to analyze the data generated by RGAN by reducing the dimensionality of the data. Because the characteristics of the RGAN data are quite complex, the t-Distributed Stochastic Neighbor Embedding (t-SNE) [33] algorithm was used to interpret the data visually. The t-SNE algorithm employs the following equations:

    Eq. (1) is used to determine the pairwise similarity of data points in high-dimensional space. It assigns a probability density to each pair of data points based on their Euclidean distance using a Gaussian kernel. Then, Eq. (2) is used to determine the similarity of data points in low-dimensional space. It assigns a probability density to each pair of data points using a Student’s t-distribution. Gradient descent is used iteratively to minimize the Kullback–Leibler (KL) divergence by adjusting the positions of the data points, following Eq. (3):
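The bodies of Eqs. (1)–(3) did not survive extraction. In standard t-SNE notation, with high-dimensional points $x_i$ and low-dimensional embeddings $y_i$, they are conventionally written as follows (reconstructed from the surrounding description, not copied from the original):

```latex
% Eq. (1): pairwise similarity in high-dimensional space (Gaussian kernel)
p_{j|i} = \frac{\exp\!\left(-\lVert x_i - x_j \rVert^2 / 2\sigma_i^2\right)}
               {\sum_{k \neq i} \exp\!\left(-\lVert x_i - x_k \rVert^2 / 2\sigma_i^2\right)}

% Eq. (2): pairwise similarity in low-dimensional space (Student's t-distribution)
q_{ij} = \frac{\left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}}
              {\sum_{k \neq l} \left(1 + \lVert y_k - y_l \rVert^2\right)^{-1}}

% Eq. (3): gradient of the KL divergence C = KL(P \Vert Q) used in gradient descent
\frac{\partial C}{\partial y_i}
  = 4 \sum_{j} \left(p_{ij} - q_{ij}\right)\left(y_i - y_j\right)
    \left(1 + \lVert y_i - y_j \rVert^2\right)^{-1}
```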

    Meanwhile, the F1-score and loss of the Multi-Layer Perceptron (MLP) model will serve to validate the quality of the data generated by the RGAN. The hyperparameters used in the MLP model are a learning rate of 0.0001 with the Adam optimizer, a batch size of 16, 100 epochs, early stopping with a patience of 10, and a seed value of 200. The augmentation method with RGAN is compared with augmentation using the original GAN, where the labels of the original and synthetic data are not reversed, and with NLPAug, one of the popular augmentation frameworks for text data [34].
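The early-stopping rule mentioned above (patience of 10) can be sketched independently of any framework. This is an illustrative stand-in, not the authors' training code; the loss sequence is hypothetical.

```python
def train_with_early_stopping(val_losses, patience=10):
    """Stop when validation loss has not improved for `patience` epochs.

    `val_losses` stands in for the losses observed during training.
    Returns (epoch at which training stopped, best validation loss).
    """
    best, wait, stopped_at = float("inf"), 0, len(val_losses)
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= patience:
                stopped_at = epoch + 1
                break
    return stopped_at, best

losses = [0.9, 0.7, 0.6] + [0.65] * 12   # plateau after epoch 3
print(train_with_early_stopping(losses, patience=10))  # -> (13, 0.6)
```

With a patience of 10, training halts 10 epochs after the last improvement, which is the behavior the MLP configuration above relies on to avoid overfitting within the 100-epoch budget.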

    4 Result and Discussion

    In this section, the results of the experiments are presented and discussed, focusing on the application of Generative Adversarial Networks (GANs) for data augmentation in sarcasm detection. The primary objective is to evaluate the impact of GAN-based augmentation on model performance, emphasizing the quality of the augmented data and its effect on classification accuracy. The analysis begins with an evaluation of data quality following the GAN augmentation process. This evaluation employs the loss values of the generator and discriminator as well as dimensionality reduction techniques to visualize the differences between real and synthetic data distributions. Subsequently, the discussion delves into the experimental results, comparing GAN augmentation with alternative techniques, such as NLPAug, and with unaugmented data. These experiments provide insights into the benefits and limitations of GAN augmentation, particularly in scenarios involving small datasets and class imbalance.

    4.1 Augmented Data Quality Evaluation

    Multidimensional features are utilized to store the information generated by the GAN generator. The evaluation of data quality takes place subsequent to data generation. Loss values from both the generator and discriminator serve as valuable indicators for assessing data quality. Furthermore, to gain deeper insights into the disparities between the distributions of real and synthetic data, the t-SNE technique is employed for dimensionality reduction. The results of this visualization technique are presented in Figs. 5–7.

    Figure 5: Generated data for iSarcasm dataset

    Figure 6: Generated data for SemEval-18 dataset

    Figure 7: Generated data for Ghosh dataset

    The resulting pattern shows the difference in data distribution between the standard GAN and the RGAN. The standard GAN tends to follow the original dataset's pattern and has a more defined data center point, whereas the RGAN produces a more varied pattern. Both types of GAN generate data close enough to the original to be considered similar. However, when the generated data outnumbers the original data, the RGAN produces a high proportion of outliers: in Fig. 5b, for example, the generated data far outnumbers the original data, whereas the distribution in Fig. 5a is more consistent. The learning rate affects the distance between the data distributions produced by the Reverse Generative Adversarial Network (RGAN). A lower learning rate yields a more faithful representation of the original data, which can introduce noise through near-duplicated features; an excessively high learning rate increases the feature distance and data variance, which can lead to overfitting [35]. Therefore, a learning rate of 0.001 is used with the Adam optimizer [36] to keep the generated distribution suitably balanced relative to the original features.
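The defining difference between the standard GAN and the RGAN, as described in the pipeline, is that the RGAN inverts the labels of original and synthetic data during discriminator training. The full training loop is not reproduced here; this minimal sketch only illustrates how the discriminator targets are constructed in the two settings.

```python
import numpy as np

def discriminator_targets(n_real, n_fake, reverse=False):
    """Build discriminator targets for one training batch.

    Standard GAN: real -> 1, synthetic -> 0.
    RGAN: labels are inverted (real -> 0, synthetic -> 1),
    which feeds the generator a reversed training signal.
    """
    real = np.ones(n_real)
    fake = np.zeros(n_fake)
    if reverse:
        real, fake = 1.0 - real, 1.0 - fake
    return np.concatenate([real, fake])

print(discriminator_targets(2, 2))                # standard: [1. 1. 0. 0.]
print(discriminator_targets(2, 2, reverse=True))  # RGAN:     [0. 0. 1. 1.]
```

Everything else in the adversarial loop (generator, discriminator, losses) is assumed unchanged between the two variants; only this target assignment differs.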

    4.2 Experimental Result

    In this section, the focus shifts to the experimental results and the implications of employing GAN augmentation. Note that the dataset augmentation was applied exclusively to the sarcasm class. A comparative analysis contrasts the outcomes of GAN augmentation with those of NLPAug and the original dataset. The data produced by the GAN generator takes the form of features approximating the distribution of sarcasm-class features in the DistilBERT-encoded space, whereas NLPAug replaces adjectives in the sarcasm class with synonyms and does not increase the amount of data. Figs. 8–10 show the class distributions in each dataset, with the change in the amount of sarcasm-class data after GAN augmentation. Table 3 presents the results of experiments on the iSarcasm dataset. The standard GAN with an augmentation percentage of 45% achieves the highest F1-score; compared with the RGAN, the standard GAN has a more stable data distribution. In the balanced sarcasm-class experiment, however, the RGAN clearly outperforms the standard GAN. In that setting, augmentation produced nearly 2.1 times as much synthetic data as original data, and the data generated by the standard GAN shows signs of noise, which reduces accuracy. The RGAN, in contrast, produces more varied data; although some data points lie farther from the original points, they do not corrupt the data. Nevertheless, to obtain a low loss value and a high F1-score, the data generated by the RGAN must be tuned to the original dataset. Compared with NLPAug, all GAN-augmented configurations achieve higher scores, implying that GAN augmentation is appropriate for datasets with small amounts of data and unbalanced classes.
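The two augmentation regimes discussed above (a fixed percentage of the sarcasm class vs. generating enough samples to balance the classes) can be captured by two small helpers. The class counts in the example are hypothetical; the paper only states that the balanced iSarcasm setting required roughly 2.1 times the original sarcasm data.

```python
def synthetic_count(n_sarcasm, pct):
    """Synthetic samples to generate when augmenting the sarcasm
    class by `pct` percent of its original size."""
    return round(n_sarcasm * pct / 100)

def balanced_count(n_sarcasm, n_non_sarcasm):
    """Synthetic samples needed to equalise the two classes
    (the 'balanced sarcasm class' setting)."""
    return max(0, n_non_sarcasm - n_sarcasm)

# Hypothetical iSarcasm-style imbalance: 45% augmentation vs.
# generating enough data to balance the classes outright.
print(synthetic_count(1000, 45))   # 450 synthetic samples
print(balanced_count(1000, 3100))  # 2100, ~2.1x the original class
```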

    Table 3:Experimental results on the iSarcasm dataset

    Figure 8:The difference in the amount of data in the sarcasm class in the iSarcasm dataset is based on the percentage of augmentation

    Figure 9:The difference in the amount of data in the sarcasm class in the SemEval-18 dataset is based on the percentage of augmentation

    Figure 10:The difference in the amount of data in the sarcasm class in the Ghosh dataset is based on the percentage of augmentation

    Experiments on the SemEval-18 dataset yielded similar results, shown in Table 4. The standard GAN has the highest F1-score and the lowest validation loss, implying that GAN augmentation also suits small but class-balanced datasets. However, when the GAN-generated data causes an imbalance between classes, accuracy tends to decrease, as in the 45% augmentation setting, where the sarcastic data far outnumbers the non-sarcastic data. Unlike iSarcasm, the SemEval-18 dataset favors the standard GAN under balanced sarcasm-class conditions, because the amount of generated data remains within a reasonable limit of below 15%. Meanwhile, augmentation with NLPAug did not increase the F1-score and tended to decrease it, indicating that NLPAug introduced noise into the data.

    Table 4:Experimental results on the SemEval-18 dataset

    Based on the experiments on the Ghosh dataset, Table 5 shows that augmentation with RGAN achieves the highest F1-score, at 69.01%. The balanced-class setting contains 22,725 sarcastic and 22,725 non-sarcastic samples. With a relatively balanced class distribution, the SemEval-18 and Ghosh datasets are consistent in that accuracy tends to decrease when the augmented sarcastic class far exceeds the non-sarcastic class. Under balanced-class conditions the RGAN attains a better F1-score, showing the same tendency as on the iSarcasm dataset. Augmentation with the standard GAN yields results similar to the RGAN, and NLPAug again produces consistently lower scores than GAN augmentation.

    Table 5:Experimental results on the Ghosh dataset

    In general, the data augmentation proposed in this research differs from GAN-BERT. In GAN-BERT, the data produced by the generator is used only as input to the discriminator, so it is unclear whether the GAN itself changes classification accuracy on sarcasm data. Here, in contrast, the generator produces new data points, similar in features to the original data, that are fed into the model as input, allowing the quality of the generated data to be evaluated directly. GAN-BERT uses unlabeled data on the generator side as support for unbalanced classes; however, an excessive amount of unlabeled data may cause the generator to produce data as if the classes were balanced, making the data even more unbalanced. Consequently, the approach here is to train the GAN model on data from the unbalanced classes. All experiments showed that GAN-based augmentation improved the model's ability to classify sarcasm compared with NLPAug. NLPAug does not generate new data, so the class imbalance remains; it is better suited to adding variety to balanced datasets such as Ghosh and yields no improvement at all on unbalanced datasets such as iSarcasm.

    A comparison and discussion between this research and other works in implementing data augmentation can be seen in Table 6.

    Table 6:Comparison between this research with other works

    In comparison to the studies presented in Table 6, this research offers a distinct approach to sarcasm detection through Generative Adversarial Networks (GANs) for data augmentation. While the studies mentioned focus primarily on external datasets, synonym replacement, or simple augmentation techniques, this research introduces a novel framework utilizing Reverse GAN (RGAN) in the context of sarcasm detection. The experiments on the iSarcasm, SemEval-18, and Ghosh datasets showcase the efficacy of GAN-based augmentation, particularly when the volume of synthetic data closely matches that of the original data. Notably, the proposed approach outperforms NLPAug in scenarios with small datasets and class imbalance, and RGAN, a less common technique, achieves performance comparable to the standard GAN. The ability to generate synthetic text data whose features closely match the original data sets this research apart, offering a more balanced and effective approach to augmenting text data for sarcasm detection and a robust solution for improving sarcasm detection accuracy across diverse augmentation scenarios.

    5 Conclusion

    1. The proposed novel framework for enhancing text data through the incorporation of additional data features has been successfully applied, improving the model's performance in identifying sarcasm within specific augmentation scenarios.

    2. Because each dataset has different characteristics, GAN-based augmentation can affect performance differently across datasets. Overall, the analysis shows that as long as the synthetic data does not exceed the amount of original data, GAN-based augmentation improves performance significantly compared with NLPAug.

    3.The utilization of the Reverse GAN technique,although not commonly practiced,delivered performance outcomes in sarcasm detection that are on par with those achieved using the standard GAN.

    In conclusion,this study has introduced a novel framework for enhancing text data through the incorporation of additional data features,demonstrating its success in improving model performance in sarcasm detection within specific augmentation scenarios.This study revealed that the impact of GAN-based augmentation on performance varies across datasets,with a consistent finding that GAN-based augmentation outperforms NLPAug when synthetic data does not significantly exceed the volume of original data.One of the key contributions of this research is the utilization of the Reverse GAN (RGAN) technique,a less common approach,which yielded performance results in sarcasm detection comparable to those achieved using the standard GAN.This suggests the effectiveness and versatility of RGAN in enhancing text data.

    Augmentation of the sarcasm class with a GAN tends to lose accuracy when the generated data far exceeds that of the non-sarcastic class. When augmentation of the sarcasm class must produce data many times the size of the original, as in the iSarcasm dataset, an RGAN is more advantageous. The SemEval-18 dataset yields the opposite result, demonstrating that the standard GAN is more advantageous when the added data reaches a balance point. The Ghosh dataset, meanwhile, demonstrates that a relatively balanced dataset does not require a large amount of synthetic data: Tables 3 and 4 show that the best augmentation results were obtained with generated data below 45%. Because of the relatively high level of difficulty, the potential for future research on augmenting text datasets remains broad. Producing synthetic text data in the form of text (rather than features) is a challenging task; currently, the Reverse Generative Adversarial Network (RGAN) cannot reconstruct feature representations back into text. Another challenge is the development of transformer models capable of reading feature-form input, as only a few state-of-the-art models currently accept input data in the form of features.

    Acknowledgement:None.

    Funding Statement:The authors received no specific funding for this study.

    Author Contributions:Study conception and design: Derwin Suhartono;data collection: Derwin Suhartono,Alif Tri Handoyo;analysis and interpretation of results: Franz Adeta Junior,Alif Tri Handoyo;draft manuscript preparation: Franz Adeta Junior.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: All datasets used in this paper are publicly available in GitHub repositories: the iSarcasm dataset (https://anonymous.4open.science/r/24639225-ac0e-4057-b2d4-16e7e50570d0/README.md), SemEval-2018 (https://github.com/Cyvhee/SemEval2018-Task3), and Ghosh (https://github.com/MirunaPislar/Sarcasm-Detection).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
