
    A Credit Card Fraud Detection Model Based on Multi-Feature Fusion and Generative Adversarial Network

    Computers, Materials & Continua, 2023, Issue 9

    Yalong Xie, Aiping Li*, Biyin Hu, Liqun Gao and Hongkui Tu

    1 College of Computer, National University of Defense Technology, Changsha, 410003, China

    2 Credit Card Department, Bank of Changsha, Changsha, 410016, China

    ABSTRACT Credit Card Fraud Detection (CCFD) is an essential technology for banking institutions to control fraud risks and safeguard their reputation. Class imbalance and the insufficient representation of credit card transaction feature data are two prevalent issues in the current CCFD research field, both of which significantly impact the performance of classification models. To address these issues, this research proposes a novel CCFD model based on Multi-feature Fusion and Generative Adversarial Networks (MFGAN). The MFGAN model consists of two modules: a multi-feature fusion module that integrates the static and dynamic behavior data of cardholders into a unified high-dimensional feature space, and a balance module based on a generative adversarial network that decreases the class imbalance ratio. The effectiveness of the MFGAN model is validated on two real credit card datasets. The impact of different class balance ratios on the performance of four resampling models is analyzed, and the contribution of the two modules to the performance of the MFGAN model is investigated via ablation experiments. Experimental results demonstrate that the proposed model outperforms state-of-the-art models in terms of recall, F1, and Area Under the Curve (AUC), which means that the MFGAN model can help banks find more fraudulent transactions and reduce fraud losses.

    KEYWORDS Credit card fraud detection; imbalanced classification; feature fusion; generative adversarial networks; anti-fraud systems

    1 Introduction

    With the development of digital banking and e-payment, credit cards have become one of the most popular payment methods. More and more people prefer to pay with credit cards when shopping online or offline. While the number of credit card transactions has increased significantly, so has the amount of money lost due to fraud. The Nilson Report predicted that by 2023, yearly global fraud losses would amount to $35.67 billion [1]. Therefore, the CCFD system has become a crucial requirement for financial institutions. Machine learning technology has been widely used in CCFD models. How to improve the performance of CCFD models and reduce fraud losses has been the focus of this field [2]. Some researchers have used or enhanced classical machine learning algorithms to handle the problem in CCFD, such as logistic regression [3–5], decision trees [6,7], support vector machines [8–10], and artificial neural networks [11,12]. In recent years, with the great success of deep learning techniques in computer vision and natural language processing, some researchers have adopted Deep Neural Networks (DNN) [13–15], Long Short-Term Memory networks (LSTM) [16,17], and other neural networks to detect fraudulent transactions.

    Even though research in the past has made significant progress and many algorithms or models have worked well in practice, CCFD is still challenging for the following reasons:

    (1) In general, the data related to cardholders can be divided into three types. First, basic information data, such as gender, age, marital status, occupation, etc. Second, transaction behavior data, such as transaction time, transaction amount, balance, the average number of transactions within 30 days, etc. Third, the operation behavior data of cardholders in the various financial service channels of the card-issuing bank, such as logging in to e-banking, purchasing financial products at the bank counter, reading financial information in the mobile banking app, etc. The basic information is called static features. The transaction and operation behavior data are called dynamic features. In real-world scenarios, an effective CCFD model should be built on a comprehensive analysis of heterogeneous multi-source feature data instead of a single type of feature data. Due to user privacy protections and a lack of data, most studies do not effectively use the different types of credit card feature data. For example, Carcillo et al. [18] and Forough et al. [19] only utilized the transaction behavior features of cardholders and ignored static features. In addition, Fiore et al. [20] and Gangwar et al. [21] treated different types of data as equally important features for learning, which makes it difficult for classification models to obtain high-dimensional hidden features that span different types of data. This paper proposes a fusion method for heterogeneous feature data to tackle this problem effectively. In addition to using static features and transaction behavior features, this work also utilizes operation behavior features, which help generate a more comprehensive feature representation of cardholders and enhance the ability to detect fraudulent transactions.

    (2) In actual credit card datasets, the number of fraudulent transactions is much lower than that of legitimate transactions. This phenomenon is known as class imbalance. Supervised classification models for fraud detection are known to be negatively impacted by the class imbalance problem. In an imbalanced dataset, the number of minority class examples may be so small that the learning algorithm discards them as noise and classifies every example as a member of the majority class [22]. This causes classification models to be biased toward the majority class in the training dataset [23]. Previous studies [24,25] adopted upsampling methods to address the class imbalance problem. Although the recall rate of these models for fraudulent samples was improved, they also increased the false positive rate for legitimate transactions, resulting in a large increase in investigation costs for banking institutions.

    In this study, all the challenges and limitations mentioned above are considered and alleviated to a certain degree. A novel CCFD model is presented to fuse multiple heterogeneous features and effectively alleviate the problem of class imbalance. The MFGAN model includes both a fusion module and a balance module. Initially, a Feedforward Neural Network (FNN) is used to obtain the representation of cardholders' static features. Two distinct Bi-directional Long Short-Term Memory networks (Bi-LSTM) are used to generate the representations of operation behavior features and transaction behavior features, respectively. Then, these three types of features are integrated through a merge layer to generate a unified, high-dimensional training input for the classification model. In the balance module, fraud samples are synthesized by a generative adversarial network and then merged with the original training data, constructing a more balanced, augmented training set on which a traditional classifier can achieve the desired effect.

    There are three main contributions of this work. Firstly, three distinct types of cardholder data are fused into a unified feature space for representation. In addition to static data and transaction behavior data, the operation behavior data of cardholders in various financial service channels is used for the first time. Secondly, several fraud instances are synthesized based on the actual distribution of fraud samples through the generative adversarial network and the Borderline Synthetic Minority Oversampling Technique (BSMOTE), which mitigates the class imbalance problem in CCFD. Lastly, a careful experimental evaluation on two real credit card datasets indicates that the proposed model performs better, especially in terms of recall rate.

    2 Related Works

    2.1 Credit Card Fraud Detection

    Machine learning techniques, particularly supervised learning techniques, are regarded as one of the most efficient approaches to the CCFD problem. One of the first applications of machine learning to CCFD was proposed by Dorronsoro et al. [26]. Bhattacharyya et al. [4] evaluated three machine learning approaches (support vector machines, random forests, and logistic regression) as part of an attempt to better detect credit card fraud. Several studies [2,11,15] reviewed state-of-the-art CCFD models, public datasets, performance evaluation metrics, and the advantages and disadvantages of classical machine learning models. Zhang et al. [14], Bahnsen et al. [27], and Lucas et al. [28] proposed feature engineering strategies and methods for credit card transaction data, which can provide more effective input data for fraud detection models. Soemers et al. [29] built a CCFD model that minimizes fraud losses and investigation costs based on contextual bandits and decision trees. Taha et al. [30] presented an approach for detecting fraudulent transactions using an optimized light gradient boosting machine with a Bayesian-based hyperparameter optimization algorithm.

    Several papers [13,15,31] have discussed deep learning for CCFD. Forough et al. [32] developed a CCFD model using sequence labeling based on deep neural networks and probabilistic graphical models. In a subsequent study [19], they also demonstrated that sequential models such as the LSTM and the Gated Recurrent Unit (GRU) performed better than non-sequential models. In contrast to the method proposed by Forough et al. [19], Li et al. [33] processed transaction data through a convolutional neural network. They constructed a deep representation learning model with a new loss function that considered distances and angles among features. The main shortcoming of most models was that they only used cardholders' time-based transaction behavior data and ignored their static data, or vice versa. Inadequate use of cardholders' operation behavior features resulted in a loss of recognition accuracy for fraudulent transactions. The method proposed in this study can effectively fuse the static and dynamic features of cardholders to generate a unified, high-dimensional feature representation, which helps classification models improve their overall performance.

    2.2 Imbalanced Data Learning Methods

    Class imbalance is a prevalent issue in real credit card datasets. If a standalone supervised learning model is trained on a highly imbalanced dataset, the model may shift toward the majority class, thereby reducing the prediction accuracy for the minority class. In Jensen's study [34], the technical problems associated with class imbalance in fraud detection were discussed. Resampling methods, including oversampling and undersampling, are commonly used to address the class imbalance problem. The Synthetic Minority Oversampling Technique (SMOTE) [35] and BSMOTE [36] are well-known oversampling methods that synthesize minority class samples using specific strategies and are widely used in CCFD. Although the SMOTE method can improve the recall rate of the minority class to a certain extent, it may also increase the false positive rate, hence increasing the investigation expenses of banking institutions. One reason for this phenomenon is that the minority class samples synthesized by SMOTE may not match the actual distribution of the minority class.

    Some other approaches to the class imbalance problem include cost-sensitive learning methods [6,37] and ensemble learning methods [38,39]. Akila et al. [40] presented a cost-sensitive, risk-induced Bayesian inference bagging model and a cost-sensitive weighted voting combiner for CCFD. The disadvantage of cost-sensitive learning methods is that the cost matrix cannot be accurately calculated and must be estimated by business experts. Shen et al. [41] and Niu et al. [42] combined resampling methods with ensemble learning methods, which simultaneously optimize the classifier and the training data distribution. The downside is that such models need more computing resources and training time.

    2.3 Generative Adversarial Networks

    The Generative Adversarial Network (GAN) [43] was proposed by Goodfellow et al. in 2014. After several years of development, GAN has been successfully applied in image processing, object detection, video generation, and other fields. A GAN consists of two competing neural networks, a generator G and a discriminator D. G generates new candidates, while its competitor D evaluates the quality of the candidates.

    To address the class imbalance problem in CCFD, Fiore et al. [20] and Douzas et al. [44] tried to synthesize minority samples with a GAN and a conditional GAN, respectively. Their methods are usually effective. However, when there are not enough minority instances in the training set, their performance may drop significantly because the GAN is prone to overfitting and the generator cannot be trained effectively. There may then be a large difference between the distributions of simulated minority samples and actual minority samples, which affects the performance of classification models.

    In actual CCFD scenarios, obtaining minority samples is difficult and restricted. Therefore, this study offers a strategy that combines BSMOTE with GAN to synthesize minority samples, improve the balance ratio of the training set, and ultimately improve the performance of models.

    3 Methodology

    3.1 Multi-Feature Fusion Module

    The multi-feature fusion module (as shown in Fig. 1) is mainly used to fuse three distinct types of features into a unified feature space and generate a high-dimensional feature representation that describes the static features and dynamic behavior features of cardholders. To obtain the feature representation of a cardholder's static data, an FNN model is constructed, and two Bi-LSTM models are adopted to obtain the feature representations of the cardholder's dynamic transaction behavior data and operation behavior data, respectively. Finally, these three distinct features are concatenated to generate a single, high-dimensional feature representation that classification models can use.

    Figure 1: Structure of the multi-feature fusion module

    3.1.1 FNN for Static Features Fusion

    The set B = (b1, b2, ···, bk) represents a cardholder's static features, where each element b1, b2, ···, bk represents a static attribute of the cardholder, such as gender, occupation, marital status, etc. Each static attribute bi is mapped to a one-hot vector. Then, an FNN model is built with an input layer, an output layer, and three hidden layers with multiple neurons. The conversion and computation from the n-th layer of neurons to the (n+1)-th layer of neurons is shown in Eq. (1):

    a^(n+1) = f^(n+1)(W^(n+1) a^(n) + b^(n+1))    (1)

    f^(n+1) is the activation function of the (n+1)-th layer of neurons, W^(n+1) is the weight matrix between the n-th layer and the (n+1)-th layer of neurons, a^(n) is the output of the n-th layer, and b^(n+1) is the bias. By using the FNN model, the set B is transformed into a feature vector p, which helps to build a high-dimensional hidden representation across different static features. For example, a cardholder's job may strongly correlate with his or her gender.
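    To make this concrete, the following is a minimal Keras sketch of such a static-feature encoder; the layer widths, the one-hot input dimension, and the size of the output vector p are illustrative assumptions rather than values reported in the paper:

    import tensorflow as tf

    def build_static_encoder(num_static_features=32, embed_dim=16):
        """Map the one-hot encoded static attributes B to a dense feature vector p."""
        inputs = tf.keras.Input(shape=(num_static_features,), name="static_onehot")
        x = tf.keras.layers.Dense(64, activation="relu")(inputs)  # hidden layer 1
        x = tf.keras.layers.Dense(64, activation="relu")(x)       # hidden layer 2
        x = tf.keras.layers.Dense(32, activation="relu")(x)       # hidden layer 3
        p = tf.keras.layers.Dense(embed_dim, activation="relu", name="p")(x)  # static representation p
        return tf.keras.Model(inputs, p, name="static_fnn")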

    3.1.2 Bi-LSTM for Transaction Behavior Features Fusion

    Jha et al. [3] aggregated transactions to capture the purchasing behavior of consumers prior to each transaction. This aggregated data was then used for model estimation to identify fraudulent transactions. They found that aggregated transaction behavior features were more helpful than raw transaction features. Referencing their research, this paper develops a Bi-LSTM model to capture the dynamic payment behavior of users over time and obtain the hidden state q.

    The set E = (e1, e2, ···, eT) is used to represent the sequence of a cardholder's credit card transactions during a specific time period [1, T], E ∈ R^(K×T), where each element in E represents a transaction record of the cardholder at a specific time. Each transaction record contains K attributes, including transaction type, transaction amount, counterparty, etc., and each attribute represents a feature. h_t is the representation vector of the cardholder's transaction behavior features at time t, and its calculation is shown in Eq. (2), where h_t^f is the output of the forward LSTM network and h_t^b is the output of the backward LSTM network. The operator ⊕ denotes concatenation.

    h_t = h_t^f ⊕ h_t^b    (2)

    At time t, e_t is the input to the LSTM cell, c_t is the value of the memory cell, h_t is the output of the LSTM cell, and h_(t-1) is the output of the LSTM cell at the previous time step. The LSTM unit is computed as follows:

    i_t = σ(W_i [h_(t-1), e_t] + b_i)
    f_t = σ(W_f [h_(t-1), e_t] + b_f)
    o_t = σ(W_o [h_(t-1), e_t] + b_o)
    c̃_t = tanh(W_c [h_(t-1), e_t] + b_c)
    c_t = f_t ⊙ c_(t-1) + i_t ⊙ c̃_t
    h_t = o_t ⊙ tanh(c_t)

    W_c, W_i, W_f, and W_o represent the weight matrices, while b_c, b_i, b_f, and b_o represent the biases; σ denotes the sigmoid function and ⊙ denotes element-wise multiplication.

    i_t, f_t, and o_t stand for the input gate vector, the forget gate vector, and the output gate vector of the dynamic payment behavior feature at time t, respectively. Lastly, a pooling layer is adopted to integrate the dynamic behavior features of all of the cardholder's transactions within time T into the vector q.
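    As a concrete illustration, a minimal Keras sketch of such a transaction-sequence encoder is given below; the sequence length, the number of transaction attributes K, the LSTM width, and the choice of max pooling are illustrative assumptions, since the paper does not report these details here:

    import tensorflow as tf

    def build_transaction_encoder(seq_len=30, num_tx_features=17, lstm_units=32):
        """Encode a transaction sequence E = (e_1, ..., e_T) into the behavior vector q."""
        inputs = tf.keras.Input(shape=(seq_len, num_tx_features), name="transactions")
        # Forward and backward LSTM outputs are concatenated at every time step (h_t = h_t^f ⊕ h_t^b)
        h = tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(lstm_units, return_sequences=True))(inputs)
        # A pooling layer integrates the per-transaction states over the window [1, T] into q
        q = tf.keras.layers.GlobalMaxPooling1D(name="q")(h)
        return tf.keras.Model(inputs, q, name="transaction_bilstm")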

    3.1.3 Bi-LSTM for Operation Behavior Features Fusion

    The majority of credit card products and services in China are provided by banking institutions. Generally speaking, while using a bank's credit card products, the cardholder will also use the debit card, e-banking, mobile banking, and other financial service products provided by the same bank. Therefore, in addition to the user's credit card-related data, the issuing bank can also collect a large amount of operation behavior data about the user in other financial service channels, such as the user's deposit and withdrawal records at Automated Teller Machines (ATM), transfer records through e-banking, records of browsing financial information via the mobile banking app, etc. By adding this operation information, the issuing bank can create a more accurate baseline for cardholders and find fraudulent transactions more easily.

    The set O = (op1, op2, ···, op_T′) is used to represent the operation behavior sequence of a cardholder in the various financial service channels of the issuing bank within a specific time period [1, T′], O ∈ R^(K′×T′), where each element in O represents an operation record of the cardholder. Each operation record includes K′ attributes (such as time, channel type, operation type, etc.), and each attribute represents a feature. The operation behavior data is time series data, like the transaction behavior data. Therefore, as shown in Eq. (5), this research constructs a Bi-LSTM network similar to the network structure described in Section 3.1.2 to embed the operation behavior sequence O into a feature vector r.

    Finally, the static feature vector p, the transaction behavior feature vector q, and the operation behavior feature vector r are concatenated to form a unified high-dimensional feature representation vector s that integrates the three types of features for the cardholder, as depicted in Eq. (6):

    s = p ⊕ q ⊕ r    (6)
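    A possible way to wire the three encoders together is sketched below, assuming encoder models such as those above (the operation-behavior encoder can mirror the transaction encoder) and adding a sigmoid classification head; the head is an assumption, since the paper only states that s is fed to a classification model:

    import tensorflow as tf

    def build_fusion_model(static_enc, tx_enc, op_enc):
        """Concatenate p, q and r into the unified representation s and score the transaction."""
        s = tf.keras.layers.Concatenate(name="s")([static_enc.output, tx_enc.output, op_enc.output])
        fraud_prob = tf.keras.layers.Dense(1, activation="sigmoid", name="fraud_prob")(s)
        return tf.keras.Model(inputs=[static_enc.input, tx_enc.input, op_enc.input],
                              outputs=fraud_prob, name="mfgan_fusion")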

    3.2 Balance Module

    To reduce the negative effect of the class imbalance problem on the performance of CCFD models, this paper presents a balance module based on BSMOTE and GAN to synthesize minority class samples. The data flow diagram of the balance module is shown in Fig. 2.

    Figure 2: Data flow diagram of the balance module

    In contrast to most previous research, the MFGAN model does not use GAN alone in the balance module, as experiments show that a resampling approach based on the combination of BSMOTE and GAN performs better than a resampling method using only GAN. Details are provided in the experiments and results section below.

    3.2.1 BSMOTE

    SMOTE and BSMOTE are well-known resampling algorithms. The key steps of the SMOTE algorithm are as follows:

    Step 1: For each sample x_i from the minority class set S_min, randomly select a sample x_neighbor from the k nearest neighbors of x_i.

    Step 2: Randomly select a point along the line connecting the samples x_i and x_neighbor to generate a new sample x_new. The procedure for calculating x_new is shown in Eq. (7), where λ is a random number in [0, 1]:

    x_new = x_i + λ (x_neighbor − x_i)    (7)

    Step 3: Repeat Steps 1 and 2 until the desired number of minority samples has been synthesized.
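    The interpolation in Eq. (7) can be written in a few lines of NumPy. The sketch below synthesizes a single sample and is meant only to illustrate the three steps above; k and the random seed are arbitrary choices:

    import numpy as np

    def smote_sample(x_i, X_min, k=5, rng=None):
        """Synthesize one minority sample from x_i via x_new = x_i + lam * (x_neighbor - x_i)."""
        if rng is None:
            rng = np.random.default_rng(0)
        d = np.linalg.norm(X_min - x_i, axis=1)      # distances to every minority sample
        neighbors = np.argsort(d)[1:k + 1]           # k nearest neighbors (index 0 is x_i itself)
        x_neighbor = X_min[rng.choice(neighbors)]    # Step 1: pick one neighbor at random
        lam = rng.uniform(0.0, 1.0)                  # Step 2: random interpolation factor in [0, 1]
        return x_i + lam * (x_neighbor - x_i)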

    The drawback of SMOTE is that the synthesized samples may weaken the class boundary and increase the possibility of overlap between different classes in the original dataset, which may cause the model to misclassify boundary samples [36]. BSMOTE improves on SMOTE.

    Firstly, BSMOTE divides the minority class into a "noise set," a "safe set," and a "dangerous set." For each sample x_i from the minority set S_min, the k nearest neighbors of x_i are picked out from the training set S. x_i is assigned to the "noise set" if all k neighbors are majority samples. x_i is assigned to the "safe set" if the number of majority samples among the k neighbors is less than k/2. Otherwise, x_i is assigned to the "dangerous set." Then, unlike SMOTE, BSMOTE randomly selects the base samples x_i for interpolation from the "dangerous set" alone. Finally, a new sample is synthesized based on Eq. (7).
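    In practice this partitioning and synthesis does not have to be re-implemented by hand; the imbalanced-learn library ships a BorderlineSMOTE class. The snippet below is a usage sketch on a toy dataset standing in for the fused training features; the class weights and sampling ratio are illustrative, not the settings used in the paper:

    from collections import Counter
    from sklearn.datasets import make_classification
    from imblearn.over_sampling import BorderlineSMOTE

    # Toy imbalanced dataset standing in for the fused credit card features.
    X, y = make_classification(n_samples=5000, n_features=20, weights=[0.94, 0.06], random_state=42)

    # Oversample only around the borderline ("dangerous") region of the minority class;
    # sampling_strategy=0.5 requests a 1:2 minority-to-majority ratio after resampling.
    bsmote = BorderlineSMOTE(kind="borderline-1", k_neighbors=5, sampling_strategy=0.5, random_state=42)
    X_res, y_res = bsmote.fit_resample(X, y)
    print(Counter(y), "->", Counter(y_res))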

    3.2.2 Generative Adversarial Networks

    The generative adversarial network consists of two modules: the generator G and the discriminator D. Typically, both G and D are deep neural networks [45]. G learns the probability distribution of the actual samples and generates new candidate samples from a random noise input z. D judges whether or not a sample instance was synthesized by the generator. The goal of D is to distinguish actual samples from synthetic samples as accurately as possible, while the purpose of G is to generate instances that resemble actual samples closely enough to deceive D. In this procedure, G and D compete against each other and continuously improve their learning ability until a balance is reached. The discriminator D is trained to minimize its prediction error, and the generator G is trained to maximize the prediction error of D. The competition between G and D can be formalized as the minimax game shown in Eq. (8):

    min_G max_D V(D, G) = E_(x~p_data)[log D(x)] + E_(z~p_z)[log(1 − D(G(z)))]    (8)

    p_data represents the distribution of actual samples, and p_z represents the distribution of noise samples. In an ideal situation, after numerous iterations of training, each sample x_g synthesized by the generator can perfectly trick the discriminator; that is, the discriminator cannot accurately distinguish whether x_g is an actual sample or not. Maintaining a balance between the generator and the discriminator is crucial to training a GAN. If the discriminator improves too quickly, the generator will be unable to keep up, resulting in the failure of the GAN's training. Therefore, training a GAN is known to be a non-trivial task [20,46].
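    For reference, a minimal TensorFlow 2.x training loop for such a GAN over tabular fraud feature vectors is sketched below. The layer sizes, optimizer settings, and the non-saturating generator loss are common defaults and are assumptions, not the exact configuration reported in the paper; X_fraud is assumed to be a NumPy array of minority-class feature vectors:

    import tensorflow as tf

    def build_gan(feature_dim, noise_dim=32):
        """Generator G: noise z -> synthetic sample x_g; discriminator D: sample -> P(real)."""
        G = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(noise_dim,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(feature_dim)])
        D = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(feature_dim,)),
            tf.keras.layers.Dense(1, activation="sigmoid")])
        return G, D

    def train_gan(X_fraud, epochs=200, batch_size=64, noise_dim=32):
        """Alternately update D (minimize its error) and G (maximize D's error) as in Eq. (8)."""
        G, D = build_gan(X_fraud.shape[1], noise_dim)
        bce = tf.keras.losses.BinaryCrossentropy()
        g_opt, d_opt = tf.keras.optimizers.Adam(1e-4), tf.keras.optimizers.Adam(1e-4)
        data = tf.data.Dataset.from_tensor_slices(X_fraud.astype("float32")).shuffle(10000).batch(batch_size)
        for _ in range(epochs):
            for real in data:
                z = tf.random.normal([tf.shape(real)[0], noise_dim])
                with tf.GradientTape() as d_tape, tf.GradientTape() as g_tape:
                    fake = G(z, training=True)
                    d_real, d_fake = D(real, training=True), D(fake, training=True)
                    d_loss = bce(tf.ones_like(d_real), d_real) + bce(tf.zeros_like(d_fake), d_fake)
                    g_loss = bce(tf.ones_like(d_fake), d_fake)  # non-saturating generator loss
                d_opt.apply_gradients(zip(d_tape.gradient(d_loss, D.trainable_variables), D.trainable_variables))
                g_opt.apply_gradients(zip(g_tape.gradient(g_loss, G.trainable_variables), G.trainable_variables))
        return G  # sample new fraud instances with G(tf.random.normal([n, noise_dim]))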

    3.2.3 Balance Algorithm Based on BSMOTE and GAN

    To compensate for the fact that the GAN method does not work well when there are not enough fraud samples, this work proposes a balance technique based on the combination of BSMOTE and GAN (as shown in Fig. 2). The balance algorithm mainly includes three stages. Firstly, BSMOTE is used to synthesize a portion of instances to increase the number of fraud samples. Then, the actual and synthesized fraud samples are merged and used to train the GAN. Finally, additional fraud samples are generated by the GAN to make the training set class balanced. The detailed steps are as follows:

    Step 1: The original imbalanced training set T is preprocessed by the multi-feature fusion module to produce the transformed training set T′, which is then split into a fraudulent sample set S_fra and a legitimate sample set S_legal.

    Step 2: Using the training set T′ as input, BSMOTE is utilized to synthesize N_b fraud samples, which are represented by the set S_bsm, with N_b = |S_bsm|.

    Step 3: The actual fraud sample set S_fra and the synthetic fraud sample set S_bsm are fed into the GAN, the GAN is trained until convergence, and then N_g fraud samples are synthesized by the generator of the GAN; these are represented by the set S_gan.

    Step 4: S_fra, S_bsm, S_gan, and S_legal are merged to create a new balanced training set T′′.

    The pseudo-code of the balance method is shown in Algorithm 1.
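    Since Algorithm 1 itself is not reproduced here, the following sketch outlines the four-step flow in Python. It assumes NumPy arrays with binary labels (fraud = 1) and takes the GAN sampling step as a callable (for example, a wrapper around the train_gan sketch above); the helper names and the BSMOTE settings are illustrative:

    import numpy as np
    from imblearn.over_sampling import BorderlineSMOTE

    def balance_training_set(X_tr, y_tr, n_bsmote, sample_fraud_from_gan, fraud_label=1):
        """Sketch of the balance flow: BSMOTE first, then the GAN, then merge (Steps 1-4)."""
        # Step 1: split the fused training set T' into fraud (S_fra) and legitimate (S_legal) samples
        X_fra, X_legal = X_tr[y_tr == fraud_label], X_tr[y_tr != fraud_label]
        # Step 2: synthesize N_b fraud samples with BSMOTE (the set S_bsm)
        target = {fraud_label: len(X_fra) + n_bsmote}
        X_aug, y_aug = BorderlineSMOTE(sampling_strategy=target, random_state=0).fit_resample(X_tr, y_tr)
        X_bsm = X_aug[y_aug == fraud_label][len(X_fra):]        # synthetic samples are appended last
        # Step 3: train the GAN on S_fra ∪ S_bsm and sample N_g fraud instances (the set S_gan)
        n_gan = max(len(X_legal) - len(X_fra) - n_bsmote, 0)    # enough to balance the two classes
        X_gan = sample_fraud_from_gan(np.vstack([X_fra, X_bsm]), n_gan)
        # Step 4: merge S_fra, S_bsm, S_gan and S_legal into the balanced training set T''
        X_bal = np.vstack([X_fra, X_bsm, X_gan, X_legal])
        y_bal = np.concatenate([np.full(len(X_bal) - len(X_legal), fraud_label),
                                np.full(len(X_legal), 1 - fraud_label)])  # assumes labels are {0, 1}
        return X_bal, y_bal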

    4 Experiments

    4.1 Dataset Description

    This research compares the performance of the proposed model and state-of-the-art models on two real credit card datasets. The first is a public dataset from the University of California, Irvine (UCI), while the second is a private dataset from a bank in China. Due to concerns about trade secrets and privacy, the provider of the private dataset did not explain in detail how the data was collected and how fraudulent transactions were labeled. They also did not state how many fraudulent transactions actually occur in their real business. Some statistical information about these two datasets is presented in Table 1.

    (1) The UCI dataset [47]. This dataset contains 30000 instances, 6636 of which are fraudulent and 23364 of which are legitimate. The fraud rate of the dataset is 22.12%. Each payment record has 23 features. There are five static features, like gender, education level, credit limit, etc., and 17 transaction behavior features, like the history of past payments, the amount of the billing statement, etc.

    (2) The private dataset. This dataset contains 34756 instances, including 2091 fraudulent instances and 32665 legitimate instances. The fraud rate is 6.02%, which shows that the dataset is highly imbalanced. Each instance of this dataset represents a customer's credit card transaction record. Each record has 40 features, including eight static features (like age, gender, marital status, etc.) and 32 transaction behavior features (like the transaction amount, the number of transactions in the past 30 days, the average transaction amount in the past 30 days, etc.). In addition, the issuing bank provides 272584 records of cardholders' operation behavior across various financial service channels. Each record comprises six attributes (such as customer ID, operation type, etc.). However, this operation behavior data lacks labels, so it needs to be processed by hand before classification models can utilize it.

    Table 1: Dataset description

    4.2 Performance Measures

    The AUC of the receiver operating characteristic curve is often used to evaluate the performance of imbalanced classification models, as in the studies by Douzas et al. [44] and Singla et al. [48]. AUC is also commonly applied in CCFD research; for example, Esenogho et al. [16] and Fang et al. [49] used AUC as the key performance indicator for their proposed models. This research adopts AUC, F1, recall, and precision as performance metrics to simplify comparisons with baseline and state-of-the-art models. The calculation methods for recall, precision, and F1 are depicted in Eqs. (9)–(11), where TP indicates a fraudulent transaction is predicted to be fraudulent, FP denotes a legitimate transaction is predicted to be fraudulent, and FN means a fraudulent transaction is predicted to be legitimate:

    Recall = TP / (TP + FN)    (9)
    Precision = TP / (TP + FP)    (10)
    F1 = 2 × Precision × Recall / (Precision + Recall)    (11)

    Eq. (12) shows how AUC is computed, where D+ and D− denote the collections of fraudulent transactions and legitimate transactions, respectively, f(x) is the score assigned by the classifier, and I(·) is the indicator function:

    AUC = (1 / (|D+|·|D−|)) Σ_(x+∈D+) Σ_(x−∈D−) [ I(f(x+) > f(x−)) + 0.5·I(f(x+) = f(x−)) ]    (12)
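    As a reference implementation, these four metrics can be computed with scikit-learn as sketched below, where y_true is the label array and y_score is the array of predicted fraud probabilities; the 0.5 decision threshold is an illustrative default, not a value specified in the paper:

    import numpy as np
    from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

    def evaluate(y_true, y_score, threshold=0.5):
        """Recall, precision, F1 (Eqs. (9)-(11)) and AUC (Eq. (12)), with fraud as the positive class."""
        y_pred = (np.asarray(y_score) >= threshold).astype(int)
        return {"recall": recall_score(y_true, y_pred),        # TP / (TP + FN)
                "precision": precision_score(y_true, y_pred),  # TP / (TP + FP)
                "f1": f1_score(y_true, y_pred),
                "auc": roc_auc_score(y_true, y_score)}         # ranking-based, uses the raw scores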

    4.3 Experimental Design

    4.3.1 Baseline Methods

    This paper constructs the following four baseline models based on the latest literature. It is well known that the setting of hyperparameters significantly impacts the performance of classification models, so the grid search technique was used to set the hyperparameters in this study and ensure that these baseline models achieve the highest AUC and F1 on the validation dataset.

    (1) The DNN model. As the base classifier, a deep neural network model is built and combined with different balance techniques, such as SMOTE and BSMOTE. The number of layers is a basic hyperparameter: models with too few layers are prone to underfitting, while models with too many layers are prone to overfitting. Five network structures with three to seven layers were evaluated, and the performance of the model is most stable when the number of layers is set to four.

    (2) The Support Vector Machine with Information Gain (SVMIG) model. Following the method proposed by Poongodi et al. [9], an SVMIG model is constructed in this work. The input of the SVMIG model is the raw feature data without fusion.

    (3) The LSTM model. This paper implements an LSTM model with an attention mechanism according to the method presented by Benchaji et al. [17]. The performance of the LSTM model is then compared with that of the DNN model. The LSTM model contains six layers, and its input is the fused feature data.

    (4) The GAN model. Following the method described in the study by Fiore et al. [20], a GAN-based resampling module is constructed to replace the balance module of the MFGAN model. The generator and discriminator of the GAN model contain three layers each. Except for the output layer, the number of neurons in each layer is between 30 and 60.

    4.3.2 Model Implementation and Experimental Details

    (1) The multi-feature fusion module. The FNN model for static feature fusion consists of four layers. There are five and seven layers in the Bi-LSTM models for transaction behavior and operation behavior feature fusion, respectively. The number of neurons in each layer is less than 100. Learning rates on a logarithmic grid (1×10⁻⁴, 5×10⁻³, and 1×10⁻³) are tested for the different datasets.

    (2) The balance module. The generator G and the discriminator D are three-layer networks. The number of neurons in each layer varies slightly according to the dataset used, but never exceeds 80. Rectified Linear Unit (ReLU) and sigmoid are the activation functions of G and D, respectively. Both models are optimized with Adam, and their learning rates are 1×10⁻⁴ and 5×10⁻³ on the UCI and private datasets, respectively. The hyperparameter δ is tuned in the range [0.05, 0.5].

    (3) Dataset splitting. The original imbalanced dataset was split into two parts: 80% of it was training data (S_tr), and 20% was test data (S_te). Firstly, the balance module was used to optimize S_tr and generate a new, balanced training set. Secondly, this balanced set was used to train the model and adjust its parameters. Five-fold cross-validation was utilized to find the best hyperparameters for the classification model (a splitting and cross-validation sketch is given after this list). Finally, the performance of the different classification models was evaluated on the imbalanced set S_te.

    (4) Hardware and software environments for experimentation. All experiments in this paper were performed on a computer with 16 GB of RAM, an NVIDIA GeForce GTX 1080 Ti 11 GB GPU, and an Intel Core i7-9700 CPU @ 3.00 GHz running Windows 10 Professional with Python 3.6, Anaconda 4.5, and TensorFlow 2.4.
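    The splitting and cross-validation procedure from item (3) can be expressed with scikit-learn as in the sketch below; the toy dataset, random seeds, and fraud ratio are placeholders for the fused feature matrix and labels described above, not the paper's actual data:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split, StratifiedKFold

    # Placeholder for the fused feature matrix and labels produced by the multi-feature fusion module.
    X, y = make_classification(n_samples=10000, n_features=20, weights=[0.94, 0.06], random_state=0)

    # 80/20 split: S_tr is later optimized by the balance module, S_te stays imbalanced for evaluation.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)

    # Five-fold cross-validation on the training data for hyperparameter selection.
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    for train_idx, val_idx in skf.split(X_tr, y_tr):
        X_fold_tr, X_fold_val = X_tr[train_idx], X_tr[val_idx]
        y_fold_tr, y_fold_val = y_tr[train_idx], y_tr[val_idx]
        # ... train a candidate configuration on the fold's training part and score it on the validation part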

    5 Experimental Results and Discussion

    5.1 Comparative Analysis of the Results of MFGAN and Baseline Models

    The experimental results are summarized in Table 2. DNN denotes that the raw data without feature fusion is used as the DNN model's input. DNN (fus) indicates that the fused features of the dataset are obtained first and then used as the input of the DNN model. DNN (SMOTE) means that SMOTE is adopted to balance the training set. The following observations can be made based on the experimental results on the UCI dataset:

    (1) For the raw data without feature fusion, SVMIG achieved a slightly higher recall rate than DNN, but its precision, F1 score, and AUC were lower than DNN's (by 8.37% for precision, 4.57% for F1 score, and 2.6% for AUC). This indicates that DNN was better at fitting complex features than SVMIG.

    (2) The performance of DNN (fus) was better than that of DNN: the recall rate, precision, F1 score, and AUC increased by 3.31%, 0.91%, 2.14%, and 1.45%, respectively. These results confirm the effectiveness of the multi-feature fusion module from another angle. For models that used fused features as input, the performance of LSTM (fus) and DNN (fus) was basically the same, and LSTM did not show a significant advantage.

    (3) The DNN (fus, SMOTE) and DNN (fus, BSMOTE) models performed better than DNN (fus). When the SMOTE and BSMOTE algorithms were used to balance the training dataset, the recall rate of the DNN model increased by 1.86% and 3.22%, respectively, while the F1 score did not decrease. This indicates that the balance algorithms helped the classification model learn from minority samples and identify them more accurately.

    (4) The GAN model achieved the best recall rate (62%) and AUC (76.64%) among the baseline models. Compared to DNN (fus, SMOTE) and DNN (fus, BSMOTE), the AUC of GAN improved by 1.46% and 1.22%, respectively, and the recall rate increased by 6.49% and 5.13%, respectively. This indicates that the minority samples synthesized by the GAN-based balance module were more consistent with the distribution of actual minority samples; in other words, the GAN model generated better fraud samples.

    (5) The precision of the classification models decreased to varying degrees when resampling algorithms were used. Compared to DNN (fus), the precision of DNN (fus, SMOTE) and DNN (fus, BSMOTE) decreased by 0.6% and 1.33%, respectively. Although the GAN and MFGAN models lost 4.92% and 4.49% of their precision, their recall rates increased by 8.35% and 11.41%, respectively. The direct and indirect reputational losses caused by missing fraudulent transactions are far greater than the increased investigation costs resulting from misjudging legitimate transactions, so banks can accept a slight drop in precision in exchange for a higher recall rate on fraudulent transactions. The recall rate improvement of MFGAN is 2.5 times the decrease in its precision, which will help banks reduce fraud losses.

    (6) The MFGAN model proposed in this study integrated the advantages of GAN and BSMOTE, alleviated the class imbalance problem in CCFD by synthesizing higher-quality samples for the minority class, and achieved a more stable and better performance. MFGAN improved the recall rate, F1 score, and AUC by 3.06%, 1.34%, and 0.64% compared with the best baseline model. Compared with the classic balance algorithm SMOTE, it increased the recall rate, F1 score, and AUC by 9.55%, 1.38%, and 2.1%, respectively.

    Table 2: Experimental results of MFGAN and baseline models (%)

    The experimental results on the private dataset showed a similar pattern to those on the UCI dataset. The following points should be noted:

    (1) Compared to DNN, DNN (fus) increased the recall rate, precision, F1 score, and AUC by 4.31%, 2.7%, 3.61%, and 1.9%, respectively, indicating that the fusion of different types of cardholder features contributed to improving the performance of the classification model. It also shows that the multi-feature fusion module can help classification models obtain a better and more unified feature representation than raw data.

    (2) After the training dataset was processed by the balance methods, the recall rate, precision, and F1 score of the classification model improved to varying degrees. For example, compared to DNN (fus), the recall rate, precision, and F1 score of the model with SMOTE increased by 4.02%, 5.06%, and 6.12%, respectively; the model with BSMOTE increased by 3.64%, 5.61%, and 6.62%, respectively; and the model with GAN performed better than both of them, reaching 5.45%, 5.98%, and 7.32%, respectively.

    (3) The MFGAN model achieved the best values on all evaluation metrics. Its recall rate, precision, F1 score, and AUC were 1.73%, 2.43%, 2.76%, and 0.86% higher than those of the GAN model, which obtained the best performance among the baseline models. Compared to the traditional DNN model without feature fusion and class balancing, the improvement of MFGAN was even larger, reaching 11.49%, 11.11%, 13.69%, and 5.86% on the recall rate, precision, F1 score, and AUC, respectively.

    5.2 Comparative Analysis of the Results with Different Numbers of Synthesized Samples

    The balance module of MFGAN can create training datasets with different balance ratios by setting the number of minority samples to be synthesized. In general, the balance ratio within the training dataset influences the performance of the classification model, so it would be beneficial for banks to know how many synthesized samples should be added to the training set for the classification model to work best. For the UCI dataset, this work tested eight different numbers of synthetic samples N_g based on the number of fraudulent samples N_t within the training dataset: 1/16, 1/8, 1/4, 1/2, 1, 3/2, and 2 times N_t, plus the number of samples needed to make the classes balanced. The experimental results are shown in Tables 3 and 4. As shown in Fig. 3, the results are also presented as a line graph, which makes them easier to compare.

    Table 3: Changes in AUC and F1 score as the number of generated examples (N_g) varies (%). The bold values indicate the best value in the corresponding column

    As shown in Fig. 3A, when an augmented training set was used, the AUC of GAN and MFGAN increased rapidly and then remained stable, while the AUC of SMOTE and BSMOTE began to decline after a slight increase. This indicates that the balance methods based on generative adversarial networks were more stable in terms of AUC. For the F1 score, as depicted in Fig. 3B, the performance of the models with SMOTE and BSMOTE grew slightly at first, then declined rapidly, ending much lower than the original F1 score once the classes were balanced. This may be because these balance algorithms synthesized a large number of noisy samples, which reduced the precision of the classification model for detecting fraudulent samples. The F1 scores of GAN and MFGAN presented a wave-like shape, rising first and then falling, but they were generally better than the original F1 score. As shown in Fig. 3C, the recall rate of the classification models was effectively improved when the training set was preprocessed by a balance method; SMOTE and BSMOTE improved recall rates less than MFGAN and GAN did. In terms of precision, as shown in Fig. 3D, SMOTE and BSMOTE first fluctuated slightly and then dropped sharply after N_g = 2654. The stability of MFGAN was the best: after a significant decline in the initial phase, its precision stabilized and began to increase gradually in the last phase. All four resampling methods reduce the precision metric, but to different degrees. This may be because the classification model shifted toward fraud samples when synthetic fraud samples were added, increasing the rate at which legitimate transactions were misclassified as fraudulent. It can be seen from Fig. 3 that the comprehensive performance of the MFGAN model was better than that of the other three models, and the best value was achieved when N_g was set to 1/8 of N_t.

    Figure 3: Performance with different numbers of generated samples: AUC (A), F1 (B), recall (C), and precision (D)

    5.3 Ablation Study

    This research analyzed the contribution of each component of the MFGAN model by performing ablation experiments on the UCI dataset and the private dataset. "(w/o) FNN" indicates that the FNN model was removed from the multi-feature fusion module; that is, the raw data of cardholders' static features was input directly into the classification model without fusion. "(w/o) Bi-LSTM for transactions" indicates that the Bi-LSTM model used to fuse the transaction behavior data of cardholders was removed from the multi-feature fusion module; that is, the raw transaction behavior data was fed directly into the classification model. "(w/o) Bi-LSTM for operations" means that the operation behavior data of cardholders was not fused; this part of the raw data was not input into the classification model because it was too large and lacked labels. "(w/o) balance module" specifies that the balance module was removed from the MFGAN model, leaving the imbalanced training set to be input directly into the classification model.

    Table 5 shows the results of the ablation experiments. Because the UCI dataset does not contain any information about cardholders' operation behavior, "–" is used to indicate the result of this item in the ablation experiment. The results show that the balance module substantially improved the performance of the classification model. For example, in terms of recall rate, the model with the balance module improved by 11.41% and 7.18% on the UCI and the private datasets, respectively. In addition, fusing the three different types of feature data also improved the model's performance to varying degrees, demonstrating that the multi-feature fusion module can help the model extract advanced hidden features of cardholders from different data sources. For example, when the operation behavior data was integrated into the input, the model's recall rate, precision, F1 score, and AUC on the private dataset improved by 3.59%, 0.91%, 1.53%, and 1.27%, respectively.

    Table 5: Results of the ablation experiments (%)

    6 Conclusion

    This paper proposed a novel credit card fraud detection model called MFGAN. The MFGAN model is made up of two modules: a fusion module for extracting advanced hidden features from multi-source heterogeneous data and a balance module for alleviating the problem of class imbalance. In the multi-feature fusion module, three neural network models were built to process the static and dynamic features of cardholders. In addition, the operation behavior data of cardholders in different financial service channels was also utilized in this study, which is rarely addressed in other recent CCFD literature for reasons such as user privacy protection and a lack of data. In the balance module, a resampling algorithm based on the generative adversarial network and BSMOTE was proposed. Compared to state-of-the-art resampling methods, the proposed algorithm synthesizes better minority samples and helps the classification model improve its performance. Lastly, experiments were conducted on two real-world credit card datasets, and how the performance changes with the number of synthesized minority samples was also investigated. The experimental results showed that, compared to state-of-the-art models, the MFGAN model achieved a higher AUC and recall rate without reducing the F1 score, which demonstrates that the MFGAN model is feasible and effective.

    There are some limitations to this work. For example, the fusion module diminishes the interpretability of the model, and the model does not take into account the influence of concept drift [50], such as changes in fraudulent behavior over time. Therefore, the MFGAN model needs some targeted improvements before it can be put into production as a bank's anti-fraud system. In the future, an adaptive concept drift module will be designed to make the proposed approach more stable when fraud behavior changes.

    The main contributions of this study are as follows: First, a multi-feature fusion method is proposed to address the issue of insufficient feature extraction and representation in CCFD. Second, an upsampling method based on GAN and BSMOTE is proposed as a solution to the class imbalance problem. Third, the effectiveness of the proposed model is validated using two real-world credit card datasets.

    Acknowledgement: The authors wish to acknowledge the contribution of the Basic Software Engineering Research Center of NUDT.

    Funding Statement: This work was partially supported by the National Key R&D Program of China (Nos. 2022YFB3104103 and 2019QY1406) and the National Natural Science Foundation of China (Nos. 61732022, 61732004, 61672020, and 62072131).

    Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Yalong Xie, Aiping Li; experiments and interpretation of results: Yalong Xie, Liqun Gao, Hongkui Tu; draft manuscript preparation: Yalong Xie, Biyin Hu. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The data and materials used to support the findings of this study are available from the corresponding author upon request.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
