
    Dealing with Imbalanced Dataset Leveraging Boundary Samples Discovered by Support Vector Data Description

Computers, Materials & Continua, 2021, No. 3

Zhengbo Luo, Hamid Parvin, Harish Garg, Sultan Noman Qasem, Kim-Hung Pho and Zulkefli Mansor

1Graduate School of Information, Production and Systems, Waseda University, Tokyo, Japan

2Institute of Research and Development, Duy Tan University, Da Nang, 550000, Vietnam

3Faculty of Information Technology, Duy Tan University, Da Nang, 550000, Vietnam

4Department of Computer Science, Nourabad Mamasani Branch, Islamic Azad University, Mamasani, Iran

5School of Mathematics, Thapar Institute of Engineering and Technology, Deemed University, Patiala, Punjab, 147004, India

6Computer Science Department, College of Computer and Information Sciences, Al Imam Mohammad Ibn Saud Islamic University, Riyadh, Saudi Arabia

7Computer Science Department, Faculty of Applied Science, Taiz University, Taiz, Yemen

8Fractional Calculus, Optimization and Algebra Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang University, Ho Chi Minh City, Vietnam

9Fakulti Teknologi dan Sains Maklumat, Universiti Kebangsaan Malaysia, Selangor, Malaysia

Abstract: These days, imbalanced datasets, denoted throughout the paper by ID (datasets that contain some classes, usually two, where one class has considerably fewer samples than the other(s)), emerge in many real-world problems, such as health care and disease diagnosis systems, anomaly detection, fraud detection, and stream-based malware detection systems. These datasets cause problems in classification, including under-training of the minority class(es), over-training of the majority class(es), and a bias towards the majority class(es). They have therefore attracted the attention of many researchers across the sciences, and several solutions have been proposed for dealing with them. The main aim of this study is to resample the borderline samples discovered by Support Vector Data Description (SVDD). There are two kinds of resampling: under-sampling (U-S) and over-sampling (O-S). The main drawback of O-S is that it may cause over-fitting; the main drawback of U-S is that it may cause significant information loss. To avoid these drawbacks, we focus on the samples that are most likely to be misclassified, namely the borderline data points that lie on the border(s) between the majority class(es) and the minority class(es). First, SVDD is used to find the borderline examples; then, data resampling is applied to them. In the next step, the base classifier is trained on the newly created dataset. Finally, we compare the results of our method with other state-of-the-art methods in terms of Area Under Curve (AUC), F-measure and G-mean, and show that our method achieves better results in our experimental study.

Keywords: Imbalanced learning; classification; borderline examples

    1 Introduction

Data mining is a sub-field of artificial intelligence [1-10]. It has wide applications in the classification and clustering of data in real-world problems [11-20]. Nowadays, different classifiers with different underlying assumptions and mechanisms have gradually been proposed in order to enhance classification accuracy [21-32]. One of the most challenging problems for classifiers is learning from an Imbalanced Dataset (ID). A dataset is considered imbalanced if it contains at least two classes where the number of data points in one class (the majority class) overshadows the number of data points in the other class (the minority class). Ordinary supervised learning algorithms are weak at learning ID problems; they are inclined towards the majority class [33]. Ignoring the minority class is not tolerable in many problems, such as medical ones [34], financial risk assessment [35], etc. To tackle the challenges of IDs, different methods have been proposed, which are divided into two categories: (a) external approaches and (b) internal approaches. The methods of the first category try to balance the distribution of the class data points. The methods of the second category try to manipulate machine learning algorithms so that they can handle IDs. The current research, as an approach of the first type, tries to focus the resampling on the data points that are error-prone. To do this, we use an auxiliary set of boundary data points discovered by Support Vector Data Description (SVDD).

Base classifiers perform poorly when dealing with IDs; therefore, learning a given ID is considered a great challenge. Standard base classifiers poorly diagnose the minority class samples. Several approaches have been established for dealing with the class imbalance problem in IDs and for improving generalization in classification. We can categorize them into two general classes [36]: (1) approaches that solve the problem at the algorithm level, and (2) approaches that solve it at the data level. Those in the first class address the ID learning problem by adjusting existing machine learning methods so that they learn better in the imbalanced setting. The approaches in the second class address it by manipulating the training data (minority class(es) and/or majority class(es)) so as to make the dataset balanced. This is generally done through over-sampling (O-S), under-sampling (U-S), or a hybrid of the two. O-S increases the minority class size, whereas U-S decreases the majority class size [36]. It is widely accepted that U-S is the better solution to the ID learning problem [37].

Nevertheless, most of these techniques neglect the effect of borderline samples on classification performance; these high-impact borderline samples are the ones most exposed to misclassification. In this paper, a new framework is introduced to deal with the ID learning problem. The performance of our framework is evaluated and compared with other state-of-the-art systems. A number of experiments have been performed on benchmark datasets with different imbalance ratios. The results obtained by our framework, when compared with the state-of-the-art works, confirm its better performance across the different datasets and different base classifiers.

Many attempts have been made to alleviate the class imbalance problem. The Synthetic Minority Over-sampling TEchnique (SMOTE) [38,39] is an O-S approach developed to deal with ID learning by creating synthetic minority class samples; it resamples the minority class by synthesizing new samples from the existing minority instances. Several variants of SMOTE have been proposed to overcome its drawbacks, such as Borderline-SMOTE [39], which determines boundary minority class samples using neighbor information and then applies SMOTE to the border samples; Safe-level-SMOTE [40], which synthesizes minority samples according to a safe level computed from nearest-neighbor minority instances; MWMOTE [41], which generates samples from weighted informative samples using a clustering approach; K-means SMOTE [42]; and so on. Han et al. [39] proposed the Borderline-SMOTE algorithm, later modified by He et al. [43], to improve SMOTE, whose interpolation is inevitably random: the numbers of majority class instances and of border instances neighboring the minority class are compared, and O-S is then performed on the border samples of the minority class, so that interpolation is carried out in the proper area. They found that Borderline-SMOTE performs better than SMOTE. Nevertheless, since SMOTE creates artificial instances with the minority class label while ignoring the majority class instances during their creation, it is likely to cause class mixture and over-generalization [44]. In this paper, a new approach is proposed to address the ID problem; it is tested and assessed on different benchmarks and compared with many state-of-the-art approaches that have been introduced to deal with the ID learning problem.
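The interpolation step at the heart of SMOTE can be summarized in a short sketch. The snippet below, assuming a NumPy matrix of minority-class samples, is only an illustration of the SMOTE idea and not the authors' implementation; the function name and its parameters are hypothetical.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_sketch(X_min, n_synthetic, k=5, random_state=0):
    """Generate synthetic minority samples by interpolating between a chosen
    minority point and one of its k nearest minority neighbors (the SMOTE idea)."""
    rng = np.random.default_rng(random_state)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)   # +1 because each point is its own neighbor
    _, neighbor_idx = nn.kneighbors(X_min)
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))                       # pick a minority sample at random
        j = neighbor_idx[i][rng.integers(1, k + 1)]        # pick one of its k minority neighbors
        gap = rng.random()                                 # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Example: double a 20-sample minority class described by 3 features.
X_min = np.random.default_rng(1).normal(size=(20, 3))
X_new = smote_sketch(X_min, n_synthetic=20)
```

Borderline-SMOTE differs only in that the interpolation is restricted to minority samples judged to lie near the class boundary.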

In recent years, the machine learning community has paid much attention to imbalanced learning. Given the vast domain of real-world problems, attention to the imbalanced learning challenge grows every day; indeed, we face imbalanced learning in many real-world problems. For example, the analysis of high-resolution satellite images and healthcare recognition systems are two problems that involve imbalanced learning. A key point is that the minority class(es) is (are) the target class(es); due to its (their) insufficient samples, it is (they are) hardly distinguishable from the majority class(es) in imbalanced learning problems. For example, patients are hardly distinguishable from healthy individuals. The questions posed in the current study are: (a) "how is it possible to change a skewed class distribution into a balanced one?", and (b) "when is the proposed method superior to previous methods for learning IDs?"; the answers to these questions are provided in the following.

In the current era, IDs make up a great part of real-world datasets. In IDs, the majority class(es) outnumber(s) the minority class(es); nevertheless, correct classification of minority class samples is of high importance. For example, the detection of diabetic or Escherichia coli-infected patients can be considered an imbalanced learning problem. Diabetic patients belong to the minority class, which shows the superiority of the minority class over the majority class in terms of importance. For each new sample, there are four possibilities: (a) a diabetic patient is diagnosed as diabetic, (b) a diabetic patient is diagnosed as healthy, (c) a healthy person is diagnosed as healthy, and (d) a healthy person is diagnosed as diabetic. Accordingly, if a healthy person is diagnosed as diabetic, it is not a very bad outcome (at least not fatal); but if a diabetic patient is diagnosed as healthy, we face a misclassification that may threaten a human life.
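These four outcomes are exactly the cells of a binary confusion matrix. The hedged sketch below, with purely illustrative counts and a hypothetical cost matrix, shows why plain accuracy can look excellent while the costly error (the missed diabetic patients) dominates.

```python
import numpy as np

# Rows = actual class (diabetic, healthy); columns = predicted class (diabetic, healthy).
confusion = np.array([[  8,  12],    # 8 true positives, 12 missed diabetics (potentially fatal)
                      [  5, 975]])   # 5 false alarms,   975 true negatives
cost = np.array([[0, 50],            # hypothetical: missing a diabetic costs 50x
                 [1,  0]])           # more than a false alarm does

accuracy = np.trace(confusion) / confusion.sum()
total_cost = (confusion * cost).sum()
print(f"accuracy = {accuracy:.3f}")   # ~0.983, looks excellent
print(f"total cost = {total_cost}")   # 605, dominated by the 12 missed diabetics
```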

The paper is organized into five sections. Section 1 introduces the topic and the problem. Section 2 is dedicated to definitions and the literature. The proposed method is presented and explained in Section 3. Experimental results are presented in detail in Section 4. Finally, Section 5 concludes the paper and presents future research directions.

    2 Background

    2.1 Definitions

Imbalanced dataset: A dataset that has more data points in one or several of its classes than in the other class(es) is an ID. The more frequent class(es) are called the majority class(es) and the other(s) the minority class(es). Fig. 1a shows an arbitrary ID with one minority class and one majority class.

Figure 1: (a) An ID with one minority and one majority class; (b) Flowchart of the proposed approach

The drawback in learning IDs is that traditional classification algorithms are biased toward the majority classes (negative samples). Consequently, misclassification of samples in the minority classes (positive samples) becomes more likely. Recently, numerous solutions have been proposed to deal with this problem. The following paragraphs present the definitions needed to understand these methods.

Cost-sensitive learning techniques: This type of solution contains approaches at the data level, at the algorithmic level, or at both levels combined, which assign higher costs to the misclassification of examples of the positive class(es) (minority class(es)) than of the negative class(es) (majority class(es)). Most studies on the behavior of standard classifiers in ID domains have shown that the significant loss of performance is mainly due to the skewed class distribution, quantified by the imbalance ratio, defined as the ratio of the number of instances in the majority class to the number of examples in the minority class [45].
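As a concrete illustration, the snippet below computes the imbalance ratio just defined and uses it as a cost-sensitive class weight in a standard scikit-learn classifier. This is one common recipe, not the specific scheme of reference [45]; the toy data and the weighting choice are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def imbalance_ratio(y):
    """Imbalance ratio as defined above: size of the majority class / size of the minority class."""
    counts = np.bincount(y)
    return counts.max() / counts.min()

# Toy data: 90 negative (majority) and 10 positive (minority) samples with 3 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)

ir = imbalance_ratio(y)                                    # 9.0 for this toy dataset
# One common cost-sensitive recipe: penalize minority-class errors by the imbalance ratio.
clf = DecisionTreeClassifier(class_weight={0: 1.0, 1: ir}, random_state=0).fit(X, y)
```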

Data sampling: Here the training instances are modified so as to produce a more or less balanced class distribution, which allows a basic classifier to perform similarly to standard classification. O-S and U-S techniques are applied to the training data distribution, and both can be used for dealing with ID learning. Keep in mind that changing the training data distribution while keeping uniform misclassification costs leads to biased training; for example, if the class ratio in the training data is changed from 1:1 to 2:1, errors on the two classes are no longer treated equally. Sampling is preferred for several reasons: (a) first and most important, there is no need to adapt a cost-sensitive approach for every training algorithm, so a purely learning-based approach remains available; (b) many training datasets are skewed and very large, and their size has to be reduced to make learning practical; and (c) there is often no precise cost defined for each misclassification.

Over-sampling: O-S is a process that extracts a data superset from the original set of minority class(es); it resamples or generates new examples from the existing ones in the minority class(es).

Under-sampling: U-S is a process that extracts a data subset from the original set of majority class(es); it eliminates some of the examples in the majority class(es).
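A minimal sketch of these two definitions in their plainest (random) form is shown below; the imbalanced-learn package offers equivalent RandomOverSampler/RandomUnderSampler classes, but the NumPy version keeps the definitions explicit. Function names, target sizes, and the toy data are illustrative.

```python
import numpy as np

def random_over_sample(X_min, n_target, rng):
    """O-S: draw minority samples with replacement until n_target samples exist."""
    idx = rng.integers(0, len(X_min), size=n_target)
    return X_min[idx]

def random_under_sample(X_maj, n_target, rng):
    """U-S: keep a random subset of n_target majority samples (the rest are discarded)."""
    idx = rng.choice(len(X_maj), size=n_target, replace=False)
    return X_maj[idx]

rng = np.random.default_rng(42)
X_maj = rng.normal(0.0, 1.0, size=(90, 2))   # majority class
X_min = rng.normal(2.0, 1.0, size=(10, 2))   # minority class
X_min_os = random_over_sample(X_min, n_target=45, rng=rng)   # grow the minority class
X_maj_us = random_under_sample(X_maj, n_target=45, rng=rng)  # shrink the majority class
```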

Artificial O-S techniques: Artificial O-S techniques (like SMOTE) aim at increasing the number of data samples in the minority class(es) to counteract the effect of the low number of minority samples in an ID. In these methods, a set of synthetic data samples is produced from the minority class(es) and then added to the ID to balance it. By producing additional samples of the minority class(es), traditional base classifiers, such as decision trees, support vector machines and artificial neural networks, are able to improve their decision-making.

Ensemble methods: Ensemble classifiers are models composed of multiple classifiers. These methods aim at enhancing the performance of single-classifier models: they generate multiple classifiers and combine them into a new classifier that has the capacity of all the combined classifiers. The main idea is to build multiple classifiers from the original dataset and then aggregate their predictions when facing an unknown example. Ensemble methods for IDs combine ensemble learning algorithms with techniques similar to those employed by cost-sensitive methods. A complete categorization of ensemble methods for ID problems has recently been introduced. Some ensemble methods specifically proposed for ID problems are [46]: (a) EasyEnsemble, (b) BalanceCascade, (c) Bagging-based methods, (c.1) Over-Bagging (such as SMOTE Bagging), (c.2) Under-Bagging (such as Quasi-Bagging, Asymmetric Bagging, Roughly Balanced Bagging, and Bagging Ensemble Variation), (d) Boosting methods, (d.1) SMOTEBoost, (d.2) MSMOTEBoost, (d.3) DataBoost-IM, (e) cost-sensitive Boosting methods, (e.1) AdaCost, (e.2) CSB, (e.3) CSB2, (e.4) RareBoost, (e.5) AdaC1, (e.6) AdaC2, (e.7) AdaC3.

In the following, the proposed method, which is inspired by some of the mentioned methods and uses SVDD, is introduced.

    2.2 Related Work

Seiffert et al. [47] have proposed a combined method called RUSBoost to reduce class errors. If the training dataset is an ID, achieving an efficient classifier may be challenging. Their paper studies the performance of RUSBoost in comparison with its components, RUS and AdaBoost, and indicates that RUSBoost outperforms both in terms of classification accuracy. Additionally, RUSBoost is compared with another member of the same family, SMOTEBoost, and the results are comparable to those of SMOTEBoost. The study also reports the results for each base learner with no sampling or bagging. It shows that RUSBoost is a fast and simple algorithm that is far less complicated than SMOTEBoost, whose two major drawbacks are that it is complicated to implement and time-consuming; these drawbacks can be avoided by replacing SMOTE with RUS.

Hajizadeh et al. [48] have studied the nearest neighbor classifier with a locally weighted distance method (NNLWD). Their study aims at improving the performance of the nearest neighbor classifier on IDs without disturbing the original data distribution. The proposed approach performs well on the minority class(es) and acceptably on the majority class(es), classifying the samples of the different classes precisely. Each class is assigned a weight based on the class distribution, and the distance between a query example and an original example is scaled according to the weight of the original example. In this way, the examples with lower weights that are nearest neighbors of a new query example get a greater chance of being selected. The weighting, which improves the performance of the nearest neighbor method, is based on the G-mean. Overall, the study showed that O-S of the minority class(es) and U-S of the majority class(es) are useful for dealing with IDs, and also that overuse of these two techniques leads to complications, including the loss of important information and over-fitting.

Weiss et al. [45] have compared cost-sensitive and sampling methods for dealing with IDs. The performance of a classification algorithm is evaluated on two-class problems (problems with only two classes, true or false). In their setting, the metric to be optimized is the total misclassification cost, and the total cost is the only metric considered.

Chawla et al. [38] have studied the AdaBoost algorithm for solving ID problems. The synthetic minority O-S technique was specifically designed to solve imbalanced learning problems, and in their study it was incorporated into boosting, yielding SMOTEBoost. Contrary to standard boosting, which assigns equal weights to all misclassified examples, SMOTEBoost generates synthetic minority examples and thereby directly changes the updated weights, finally adjusting the skewed class distribution. In their method, synthetic minority examples are generated by operating in feature space, and after more synthetic minority examples have been generated, classification algorithms such as decision trees are applied. The study handles both continuous and discrete features: to find minority nearest neighbors, Euclidean distance is used for continuous features and absolute-value distance for discrete features. Their algorithm [38] successfully combines the benefits of boosting and SMOTE; in summary, "while the boosting algorithm enhances the prediction accuracy of classifiers by focusing on the complicated examples of all classes, SMOTE enhances the performance of the classifier on minority examples".

Liu et al. [49] have studied the usability of decision trees for imbalanced learning problems and introduced a new decision tree; their relative-certainty decision tree enhances classifier performance. To produce a well-defined decision tree, the study started with data collection, and C4.5 was used for measurement; this led to an explanation of why the resulting trees skew toward the majority class. To remove this bias, a variable named CCP (Class Confidence Proportion) was introduced, which became the basis for CCPDT. To develop statistically meaningful rules, a set of methods was derived from bottom-up and top-down approaches, using Fisher's exact test to prune statistically meaningless branches. With their method, the statistical performance of the classifier improves and the trees behave as if they faced balanced datasets. The study analyzes, geometrically and theoretically, how CCP behaves with respect to the class distribution; accordingly, CCP is embedded so that the decision tree uses the optimized splitting variables.

Chawla [50] has studied IDs, sampling alternatives and decision trees. He considers a dataset imbalanced if its classes are represented unequally. A question posed in this study is: what is the proper class distribution, given that datasets come with different distributions? Observations show that the normal data distribution is mostly the optimal distribution for a classifier learning algorithm. Additionally, IDs lead to greater dispersion in feature space, so O-S and U-S alone may lose their usefulness; accordingly, this study frequently uses O-S and U-S together with synthetic minority sampling. C4.5 is used with the three sampling methods, and the experimental analysis evaluates the structural effects of the estimation and sampling methods on the Area Under Curve (AUC).

SVDD [51] fits a spherical boundary around the dataset. Like SVM, SVDD can use flexible kernel functions. Generally speaking, describing the data distribution has several advantages. First, it helps eliminate irrelevant and poorly defined data. Second, it is useful for classifying datasets in which one class is well sampled and another is poorly sampled. A further advantage is the ability to compare datasets: imagine a dataset has been obtained after multiple expensive stages have been completed; if a new dataset arrives for a similar process, the two datasets can be compared, and if the old and the new datasets are similar, retraining can be skipped, but if they differ, a new training dataset is required.
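scikit-learn does not ship an SVDD class, but with a Gaussian (RBF) kernel the SVDD boundary is known to coincide with that of the one-class SVM, so OneClassSVM can serve as a hedged stand-in for a spherical data description. The gamma, nu, and closeness fraction below are illustrative assumptions, not values from reference [51].

```python
import numpy as np
from sklearn.svm import OneClassSVM

# With a Gaussian (RBF) kernel the SVDD boundary coincides with the one-class SVM boundary,
# so OneClassSVM is used here as a stand-in for a spherical data description.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))

svdd_like = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.1).fit(X)
scores = svdd_like.decision_function(X)        # signed distance to the learned boundary
near_boundary = np.abs(scores) < np.quantile(np.abs(scores), 0.2)   # the 20% closest samples
print("samples near the description boundary:", near_boundary.sum())
```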

Another work [52] has proposed to partition the majority examples into x non-overlapping subsets, with x ≈ N/P, where N is the number of negative (majority) samples and P is the number of positive (minority) samples. To each partition of the negative class, all minority class samples are added, and an AdaBoost classifier is trained on the resulting set. Finally, the results obtained over all x datasets are combined.

Balanced Random Forest [53], abbreviated BRF, differs from Random Forest in that each tree is grown on a balanced bootstrap sample. It also differs from under-sampling followed by random forest, which pre-processes the training dataset once and then applies a random forest.

ASYMBoost [54] is a cost-sensitive AdaBoost algorithm. In this algorithm, the imbalance ratio k = N/P is defined, where N is the number of majority examples and P is the number of minority examples. At the i-th of the T boosting iterations, the positive example weights are multiplied by a factor of the form k^(1/T), so that the full asymmetry is spread evenly across the boosting rounds. In ASYMBoost, the whole dataset is used as input.

Liu et al. [55] have conducted a study on exploratory U-S for class-imbalanced training. U-S is a popular and efficient approach to ID problems because it works on subsets of the majority class. Their study proposes two methods. The first, known as EasyEnsemble, derives multiple majority subsets, assigns a training algorithm to each of them, and then combines the results. The second, known as BalanceCascade, trains the learners sequentially; majority examples that are well classified at one stage are removed from the dataset before the next classification stage.
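The EasyEnsemble idea just described can be sketched as follows, assuming two NumPy arrays of majority and minority samples; the number of subsets and the probability-averaging combination rule are illustrative assumptions rather than the settings of reference [55].

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def easy_ensemble_sketch(X_maj, X_min, n_subsets=4, random_state=0):
    """EasyEnsemble idea: one AdaBoost per balanced set (random majority subset + all minority)."""
    rng = np.random.default_rng(random_state)
    members = []
    for _ in range(n_subsets):
        idx = rng.choice(len(X_maj), size=len(X_min), replace=False)   # U-S of the majority class
        X = np.vstack([X_maj[idx], X_min])
        y = np.array([0] * len(X_min) + [1] * len(X_min))
        members.append(AdaBoostClassifier(n_estimators=50).fit(X, y))
    return members

def combine_predictions(members, X):
    """Average the members' minority-class probabilities (one simple combination rule)."""
    probs = np.mean([m.predict_proba(X)[:, 1] for m in members], axis=0)
    return (probs >= 0.5).astype(int)
```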

The family of SPIDER methods [56] has been proposed to address cost-sensitivity; to this end, cleaning stages applied to the majority class are combined with resampling of the minority class.

In 2015, researchers proposed a new method, KernelADASYN, to deal with ID problems [57]. It introduces an adaptive, kernel-based synthetic sampling scheme for IDs: an adaptive synthetic distribution is built for the minority class, estimated with a kernel density estimate and weighted by a difficulty degree, and a probability density function (PDF) is used to estimate the likelihood density. Since then, several other powerful classification methods have been proposed [58].

In [59], a new synthetic classification method, called ISEOMs, has been proposed for ID problems. It modifies SOM-based learning by searching for the winner neuron based on an energy function and by minimizing the local error at the competitive learning stage. The method enhances classifier performance by extracting knowledge from the minority class. Positive and negative examples in the training phase correspond to the minority and majority classes, respectively, and a positive SOM is developed based on the original minority class.

In [60], researchers proposed a new method to design a balanced classifier on imbalanced training data based on margin distribution theory. The Large margin Distribution Machine (LDM) had recently been put forward and had obtained superior classification performance compared with the Support Vector Machine (SVM) and many state-of-the-art methods. However, one deficiency of LDM is that it easily leads to a lower detection rate for the minority class than for the majority class on IDs, which contradicts the need for a high minority-class detection rate in real applications. In that paper, the Cost-Sensitive Large margin Distribution Machine (CS-LDM) was brought forward to improve the detection rate of the minority class by introducing a cost-sensitive margin mean and a cost-sensitive penalty.

In [61], the performance of a novel method, Parallel Selective Sampling (PSS), has been assessed. PSS is able to select data from the majority class to reduce imbalance in large datasets, and it was combined with Support Vector Machine (SVM) classification. PSS-SVM showed excellent performance on synthetic datasets, much better than SVM. Moreover, on real datasets PSS-SVM classifiers performed slightly better than SVM and RUSBoost classifiers, with reduced processing times. Their strategy was conceived and designed for parallel and distributed computing. In conclusion, PSS-SVM is a valuable alternative to SVM and RUSBoost for the classification of huge and imbalanced data, due to its accurate statistical predictions and low computational complexity.

In [62], researchers proposed a feature learning method based on the autoencoder to learn a set of features with better classification capability for both the minority and the majority classes, in order to address imbalanced classification problems. Two sets of features are learned by two stacked autoencoders with different activation functions to capture different characteristics of the data, and they are combined to form the Dual Autoencoding Features. Samples are then classified in the new feature space learned in this manner instead of the original input space.

In [63], the authors described preprocessing, cost-sensitive learning and ensemble techniques, carrying out an experimental study to contrast these approaches in intra- and inter-family comparisons. They carried out a thorough discussion of the main issues related to using data-intrinsic characteristics in this classification problem, which helped them improve the models with respect to: the presence of small disjuncts, the lack of density in the training data, the overlapping between classes, the identification of noisy data, the significance of the borderline instances, and the dataset shift between the training and test distributions. Finally, they introduced several approaches and recommendations to address these problems in conjunction with IDs, and showed experimental examples of the behavior of learning algorithms on data with such intrinsic characteristics.

A geometric structural ensemble (GSE) has been introduced [64]. GSE partitions the instances of the majority class and then eliminates useless instances by constructing a hypersphere using the Euclidean criterion. By repeating this task, simple models are created.

    3 Proposed Method

As the previous sections show, classification algorithms well tuned for IDs outperform conventional classification methods. The current study introduces a new method, well tuned for IDs, that is based both on the O-S concept (like SMOTE) and on the U-S concept (like RUS). It uses SVDD to find the borderline (error-prone) data samples and then applies a hybrid O-S and U-S mechanism to them. In this way, the study takes a different route to ID classification.

SMOTE, RUS, SVDD-based borderline finding, and classifiers including RF, IBK and AdaBoost are used in the proposed method. The method pursues the predefined goal of the desired classification accuracy, and the final results are significantly improved. Before the complete description of the proposed method, the three classification frameworks used in it are briefly introduced.

Random Forest (RF) [53] is based on the concept of a random decision forest. RF is an ensemble learning method for classification that builds a number of decision trees during its training phase; the output determines the class label of the test instance. In effect, RF mitigates the over-fitting of a single decision tree to the training dataset. AdaBoost [65] can be used in combination with other learning algorithms to enhance their performance: the outputs of simple (weak) learners are combined through weights to provide a powerful composite output. AdaBoost is called adaptive because subsequent weak learners focus on the instances misclassified so far; it is sensitive to noise and irrelevant data. IBK is a k-nearest-neighbors classifier that uses a distance measure. The number K of nearest neighbors (the default is K = 1) can be specified explicitly, predictions from more than one neighbor can be weighted according to their distances from the test example (the algorithm offers two relations for converting distance into weight), and the number of training examples kept by the classifier can be limited.

Generally speaking, every dataset has a data distribution description, i.e., the location of the dataset examples in feature space according to the features of each example. The current study starts by dividing the data samples into healthy and unhealthy groups through a classification task: healthy samples are those that are classified correctly and unhealthy ones are those that are classified wrongly. In most datasets, the misclassified (or wrongly dropped) samples are located near or on the borderline between the classes. This study finds these borderline samples using SVDD. After the borderline samples have been identified, they are resampled to obtain a new, balanced dataset; finally, well-known classifiers are used to classify this new balanced dataset. Keep in mind that the proposed method uses 80% of the data as training data and 20% as test data.

This section introduces SVDD and the resampling methods along with our solution to ID classification. As mentioned before, the present method finds the borderline samples using SVDD. SVDD receives a dataset as input and determines the kernel via the kernel matrix. The next step finds the radius R of the data description boundary; then, the distance of each sample from the center of the description is computed, and the samples close to R are the borderline samples. Borderline samples are divided into two groups: positive samples (samples whose class value equals 1) and negative samples (samples whose class value equals 0). Because of the characteristics of IDs, negative samples outnumber positive ones, so the positive borderline samples undergo O-S and the negative ones undergo U-S in order to balance the dataset. The balanced dataset forms a new dataset, which is then classified. The pseudo-codes of these algorithms are described below. In SVDD, a constant named sigma is required as the width parameter of the radial basis function kernel; after assigning different values to this parameter, the optimized value sigma = 23 was obtained. The proposed approach is summarized in Fig. 1b. The SMOTE pseudo-code is presented in Fig. 3; SMOTE itself was described in Section 2. RUS and SVDD are also shown in Fig. 3, together with the proposed algorithm composed of the three methods above.
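To make the pipeline above concrete, the following hedged sketch wires the steps together: an RBF-kernel OneClassSVM stands in for SVDD (with a Gaussian kernel their boundaries coincide), the borderline minority samples are over-sampled by simple duplication, the borderline majority samples are randomly under-sampled, and a Random Forest is trained on the rebalanced data after an 80/20 split. The fraction of samples treated as borderline, the resampling rates, and the use of gamma="scale" instead of the paper's tuned sigma = 23 are all illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def borderline_mask(X, nu=0.1, frac=0.2):
    """Flag the fraction of samples closest to the data-description boundary
    (OneClassSVM with an RBF kernel stands in for SVDD; nu and frac are illustrative)."""
    svdd_like = OneClassSVM(kernel="rbf", gamma="scale", nu=nu).fit(X)
    dist = np.abs(svdd_like.decision_function(X))
    return dist <= np.quantile(dist, frac)

def rebalance_borderline(X, y, rng):
    """O-S the borderline minority samples and U-S the borderline majority samples."""
    border = borderline_mask(X)
    keep = ~border                                         # non-borderline samples stay untouched
    Xb_min = X[border & (y == 1)]
    Xb_maj = X[border & (y == 0)]
    maj_keep = rng.permutation(len(Xb_maj))[: len(Xb_maj) // 2]   # U-S: drop half (illustrative rate)
    # O-S: duplicate borderline minority samples once; SMOTE-style interpolation could be used instead.
    X_new = np.vstack([X[keep], Xb_maj[maj_keep], Xb_min, Xb_min])
    y_new = np.concatenate([y[keep], np.zeros(len(maj_keep)), np.ones(2 * len(Xb_min))])
    return X_new, y_new.astype(int)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (180, 2)), rng.normal(1.5, 1.0, (20, 2))])
y = np.array([0] * 180 + [1] * 20)
# 80/20 train/test split, as used in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
X_bal, y_bal = rebalance_borderline(X_tr, y_tr, rng)
clf = RandomForestClassifier(random_state=0).fit(X_bal, y_bal)
print("test accuracy:", clf.score(X_te, y_te))
```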

Figure 3: Pseudo-codes of SMOTE, RUS and SVDD

    4 Experimental Study

There are different ways to evaluate classification quality; the current study uses AUC, F-measure and G-mean. Classification accuracy ranges from 0 to 1 and indicates whether data points are classified correctly or not. Most classifiers express their uncertainty through roughly estimated scores, so a threshold has to be defined to compute accuracy; the usual threshold is 0.5. Assume a classifier that can provide correct answers for all questions, and assume that a threshold of 0.7 yields 100 correct answers for negative samples while a threshold of 0.9 yields 100 correct answers for positive ones. Under these conditions, a threshold of 0.8 is optimal for neither class on its own, yet 0.8 can still be a good compromise value. Keep in mind that AUC considers all possible thresholds: different threshold values result in different true-positive and false-positive rates (the lower the threshold, the higher both rates). Different definitions of "area under a curve" exist; in this study AUC is not computed from the full ROC curve but from the single-point estimate AUC = (1 + TPrate − FPrate)/2, where TPrate is the true-positive rate and FPrate is the false-positive rate. For IDs, AUC is a more informative measure of quality than raw accuracy. F-measure combines the precision and recall used in information retrieval, F-measure = 2 × Precision × Recall / (Precision + Recall); clearly, a greater F-measure indicates higher classification quality. The geometric mean (G-mean) finds the central tendency of a set of values through their product; one advantage of the G-mean is that a high value requires both of its components to be high simultaneously. It is computed as G-mean = sqrt(TPrate × TNrate), where TPrate is the true-positive rate and TNrate is the true-negative rate. To complete the comparison between results on different datasets, the time needed to run the various algorithms on the various datasets is also summarized in a table. Among the available statistical tests, the paired k-hold-out t-test has been chosen [66]: the experimental t statistic is computed and compared with the critical t value at the 0.05 significance level, and if the computed value exceeds the critical value, the difference is considered significant.
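A short sketch of how the three reported measures and the paired t-test can be computed with scikit-learn and SciPy is given below; the labels, scores and per-run values are illustrative, and the paper's exact k-hold-out protocol may differ.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

def g_mean(y_true, y_pred):
    """G-mean = sqrt(TPrate * TNrate), as defined above."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return np.sqrt((tp / (tp + fn)) * (tn / (tn + fp)))

y_true  = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.10, 0.20, 0.15, 0.30, 0.60, 0.40, 0.55, 0.80, 0.70, 0.35])
y_pred  = (y_score >= 0.5).astype(int)                  # a single decision threshold of 0.5

print("AUC       :", roc_auc_score(y_true, y_score))    # threshold-free ranking quality
print("F-measure :", f1_score(y_true, y_pred))
print("G-mean    :", g_mean(y_true, y_pred))

# Paired t-test on per-run scores of two methods (numbers purely illustrative).
scores_a = np.array([0.82, 0.85, 0.80, 0.84, 0.83])
scores_b = np.array([0.78, 0.80, 0.79, 0.81, 0.77])
t_stat, p_value = stats.ttest_rel(scores_a, scores_b)
print("paired t-test p =", p_value)                      # compare against the 0.05 level
```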

    4.1 Experiments and Analysis

This section evaluates the results of the proposed classification algorithm. The results of the proposed and the previous state-of-the-art algorithms are compared in terms of F-measure, G-mean and AUC. In the current study, some of the datasets frequently used for ID problems are examined, including Pima, Abalone, Haberman, Housing, Phoneme, SatImage and Ionosphere; they have been used in previous studies. These datasets are taken from the UCI repository [67] and their details are given in Tab. 1.

    Table 1: Datasets and their detailed features

In the following, the proposed method is compared with the previous state-of-the-art methods. Fig. 4 provides the results of the proposed method on the evaluation measures in comparison with the other methods. The compared methods are Bagging [68], AdaBoost [65], SMOTE [39], Borderline-SMOTE [39], KernelADASYN [57], RF [53], BRF [53], Under-RF [53], Over-RF [53], Asym [55], Easy [56,69] and Cascade [56,69]. Split-balancing and cluster-balancing [70] are compared in three different classification models. The Borderline-SMOTE variant used in this paper is the one its authors call "borsmote1", and the sampling is done so as to balance both classes equally.

    Figure 4: The results of the proposed method in comparison with other methods

According to Fig. 4, the proposed method is superior to the state-of-the-art methods on the Ionosphere, Abalone and Haberman benchmarks in terms of F-measure, G-mean and AUC, but it fails to outperform some of the state-of-the-art methods on the Housing, Pima, Phoneme and SatImage benchmarks in terms of AUC. Tab. 2 shows the results of 100 runs of the proposed algorithm on the datasets given in Tab. 1 in terms of F-measure, G-mean and AUC.

Tab. 2 summarizes the F-measure, G-mean and AUC obtained after 100 runs of the various algorithms; the proposed SVDD-based method provides the best mean values. Tab. 3 reports the mean AUC obtained via split-balancing and cluster-balancing, summarizing the results of each method with RF, SMO and IBK as base classifiers.

According to Tab. 3, the proposed method provides the best performance for all base classifiers, namely RF, SMO and IBK. Tab. 4 reports the results of t-tests comparing the proposed method with the other methods on the various datasets. Let w denote the number of methods that a given method significantly outperforms and l the number of methods that significantly outperform it; each number in Tab. 4 is the corresponding (w − l).

Table 2: The results averaged over 100 independent runs of different methods (the results are also averaged over all benchmark datasets)

    Table 3: AUC mean values obtained via Split-balancing and Cluster-balancing and its comparison to other methods (the results are also averaged over all benchmark datasets)

Tab. 4 shows the relationship between methods and datasets; clearly, the most significant advantage across datasets is obtained by the proposed method. Tab. 5 shows the time required to run the algorithms, averaged over all datasets, in comparison with the other methods; the proposed algorithm takes longer than the other state-of-the-art methods because it preprocesses the datasets several times. Tab. 5 also summarizes the mean values of F-measure, G-mean and AUC.

Tab. 6 shows the time required to run the proposed algorithm in comparison with the other state-of-the-art methods on each dataset listed in Tab. 1.

    Table 4: Summary results of t-test on the proposed method in comparison with other state-of-the-art methods

    Table 5: Mean values for F-Measure, AUC,G-measure, and consumed time

Table 5 (continued).

Methods            Average F-measure   Average AUC   Average G-measure   Average Time
KernelADASYN       61.11               85.41         71.96               4.469
Cascade            62.34               85.35         72.97               7.824
Easy               67.60               86.11         74.12               5.316
RF                 57.35               86.11         64.55               3.435
BRF                63.34               85.97         73.41               4.367
Under-RF           61.19               85.77         73.30               3.530
Over-RF            60.28               85.94         67.50               4.822
Proposed method    86.05               87.87         84.50               4.068

    Table 6: The time required to run the algorithms on each mentioned dataset

    5 Conclusions and Future Work

Data mining is frequently used in various scientific fields and has developed considerably in recent years. Classification is one of the tasks of data mining. Nowadays, an obstacle faced by classification algorithms is IDs: simple classification algorithms are not applicable when the dataset contains at least two classes, one with very many samples (also called the majority class) and one with only a few samples (also called the minority class). Two common approaches widely used to tackle the ID problem are O-S and U-S. A shared disadvantage of all U-S methods is the elimination of useful samples, and a shared drawback of O-S methods is that they can cause over-fitting.

The solution proposed in the current study for these problems is borderline resampling. To accomplish this, the study focuses on the error-prone data samples (the samples that are highly likely to be misclassified); these samples are located on the borderline between the classes, and Support Vector Data Description (SVDD) is employed to find them.

Therefore, the primary aim is to find these borderline samples and apply O-S and U-S to them; finally, the new dataset can be classified using various traditional classification methods. The results are compared with previous ones to show that the current method is superior to the previous state-of-the-art ones: according to the experimental analysis, the proposed algorithm provides better values in terms of F-measure, G-mean and AUC. For future studies, it is recommended to run the proposed algorithm using KNN. The advantages of this method are its simplicity, efficacy and the cost-effectiveness of the learning process.

Funding Statement: This study is supported by grants to HAR and HP. HAR is supported by a UNSW Scientia Program Fellowship and is a member of the UNSW Graduate School of Biomedical Engineering.

Conflicts of Interest: The authors declare that they have no conflicts of interest.
