
    Improving Association Rules Accuracy in Noisy Domains Using Instance Reduction Techniques

    Computers, Materials & Continua, 2022, Issue 8

    Mousa Al-Akhras, Zainab Darwish, Samer Atawneh and Mohamed Habib

    1 College of Computing and Informatics, Saudi Electronic University, Riyadh, 11673, Saudi Arabia

    2 King Abdullah II School for Information Technology, The University of Jordan, Amman, 11942, Jordan

    3 Faculty of Engineering, Port Said University, Port Said, 42523, Egypt

    Abstract: Association rules' learning is a machine learning method used to find underlying associations in large datasets. Whether present intentionally or unintentionally, noise in training instances causes overfitting while building the classifier and negatively impacts classification accuracy. This paper applies instance reduction techniques to the datasets before mining the association rules and building the classifier. Instance reduction techniques were originally developed to reduce memory requirements in instance-based learning; this paper utilizes them to remove noise from the dataset before training the association rules classifier. Extensive experiments were conducted to assess the accuracy of association rules with different instance reduction techniques, namely Decremental Reduction Optimization Procedure (DROP) 3, DROP5, ALL K-Nearest Neighbors (ALLKNN), Edited Nearest Neighbor (ENN), and Repeated Edited Nearest Neighbor (RENN), at different noise ratios. Experiments show that instance reduction techniques substantially improved the average classification accuracy at three noise levels: 0%, 5%, and 10%. The RENN algorithm achieved the highest accuracy, with a significant improvement on seven out of the eight datasets used from the University of California Irvine (UCI) machine learning repository. The improvements were more apparent in the 5% and 10% noise cases. When RENN was applied, the average classification accuracy for the eight datasets in the zero-noise test improved from 70.47% to 76.65% compared to the original test. The average accuracy improved from 66.08% to 77.47% in the 5%-noise case and from 59.89% to 77.59% in the 10%-noise case. Higher confidence was also reported in building the association rules when RENN was used. These results indicate that RENN is a good solution for removing noise and avoiding overfitting during the construction of the association rules classifier, especially in noisy domains.

    Keywords: Association rules classification; instance reduction techniques; classification overfitting; noise; data cleansing

    1 Introduction

    Data mining deals with the discovery of interesting, previously unknown relationships in big data. It is the main technique used for knowledge discovery, i.e., the extraction of hidden and potentially remarkable knowledge and underlying relations from raw data [1]. The aim of data mining is to detect implicit and meaningful knowledge within raw data, which mainly serves the following functions: automatic prediction of trends and behavior, automatic mining of large databases to find predictive information, and replacing a large amount of manual analysis with conclusions drawn quickly and directly from the data itself [2]. Association rule mining is a knowledge discovery technique that discovers remarkable patterns in big datasets; this is considered a crucial task during data mining [3,4]. In machine learning, a program learns from training samples if it enhances its performance on a specified task with experience [5]. Learning techniques can be divided into two main groups [6]:

    a. Supervised Learning: The training dataset has input vectors and matching target value(s).

    b. Unsupervised Learning: The training dataset has input vectors but no output values related to them. Thus, the learning procedure determines distinct clusters or groups inside the datasets.

    For classical supervised machine learning cases, the number of training instances is usually large; it could be enormous for certain types of problems. Each training instance consists of a vector of attribute values that is highly likely to uniquely correspond to a specific target class. Association rules algorithms are applied to these training examples to discover frequent patterns later used to classify unseen instances. Improving the quality of mined association rules is a complicated task involving different methods such as prevention, process control, and post-evaluation, which utilize appropriate mechanisms. Users' active contribution in mining is the key to solving the problem [2]. The accuracy of machine learning algorithms is affected by the overfitting problem, which occurs due to closely inseparable classes or, more frequently, due to noisy data. Overfitting means that the precision of classifying the current training examples is high, whereas the precision of classifying unseen test examples is much lower [7].

    Fig. 1a illustrates the case when noisy instances, which cause overfitting, are present. The constructed decision boundary between class o and class x overfits the training data, but it does not generalize well, which may cause the misclassification of unseen instances. In Fig. 1b, border (possibly noisy) instances are eliminated, simplifying the search for a borderline that separates the two classes and making good generalization accuracy more likely. To reduce the effect of overfitting on accuracy, noisy data should be eliminated. If the training dataset is free from noise, removing duplicates and invalid data could improve the classification accuracy. However, most real-world data is not clean; error rates frequently range between 0.5% and 30%, with 1%-5% being very common [8].

    This study aims to avoid overfitting problems in association rules learning by using instance reduction techniques as noise filters. The investigated solution is a dual-goal approach: it uses instance reduction techniques, which were proposed for minimizing memory requirements in instance-based learning [9], to minimize overfitting without degrading the classification accuracy of the association rules, and as a result it reduces the complexity of the classifier. Implementing noise-filtering techniques aims to decrease the number of misclassified instances caused by noise. Previous preliminary work investigated the effect of applying some limited instance reduction techniques on chosen datasets, preceding association rules mining and classifier building with instance reduction as a pre-cleansing step, aiming to minimize the effect of noise on association rules-based classification [10]. The results showed a good improvement in classification accuracy after applying the ALL K-Nearest Neighbors (ALLKNN) algorithm, particularly at higher noise ratios. This study examines, through extensive empirical experiments, the effect of applying five instance reduction techniques, namely Decremental Reduction Optimization Procedure (DROP) 3, DROP5, ALLKNN, Edited Nearest Neighbor (ENN), and Repeated Edited Nearest Neighbor (RENN), on the accuracy of the association rules classifier. These techniques are covered in Section 2.4.

    Figure 1: Decision boundaries with and without noisy instances. a) overfitting due to noise, b) eliminating overfitting by eliminating noisy instances

    The rest of this paper is organized as follows: Section 2 presents an overview of association rules learning and its related performance concepts, describes the instance reduction techniques that will be utilized in the research, and discusses related literature. Section 3 covers the research methodology; it includes the description of the experiments, the performance metrics, the used datasets, and the experiment settings. The conducted experiments are described, and the obtained results are discussed, in Section 4 with illustrations and comparisons. Section 5 summarizes the results and presents avenues for future work.

    2 Literature Review

    This section gives an overview of association rules learning and its related performance concepts, then describes the utilized instance reduction techniques; related literature is also discussed.

    2.1 Association Rules

    The association rules classifier is a technique implemented to discover new knowledge from hidden relations among data items in a dataset. An association rule is presented as a relationship between two sides, A → B, where A and B each represent a variable or a group of variables. A is the antecedent, while B is the consequent. A commonly represents attributes describing a specific data record that govern the other part B, which represents the objective class (output). Association rules learning can be applied to:

    • Datasets with transactions on them: a collection of transaction records for specific data items, such as transactions on a supermarket's items.

    • Datasets with no transactions: such as medical records for patients.

    • Data with no timestamp that is persistent through time, such as DNA sequencing.

    Many association rules algorithms have been presented, for example, Apriori, Apriori Transaction Identifier (AprioriTID), Frequent Pattern (FP) Growth (FP-Growth), and others [11,12]. Apriori is the most widely used algorithm for mining association rules [11]. In Apriori, a rule-based classifier is built from the extracted association rules, which are mined from a large dataset. It counts the occurrences of each item combination, and combinations below a specific minimum threshold are excluded. Once those itemsets are excluded, the support for each subset of items is computed using Breadth-First Search and a Hash-Tree structure. Support is described in Section 2.2.

    FP-Growth is another algorithm for extracting association rules. Similarly, the support of each itemset is computed; individual items within an itemset are ordered in descending order according to their support values. Any itemsets below the minimum specified support are eliminated. The remaining itemsets are then used to build a Frequent Pattern Tree (FP-Tree) [13]. The FP-Tree is a prefix tree representing the transactions [13], where an individual path represents a group of transactions sharing similar prefix items. Each item is represented by a node, and all nodes referring to the same item are linked in a list. Iteratively, a new path is constructed for each unique transaction, and if it shares a common prefix itemset, nodes are added as needed. The processing order of examples does not matter in association rules learning, nor does the order of variables within an example.
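    The following is a minimal, illustrative sketch (not the WEKA implementation used in this paper) of the level-wise candidate generation shared by Apriori-style algorithms: itemsets are counted level by level, and any candidate with an infrequent subset is discarded. The function name and the toy transactions are hypothetical.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) enumeration of frequent itemsets.

    transactions: list of sets of items
    min_support: minimum fraction of transactions an itemset must appear in
    Returns a dict mapping frozenset -> support.
    """
    n = len(transactions)
    # Level 1: start from all single items.
    items = {i for t in transactions for i in t}
    current = {frozenset([i]) for i in items}
    result = {}
    k = 1
    while current:
        # Count how many transactions contain each candidate itemset.
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        frequent = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        result.update(frequent)
        # Join frequent k-itemsets into (k+1)-item candidates and prune any
        # candidate that has an infrequent subset (the Apriori property).
        k += 1
        keys = list(frequent)
        current = {a | b for a, b in combinations(keys, 2) if len(a | b) == k}
        current = {c for c in current
                   if all(frozenset(s) in frequent for s in combinations(c, k - 1))}
    return result

# Toy market-basket transactions (hypothetical data)
tx = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}, {"milk", "eggs"}]
print(frequent_itemsets(tx, min_support=0.5))
```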

    2.2 Support and Confidence

    The biggest problem in mining association rules from big datasets is the vast number of rules. The number of rules increases exponentially as the dataset size increases. Thus, most algorithms limit the discovered rules to specific quality measures, usually the support and confidence of each discovered rule. The following equation calculates the support for an itemset:

    $$\text{support}(I_0) = \frac{|\{\tau \in R : I_0 \subseteq \tau\}|}{|R|}$$

    where $I_0$ is an itemset from the set of items $I$, $R$ is the set of transactions on $I$, and $\tau$ (tau) is a transaction containing $I_0$ [1].

    The confidence of a generated association rule is computed using the following equation:

    $$\text{confidence}(A \rightarrow C) = \frac{\text{support}(A \cup C)}{\text{support}(A)}$$

    where A → C is the examined rule whose accuracy is being calculated [1]. Typical algorithms for association rules mining find all itemsets meeting the minimum specified support; association rules are then generated from them [14], often resulting in an excessively large number of very specific association rules or rule sets with low predictive power.
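    As a concrete illustration of the two measures above, the following sketch computes support and confidence directly from a list of transactions. The helper names and the toy transactions are hypothetical, chosen only to mirror the formulas.

```python
def support(itemset, transactions):
    """Fraction of transactions that contain every item in `itemset`."""
    itemset = frozenset(itemset)
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent ∪ consequent) / support(antecedent)."""
    a, c = frozenset(antecedent), frozenset(consequent)
    return support(a | c, transactions) / support(a, transactions)

# Toy transactions (hypothetical data)
tx = [{"milk", "bread"}, {"milk", "bread", "eggs"}, {"bread", "eggs"}, {"milk", "eggs"}]
print(support({"milk", "bread"}, tx))       # 0.5
print(confidence({"milk"}, {"bread"}, tx))  # 0.666...
```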

    2.3 Overfitting and Association Rules Pruning

    Overfitting the training data is considered the most serious problem in discovering association rules. Overfitting occurs when very high classification accuracy is obtained on the training examples while accuracy on unseen examples is much worse. Pruning is used to prevent the overfitting issue. Association rules pruning is divided into two main approaches [15]:

    a. Pre-Pruning: some of the candidate rules are terminated during generation. It is commonly applied when the algorithm implemented for generating association rules uses a decision tree as an intermediate form.

    b. Post-Pruning: some of the rules are eliminated after generating all rules. Two main post-pruning methods, both based on error rates, can be applied. The first approach divides the dataset into three parts: training, validation, and testing datasets. The training dataset is used to generate association rules, and pruning is then applied based on the rules' performance on the validation set by eliminating rules below a minimum stated threshold. The other method is pessimistic error pruning, where the training part is used for both training and validation, and the pessimistic error rate is calculated per rule. Rules that have a pessimistic error greater than that of the corresponding sub-rules are removed.
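    The validation-set variant described above can be sketched as follows. This is an illustrative simplification (rules are reduced to antecedent/class pairs and scored by their accuracy on a held-out split), not the pessimistic-error pruning built into WEKA; all names and the threshold are hypothetical.

```python
def prune_rules(rules, validation_set, min_accuracy=0.7):
    """Keep only rules whose accuracy on a held-out validation set
    meets the stated threshold.

    rules: list of (antecedent: frozenset, predicted_class) pairs
    validation_set: list of (itemset: frozenset, true_class) pairs
    """
    kept = []
    for antecedent, predicted_class in rules:
        # Validation instances the rule fires on (antecedent is a subset of the itemset).
        covered = [(items, label) for items, label in validation_set
                   if antecedent <= items]
        if not covered:
            continue  # the rule fires on no validation instance; drop it
        accuracy = sum(label == predicted_class for _, label in covered) / len(covered)
        if accuracy >= min_accuracy:
            kept.append((antecedent, predicted_class))
    return kept
```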

    2.4 Instance Reduction Algorithms

    For big datasets, excessive storage and time complexity are needed to process the large number of instances. Moreover, some of these instances may cause the classifier to be very specific and produce an overfitted classifier that yields unreliable classification results on unseen data. Instance reduction algorithms are mainly used to minimize the vast memory requirements of huge datasets [9]. Most instance reduction algorithms have been designed around and combined with the nearest neighbor classifier. As a lazy learning algorithm, it classifies unseen instances into the class to which the majority of their k nearest neighbors in the training set belong, based on a certain distance or similarity measure. Thus, reducing the training set reduces the computational complexity and alleviates the high storage requirement of this classifier [16].

    Instance reduction algorithms are categorized into two groups:

    • Incremental reduction algorithms: the process starts with an empty set, and in each iteration essential instances from the original dataset are added incrementally, e.g., Encoding Length (ELGrow) [17].

    • Decremental reduction algorithms: begin with the complete dataset, then progressively eliminate irrelevant instances, e.g., Decremental Reduction Optimization Procedures (DROPs) [9].

    In this research, instance reduction algorithms are used as filtering techniques to keep the nearest instances, based on a specified distance function, and discard other, distant instances that could cause overfitting.

    The following is a brief description of the instance reduction algorithms tested in this work:

    • ALLKNN algorithm [18]: ALLKNN extends ENN. The algorithm works as follows: for i = 1 to k, any instance that is not classified correctly by its i nearest neighbors is flagged as a bad instance. After completing the loop over all k values, instances flagged as bad are removed from S.

    • ENN algorithm [19]: a decremental reduction algorithm; it starts with the whole dataset S and then removes each instance that does not conform to the majority of its k nearest neighbors (with k = 3, typically) according to the applied distance function. This process smooths decision boundaries by removing noisy cases and close-border instances. The ENN algorithm has been used in various condensation methods as a pre-processing filter to exclude noisy instances [20].

    • RENN algorithm [19]: RENN applies ENN repeatedly until the majority of the remaining instances' neighbors have the same class, leaving clear classes with smooth decision boundaries.

    • DROP3 algorithm [19]: this algorithm starts with a noise-filtering pass similar to ENN. Any instance misclassified by its k nearest neighbors is removed from S. Then instances are sorted according to the distance between them and their nearest enemy (the nearest neighbor belonging to a different class). Instances with higher distances are removed first.

    • DROP5 algorithm [9]: the removal criterion for an instance is: "Remove instance p if at least as many of its associates in T would be classified correctly without p." The removal process starts with the instances that are nearest to their nearest enemy.

    The mentioned instance reduction algorithms keep the patterns that contribute most to pattern classification and remove the large number of interior patterns and all outlier patterns [20]. A minimal sketch of the ENN and RENN filters is given below.
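    The following sketch, using plain NumPy with Euclidean distance, illustrates the ENN and RENN filters described above. It is not the implementation used in the experiments, and the function names are hypothetical.

```python
import numpy as np

def enn_filter(X, y, k=3):
    """Edited Nearest Neighbor: drop every instance whose class disagrees
    with the majority class of its k nearest neighbours (Euclidean distance)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    keep = np.ones(len(X), dtype=bool)
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                          # exclude the instance itself
        neighbours = np.argsort(d)[:k]
        labels, counts = np.unique(y[neighbours], return_counts=True)
        if labels[np.argmax(counts)] != y[i]:  # disagrees with the neighbourhood majority
            keep[i] = False
    return X[keep], y[keep]

def renn_filter(X, y, k=3, max_iter=100):
    """Repeated ENN: re-apply ENN until no further instance is removed."""
    for _ in range(max_iter):
        n_before = len(X)
        X, y = enn_filter(X, y, k)
        if len(X) == n_before:
            break
    return X, y
```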

    2.5 Related Work

    Many of the algorithms for discovering association rules work on structured data. However, with today's widespread sensor-rich environments and the massive volume of flexible and extensible data (e.g., JavaScript Object Notation (JSON)), these algorithms are not designed for unstructured or semi-structured datasets. The work in [21] presented a data analytics method for data models in a tree-based format to support the discovery of positive and negative association rules. The method evaluates fragments of Extensible Markup Language (XML) based data to determine whether they could present informative negative rules, even if their support and confidence values did not meet the given thresholds. The work in this paper can also be applied to unstructured data; however, in this case, datasets have to be transformed using an information retrieval scheme to apply filtering techniques before generating association rules.

    Dong et al. [22] addressed the main problem in mining Positive and Negative Association Rules (PNAR), namely the huge number of discovered rules. Indeed, the number of negative rules discovered is more than three times that of the positive rules, which makes it very difficult for users to make decisions from these rules. A novel methodology was presented to prune redundant negative association rules by applying logical reasoning. They merged the correlation coefficient with multiple minimum confidences to ensure that the discovered PNARs are related; the proposed model supports controlling the number of all rule types and prunes weakly correlated rules.

    An analysis of the crucial risk factors and treatment mechanisms, based on an integrated Bayesian network followed by association rule algorithms, was conducted by Du et al. [23]. They applied their study to analyze methods to minimize the risk of postpartum hemorrhage after cesarean section. The probability of risk factors influencing the main causes of postpartum hemorrhage after cesarean section was computed by a Bayesian network model based on regression analysis. The discovered rules suggested solutions to the different causes of postpartum hemorrhage and offered recommendations for medical institutions to improve the efficiency of the various treatments.

    Yang et al. [24] focused on the process of finding and pruning time series association rules from sensor data. Regular association rules algorithms produce a huge number of rules, which makes them very hard to interpret or use; thus, they presented a two-step pruning approach to decrease the redundancy in a huge result set of time series rules. The first step targets determining rules that can correspond to other rules or carry more information than other rules. The second step summarizes the remaining rules using the bipartite graph association rules analysis method, which is appropriate for demonstrating the distribution of the rules and summarizing the interesting clusters of rules.

    Najafabadi et al. [25] applied a modification step to the pre-processing phase prior to association rules mining to discover similar patterns. They also used a clustering method in the proposed algorithm to minimize the data size and dimensionality. The results indicated that this algorithm improved the performance over traditional collaborative filtering techniques as measured by precision and recall metrics.

    Most business enterprises aim to anticipate their clients' potential to support business decisions and determine possible business intelligence operations to acquire dependable forecasting results. Yang et al. [26] modified the Naïve Bayes classifier in association rules mining to determine the relations in marketing data in the banking system. In the first step, a classifier was implemented to classify dataset items. Then, the Apriori algorithm was employed to merge interrelated attributes to minimize the dataset's features.

    Nguyen et al. [27] used collaborative filtering for quantitative association rules to build a recommendation system. A solution was presented to discover association rules on binary data and to support quantitative data. The algorithm was applied to the Microsoft Web (MSWEB) and MovieLens datasets, which are binary and quantitative datasets, respectively. The results indicate that the proposed collaborative filtering model for discovering implication rules is more efficient than the traditional model as measured by accuracy, performance, and the time required to build a recommender system.

    Zhang et al. [28] proposed a novel multi-objective evolutionary algorithm to discover positive and negative association rules. This algorithm aims to enhance the process of multi-objective optimization by applying a reference point that depends on a non-dominated sorting method. The genetic crossover technique is applied to extend the crossover process, and rule mutation has been improved. In addition, the algorithm can deal with all attribute types in the datasets. The results show that this improved algorithm performs much more effectively on quality measurements.

    3 Research Methodology

    3.1 Experiments Description

    The proposed approach includes performing a set of experiments to demonstrate its results compared to other approaches. The experiments are applied as follows:

    a. Read a dataset from the UCI machine learning repository.

    b. Produce new datasets by injecting noise at 0%, 5%, and 10% into the original datasets. Noise is injected by randomly changing the class attribute of the itemsets (a sketch of this step is given after the next paragraph).

    c. For each of the above noise ratios, generate association rules in each of the following cases:

    • Neither noise filtering nor pruning techniques are applied.

    • Built-in pruning is used without implementing noise filtering techniques.

    • Noise filtering techniques are used without implementing built-in pruning.

    • Noise filtering techniques and built-in pruning are both applied.

    The first case aims to study the impact of noise on classification accuracy, while the second is intended to show whether applying only pruning will improve the classification accuracy. The third case aims to discover the efficiency of implementing only noise filtering techniques. Finally, the fourth case tests the efficiency of combining noise filtering and pruning algorithms in succession. Together, these experiments illustrate the impact of noise on classification accuracy and examine the effect of built-in pruning and noise filtering techniques on the classification accuracy in the presence of noisy instances. To conduct the above experiments and apply association rules classification, the Waikato Environment for Knowledge Analysis (WEKA) data mining tool [29] is employed to build an association rules classifier from the datasets. WEKA implements several algorithms to build association rules; the Apriori algorithm is used in this research since it is suitable for all the chosen datasets. The research methodology is illustrated in Fig. 2.
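    The class-noise injection in step b can be sketched as follows. This is an illustrative reading of "randomly changing the class attribute" (a randomly chosen fraction of instances receives a different class label drawn uniformly from the remaining classes); the exact mechanism used in the experiments may differ.

```python
import numpy as np

def inject_class_noise(y, noise_ratio, seed=None):
    """Flip the class label of a randomly chosen `noise_ratio` fraction of instances.
    A flipped instance receives a class drawn uniformly from the other classes."""
    rng = np.random.default_rng(seed)
    y = np.array(y, copy=True)
    classes = np.unique(y)
    n_noisy = int(round(noise_ratio * len(y)))
    idx = rng.choice(len(y), size=n_noisy, replace=False)
    for i in idx:
        others = classes[classes != y[i]]
        y[i] = rng.choice(others)
    return y

# e.g. build the 5% and 10% noisy copies of a dataset's class column
# y_05 = inject_class_noise(y, 0.05, seed=0)
# y_10 = inject_class_noise(y, 0.10, seed=0)
```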

    The pruning function implemented in WEKA applies pessimistic-error-rate-based pruning as in C4.5: a rule is kept only if its estimated error rate is lower than that of the same rule after deleting one of its conditions. Pessimistic-error-rate pruning is a top-down greedy approach that successively eliminates conditions from the Apriori tree if doing so reduces the estimated error. The problem is that some rules could be discarded entirely, which means that some cases will not be covered, and this will affect the prediction of unseen instances. This will be confirmed empirically in this research.

    Figure 2:The research methodology followed in the paper

    In WEKA, JCBAPruning is a class implementing the pruning step of the Classification Based on Association (CBA) algorithm using a Cache-conscious Rectangle Tree (CR-Tree). The CR-tree is a prefix-tree structure that exploits the sharing among rules, which achieves substantial compactness. The CR-tree is also an index structure for rules and serves rule retrieval efficiently. Valid options for JCBAPruning are:

    • C <confidence value>: the confidence value for the optional pessimistic-error-rate-based pruning step (default: 0.25).

    • N: if set, no pessimistic-error-rate-based pruning is performed.

    3.2 Performance Metrics

    The accuracy of association rules classification in the different experiments is compared using several performance metrics: precision, recall, and the percentage of correctly classified instances. For a class of interest x, precision is the fraction of instances classified as x that truly belong to x, while recall is the fraction of instances truly belonging to x that are classified as x:

    $$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}$$

    where true positives (TP) are examples of class x classified correctly, false positives (FP) are examples incorrectly classified as belonging to class x, and false negatives (FN) are examples incorrectly classified as not belonging to class x while they truly belong to class x. The percentage of correctly classified instances is the fraction of all test instances whose predicted class matches their true class.
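    Restated as code, the three metrics can be computed per class of interest as in the sketch below (an illustrative helper, not part of the WEKA tooling used in the experiments).

```python
def per_class_metrics(y_true, y_pred, target_class):
    """Precision, recall and overall accuracy for one class of interest."""
    tp = sum(t == target_class and p == target_class for t, p in zip(y_true, y_pred))
    fp = sum(t != target_class and p == target_class for t, p in zip(y_true, y_pred))
    fn = sum(t == target_class and p != target_class for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return precision, recall, accuracy
```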

    3.3 Datasets

    To evaluate the proposed approach, eight benchmark datasets were selected from the University of California Irvine (UCI) machine learning repository [30] to conduct the experiments. These datasets vary in size, number of attributes, and data type. They are convenient for supervised learning as they include only a single class attribute, which the Apriori-based classifier requires. Tab. 1 shows the details of the used datasets.

    Table 1:UCI eight benchmark datasets

    3.4 Experiments Settings

    WEKA, a data mining tool from the University of Waikato, New Zealand, was used to conduct the previously described experiments and build association rules classifiers from the above datasets. The version used to apply association rule-based classification is WEKA 3.8.4. This version of WEKA contains a package manager that enables the user to install learning schemes (in our case, the Java association rule-based classifier), since it is not embedded implicitly. This classifier implements the Apriori and predictive Apriori algorithms. In the conducted experiments, Apriori is applied since it fits the chosen datasets. An optional pruning parameter is included in the Apriori algorithm to enable or disable built-in pruning. The algorithm works as a decision-list classifier and includes mandatory and optional pruning phases; the optional pruning parameters are deactivated when no pruning is implemented. Pessimistic-error-rate-based pruning is applied in the pruning function, as in C4.5; thus, rules whose approximate error is greater than that of the corresponding rule after deletion of one of its conditions are pruned. The optional pessimistic-error-rate-based pruning has a confidence level that ranges from zero to 1.0; the conducted experiments set the confidence value to its default of 0.25. To prepare the datasets for the experiments, attribute values must be discretized as required by the classifier.

    In WEKA, Discretize is an instance filter that discretizes a range of numeric attributes in a dataset into nominal attributes. Discretization is performed by simple binning, and the filter skips the class attribute if one is set. Continuous ranges are divided into sub-ranges according to a user-specified scheme, such as equal width (bins spanning equal ranges of values) or equal frequency (an equal number of instances in each bin).

    Valid options for the discretizing filter are:

    • -unset-class-temporarily: unsets the class index temporarily before the filter is applied to the data (default: no).

    • -B <num>: specifies the (maximum) number of bins to divide numeric attributes into (default: 10).

    • -M <num>: specifies the desired weight of instances per bin for equal-frequency binning. If this is set to a positive number, then the -B option will be ignored (default: -1).

    • -F: use equal-frequency instead of equal-width discretization.

    • -O: optimize the number of bins using a leave-one-out estimate of the estimated entropy (for equal-width discretization). If this is set, then the -B option will be ignored.

    • -R <col1,col2-col4,...>: specifies the list of columns to discretize. "first" and "last" are valid indexes (default: first-last).

    • -V: invert the matching sense of column indexes.

    • -D: output binary attributes for discretized attributes.

    • -Y: use bin numbers rather than ranges for discretized attributes.

    • -precision <integer>: precision for bin boundary labels (default: 6 decimal places).

    • -spread-attribute-weight: when generating binary attributes, spread the weight of the old attribute across the new attributes rather than giving each new attribute the old weight.
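    The two binning schemes mentioned above can be sketched as follows; this is an illustrative NumPy version of equal-width and equal-frequency discretization, not WEKA's Discretize filter itself.

```python
import numpy as np

def equal_width_bins(values, n_bins=10):
    """Equal-width discretization: split [min, max] into n_bins equal sub-ranges
    and return the bin index of each value."""
    values = np.asarray(values, dtype=float)
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)

def equal_frequency_bins(values, n_bins=10):
    """Equal-frequency discretization: place bin edges at quantiles so each bin
    holds approximately the same number of instances."""
    values = np.asarray(values, dtype=float)
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1))
    return np.clip(np.digitize(values, edges[1:-1]), 0, n_bins - 1)
```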

    4 Experiments and Results

    The proposed approach aims to reduce the effect of overfitting in noisy domains by applying instance reduction techniques, which act as noise filters before generating association rules and conducting association rules classification. Classification accuracy is compared with the case when only built-in pessimistic error pruning is applied. Further comparisons explore the effect of combining both instance reduction (noise filtering) techniques and built-in pruning on classification accuracy.

    In addition to the noise-free base case, noise was introduced into the datasets at two ratios, 5% and 10%, by changing the classes of the itemsets. In the classification task, 10-fold cross-validation is used to test the learning algorithm, and the filtering techniques, when applied, are applied within the same 10 folds. The results reported in this section show the average over the 10 folds. The used performance metrics are the percentage of correctly classified instances, precision, and recall.
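    A minimal sketch of this evaluation protocol is shown below, assuming scikit-learn for the fold splitting and any classifier with a fit/predict interface. The paper's experiments use WEKA's Apriori-based classifier instead, so this is only an illustration of where the noise filter sits in the loop; all names are hypothetical.

```python
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score

def cross_validate_with_filter(X, y, build_classifier, noise_filter=None, n_splits=10):
    """10-fold cross-validation in which the noise filter (if any) is applied
    to the training fold only, before the classifier is built.

    X, y: numpy arrays; build_classifier: callable returning an unfitted estimator.
    """
    scores = []
    splitter = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train_idx, test_idx in splitter.split(X, y):
        X_tr, y_tr = X[train_idx], y[train_idx]
        if noise_filter is not None:
            X_tr, y_tr = noise_filter(X_tr, y_tr)   # e.g. renn_filter from Section 2.4 sketch
        clf = build_classifier().fit(X_tr, y_tr)
        scores.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))
    return sum(scores) / len(scores)
```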

    4.1 Investigating the Effect of Noise

    In this experiment, neither noise filtering nor built-in pruning was applied. This experiment aims to study the effect of noise and to construct a baseline that can be compared with the results of subsequent experiments. Tab. 2 shows the performance of association rules on the different datasets under 0%, 5%, and 10% noise ratios. Fig. 3 compares the performance of the association rules classifier at different noise ratios when neither noise filtering nor built-in pruning was applied. It can intuitively be noticed that all three performance metrics degrade as the noise level increases. The above set of experiments will be referred to as the baseline when compared with the results obtained in subsequent sections.

    Table 2:Association rules classifier results with no pruning,no filtering using different noise ratios

    4.2 The Effect of Applying Filtering and Pruning

    In this section,a series of experiments are conducted to study the effect of implementing noise filtering and built-in pruning in different combinations on the datasets with 0%,5%,and 10%noise ratios inserted.

    4.2.1 The Effect of Applying Filtering and Pruning with 0%Noise

    Experiment set (A1-A4) is applied on the original datasets without noise injection.

    • A1. Baseline results were obtained in Section 4.1 without applying noise filtering or pruning.

    • A2. Results were obtained by applying built-in pruning methods with no noise filtering method.

    • A3. Results were obtained by applying noise filtering methods without built-in pruning.

    • A4. Results were obtained by applying both noise filtering and built-in pruning.

    Tab. 3 compares the accuracy for cases A1, A2, and A3 for the chosen datasets using different instance reduction techniques. Tab. 4 compares the accuracy for cases A1, A2, and A4. Results highlighted in bold indicate classification accuracy enhancements compared with the baseline case A1. In all subsequent tables, CC denotes Correctly Classified instances, P denotes precision, and R denotes recall.

    It can be clearly noticed that applying only pruning to the datasets in case A2 was insufficient to increase the classification accuracy; it even reduced the overall precision compared with the base case A1. This is a probable result of the behavior of the pessimistic-error-rate-based pruning methodology, a greedy pruning approach that consecutively eliminates conditions from the Apriori tree if this minimizes the estimated error. Some potentially important rules could be discarded; consequently, the prediction accuracy for unseen instances could be affected. The A3 cases show the impact of using noise filtering techniques only, without applying pruning. ENN showed a good improvement, and better results were achieved when RENN was applied, as the classification accuracy improved in five out of eight datasets.

    The last set of tests, A4, investigates the impact of implementing both noise filtering and built-in pruning algorithms. When the results are compared with the A3 cases, classification accuracy is reduced heavily, even when compared to the benchmark case A1. Therefore, implementing both filtering and pruning algorithms concurrently does not necessarily result in better classification accuracy. It must be noted that the behavior of this combination of algorithms still needs to be studied in noisy domains, as explored in the subsequent experiments. As the best results in the current set of experiments were achieved when only noise filtering was applied before building association rules (A3), the impact of applying noise filtering techniques on the quality of the association rules is examined next.

    Tabs. 5 and 6 compare the resulting rules without applying filtering or built-in pruning with the resulting rules after applying RENN (the best noise filter) for the Breast-Cancer dataset. Tab. 5 shows the first ten rules and their confidence values produced in the zero-noise case without applying filtering or pruning. The confidence range for the rules produced after applying RENN is shown in Tab. 6.

    It can be noticed that applying RENN affected the confidence of the generated association rules. When neither noise filtering nor pruning was applied, the confidence ranged from 1 to 0.94, while with RENN it improved to a range from 1 to 0.99.

    More experiments are needed in noisy domains to check the effectiveness of instance reduction techniques as noise filters.

    Table 3:Comparing classification accuracy results for cases A1,A2,and A3

    Table 4:Comparing classification accuracy results for cases A1,A2,and A4

    Table 5:Confidence value for the first ten rules when neither noise filtering nor built-in pruning was applied at 0%noise ratio

    Table 6:Confidence value for the first ten rules when RENN noise filtering was applied without builtin pruning at 0%noise ratio

    4.2.2 The Effect of Applying Filtering and Pruning with 5%Noise

    In experiment set (B1-B4), a 5% noise ratio was injected into the datasets by changing the class attribute. Four different tests were conducted as follows:

    • B1. Baseline results were obtained in Section 4.1 without applying noise filtering or pruning.

    • B2. Results were obtained by applying built-in pruning methods with no noise filtering method.

    • B3. Results were obtained by applying noise filtering methods without built-in pruning.

    • B4. Results were obtained by applying both noise filtering and built-in pruning.

    Tab. 7 compares the accuracy for cases B1, B2, and B3 for the chosen datasets using different instance reduction techniques. Tab. 8 compares the accuracy for cases B1, B2, and B4. Results highlighted in bold indicate accuracy enhancements compared with the baseline case B1.

    It can be noticed that in B2, when only built-in pruning was applied, classification accuracy did not increase for the datasets with 5% injected noise; it even reduced the overall accuracy due to the pessimistic-error-rate-based pruning approach. The B3 experiments show that applying ALLKNN yielded a great improvement, and RENN achieved a significant improvement. It can be noticed from Tab. 7 that the difference between the results achieved by RENN (77.47%) and the baseline (66.08%) is greater than the corresponding difference (76.65% vs. 70.47%) when the noise was 0%, as shown in Tab. 3. This indicates that the improvement due to the noise filter becomes more apparent as the noise ratio increases; this observation is analyzed thoroughly later. The results for the last set, B4, were worse than those in B3: applying ALLKNN and RENN with pruning produced good results, but the best results were still achieved by applying ALLKNN and RENN without pruning.

    Tab. 9 shows the first ten rules produced for the Breast-Cancer dataset and their confidence without applying filtering or built-in pruning with a 5% noise ratio, while Tab. 10 shows the first ten rules with their confidence produced after applying RENN. It can be noticed from Tab. 9 that the ten rules have a 0.94 confidence value, which is lower than in the zero-noise case; this is an expected result due to noise. Again, the confidence values of the constructed association rules improved when the RENN noise filter was applied; they range from 1 to 0.99, as shown in Tab. 10.

    4.2.3 The Effect of Applying Filtering and Pruning with 10%Noise

    In experiment set (C1-C4), a 10% noise ratio was injected into the datasets by changing the class attribute. Four different tests were conducted as follows:

    • C1. Baseline results were obtained in Section 4.1 without applying noise filtering or pruning.

    • C2. Results were obtained by applying built-in pruning methods with no noise filtering method.

    • C3. Results were obtained by applying noise filtering methods without built-in pruning.

    • C4. Results were obtained by applying both noise filtering and built-in pruning.

    Tab. 11 compares the accuracy for cases C1, C2, and C3 for the chosen datasets using different instance reduction techniques. Tab. 12 compares the accuracy for cases C1, C2, and C4. Results highlighted in bold indicate accuracy enhancements compared with the baseline case C1.

    In this experiment, it is hard to notice an improvement in C2 compared to C1. Accuracy improved significantly in C3 when ALLKNN and RENN were applied, which indicates an excellent performance for these techniques at higher noise ratios. Improvements are also seen in C4, compared to C1, when applying ALLKNN and RENN together with pruning.

    The efficiency of applying only noise filtering, mainly ALLKNN and RENN, becomes more evident as the injected noise ratio increases. RENN achieved 77.59% classification accuracy compared to 59.89% achieved by the baseline experiment with the same noise ratio. When the noise ratio was 0%, RENN achieved 76.65%, while the accuracy of the baseline at the same noise ratio was 70.47%, as reported in Tab. 3. The difference is now more apparent, indicating the increasing importance of using noise filtering techniques in the presence of more noise.

    Table 7:Comparing classification accuracy results for cases B1,B2,and B3

    Table 8:Comparing classification accuracy results for cases B1,B2,and B4

    Table 9:Confidence value for the first ten rules when neither noise filtering nor built-in pruning was applied at a 5%noise ratio

    Table 10:Confidence value for the first ten rules when RENN noise filtering was applied without built-in pruning at a 5%noise ratio

    Table 11:Comparing classification accuracy results for cases C1,C2,and C3

    Table 12:Comparing classification accuracy results for cases C1,C2,and C4

    The use of filtering algorithms also enhanced the confidence of the discovered association rules in both the 0% and 5% noise cases. The confidence values of the association rules in the 10%-noise case are compared next. Tab. 13 shows the first ten rules produced for the Breast-Cancer dataset and their confidence without applying filtering or built-in pruning with a 10% noise ratio. Tab. 14 shows the first ten rules with their confidence produced after applying RENN. It can be noticed from Tab. 13 that the ten rules have a 0.9 confidence value, which is lower than in both the 0%-noise and 5%-noise cases; this is an expected result due to noise. Again, the confidence values of the constructed association rules improved when the RENN noise filter was applied; they range from 0.94 to 0.91, as shown in Tab. 14.

    Table 13:Confidence value for the first ten rules when neither noise filtering nor built-in pruning was applied at a 10%noise ratio

    4.3 Effect of Pruning

    As previously discussed, the results show that applying pruning without filtering did not improve the classification accuracy compared to using noise filtering alone. Tab. 15 illustrates the average accuracy over the used datasets for the cases when neither pruning nor filtering was used (i.e., the baseline) and when only pruning was used, at different noise ratios. It shows that accuracy drops steadily as the noise ratio rises for both the baseline and pruning-only experiments. Therefore, using pruning to overcome the noise effect is not a good choice.

    Table 14:Confidence value for the first ten rules when RENN noise filtering was applied without built-in pruning at a 10%noise ratio

    Table 15:Average performance comparison between baseline experiments and pruning only experiments at different noise ratios

    Fig. 4 compares the baseline experiments with the pruning-only experiments at different noise ratios in terms of classification accuracy, precision, and recall. Both sets of experiments show performance degradation in the presence of noise, and no noticeable improvement can be observed when only pruning was applied.

    Figure 4:Comparison of average performance of baseline experiments with pruning only experiments at different noise ratios.a)classification accuracy,b)Precision,and c)Recall

    4.4 Effect of Filtering

    This section compares the performance of the best three noise filters, namely ALLKNN, ENN, and RENN, with the performance of the baseline experiments at different noise ratios.

    The purpose of this experiment is to investigate whether implementing filtering algorithms alone is a good choice to overcome the overfitting problem and improve the classification accuracy. Tab. 16 shows the average accuracy over the used datasets for the baseline experiments and for the cases when ALLKNN, ENN, and RENN were used at different noise ratios. It shows that accuracy drops steadily as the noise ratio rises for the baseline case. The noise filters, however, show greater resistance to noise, and the best noise filter was RENN. Moreover, the difference between the noise-filter performance and the baseline became more apparent as the noise ratio increased: the accuracy difference between RENN and the baseline was 6.18% (76.65-70.47), 11.39% (77.47-66.08), and 17.7% (77.59-59.89) at the 0%, 5%, and 10% noise ratios, respectively. Figs. 5a, 5b, and 5c compare the baseline experiments with the filtering-only experiments at different noise ratios in terms of classification accuracy, precision, and recall, respectively.

    Table 16:Average performance comparison between baseline experiments and filtering only experiments at different noise ratios

    Figure 5:Comparison of average performance of baseline experiments with filtering only experiments at different noise ratios.a)classification accuracy,b)Precision,and c)Recall

    Applying instance reduction techniques as noise filters showed a significant improvement in classification accuracy, especially at higher noise ratios. It appears that filtering techniques work better in higher-noise domains, which is the desired result. The results of applying filtering alone were even higher than those of applying both filtering and pruning, as evidenced in Tabs. 3, 4, 7, 8, 11, and 12. Filtering only is also less complex during the building of the classifier.

    4.5 Statistical Analysis of the Results

    In machine learning problems, training data is a random sample from its population, and such samples may or may not be representative of that population. This means that the results obtained from applying any technique vary depending on the data; a systematic way is needed to exclude the possibility of obtaining extreme results due to sampling errors.

    The null hypothesis of the learning problem needs to be tested and either accepted or rejected. The null hypothesis assumes that all treatments have equal means: $H_0: \mu_1 = \mu_2 = \mu_3 = \dots = \mu_k$.

    If the null hypothesis is accepted, there is no significant difference among the tested algorithms. If the null hypothesis is rejected, it can be concluded that there is a significant difference among the tested algorithms; in that case, paired t-tests have to be conducted to show where the performance has been improved.

    One-way Analysis of Variance (ANOVA) is one of the techniques used to test the null hypothesis. It computes the p-value, which determines whether there is a significant difference or not. The p-value is compared against the significance level, which usually takes one of the values 0.01, 0.05, or 0.10; in this research, 0.05 is used, corresponding to 95% confidence in the results. Tab. 17 shows the ANOVA results for the 0%-noise, 5%-noise, and 10%-noise cases.
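    The statistical procedure described above can be sketched with SciPy as follows. The accuracy values shown are placeholders rather than the paper's results, and further treatment groups would be added to the ANOVA call in the same way.

```python
from scipy import stats

# Each list holds the per-dataset accuracies (8 values) for one treatment.
baseline  = [69.2, 71.0, 65.5, 74.8, 68.3, 73.1, 70.0, 71.9]   # placeholder numbers
renn_only = [75.4, 77.2, 74.0, 79.9, 76.1, 78.3, 75.6, 76.7]   # placeholder numbers

# One-way ANOVA across the treatments: reject H0 (equal means) if p < 0.05.
f_stat, p_value = stats.f_oneway(baseline, renn_only)          # add further groups as needed

# If H0 is rejected, paired t-tests locate where the difference lies
# (the same eight datasets are measured under both treatments).
t_stat, p_paired = stats.ttest_rel(baseline, renn_only)
print(p_value, p_paired)
```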

    Table 17: ANOVA single factor for 0% noise, 5% noise, and 10% noise. Each group has a count of 8

    The overall p-value for the 0%-noise case is 0.1258 > 0.05, indicating that there is no significant difference among these treatments. The null hypothesis is accepted; therefore, there is no need to perform a paired t-test for this case. This is not a surprising result, as the superiority of applying noise filtering techniques is expected to be more apparent in noisy domains.

    In Tab. 18, only DROP3-Pruned and DROP5-Pruned showed significant differences compared to the base case, with p-values less than 0.05. They are significantly worse than the baseline case with 95% confidence.

    Table 18: Paired t-test results for the 5%-noise case

    The p-value for the 5%-noise case is 0.03716, which is less than 0.05, meaning that the reported improvement is statistically significant. The next step is to determine which treatments produced a significant improvement. Tab. 18 shows the results of the paired t-test for the 5%-noise case; p-values less than 0.05 indicate a significant difference. Of particular interest is the difference compared to the base case (no pruning, no filtering); these values are marked in boldface.

    Similarly, the p-value for the 10%-noise case (from Tab. 17) is 0.0313, which is less than 0.05, meaning that the reported improvement is statistically significant. The next step is to determine which treatments produced a significant improvement. Tab. 19 shows the results of the paired t-test for the 10%-noise case; p-values less than 0.05 indicate a significant difference. Of particular interest is the difference compared to the base case (no pruning, no filtering); these values are marked in boldface. In Tab. 19, RENN, ALLKNN, and ENN showed significant differences compared to the base case, with p-values less than 0.05; they are significantly better than the baseline case with 95% confidence. On the other hand, DROP5-Pruned is significantly worse than the baseline with 95% confidence.

    Table 19: Paired t-test results for the 10%-noise case


    4.6 Summary of Results per Dataset

    To summarize the impact of using the best noise filtering algorithms on the eight datasets, Tab. 20 shows the number of datasets improved by applying ALLKNN, ENN, and RENN compared with the base case. These algorithms produced the highest average accuracy.

    Table 20:Number of improved datasets with ENN and RENN

    RENN was the best noise filtering algorithm in terms of classification accuracy and the number of improved datasets, so its performance is further analyzed on each dataset. Tab. 21 shows the datasets that improved with RENN compared to the baseline scenario at the 0%, 5%, and 10% noise ratios. The ++ symbol implies a statistically significant improvement, while + implies an improvement that is not statistically significant. Similarly, -- indicates that accuracy decreased significantly, while - indicates that the decrease in accuracy was not significant. Comparing the numbers of improved datasets in Tab. 21, it can be noticed that in the zero-noise case the classification accuracy with RENN improved in a statistically significant way in five out of eight datasets compared to the baseline.

    In the 5%-noise case, seven datasets improved; in six of them, RENN's improvement over the base case was statistically significant. In the 10%-noise case, seven datasets improved; in six of them, the improvement was statistically significant, and for the remaining dataset the difference was not significant. The Voting dataset is the only dataset whose classification accuracy did not improve at any noise ratio. A possible justification is that learners (in this case, association rules) can be biased with respect to specific datasets; it does not necessarily mean the dataset itself is inaccurate or imbalanced.

    Table 21:Comparison of significantly improved datasets with RENN for all noise-cases

    5 Conclusions and Future Work

    5.1 Conclusions

    This research introduced a novel approach to increase association rules-based classification accuracy in noisy domains. The proposed approach applies instance reduction techniques to the datasets before generating association rules. This step works as a data cleaning procedure to eliminate noisy instances as much as possible before building the classifier. Unlike pre- and post-pruning procedures, which consume a large number of probably unnecessary computations, applying noise filtering algorithms results in cleaner datasets and avoids extracting low-confidence association rules, leading to an association rules-based classifier that performs efficiently on unseen examples.

    The findings and contributions of this research are as follows:

    - Five filtering algorithms were tested: DROP3, DROP5, ALLKNN, ENN, and RENN.

    - The experiments were conducted on three noise levels injected into the datasets: 0%, 5%, and 10%.

    - Average classification accuracy improved remarkably compared to the base case, where neither noise filtering nor built-in pruning was applied.

    - The improvement in classification accuracy was even more apparent as the noise ratio increased, which is the intended goal of this research.

    - The classification accuracy of the association rules improved remarkably when applying ALLKNN, ENN, and RENN, especially at higher noise levels, while the results of RENN were the most promising, with a significant improvement in 7 out of 8 datasets at 5% and 10% noise.

    - As the use of filtering techniques led to removing noisy instances, it avoided the unnecessary extraction of low-confidence association rules that contribute to the problem of overfitting.

    RENN's average classification accuracy for the eight datasets in the zero-noise case improved from 70.47% to 76.65% compared to the base case when RENN was not used. The average accuracy improved from 66.08% to 77.47% in the 5%-noise case and from 59.89% to 77.59% in the 10%-noise case. This improvement in classification accuracy qualifies RENN as an excellent solution for increasing the accuracy of association rules classification, especially in noisy domains. It can be noticed that the improvement was more remarkable as the noise level increased.

    5.2 Future Work

    The idea of applying noise filtering algorithms to improve association rules classification accuracy can still be investigated to test its effectiveness on massive datasets from different sources. One candidate source is www.data.gov, an open data repository of the US government that contains a large number of varied datasets in different fields.

    Another research direction may involve applying instance reduction algorithms, as noise filters, to other machine learning techniques to improve classification accuracy, especially in noisy domains.

    Dimensionality reduction algorithms such as Principal Component Analysis (PCA) can also be utilized to reduce the dimension of a learning problem. Dimensionality reduction minimizes the set of attributes measured in each itemset without affecting classification accuracy; this may enhance classification accuracy and reduce processing time significantly.

    Acknowledgement: The APC was funded by the Deanship of Scientific Research, Saudi Electronic University.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
