
    Medical Data Clustering and Classification Using TLBO and Machine Learning Algorithms

Computers, Materials & Continua, 2022, Issue 3

Ashutosh Kumar Dubey, Umesh Gupta and Sonal Jain

1 Chitkara University Institute of Engineering and Technology, Chitkara University, Punjab, India

2 Institute of Engineering and Technology, JK Lakshmipat University, Jaipur, India

Abstract: This study empirically analyzes teaching-learning-based optimization (TLBO) and machine learning algorithms, using the k-means and fuzzy c-means (FCM) algorithms, to evaluate their individual clustering and classification performance. In the first phase, the clustering algorithms (k-means and FCM) were employed independently and the clustering accuracy was evaluated using different computational measures. In the second phase, the non-clustered data obtained from the first phase were preprocessed with TLBO. TLBO was performed with k-means (TLBO-KM) and FCM (TLBO-FCM), together denoted TLBO-KM/FCM. The objective function was determined by considering both minimization and maximization criteria. The non-clustered data obtained from the first phase were further utilized as input for threshold optimization. Five benchmark datasets from the University of California, Irvine (UCI) Machine Learning Repository were considered for the comparative study and experimentation: Breast Cancer Wisconsin (BCW), Pima Indians Diabetes, Heart-Statlog, Hepatitis, and Cleveland Heart Disease. The combined average accuracy obtained collectively is approximately 99.4% in the case of TLBO-KM and 98.6% in the case of TLBO-FCM. The approach is also capable of finding the dominating attributes. The findings indicate that TLBO-KM/FCM, evaluated with different computational measures, performs well on the non-clustered data where k-means and FCM, employed independently, fail to provide significant results. Evaluating the different feature sets, TLBO-KM/FCM and SVM (GS) clearly outperformed all other classifiers in terms of sensitivity, specificity, and accuracy. TLBO-KM/FCM attained the highest average sensitivity (98.7%), highest average specificity (98.4%), and highest average accuracy (99.4%) for 10-fold cross-validation with different test data.

    Keywords: K-means; FCM; TLBO; TLBO-KM; TLBO-FCM; TLBO-KM/FCM; machine learning algorithms

    1 Introduction

Data mining and machine learning algorithms are efficient at pattern identification, extraction, and data separation through clustering and classification [1]. The major challenge in biological data is gaining insight into structure and function in order to explore the adaptation, diversity, and complexity of the system [2,3]. Developing computational and statistical approaches, and validating their applicability in the analysis of parameters and attributes, is a grand challenge [3,4]. Since the symptoms of diseases are not similar across patients, it is essential to characterize their distinctive features [5]. Pattern detection has been found to be important in correctly identifying hidden patterns [6]. Data mining and machine learning techniques are effective at identifying such hidden patterns [7]. Another aspect is the appropriate association and correlation between methods and their tuning parameters, threshold ranges, and attribute dominance factors. The main motivation is therefore to develop an efficient framework for advanced computing with feature extraction and decision-support capability, accelerating both the individual and integrated aspects of computing over biological datasets. A clustering algorithm organizes the data into similar groups and can therefore be applied to distinguish disease and non-disease attributes. Typical clustering algorithms are the k-means and fuzzy c-means (FCM) algorithms [8]. Although k-means may fail for a badly placed cluster center, better results can be obtained through an appropriate selection of the initial points. The results may also suffer in the case of FCM if the dataset is large, or if there is uncertainty in the data objects and in the optimal parameter settings [9]. In these scenarios, a classification approach can help determine the selection points and prepare uniform data for experimentation. The classification approach categorizes the data based on training data that generate the target class with proper boundaries. An optimization algorithm with efficient data-point selection and uniform-data creation can also be useful in data classification [10]. Popular optimization algorithms include ant colony optimization (ACO), particle swarm optimization (PSO), the genetic algorithm (GA), and the artificial bee colony (ABC) algorithm. The performance of the above-mentioned algorithms depends on the tuning of their parameters [11,12]. Rao et al. proposed the teaching-learning-based optimization (TLBO) algorithm, which requires only common controlling parameters such as population size and number of generations, and no algorithm-specific parameters [11]. This solves the problem of unsuitable parameter tuning. The elitism concept was also introduced into the TLBO algorithm for complex constrained optimization problems [12]. Moreover, the TLBO algorithm was found to be capable of identifying the centroids of a user-specified number of clusters of numerical data [13]. Machine learning algorithms have proven helpful in disease detection and are applicable across different domains [14-16]. The main contribution of this paper is therefore the hybrid use of these algorithms together, with a complete comparative analysis; the framework allows algorithms to be combined for different purposes. The objective of our study was to analyze the performance of TLBO with k-means (TLBO-KM) and FCM (TLBO-FCM), together TLBO-KM/FCM, along with machine learning algorithms, considering variable parameters and computational aspects.

    2 Literature Review

Pedireddla et al. [17] suggested the hybridization of TLBO and MapReduce for working with huge datasets. TLBO has also been used to solve clustering problems such as local optima and for the automatic clustering of large unlabeled datasets [18,19]; the latter approach does not require any prior knowledge of the data. Swapna et al. [20] obtained better accuracy by using the modified-TLBO (MTLBO) algorithm. In 2020, Zadeh et al. [21] discussed triple-negative breast cancer, which is unresponsive to targeted hormonal therapies, so treatment is limited to nonselective chemotherapeutic agents. Considering basal-like breast cancers, they applied dimensionality-reduction data mining techniques with a feature selection method on the triple-negative breast cancer dataset; their results were prominent in proper identification and diagnosis. In 2020, Simsek et al. [22] constructed a hybrid data-mining-based method for differentiating survival changes. They considered the least absolute shrinkage and selection operator and a genetic algorithm, along with artificial neural networks and logistic regression models in the final stage. In 2020, Chiudinelli et al. [23] discussed a careflow mining algorithm applied to electronic health records, mined and examined on the basis of data recorded for administrative purposes; their results were found to be significant for decision-making systems in hospitals. In 2020, Jonsdottir et al. [24] discussed a predictive outcome model and developed a model-selection tool comprising a collection of classification algorithms. Their results indicate that the same performance was achieved irrespective of the algorithm considered. In 2020, Tanha et al. [25] discussed prognostic indices for breast cancer patient groups in Iran. Their main aim was to design a classification model for pattern discovery using decision tree and rule-based algorithms; their results were prominent in showing the relationship between different prognostic indices. In 2018, Alwidian et al. [26] discussed the prediction of breast cancer. They suggested that association classification techniques suffer from prioritization at the attribute level and presented a new pruning and prediction technique; their results indicate that the algorithms can also be applied in other domains. In 2009, Yeh et al. [27] developed a hybrid data mining approach with two phases. Preprocessing, including statistical methods, is done in the first phase, which reduces computational complexity and speeds up the process; discrete particle swarm optimization is applied in the second phase. Their results are helpful in the decision-making process. In 2020, Salehi et al. [28] discussed breast cancer survivability using the Surveillance, Epidemiology, and End Results (SEER) dataset and a multi-layer perceptron, evaluated with the k-fold cross-validation technique; their results show an average accuracy of 84%. In 2020, Prabadevi et al. [29] discussed the accurate discovery of cancerous breast cells, applying several machine learning algorithms in a comparative study: random forest (RF), support vector machine (SVM), naive Bayes (NB), decision tree (DT), neural networks (NN), and logistic regression (LR). In 2020, Nizam and Hassan [30] studied and analyzed clustering algorithms for unsupervised learning. They suggested that classification accuracy may be affected when different distance metrics are used, and recommended combining k-means with the Manhattan distance and FCM with the Euclidean distance for the best results. In 2007, Ahmad and Dey [31] presented a clustering algorithm based on k-means that is well suited for mixed numeric and categorical features. They proposed a new cost function and distance measure based on the co-occurrence of values, and their approach uses a modified description of the cluster center, which may remove the limitation to numeric data. Their approach has been prominent among traditional methods. In 2011, Minaei-Bidgoli et al. [32] proposed an ensemble-based approach for feature selection. Addressing the parameter-sensitivity problem, they select the highest-scoring features based on the ensemble method; the main advantage of their approach is parameter insensitivity, as no parameter needs to be set. In 2015, Parvin et al. [33] discussed the classification problem and the selection of a classifier for a specific problem. They suggested ensemble learning to provide a near-optimal solution and proposed a novel method for ensemble creation, called classifier selection based on clustering, with a DT or a multilayer perceptron as the base classifier. They used the weighted majority vote as the aggregation function, investigated the influence of the number of clusters, and experimented on the University of California, Irvine (UCI) repository; their method has become prominent. In 2013, Parvin et al. [34] discussed data-point distribution and imbalanced datasets, including relative and non-relative datasets with imbalanced shapes, and presented an algorithm for non-relative imbalanced datasets, also with prominent results. In 2020, Dashti et al. [35] discussed colorectal cancer and developed a statistical pipeline based on a 'gene-motif', which merges mutated gene information with a tri-nucleotide motif; their approach is useful for identifying cancer subtypes and cancer biomarkers. In 2021, Baccouche et al. [36] proposed a You-Only-Look-Once (YOLO) model, helpful in classifying suspicious breast lesions, and achieved an average accuracy of 98%. In 2021, Rasam et al. [37] explored ArcGIS Online and Web Apps for tuberculosis, with the main purpose of managing the disease dataset. In 2021, Bardhi et al. [38] aimed at patient survivability detection in different diseases; the SVM classifier was found to be the best. In 2021, Flores et al. [39] discussed and analyzed various aspects of machine learning and artificial intelligence techniques for peripheral artery diseases.

The above review and analysis suggest the need for algorithms with aggregate functionality. They also show a major concentration on preprocessing and feature selection, as symptom variability is higher in the case of medical data.

    3 Materials and Methods

In this paper, five different benchmark datasets have been considered: the BCW dataset (699 instances, 9 features, 2 classes), Pima Indians Diabetes (768 instances, 8 features, 2 classes), Heart-Statlog (270 instances, 13 features, 2 classes), Hepatitis (155 instances, 19 features, 2 classes), and Cleveland Heart Disease (296 instances, 13 features, 5 classes). All were taken from the UCI Machine Learning Repository [40].
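For reproducibility, the BCW file can be pulled directly from the UCI repository; below is a minimal pandas sketch (ours, not the authors' code), with the attribute columns named A1-A9 to match the notation used later in the paper. The URL and column layout follow the repository's published description of the original 699-instance file.

```python
import pandas as pd

URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "breast-cancer-wisconsin/breast-cancer-wisconsin.data")

cols = ["id"] + [f"A{i}" for i in range(1, 10)] + ["class"]  # 9 features A1..A9
df = pd.read_csv(URL, names=cols, na_values="?")

X = df[cols[1:-1]].dropna()        # feature matrix (rows with missing values dropped)
y = df.loc[X.index, "class"]       # 2 = benign, 4 = malignant
print(X.shape, y.value_counts().to_dict())
```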

K-means clustering depends on the closest centroid. In a medical dataset, the data can be either malignant or benign. When k-means is applied to such datasets, the initial centroids sometimes re-adjust themselves and sometimes do not, and this process is repeated several times; the accuracy of the results depends heavily on whether the process converges to the closest centroids. The FCM algorithm, on the other hand, processes the data by allocating to each data point a membership degree corresponding to each cluster center. The fuzziness factor expresses the degree of truth (>1), whereas the termination criterion (epsilon value) lies between 0 and 1, and the process repeats until the termination criterion is met. This may influence the results, as individual data points may be affected, so there is a chance of becoming trapped in local optima. If the values are first arranged by an optimization procedure, this problem can be solved to a great extent, as the re-adjustment has already been performed and the final outcome is more organized and normalized. If the k-means or FCM algorithm is then applied to these data, the clustering accuracy can be improved further.
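To make the two baseline updates concrete, here is a minimal NumPy sketch (ours, not the authors' implementation) of one k-means re-adjustment step and the standard FCM membership/center updates with fuzziness factor f:

```python
import numpy as np

def kmeans_step(X, centers):
    """One k-means iteration: assign each point to its closest centroid,
    then re-adjust each centroid to the mean of its assigned points."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    centers = np.array([X[labels == k].mean(axis=0)
                        for k in range(len(centers))])
    return centers, labels

def fcm_memberships(X, centers, f=2.0):
    """FCM membership degrees M[i, j] of point i in cluster j (rows sum to 1)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
    ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (f - 1.0))
    return 1.0 / ratio.sum(axis=2)

def fcm_centers(X, M, f=2.0):
    """FCM cluster centers: membership-weighted means of the data."""
    W = M ** f
    return (W.T @ X) / W.sum(axis=0)[:, None]
```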

In order to achieve good performance, most optimization algorithms require the tuning of their parameters [5]. In this study, TLBO is used first with the k-means and FCM algorithms, as it requires only common controlling parameters, i.e., a smaller number of parameters. The TLBO algorithm is based on the influence of a teacher on the outcome of a learner and has two phases: the teacher phase and the learner phase. The learners learn from the teacher as well as from other learners. The main parameters of this algorithm are the population size, the design variables, and the teaching factor. In our case, the population size is the size of the medical dataset, and the design variables (the number of subjects) are the attributes. In general, the teaching factor is either 1 or 2 (1 in our case); it determines the revised mean value. Thereafter, the best learner in the whole population is taken as the teacher: if the objective is minimization, the lowest value is considered the teacher, and if the objective is maximization, the highest value. In the teacher phase, the difference mean is calculated and applied to the attributes to increase the knowledge level; these values are used as input to the learner phase. In the learner phase, a learner can learn from any other learner with more knowledge: for minimization, knowledge is transferred from the lowest value, and for maximization, from the highest value. Based on this, updated attribute values and objective-function values are obtained through fitness comparison. The output of the learner phase was used as the input for the clustering algorithms. The whole procedure can be better understood through the proposed system shown in Fig. 1.

    Figure 1: Block diagram for the TLBO-KM/FCM procedure and system structure
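For reference, one TLBO generation for a minimization objective can be sketched as follows (our illustration; `obj`, `Tf`, and the greedy selection are generic TLBO conventions, not code from the paper):

```python
import numpy as np

def tlbo_generation(pop, obj, Tf=1, rng=None):
    """One TLBO generation (teacher phase then learner phase), minimization.
    pop: (n_learners, n_vars) array; obj: vectorized objective, obj(pop) -> (n_learners,)."""
    rng = np.random.default_rng() if rng is None else rng
    fit = obj(pop)
    teacher = pop[np.argmin(fit)]          # best learner acts as the teacher
    mean = pop.mean(axis=0)

    # Teacher phase: shift learners toward (teacher - Tf * mean), keep improvements.
    cand = pop + rng.random(pop.shape) * (teacher - Tf * mean)
    keep = obj(cand) < fit
    pop = np.where(keep[:, None], cand, pop)

    # Learner phase: each learner interacts with a random partner.
    fit = obj(pop)
    j = rng.permutation(len(pop))          # partner indices
    toward = np.where((fit < fit[j])[:, None],
                      pop - pop[j],        # learner is better: move away from partner
                      pop[j] - pop)        # partner is better: move toward partner
    cand = pop + rng.random(pop.shape) * toward
    pop = np.where((obj(cand) < fit)[:, None], cand, pop)
    return pop
```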

The proposed framework provides different functionalities and computational parametric variations, with solutions to variable problem areas. This means that the required setup for the preprocessing and clustering of data is implemented and evaluated, and the approach can then be utilized wherever needed. The functionalities include data selection, preprocessing, partitioning, clustering, classification, and computational parametric variations based on variable parameters. The proposed framework also provides a basic set of application tools which can be extended with different methodological prospects and dataset expansion, with new attributes, for classification and clustering purposes. In Phase-I, only the clustering algorithms (k-means and FCM) were used, and the clustering accuracy was evaluated using different computational measures. In Phase-II, the non-clustered data were treated with TLBO. In the third phase, the non-clustered data obtained from the TLBO process were clustered using the k-means and FCM algorithms. The TLBO-KM and TLBO-FCM (TLBO-KM/FCM) algorithms were used to find the most accurate clusters. The optimized objective function was determined by considering both minimization and maximization. Here, 'non-clustered' refers to the data remaining after clustering by k-means or FCM, and 'termination criterion' refers to the FCM criterion for finalizing the clusters. The TLBO-KM/FCM algorithm below depicts the complete picture; the notation used in the algorithm is shown in Tab. 1.

    Table 1: Notations

Algorithm: TLBO-KM/FCM

Phase-I
Input: Non-clustered result data from the set
Output: Pre-processed attribute values in the cases of minimization and maximization
Step 1: Teacher phase
1.1 Improvement in the mean result of the class: dm_i = r_i × (X_s,kbest,i − T_f × mX_s,j)
1.2 The existing solution is updated based on the mean difference: updated(X_s,k,i) = X_s,k,i + dm_i
1.3 Updated values of the attributes and the objective function are obtained through fitness comparison; these values are input to the learner phase.
Step 2: Learner phase
2.1 Student interaction: if the value of the first interaction combination f(X_f) is better than that of the second, f(X_s), then the knowledge transfer is from X_f to X_s:
    updated(X_j,a,i) = X_j,a,i + r_i (X_j,a,i − X_j,b,i)
    else
    updated(X_j,a,i) = X_j,a,i + r_i (X_j,b,i − X_j,a,i)
2.2 Updated values of the attributes and the objective function are obtained through fitness comparison.
2.3 Steps 1 and 2 are repeated until the last iteration.
Step 3: Pre-processed attribute values in the cases of minimization and maximization are obtained as the result.

Phase-II
Input: Attribute values after minimization and maximization, as input for k-means clustering
Output: Final clustering results of the non-clustered data
Step 1: The number of clusters in this case is 2.
Step 2: Centroid initialization
2.1 Centroid initiation and processing.
2.2 The Euclidean distance is used for the distance between data points and cluster centers; the closest distance is the criterion for cluster assignment:
    ED = √( Σ_{i=1..n} (X_i − Y_i)² )
Step 3: The simple and variance split methods are applied.
Step 4: Mean values are calculated as follows:
    for i = 0 to row: for j = 0 to column: mean[i][j] += X[i][j]
Step 5: Variances are calculated as follows:
    for i = 0 to row: for j = 0 to column: variance[i][j] += (mean[i][j] − X[i][j]) × (mean[i][j] − X[i][j])
    sum = 0
    for i = 0 to row: for j = 0 to column: variance[i][j] = variance[i][j] / n; sum += variance[i][j]
Step 6: Cluster centers are calculated as follows:
    CC_i = (1/R_i) Σ_{j=1..R_i} X_j
Step 7: Steps 2-6 are repeated until the means no longer change.
Step 8: The clustering results of the non-clustered data are obtained.

Phase-III
Input: Attribute values after minimization and maximization, as input for FCM clustering
Output: Final clustering results of the non-clustered data
Step 1: The updated(X_j,a,i) values are set as the data points; D-dimensional data are used for clustering.
Step 2: For each point i, the memberships over the C clusters sum to one: Σ_{j=1..C} M_ijd = 1.0
Step 3: Calculate the degree of membership and the center vector:
    CC_jd = ( Σ_{i=1..N} M_ijd^f X_id ) / ( Σ_{i=1..N} M_ijd^f )
Step 4: Distance calculation: ED_ijd = ‖x_id − CC_jd‖
Step 5: Update the degree of membership:
    M_ijd = 1 / Σ_{c=1..C} (ED_ijd / ED_icd)^(2/(f−1))
Step 6: The process terminates through the epsilon value (ε), i.e., when the change in M_ijd ≤ ε.
Step 7: The final clustering results of the non-clustered data are obtained.
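Under the assumption that `tlbo_generation`, `fcm_memberships`, and `fcm_centers` are the helper functions sketched above, a compact Python driver for the TLBO-preprocessing-plus-FCM path could look like the following; the placeholder objective is our own illustrative stand-in, since the paper's Eq. (1) is not reproduced here.

```python
import numpy as np

def tlbo_fcm(X_nonclustered, n_clusters=2, f=2.0, eps=2e-5, max_iter=100,
             tlbo_gens=5, rng=None):
    """Sketch: TLBO preprocessing of non-clustered records, then FCM with
    an epsilon-based termination criterion."""
    rng = np.random.default_rng(0) if rng is None else rng

    # TLBO preprocessing of the records that plain clustering failed on.
    pop = X_nonclustered.copy()
    obj = lambda P: P.sum(axis=1)   # placeholder objective, NOT the paper's Eq. (1)
    for _ in range(tlbo_gens):
        pop = tlbo_generation(pop, obj, rng=rng)

    # FCM on the TLBO-preprocessed records.
    centers = pop[rng.choice(len(pop), n_clusters, replace=False)]
    M = fcm_memberships(pop, centers, f)
    for _ in range(max_iter):
        centers = fcm_centers(pop, M, f)
        M_new = fcm_memberships(pop, centers, f)
        done = np.abs(M_new - M).max() <= eps   # terminate when change <= epsilon
        M = M_new
        if done:
            break
    return M.argmax(axis=1), centers
```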

For the experiment, the attributes A1-An were considered. The objective function is shown in Eq. (1). We considered both minimization and maximization, assigning the upper and lower limits of the number of attributes, respectively.

Range of variables: 1 ≤ Ai ≤ n

The first difference mean, according to Eq. (2), is calculated for A1-An. The updated values were generated over successive iterations based on Eq. (3).

For comparative study and analysis, different classification algorithms were considered for the experimentation along with our approach. The classification algorithms used are RF, k-nearest neighbor (KNN), SVM, SVM with grid search (SVM (GS)), and NB. To avoid any ambiguous inference, each experiment is repeated for 50 cycles and the average accuracy is calculated.
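The comparison loop itself is straightforward; below is a hedged scikit-learn sketch (our illustration, not the authors' code). The 50 cycles and 10-fold cross-validation come from the text; the SVM grid values are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

models = {
    "RF": RandomForestClassifier(),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    # SVM (GS): grid-searched SVM; this grid is an assumption.
    "SVM (GS)": GridSearchCV(SVC(), {"C": [0.1, 1, 10],
                                     "gamma": ["scale", 0.01, 0.1]}),
    "NB": GaussianNB(),
}

def average_accuracy(X, y, cycles=50):
    """10-fold CV accuracy averaged over repeated cycles, per classifier."""
    out = {}
    for name, model in models.items():
        accs = [cross_val_score(model, X, y,
                                cv=KFold(10, shuffle=True, random_state=c)).mean()
                for c in range(cycles)]
        out[name] = float(np.mean(accs))
    return out
```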

    4 Results

Five different benchmark datasets have been considered for experimentation: the BCW dataset (D1), Pima Indians Diabetes (D2), Heart-Statlog (D3), Hepatitis (D4), and Cleveland Heart Disease (D5).

This section discusses the outcomes of TLBO-KM/FCM and the machine learning algorithms in different cases. First, the TLBO-KM/FCM results were considered for different cases on the D1 dataset. For the comparison of the results, the positive predictive value (PPV) was considered first (Eq. (4)):

PPV = TP / (TP + FP)    (4)

In the case of k-means, foggy and random centroids have been used for initialization. The Euclidean distance is used to find the distance between the cluster centers and the data points. The simple and variance split methods were applied for data splitting. The cluster centers were calculated based on the mean and variance. Tab. 2 lists the cases considered for comparison; the D1 dataset was used for the cases shown in Tab. 2.

    Table 2: Case comparison

For Case 1, the results were obtained using the foggy centroid, Euclidean distance, the simple-split method, epochs, and variations in the design variables, with 10-fold cross-validation in a complete cycle. The simple-split method is used to cluster more elements. The epoch determines the stopping condition of the iteration in the process of identifying the cluster center. Fig. 2 shows the corresponding results for a population size of 250. In this case, both TLBO minimization and maximization were considered. The design variables are the parameters of the objective function, and the results are shown on a scale of 0-1. Fig. 2 shows that the highest, average, and lowest PPV values are 89.0%, 84.0%, and 81.0%, respectively. A better outcome could be obtained with variations in the variance and the same centroid. Fig. 3 shows the results based on TLBO-KM for different design variables. When the k-means algorithm fails and TLBO-KM is applied to the non-clustered data, the results of minimization and maximization for the five different design-variable selections are (96.4% and 91.3%), (97.0% and 90.8%), (95.4% and 91.3%), (95.7% and 90.5%), and (95.0% and 91.0%), respectively, with average clustering accuracies of 91.2% and 88.4%, respectively. The design variables reflect the consideration of different attributes.

    Figure 2: K-means results based on different attributes with ten iterations in five cycles

Figure 3: The results based on Case 1 with different design variables [R1 = TLBO-KM with design variable-2, R2 = TLBO-KM with design variable-3, R3 = TLBO-KM with design variable-4, R4 = TLBO-KM with design variable-5 and R5 = TLBO-KM with design variable-6]

The parameters remain the same for Cases 2-5; however, the whole population was considered here. The k-means results for Cases 2, 4, and 5 show highest and lowest clustering accuracies of (91.0% and 86.0%), (92.0% and 89.0%), and (94.0% and 90.0%), with average accuracies of (89.6% and 85.4%), (90.6% and 88.3%), and (91.4% and 89.7%), respectively. The non-clustered records were then processed with TLBO-KM. The highest and lowest minimization clustering accuracies are (98.0%, 92.0%), (100%, 97.0%), and (99.0%, 94.0%), while those for maximization are (97.0%, 94.0%), (98.0%, 92.0%), and (95.0%, 92.0%) for Cases 2, 4, and 5, respectively. The average clustering accuracies for minimization and maximization are (95.6%, 91.4%), (98.8%, 96.4%), and (98.8%, 92.7%), respectively. For Case 3, as the initialization remained the same in all iterations, no variations were found in the means. Although the results may vary with TLBO-KM, the variations are caused by the random initialization only; therefore, the specific results of Case 3 are not presented. These results are shown in Fig. 4.

For Case 6, the same parameters were used with a completely random selection of attributes and variation in the TLBO knowledge transfer (interaction cycle). Fig. 5 shows the corresponding result, with the highest, average, and lowest clustering accuracies of 91.0%, 87.4%, and 85.0%, respectively. The non-clustered records produced by this process were then processed with TLBO-KM, as shown in Fig. 6. When TLBO-KM is applied to the non-clustered records where the k-means algorithm alone has failed, the results for minimization and maximization with five different random parameter selections are (97.7% and 94.0%), (98.8% and 94.3%), (97.1% and 95.0%), (98.9% and 93.7%), and (97.8% and 96.3%), respectively, with average clustering accuracies of 97.6% and 93.2%.

Figure 4: The results based on Cases 2, 4 and 5 [R1 = Case 2 with k-means, R2 = Case 4 with k-means, R3 = Case 5 with k-means, R4 = Case 2 with k-means (average), R5 = Case 4 with k-means (average), R6 = Case 5 with k-means (average), R7 = Case 2 with TLBO-KM, R8 = Case 4 with TLBO-KM, R9 = Case 5 with TLBO-KM, R10 = Case 2 with TLBO-KM (average), R11 = Case 4 with TLBO-KM (average) and R12 = Case 5 with TLBO-KM (average)]

    Figure 5: K-means results based on design variables with ten iterations in five cycles

    Figure 6: The results based on case 6 with different parameters [R1 = TLBO-KM with random parameters 1, R2 = TLBO-KM with random parameters 2, R3 = TLBO-KM with random parameters 3, R4 = TLBO-KM with random parameters 4 and R5 = TLBO-KM with random parameters 5]

Thereafter, the FCM algorithm was applied. The experimentation was performed on the basis of variation in the fuzziness value and the termination criterion. In our approach, the fuzziness variations considered range from 2 to 5, and the epsilon value lies between 2 × 10⁻⁵ and 6 × 10⁻⁵. Tab. 3 shows the results of FCM based on different epsilon values and fuzziness factors. The non-clustered data from FCM were then processed with the TLBO algorithm. The results produced by TLBO-FCM are shown in Figs. 7 and 8 for minimization and maximization, respectively. The highest and lowest results for minimization and maximization with different epsilon and fuzziness factors are (82.1% and 75.9%) and (78.6% and 70.6%), respectively. The average clustering accuracies for minimization and maximization are 80.4% and 73.3%, respectively. The accuracy obtained collectively (non-clustered and clustered data obtained from k-means) is approximately 99.4% in the case of TLBO-KM (Fig. 9). The accuracy obtained collectively (non-clustered and clustered data obtained from FCM) is approximately 98.6% in the case of TLBO-FCM (Fig. 9). Here, CM indicates the computational measures and TCFV the termination criterion with fuzziness value.
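The sweep over these two parameters is mechanical; assuming the `tlbo_fcm` sketch from Section 3 and some label-agreement measure `score` (e.g., clustering accuracy), it could be written as:

```python
import numpy as np

def sweep(X, y_true, score):
    """Grid over the stated ranges: fuzziness 2-5, epsilon 2e-5 to 6e-5.
    The five epsilon samples here are illustrative, not the paper's exact grid."""
    results = {}
    for f in (2, 3, 4, 5):
        for eps in np.linspace(2e-5, 6e-5, 5):
            labels, _ = tlbo_fcm(X, f=f, eps=eps)
            results[(f, eps)] = score(labels, y_true)
    return results
```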

    Table 3: FCM results and the notation used for Figs.7 and 8

Figure 7: TLBO-FCM results on non-clustered records, based on minimization

Figure 8: TLBO-FCM results on non-clustered records, based on maximization

    Figure 9: Overall comparisons based on different computational measures

Mean, standard deviation (SD), and the standard error of the mean (SEM) were considered for the variability variations across the complete population. The mean is the average of the weighted instances divided by the complete number. SD and SEM have been used to present the data characteristics: SD shows the accurate dispersion of the individual values, and SEM is used for statistical inference. The variance has also been discussed to check the suitability of the objective function. The mean (x̄), SD (σ), and SEM (σx̄) can be calculated as follows (Eqs. (5)-(7)):

x̄ = (Σᵢ wᵢxᵢ) / n;  σ = √( Σᵢ (xᵢ − x̄)² / n );  σx̄ = σ / √s

Here, wᵢ is the weight instance, n is the complete number, x represents the data point, and s is the sample population size.
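These per-attribute statistics reduce to a few NumPy calls; the following small sketch (ours) computes the quantities plotted in Figs. 10-13:

```python
import numpy as np

def attribute_stats(X):
    """Per-attribute mean, standard deviation, standard error, and variance.
    X: (n_samples, n_attributes) array, e.g., the A1..A9 columns of BCW."""
    mean = X.mean(axis=0)
    sd = X.std(axis=0)                 # population SD
    sem = sd / np.sqrt(X.shape[0])     # standard error of the mean
    var = X.var(axis=0)
    return mean, sd, sem, var

# Attributes can then be ranked by variance to find dominating features,
# e.g., order = np.argsort(var)[::-1] yields A7, A3, A4, ... for the D1 data.
```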

Fig. 10 shows the mean and standard deviation obtained for the individual attributes, Fig. 11 the average mean and standard deviation, Fig. 12 the variance, and Fig. 13 the average variance. From the results on the D1 dataset, the highest variance was observed for attribute A7. This indicates that the attribute ranking, which may dominate the feature selection, is A7, A3, A4, A9, and A2; the most dominating attributes are A7, A3, and A4. The variance analysis clearly indicates a few features with greater predictive value. An overfitting problem may arise in the case of high variance and low bias, and there is a chance that the model may predict differently; in our case, however, this is negligible, as several repetitions were considered along with average values. The classification performances were analyzed according to the following metrics.

    Figure 10: Mean and standard deviation obtained for the individual attributes

    Figure 11: Average mean and standard deviation obtained for the individual attributes

    Figure 12: Variance obtained for the individual attributes

    Figure 13: Average variance obtained for the individual attributes

Accuracy: the rate of correctly predicted outcomes over the total outcomes. It is shown in Eq. (8):

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (8)

where TP is the true positive count, TN the true negative count, FP the false positive count, and FN the false negative count.

Sensitivity: the rate of outcomes predicted positive among all actual positive ('yes') outcomes. It is shown in Eq. (9):

Sensitivity = TP / (TP + FN)    (9)

Specificity: the rate of outcomes predicted negative among all actual negative ('no') outcomes. It is shown in Eq. (10):

Specificity = TN / (TN + FP)    (10)
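A small sketch (ours) deriving the three metrics of Eqs. (8)-(10) from a binary confusion matrix:

```python
from sklearn.metrics import confusion_matrix

def sens_spec_acc(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from a binary confusion matrix."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)                   # Eq. (9)
    specificity = tn / (tn + fp)                   # Eq. (10)
    accuracy = (tp + tn) / (tp + tn + fp + fn)     # Eq. (8)
    return sensitivity, specificity, accuracy
```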


For comparative study and analysis, different classification algorithms were considered for the experimentation along with our approach. The algorithms used were RF, KNN, SVM, SVM (GS), and NB. To avoid any ambiguous inference, each experiment was repeated for 50 cycles and the average accuracy calculated. Fig. 14 shows the sensitivity analysis of TLBO-KM/FCM on different test data with six classification algorithms, Fig. 15 the corresponding specificity analysis, and Fig. 16 the comparative analysis of TLBO-KM/FCM accuracy with different classification algorithms. Evaluating the D1, D2, D3, D4, and D5 feature sets, TLBO-KM/FCM and SVM (GS) outperformed the other classifiers in terms of sensitivity, specificity, and accuracy. TLBO-KM/FCM attained the highest average sensitivity (98.7%), highest average specificity (98.4%), and highest average accuracy (99.4%) for 10-fold cross-validation with different test data. Fig. 17 shows the accuracy analysis of TLBO-KM/FCM with different datasets. Splitting ratios of 60-80% for training and testing data were considered in order to validate the results under different variations.

    Figure 14: Sensitivity analysis of TLBO-KM/FCM on different test data with six classification algorithms

    Figure 15: Specificity analysis of TLBO-KM/FCM on different test data with six classification algorithms

    Figure 16: Comparative analysis of TLBO-KM/FCM accuracy with different classification algorithms

    Figure 17: Accuracy analysis of TLBO-KM/FCM with different datasets

    5 Discussion

In this study, k-means, FCM, TLBO-KM/FCM, and machine learning algorithms have been applied to the five benchmark datasets for achieving better performance in terms of sensitivity, specificity, and accuracy. The key processes and findings are listed below.

(1) TLBO was used for the data preprocessing, and TLBO-KM/FCM outperforms in all cases.

(2) In Case 1 (BCW dataset), only k-means was first applied to the random but complete and unique data. The clustering accuracy obtained here is approximately 90%. Thereafter, TLBO was applied to the left-over data that k-means was unable to cluster. The results obtained after five cycles of TLBO were then re-applied to k-means, and average clustering accuracies of approximately 97% and 92% were obtained for minimization and maximization, respectively. This clearly shows that most of the non-clustered data are classified after applying TLBO.

(3) In Cases 2-5 (BCW dataset), instead of a random selection, the whole population was considered. Case 2 includes the variations in the TLBO design variables and the foggy centroid, and Case 3 additionally includes variations in the random centroid. Case 4 includes variations in different epochs, and Case 5 variations in the variance with the same centroid. The clustering accuracies obtained by k-means were approximately 91%, 92%, and 94% for Cases 2, 4, and 5, respectively. The non-clustered data produced by this process were then processed with TLBO-KM, and the corresponding average accuracies of approximately 98%, 97%, and 99% for minimization and 93%, 92%, and 93% for maximization were obtained.

(4) In Case 3 (BCW dataset), no variation was detected, as the initialization remains the same in all iterations. The results may vary with TLBO; however, the variation caused by random initialization is already covered in the other cases.

(5) In Case 6 (BCW dataset), the whole population with a completely random selection of attributes, with variations in the TLBO knowledge transfer (interaction cycle), was considered. The clustering accuracy obtained is approximately 91% in the case of k-means. TLBO-KM applied to the non-clustered data achieves average clustering accuracies of approximately 99% and 98% for minimization and maximization, respectively. The results show that TLBO-KM performs better in comparison to k-means alone.

(6) In the case of FCM, the clustering accuracies obtained were approximately 95%. TLBO-FCM with different epsilon values and fuzziness factors achieves average clustering accuracies of approximately 97% and 98%, respectively. The results show that TLBO-FCM performs better in comparison to FCM alone.

(7) The combined average accuracy obtained collectively is approximately 99.4% in the case of TLBO-KM and 98.6% in the case of TLBO-FCM.

(8) Evaluating the different feature sets, TLBO-KM/FCM and SVM (GS) clearly outperformed all other classifiers in terms of sensitivity, specificity, and accuracy. TLBO-KM/FCM attained the highest average sensitivity (98.7%), highest average specificity (98.4%), and highest average accuracy (99.4%) for 10-fold cross-validation with different test data.

    Replications and Future Directions

The experimental framework has been developed in the NetBeans 7.2 IDE (Apache Software Foundation, Wakefield, USA), with Java Development Kit (JDK) version 1.7 (Oracle Corporation, California, USA), on an Intel® Core™ i5-7200U CPU running at 2.8 GHz with 4 GB RAM. The system is a 64-bit operating system with an x64-based processor. This experiment can be replicated and enhanced in the future by changing the centroid calculation and validating different distance measures. Different combinations of data mining, classification, and evolutionary algorithms may be used; how these algorithms can be used together, and which techniques will be more effective in combined form, are points that warrant future research. This work can also be extended to datasets with different arity and attributes.

    6 Conclusion

In this study, TLBO-KM/FCM and machine learning algorithms were used for the clustering and classification of medical datasets. In order to compare their efficiency, they were applied separately to the same dataset. Various computational measures of integrative clustering were taken into account using multivariate parameters such as foggy centroid, random centroid, epoch variations, design variables, fuzziness value, termination criteria, and interaction cycle. For the explanation and discussion, the BCW dataset was considered first. TLBO-KM was able to cluster 99.4% and 97.4% of the non-clustered data (produced by applying k-means alone) in the cases of minimization and maximization, respectively. Similarly, TLBO-FCM was able to cluster 98.6% and 96.4% of the non-clustered data (produced by applying FCM alone) in the cases of minimization and maximization, respectively. The combined average accuracy obtained collectively is approximately 99.4% in the case of TLBO-KM and 98.6% in the case of TLBO-FCM. Moreover, the variations between the results of minimization and maximization were small; thus, it can be inferred that our approach produces good results for either the minimization or the maximization of the objective function, with the minimization cases producing somewhat better results. The approach is also useful in determining the dominating attributes. TLBO-KM/FCM and SVM (GS) clearly outperformed all other classifiers in terms of sensitivity, specificity, and accuracy, showing the highest average sensitivity (98.7%), highest average specificity (98.4%), and highest average accuracy (99.4%) for 10-fold cross-validation. The present study suggests that TLBO-KM/FCM with different computational measures and multivariate parameters, across different iterations and multiple TLBO preprocessing cycles, can efficiently handle medical data.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
