
    A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning

Computers, Materials & Continua, 2022, Issue 9

Soban Arshad, Khalid Iqbal*, Sheneela Naz, Sadaf Yasmin and Zobia Rehman

1 Department of Computer Science, COMSATS University Islamabad, Attock Campus, Pakistan

2 Department of Computer Science, COMSATS University Islamabad, Islamabad Campus, Pakistan

Abstract: The telecom industry relies on churn prediction models to retain its customers. These prediction models help in the precise and timely recognition of future switching by groups of customers to other service providers. Retention not only contributes to an organization's profit, but is also important for upholding a position in a competitive market. In the past, numerous churn prediction models have been proposed, but current models have a number of flaws that prevent them from being used on real-world large-scale telecom datasets. These schemes fail to incorporate frequently changing requirements. Data sparsity, noisy data, and the imbalanced nature of the dataset are the other main challenges for an accurate prediction. In this paper, we propose a hybrid model, named "A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning (HCPRs)", that uses the Synthetic Minority Over-Sampling Technique (SMOTE) and Particle Swarm Optimization (PSO) to address the issues of imbalanced class data and feature selection. Data cleaning and normalization were performed on the big Orange dataset, which contains 15000 features along with 50000 entities. Substantial experiments are performed to test and validate the model on Random Forest (RF), Logistic Regression (LR), Naïve Bayes (NB), and XGBoost. Results show that the proposed model, when used with the XGBoost classifier, achieves a greater Area Under the Curve (AUC) of 98% compared with the other methods.

Keywords: Telecom churn prediction; data sparsity; class imbalance; big data; particle swarm optimization

    1 Introduction

Data volume has grown significantly in the recent decade due to advancements in information technology. Concurrently, enormous progress has been made in machine learning algorithms that process this data and discover hidden patterns independently. Machine learning techniques learn from data and have the potential to automate analytical model building. Machine learning is divided into three categories: 1) supervised, 2) semi-supervised, and 3) unsupervised learning. Supervised learning is used to discover hidden patterns from labeled datasets, whereas unsupervised learning discovers hidden patterns from unlabeled data; unsupervised learning is therefore beneficial for finding structure and useful insights in an unknown dataset. Semi-supervised learning falls in between unsupervised and supervised learning [1].

Customers are the most important source of profit in any industry. The telecom industry therefore fears customer churn driven by changing interests and the demand for new applications and services. Customer churn is defined as the movement of people from one company to another for various reasons. An organization's major concern is to retain its unsatisfied customers in order to survive in a highly competitive environment. To reduce customer churn, a company should be able to predict customer behavior correctly. For this, a churn prediction model is used to estimate which customers may switch from a given service provider in the near future [2]. Along with the telecom industry, churn prediction has also been used in subscription-based services, e-commerce, and industrial areas [3,4]. Nowadays, the mobile phone is integral to everyday life, which increases competition in the telecom sector, as it is less costly for a customer to switch services. The behavior of a churner typically depends on multiple attributes. From a company's perspective, the expense of attracting new customers is 6 to 7 times higher than that of retaining unsatisfied ones [5], and companies are well aware that losing customers decreases profits. Therefore, Customer Relationship Management (CRM) needs an effective model to predict future churners and automate the retention mechanism. Many operators also use simple pattern matching programs to identify potential churners; however, these programs require regular maintenance and fail to incorporate changing requirements. Machine learning algorithms, by contrast, have the potential to continually learn from new data and adapt as new patterns emerge.

In the past decade, significant research has been performed to predict customer churn in telecom companies, and most approaches used machine learning for churn prediction [6]. One study combined two different ML models for customer churn prediction: Back Propagation Artificial Neural Networks and Self Organizing Maps [7]. Various classification algorithms were used in [8], and their results were compared to discover the most accurate algorithm for predicting customer churn in businesses. In other research, the authors created a custom classification model, called the locally linear model tree (LOLIMOT), using a combination of Artificial Neural Networks, fuzzy modeling, and tree-based models [9]. The results show that the LOLIMOT model achieved accurate classification compared to other classification algorithms, even on extremely unbalanced datasets. Similarly, researchers have suggested a large number of classifier-based techniques, such as the Enhanced Minority Oversampling Technique (EMOTE), Support Vector Machine (SVM), Support Vector Data Description (SVDD), NetLogo (an agent-based model), fuzzy classifiers, and Random Forest (RF) (e.g., [10-18]). There are also hybrid techniques for churn prediction that merge two classifiers, like K-Means with Decision Tree (DT), DT with Logistic Regression (LR), and K-Means with a classic rule inductive technique (FOIL) [19-21]. In addition, PSO-based techniques were proposed for feature selection in [22-25]. Nowadays, customer churn is a major issue for every organization, and the task of customer churn prediction has become more complicated. Therefore, there is a need to develop new and effective techniques that accurately predict customer churn and help companies allocate their resources more effectively.

In this paper, our main concern is churn prediction for large datasets. We collected data from the international KDD Cup 2009 competition (provided by Orange, Inc.) [26]. We propose a hybrid system, named HCPRs, based on PSO feature selection and classification via different classifiers, with the aim of achieving better performance, and we address the class imbalance problem using SMOTE. Furthermore, we reduce the overall computational cost through feature selection. This method targets the problem of predicting customer churn and retention analysis in the telecom industry. The proposed system may help telecom companies retain existing customers while attracting new ones.

The main contributions of this paper are as follows:

• We propose a hybrid model, named A Hybrid System for Customer Churn Prediction and Retention Analysis via Supervised Learning (HCPRs), to address the issues of imbalanced class data and feature selection. It uses a PSO-based feature selection model, which makes churn identification quicker.

• We perform stratified five-fold cross-validation for better testing of the data and performance evaluation.

• To demonstrate the effectiveness of HCPRs, we evaluate the proposed model against prominent techniques. Experimental results show improved performance, with the XGBoost classifier outperforming the other classifiers.

The rest of the paper is organized as follows. In Section 2, we present previous work along with statistics related to past work. In Section 3, we describe the proposed methodology, including data preprocessing, PSO-based feature selection, cross-validation, and the classifiers used for prediction and evaluation. In Section 4, we describe the widely known performance metrics used for the churn prediction problem. In Section 5, experiments and results are discussed. Finally, Section 6 concludes the paper and outlines future work.

    2 Related Work

Churn has long been a challenge for telecom companies. Traditionally, experts would manually perform churn analysis and make predictions accordingly. However, with the ever-increasing number of mobile subscribers and the volume of cellular data, manual prediction is no longer feasible. Hence, the research community has been attracted to exploring classifier-based and PSO-based models for churn prediction.

    2.1 Classifier Based Churn Prediction

In previous research, supervised learning approaches such as Naive Bayes, Logistic Regression, Support Vector Machines, Decision Tree, and Random Forest were used to identify churn (e.g., [27-29]). Awang et al. [30] presented a regression-based churn prediction model that utilizes customer feature data for analysis and churn identification. Vijaya et al. [31] proposed a predictive model for customer churn using machine learning techniques like KNN, Random Forest, and XGBoost, and compared the accuracy of several machine learning algorithms to determine which achieves higher accuracy. Another study [16] proposed a fuzzy-based churn prediction model, compared the accuracy of several classifiers with the fuzzy model, and showed that fuzzy classifiers are more accurate than the others in predicting customer churn. De Bock et al. [32] designed the GAMensplus classification algorithm for interpretability and strong classification. Karanovic et al. [5] proposed a questionnaire-based data collection technique processed with the Enhanced Minority Oversampling Technique (EMOTE) classifier. Maldonado et al. [33] proposed relational and non-relational learner classifiers that handle data sparsity through a social network analytics method.

    2.2 Hybrid Churn Prediction

Early on, many researchers showed that single-model churn prediction techniques do not produce satisfactory results; therefore, researchers switched to hybrid models [18-20]. The basic principle of a hybrid model is to combine the features of two or more techniques. One study combined two different ML models, Back Propagation Artificial Neural Networks and Self-Organizing Maps, for customer churn prediction [7]. A data filtration process was performed using a hybrid model combining two neural networks, after which data classification was performed using Self-Organizing Maps (SOM). The proposed hybrid model was evaluated on two fuzzy testing sets and one general testing set, and the evaluation results showed that it outperformed a single neural network baseline model in prediction and classification accuracy. In other research, the authors created a custom classification model using a combination of Artificial Neural Networks, fuzzy modeling, and tree-based models [9].

    2.3 PSO-Based Churn Prediction

PSO-based techniques have been proposed to solve the customer churn problem (e.g., [21,24,28,31]). Huang et al. [21] proposed a technique for churn prediction using particle swarm optimization (PSO), along with three variants: 1) PSO incorporating feature selection, 2) PSO embedded with simulated annealing, and 3) PSO with a combination of both feature selection and simulated annealing. It was observed that the proposed PSO and its variants give better results in imbalanced scenarios. Guyon et al. [34] designed a model for efficient churn prediction using data mining techniques. In the preprocessing stage, the k-means algorithm is used; afterwards, attributes are selected by employing the minimum Redundancy Maximum Relevance (mRMR) approach. This technique uses a Support Vector Machine with Particle Swarm Optimization (SVM with PSO) to perform customer churn separation and prediction. The experiments show that the proposed model attains better performance than existing models in terms of accuracy, true-positive rate, false-positive rate, and processing time. Vijaya et al. [31] handled imbalanced data distribution through feature selection using PSO, applying Principal Component Analysis (PCA), Fisher's ratio, F-score, and minimum Redundancy Maximum Relevance (mRMR) techniques for feature selection. Moreover, Random Forest (RF) and K Nearest Neighbor (KNN) classifiers were utilized to evaluate the performance.

    3 Proposed Methodology

    In this section, we present the overall architecture of the proposed model along with its major component descriptions.

The performance of the proposed model is evaluated on a telecom churn prediction task, formulated as follows. Telecom providers T = {t1, t2, t3, ..., tk} compete with each other, which may result in churn among customers C = {c1, c2, c3, ..., cn}. Each telecom provider ti ∈ T requires an identification system for churners c ∈ C having a high possibility to churn. Each customer is described by multiple features F = {f1, f2, f3, ..., fj} indicating either churn or non-churn, along with a class label L. The feature selection process is performed so that the prediction result is obtained by considering only the valuable features f ∈ F.

    3.1 System Overview

The overall workflow diagram of the proposed system is illustrated in Fig. 1. We explain the components of the proposed churn prediction model in a step-by-step manner. In the first step, data pre-processing is performed, comprising data cleaning, removal of imbalanced data features, and data normalization; the Synthetic Minority Oversampling Technique (SMOTE) is used to balance the imbalanced telecom data and thereby improve churn prediction performance. In the second step, important features are extracted from the data using the particle swarm optimization (PSO) mechanism. In the third step, different classification algorithms are employed to categorize customers into churn and non-churn customers. The classification algorithms consist of Logistic Regression (LR), Naïve Bayes (NB), Random Forest (RF), and XGBoost.

Figure 1: Overview of the proposed churn prediction model
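To make the workflow concrete, the following is a minimal Python sketch of the pipeline. It assumes a feature matrix X_raw and label vector y are already loaded; pso_feature_selection is an illustrative stand-in (a sketch is given in Section 3.4), and the library choices (scikit-learn, imbalanced-learn, xgboost) are ours, not the authors' published code.

```python
# Minimal sketch of the HCPRs workflow, assuming X_raw (features) and y (labels)
# are already loaded; pso_feature_selection is defined in the Section 3.4 sketch.
from sklearn.preprocessing import MinMaxScaler
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier

X = MinMaxScaler().fit_transform(X_raw)            # step 1: clean and normalize
X, y = SMOTE(random_state=42).fit_resample(X, y)   # step 1: balance churn classes
mask = pso_feature_selection(X, y)                 # step 2: PSO feature subset
clf = XGBClassifier().fit(X[:, mask], y)           # step 3: train churn classifier
```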

    3.2 Dataset

A publicly available Orange Telecom Dataset (OTD) is provided by the French telecom company Orange [35]. The dataset consists of churners and non-churners and contains a large amount of information related to customers and mobile network services. This information was used in the KDD Cup held for customer relationship prediction [36]. The dataset consists of 15000 variables and 50000 instances; it is further divided into five chunks (C1, C2, C3, C4, C5) that contain an equal number of samples (10,000 each). Out of the 50000 samples, 3672 are churners and 46328 are non-churners, giving an approximate churner to non-churner ratio of 7:93. This causes a class imbalance problem in the dataset; Fig. 2 shows a graphical representation of churners vs. non-churners. The names of the features are not published, to respect customer privacy. OTD is a heterogeneous dataset that contains noisy data with variations in measurement scale, features with null or missing values, and data sparsity. Hence, data pre-processing is a requirement for this kind of dataset.

Figure 2: Churners vs. non-churners in OTD

    3.3 Data Preprocessing

The dataset contains noisy features, sparsity, and missing values; approximately 19.70% of the data have missing values. The main purpose of the data pre-processing step is data cleaning for missing values and noisy data, together with data transformation. For data normalization, there are several methods, such as Z-Score, Decimal Scaling, and Min-Max. To resolve the data sparsity problem, we used the Min-Max normalization method, which performs a linear transformation on the data. In this method, we normalize the data into the predefined interval [0, 1].
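A minimal sketch of Min-Max normalization, assuming a NumPy feature matrix X; the guard against constant-valued columns is our addition:

```python
import numpy as np

def min_max_normalize(X):
    """Min-Max normalization: linearly rescale each feature into [0, 1]."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # avoid divide-by-zero
    return (X - x_min) / span
```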

    Class Imbalance

Class imbalance refers to a distribution of the dataset in which one class has a very large number of instances compared to the other class. The class with few samples is the minority class, and the class with relatively more instances is the majority class. The imbalance between the two classes is represented by the "imbalance ratio", defined as the ratio between the number of samples of the majority class and that of the minority class. In forecasting customer churn, the number of non-churners is relatively high compared to the number of churners. Several techniques have been proposed to solve the problems associated with an unbalanced dataset. These techniques can be classified into four categories [36]:

• Data level approaches,

• Algorithm level approaches,

• Cost-sensitive learning approaches, and

• Classifier ensemble techniques

Data level oversampling techniques reduce the imbalance ratio of a skewed dataset by duplicating minority instances. The most commonly used oversampling technique is the Synthetic Minority Oversampling Technique (SMOTE) [37]. SMOTE introduces additional synthetic samples into the minority class instead of directly duplicating instances.

Using synthetic samples helps create larger and less specific decision regions. The algorithm first finds the k nearest neighbors of each minority class sample, using Euclidean distance as the distance measure. Synthetic examples are generated along the line segments connecting the original minority class sample to its k nearest neighbors. The value of k depends on the number of artificial instances that need to be added. The steps for generating synthetic samples are as follows [36] (a minimal sketch follows the list):

1. Generate a random number between 0 and 1.

2. Compute the difference between the feature vector of the minority class sample and that of its nearest neighbor.

3. Multiply this difference by the random number generated in step 1.

4. Add the result of this multiplication to the feature vector of the minority class sample.

5. The resulting feature vector is the newly generated synthetic sample.

In this paper, we considered the Orange dataset, in which the number of churners and the number of non-churners differ greatly. The counts of churners and non-churners are 3672 (7.34%) and 46328 (92.65%), respectively, giving a ratio of about 1:13 between churners and non-churners. In customer churn prediction, the number of non-churners is relatively high with respect to the number of churners, as shown in Fig. 3.

Figure 3: Chunk-wise churners vs. non-churners in OTD

With such an unbalanced distribution of the two classes, giving the few churners the same weight in the cost function as the non-churners results in a high misclassification rate, because the classifier will be biased towards the majority class. To resolve this imbalance issue, we used the advanced oversampling technique SMOTE; the working of generic SMOTE is demonstrated in Fig. 4. Synthetic oversampling was performed rather than simple Random Under Sampling (RUS) or Random Over Sampling (ROS). We resolved the imbalance issue by making the minority class (churners) equal to the majority one (non-churners), i.e., a ratio of 1:1.
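In practice, the same 1:1 balancing can be done with the SMOTE implementation in the imbalanced-learn library; a brief usage sketch, assuming X and y are the preprocessed features and churn labels:

```python
from imblearn.over_sampling import SMOTE

# sampling_strategy=1.0 oversamples churners until the classes are balanced 1:1
sm = SMOTE(sampling_strategy=1.0, k_neighbors=5, random_state=42)
X_balanced, y_balanced = sm.fit_resample(X, y)
```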

    3.4 PSO Based Feature Selection

Choosing a subset of features from the original dataset, i.e., eliminating unnecessary features, is the fundamental principle underlying feature selection. Irrelevant features in the dataset can reduce the accuracy of classification models and force the classification algorithm to base decisions on irrelevant information. The selected subset must represent the original data faithfully while still being useful for analytical activities. The feature selection task amounts to finding an optimal solution in a generally large search space in order to ease the classification task. Therefore, it is recommended to perform feature selection before training a model. In this work, we use a PSO-based feature selection mechanism to generate the best optimal subset for each of the chunks individually. PSO is a suitable algorithm for feature selection problems for the following reasons: easy feature encoding, global search capability, reasonable computational cost, few parameters, and easy implementation [3].

Figure 4: Generic SMOTE synthetic churner instance creation

The PSO algorithm was introduced as an optimization technique for real-valued search spaces and for solving complex mathematical problems. It works on the principle of interaction, sharing information between members of a swarm. The search for the optimal solution is performed through particles, where each particle can be treated as a feasible solution to the optimization problem in the search space, and the flight behavior of the particles constitutes the search process. PSO is initialized with a group of particles, and each particle initially moves randomly. A particle i is defined by its velocity vector vi and its position vector xi. Each particle's velocity and position are dynamically updated in order to find the best set of features until a stopping criterion is met; the stopping criterion can be a maximum number of iterations or a sufficiently good fitness value.

In PSO, each particle updates its velocity VE and position PO with the following equations:

VE_i(t+1) = ξ·VE_i(t) + c1·r1·(pbest(i,t) − PO_i(t)) + c2·r2·(gbest(t) − PO_i(t))    (1)

PO_i(t+1) = PO_i(t) + VE_i(t+1)    (2)

where i denotes the index of a particle in the swarm, VE is the velocity, and ξ is the inertia weighting factor, which is dynamically reduced; r1 and r2 are random variables generated from the uniform distribution on the interval [0, 1]; the parameters c1 and c2 denote acceleration coefficients; pbest(i,t) is particle i's historically best position up to iteration t; and gbest(t) is the global best particle position in the swarm (the one giving the best fitness value), defined as:

gbest(t) = arg max f(p) over p ∈ {pbest(1,t), pbest(2,t), ..., pbest(Np,t)}    (3)

where Np is the total number of particles, f is the fitness function, p is a position, and t is the current iteration number.

The first part of Eq. (1) (i.e., ξ·VE_i(t)) is known as inertia and represents the previous velocity; the second part (i.e., c1·r1·(pbest(i,t) − PO_i(t))) is the cognitive component, which encourages each particle to move towards its own best position; and the third part (i.e., c2·r2·(gbest(t) − PO_i(t))) is the cooperation component, which represents the collaborative effect of the swarm [25].
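A minimal binary-PSO feature selection sketch follows. The paper does not publish its fitness function or parameter values, so the cross-validated accuracy fitness, the sigmoid transfer used to binarize positions, and the constants below are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def pso_feature_selection(X, y, n_particles=20, n_iter=30,
                          xi=0.9, c1=2.0, c2=2.0, seed=0):
    """Binary PSO: each particle is a 0/1 mask over features;
    fitness is 3-fold cross-validated accuracy of a small Random Forest."""
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pos = (rng.random((n_particles, n_feat)) < 0.5).astype(float)
    vel = rng.uniform(-1.0, 1.0, (n_particles, n_feat))

    def fitness(mask):
        m = mask.astype(bool)
        if not m.any():
            return 0.0                            # an empty subset is useless
        clf = RandomForestClassifier(n_estimators=50, random_state=0)
        return cross_val_score(clf, X[:, m], y, cv=3).mean()

    fit = np.array([fitness(p) for p in pos])
    pbest, pbest_fit = pos.copy(), fit.copy()
    gbest = pbest[pbest_fit.argmax()].copy()      # global best mask

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, n_feat))
        vel = xi * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # Eq. (1)
        prob = 1.0 / (1.0 + np.exp(-vel))         # sigmoid transfer to [0, 1]
        pos = (rng.random((n_particles, n_feat)) < prob).astype(float)      # Eq. (2), binary form
        fit = np.array([fitness(p) for p in pos])
        better = fit > pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmax()].copy()
        xi *= 0.99                                # dynamically reduce inertia
    return gbest.astype(bool)
```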

After the feature selection stage, we obtain the meaningful, globally best selected features X = [x1, x2, x3, ..., xi] (i.e., X(f)). Hence, after performing all the above steps, we obtain a purified Orange Telecom Dataset, visualized in Fig. 5. In the purified dataset, the class imbalance issue is removed, there are no null or missing values, values are normalized into [0, 1], and the most relevant features have been selected.

Figure 5: Purified OTD

    3.5 Cross Validation

It is necessary to evaluate models in order to determine which one is more reliable. Cross-validation is one of the most used methods to assess the generalization of a predictive model and avoid overfitting. There are three categories of cross-validation: 1) leave-one-out cross-validation (LOOCV), 2) k-fold cross-validation, and 3) stratified cross-validation. This study focuses primarily on stratified k-fold cross-validation (SK).

Stratified k-fold cross-validation (SK) works in the following steps:

1) SK splits the data into k folds, ensuring that each fold is a proper representation of the original data.

• The proportion of the feature of interest in the training and test sets is the same as in the original dataset.

2) SK selects the first fold as the test set.

• Test folds are selected one by one in order; for instance, in the second iteration, the second fold is selected as the test set.

3) Step 2 is repeated k times, so that each fold serves as the test set exactly once.

In this paper, stratified k-fold cross-validation is used, an improved version of traditional k-fold cross-validation for evaluating the explorative prediction power of models. Instead of randomly partitioning the dataset, stratified sampling is performed so that the class proportions in each individual subset reflect the proportions in the learning set; SK thereby preserves the imbalanced class distribution in each fold, as shown in Fig. 6. For example, suppose the learning set contains n = 100 cases of two classes, positive and negative, with n+ = 80 and n− = 20. If random sampling is done without stratification, some validation sets may contain only positive cases (or only negative cases). With stratification, however, each validation set of a 5-fold cross-validation is guaranteed to contain about sixteen positive cases and four negative cases, thereby reflecting the class ratio in the learning set.

Figure 6: Visualization of stratified k-fold validation when k = 5
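As a brief illustration, stratified 5-fold splitting of this kind is available in scikit-learn; X and y here are assumed to be the preprocessed features and churn labels:

```python
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, test_idx) in enumerate(skf.split(X, y)):
    # Each test fold preserves the churner/non-churner ratio of the full dataset.
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
```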

    3.6 Prediction and Evaluation

In this section, we use multiple classifiers, namely Naïve Bayes (NB), Logistic Regression (LR), Random Forest (RF), and XGBoost, to obtain accurate and efficient prediction of customer churn.

    3.6.1 Na?ve Bayes

Naive Bayes (NB) [38,39] is a classification algorithm based on Bayes' theorem. It determines the probabilities of the classes for every instance and feature [x1, x2, x3, ..., xi] to derive a conditional probability for the relationship between the feature values and the class. The model contains two types of probabilities that can be calculated directly from the training data: (i) the probability of each class and (ii) the conditional probability of each x value given each class. Bayes' theorem is given in Eq. (4):

P(yi | x1, x2, x3, ..., xi) = P(x1, x2, x3, ..., xi | yi) · P(yi) / P(x1, x2, x3, ..., xi)    (4)

where yi is the target class and x1, x2, x3, ..., xi is the data; P(yi) is the class probability (prior probability), P(x1, x2, x3, ..., xi) is the predictor probability (evidence), P(x1, x2, x3, ..., xi | yi) is the probability of the data conditioned on the hypothesis (likelihood), and P(yi | x1, x2, x3, ..., xi) is the probability of the hypothesis conditioned on the data (posterior probability).

Hence, under the naive assumption that the features are conditionally independent given the class, Bayes' theorem can also be written as

P(yi | x1, x2, x3, ..., xi) ∝ P(yi) · P(x1 | yi) · P(x2 | yi) · ... · P(xi | yi)    (5)

In this paper, the Naïve Bayes algorithm was implemented to predict whether or not a customer will churn.
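A minimal scikit-learn sketch of this step; the Gaussian variant is our assumption, as the paper does not state which Naïve Bayes variant was used:

```python
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(X_train, y_train)                          # learns P(yi) and per-feature P(xk | yi)
churn_posterior = nb.predict_proba(X_test)[:, 1]  # posterior probability of churn
```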

    3.6.2 Logistic Regression

Logistic regression (LR) is a machine learning technique for solving binary classification problems. LR takes real-valued inputs and estimates the probability of an object belonging to a class. In this paper, the regression algorithm is evaluated to classify churned and non-churned customers. The model relates the log-odds of churn to a linear combination of the inputs:

ln( Pr / (1 − Pr) ) = β0 + β1·x1 + β2·x2 + ... + βj·xj    (6)

where y is the dependent variable and x is the set of independent variables; y = 1 implies a churned customer and y = 0 implies a non-churned customer.

Pr is estimated through the following equation:

Pr = 1 / (1 + e^−(β0 + β1·x1 + ... + βj·xj))    (7)

where β denotes the coefficients to be learned and Pr is the probability of churn. If the predicted probability Pr is greater than 0.5, the output is taken as class 1 (i.e., churner); otherwise it is taken as class 0 (i.e., non-churner).
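A corresponding scikit-learn sketch, with the 0.5 threshold made explicit (the hyperparameters are illustrative, not the paper's):

```python
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(max_iter=1000)
lr.fit(X_train, y_train)                 # learns the coefficients beta
pr = lr.predict_proba(X_test)[:, 1]      # Pr(churn), per Eq. (7)
lr_pred = (pr > 0.5).astype(int)         # 1 = churner, 0 = non-churner
```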

    3.6.3 Random Forest

Random Forest (RF) [40,41] is a classification algorithm that builds many decision trees. It adds a layer of randomness and aggregates the decision trees using the "bagging" method to obtain a more precise and stable prediction. Therefore, RF performs very well compared to many other classifiers. Both classification and regression tasks can be accomplished with RF. It is robust against overfitting and very user-friendly [41].

    The random forest technique’s main idea is as follows:

1. Feature selection is accomplished at each decision tree node to purify the classified data set, with the GINI index taken as the purity measurement standard:

G(D) = Σ_{j=1}^{k} (|Dj| / |D|) · ( 1 − Σ_{i=1}^{q} pi² )    (8)

where G represents the GINI function; q represents the number of categories in sample D; pi represents the proportion of category-i samples in the total number of samples; and k represents that sample D is divided into k parts, that is, there are k data sets Dj.

When the purity improvement measured by the GINI index (Eq. (8)) reaches its maximum, the node splitting is accomplished.

2. The multiple generated decision trees establish the random forest, and a majority voting mechanism is adopted to complete the prediction. The final classification decision is shown in Eq. (9):

L(X) = arg max_y Σ_i I( li(X) = y )    (9)

where L(X) represents the combined classification algorithm, li represents the classification algorithm of the i-th decision tree, y is the target variable, and I(·) is the indicator function. Fig. 7 presents the generic working of the RF ensemble model on the purified Orange dataset with final predictions.

Figure 7: Generic overview of RF in churn prediction
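A minimal scikit-learn sketch of this ensemble; the tree count and other settings are illustrative, not values reported by the paper:

```python
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=100, criterion="gini", random_state=42)
rf.fit(X_train, y_train)          # each tree splits on GINI purity, per Eq. (8)
rf_pred = rf.predict(X_test)      # majority vote across trees, per Eq. (9)
```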

    3.6.4 XGBoost

In this work, we adopt Extreme Gradient Boosting (XGBoost), a machine learning algorithm employed for classification and regression problems. XGBoost is a decision-tree-based ensemble algorithm that uses a gradient boosting framework to boost weak learners into stronger ones. XGBoost was evaluated on the Orange Telecom Dataset; since the algorithm accepts only numerical values, it is a suitable technique for the Orange dataset. It is popular due to its speed and performance.

XGBoost uses three primary gradient boosting techniques, namely 1) regularized boosting, 2) gradient boosting, and 3) stochastic boosting, to enhance and tune the model. Moreover, it can reduce and control overfitting and decrease time consumption. A further advantage of XGBoost is that it can use multiple cores in parallel and speed up computation by combining the results. The accuracy gained with the XGBoost algorithm was better than that of all the previous methods.
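A minimal sketch using the xgboost package; the hyperparameters below illustrate the regularized, gradient, and stochastic boosting knobs and are not values reported by the paper:

```python
from xgboost import XGBClassifier

xgb = XGBClassifier(n_estimators=200,
                    learning_rate=0.1,   # gradient boosting step size
                    subsample=0.8,       # stochastic boosting: row subsampling
                    reg_lambda=1.0,      # L2 regularization on leaf weights
                    n_jobs=-1)           # use multiple cores in parallel
xgb.fit(X_train, y_train)
xgb_scores = xgb.predict_proba(X_test)[:, 1]   # churn scores for the AUC analysis
```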

    3.6.5 Multiple Linear Regression

MLR determines the combined effect when a variety of parameters are involved. For example, while predicting the behavior of churning users, multiple factors could be considered, such as cost, services, and the customer dealing that a telecom company provides. The dependent variable y is calculated by multiplying each term by its assigned weight value and adding all the results.
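Written out, this weighted sum takes the familiar linear form (the weights wi are generic symbols, not values from the paper):

y = w0 + w1·x1 + w2·x2 + ... + wn·xn

where each xi is one factor (e.g., cost, services, customer dealing), wi is its assigned weight, and w0 is the intercept.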

    4 Model Evaluation Metrics

In this study, Accuracy, Precision, Recall, F1-Measure, and Area Under the Curve (AUC) evaluation measures are used to quantify the accuracy of the proposed HCPRs churn prediction model. These well-known performance metrics are employed due to their popularity in the existing literature for evaluating the quality of classifiers used for churn prediction [42-44]. The following evaluation measures are used:

    4.1 Accuracy

Accuracy is defined as the ratio of the number of correctly classified samples to the total number of samples for a given test dataset. It is mainly used in classification problems to assess the correctness of all types of predictions. Mathematically, it is defined in Eq. (10):

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (10)

where 'TN' is True Negative, 'TP' is True Positive, 'FN' is False Negative, and 'FP' is False Positive.

    4.2 Precision

Precision is defined as the ratio of correctly classified positive samples to the total number of samples classified as positive. It describes what fraction of the predicted positives is actually positive. Mathematically, it is defined in Eq. (11):

Precision = TP / (TP + FP)    (11)

    4.3 Recall

Recall measures the ratio of correctly classified relevant instances to the total number of relevant instances. It can be expressed for the churn and non-churn classes by the following equations, respectively:

Recall(churn) = TP / (TP + FN),    Recall(non-churn) = TN / (TN + FP)    (12)

    4.4 F1-Measure

The F1 score is defined as the weighted harmonic mean of precision and recall, where the best F1 value is 1 and the worst is 0. Precision and recall contribute equally to the F1 score. Mathematically, it is defined in Eq. (13):

F1 = 2 · (Precision · Recall) / (Precision + Recall)    (13)

    4.5 Accuracy Under Curve(AUC)

We also use the standard accuracy indicator AUC (Area Under the Curve) of the ROC (Receiver Operating Characteristic) curve to evaluate the test data. An excellent model with the best performance has a higher area under the ROC curve (AUC). Mathematically, it is defined in Eq. (14):

AUC = ( Σ_{i ∈ churners} rank_i − P(P + 1)/2 ) / (P · N)    (14)

where P represents the number of true churners and N the number of true non-churners. Ranking the customers by predicted churn score, rank n is assigned to the customer with the highest probability, the next receives rank n − 1, and so on.
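All five metrics are available in scikit-learn; a brief sketch, assuming label predictions pred and churn scores scores come from a trained classifier:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

print("Accuracy :", accuracy_score(y_test, pred))    # Eq. (10)
print("Precision:", precision_score(y_test, pred))   # Eq. (11)
print("Recall   :", recall_score(y_test, pred))      # Eq. (12), churn class
print("F1       :", f1_score(y_test, pred))          # Eq. (13)
print("AUC      :", roc_auc_score(y_test, scores))   # Eq. (14); needs scores, not labels
```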

    5 Experiments and Results

The proposed HCPRs approach is validated through comprehensive experimentation carried out on the respective combinations of sampling, feature selection, and classification methodologies. In this section, a comparative analysis of HCPRs with other existing approaches is also included. The Orange Telecom Dataset (OTD), discussed in Section 3.2, is used for performance evaluation of the proposed churn prediction model. In this study, 5-fold cross-validation testing is adopted for analyzing the performance of the proposed model. The OTD dataset is further divided into five chunks (C1, C2, C3, C4, C5) that contain an equal number of samples.

    5.1 Classifiers Performance Evaluation on Split OTD

We used multiple classifiers rather than relying on a single classifier, as evaluation results vary from classifier to classifier. The classifiers considered in our study are Naïve Bayes (NB), Logistic Regression (LR), Random Forest (RF), and XGBoost. The performance metrics used to measure the efficiency of the model chunk-wise are Accuracy, Recall, Precision, and F-Measure, whose results can be visualized in Fig. 8. It can be seen that on chunk 3, Random Forest, XGBoost, and Logistic Regression show high accuracy rates of 95%, 96%, and 88%, respectively, whereas Naïve Bayes has a low accuracy rate of 74%; from these results, XGBoost outperformed the other classification algorithms on the accuracy measure. Random Forest and Logistic Regression also performed very well on precision, at 96% and 90% respectively, while XGBoost outperformed the other algorithms on precision and Naïve Bayes had the lowest precision rate of 83%. XGBoost and Random Forest give higher recall scores, i.e., 96% and 95%, compared to the Naïve Bayes and Logistic Regression classifiers. As displayed in Fig. 8, we confirm that the XGBoost and Random Forest algorithms outperformed the rest of the tested algorithms with an F1-measure value of 95%. The Logistic Regression algorithm occupied second place with an F1-measure value of 88%, while Naïve Bayes came last in the F1-measure ranking with a value of 73%.

From the experimental results, the XGBoost algorithm outperformed the other classification algorithms on most evaluation measures.

    5.2 Split OTD Area under ROC Curve Visualization with Multiple Classifiers

This section presents the ROC curves of LR, Naive Bayes, Random Forest, and XGBoost. The ROC curve is widely used as a criterion to measure a test's discriminative ability and, in general, to assess model accuracy. The area under the ROC curve (AUC) measures how well the model can distinguish between churned and non-churned groups. The AUC value lies between 0.5 and 1, and is better the closer it is to 1. Moreover, the mean AUC value across folds is also computed for each classifier. Fig. 9 shows an overall view of the ROC curves of the multiple classifiers visualized on the split Orange dataset, along with mean AUC values. Stratified 5-fold cross-validation is used on each of the chunks. After 5-fold cross-validation, the mean AUC value of the Naïve Bayes classifier on each chunk is 0.75, as shown in Fig. 9.

Figure 8: Classifier performance evaluation on split OTD

After that, we implemented a Logistic Regression classifier to obtain more accurate results. In the same environment as the previous technique, the predicted mean AUC value is 0.93 on C1, C2, and C4, and 0.94 on C3 and C5. Moreover, to predict churners more accurately, we also used an ensemble technique, Random Forest; its ROC curves showed much better results, with a mean AUC value of 0.962 on each chunk. Furthermore, we also applied a boosting technique (i.e., XGBoost) to the same split Orange dataset, whose ROC curves gave more accurate and efficient results than all the previous techniques, with a mean AUC value of 0.98 on each chunk (i.e., C1, C2, C3, C4, C5). The results show that the ensemble techniques outperformed the others on the Orange dataset. Fig. 10 presents a graphical representation of the mean AUC results reported by the classifiers used in our research.

    5.3 Performance Comparison with other Existing Approaches

Numerous approaches with different classifiers have been applied in the domain of churn prediction. The comparison is based on performance evaluation metrics such as accuracy, precision, recall, F1-measure, and ROC/AUC, on the same dataset as well as different telecom datasets. These metrics were chosen to assess the performance of the HCPRs technique. HCPRs was compared with K-MEANS-DT [45], Hybrid Firefly [26], PSO with a combination of both feature selection and simulated annealing (PSO-FSSA) [31], Weighted K-means with a classic rule inductive technique (FOIL) (WK-FOIL) [21], and combined Artificial Neural Network and Multiple Linear Regression models (ANN-MLR) [46] to measure the difference in performance levels. Tab. 1 shows the comparison of the current technique with the different approaches on the same dataset.

Figure 9: Split OTD area under ROC curve visualization with multiple classifiers

Figure 10: Multiple classifiers' mean AUROC curves

From the experimental results, the proposed HCPRs performs significantly better than the other algorithms on most evaluation measures in predicting telecom churners when evaluated on the Orange Telecom Dataset (OTD). Although it did not perform as well on precision, it outperformed the other algorithms on accuracy, recall, F1-score, and ROC/AUC. In addition, the ROC curve of our proposed model shows excellent predictive performance.

Table 1: Comparative analysis on the Orange Telecom Dataset (OTD)

    6 Conclusion

A hybrid PSO-based churn prediction model is presented in this paper. We tested our model on the Orange telecom dataset. In preprocessing, data cleaning and normalization were performed, and the SMOTE technique was used to remove the class imbalance in the data. After that, important features were extracted from the data with PSO. Furthermore, Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), and XGBoost were used to categorize customers into two categories, i.e., churn and non-churn customers. The results show that using a stratified 5-fold cross-validation procedure improves the performance of our prediction model. Naive Bayes gives the least accurate AUC result of 0.75, compared with Logistic Regression and Random Forest at 0.934 and 0.962 respectively, while XGBoost achieves the best mean AUC of 0.98.

For future work, we plan to automate the retention mechanism based on these prediction methods, which is nowadays a necessary requirement for a telecom company. Furthermore, we intend to perform experiments with an increased number of folds, up to 10, to obtain more accurate results.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
