
    Key-Attributes-Based Ensemble Classifier for Customer Churn Prediction

2018-04-08

    Yu Qian, Liang-Qiang Li, Jian-Rong Ran, and Pei-Ji Shao

1. Introduction

Data mining has become increasingly important in management activities, especially in the support of decision making, much of which can be attributed to the task of classification. Therefore, classification analysis has been widely used in the study of management decision problems[1]-[4], for example, trend prediction and customer segmentation. Obviously, classification methods with high accuracy would reduce the decision losses caused by misclassification. However, with the increasing complexity of modern management and the diversity of related data, the results provided by a single classifier often have poor semantics and are thus hard to understand in management practice, especially for prediction tasks with complex data and managerial scenarios[5].

In recent years, ensemble classifiers have been applied to complicated classification problems[6], and they represent a new direction for improving classifier performance. These classifiers can be based on a variety of classification methodologies and can achieve different rates of correctly classified instances. The goal of the result-integration algorithms is to generate more certain, precise, and accurate results[7].

In the literature, numerous methods have been suggested for the creation of ensemble classifiers[7],[8]. Although ensemble classifiers constructed by any of these general methods have achieved a great number of applications in classification tasks[8], they face two performance challenges in some real managerial scenarios. The first is the expensive time cost of classifier training/learning, and the second is the poor semantic understanding (management insights) of the classification results.

In this research, we propose a method which builds an ensemble classifier based on the key attributes (values) that are filtered out from the initial data. Experimental results with real data show that the proposed method not only achieves relatively high classification precision, but also offers high comprehensibility of its results.

2. Related Work

    2.1 Classification Models for Churn Prediction

    In most real applications, studies are mainly focused on improving the performance of a single algorithm in predicting activities, typically in predicting the customer churn in the service industry.

In this stream, Hu et al. analyzed and evaluated three implementations of decision trees in a churn prediction system with big data[9]. Kim et al. used logistic regression to construct the customer churn prediction model[10]. Tian et al. adopted the Bayesian classifier to build a customer churn prediction model[11]. More complicatedly, artificial neural networks (ANN)[12] and random forests (RF)[13] have been adopted to build customer churn prediction models. Ultsch introduced a self-organizing map (SOM) to build the customer churn prediction model[14]. Rodan et al.[15] used the support vector machine (SVM) to predict customer churn. Au et al. built the customer churn prediction model based on evolutionary learning algorithms[16].

    2.2 Ensemble Classifier

    The main idea of the ensemble classifier is to build multiple classifiers on the collected original data set, and then gather the results of these individual classifiers in the classification process. Here, individual classifiers are called base/weak classifiers. During the training, the base classifiers are trained separately on the data set. During the prediction, the base classifiers provide a decision on the test dataset. An ensemble method then combines the decisions produced by all the base classifiers into one final result. Accordingly, there are a lot of fusion methods in the literature including voting, the Borda count, algebraic combiners, and so on[7].
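The voting fusion described above can be sketched with three toy rule-based base classifiers; the rules, attribute names, and thresholds here are hypothetical and purely illustrative:

```python
from collections import Counter

# Three hypothetical base classifiers: each is a simple decision rule
# mapping a customer record to a "churn"/"stay" label.
def clf_age(x):            # rule on customer age
    return "churn" if x["age"] < 25 else "stay"

def clf_bill(x):           # rule on monthly bill
    return "churn" if x["bill"] < 20 else "stay"

def clf_tenure(x):         # rule on months of tenure
    return "churn" if x["tenure"] < 6 else "stay"

def ensemble_vote(x, classifiers):
    """Combine base-classifier decisions by simple majority voting."""
    votes = Counter(clf(x) for clf in classifiers)
    return votes.most_common(1)[0][0]

customer = {"age": 22, "bill": 35, "tenure": 4}
print(ensemble_vote(customer, [clf_age, clf_bill, clf_tenure]))  # "churn" (2 of 3 votes)
```

Any of the other fusion schemes mentioned (Borda count, algebraic combiners) would replace only the `ensemble_vote` step; the base classifiers stay unchanged.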

The theory and practice in the literature have proved that an ensemble classifier can improve classification performance significantly, possibly beyond the performance of any single classifier[8]. Generally, there are two methods to construct an ensemble classifier[7],[8]: 1) algorithm-oriented method: implementing different classification learning algorithms on the same data, for example, the neural network and decision tree; 2) data-oriented method: separating the initial dataset into parts and using different subsets of training data with a single classification method.

Particularly, to counter the decline of prediction precision caused by complex data structures, processing the training data is a feasible way to construct an ensemble classifier. Bagging and boosting are two typical ensemble methods of handling the datasets[17]-[19].

As mentioned before, the focus of these studies has been on the prediction accuracy of each single model. However, we can also address the problem of constructing an ensemble classifier based on the data distribution for better prediction results.

3. Research Method

    3.1 Research Problem

Representing each user as an entity, the dataset composed of the user-attribute values can be treated as an initial matrix X, as shown in Table 1. Here, xi is typically a vector of the form xi = (xi1, xi2, ···, xin), and it denotes the whole set of values for Useri. Each attribute Aj is typically a vector of the form Aj = (x1j, x2j, ···, xmj), whose components are discrete or real values representing the values of attribute Aj, such as age, income, and location. The attributes are called the features of xi.

    Table 1: Initial data matrix

    Since the basic research context of this study is about complicated training data and complex decision scenarios, both of the algorithm- and data-oriented methods would be taken into consideration in ensemble classifier construction by training a group of classifiers.

    3.2 Key Attribute Selection

As the dimensionality of the data increases, many types of data classification problems become significantly harder, particularly due to the high computational cost and memory usage[20]. As a result, reducing the attribute space may lead to a more understandable model and simplify the usage of different visualization techniques. Thus, a learning algorithm to induce a classifier must address two issues: selecting some key attributes (dimension reduction) and further splitting the dataset into parts according to the value distributions of these key attributes.

Key attribute selection in this study is to select a subset of attributes from the data set such that the selection is consistent with the goal of prediction.

Both supervised and unsupervised methods can be used to select attributes. The supervised method is also called the management-oriented method. It determines whether an attribute A is a key attribute according to management needs and prior knowledge. The typical approach is to ask experts to label the key attributes. The advantage of this method is that its calculation process is simple and its results have high comprehensibility. To avoid selection bias from the experts' side, the unsupervised method is sometimes used for data preprocessing by introducing methods with the computational capacity of grouping or dimension reduction, for example, clustering or principal component analysis (PCA).

To simplify the calculation, we introduce the following “clustering-annotation” process to select the key attributes. Firstly, we use a clustering method to cluster the attributes of X into l groups, i.e., π1, π2, ···, πl, according to the similarity of their values. In other words, if Ai and Aj are similar to each other, then

Ai, Aj ∈ πk.    (1)

Next, we associate one representative attribute with πi in accordance with the management semantics of the attributes in πi. The basic rule for the association is that the selected attribute should have a strong potential correlation (management insights) with the decision-making problem.
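As a rough illustration of grouping attributes by value similarity, the following sketch greedily groups toy attribute columns by cosine similarity. The attribute names, data, and threshold are made-up assumptions; a real implementation would use k-means or PCA as described above:

```python
import math

def cosine(u, v):
    """Cosine similarity between two attribute value columns."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy attribute columns (values over the same four users); names are hypothetical.
attributes = {
    "bill":    [10, 20, 30, 40],
    "minutes": [11, 19, 33, 41],   # nearly proportional to bill
    "age":     [50, 22, 31, 18],
    "tenure":  [48, 20, 30, 19],   # nearly proportional to age
}

def group_attributes(attrs, threshold=0.99):
    """Greedy grouping: join an existing group if the column is similar
    enough to that group's first member, else start a new group."""
    groups = []
    for name, col in attrs.items():
        for g in groups:
            if cosine(col, attrs[g[0]]) >= threshold:
                g.append(name)
                break
        else:
            groups.append([name])
    return groups

print(group_attributes(attributes))  # [['bill', 'minutes'], ['age', 'tenure']]
```

Each resulting group would then be annotated with one representative attribute (e.g. "bill" for the first group) based on its management semantics.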

    3.3 Attribute Value Based Dataset Splitting

After the key attributes are selected, the data set X will be split (clustered) into k parts by the value distributions of these key attributes.

The general method for such a task is the binning method, which distributes sorted values into a number of bins. Assume that the maximum value of attribute A is max and its minimum is min, and divide the original data set into k sub-datasets. The record x, whose value of attribute A satisfies the following condition, will be classified as a member of the group Ci:

min + (i−1)(max−min)/k ≤ x(A) < min + i(max−min)/k    (2)

where i = 1, 2, ···, k.
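The condition above is equal-width discretization; a minimal sketch (the bill values and range are toy assumptions):

```python
def bin_index(value, vmin, vmax, k):
    """Equal-width binning: map a value in [vmin, vmax] to a group 1..k."""
    width = (vmax - vmin) / k
    i = int((value - vmin) // width) + 1
    return min(i, k)  # the maximum value itself falls into the last bin

# Split toy bill amounts in [0, 100] into k = 3 groups.
bills = [5, 18, 42, 60, 95, 100]
print([bin_index(b, 0, 100, 3) for b in bills])  # [1, 1, 2, 2, 3, 3]
```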

In the literature, researchers have introduced some efficient methods to split the initial dataset into sub-datasets automatically, for example, the maximum information gain method and the Gini method[8],[21]. The performance of such unsupervised methods is affected by the type, range, and distributions of the attribute values, and in particular they may suffer from higher computational complexity.

    Equation (2) works well on data splitting with one attribute A. Moreover, we could split data with a set of attributes as clustered in (1). To deal with a very large dataset, it is argued that the singular value decomposition (SVD) of matrices might provide an excellent tool[22].

Based on the values of the selected key attributes of πi, in this study, the dataset X will be split as follows:

1) Extracting the sub-matrix X(πi) from X, which contains only the columns corresponding to the attributes in πi.

2) Computing the SVD of the matrix X(πi) such that

X(πi) = USV^T

where U and V are orthonormal and S is diagonal. The column vectors of U are taken from the orthonormal eigenvectors of X(πi)X(πi)^T, and ordered right to left from the largest corresponding eigenvalue to the smallest.

3) The elements of S are nonzero only on the diagonal and are called the singular values. By convention, the singular values are sorted from high to low, so that we can choose the top-k singular values of S and cluster the vectors x(πi) in X(πi) into k clusters C1, C2, ···, Ck. Finally, the cluster information for X(πi) is further used to map each vector x in X into the group Ci that contains x(πi).
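A pure-Python sketch of the idea in steps 2) and 3): approximate the top singular direction by power iteration (a stand-in for a full SVD) and split rows by their projection onto it. The toy matrix and the equal-width cut of the projections are illustrative assumptions, not the paper's exact procedure:

```python
import math

def top_right_singular_vector(X, iters=100):
    """Power iteration on X^T X to approximate the top right singular
    vector of X (a minimal pure-Python stand-in for a full SVD)."""
    m = len(X[0])
    v = [1.0] * m
    for _ in range(iters):
        # w = X^T (X v)
        Xv = [sum(row[j] * v[j] for j in range(m)) for row in X]
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(m)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

def split_by_projection(X, k):
    """Project each row on the top singular direction and cut the
    projection range into k equal-width groups (labels 0..k-1)."""
    v = top_right_singular_vector(X)
    proj = [sum(a * b for a, b in zip(row, v)) for row in X]
    lo, hi = min(proj), max(proj)
    width = (hi - lo) / k or 1.0
    return [min(int((p - lo) // width), k - 1) for p in proj]

# Toy matrix: rows are users, columns are key-attribute values.
X = [[1, 1], [2, 2], [10, 11], [11, 10], [20, 21], [21, 20]]
print(split_by_projection(X, 3))  # [0, 0, 1, 1, 2, 2]
```

In practice a library SVD would replace the power iteration, and a proper clustering method (e.g. k-means on the projected coordinates) would replace the equal-width cut.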

    3.4 Ensemble Classifier

To keep more managerial information, we can construct an ensemble classifier as follows:

Firstly, given a decision-making goal, we cluster all the attributes into l groups and associate each group with a representative feature. Then, we introduce the SVD to split the data matrix X(πi) for each group πi, and the results are used to map all the vectors in X into k groups, each of which is a sub-dataset specially prepared for better prediction of the targeted decision-making goal. Next, based on the newly generated sub-datasets, we can introduce the general algorithm- or data-oriented method to train a set of approximate classifiers and use them to perform the classification tasks for the decision-making problem. At last, a fused result is reported as the prediction.

Another important task is to select an appropriate classification algorithm for the aforementioned sub-datasets. Considering the cost of calculation and the precision of results, in this study we choose three typical classification algorithms, neural net, logistic, and C5.0[23], as the basic algorithms to build the hybrid model.

The classification of a new instance x is made by voting among all the classifiers {CFt}, each with a weight αt, where t ∈ {neural net, logistic, C5.0}. The final prediction can be written as

CF(x) = arg maxc Σt αt·I(CFt(x) = c)

where I(·) is the indicator function and αt is a value in [0, 1] set according to the performance of CFt. To simplify the calculation, αt can be set to 1 for the best classifier and 0 for the others.
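The weighted vote can be sketched as follows. The three lambda rules standing in for the neural net, logistic, and C5.0 classifiers are hypothetical, and αt is set to 1 for the (assumed) best classifier, as the simplification above suggests:

```python
def weighted_vote(x, classifiers, alphas):
    """Weighted voting: each classifier's label receives its alpha as score;
    the label with the highest total score wins."""
    scores = {}
    for name, clf in classifiers.items():
        label = clf(x)
        scores[label] = scores.get(label, 0.0) + alphas[name]
    return max(scores, key=scores.get)

# Hypothetical rule-based stand-ins for the three trained models.
classifiers = {
    "neural_net": lambda x: "churn" if x["bill"] < 20 else "stay",
    "logistic":   lambda x: "churn" if x["age"] < 25 else "stay",
    "c5.0":       lambda x: "churn" if x["tenure"] < 6 else "stay",
}
# Assume the neural net had the best AUC on this sub-dataset.
alphas = {"neural_net": 1.0, "logistic": 0.0, "c5.0": 0.0}

print(weighted_vote({"bill": 30, "age": 22, "tenure": 3}, classifiers, alphas))  # "stay"
```

With the 1/0 weighting, the ensemble simply defers to the best classifier; fractional alphas would blend all three decisions.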

    3.5 Evaluation Method

In this paper, the precision[21] and the receiver operating characteristic (ROC)[24] are used to evaluate the results.

Given a set of prediction results made by a classifier, the confusion matrix of the two classes “true” and “false” is shown in Table 2. Here, the variables A, B, C, and D denote the numbers of true positive, true negative, false positive, and false negative results, respectively.

    Table 2: Results matrix predicted by the classifier
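Given the four counts in Table 2, the evaluation measures follow directly; a minimal sketch with made-up counts:

```python
# A: true positives, B: true negatives, C: false positives, D: false negatives.
def precision(A, B, C, D):
    """Fraction of predicted positives that are actually positive."""
    return A / (A + C)

def accuracy(A, B, C, D):
    """Fraction of all predictions that are correct."""
    return (A + B) / (A + B + C + D)

# Toy counts for illustration only.
print(precision(40, 900, 10, 50))  # 0.8
print(accuracy(40, 900, 10, 50))   # 0.94
```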

The ROC is a graphical plot that illustrates the performance of a classifier system as its discrimination threshold is varied. ROC analysis originated from statistical decision theory in the 1950s and has been widely used in the performance assessment of classification[21]. The ROC curve is plotted by taking the true positive rate as the Y-axis and the false positive rate as the X-axis. The closer the ROC curve is to the upper left corner, the higher the accuracy of the model's predictions. The area under the curve (AUC) can be used as a measure of the prediction effect. The value of AUC generally ranges between 0.500 and 1.000, and a value closer to 1.000 represents a better prediction.
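AUC can also be computed without plotting the curve, via its rank-statistic interpretation: the probability that a randomly chosen positive instance scores above a randomly chosen negative one. A minimal sketch on toy scores:

```python
def auc(scores, labels):
    """AUC as the Mann-Whitney statistic: average over all
    positive/negative pairs of [positive score > negative score],
    counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy churn scores: higher score means "more likely to churn"; label 1 = churned.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 8/9 ≈ 0.889
```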

4. Experiment Results

    4.1 Data Set

With the speedup of market competition, maintaining existing customers has become the core marketing strategy for survival in the telecommunication industry. For better customer retention, it is necessary to predict those who are about to churn in the future. Studying churn prediction is therefore an important concern in the telecommunication industry. For instance, in the following experiments, the data is collected from China Mobile.

Note that, due to the great uncertainty of consumer behavior and the little data recorded in companies' operation databases, the records generated by temporary customers and by customers who buy a SIM card and discard it soon after short-term consumption are removed. In the end, altogether 47735 customers are randomly selected from three main sub-branches located in three different cities. The observation period is from January 1st, 2008 to May 31st, 2008, and the extracted information is about the users' activities in using the telecommunication services, such as contract data, consumer behaviors, and billing data.

After data preprocessing such as data cleaning, integration, transformation, and discretization, 47365 valid customer records remain (99.2% of the total number of samples, noted as dataset X), in which 3421 users are churn customers (a churn rate of 7.2%). In the experiments, the data set X has been separated into two parts: the training data generated from January 1st to March 31st, 2008, denoted by X1, and the test data generated from April 1st to May 31st, 2008, denoted by X2.

The experiment platform is SPSS Clementine 12.0, which provides well-programmed software tools for the classification algorithms of C5.0, logistic, and neural net.

    4.2 Attribute Selection and Dataset Splitting

In total, 116 (n=116) variables included in the customer relationship management (CRM) system are extracted as the initial data set X.

We implement the cosine-similarity-based k-means clustering method on the attribute vectors in X. Inspired by customer segmentation in marketing (in conjunction with the necessary experts' annotations), we cluster the common variables according to their relations in marketing practice. In the end, the attributes are clustered into 4 (l=4) groups, and the 4 attributes of brand, area, age, and bill (having strong correlations with customers' churn in the telecommunication industry) are chosen as the key attributes, respectively.

Moreover, the values of these four attributes are split into 3 (k=3) sub-datasets, respectively, according to the SVD clustering results. The results are summarized in Table 3.

Table 3: Subdivision categories of each variable

    4.3 Ensemble Model Construction

    In the following, four ensemble classifiers will be built according to the sub-datasets separated by four attributes of brand, area, age and bill.

    The classification algorithms of C5.0, logistic, and neural net algorithms are implemented on each sub-dataset for a series of repeated prediction experiments. The logic view of the ensemble classifier model construction is shown in Fig. 1.

    Fig. 1. Logic view of ensemble classifier models.

For the attribute of brand, the training set X1 is firstly divided into three sub-datasets, namely GoTone, EasyOwn, and M-Zone, which account for 7.2%, 80.7%, and 12.1% of the customers, respectively. In the learning process, each subset is first separated into training and test sets according to the ratio of 60.0% to 40.0%.

Among all the classification results reported by each algorithm on the test dataset, the result with the largest AUC (area under the ROC curve) is selected as the basic model for that sub-dataset. The AUC results reported by the three models on each brand are shown in Table 4.

    The comparative results are shown in Fig. 2. The results in Table 4 show that the neural net algorithm works the best in the prediction of GoTone and EasyOwn sub-datasets, whereas the C5.0 works the best on the M-Zone sub-dataset.

    Table 4: AUC of prediction on brand sub-datasets

    Similarly, the performances of classification (prediction) on sub-datasets split by attributes of area, age, and bill are reported in Tables 5 to 7, respectively. Accordingly, the visualized results are shown in Figs. 3 to 5.

    4.4 Result Evaluation

The previous experiments on sub-datasets separated by different key attributes provide 4 hybrid models. Next, we use these four models to make predictions on dataset X2. Also, the measurements of precision and the ROC curve are used to evaluate the performance of each model.

    1) Comparison of precision

The average prediction accuracy of each of the four hybrid models on X2 is summarized in Table 8. It shows that the highest precision (86.1%) is achieved when the key attribute area is used for data segmentation to build a hybrid model, followed by the result generated with the attribute bill (85.9%). The performance of the hybrid models constructed with the attributes brand and age for data segmentation is lower (81.2% and 76.2%).

    Fig. 2. AUC of prediction on brand sub-datasets: (a) GoTone, (b) EasyOwn, and (c) M-Zone.

    Fig. 3. AUC of prediction on area sub-datasets: (a) area A, (b) area B, and (c) area C.

    Fig. 4. AUC of prediction on age sub-datasets: (a) net age low, (b) net age middle, and (c) net age high.

    Fig. 5. AUC of prediction on bill sub-datasets: (a) low consumption level, (b) middle consumption level, and (c) high consumption level.

    Table 5: AUC of prediction on area sub-datasets

    Table 6: AUC of prediction on age sub-datasets

    Table 7: AUC of prediction on bill sub-datasets

    Table 8: Prediction accuracy of the four hybrid models on test set X2

    2) Comparison of ROC

The ROC curves for the prediction results provided by the four hybrid models on the testing set X2 are shown in Fig. 6. The area under the ROC curve of each hybrid model is given in Table 9.

Comparing the results in Fig. 6 and Table 9, we see that the two hybrid models constructed based on the attributes area and bill generate better AUCs (0.888 and 0.855) than those based on brand and age (0.828 and 0.845).

According to the experiment results, we can conclude that using the attribute area as the segment variable gives the best prediction results, followed by those of the bill attribute. However, the key attributes age and brand perform relatively poorly. Therefore, in the practice of customer churn prediction, it is recommended that telecommunication companies use the consumers' bill information as the key attribute to build the customer churn prediction hybrid model for each area separately. Moreover, it is necessary to strengthen brand management and to improve the customer segmentation effect of different brands.

    Fig. 6. ROC curve of prediction accuracy of the four hybrid models on test set X2.

    Table 9: AUC of prediction accuracy of the four hybrid models on test set X2

    3) Limitations

The main idea of the method proposed in this work is to construct an ensemble classifier for higher precision and managerial insights. We should note some limitations of this work. First, there is a lack of criteria for how many base classifiers should be selected in the hybrid classifier. Second, the proposed method involves some time-consuming preprocessing steps in ensemble classifier construction, for example, the PCA and SVD methods, which would increase the computational complexity.

5. Conclusions

Classification analysis has been widely used in the study of decision problems. However, with the increasing complexity of modern management and the diversity of related data, the results provided by a single classifier often have poor semantics and are thus hard to understand in management practice, especially for prediction tasks with very complex data and managerial scenarios.

Regarding the management issues of classification and prediction, an ensemble of single classifiers is an effective way to improve prediction results. In order to solve the problems of poor precision and management semantics caused by ordinary ensemble classifiers, in this paper we proposed an ensemble classifier construction method based on the key attributes in the data set. The experimental results based on the real data collected from China Mobile show that the key-attributes-based ensemble classifier has advantages in both prediction accuracy and result comprehensibility.

References

[1] M. J. Berry and G. S. Linoff, Data Mining Techniques: For Marketing, Sales, and Customer Support, New York: John Wiley & Sons, 1997, ch. 8.

[2] Y. K. Noh, F. C. Park, and D. D. Lee, “Diffusion decision making for adaptive k-nearest neighbor classification,” Advances in Neural Information Processing Systems, vol. 3, pp. 1934-1942, Jan. 2012.

[3] X.-L. Xia and H.-H. Huang, “Robust texture classification via group-collaboratively representation-based strategy,” Journal of Electronic Science and Technology, vol. 11, no. 4, pp. 412-416, Dec. 2013.

[4] S. Archana and D. K. Elangovan, “Survey of classification technique in data mining,” Intl. Journal of Computer Science and Mobile Applications, vol. 2, no. 2, pp. 65-71, 2014.

[5] H. Grimmett, R. Paul, R. Triebel, and I. Posner, “Knowing when we don’t know: Introspective classification for mission-critical decision making,” in Proc. of IEEE Intl. Conf. on Robotics and Automation, Karlsruhe, 2013, pp. 4531-4538.

[6] R. L. MacTavish, S. Ontanon, J. Radhakrishnan, et al., “An ensemble architecture for learning complex problem-solving techniques from demonstration,” ACM Trans. on Intelligent Systems and Technology, vol. 3, no. 4, pp. 1-38, 2012.

[7] T. G. Dietterich, “Ensemble methods in machine learning,” in Multiple Classifier Systems, Berlin: Springer, 2000, pp. 1-15.

[8] Z.-H. Zhou, Ensemble Methods: Foundations and Algorithms, Boca Raton: Chapman and Hall/CRC, 2012.

[9] X.-Y. Hu, M.-X. Yuan, J.-G. Yao, et al., “Differential privacy in telco big data platform,” in Proc. of the 41st Intl. Conf. on Very Large Data Bases, Kohala Coast, 2015, pp. 1692-1703.

[10] H. S. Kim and C. H. Yoon, “Determinants of subscriber churn and customer loyalty in the Korean mobile telephony market,” Telecommunications Policy, vol. 28, no. 9, pp. 751-765, 2004.

[11] L. Tian, K.-P. Zhang, and Z. Qin, “Application of a Bayesian network learning algorithm in telecom CRM,” Modern Electronics Technique, vol. 10, pp. 52-55, Oct. 2005.

[12] A. Sharma and P. K. Panigrahi, “A neural network based approach for predicting customer churn in cellular network services,” Intl. Journal of Computer Applications, vol. 27, no. 11, pp. 26-31, Aug. 2011.

[13] Y.-Q. Huang, F.-Z. Zhu, M.-X. Yuan, et al., “Telco churn prediction with big data,” in Proc. of ACM SIGMOD Intl. Conf. on Management of Data, Melbourne, 2015, pp. 607-618.

[14] A. Ultsch, “Emergent self-organizing feature maps used for prediction and prevention of churn in mobile phone markets,” Journal of Targeting, Measurement and Analysis for Marketing, vol. 10, no. 4, pp. 314-324, 2002.

[15] A. Rodan, H. Faris, J. Alsakran, and O. Al-Kadi, “A support vector machine approach for churn prediction in telecom industry,” Intl. Journal on Information, vol. 17, pp. 3961-3970, Aug. 2014.

[16] W. H. Au, K. C. C. Chan, and X. Yao, “A novel evolutionary data mining algorithm with applications to churn prediction,” IEEE Trans. on Evolutionary Computation, vol. 7, no. 6, pp. 532-545, 2003.

[17] S. B. Kotsiantis and P. E. Pintelas, “Combining bagging and boosting,” Intl. Journal of Computational Intelligence, vol. 1, no. 4, pp. 324-333, 2004.

[18] N. C. Oza, “Online bagging and boosting,” in Proc. of IEEE Intl. Conf. on Systems, Man & Cybernetics, Tucson, 2001, pp. 2340-2345.

[19] C.-X. Zhang, J.-S. Zhang, and G.-W. Wang, “A novel bagging ensemble approach for variable ranking and selection for linear regression models,” in Multiple Classifier Systems, F. Schwenker, Ed., Switzerland: Springer, 2015, pp. 3-14.

[20] W. Drira and F. Ghorbel, “Decision Bayes criteria for optimal classifier based on probabilistic measures,” Journal of Electronic Science and Technology, vol. 12, no. 2, pp. 216-219, 2014.

[21] J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques, 3rd ed., San Francisco: Morgan Kaufmann, 2011.

[22] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay, “Clustering large graphs via the singular value decomposition,” Machine Learning, vol. 56, no. 1, pp. 9-33, 2004.

[23] T. Bujlow, T. Riaz, and J. M. Pedersen, “A method for classification of network traffic based on C5.0 machine learning algorithm,” in Proc. of IEEE Intl. Conf. on Computing, Networking and Communications, Okinawa, 2012, pp. 237-241.

[24] A. P. Bradley, “The use of the area under the ROC curve in the evaluation of machine learning algorithms,” Pattern Recognition, vol. 30, pp. 1145-1159, Jul. 1997.
