
    Key-Attributes-Based Ensemble Classifier for Customer Churn Prediction

    2018-04-08 03:11:18

    Yu Qian, Liang-Qiang Li, Jian-Rong Ran, and Pei-Ji Shao

    1. Introduction

    Data mining has become increasingly important in management activities, especially in the support of decision making, much of which can be attributed to the task of classification. Therefore, classification analysis has been widely used in the study of management decision problems[1]-[4], for example, trend prediction and customer segmentation. Obviously, classification methods with high accuracy reduce the decision loss caused by misclassification. However, with the increasing complexity of modern management and the diversity of related data, the results provided by a single classifier often have poor semantics and are thus hard to understand in management practice, especially for prediction tasks with complex data and managerial scenarios[5].

    In recent years, ensemble classifiers have been introduced to solve complicated classification problems[6], and they represent a new direction for improving classifier performance. The base classifiers can be built on a variety of classification methodologies and achieve different rates of correctly classified individuals. The goal of the result integration algorithms is to generate more certain, precise, and accurate results[7].

    In the literature, numerous methods have been suggested for the creation of ensemble classifiers[7],[8]. Although ensemble classifiers constructed by any of the general methods have achieved a great number of applications in classification tasks[8], they face two performance challenges under some real managerial scenarios. The first is the expensive time cost of classifier training/learning, and the second is the poor semantic understanding (management insights) of the classification results.

    In this research, we propose a method which builds an ensemble classifier based on the key attributes (values) that are filtered out from the initial data. Experiment results with real data show that the proposed method not only has high relative precision in classification, but also has high comprehensibility of its calculation results.

    2. Related Work

    2.1 Classification Models for Churn Prediction

    In most real applications, studies are mainly focused on improving the performance of a single algorithm in predicting activities, typically in predicting the customer churn in the service industry.

    In this stream, Hu et al. analyzed and evaluated three implementations of decision trees in a churn prediction system with big data[9]. Kim et al. used logistic regression to construct a customer churn prediction model[10]. Tian et al. adopted the Bayesian classifier to build a customer churn prediction model[11]. More complicatedly, artificial neural networks (ANN)[12] and random forests (RF)[13] have been adopted to build customer churn prediction models. Ultsch introduced a self-organizing map (SOM) to build a customer churn prediction model[14]. Rodan et al.[15] used the support vector machine (SVM) to predict customer churn. Au et al. built a customer churn prediction model based on evolutionary learning algorithms[16].

    2.2 Ensemble Classifier

    The main idea of the ensemble classifier is to build multiple classifiers on the collected original data set, and then gather the results of these individual classifiers in the classification process. Here, the individual classifiers are called base/weak classifiers. During training, the base classifiers are trained separately on the data set. During prediction, each base classifier provides a decision on the test dataset. An ensemble method then combines the decisions produced by all the base classifiers into one final result. Accordingly, there are many fusion methods in the literature, including voting, the Borda count, algebraic combiners, and so on[7].
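As a concrete illustration, the simplest of these fusion methods, plurality (majority) voting, can be sketched in a few lines of Python; the `majority_vote` helper and the example labels are illustrative assumptions, not from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels for one instance by plurality vote.

    `predictions` holds one predicted label per base classifier.
    """
    counts = Counter(predictions)
    # most_common(1) returns [(label, count)] for the most frequent label
    return counts.most_common(1)[0][0]

# Three base classifiers disagree; the majority label wins.
print(majority_vote(["churn", "stay", "churn"]))  # -> churn
```

More elaborate combiners (Borda count, algebraic combiners) replace the simple count with rank- or score-based aggregation over the same per-classifier decisions.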

    The theory and practice in the literature have proved that an ensemble classifier can improve classification performance significantly, potentially beyond the performance provided by any single classifier[8]. Generally, there are two methods to construct an ensemble classifier[7],[8]: 1) the algorithm-oriented method: implementing different classification learning algorithms on the same data, for example, the neural network and the decision tree; 2) the data-oriented method: separating the initial dataset into parts and using different subsets of training data with a single classification method.

    Particularly, against the decline of prediction precision caused by complex data structures, processing the training data is a feasible way to construct an ensemble classifier. Bagging and boosting are two typical ensemble methods of handling the datasets[17]-[19].

    As mentioned before, the focus of these studies has been on the prediction accuracy of each single model. However, we could also address the problem of constructing an ensemble classifier based on the data distribution for better prediction results.

    3. Research Method

    3.1 Research Problem

    Representing each user as an entity, the dataset composed of the user-attribute values can be treated as an initial matrix X, as shown in Table 1, in which x_i is typically a vector of the form x_i = (x_i1, x_i2, ..., x_in) denoting all the values for User_i. Each component x_ij is a discrete or real value representing the value of attribute A_j, such as age, income, and location. The attributes A_1, A_2, ..., A_n are called the features of x_i.

    Table 1: Initial data matrix

    Since the basic research context of this study is about complicated training data and complex decision scenarios, both of the algorithm- and data-oriented methods would be taken into consideration in ensemble classifier construction by training a group of classifiers.

    3.2 Key Attribute Selection

    As the dimensionality of the data increases, many types of data classification problems become significantly harder, particularly in terms of computational cost and memory usage[20]. As a result, reducing the attribute space may lead to a more understandable model and simplifies the usage of different visualization techniques. Thus, a learning algorithm to induce a classifier must address two issues: selecting some key attributes (dimension reduction) and further splitting the dataset into parts according to the value distributions of these key attributes.

    Key attribute selection in this study is to select a subset of attributes from the data set such that the selection is consistent with the goal of prediction.

    Both supervised and unsupervised methods can be used to select attributes. The supervised method is also called the management-oriented method: it determines whether an attribute A is a key attribute according to management needs and prior knowledge. The typical approach is to ask experts to label the key attributes. The advantage of this method is that its calculation process is simple and its results have high comprehensibility. To avoid selection bias from the experts' side, the unsupervised method is sometimes used for data preprocessing by introducing methods with the computational capacity for grouping or dimension reduction, for example, clustering or principal component analysis (PCA).

    To simplify the calculation, we introduce the following "clustering-annotation" process to select the key attributes. Firstly, we use a clustering method to cluster the attributes of A = {A_1, A_2, ..., A_n} into l groups, i.e., Π = {π_1, π_2, ..., π_l}, according to the similarity of their values. In other words, if A_j and A_k are similar to each other, then

    A_j, A_k ∈ π_i.   (1)

    Next, we associate one representative attribute with π_i in accordance with the management semantics of the attributes in π_i. The basic rule for the association is that the selected attribute should have a strong potential correlation (management insights) with the decision-making problem.
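To make the clustering half of this process concrete, the following sketch groups attribute columns by the cosine similarity of their value vectors using a small spherical k-means; the function name, the toy matrix, and the fixed seed are illustrative assumptions, and the annotation step (choosing a representative attribute per group) is still left to domain experts:

```python
import numpy as np

def cluster_attributes(X, l, iters=20, seed=0):
    """Group the columns (attributes) of data matrix X into l clusters
    by cosine similarity of their value vectors across users."""
    rng = np.random.default_rng(seed)
    # L2-normalize each column so dot products equal cosine similarity.
    V = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)
    V = V.T                                        # one row per attribute
    centers = V[rng.choice(len(V), l, replace=False)]
    for _ in range(iters):
        labels = np.argmax(V @ centers.T, axis=1)  # nearest center by cosine
        for j in range(l):
            members = V[labels == j]
            if len(members):
                c = members.mean(axis=0)
                centers[j] = c / (np.linalg.norm(c) + 1e-12)
    return labels

# Toy data: attribute columns 0 and 1 are nearly collinear; column 2 differs.
X = np.array([[1.0, 2.0, 5.0],
              [2.0, 4.1, 0.1],
              [3.0, 5.9, 0.0]])
print(cluster_attributes(X, l=2))  # attributes 0 and 1 share a group
```

Swapping in PCA loadings instead of raw columns, as the text mentions, changes only the vectors being clustered, not the grouping logic.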

    3.3 Attribute Value Based Dataset Splitting

    After the key attributes are selected, then the data set X would be split (clustered) into k parts by the value distributions of these key attributes.

    The general method for such a task is the binning method, which distributes sorted values into a number of bins. Assume that the maximum value of attribute A is max and its minimum is min, and divide the original data set into k sub-datasets. The record x, whose value of attribute A satisfies the following condition, will be classified as a member of the group C_i:

    min + (i-1)(max-min)/k ≤ x.A < min + i(max-min)/k,   (2)

    where i = 1, 2, ···, k.
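Under the equal-width reading of condition (2), the group index reduces to a one-line computation; the helper below and its bill-amount example are illustrative, not from the paper:

```python
def bin_index(value, vmin, vmax, k):
    """Equal-width binning: map a value of attribute A into one of k
    groups C_1..C_k. A value equal to vmax falls into the last bin."""
    width = (vmax - vmin) / k
    i = int((value - vmin) / width) + 1   # 1-based group index
    return min(i, k)                      # clamp the boundary case value == vmax

# Split bill amounts in [0, 300] into k=3 consumption levels.
print([bin_index(v, 0, 300, 3) for v in (20, 150, 299, 300)])  # [1, 2, 3, 3]
```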

    In the literature, researchers have introduced some efficient methods to split the initial dataset into sub-datasets automatically, for example, the maximum information gain method and the Gini method[8],[21]. The performance of such unsupervised methods is affected by the type, range, and distribution of the attribute values, and in particular they may suffer from high computational complexity.

    Equation (2) works well on data splitting with one attribute A. Moreover, we could split data with a set of attributes as clustered in (1). To deal with a very large dataset, it is argued that the singular value decomposition (SVD) of matrices might provide an excellent tool[22].

    Based on the values of the selected key attributes of π_i, in this study, the dataset X will be split as follows:

    1) Extracting the sub-matrix X(π_i) of X whose columns correspond to the attributes in π_i.

    2) Computing the SVD of matrix X(π_i) such that

    X(π_i) = U S V^T,

    where U and V are orthonormal and S is diagonal. The column vectors of U are taken from the orthonormal eigenvectors of X(π_i)X(π_i)^T, ordered left to right from the largest corresponding eigenvalue to the smallest.

    3) The elements of S are nonzero only on the diagonal and are called the singular values. By convention, the singular values are sorted from high to low, so we can choose the top-k singular values of S and cluster the vectors x(π_i) in X(π_i) into k clusters: C_1, C_2, ..., C_k. Finally, the cluster information for X(π_i) is used to map each vector x in X into its group: x is assigned to C_i if x(π_i) ∈ C_i.
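A minimal numerical sketch of steps 2) and 3) with NumPy follows; the block-structured toy matrix is invented, and assigning each user to the top-k singular direction it loads on most heavily is a simplification of the clustering step, not the paper's exact procedure:

```python
import numpy as np

def svd_split(X_pi, k):
    """Split the rows (users) of the key-attribute sub-matrix X(pi)
    into k groups using its top-k singular directions."""
    U, S, Vt = np.linalg.svd(X_pi, full_matrices=False)  # S sorted high-to-low
    proj = U[:, :k] * S[:k]            # rank-k coordinates of each user
    # Simplified clustering: each user joins the direction it loads on most.
    return np.argmax(np.abs(proj), axis=1)

# Two obvious blocks: users 0-1 load on one direction, users 2-3 on another.
X_pi = np.array([[5.0, 5.1, 0.1, 0.0],
                 [4.9, 5.0, 0.0, 0.1],
                 [0.1, 0.0, 6.0, 6.1],
                 [0.0, 0.1, 5.9, 6.0]])
print(svd_split(X_pi, k=2))  # users 0,1 share one group; users 2,3 the other
```

A fuller implementation would run k-means on the `proj` coordinates, which is what the rank-k projection in [22] enables for very large matrices.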

    3.4 Ensemble Classifier

    To keep more managerial information, we can construct an ensemble classifier as follows:

    Firstly, given a decision-making goal, we cluster all the attributes into l groups and associate each group with a representative feature. Then, we introduce the SVD to split the data matrix X(π_i) for the group π_i, and the results are used to map all the vectors in X into k groups, each of which is a sub-dataset built for better prediction of the targeted decision-making goal. Next, based on the newly generated sub-datasets, we can introduce the general algorithm- or data-oriented method to train a set of approximate classifiers and use them to perform the classification tasks for the decision-making problem. At last, a fused result is reported as the prediction.

    Another important task is to select an appropriate classification algorithm for the aforementioned sub-datasets. Considering the cost of calculation and the precision of results, in this study we choose three typical classification algorithms, neural net, logistic, and C5.0[23], as the basic algorithms to build the hybrid model.

    The classification of a new instance x is made by voting over all classifiers {CF_t}, each with a weight α_t, where t ∈ {neural net, logistic, C5.0}. The final prediction can be written as:

    CF(x) = argmax_c Σ_t α_t · I(CF_t(x) = c),

    where α_t is a value in [0, 1] set according to the performance of CF_t, and I(·) is the indicator function. To simplify the calculation, α_t can be set to 1 for the best classifier and 0 for the others.
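The weighted-voting fusion rule above can be sketched as follows; the lambda stand-ins for the trained neural net, logistic, and C5.0 models are placeholders, and the winner-take-all weights follow the simplification just described:

```python
def ensemble_predict(x, classifiers, weights):
    """Weighted vote: return the class maximizing the sum of weights
    alpha_t of the base classifiers voting for it. `classifiers` maps a
    name t to a predict function; `weights` maps t to alpha_t in [0, 1]."""
    scores = {}
    for t, clf in classifiers.items():
        label = clf(x)
        scores[label] = scores.get(label, 0.0) + weights[t]
    return max(scores, key=scores.get)

# Placeholder base models: two vote "stay", but the best-performing one
# ("c50" here, weight 1.0 under winner-take-all weighting) says "churn".
clfs = {"neural": lambda x: "stay", "logistic": lambda x: "stay",
        "c50": lambda x: "churn"}
alphas = {"neural": 0.0, "logistic": 0.0, "c50": 1.0}
print(ensemble_predict({"bill": 320}, clfs, alphas))  # -> churn
```

With equal weights the rule degenerates to plain majority voting, so the α_t values are what encode the per-sub-dataset model selection.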

    3.5 Evaluation Method

    In this paper, the precision[21] and the receiver operating characteristic (ROC)[24] are used to evaluate the results.

    Given a set of prediction results made by a classifier, the confusion matrix of the two classes "true" and "false" is shown in Table 2. Here, the variables A, B, C, and D denote the numbers of true positive, true negative, false positive, and false negative results, respectively.

    Table 2: Results matrix predicted by the classifier

    The ROC is a graphical plot that illustrates the performance of a classifier system as its discrimination threshold is varied. ROC analysis originated from statistical decision theory in the 1950s and has been widely used in the performance assessment of classification[21]. The ROC curve is plotted by treating the true positive rate as the Y-axis and the false positive rate as the X-axis. The closer the ROC curve is to the upper left corner, the higher the accuracy of the model predictions. The area under the curve (AUC) can be used as a measure of the prediction effect. The value of the AUC generally ranges between 0.500 and 1.000, and the closer the area value is to 1.000, the better the prediction.
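As a sanity check on the AUC measure, it equals the probability that a randomly chosen positive instance is scored above a randomly chosen negative one (the Mann-Whitney statistic), which can be computed directly; the scores and labels below are invented toy data:

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the fraction of
    positive/negative pairs ranked correctly, counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy churn scores: positives mostly, but not always, ranked on top.
scores = [0.9, 0.8, 0.4, 0.85, 0.2, 0.1]
labels = [1,   1,   1,   0,    0,   0  ]
print(auc(scores, labels))  # 7 of 9 pairs ranked correctly -> 0.777...
```

A perfect ranking gives 1.000 and a random one about 0.500, matching the range stated above.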

    4. Experiment Results

    4.1 Data Set

    With intensifying market competition, maintaining existing customers has become the core marketing strategy for survival in the telecommunication industry. For better customer retention, it is necessary to predict who is about to churn in the future, so churn prediction is an important concern in the telecommunication industry. For instance, in the following experiments, the data is collected from China Mobile.

    Note that, due to the great uncertainty of consumer behavior and the scarce data recorded in companies' operation databases, the records generated by temporary customers and by customers who buy a SIM card and discard it soon after short-term consumption are removed. In total, 47735 customers are randomly selected from three main sub-branches located in three different cities. The observation period is from January 1st, 2008 to May 31st, 2008, and the extracted information concerns the users' activities in using the telecommunication services, such as contract data, consumer behaviors, and billing data.

    After data preprocessing such as data cleaning, integration, transformation, and discretization, the valid customer data covers 47365 customers (99.2% of the total number of samples, denoted as dataset X), of which 3421 users are churn customers (a churn rate of 7.2%). In the experiments, the data set X is separated into two parts: the training data generated from January 1st to March 31st, 2008, denoted by X1, and the test data generated from April 1st to May 31st, 2008, denoted by X2.

    The experiment platform is SPSS Clementine 12.0, which provides well-programmed software tools for the classification algorithms C5.0, logistic, and neural net.

    4.2 Attribute Selection and Dataset Splitting

    In total, 116 (n=116) variables recorded in the customer relationship management (CRM) system are extracted as the initial data set X.

    We implement the cosine-similarity-based k-means clustering method on the attribute vectors in X. Inspired by customer segmentation in marketing (in conjunction with the necessary experts' annotations), we cluster the common variables according to their relations in marketing practice. At last, the attributes are clustered into 4 (l=4) groups, and the 4 attributes of brand, area, age, and bill (having strong correlations with customer churn in the telecommunication industry) are chosen as the key attributes, respectively.

    Moreover, the values of these four attributes are split into 3 (k=3) sub-datasets, respectively, according to the SVD clustering results. The results are summarized in Table 3.

    Table 3: Subdivision categories of each variable

    4.3 Ensemble Model Construction

    In the following, four ensemble classifiers will be built according to the sub-datasets separated by four attributes of brand, area, age and bill.

    The classification algorithms of C5.0, logistic, and neural net algorithms are implemented on each sub-dataset for a series of repeated prediction experiments. The logic view of the ensemble classifier model construction is shown in Fig. 1.

    Fig. 1. Logic view of ensemble classifier models.

    For the attribute of brand, the training set X1 is firstly divided into three sub-datasets, namely GoTone, EasyOwn, and M-Zone, which account for 7.2%, 80.7%, and 12.1% of the customers, respectively. In the learning process, each subset is first separated into training and test sets at a ratio of 60.0% to 40.0%.

    Among all the classification results reported by each algorithm on the test dataset, the result with the largest area under the ROC curve (AUC) is selected as the basic model for that sub-dataset. The AUC results reported by the three models on each brand are shown in Table 4.

    The comparative results are shown in Fig. 2. The results in Table 4 show that the neural net algorithm works the best in the prediction of GoTone and EasyOwn sub-datasets, whereas the C5.0 works the best on the M-Zone sub-dataset.

    Table 4: AUC of prediction on brand sub-datasets

    Similarly, the performances of classification (prediction) on sub-datasets split by attributes of area, age, and bill are reported in Tables 5 to 7, respectively. Accordingly, the visualized results are shown in Figs. 3 to 5.

    4.4 Result Evaluation

    The previous experiments on the sub-datasets separated by different key attributes provide 4 hybrid models. Next, we use these four models to make predictions on dataset X2. Again, the measurements of precision and the ROC curve are used to evaluate the performance of each model.

    1) Comparison of precision

    The average prediction accuracy of each of the four hybrid models on X2 is summarized in Table 8. It shows that the highest precision (86.1%) is reported when using the key attribute of area for data segmentation to build a hybrid model, followed by the result generated with the attribute bill (85.9%). However, the performance of the hybrid models constructed with the attributes brand and age for data segmentation is lower (81.2% and 76.2%).

    Fig. 2. AUC of prediction on brand sub-datasets: (a) GoTone, (b) EasyOwn, and (c) M-Zone.

    Fig. 3. AUC of prediction on area sub-datasets: (a) area A, (b) area B, and (c) area C.

    Fig. 4. AUC of prediction on age sub-datasets: (a) net age low, (b) net age middle, and (c) net age high.

    Fig. 5. AUC of prediction on bill sub-datasets: (a) low consumption level, (b) middle consumption level, and (c) high consumption level.

    Table 5: AUC of prediction on area sub-datasets

    Table 6: AUC of prediction on age sub-datasets

    Table 7: AUC of prediction on bill sub-datasets

    Table 8: Prediction accuracy of the four hybrid models on test set X2

    2) Comparison of ROC

    The ROC curves for the prediction results provided by the four hybrid models on the test set X2 are shown in Fig. 6. The area under the ROC curve of each hybrid model is given in Table 9.

    Comparing the results in Fig. 6 and Table 9, we see that the two hybrid models constructed based on the attributes of area and bill generate better AUCs (0.888 and 0.855) than those based on brand and age (0.828 and 0.845).

    According to the experiment results, we can conclude that using the attribute of area as the segmentation variable gets the best prediction results, followed by those of the bill attribute, whereas the key attributes age and brand perform relatively poorly. Therefore, in the practice of customer churn prediction, it is recommended that telecommunication companies use the consumers' bill information as the key attribute to build the customer churn prediction hybrid model for each area separately. Moreover, it is necessary to strengthen brand management and to improve the customer segmentation effect of different brands.

    Fig. 6. ROC curve of prediction accuracy of the four hybrid models on test set X2.

    Table 9: AUC of prediction accuracy of the four hybrid models on test set X2

    3) Limitations

    The main idea of the method proposed in this work is to construct an ensemble classifier for higher precision and managerial insights. We should note some limitations of this work. First, there is a lack of criteria for how many base classifiers should be selected in the hybrid classifier. Second, the proposed method involves some time-consuming preprocessing steps in ensemble classifier construction, for example, the PCA and SVD methods, which cause higher computational complexity.

    5. Conclusions

    Classification analysis has been widely used in the study of decision problems. However, with the increasing complexity of modern management and the diversity of related data, the results provided by a single classifier often have poor semantics and are thus hard to understand in management practice, especially for prediction tasks with very complex data and managerial scenarios.

    Regarding the management issues of classification and prediction, an ensemble of single classifiers is an effective way to improve the prediction results. In order to solve the problems of poor precision and management semantics caused by ordinary ensemble classifiers, in this paper we proposed an ensemble classifier construction method based on the key attributes in the data set. The experimental results based on the real data collected from China Mobile show that the key-attributes-based ensemble classifier has advantages in both prediction accuracy and result comprehensibility.

    [1] M. J. Berry and G. S. Linoff, Data Mining Techniques: For Marketing, Sales, and Customer Support, New York: John Wiley & Sons, 1997, ch. 8.

    [2] Y. K. Noh, F. C. Park, and D. D. Lee, "Diffusion decision making for adaptive k-nearest neighbor classification," Advances in Neural Information Processing Systems, vol. 3, pp. 1934-1942, Jan. 2012.

    [3] X.-L. Xia and H.-H. Huang, "Robust texture classification via group-collaboratively representation-based strategy," Journal of Electronic Science and Technology, vol. 11, no. 4, pp. 412-416, Dec. 2013.

    [4] S. Archana and D. K. Elangovan, "Survey of classification technique in data mining," Intl. Journal of Computer Science and Mobile Applications, vol. 2, no. 2, pp. 65-71, 2014.

    [5] H. Grimmett, R. Paul, R. Triebel, and I. Posner, "Knowing when we don't know: Introspective classification for mission-critical decision making," in Proc. of IEEE Intl. Conf. on Robotics and Automation, Karlsruhe, 2013, pp. 4531-4538.

    [6] R. L. MacTavish, S. Ontanon, J. Radhakrishnan, et al., "An ensemble architecture for learning complex problem-solving techniques from demonstration," ACM Trans. on Intelligent Systems and Technology, vol. 3, no. 4, pp. 1-38, 2012.

    [7] T. G. Dietterich, "Ensemble methods in machine learning," in Multiple Classifier Systems, Berlin: Springer, 2000, pp. 1-15.

    [8] Z.-H. Zhou, Ensemble Methods: Foundations and Algorithms, Boca Raton: Chapman and Hall/CRC, 2012.

    [9] X.-Y. Hu, M.-X. Yuan, J.-G. Yao, et al., "Differential privacy in telco big data platform," in Proc. of the 41st Intl. Conf. on Very Large Data Bases, Kohala Coast, 2015, pp. 1692-1703.

    [10] H. S. Kim and C. H. Yoon, "Determinants of subscriber churn and customer loyalty in the Korean mobile telephony market," Telecommunications Policy, vol. 28, no. 9, pp. 751-765, 2004.

    [11] L. Tian, K.-P. Zhang, and Z. Qin, "Application of a Bayesian network learning algorithm in telecom CRM," Modern Electronics Technique, vol. 10, pp. 52-55, Oct. 2005.

    [12] A. Sharma and P. K. Panigrahi, "A neural network based approach for predicting customer churn in cellular network services," Intl. Journal of Computer Applications, vol. 27, no. 11, pp. 26-31, Aug. 2011.

    [13] Y.-Q. Huang, F.-Z. Zhu, M.-X. Yuan, et al., "Telco churn prediction with big data," in Proc. of ACM SIGMOD Intl. Conf. on Management of Data, Melbourne, 2015, pp. 607-618.

    [14] A. Ultsch, "Emergent self-organizing feature maps used for prediction and prevention of churn in mobile phone markets," Journal of Targeting, Measurement and Analysis for Marketing, vol. 10, no. 4, pp. 314-324, 2002.

    [15] A. Rodan, H. Faris, J. Alsakran, and O. Al-Kadi, "A support vector machine approach for churn prediction in telecom industry," Intl. Journal on Information, vol. 17, pp. 3961-3970, Aug. 2014.

    [16] W. H. Au, K. C. C. Chan, and X. Yao, "A novel evolutionary data mining algorithm with applications to churn prediction," IEEE Trans. on Evolutionary Computation, vol. 7, no. 6, pp. 532-545, 2003.

    [17] S. B. Kotsiantis and P. E. Pintelas, "Combining bagging and boosting," Intl. Journal of Computational Intelligence, vol. 1, no. 4, pp. 324-333, 2004.

    [18] N. C. Oza, "Online bagging and boosting," in Proc. of IEEE Intl. Conf. on Systems, Man & Cybernetics, Tucson, 2001, pp. 2340-2345.

    [19] C.-X. Zhang, J.-S. Zhang, and G.-W. Wang, "A novel bagging ensemble approach for variable ranking and selection for linear regression models," in Multiple Classifier Systems, F. Schwenker, Ed., Switzerland: Springer, 2015, pp. 3-14.

    [20] W. Drira and F. Ghorbel, "Decision Bayes criteria for optimal classifier based on probabilistic measures," Journal of Electronic Science and Technology, vol. 12, no. 2, pp. 216-219, 2014.

    [21] J. Han, M. Kamber, and J. Pei, Data Mining: Concepts and Techniques, 3rd ed., San Francisco: Morgan Kaufmann, 2011.

    [22] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay, "Clustering large graphs via the singular value decomposition," Machine Learning, vol. 56, no. 1, pp. 9-33, 2004.

    [23] T. Bujlow, T. Riaz, and J. M. Pedersen, "A method for classification of network traffic based on C5.0 machine learning algorithm," in Proc. of IEEE Intl. Conf. on Computing, Networking and Communications, Okinawa, 2012, pp. 237-241.

    [24] A. P. Bradley, "The use of the area under the ROC curve in the evaluation of machine learning algorithms," Pattern Recognition, vol. 30, pp. 1145-1159, Jul. 1997.
