
    Abnormal user identification based on XGBoost algorithm

2018-12-20 09:02:06

    SONG Xiao-yu, SUN Xiang-yang, ZHAO Yang

    (School of Electronic and Information Engineering, Lanzhou Jiaotong University, Lanzhou 730070, China)

Abstract: The eXtreme gradient boosting (XGBoost) algorithm is used to identify abnormal electricity users. Firstly, the raw data were cleaned. Then user power-consumption features were extracted from different aspects. Finally, the XGBoost classifier was used to identify abnormal users in a balanced sample set and an imbalanced sample set, respectively. For comparison, the k-nearest neighbor (KNN) classifier, back-propagation (BP) neural network classifier and random forest classifier were applied to the same two sample sets with the same features. The experimental results show that the XGBoost classifier has a higher recognition rate and faster running speed; the performance improvement is especially obvious on the imbalanced data set.

    Key words: user identification; electricity characteristics; eXtreme gradient boosting (XGBoost); random forest

    0 Introduction

The identification of abnormal users is one of the most difficult problems in the traditional power industry. With the rapid development of the economy, electricity consumption in social production and people's daily life has increased year by year. Due to illegal use of electricity by some people, abnormal power consumption is becoming more and more serious[1], which not only brings economic losses to power supply enterprises, but also affects normal power supply and grid stability.

In the electric power industry, the strong smart grid is developing rapidly with unprecedented breadth and depth, and the arrival of the big data and cloud computing era injects new vitality into the traditional power industry[2]. The purpose of this paper is to use big data algorithms to intelligently identify abnormal electricity customers. With the help of big data technology, monitoring abnormal behavior and recognizing abnormal power users can reduce the cost of abnormal behavior analysis and improve the recognition rate.

Many domestic and foreign experts have applied data mining technology to the detection of electricity users[3-4]. In 2016, Zhang et al. extracted a variety of characteristic quantities that could characterize the user power consumption model from electricity consumption data: the power consumption sequences of all users were first mapped to a two-dimensional plane by dimension reduction, then the local outlier factor algorithm was used to calculate the outlier degree of each user, and finally the abnormal users were identified[5]. In 2015, Zhou et al. proposed a sparse coding model to mine users' original electricity data: combined with dictionary learning, user data were represented as linear combinations of features, and the frequency of each feature was used to judge abnormal values and distinguish abnormal behavior[6]. In 2012, Monedero et al. used the Pearson correlation coefficient to detect the typical electricity loss characterized by a sudden drop in consumption, and used Bayesian networks and decision trees for other types of loss[7]. However, in real life abnormal users are generally a minority, and the abnormal users confirmed by field investigation are even fewer, which leads to an imbalance of positive and negative samples. In previous supervised learning research, most researchers neither studied imbalanced sample sets nor optimized the running speed of their models. Moreover, the data records processed in the power industry often reach tens of millions, or even hundreds of millions, which makes most identification models run for a long time in practical applications.

In this paper, big data analysis algorithms are used to identify abnormal power users. Firstly, the raw data are pre-processed, including vacancy values and duplicate records. Then feature engineering is carried out from three aspects: statistical features, features on different time scales and correlation coefficient features. After that, the eXtreme gradient boosting (XGBoost) classifier is applied to the extracted features. Finally, the classification results are compared with those of the k-nearest neighbor (KNN) classifier, back-propagation (BP) neural network classifier and random forest classifier.

    1 XGBoost algorithm

XGBoost is an extension of the gradient boosting decision tree algorithm[8-9]. A boosting classifier belongs to the ensemble learning family; its basic idea is to combine hundreds of tree models, each with low classification accuracy, into a strong model. The model is iterated, and each iteration generates a new tree. How to generate a reasonable tree at each step is the core of the boosting classifier. The gradient boosting machine algorithm uses the idea of gradient descent when generating each tree: every new tree is built in the direction that minimizes the given objective function. Under reasonable parameter settings, a certain number of trees can reach the expected accuracy. XGBoost is an implementation of the gradient boosting machine that can automatically exploit multi-threaded CPU parallelism[10]. The specific theory is as follows.

Decision trees are widely used for classification because of their intuitive representation and reliable decision basis. However, the performance of a single decision tree is limited, so an ensemble of trees is usually used to improve it. Given a data set $D=\{(x_i,y_i)\}$ ($|D|=n$, $x_i\in\mathbb{R}^m$, $y_i\in\mathbb{R}$), the tree ensemble model is expressed as

$$\hat{y}_i=\sum_{k=1}^{K}f_k(x_i),\quad f_k\in F=\{f(x)=\omega_{q(x)}\}\ \big(q:\mathbb{R}^m\to T,\ \omega\in\mathbb{R}^T\big),$$

(1)

where $K$ is the number of trees in the ensemble, $F$ is the function space of regression trees, $x_i$ is the feature vector of the $i$-th data point, $q$ maps a sample to a leaf index, $T$ is the number of leaves of a tree, and each $f_k(\cdot)$ corresponds to an independent tree structure $q(\cdot)$ with leaf weights $\omega$.

The objective function consists of two parts, that is

$$Obj=\sum_{i=1}^{n}l(\hat{y}_i,y_i)+\sum_{k=1}^{K}\Omega(f_k),$$

(2)

$$\Omega(f)=\gamma T+\frac{1}{2}\lambda\|\omega\|^{2},$$

(3)

where $l(\cdot)$ is the training loss, $\Omega(\cdot)$ is the regularization term penalizing model complexity, and $\lambda$ and $\gamma$ are coefficients. The minimum value of the objective function is its optimal value.

In Eq.(2), the integrated decision tree model is trained additively by the boosting method: at each step, the model obtained so far is retained and one new function is added to it, that is

$$\hat{y}_i^{(t)}=\hat{y}_i^{(t-1)}+f_t(x_i),$$

(4)

$$Obj^{(t)}=\sum_{i=1}^{n}l\big(y_i,\hat{y}_i^{(t-1)}+f_t(x_i)\big)+\Omega(f_t)+C,$$

(5)

where $C$ collects the terms that are constant at step $t$.

The objective function can be optimized through $f_t(\cdot)$. When the error function $l(\cdot)$ is the squared error, the objective function can be written as

$$Obj^{(t)}=\sum_{i=1}^{n}\Big[2\big(\hat{y}_i^{(t-1)}-y_i\big)f_t(x_i)+f_t^{2}(x_i)\Big]+\Omega(f_t)+C.$$

(6)

For error functions other than the squared error, a second-order Taylor expansion is used. Defining the first- and second-order gradient statistics

$$g_i=\partial_{\hat{y}^{(t-1)}}l\big(y_i,\hat{y}^{(t-1)}\big),\quad h_i=\partial_{\hat{y}^{(t-1)}}^{2}l\big(y_i,\hat{y}^{(t-1)}\big),$$

(7)

the objective function is approximately defined as

$$Obj^{(t)}\simeq\sum_{i=1}^{n}\Big[l\big(y_i,\hat{y}_i^{(t-1)}\big)+g_i f_t(x_i)+\frac{1}{2}h_i f_t^{2}(x_i)\Big]+\Omega(f_t)+C.$$

(8)

After the constant terms are removed, a relatively uniform objective function is obtained as

$$Obj^{(t)}=\sum_{i=1}^{n}\Big[g_i f_t(x_i)+\frac{1}{2}h_i f_t^{2}(x_i)\Big]+\Omega(f_t).$$

(9)

    Based on the above theory, the XGBoost algorithm can be described as follows:

    1) Initialize a new decision tree.

    2) Calculate the first derivative and the second derivative of each training sample point.

    3) Calculate the residual of samples in function space.

4) Moving in the gradient direction that reduces the residual, generate a new decision tree by a greedy strategy.

    5) Add the newly generated decision tree to the model and update the model.

6) Return to step 2 and repeat until the objective function satisfies the stopping condition.
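The steps above can be sketched in a minimal pure-Python implementation with depth-1 trees (stumps). This is an illustrative toy, not the paper's setup: it assumes squared-error loss (so $g_i=2(\hat{y}_i-y_i)$, $h_i=2$), invented training data, and made-up values for the shrinkage `eta` and regularization `lam`; the leaf weight $-G/(H+\lambda)$ and the split gain follow the second-order objective of Eq.(9).

```python
def fit_stump(x, g, h, lam):
    """Greedily pick the split maximizing the structure-score gain (step 4)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    G, H = sum(g), sum(h)
    best_gain, thr = 0.0, None
    GL = HL = 0.0
    for j in range(len(x) - 1):
        i = order[j]
        GL += g[i]; HL += h[i]
        GR, HR = G - GL, H - HL
        gain = GL**2/(HL+lam) + GR**2/(HR+lam) - G**2/(H+lam)
        if gain > best_gain:
            best_gain = gain
            thr = (x[order[j]] + x[order[j + 1]]) / 2
    if thr is None:                      # no useful split: single leaf
        w = -G / (H + lam)
        return lambda v: w
    # optimal leaf weights w* = -G_leaf / (H_leaf + lambda)
    GL = sum(gi for xi, gi in zip(x, g) if xi <= thr)
    HL = sum(hi for xi, hi in zip(x, h) if xi <= thr)
    wl, wr = -GL/(HL + lam), -(G - GL)/((H - HL) + lam)
    return lambda v: wl if v <= thr else wr

def boost(x, y, rounds=50, eta=0.3, lam=1.0):
    pred = [0.0] * len(y)                          # step 1: start from zero
    trees = []
    for _ in range(rounds):                        # step 6: iterate
        g = [2*(p - t) for p, t in zip(pred, y)]   # step 2: g_i, squared loss
        h = [2.0] * len(y)                         #         h_i = 2
        tree = fit_stump(x, g, h, lam)             # steps 3-4: fit new tree
        trees.append(tree)                         # step 5: add to the model
        pred = [p + eta * tree(v) for p, v in zip(pred, x)]
    return lambda v: eta * sum(t(v) for t in trees)

model = boost([1, 2, 3, 10, 11, 12], [0, 0, 0, 1, 1, 1])
print(model(1.5), model(11.0))   # small values map near 0, large near 1
```

Each round re-derives the gradients from the current predictions, so the new tree always points in the direction that reduces the residual, exactly as steps 2)-5) describe.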

    2 Detection model

    2.1 Data preprocessing

The problems in the original data are missing values and repeated records. These problems affect the quality of the classification results, so data preprocessing is the first step of this study. Repeated records are deduplicated according to the key attributes. The treatment of missing values needs more careful consideration.

Methods for dealing with missing values include ignoring the record, removing the attribute, filling the vacancy manually, using default values, using mean values, and interpolating by means of Lagrange or Newton interpolation. The Newton interpolation method is used in this paper since it is computationally simpler than Lagrange interpolation: it not only avoids restarting the whole calculation when a node is added, but also saves multiplication and division operations.
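A sketch of Newton divided-difference interpolation as it would be used here, filling a missing daily meter reading from neighboring days. The day numbers and kWh readings are invented illustration data, not values from the paper.

```python
def newton_interp(xs, ys, x):
    """Evaluate the Newton interpolating polynomial through (xs, ys) at x."""
    n = len(xs)
    # Build the divided-difference table in place; coef[k] = f[x0,...,xk].
    # Adding a new node only appends one more coefficient - the property
    # that makes Newton's form cheaper than recomputing Lagrange weights.
    coef = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    # Horner-style evaluation of the Newton form
    result = coef[-1]
    for k in range(n - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

days = [1, 2, 4, 5]            # the day-3 reading is missing
kwh = [10.0, 12.0, 16.0, 18.0]
print(newton_interp(days, kwh, 3))  # linear trend -> 14.0
```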

    2.2 Feature extraction

    Feature extraction refers to the extraction of key features from the original data as needed. In order to extract more useful information, it is necessary to build new attributes on the basis of existing attributes. In this paper, feature extraction is carried out from the following aspects:

    1) Basic attribute characteristics

The statistical characteristics of each user ID are recorded, including the maximum, minimum, mean, variance, median, number of records, and so on. The number of records is closely related to the other statistical characteristics. These are the basic features of an electricity user.

    2) Characteristics on different time scales

One group of features is the user's electricity consumption in every three days, one month, three months and half a year; the other is the number of the user's records over the same time windows. The characteristics on different time scales provide important information for detecting different types of abnormal users.
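The two groups of window features can be sketched as a single aggregation pass: total consumption and record count per fixed-size window. The daily readings below are invented, with `None` standing for a missing record.

```python
def window_features(values, size):
    """Per-window consumption sum and record count over fixed-size windows."""
    sums, counts = [], []
    for start in range(0, len(values), size):
        win = [v for v in values[start:start + size] if v is not None]
        sums.append(sum(win))     # window consumption
        counts.append(len(win))   # window record count
    return sums, counts

daily = [1.0, 2.0, None, 4.0, 5.0, 6.0]   # None = missing record
print(window_features(daily, 3))  # -> ([3.0, 15.0], [2, 3])
```

The same function applied with window sizes of 3, 30, 90 and 180 days would yield both feature groups described above.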

    3) Similarity features on different time scales

The Pearson correlation coefficient is used to measure the correlation between two variables[11]. The Pearson correlation coefficients of the power consumption, the power starting reading and the power termination reading are calculated over windows of four weeks and five weeks.
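An illustrative Pearson correlation between two consumption series, such as a user's readings in two adjacent four-week windows. The series below are made-up numbers, not data from the paper.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

a = [3.0, 4.0, 5.0, 6.0]
b = [6.0, 8.0, 10.0, 12.0]      # b = 2a: perfectly correlated
print(round(pearson(a, b), 3))  # -> 1.0
```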

    3 Results and evaluation

    3.1 Data set

The sample data come from the State Grid Gansu Electric Power Company and contain 6.3 million electricity consumption records from 9 956 customers, 1 394 of whom have been identified as abnormal users through offline investigation; the rest are normal users. The data for each user include his daily electricity consumption as well as the consumption readings of the current day and the previous day. The proportion of positive to negative samples is about 1∶6, which makes feature extraction and recognition of abnormal users more difficult.

    3.2 Evaluation method

    3.2.1 Confusion matrix

Essentially, abnormal user identification is a binary classification problem: it divides all users into two categories, normal users and abnormal users. For a binary classification problem, the confusion matrix is a basic tool to evaluate the reliability of a classifier. The confusion matrix in Table 1 shows all possible classification results of the classifier: the rows correspond to the categories to which the objects actually belong, and the columns show the predicted categories. In this paper, “positive” and “negative” correspond to abnormal users and normal users, respectively.

    Table 1 Confusion matrix form

$V_{FP}$ refers to the type I error (false positives), and $V_{FN}$ refers to the type II error (false negatives). On the basis of the confusion matrix, the evaluation indices of classifiers can be derived as follows: accuracy, true positive rate and specificity. The accuracy ($V_{ACC}$) describes the overall classification accuracy of the classifier; the true positive rate ($V_{TPR}$), also called sensitivity, describes the sensitivity of the classifier; and the specificity ($V_{SPC}$) describes the proportion of negative samples correctly predicted as negative, that is

$$V_{ACC}=\frac{V_{TP}+V_{TN}}{V_{TP}+V_{TN}+V_{FP}+V_{FN}},\quad V_{TPR}=\frac{V_{TP}}{V_{TP}+V_{FN}},\quad V_{SPC}=\frac{V_{TN}}{V_{TN}+V_{FP}}.$$

(10)
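A small sketch computing the three Eq.(10) indices from raw confusion-matrix counts. The counts below are invented for illustration.

```python
def metrics(tp, fn, fp, tn):
    """Accuracy, true positive rate and specificity from a confusion matrix."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # V_ACC
    tpr = tp / (tp + fn)                    # V_TPR (sensitivity)
    spc = tn / (tn + fp)                    # V_SPC (specificity)
    return acc, tpr, spc

acc, tpr, spc = metrics(tp=80, fn=20, fp=10, tn=90)
print(acc, tpr, spc)  # -> 0.85 0.8 0.9
```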

    3.2.2 Receiver operating characteristic curve and area under curve

The receiver operating characteristic (ROC) curve describes the relative relationship between the growth rates of two indices from the confusion matrix: the false positive rate ($V_{FPR}$) and $V_{TPR}$[12]. For a binary classification model with continuous output, samples scoring above the threshold are classified as positive and samples below it as negative. Lowering the threshold recognizes more positive samples, so $V_{TPR}$ increases, but at the same time more negative samples are wrongly classified as positive, so $V_{FPR}$ also increases. The ROC curve represents this trade-off. The closer the curve approaches the point (0, 1), the better the classification effect. The area under the curve (AUC) indicates the quality of the classifier: a larger AUC represents better performance, and an AUC equal to 1 means an ideal classifier.
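AUC can be computed without drawing the curve, via the rank-sum (Mann-Whitney) identity: AUC equals the probability that a randomly chosen positive sample is scored above a randomly chosen negative one. The scores and labels below are made-up.

```python
def auc(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)        # ties count half
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0]
print(auc(scores, labels))  # 5 of 6 pairs ordered correctly -> 0.8333...
```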

    3.3 Experimental results

The software used in this experiment is R, running on a computer with an Intel Core i5-4210 CPU at 2.4 GHz, 8 GB of memory and Windows 10 x64. The experiments were carried out on two data sets using the KNN, BP neural network, random forest and XGBoost classifiers. One group of experiments was carried out on a balanced sample set, and the other on an imbalanced sample set. To ensure the fairness of the experiments, the feature values fed to each model were the same.

    3.3.1 Balanced sample set

The balanced sample set was composed of 1 394 positive samples and 1 394 negative samples. The former were the known abnormal users, and the latter were randomly selected from the 8 562 normal users. The balanced sample set was divided into five parts, four of which were training sets and one of which was a test set. The experiments were conducted five times under each recognition model. The KNN classifier took K=50. In the BP neural network, the number of hidden layer units was 8, the training algorithm was QuickProp, the algorithm parameters were 0.1, 2, 0.000 1 and 0.1, and the maximum number of iterations was 1 000. The XGBoost parameters were divided into three kinds: general parameters, booster parameters and learning target parameters. In this experiment, the booster shrinkage step size (eta) was set to 0.01 to prevent overfitting, the maximum iteration number (nrounds) was 1 500, the minimum sample weight of child nodes (min_child_weight) was 10, and the feature sampling ratio (colsample_bytree) was 0.8. In the learning target parameters, the objective function was binary logistic regression (binary:logistic) and the evaluation indicator was the mean average precision (map). The remaining parameters kept their default values. The results are shown in Table 2; the values are the averages of the five experiments.
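The parameter setting described above, written out as a hypothetical xgboost-style parameter dictionary; the names follow the xgboost library's conventions, and the values are those reported in the text.

```python
# Parameters as reported in the text (balanced-set experiment).
params = {
    "eta": 0.01,                    # shrinkage step size, against overfitting
    "min_child_weight": 10,         # minimum sample weight of child nodes
    "colsample_bytree": 0.8,        # feature sampling ratio
    "objective": "binary:logistic", # binary logistic regression
    "eval_metric": "map",           # mean average precision
}
nrounds = 1500                      # maximum number of boosting iterations
```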

    Table 2 Results of five experiments under different models

From Table 2 we can see that the average $V_{ACC}$, $V_{TPR}$, $V_{SPC}$ and running time of the KNN model are 69.53%, 49.42%, 89.64% and 234 s, respectively; of the BP neural network model, 73.24%, 70%, 76.47% and 232 s; of the random forest model, 79.60%, 75.83%, 83.38% and 249 s; and of the XGBoost model, 81.51%, 77.77%, 85.25% and 212 s. The XGBoost classifier is thus better than the other three classifiers. Figs.1-4 are the ROC curves of the KNN, BP neural network, random forest and XGBoost models, respectively, with AUC values of 69.54%, 73.24%, 79.61% and 81.52%. A higher AUC value represents a better classification effect.

    Fig.1 ROC curve of KNN model

    Fig.2 ROC curve of BP neural network model

    Fig.3 ROC curve of random forest model

    Fig.4 ROC curve of XGBoost model

    3.3.2 Imbalanced sample set

The imbalanced sample set consisted of 1 394 positive samples and 8 562 negative samples. The former were the known abnormal users, and the latter were all the normal users. The imbalanced sample set was divided into five parts, four of which were training sets and one of which was a test set. The experiments were conducted five times under each recognition model. The KNN classifier took K=100. In the BP neural network, the number of hidden layer units was 5, the training algorithm was QuickProp, the algorithm parameters were 0.1, 2, 0.000 1 and 0.1, and the maximum number of iterations was 100. The XGBoost parameters were divided into three kinds: general parameters, booster parameters and learning target parameters. In this experiment, the booster shrinkage step size (eta) was 0.01 to prevent overfitting, the maximum iteration number (nrounds) was 1 500, the minimum sample weight of child nodes (min_child_weight) was 10, and the feature sampling ratio (colsample_bytree) was 0.8. In the learning target parameters, the objective function was binary logistic regression (binary:logistic) and the evaluation indicator was the mean average precision (map). The remaining parameters kept their default values. The results are shown in Table 3; the values are the averages of the five experiments.

Table 3 Results of five experiments under different models

It can be seen from Table 3 that the average $V_{ACC}$, $V_{TPR}$, $V_{SPC}$ and running time of the KNN model are 88.71%, 24.78%, 98.95% and 256 s, respectively; of the BP neural network model, 80.79%, 65.10%, 83.26% and 244 s; of the random forest model, 91.34%, 48.87%, 97.95% and 308 s; and of the XGBoost model, 84.41%, 80.68%, 85.02% and 233 s. The overall performance of the XGBoost classifier is better than that of the other three classifiers; in particular, its sensitivity is far beyond theirs. When the proportion of positive and negative samples in a data set is imbalanced, the reference value of $V_{ACC}$ and $V_{SPC}$ is reduced, and AUC better represents the performance of the classifier. For example, when the ratio of positive to negative samples is 99∶1, a classifier only needs to predict all samples as positive for $V_{ACC}$ to reach 99%. Figs.5-8 are the ROC curves of the KNN, BP neural network, random forest and XGBoost models, respectively, with AUC values of 61.87%, 74.18%, 73.41% and 82.85%.

    Fig.5 ROC curve of KNN model

    Fig.6 ROC curve of BP neural network model

    Fig.7 ROC curve of random forest model

    Fig.8 ROC curve of XGBoost model

    3.4 Analysis

From the above tables and figures, it can be seen that the XGBoost model outperforms the KNN, BP neural network and random forest models on both the balanced and the imbalanced data sets. In particular, from the balanced data set to the imbalanced data set, the classification effects of the KNN and random forest models decrease significantly, while that of the XGBoost model does not decrease but slightly improves. Furthermore, the time spent by the XGBoost model is significantly less than that of the other three models.

Compared with the single-model KNN algorithm, XGBoost is an ensemble algorithm: it combines multiple weak classifiers and thereby shows better generalization ability and classification effect. The BP neural network is an excellent and widely used classifier, but it requires careful parameter tuning and has some disadvantages, such as slow convergence, a tendency to fall into local minima and difficulty in reaching the global optimum. Random forest and XGBoost are both ensemble algorithms. Although both use column sampling to reduce overfitting and computation, they differ elsewhere: random forest adopts the bagging idea while XGBoost adopts the boosting idea, and bagging uses sampling with replacement whereas boosting samples according to the error rate. Moreover, XGBoost takes the complexity of the model as the regularization term of the objective function to optimize the model. Therefore, the classification performance of XGBoost is better than that of random forest.

In terms of running time, a single classifier generally consumes less time than an ensemble classifier, yet the XGBoost classifier uses less time than the KNN classifier and the BP neural network, and it is far faster than the random forest, which is also an ensemble classifier. Compared with the other three models, the time advantage of the XGBoost classifier is mainly attributable to the following two aspects.

1) The XGBoost algorithm performs a second-order Taylor expansion of the cost function and uses both the first- and second-order derivatives, which greatly speeds convergence toward the optimum.

    2) One of the most time-consuming steps of tree learning is to rank the values of the features to determine the best segmentation points. Before training, the XGBoost algorithm sorts the data in advance, and then saves them as a block structure. The structure is used repeatedly in the subsequent iterations, which greatly reduces the amount of computation.
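The block idea in point 2) can be illustrated in a few lines: each feature's sample indices are sorted by value once before training, and every subsequent split search walks that stored order instead of re-sorting. The tiny feature matrix is invented for illustration.

```python
features = [[3, 1, 2], [9, 7, 8]]   # two features over three samples

# One-time pre-sort ("block"): per feature, sample indices in value order.
blocks = [sorted(range(len(col)), key=col.__getitem__)
          for col in features]
print(blocks)  # -> [[1, 2, 0], [1, 2, 0]]

def split_candidates(col, order):
    """Enumerate split thresholds in sorted order without re-sorting."""
    return [(col[order[j]] + col[order[j + 1]]) / 2
            for j in range(len(order) - 1)]

# Every boosting iteration reuses the same blocks, saving the sort.
print(split_candidates(features[0], blocks[0]))  # -> [1.5, 2.5]
```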

    4 Conclusion

Today, electricity is used everywhere. The development of power big data will accelerate the development of the power industry and the innovation of its business models, achieving the transformation from production-oriented to data-oriented operation. This study applies big data technology: based on statistical features, features on different time scales and correlation coefficient features, the XGBoost classifier is used to identify abnormal users. Compared with the KNN, BP neural network and random forest models, the XGBoost model achieves good results. To deal with the huge amount of data in the power industry, a distributed implementation of the XGBoost algorithm will be the next step of our research.
