
    SMK-means: An Improved Mini Batch K-means Algorithm Based on Mapreduce with Big Data

2018-10-09 08:45:26
Computers, Materials & Continua, September 2018

    Bo Xiao, Zhen Wang, Qi Liu and Xiaodong Liu

    Abstract: In recent years, big data technology has developed rapidly and attracted growing attention from scholars. Massive data storage and computation problems have largely been solved, but outlier detection in massive data has emerged alongside them, and much research work has been devoted to it. Because the existing methods have high computation time, this paper presents an improved outlier detection algorithm with higher detection performance. The proposed SMK-means is a fusion algorithm that combines Mini Batch K-means with the simulated annealing algorithm for anomaly detection in massive household electricity data; it can determine the number of clusters, reduce the number of iterations, and improve the accuracy of clustering. Several experiments are performed to compare and analyze multiple aspects of the algorithm's performance. The analysis shows that the proposed algorithm is superior to the existing algorithms.

    Keywords: Big data, outlier detection, SMK-means, Mini Batch K-means, simulated annealing.

    1 Introduction

    Nowadays, electric energy has become the main form of energy utilization. With the rapid growth of China’s economy, social electricity consumption has increased year by year, but the supply of electric energy still cannot fully meet the needs of economic development. Improving the efficiency of power utilization and exploring its untapped potential are therefore central goals of data mining in the power sector. In our country, for many reasons, such as obsolete public or household electric equipment and consumers’ weak awareness of energy saving, electric energy has not been fully utilized. In addition, according to statistics of the State Grid, electricity theft in recent years has caused losses of tens of millions of dollars, and it has developed from crude, overt theft into more specialized, more covert, and larger-scale operations carried out with intelligent equipment. These forms of abnormal power consumption bring heavy economic losses to both business users and household users.

    Therefore, in order to use electric energy more effectively and protect the rights and interests of power consumers, anomaly analysis of household electricity data must be realized effectively, so that corresponding measures can be taken to reduce wasted energy according to the identified anomalies. Applying data mining to the recognition of abnormal household electricity consumption saves many unnecessary losses and also promotes better management by the State Grid. As the amount of electricity used by home users increases, a large volume of data is produced. The cloud platform provides a distributed storage system for this massive data; by mining and analyzing it, we detect anomalies in the data set and thereby identify abnormal behavior in users’ electricity consumption.

    Cloud computing and data mining have received extensive attention at home and abroad. Because the demand for and consumption of electricity keep increasing, power data also grows rapidly, so mining it consumes more computing resources [Shanmugam and Singh (2017); Yildiz (2015)]. Many distributed computing frameworks in cloud computing, such as Hadoop MapReduce, Apache Spark, and Apache Flink, have obvious advantages for the calculation of real-time or offline massive data; cloud computing is fault tolerant and relies on YARN cluster management for high availability to guard against node downtime. Therefore, cloud computing plays a huge role in mining power data [Chen, Li, Tang et al. (2017)].

    With the rapid development of computer and Internet technology, cloud computing has emerged. With the powerful computing power of its distributed platform, cloud computing has brought a completely new computing experience to the processing of massive data, and data mining can release more of its potential in the cloud environment. This combination not only provides distributed storage and sharing through the Hadoop Distributed File System (HDFS), but also provides fast distributed computing and efficient data processing [Kumari, Kapoor and Singh (2016)]. In addition, there are many distributed computing frameworks for processing massive data, such as Hadoop MapReduce, Spark, Storm, and Flink. In distributed computing, a job is divided into many tasks, each performed by one or more computer nodes, which is highly efficient for processing both offline massive data and real-time data. Distributed computing offers resource sharing and load balancing, which reduces the computing burden on any single server. MapReduce uses a “Map” phase and a “Reduce” phase to handle the distributed processing of large data sets.
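The Map/Reduce pattern described above can be illustrated with a minimal, non-distributed word-count sketch; the function names and data here are ours, chosen for illustration only, not part of any Hadoop API:

```python
from collections import defaultdict

def map_phase(record):
    """Map: split one input record into (key, value) pairs."""
    return [(word, 1) for word in record.split()]

def reduce_phase(key, values):
    """Reduce: aggregate all values emitted for one key."""
    return key, sum(values)

def run_job(records):
    # Shuffle: group the intermediate pairs by key, as the framework
    # would do between the Map and Reduce stages.
    grouped = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            grouped[key].append(value)
    return dict(reduce_phase(k, vs) for k, vs in grouped.items())

counts = run_job(["big data", "big clusters"])  # → {"big": 2, "data": 1, "clusters": 1}
```

In a real cluster the Map and Reduce calls run on different nodes and the shuffle moves data across the network; this sketch only shows the data flow.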

    The random initialization of the Mini Batch K-means algorithm has an impact on the clustering effect. SMK-means is proposed as an optimization of the Mini Batch K-means algorithm: it is not only suitable for processing massive data, but also improves the clustering effect and avoids trapping the algorithm in a local optimum. The SMK-means algorithm is divided into the following steps: first, apply the simulated annealing algorithm to reduce the number of iterations; second, run the parallelized Mini Batch K-means algorithm on Hadoop; third, calculate outlier scores with a distance function.

    In this paper, the contribution of the improved algorithm is shown as follows:

    • A novel algorithm is proposed for anomaly detection.

    • The improved algorithm combines a probability-based algorithm with a clustering algorithm, namely the Mini Batch K-means algorithm based on the simulated annealing algorithm.

    • Based on the cloud platform, the improved anomaly detection algorithm is implemented, and the SMK-means algorithm is parallelized and distributed.

    The rest of this paper is structured as follows: Section II summarizes the research status of outlier detection. The proposed method is introduced in detail in Section III. Section IV reports the experimental environment, algorithm implementation, performance analysis and comparison, and the evaluation of the methods, while the conclusion and future work are covered in Section V.

    2 Related work

    With the advent of the third scientific and technological revolution, electronic technology, atomic energy technology, and bioengineering have continued to develop, and as a result the demand for electrical energy has become increasingly strong. To date, electrical energy is still an indispensable energy source in human life, and research on electric energy at home and abroad has generated a great deal of interest. Whether for industrial electricity, domestic electricity, or bioelectricity, there are many research outputs [Yan, Zhu and Tang (2015); Li and Wang (2013)]. In this section, we describe the research status of household electricity outlier detection, introduce relevant research work, and focus on clustering algorithms for outlier detection. The algorithms proposed so far may have high computational complexity, or may reduce computation time, but they do not perform well on high-dimensional data sets and cannot reduce calculation time while ensuring accuracy.

    To avoid anomalies such as California’s power crisis of 2000 and 2001, the authors attempted to predict abnormalities using advanced machine learning algorithms, particularly a Change Point Detection (CPD) algorithm on electricity prices during the California power crisis. To cope with the expensive computation over large amounts of data, the Gaussian process (GP) on one-dimensional time series data is accelerated. This algorithm makes it possible to use hourly price data to detect change points during the California power crisis [Gu, Choi, Gu et al. (2013)].

    A novel self-adaptive data-shifting-based method for one-class SVM (OCSVM) hyperparameter selection was proposed, which generates a controllable number of high-quality pseudo-outlier data around the target data via efficient edge pattern detection and a “negative shifting” mechanism; this can effectively regulate the OCSVM decision boundary for an accurate description of the target data.

    A large-scale network traffic monitoring and analysis system based on Hadoop, an open-source distributed computing platform for big data processing on commodity hardware, was proposed in 2014. The system has been deployed in the core network of a large cellular network and has been widely evaluated. The results show that the system can effectively handle 4.2 TB of data from a 123 Gb/s link each day with high performance and low cost [Liu, Liu and Ansari (2014)].

    A new incremental and distributed classifier based on the popular nearest neighbor algorithm was proposed. The method is implemented in Apache Spark and includes distributed metric-space ordering to perform faster searches. In addition, an efficient incremental instance selection method was proposed for continuously updating large-scale data streams and eliminating outdated examples from the case base. This alleviates the high computational requirements of the original classifier, making it suitable for the problem under consideration. Experimental studies on a set of real massive data streams demonstrate the effectiveness of the proposed solution [Ramírez-Gallego, Krawczyk, García et al. (2017)].

    In Song et al. [Song, Rochas, Beze et al. (2016)], the authors compare different MapReduce-based KNN algorithms and evaluate them through a combination of theory and experiment. To compare the solutions, they identify three general steps for KNN computation on MapReduce: data preprocessing, data partitioning, and computation, and analyze each step in terms of load balancing, accuracy, and complexity. Various data sets were used in the experiments, and the influence of data volume, data dimension, and the value of k was analyzed from the angles of time complexity, space complexity, and precision. The experiments reveal the advantages and disadvantages of each algorithm.

    In this paper, we employ the SMK-means algorithm, which combines Mini Batch K-means with the simulated annealing algorithm. Meanwhile, SMK-means optimizes the objective function, which not only reduces the calculation time but also improves the accuracy of the algorithm.

    3 The proposed method

    In this section, we introduce the method proposed in this paper. The first part introduces data preprocessing and feature engineering. The related concepts are described in detail in the second part, and the SMK-means algorithm is explained in detail in the third part.

    3.1 Data preprocessing

    Feature Engineering: When data preprocessing is complete, it is necessary to select meaningful features as input to the chosen model for training [Panigrahy, Santra and Chattopadhyay (2017)]. In general, features are selected from two perspectives:

    • Divergence of the feature: if a feature is not divergent, for example its variance is close to 0, the samples are basically identical on this feature, so it is not useful for discriminating between samples.

    • Relevance of the feature to the target: this is more obvious; features that are highly relevant to the target should be preferred.
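The two criteria above can be sketched as a small filter over numeric feature columns; the thresholds, column names, and correlation measure (Pearson) are illustrative assumptions, not the paper's exact procedure:

```python
import statistics

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def select_features(columns, target, var_threshold=1e-3, corr_threshold=0.1):
    selected = []
    for name, values in columns.items():
        # Criterion 1: drop features with near-zero divergence (variance).
        if statistics.pvariance(values) <= var_threshold:
            continue
        # Criterion 2: drop features weakly relevant to the target.
        if abs(pearson(values, target)) <= corr_threshold:
            continue
        selected.append(name)
    return selected

kept = select_features(
    {"constant": [1, 1, 1, 1],      # no divergence -> dropped
     "usage":    [1, 2, 3, 4],      # tracks the target -> kept
     "noise":    [1, -1, -1, 1]},   # uncorrelated with target -> dropped
    target=[2, 4, 6, 8])            # → ["usage"]
```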

    This gives a preliminary basis for deciding whether each existing feature is selected. Next, because the raw data features are limited and clustering-related algorithms benefit from more features, additional features suited to clustering are constructed. Therefore, before modeling the data, time-based features are built from the timestamps in the data set; this is also a discretization of the data. For example, whether a reading falls on a weekend or a workday, and its season and time period, are all related to the power consumption of home users. The constructed data features are shown in Tab. 1.

    Table 1: Data characteristics table
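The time-based discretization described above can be sketched as follows; the season and time-period boundaries here are illustrative assumptions, since the paper's exact category definitions are given in Tab. 1:

```python
from datetime import datetime

def time_features(ts):
    """Discretize one timestamp into weekend/season/period flags
    (hypothetical boundaries, for illustration only)."""
    month_to_season = {12: "winter", 1: "winter", 2: "winter",
                       3: "spring", 4: "spring", 5: "spring",
                       6: "summer", 7: "summer", 8: "summer"}
    season = month_to_season.get(ts.month, "autumn")
    if 6 <= ts.hour < 12:
        period = "morning"
    elif 12 <= ts.hour < 18:
        period = "afternoon"
    else:
        period = "night"
    return {"is_weekend": ts.weekday() >= 5,  # Saturday=5, Sunday=6
            "season": season,
            "period": period}

features = time_features(datetime(2018, 9, 8, 20, 30))  # a Saturday evening
```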

    Data Preprocessing: Because different attributes have different data indexes and magnitudes, multiple indexes or scales appear in one data set, so the data set must be standardized. The main way to achieve this is to scale the data proportionally so that it falls into a fixed range. Especially when comparing or evaluating indicators, the unit attributes of the data need to be weakened to convert the data into dimensionless pure values, in order to compare and weight indicators of different units or orders of magnitude. We process the original data set to accommodate our proposed method before applying it, by intercepting partial sample data from the original data set, removing noise, and normalizing the sample data. In this paper, we standardize the sample data by mean and standard deviation, so that the processed data conforms to the standard normal distribution. The standardized data set is shown in Fig. 1.

    Figure 1: Standardized data set
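The standardization step above is a z-score transform: subtract the mean and divide by the (population) standard deviation, so the result has zero mean and unit variance. A minimal sketch:

```python
def standardize(values):
    """Z-score standardization of one numeric feature column."""
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / std for v in values]

z = standardize([2.0, 4.0, 6.0, 8.0])
# z now has mean 0 and variance 1
```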

    3.2 Basic clustering algorithm

    The idea of the basic K-means algorithm is simple. The constant k, the number of final clusters, is determined in advance. Initial points are randomly selected as centroids, the similarity between each sample and each centroid is calculated (Euclidean distance), sample points are assigned to the most similar class, and then the centroid of each class (the class center) is recalculated. This process is repeated until the centroids no longer change, finally determining the category of each sample and the centroid of each class. Since the similarity between every sample and every centroid is calculated in each iteration, the convergence of the K-means algorithm is slow on large-scale data sets [Yang (2017); Joshi, Sabitha and Choudhury (2017)].

    The Mini Batch K-means algorithm is a variant of the standard K-means clustering algorithm [Cho and An (2014); Newling and Fleuret (2016); Feizollah, Anuar, Salleh et al. (2015)]. Through the idea of “divide and conquer”, the data is logically divided into multiple small batch subsets. In other words, the algorithm does not perform calculations on all data samples; instead it randomly extracts a subset of the data each time it is trained, which greatly reduces the computation time. At the same time, Mini Batch K-means also optimizes the objective function. The objective function is as follows:

    SSE = Σ_{i=1}^{k} Σ_{x_j ∈ C_i} dist(c_i, x_j)²

    Here k represents the number of clustering centers, c_i represents the i-th center, x_j represents a sample point, and dist represents the Euclidean distance. By summing the squared Euclidean distances, the optimization function, the sum of squared errors (SSE), is obtained. We compare the principles of the above two algorithms as shown in Fig. 2.

    Figure 2: Two different clustering algorithm calculations: (a) K-means, (b) Mini Batch K-means

    Figure 3: Two different clustering algorithms: (a) K-means algorithm; (b) Mini Batch K-means

    Compared with other related algorithms, Mini Batch K-means reduces the convergence time of K-means, although its clustering effect is slightly worse than the standard algorithm. The Mini Batch K-means algorithm implements clustering by performing the corresponding statistics on small batches of data in advance, whereas the standard clustering algorithm updates the center point one sample at a time. As shown in Fig. 3, the differences between the two algorithms appear both intuitively and in the calculation method, from which we see that Mini Batch K-means converges faster than the standard clustering algorithm and is more suitable for processing massive data.

    Mini BatchK-means is mainly divided into the following steps:

    (1) divide the data set into multiple batches randomly, and regard each small batch as a whole.

    (2) set the initial number of clusters.

    (3) allocate each small batch of data to the nearest cluster center.

    (4) update the cluster centers iteratively until they no longer change.

    Compared with the K-means algorithm, the Mini Batch K-means algorithm operates on small sample sets instead of single data points. For each small batch of data, the cluster centers are updated by calculating the average value, and the batch is allocated to the new cluster centers. As the number of iterations increases, the cluster centers gradually stabilize, and the calculation stops once they no longer change.
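The four steps above can be sketched in a simplified one-dimensional form, assuming scalar data, Euclidean distance, and per-cluster running-mean updates; a deterministic quantile initialization stands in here for a random (or annealing-based) choice of initial centers:

```python
import random

def mini_batch_kmeans(data, k, batch_size=10, iterations=200, seed=0):
    rng = random.Random(seed)
    ordered = sorted(data)
    # Step (2): initial centers (deterministic quantile pick, for illustration).
    centers = [ordered[i * len(data) // k] for i in range(k)]
    counts = [0] * k
    for _ in range(iterations):
        # Step (1): draw one random small batch and treat it as a whole.
        batch = rng.sample(data, min(batch_size, len(data)))
        for x in batch:
            # Step (3): assign the point to the nearest cluster center.
            i = min(range(k), key=lambda c: abs(x - centers[c]))
            counts[i] += 1
            # Step (4): move the center toward the per-cluster running mean.
            eta = 1.0 / counts[i]
            centers[i] += eta * (x - centers[i])
    return sorted(centers)

centers = mini_batch_kmeans([0.1, 0.2, 0.3, 9.8, 9.9, 10.0] * 5, k=2)
# the two centers settle near the two groups, around 0.2 and 9.9
```

The shrinking step size `eta = 1/counts[i]` makes each center exactly the mean of the samples ever assigned to it, which is the running-mean update the text describes.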

    The Mini Batch K-means algorithm is simple and easy to understand and implement; its clustering quality differs little from the standard clustering algorithm, while the concept of batching speeds up clustering, so the running time can be reduced while maintaining accuracy. Because it processes the data set in batches, it does not need to calculate over all data samples, extracting only some samples from the different categories of data. The amount of data to be calculated is much reduced, so the running time drops accordingly, and on large data its performance is superior to the standard clustering algorithm.

    3.3 SMK-means algorithm

    This section improves on the shortcomings of the Mini Batch K-means algorithm. When discussing Mini Batch K-means, we must mention the K-means algorithm: Mini Batch K-means is itself an optimization of K-means that largely retains the advantages of the standard algorithm while remedying its long computation time. Although Mini Batch K-means is suitable for massive data sets, greatly reduces calculation time, and accelerates convergence for a given k, its small-batch calculation reduces the accuracy of the algorithm. To improve that accuracy, and with it the accuracy of detecting power anomalies for home users, the SMK-means algorithm draws on the idea of the simulated annealing algorithm. Simulated annealing is a stochastic algorithm that does not necessarily find the global optimum, but it can quickly find an approximate optimal solution. Combined with Mini Batch K-means, it not only improves the accuracy of clustering but also greatly reduces the risk of being trapped in a local optimum. In addition, the algorithm is deployed in the cloud environment, so it is well suited to processing large amounts of data. Therefore, the SMK-means clustering algorithm is proposed; to better convey its core idea, some detailed explanations follow.
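The simulated-annealing idea can be sketched as follows for one-dimensional centers: perturb a candidate set of cluster centers, always accept moves that lower the SSE, and accept worse moves with probability exp(-Δ/T) so the search can escape local optima. The cooling schedule, step size, and 1-D setting are illustrative assumptions, not the paper's exact parameters:

```python
import math
import random

def sse(data, centers):
    """Sum of squared distances from each point to its nearest center."""
    return sum(min((x - c) ** 2 for c in centers) for x in data)

def anneal_centers(data, k, t0=10.0, cooling=0.95, steps=300, seed=1):
    rng = random.Random(seed)
    centers = rng.sample(data, k)
    cost, t = sse(data, centers), t0
    for _ in range(steps):
        # Perturb the candidate centers with small Gaussian noise.
        cand = [c + rng.gauss(0, 0.5) for c in centers]
        delta = sse(data, cand) - cost
        # Accept improvements always; accept worse moves with prob exp(-delta/t).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            centers, cost = cand, cost + delta
        t *= cooling  # cool down: worse moves become ever less likely
    return sorted(centers), cost

best_centers, best_cost = anneal_centers([0.1, 0.2, 0.3, 9.8, 9.9, 10.0] * 5, k=2)
```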

    The algorithm is implemented in the cloud environment; the computing framework adopted in this paper is MapReduce, which provides parallel and distributed computing. In Hadoop, the MapReduce framework treats each task as a Job, and each Job is divided into two execution stages, Map and Reduce. The Map function implements data filtering and distribution, and the Reduce function consolidates the Map-side results. A Combine process is nested in between: it links Map and Reduce and runs within the Map and Reduce stages. Each Map produces a large amount of output, and the role of Combine is to first merge the Map-side output to reduce the amount of data transferred to Reduce; its most basic function is local merging by key to reduce computation time. The output of Combine is the input of Reduce; in fact, Combine is a special Reduce. MapReduce splits the data set stored on the HDFS distributed file system into slices for the Map stage, performs an initial merge of the mapped slice data through Combine, and then enters the Reduce stage to complete the final merge of the data.

    Fig. 4 shows the parallelization of the SMK-means algorithm. The two dashed boxes in the figure are the two parallel parts of SMK-means. The parallelized simulated annealing algorithm selects the initial center points of the clustering and prevents the clustering from falling into a local optimum; the parallelized Mini Batch K-means implementation builds on the simulated annealing result, which removes the iterative search for cluster centers and, together with data batching, greatly improves the computational efficiency of the algorithm.

    Figure 4: Parallel implementation of SMK-means algorithm

    4 Experiments and evaluation

    4.1 Experiments analysis

    The implementation of the distributed SMK-means clustering algorithm in the cloud environment is mainly the design and implementation of the Map and Reduce functions. The SMK-means algorithm on the MapReduce framework can be divided into multiple subtasks, namely selecting the k value and running the Mini Batch K-means algorithm. Choosing the k value exploits the advantages of the simulated annealing algorithm to select the number of clusters in advance and effectively avoid falling into a local optimum. The main task of the Mini Batch K-means algorithm is to calculate the distance between each data object and the cluster centers; abnormal points can then be selected by clustering, and outlier scores calculated for them. Both parts are realized through distributed parallel computing: whether for the k value or for Mini Batch K-means, the distance calculations between data objects and cluster centers are independent and do not affect each other, and Hadoop's MapReduce framework completes both. Because the simulated annealing algorithm can predetermine the number of initial clusters, a value the Mini Batch K-means algorithm needs, the number of iterations of the algorithm is reduced.

    The experimental environment of this paper is implemented on Hadoop cluster with 9 nodes, including one master node and 8 slave nodes. Fig. 5 describes the layout of the experimental environment in this paper.

    Figure 5: Experimental environment layout

    The MapReduce parallelization of the Mini Batch K-means algorithm is basically the same as that of the standard algorithm. Based on the simulated annealing algorithm, the cluster centers are determined in advance, the number of iterations is reduced, and the centers are stored on HDFS. The data is divided into multiple batches, and each batch is split and enters the Map stage, which completes the preliminary data mapping. The data is then reduced locally through the Combiner process, and the Reduce process is finally completed, with the resulting intermediate results stored in HDFS. The MapReduce parallelization of the Mini Batch K-means algorithm is shown in Fig. 6.

    Figure 6: MapReduce parallelization flow chart of SMK-means algorithm

    4.2 Performance evaluation

    This section evaluates the parallelized algorithms. To test the accuracy and runtime of different algorithms, we run the K-means, Mini Batch K-means, and SMK-means algorithms in parallel. The experiments were performed on a 9-node Hadoop cluster with 1 master node and 8 slave nodes.

    • Comparison of algorithm accuracy

    The contour (silhouette) coefficient is an evaluation index that measures the quality of a clustering algorithm. Assume the data has been clustered; for a sample point x_i in a cluster, the formula for the contour coefficient is as follows:

    s(i) = (m(i) − a(i)) / max(a(i), m(i))

    where a(i) is the average distance from sample x_i to all other points in the cluster it belongs to, and m(i) is the minimum, over the other clusters, of the average distance from x_i to the points of that cluster. The range of the contour coefficient is [-1, 1]: the closer the value is to 1, the higher the cohesion within the cluster and the higher the separation between clusters. The comparison of the accuracy of the algorithms is shown in Fig. 7.

    Figure 7: Comparison of the accuracy of the algorithm
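The contour coefficient above can be computed per sample as in the following sketch, using explicit 1-D clusters and Euclidean distance; it assumes the sample's own cluster contains at least one other point:

```python
def silhouette(x, own_rest, other_clusters):
    """Contour coefficient of sample x.
    own_rest: the other points of x's own cluster (non-empty).
    other_clusters: list of the remaining clusters (each non-empty)."""
    # a(i): average distance to the other points of x's own cluster.
    a = sum(abs(x - p) for p in own_rest) / len(own_rest)
    # m(i): minimum over other clusters of the average distance to them.
    m = min(sum(abs(x - p) for p in c) / len(c) for c in other_clusters)
    return (m - a) / max(a, m)

# A well-placed point in a tight, well-separated cluster scores near 1;
# a point far from its own cluster scores negative.
good = silhouette(0.2, [0.1, 0.3], [[9.8, 9.9, 10.0]])
bad = silhouette(9.0, [0.1, 0.3], [[9.8, 9.9, 10.0]])
```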

    • Algorithm runtime comparison

    The three algorithms are run on different amounts of data; their running times are collected and compared in Fig. 8.

    Figure 8: Algorithm runtime comparison

    From Fig. 8, it can be seen that on the same amount of data the K-means algorithm takes more time than the other two algorithms, and as the amount of data increases, the gap between its running time and that of the other two gradually widens. Because the Mini Batch K-means algorithm processes data in batches, its computation time is lower than that of K-means, and the SMK-means algorithm, by predetermining the initial number of cluster centers, reduces the number of iterations and hence the computing time further.

    • Algorithm precision comparison

    The precision rate focuses on the predicted results: among the samples predicted as positive, how many are truly positive. There are two kinds of predicted positives: true positives (TP), which are positive samples predicted as positive, and false positives (FP), which are negative samples predicted as positive. The formula for calculating the precision rate is shown below:

    Precision = TP / (TP + FP)
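The precision formula above can be computed directly, treating flagged outliers as the positive class; the sample identifiers are illustrative only:

```python
def precision(predicted_positives, actual_positives):
    """Precision = TP / (TP + FP) over a list of flagged samples."""
    tp = sum(1 for x in predicted_positives if x in actual_positives)
    fp = len(predicted_positives) - tp
    return tp / (tp + fp)

# 4 points flagged as outliers, of which 3 are real outliers -> 0.75
p = precision(["x1", "x2", "x3", "x4"], {"x1", "x2", "x3"})
```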

    By marking different outliers on the same data set, we compare the precision of the three algorithms.

    Table 2: Algorithm precision comparison

    From Tab. 2, comparing the precision rates of the algorithms, the precision of the K-means algorithm is good but not stable enough overall, although compared with the Mini Batch K-means algorithm, K-means still has advantages. Compared with the other two algorithms, SMK-means is more stable in computing abnormal data, and its overall efficiency is better than that of the K-means and Mini Batch K-means algorithms.

    Meanwhile, algorithm complexity is a yardstick for data volume and running speed. The proposed algorithm is not only based on a big data platform, but also uses hierarchical down-sampling of the data, so it supports the calculation of massive data. Therefore, the complexity of the SMK-means algorithm is lower than that of the traditional clustering algorithm.

    5 Conclusion

    This paper presented a parallel implementation of the SMK-means algorithm on Hadoop. First, the preprocessing of the data set was introduced: handling missing values, constructing feature engineering, and standardizing the data. In the second part, the overall idea of the algorithm and its parallel realization were introduced: the number of cluster centers is initialized with the simulated annealing algorithm, realized in parallel on MapReduce, and then the parallelization of the Mini Batch K-means algorithm is implemented; both steps are accomplished through Map and Reduce operations, whose implementation was introduced in detail together with the corresponding pseudo code. Finally, by comparing the SMK-means algorithm with the K-means and Mini Batch K-means algorithms on accuracy, precision, and runtime, several performance indicators show that SMK-means is better than the other algorithms: it is more stable in accuracy, and its runtime is shorter. In summary, the SMK-means algorithm is not only suitable for processing massive data but also guarantees the accuracy of the algorithm, and for outlier detection it has relatively stable precision.

    Acknowledgement: This work is supported by the Major Program of the National Social Science Fund of China (Grant No. 17ZDA092), the Chinese Scholarship Council (No. 201608320062), the open project of the Jiangsu Engineering Technology Research Center of Environmental Cleaning Materials (No. KFK1506), a Marie Curie Fellowship (701697-CAR-MSCA-IF-EF-ST), and the PAPD fund.

久久精品国产鲁丝片午夜精品| 在线观看免费日韩欧美大片| 在现免费观看毛片| 亚洲人成77777在线视频| 黄片播放在线免费| 成人综合一区亚洲| 国产欧美日韩综合在线一区二区| 熟女人妻精品中文字幕| 亚洲av电影在线进入| 日日爽夜夜爽网站| 国产成人一区二区在线| 中文精品一卡2卡3卡4更新| 妹子高潮喷水视频| 国产永久视频网站| 精品久久久精品久久久| 99国产精品免费福利视频| 国产国拍精品亚洲av在线观看| 亚洲av在线观看美女高潮| 美女主播在线视频| 免费黄色在线免费观看| 久久99一区二区三区| 亚洲国产色片| 全区人妻精品视频| 日韩成人av中文字幕在线观看| 午夜福利乱码中文字幕| 色吧在线观看| 中文字幕免费在线视频6| 亚洲av中文av极速乱| 日韩 亚洲 欧美在线| 一级片免费观看大全| 黄色视频在线播放观看不卡| 亚洲第一区二区三区不卡| 国产亚洲精品久久久com| 伦理电影大哥的女人| 黄片播放在线免费| 伦精品一区二区三区| 午夜免费鲁丝| 国产一区有黄有色的免费视频| 男女下面插进去视频免费观看 | 一二三四在线观看免费中文在 | 日韩中字成人| 最近2019中文字幕mv第一页| 欧美精品一区二区大全| 国语对白做爰xxxⅹ性视频网站| 日本黄色日本黄色录像| 看免费av毛片| kizo精华| 国产视频首页在线观看| 日韩在线高清观看一区二区三区| 亚洲av电影在线观看一区二区三区| 亚洲国产精品国产精品| 国产成人91sexporn| 最后的刺客免费高清国语| 亚洲第一区二区三区不卡| 男女边吃奶边做爰视频| 国产成人精品一,二区| 免费不卡的大黄色大毛片视频在线观看| 乱人伦中国视频| 亚洲欧美成人综合另类久久久| 成人二区视频| 两性夫妻黄色片 | 国产成人欧美| 国产极品天堂在线| 一二三四在线观看免费中文在 | 一本色道久久久久久精品综合| 国产爽快片一区二区三区| 午夜av观看不卡| 女的被弄到高潮叫床怎么办| 国产精品一国产av| 亚洲国产欧美日韩在线播放| 免费在线观看黄色视频的| 99久久人妻综合| 亚洲av男天堂| 日韩视频在线欧美| 国国产精品蜜臀av免费| 欧美变态另类bdsm刘玥| 欧美另类一区| 久久久欧美国产精品| 国产片内射在线| 免费看不卡的av| 一本色道久久久久久精品综合| 狠狠精品人妻久久久久久综合| 五月伊人婷婷丁香| 飞空精品影院首页| 制服诱惑二区| 成人二区视频| 制服诱惑二区| 中文字幕制服av| 26uuu在线亚洲综合色| 九九爱精品视频在线观看| 日韩精品有码人妻一区| 亚洲精品一区蜜桃| 日日撸夜夜添| 国产淫语在线视频| 国产综合精华液| 成人免费观看视频高清| 亚洲高清免费不卡视频| 欧美日韩一区二区视频在线观看视频在线| 亚洲四区av| 亚洲激情五月婷婷啪啪| 激情视频va一区二区三区| 99香蕉大伊视频| 97在线人人人人妻| 国产精品国产三级国产av玫瑰| 成人综合一区亚洲| 美女xxoo啪啪120秒动态图| 99国产精品免费福利视频| 国产免费视频播放在线视频| 免费女性裸体啪啪无遮挡网站| 欧美精品亚洲一区二区| 免费观看a级毛片全部| 黑人巨大精品欧美一区二区蜜桃 | 国产爽快片一区二区三区| 成人国语在线视频| 亚洲欧美成人精品一区二区| 中文字幕av电影在线播放| 亚洲欧美精品自产自拍| 性色av一级| 亚洲国产精品专区欧美| 日韩欧美一区视频在线观看| 久久青草综合色| 国产有黄有色有爽视频| 精品一区二区三区视频在线| 街头女战士在线观看网站| 女人被躁到高潮嗷嗷叫费观| 精品一区二区免费观看| 免费黄频网站在线观看国产| 国产精品免费大片| a级片在线免费高清观看视频| 国产视频首页在线观看| 久久精品国产亚洲av涩爱| 一区二区三区四区激情视频| 国产av码专区亚洲av| 女性被躁到高潮视频| 精品少妇黑人巨大在线播放|