
    Research on Optimization of Random Forest Algorithm Based on Spark

2022-08-24 03:30:52
Computers, Materials & Continua, 2022, Issue 5

Suzhen Wang, Zhanfeng Zhang*, Shanshan Geng and Chaoyi Pang

1Hebei University of Economics and Business, Shijiazhuang, 050061, China

2Griffith University, Brisbane, 4222, Australia

Abstract: As society has developed, increasing amounts of data have been generated by various industries. The random forest algorithm, as a classification algorithm, is widely used because of its superior performance. However, when generating feature subspaces, the random forest algorithm uses a simple random sampling feature selection method that cannot distinguish redundant features, which affects its classification accuracy and results in low computational efficiency in stand-alone mode. In response to these problems, related optimization research was conducted with Spark in the present paper. The improved random forest algorithm performs feature extraction according to the calculated feature importance to form a feature subspace. When generating the random forest model, it selects decision trees based on the similarity and classification accuracy of different decision trees. Experimental results reveal that, compared with the original random forest algorithm, the improved algorithm proposed in the present paper exhibited a higher classification accuracy rate and could effectively classify data.

Keywords: Random forest; Spark; feature weight; classification algorithm

    1 Introduction

The rapid development of the Internet has led to the continuous generation of different types of data in various industries. As such, the question of how to obtain valuable information from massive data through data mining has become a focus of attention. As a classification algorithm in data mining, the random forest algorithm [1] is widely used in credit evaluation [2–4], image classification [5,6], and text classification [7], among others. This can be attributed to its ability to avoid over-fitting while remaining insensitive to noise values. Research into the random forest algorithm has mainly focused on the relationship between the classification interval of the classifier and the analysis of its generalization ability, with a random forest pruning algorithm based on classification-interval weighting proposed in [8]. Further, an interval-weighted voting method was advanced [9], and weighted random sampling was used to filter out features with minimal information during the selection process [10]. In order to select important features for decision tree splitting, weights were assigned to feature values and subspaces were constructed according to the mean of the weights, thereby improving the classification effect of the algorithm [11].

As a memory-based distributed computing platform, Spark offers significantly greater computational efficiency, and the high parallelism of the random forest algorithm makes it well suited to the Spark platform for parallel computing, increasing the computing speed of the algorithm and the efficiency of data processing.

Several scholars have conducted research on the random forest algorithm based on Spark. A method relying on similarity calculations and feature similarity graphs was proposed to optimize the performance of random forests from the perspective of feature selection [12]. Additionally, a random forest algorithm based on rough set theory was also considered [13]; this algorithm calculated the importance of attributes based on a discernibility matrix and selected the several attributes with the highest importance to form the feature subspace. Related research has also been conducted on random forests for processing high-dimensional data, and a hierarchical subspace implementation was proposed to address the issue that random forest algorithms cannot distinguish feature correlation when processing high-dimensional data [14].

In the present study, based on the Spark framework, the feature subspace generation of the random forest algorithm was taken as a starting point, and feature subspace generation was optimized to improve the random forest algorithm.

    The main research conducted for the present paper includes the following parts:

a. Theoretical research on the random forest algorithm

b. Research on optimizing the feature subspace of the random forest algorithm

c. Parallel implementation of the random forest algorithm based on Spark.

    2 Related Work

    2.1 Introduction to Spark

As a distributed computing and processing platform similar to Hadoop, Spark uses the concept of shuffle to implement distributed computing. Spark differs from Hadoop in that calculations are performed in memory and the intermediate outputs and results of tasks are saved in memory, saving a significant amount of disk-access overhead and effectively improving the efficiency of data reading and writing.

    The basic ecosystem of Spark is shown in Fig.1.

    As shown in Fig.1, the Spark ecosystem primarily includes:

Spark SQL: primarily used for data query, analysis, and processing;

Spark Streaming: primarily used for stream computing;

Spark MLlib: primarily used for machine learning;

Spark GraphX: primarily used for graph computing.

    The components above and Apache Spark form the Spark ecosystem.

The most basic data structure in Spark is the Resilient Distributed Dataset (RDD), which is its most fundamental operational unit [15]. This data set is scalable and supports parallel processing. RDDs adopt lazy evaluation and support two kinds of operations: transformations and actions. Computation only proceeds when the results actually need to be returned, and this lazy operation mode effectively improves calculation efficiency. In addition, Spark provides users with rich and simple operations, including creation, transformation, control, and movement operations, allowing developers to fully utilize these operators on RDDs. Due to the high computing efficiency inherent in Spark, applied research on Spark is considerably extensive, including large-scale text analysis [16], large-scale data mining [17], etc.

Figure 1: Spark ecosystem
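The transformation/action distinction and its lazy evaluation can be illustrated with a minimal pure-Python sketch. This is an analogy, not the actual Spark API: `map` and `filter` only record the computation, and nothing runs until the `collect` action is called.

```python
# Minimal pure-Python sketch of Spark's lazy RDD model (not the real API).
# "Transformations" (map, filter) only record the computation;
# an "action" (collect) triggers execution of the whole pipeline.

class LazyRDD:
    def __init__(self, data, ops=None):
        self.data = list(data)
        self.ops = ops or []          # recorded transformations, not yet run

    def map(self, f):                 # transformation: returns a new lazy RDD
        return LazyRDD(self.data, self.ops + [("map", f)])

    def filter(self, p):              # transformation: also lazy
        return LazyRDD(self.data, self.ops + [("filter", p)])

    def collect(self):                # action: only now is the pipeline executed
        out = self.data
        for kind, f in self.ops:
            out = [f(x) for x in out] if kind == "map" else [x for x in out if f(x)]
        return out

rdd = LazyRDD(range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())  # [0, 4, 16, 36, 64]
```

Real Spark RDDs behave analogously (`rdd.map(...).filter(...)` builds a lineage graph that only executes on an action such as `collect` or `count`), with the added benefits of partitioning and fault tolerance.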

    2.2 Random Forest Algorithm

The random forest algorithm is an ensemble algorithm that uses decision trees as its classifiers; a random forest is composed of multiple decision trees. As a classifier, a decision tree resembles an inverted tree structure. Starting from the root node, nodes are split according to split rules until the decision tree is generated. In a decision tree, internal nodes split the data samples according to a certain attribute, each leaf node represents a class label, and the path from the root node to a leaf node represents a particular decision process [18].

A decision tree has the advantages of a simple, easily implemented model, yet it is prone to defects such as over-fitting and the inability to guarantee a global optimum when classifying. Therefore, to address these characteristics of the decision tree algorithm, the random forest algorithm was created. The random forest algorithm combines multiple decision trees to improve classification accuracy and avoid phenomena such as over-fitting [19].

The random forest algorithm is primarily concerned with solving the over-fitting and local-optimum issues that easily occur when a single decision tree performs classification. The principal reasons it solves these problems are as follows:

    (a) The random forest algorithm obtains a sample subset through random replacement sampling to ensure the randomness and diversity of the training samples of the decision tree.

(b) The random forest algorithm provides optional feature attributes for node splitting by randomly extracting feature subsets, ensuring diversity among the decision trees generated.

(c) When the random forest algorithm determines the final result, it combines the votes of multiple decision trees, thereby preventing the algorithm from encountering local-optimum and over-fitting problems.
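The three sources of randomness above can be sketched compactly. This is a toy illustration with hypothetical data and stubbed trees, not the paper's implementation:

```python
import random
from collections import Counter

def bootstrap_sample(data, rng):
    # (a) sampling with replacement -> diverse training subsets, same size as data
    return [rng.choice(data) for _ in data]

def random_subspace(features, size, rng):
    # (b) random feature subset offered to each node split
    return rng.sample(features, size)

def majority_vote(predictions):
    # (c) final class = plurality vote over all trees' predictions
    return Counter(predictions).most_common(1)[0][0]

rng = random.Random(42)
data = list(range(20))
print(len(bootstrap_sample(data, rng)))          # 20
print(random_subspace(["f1", "f2", "f3", "f4"], 2, rng))
print(majority_vote(["A", "B", "A", "A", "B"]))  # A
```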

    3 Analysis of the Random Forest Algorithm

The advantages of the random forest algorithm are notably due to the randomness of the decision tree training subsets generated during the training process and the randomness of the decision tree feature subspaces. The classification accuracy of each decision tree and the differences between decision trees are therefore integral to the classifier, as they have a significant impact on the quality of its performance. To ensure the classification effect of the classifier, the effectiveness and diversity of the training subsets and feature subspaces must be ensured, guaranteeing the classification strength of the decision trees and the differences among them.

Feature subspace generation is a major step in the random forest algorithm, and both the strength of the decision trees and the differences between them are related to it. Findings indicate that smaller feature subspaces lead to greater randomness and greater differences between decision trees, but they carry less effective information and reduce the classification accuracy rate. Conversely, larger feature subspaces carry more effective information and a stronger classification effect for each decision tree, but they reduce the differences between decision trees and affect the classification accuracy of the algorithm.

On the other hand, because different decision trees have different classification capabilities, equal-weight voting during random forest ensembling affects the overall classification effect of the random forest. For this reason, the outputs of decision trees with different classification effects should be treated differently.

To summarise, in the present study, to ensure both the classification accuracy and the stability of the random forest algorithm, feature importance was calculated, the features were distinguished, and the feature subspace was then generated based on feature importance. This method improves not only the classification strength of each decision tree but also the overall classification effect of the algorithm. Concurrently, features with weaker classification effects could still be extracted, preserving the differences between decision trees and the classification ability of the algorithm. Furthermore, when integrating the random forest model, different weights were assigned to decision trees with different classification strengths, increasing the weight of the results of decision trees with strong classification abilities and improving the overall classification accuracy of the algorithm.

    4 Improved Random Forest Algorithm W-RF

The traditional random forest algorithm does not consider the influence of redundant features on classification accuracy when forming the feature subspace, nor does it distinguish the classification strengths of different decision trees. Therefore, a weighted random forest (W-RF) algorithm is proposed in the present paper. In this section, several pivotal parts of the W-RF algorithm are described in detail: feature importance calculation, feature subspace generation, and decision-tree weight calculation.

    4.1 Feature Subspace Generation Strategy Based on Feature Importance

In the node-splitting stage of decision tree training, the traditional random forest algorithm randomly extracts several features from the data set to form a feature subspace and selects a feature from that subspace for node splitting. The effectiveness of the feature subspace must therefore be ensured, and the features must first be distinguished. Relief-F is a feature selection algorithm that performs selection by calculating each feature's weight in the classification. The feature weight is measured by the differences between neighboring points of the same class and neighboring points of different classes on that feature. If a feature shows a small difference between samples of the same class and a large difference between samples of different classes, its distinguishing ability is high; otherwise, it is low. In the present study, the feature weights of the Relief-F algorithm were converted into feature importance for feature distinction.

    The specific calculation process involved:

A sample X_i belonging to category C in the data set S was randomly selected. First, the k nearest neighbor samples of X_i among samples of the same class were found and recorded as H_j (j = 1, 2, 3, ..., k); then k neighbor samples F_dj (j = 1, 2, 3, ..., k) were extracted from the samples of each category d differing from the class C to which X_i belongs, and the distances between X_i and H_j and between X_i and F_dj were calculated. This process was repeated m times to obtain the importance of feature A.

The importance calculation method of feature A is shown in Eq. (1).

In Eq. (3), P_d is the proportion of class-d samples in the data set; its calculation method is shown in Eq. (4). X_Ai represents the value of sample X_i on feature A, and diff represents the distance between feature values. For discrete features, the distance calculation method is shown in Eq. (5), and for continuous features in Eq. (6).

After the feature importance W of all the features was calculated, the features could be sorted, and the strong and weak features could then be distinguished according to their importance.
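The procedure above can be sketched in code. Since Eqs. (1)–(6) are not reproduced here, this sketch assumes the standard Relief-F weight update: for each sampled instance, a feature's weight decreases with its distance to same-class neighbors (hits) and increases with its distance to other-class neighbors (misses), the misses weighted by the class prior P_d / (1 − P_C). The data set is hypothetical.

```python
from collections import Counter

def diff(a, b):
    # Per-feature distance: 0/1 for discrete values, |a - b| for continuous
    if isinstance(a, str):
        return 0.0 if a == b else 1.0
    return abs(a - b)

def distance(x, y):
    return sum(diff(a, b) for a, b in zip(x, y))

def relieff(X, y, k=1, m=None):
    """Standard Relief-F sketch: returns one importance weight per feature."""
    m = m if m is not None else len(X)
    n_feat = len(X[0])
    prior = {c: cnt / len(y) for c, cnt in Counter(y).items()}
    W = [0.0] * n_feat
    for i in range(m):
        xi, ci = X[i % len(X)], y[i % len(y)]
        # k nearest hits H_j (same class, excluding xi itself)
        hits = sorted((x for x, c in zip(X, y) if c == ci and x is not xi),
                      key=lambda x: distance(x, xi))[:k]
        # k nearest misses F_dj from each other class d
        misses = {d: sorted((x for x, c in zip(X, y) if c == d),
                            key=lambda x: distance(x, xi))[:k]
                  for d in prior if d != ci}
        for A in range(n_feat):
            W[A] -= sum(diff(h[A], xi[A]) for h in hits) / (m * k)
            for d, Fd in misses.items():
                W[A] += (prior[d] / (1 - prior[ci])) * \
                        sum(diff(f[A], xi[A]) for f in Fd) / (m * k)
    return W

# Feature 0 separates the classes; feature 1 is noise.
X = [(0.0, 0.3), (0.1, 0.9), (1.0, 0.8), (0.9, 0.2)]
y = ["neg", "neg", "pos", "pos"]
w = relieff(X, y, k=1)
print(w[0] > w[1])  # True: the discriminative feature scores higher
```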

Once the importance of all the features was calculated, the features were classified: a higher importance meant higher category relevance during classification, and conversely, a lower importance meant lower category relevance. On this basis, two disjoint feature sets could be obtained. Specifically, the feature set was divided into strongly correlated features NS and weakly correlated features NW according to the threshold α. The value of α is the average feature importance, and its calculation method is shown in Eq. (7).

Upon distinguishing the strongly and weakly correlated features NS and NW, to ensure the effectiveness and diversity of the feature subspace when extracting features, the random extraction of the traditional random forest algorithm was not used in the present study. Instead, a stratified feature-extraction method was used to generate the feature subspace, ensuring the effectiveness and diversity of the decision trees.

When performing stratified extraction, the proportions of strongly and weakly correlated features are related to their importance. Specifically, assuming the feature subspace contains Sub features, the number of strongly correlated features in the feature subspace, Num_NS, is:

Here, S_NS is the proportion of the importance of the strongly correlated features in the importance of all features, and its calculation method is shown in Eq. (9).

    4.2 Random Forest of Weights

The random forest algorithm is an integration of multiple decision trees. The construction of each decision tree is independent of the others, and the classification abilities of the resulting trees differ; thus, a decision tree with poor classification ability will affect the classification accuracy of the random forest algorithm.

For this reason, in the present study, the decision tree combination process in the random forest algorithm was improved. For the decision trees formed from the weighted feature subspace, the classification weights of the different decision trees were obtained by calculating the predictive classification accuracy of each tree. This enhances the influence of decision trees with strong classification abilities during integration and weakens the influence of those with poor classification abilities, thereby improving the classification ability of the random forest algorithm.

Additionally, to avoid an accumulation of similar decision trees in the resulting random forest model, the similarity between decision trees was calculated in the present study. By calculating the similarity between different decision trees, the trees were distinguished, and within each group of highly similar trees, the tree with higher classification accuracy was selected for random forest model integration to improve the model's effect.

To summarize, the calculation of the final classification category is shown in Eq. (10), where w_i represents the classification weight of the i-th decision tree and F represents the final classification category of a sample.
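The weighted combination described above can be sketched as follows: each tree i votes with weight w_i (e.g. its validation accuracy), and F is the class with the largest total weighted vote. Trees are stubbed as lambdas and the weights are made up; this is an illustration of the idea behind Eq. (10), not the paper's implementation.

```python
from collections import defaultdict

def weighted_forest_predict(trees, weights, x):
    # Accumulate each tree's vote scaled by its weight w_i
    votes = defaultdict(float)
    for tree, w in zip(trees, weights):
        votes[tree(x)] += w
    return max(votes, key=votes.get)   # F = argmax over weighted votes

trees = [lambda x: "A", lambda x: "B", lambda x: "B"]
weights = [0.9, 0.3, 0.4]   # e.g. per-tree validation accuracies
print(weighted_forest_predict(trees, weights, x=None))  # A (0.9 beats 0.3 + 0.4)
```

Note that under unweighted majority voting the two "B" trees would win; the accuracy weights let the single strong tree dominate.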

    4.3 Improving the Random Forest Algorithm Process

The core ideas of the improved algorithm are shown in Tab. 1.

Table 1: Core idea

5 Experiments

Once the classifier completed its related tasks, evaluating the classification effect was a crucial part of the experiment. In evaluating the classification effect, the confusion matrix is a significantly useful tool. Taking binary classification as an example, the confusion matrix is shown in Tab. 2.

Table 2: Confusion matrix

In the confusion matrix, TP represents the number of samples that originally belonged to category 0 and were correctly classified as category 0 by the classifier, while FP represents the number of samples that originally belonged to category 1 but were incorrectly classified as category 0. FN represents the number of samples that originally belonged to category 0 but were incorrectly classified as category 1, while TN represents the number of samples that originally belonged to category 1 and were correctly classified as category 1.

Evaluation indicators for classifiers are primarily derived from these counts. The indicator used in the present article is the classification accuracy rate, a common indicator in classifier evaluation; the specific calculation method is shown in Eq. (11).
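From the confusion matrix of Tab. 2, the accuracy rate of Eq. (11) is the proportion of correctly classified samples, accuracy = (TP + TN) / (TP + FP + FN + TN). The counts below are made up for illustration:

```python
def accuracy(TP, FP, FN, TN):
    # Eq. (11): correctly classified samples over all samples
    return (TP + TN) / (TP + FP + FN + TN)

print(accuracy(TP=40, FP=5, FN=10, TN=45))  # 0.85
```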

In the present study, multiple data sets were tested for comparison and verification, and the classification accuracy on different data sets before and after the algorithm improvement, under different numbers of decision trees, is shown in Fig. 2.

Figure 2: Classification accuracy rate

As shown in Fig. 2, the experimental results reveal that under the same parameters, the optimized random forest algorithm proposed in the present paper exhibited a significant improvement in classification accuracy compared with the random forest algorithm before optimization. Meanwhile, across data sets of different feature dimensions, although classification accuracy gradually decreased as the feature dimension increased, the improved algorithm demonstrated a clearer improvement in classification accuracy. Hence, Fig. 2 indicates that the proposed optimized random forest algorithm produced better results on data sets with higher feature dimensions than the algorithm before optimization.

Subsequently, algorithm acceleration verification was conducted in the Spark cluster environment to determine whether the algorithm exhibited positive parallelization performance. The verification standard is the speedup ratio, which describes the reduction in running time achieved by the parallel algorithm; the calculation method is shown in Eq. (12).

In Eq. (12), T1 represents the running time of the algorithm on a single node, and Tn represents the running time of the algorithm computed in parallel on n nodes. The change in the algorithm's speedup is shown in Fig. 3.

Figure 3: Speedup ratio
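Eq. (12) defines the speedup as S_n = T1 / Tn. The timings below are hypothetical, not the paper's measurements; they merely show the kind of trend plotted in Fig. 3:

```python
def speedup(T1, Tn):
    # Eq. (12): single-node time over n-node parallel time
    return T1 / Tn

timings = {1: 120.0, 2: 70.0, 4: 40.0, 8: 25.0}   # node count -> seconds (made up)
for n, Tn in timings.items():
    print(n, round(speedup(timings[1], Tn), 2))   # 1 1.0 / 2 1.71 / 4 3.0 / 8 4.8
```

The sub-linear growth (e.g. 4.8 on 8 nodes) reflects the communication and scheduling overhead that keeps real speedup below the ideal S_n = n.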

Fig. 3 shows that as the number of worker nodes in the Spark cluster increased, the algorithm's speedup ratio also increased, demonstrating a positive parallelization effect. Comparing the data volumes of the different data sets, dataset 5 was larger than dataset 4, indicating that the algorithm achieved a better speedup ratio as the amount of data increased. Hence, the Spark-based parallelization of the optimized algorithm can effectively support big data analysis and processing.

Further, the CreditData data sets were utilized in the present study for a credit evaluation application, and the experimental results, averaged over multiple runs, were compared. The results are shown in Fig. 4.

Figure 4: Application results in the credit field

As Fig. 4 shows, compared with the random forest algorithm before optimization, the classification accuracy of the optimized random forest algorithm proposed in the present paper exhibited a significant improvement on data sets in the field of credit evaluation, and the algorithm can be effectively applied to this field.

    6 Conclusion

To overcome the problems of the random forest algorithm, in the present study, strongly correlated features were distinguished from weak ones by calculating feature weights, and stratified sampling was used to obtain feature subspaces, improving the classification effect of the random forest algorithm. In addition, decision tree integration was performed according to the classification accuracy and similarity of the decision trees, and the optimized random forest algorithm was then implemented in parallel on the Spark platform. Finally, the performance of the algorithm was compared and analyzed through classification performance experiments, the results of which reveal the effectiveness of the improvement. However, the imbalance of data was not sufficiently considered in the present study. Future research will primarily focus on calculating the degree of data balance and on ensuring the efficiency of algorithm classification and reducing time complexity while improving the classification effect.

Acknowledgement: The authors received no specific help from anyone other than the listed authors for this study.

Funding Statement: This paper is partially supported by the Social Science Foundation of Hebei Province (No. HB19JL007) and the Education Technology Foundation of the Ministry of Education (No. 2017A01020).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
