
    Grasshopper KUWAHARA and Gradient Boosting Tree for Optimal Features Classifications

Computers, Materials & Continua, 2022, Issue 8

Rabab Hamed M. Aly, Aziza I. Hussein and Kamel H. Rahouma

1 The Higher Institute for Management and Information Technology, Minya, 61768, Egypt

2 Department of Electrical and Computer Engineering, Effat University, Jeddah, KSA

3 Electrical Engineering Department, Faculty of Engineering, Minia University, Minia, 6111, Egypt

Abstract: This paper aims to design an optimizer followed by a Kuwahara filter for optimal classification and prediction of employees' performance. The algorithm starts by processing data with a modified K-means technique, as a hierarchical clustering method, to quickly obtain the best features of employees and reach their best performance. The work of this paper consists of two parts. The first part is based on collecting data of employees to calculate and illustrate the performance of each employee. The second part is based on the classification and prediction techniques of employee performance. This model is designed to help companies in their decisions about the employees' performance. The classification and prediction algorithms use the Gradient Boosting Tree classifier to classify and predict the features. The results of the paper give the percentage of employees who are expected to leave the company after predicting their performance for the coming years. The results also show that the Grasshopper Optimization, followed by the Kuwahara filter (KF) with the Gradient Boosting Tree as classifier and predictor, is characterized by a high accuracy. The proposed algorithm is compared with other known techniques, and our results are found to be superior.

Keywords: Metaheuristic algorithm; Kuwahara filter; Grasshopper optimization algorithm; Gradient boosting tree

    1 Introduction

Nowadays, many companies solve problems about their employees' performance by using artificial intelligence for prediction to reach practical decisions. Many companies depend on the prediction of employees' performance, which helps them make quick and reasonable decisions and, in turn, drives the company to be successful. Organizations also pay attention to reducing the use of paper in their decision processes, since it consumes considerable resources. The first step toward reducing this problem is identifying which employee will resign, by using prediction techniques [1].

Optimization techniques play an important role in prediction. The optimization process helps to obtain prediction values more accurately and quickly than other methods. Optimization refers to the process of finding optimal solutions to a specific problem. Optimization techniques are applied in prediction methods using Machine Learning (ML) and Deep Learning (DL) [2]. Prediction with optimization is considered a technique of analyzing data.

On the other hand, many datasets are high dimensional and contain irrelevant features. These datasets carry useless information and degrade the performance of prediction methods. Many authors have introduced methods to solve these problems. Feature selection is one of the methods that address the problems of high-dimensional datasets [3].

Note that the accuracy of classification and prediction does not depend on selecting a large number of features. Classification is divided into two groups: a) binary classification and b) multi-class classification [4]. Classification becomes more practical when combined with an optimization method. In this paper, we use optimization for classification based on feature selection. The main category is binary classification based on Grasshopper Optimization as a classifier in the prediction model [5,6]. The work of this paper is divided into several parts. The first part is collecting datasets of the company employees. The second part is clustering and visualizing data based on hierarchical clustering with principal component analysis. The optimizer is then built to select optimal data features. The optimizer type is called the "Grasshopper Optimizer".

A Kuwahara Filter follows the optimizer; the combination is referred to as GOKF. This new optimizer design helps to select the optimal features based on the Kuwahara Filter (KF). KF is a non-linear smoothing filter used in image processing for adaptive noise reduction. The fact that edges are preserved during smoothing makes it especially useful for feature extraction and segmentation. KF is based on placing a symmetric square neighborhood around each pixel of an image, or each data point of a dataset, and dividing it into four square sub-regions. The value of the central pixel or data point is replaced by the average over the most homogeneous sub-region, i.e., the sub-region with the lowest standard deviation. This filter helps the optimizer to rapidly select the best solution and achieve the best performance.

Both prediction and classification are based on the Gradient Boosting Tree. The results of the proposed technique are compared with other results based on the Gradient Boosting Classifier Tree (GBT) and the Quadratic Discriminant Analysis Function (QDF).

The rest of the paper is organized as follows: Section 2 briefly introduces the literature review. Section 3 presents the methodology. Section 4 discusses the empirical results of the design. Finally, conclusions are drawn in Section 5.

    2 Literature Review

Several theories have been proposed for optimization techniques. Some techniques focus on how to use them in classification and feature extraction, while others concentrate on prediction. In this section, we review previous research that focused on different studies of optimization in different fields. Various authors have focused on applying ML to business studies and predicting work performance [7].

The authors in [7] presented three main experiments to predict employee attrition. The first experiment focused on the Support Vector Machine (SVM) and K-Nearest Neighbors (KNN), the second experiment showed the use of Adaptive Synthetic sampling (ADASYN) to overcome class imbalance, and the third experiment involved manual under-sampling to balance the classes. The results were achieved using 12 features selected with the random forest as a feature selection method.

Furthermore, certain authors described ML techniques to classify the best employees in companies. The authors in [8] presented different ML algorithms: KNN (K-Nearest Neighbors), Naïve Bayes, Decision Tree, and Random Forest, in addition to two ensemble techniques called stacking and bagging. The results showed that Random Forest was the best classification method. In addition, the Random Forest, stacking, and bagging methods achieved a performance of 88%.

In [9], the author described prediction techniques based on a hybrid of K-means clustering and a Naïve Bayes classifier. The method achieved high accuracy in assessing employee performance.

In [10], the authors presented the prediction of employee attrition based on several ML models. The models were developed automatically and achieved highly accurate prediction results.

Numerous authors have used ML algorithms to predict employee turnover. In [11], the authors explored the application of the Extreme Gradient Boosting (XGBoost) technique, which showed significantly higher accuracy for predicting employee turnover.

Moreover, the authors in [12] introduced a study on how to design an automatic job-satisfaction system based on an optimized neural network. The study consisted of several parts. The initial part was preprocessing, which converted the data into numeric form. The second part was data analysis, carried out using three factors; each factor described the details of the analysis of each employee. The third part showed how to determine the correlation between the factors. The authors added a genetic algorithm to enhance the quality of the factors and used a neural network to predict the employee satisfaction level.

On the other hand, DL based on optimization is considered one of the more practical prediction techniques. Optimization has been described in different research works and has shown the benefit of several optimization designs, such as pipeline applications. In [13], the authors described DL with pipeline optimization for a Korean-language framework. The paper evaluated entity extraction and classification using accuracy and the F1-score: 98.2% and 98.4% for intent classification, and 97.4% and 94.7% for entity extraction. The authors showed that this was the best accuracy obtained in the experiments with this model.

ML and DL play a vital role in early diagnosis, which is important for treating diseases. There are different methods to diagnose several cases of different diseases. In [14], the authors demonstrated such methods by surveying ML techniques for diagnosing several diseases.

Likewise, in [15], the authors described a discrete wavelet method to enhance images from liver disease datasets based on Optimization of Support Vector Machines (OSVM) with the Crow Search Algorithm (OSVCSA). OSVCSA is used for accurate diagnosis of liver diseases. The classification accuracy was 99.49%.

Long Short-Term Memory (LSTM) networks play a significant role in predicting pandemic diseases. In [16], the authors introduced studies of how to predict COVID-19 data. The prediction is based on the LSTM method and the Gated Recurrent Unit (GRU), implemented in Python. The paper showed that LSTM achieved higher accuracy than GRU in predicting COVID-19 data.

In [17], the authors introduced new ML techniques based on supervised learning and genetic optimization for occupational disease risk prediction. Three ML methods were introduced and compared. One was based on K-Means, another on Support Vector Machines and K-Nearest Neighbors (KNN), and the last approach was based on a genetic algorithm. The results showed that the three clustering-based techniques allowed a deeper understanding of the data and were helpful for further risk forecasting.

In [18], the authors described a new segmentation technique for COVID-19 in chest X-rays. They introduced a multi-task pipeline with separate classification streams, which exploited advances in deep neural network models and allowed them to train models separately for specific types of infection manifestation. They evaluated the proposed models on widely adopted datasets and demonstrated an increase of approximately 2.5%, while achieving a 60% reduction in computational time.

Recently, certain authors have applied DL in complex medical research such as therapeutic antibodies. The authors in [19] showed that optimization with DL can be used to predict antigen specificity from antibodies.

As is known, ML is of significant benefit in predicting future outcomes. In addition, there are numerous occupational accidents around the world, and some authors have introduced ML to predict them, such as in [20].

In [20], the authors optimized ML to predict outcomes such as injury, near miss, and property damage using occupational accident data. They applied different ML and optimization methods, such as the genetic algorithm (GA) and particle swarm optimization (PSO), to achieve a higher degree of accuracy and robustness. They also introduced a case study to demonstrate the potential and validity of the approach.

In addition, some filters have been used in different applications, have shown practical results, and help in classification and prediction techniques, such as the KF [21].

In [21], the authors introduced the KF together with K-means clustering to extract the optimal features from tumor images and assist the segmentation process. The design helped to extract the tumor and supported the classification process, achieving a result near 95%. Based on the review of the literature presented above, the following sections identify the new method, which is based on optimization with a filter for employee performance, and introduce a new optimization technique.

    3 Methodology

The work of this paper consists of several stages, shown in Fig. 1, as follows:

1. Data preparation.

2. Building the optimization and prediction model.

Figure 1: The system block diagram

    3.1 Data Preparation

The first part of data preparation is based on clustering analysis. As is known, filtration is the most frequent data manipulation operation. In this part, the filtration is performed with the Python library "pandas". Filtration and analysis with pandas are based on summarizing the characteristics of the data, such as patterns, trends, outliers, and hypothesis testing, using descriptive statistics and visualization [22,23]. A minimal sketch of this step is shown below.
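The following is a minimal sketch (not the authors' exact script) of pandas-based filtration and descriptive statistics; the file name and column names are hypothetical stand-ins for the employee dataset.

import pandas as pd

# Load the employee data (hypothetical file and column names).
df = pd.read_csv("hr_employees.csv")

# Descriptive statistics: patterns, spread, and potential outliers.
print(df.describe())
print(df.isna().sum())          # missing values per column

# Example filtration: keep employees within the five-year window of the sample.
recent = df[df["years_at_company"] <= 5]

# Simple per-department summary used for visualization and hypothesis checks.
print(recent.groupby("department")["average_monthly_hours"].agg(["mean", "std", "max"]))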

The clustering analysis of the data is based on "hierarchical clustering". This method is used to build a hierarchy of clusters. Hierarchical clustering can be considered an improvement over K-means clustering. K-means clustering is based on four stages (a minimal sketch follows the list):

• First, decide the number of clusters (k).

• Second, select k random points from the data as centroids.

• Third, assign all the points to the nearest cluster centroid.

• Finally, calculate the centroids of the newly formed clusters, and then repeat the last two steps.
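The following is a compact NumPy sketch of the four K-means stages listed above; the empty-cluster guard and the convergence test are implementation assumptions, not part of the paper.

import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Stages 1-2: choose k and select k random points as the initial centroids.
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Stage 3: assign every point to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Stage 4: recompute the centroids of the newly formed clusters, then repeat.
        new_centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids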

The problem with K-means clustering is the need to predefine the number of clusters, and K-means also tends to produce clusters of the same size. Hierarchical clustering was introduced to address this problem, so it is more practical, especially for large datasets. There are two methods of hierarchical clustering, as shown in Fig. 2:

Figure 2: General example of agglomerative and divisive hierarchical clustering methods

1. Agglomerative hierarchical clustering.

2. Divisive hierarchical clustering.

In this paper, the most similar points or clusters in hierarchical clustering were processed by a series of fusions of the n objects into groups, which is called the agglomerative approach.

The mathematical formulation of an agglomerative method is as follows:

- P_n, P_(n-1), ..., P_1 are the partitions produced by an agglomerative hierarchical clustering, where P_n contains n single-object clusters and P_1 consists of a single cluster containing all n cases.

At each stage, the two most similar clusters are combined. Note that in the initial stage each cluster contains an individual object, clusters are then joined step by step, and there are different ways of defining the distance (or similarity) between clusters [23].

- Single linkage agglomerative method: it uses the distance between the closest pair of objects, where only pairs consisting of one object from each group are considered. The distance D(r,s) is determined as

D(r,s) = min { d(x_ri, x_sj) : x_ri in r, x_sj in s }                (1)

- Complete linkage agglomerative method: it uses the distance between the furthest pair of objects, one from each group. The distance D(r,s) is measured as

D(r,s) = max { d(x_ri, x_sj) : x_ri in r, x_sj in s }                (2)

where d(x_ri, x_sj) is the distance between an object x_ri in cluster r and an object x_sj in cluster s.

- Average linkage agglomerative method: it uses the mean of the distances between all pairs of objects, where each pair includes one object from each group. The distance D(r,s) is computed as

D(r,s) = T_rs / (N_r × N_s)                (3)

where T_rs is the sum of all pairwise distances between cluster r and cluster s, and N_r and N_s are the sizes of the two clusters.

In this paper, the average linkage method was applied for hierarchical clustering, after which principal component analysis (PCA) was added to reduce dimensionality and increase interpretability, based on the mathematical formulation introduced in [24,25].
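A minimal scikit-learn sketch of this step is shown below; the number of clusters, the number of principal components, and the random placeholder data are assumptions for illustration.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

# Placeholder for the employee feature matrix.
X = np.random.rand(200, 8)

# Standardize, then reduce dimensionality with PCA for interpretability.
X_std = StandardScaler().fit_transform(X)
X_pca = PCA(n_components=2).fit_transform(X_std)

# Average-linkage agglomerative clustering, i.e., the linkage of Eq. (3).
agg = AgglomerativeClustering(n_clusters=3, linkage="average")
labels = agg.fit_predict(X_pca)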

    3.2 Building Optimization and Prediction Model

This part focuses on the design of the optimizer and the prediction/classification model. The prediction model is built from several parts. One of these parts is visualizing the data to see the performance of employees before prediction. The visualization is divided into two categories: the first is based on the number of employees and the number of projects over a set of years, and the second creates a Label Encoder Object (LEO) and splits the dataset. The last part builds the optimization and prediction model. The optimizer is based on Grasshopper Optimization (GO) followed by KF to select optimal features, which helps in the classifier stage.
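The label-encoding and data-splitting step can be sketched as follows; the file name and the column names ("department", "salary", "left") are hypothetical stand-ins for the actual dataset.

import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split

df = pd.read_csv("hr_employees.csv")            # hypothetical employee dataset

# Encode categorical columns to integers (the "LEO" step).
for col in ["department", "salary"]:
    df[col] = LabelEncoder().fit_transform(df[col])

# Split features and target, then split into training and test sets.
X = df.drop(columns=["left"])
y = df["left"]                                   # whether the employee left
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)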

The optimization part is based on GOKF. The first step extracts the feature values and selects the optimal features based on the construction of GOKF.

GO decreases the dimensionality of the data and selects the optimal feature vectors with the help of KF. The last part is the classifier, in which GBT is applied as a predictor of employee performance [26].

The datasets in this paper are collected from two online employee databases [23,27]. The data were collected from the HR department to study the performance of employees, which supports decisions about employees after four years of work, as shown in detail in the results section. After data collection, the clustering method was applied. The extracted features were then optimized using GOKF to decrease the dimensionality of the data and select the optimal feature vectors. The reason for using KF with GO is that this enhancement is more suitable for feature extraction [21].

GO depends on three components (gravity G_i, social interaction S_i, and horizontal wind movement W_i) that affect the flying route of the grasshoppers.

The search process is based on the following equation:

S_i = Σ_{j=1, j≠i}^{N} s(P_{i,j}) · p̂_{i,j}                (4)

where s is the strength of the social forces and P_{i,j} is the distance between the i-th and j-th grasshoppers, estimated as P_{i,j} = |x_j − x_i|. The unit vector between grasshoppers i and j is denoted p̂_{i,j} and is given by

p̂_{i,j} = (x_j − x_i) / P_{i,j}                (5)

We replaced this part with the KF equations as follows.

As is known, the KF is applied by dividing the neighborhood of each point into four sub-regions. Each sub-region i is characterized by its arithmetic mean m_i(x,y) and standard deviation σ_i(x,y), and the output of the KF, P(x,y), for any point (x,y) is the mean of the sub-region with the lowest standard deviation [28,29]:

P(x,y) = m_k(x,y),   where k = arg min_i σ_i(x,y)                (6)
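A simplified sketch of the Kuwahara rule in Eq. (6) is given below: the window around each point is split into four sub-regions, each sub-region's mean and standard deviation are computed, and the central value is replaced by the mean of the sub-region with the lowest standard deviation. The window radius and the 2-D layout of the data are assumptions.

import numpy as np

def kuwahara(data, radius=2):
    out = data.astype(float).copy()
    h, w = data.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            # Four overlapping (radius+1) x (radius+1) sub-regions around (y, x).
            quads = [
                data[y - radius:y + 1, x - radius:x + 1],
                data[y - radius:y + 1, x:x + radius + 1],
                data[y:y + radius + 1, x - radius:x + 1],
                data[y:y + radius + 1, x:x + radius + 1],
            ]
            stds = [q.std() for q in quads]
            out[y, x] = quads[int(np.argmin(stds))].mean()   # m_k with the minimal sigma_i
    return out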

The social relation that drives the direction of the swarm is s [26-28]. The function s can be described as follows:

s(r) = b · e^(−r/L) − e^(−r)                (7)

where b is the attraction intensity (attractive force), r is the distance between grasshoppers, and L is the attractive length scale. Fig. 3 shows the primitive corrective patterns of GO. On the other hand, the mathematical expression of the grasshopper interaction can be presented by (8):

X_i^k = c ( Σ_{j=1, j≠i}^{M} c · ((ub_k − lb_k)/2) · s(|x_j^k − x_i^k|) · (x_j − x_i)/P_{i,j} ) + T̂_k                (8)

Notably, ub_k and lb_k are the upper and lower bounds of the k-th dimension, T̂_k is the value of the target (the best solution found so far) in the k-th dimension, and c is a coefficient used for shrinking the comfort, repulsion, and attraction regions so that the swarm converges toward the best solution. In addition, k indicates the dimension index.

The parameter c can be described as follows:

c = c_max − l · (c_max − c_min) / N                (9)

where l is the current iteration and N is the maximum number of iterations.
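The two ingredients just defined, the social-force function s(r) and the decreasing coefficient c, can be written as short helpers; the parameter values (b = 0.5, L = 1.5, c_max = 1, c_min = 1e-4) are common GOA settings and are assumptions here, not values reported by the paper.

import numpy as np

def social_force(r, b=0.5, L=1.5):
    # Eq. (7): s(r) = b * exp(-r / L) - exp(-r)
    return b * np.exp(-r / L) - np.exp(-r)

def coefficient_c(iteration, max_iter, c_max=1.0, c_min=1e-4):
    # Eq. (9): c shrinks linearly from c_max to c_min over the iterations.
    return c_max - iteration * (c_max - c_min) / max_iter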

Then, GBT is applied to classify the features extracted from the optimal solution of the optimization technique. As is known, GBT involves subsampling the training dataset and training individual learners on the random samples created by subsampling. The GBT is designed in a few steps:

• The first step in GBT is to initialize the model with some constant value. The model is then used to predict the observations in the training features. For simplicity, we take the average of the target column and assume that to be the predicted value.

• The difference in classification is in the calculation of the average of the target column: the log of the values is used to get the constant value after initializing the model, based on the log-loss in Eq. (10):

L(y_i, p) = −[ y_i · log(p) + (1 − y_i) · log(1 − p) ]                (10)

where L is the loss function, p is the predicted probability, and y_i is the observed value. The Python library "scikit-learn" (SKLEARN) is used to obtain the results of the GBT and of GO with the Kuwahara filter [30].
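A minimal scikit-learn sketch of the GBT classification stage follows; the synthetic data and the hyperparameters are illustrative assumptions rather than the settings used in the paper.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the selected employee features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Gradient Boosting Tree with subsampling, as described above.
gbt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1,
                                 subsample=0.8, random_state=42)
gbt.fit(X_train, y_train)
print(classification_report(y_test, gbt.predict(X_test)))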

Figure 3: The primitive corrective patterns of Grasshopper Optimization

3.3 The General Pseudo-Code of the GOKF Design

Algorithm: GOKF
1: Generate the initial population of grasshoppers P_i (i = 1, 2, ..., n) based on the KF in a few steps:
   - Build sub-windows for the input data, in the same way as for image data.
   - Calculate the averages and variances on the sub-windows.
   - Choose the index with the minimum variance.
   - Build the filtered features using a nested loop.
   - Extract P(x,y) for the input data.
2: Initialize c_max, c_min, and the maximum number of iterations N.
3: Evaluate the fitness f(P_i) for each grasshopper P_i based on the P(x,y) data points.
4: Set T to the best solution.
5: While (l < N) do
6:   Update c using Eq. (9).
7:   For i = 1 to M (all M grasshoppers in the population, using Eq. (8)) do
       - Normalize the distances between the grasshoppers based on Eqs. (3) and (5).
       - Update the position of the current grasshopper based on Eq. (8).
       - Bring the current grasshopper back if it moves outside the boundaries.
     End for
8:   Update T if a better solution is found.
9:   l = l + 1
10: End While
Return the best solution (the best solution is the feature selection passed to the classifier, i.e., the input y_i to the GBT of Eq. (10)).
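The loop above can be condensed into the following hedged Python sketch. The binary thresholding of positions into a feature subset, the population size, and the generic fitness(mask) callable are simplifying assumptions; social_force() and coefficient_c() are the helpers sketched earlier, and the per-dimension sign is used in place of the exact unit vector of Eq. (5).

import numpy as np

def gokf_select(X_filtered, fitness, n_agents=20, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    dim = X_filtered.shape[1]
    lb, ub = 0.0, 1.0                                    # search bounds per dimension
    pop = rng.uniform(lb, ub, size=(n_agents, dim))      # step 1: initial population

    def score(pos):                                      # fitness of the chosen feature subset
        mask = pos > 0.5
        return fitness(mask) if mask.any() else np.inf

    best = min(pop, key=score).copy()                    # steps 3-4: best solution T
    for it in range(n_iter):                             # steps 5-10
        c = coefficient_c(it, n_iter)                    # Eq. (9)
        new_pop = np.empty_like(pop)
        for i in range(n_agents):
            social = np.zeros(dim)
            for j in range(n_agents):
                if i == j:
                    continue
                d = np.abs(pop[j] - pop[i])              # per-dimension distance
                social += c * (ub - lb) / 2 * social_force(d) * np.sign(pop[j] - pop[i])
            # Eq. (8)-style update, clipped back inside the boundaries.
            new_pop[i] = np.clip(c * social + best, lb, ub)
        pop = new_pop
        candidate = min(pop, key=score)
        if score(candidate) < score(best):               # step 8: update T
            best = candidate.copy()
    return best > 0.5                                    # mask of selected features for the GBT

In use, fitness(mask) would typically train the GBT on X_filtered[:, mask] and return a value to minimize, for example one minus the validation accuracy.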

    4 Results and Discussion

This paper applies the prediction of employee performance in several stages:

• Collecting the data of employees: data were collected from sample historical data for departments in the organization over the last five years of the sample. After that, the data are visualized using Python libraries to show the performance of employees, as shown in Figs. 4, 5 and 6. Each figure shows details of the structure of the collected data. Fig. 4 shows the total number of employees who left the company over the last five years, and Fig. 5 shows the total number of years spent in the company. In contrast with the previous figures, Fig. 6 shows the number of employees against the total number of projects that met their targets on time over the last five years. Note that the total number of employees is 6000, which includes both current and former employees.

• The next step is extracting the features from the dataset based on the clustering operation. The clustering analysis of the data is based on "hierarchical clustering".

• The classification of the data is introduced in several steps. First, the dismissal of employees depends on a critical factor: the total number of projects over the last five years. If an employee worked on 4-6 projects during those years, he/she is less likely to leave the company. Second, the time spent working at the company is an important factor in decisions about employee performance; these decisions are based on the total number of hours an employee spent in the company. Notably, there is a large drop between employees with 3 and 4 years of experience. The percentage of employees who left is 25% of the total. Most of the employees receive either a medium or a low salary. The tester role in the Information Technology (IT) department has the maximum number of employees, followed by customer support and developer.

• Building the prediction model: this part is based on GBT:

• First, the features are extracted with Python, and the data are saved in a CSV file.

• Second, GOKF is built to extract the optimal features for classification.

• Third, the optimal features obtained from GOKF are fed to the prediction function based on GBT, implemented with Python functions. The classification accuracy reached 96.7%, computed using Eqs. (11)-(13), which is considered a high and reliable accuracy. The classification report is shown in Tab. 1.

Figure 4: The total number of dismissed or departed employees

Figure 5: The total number of years spent in the company

Figure 6: The total number of employees per project

where TP is True Positive, TN is True Negative, FP is False Positive, and FN is False Negative.
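Eqs. (11)-(13) are not reproduced above, but they most likely denote the standard confusion-matrix metrics; under that assumption, a minimal sketch is:

from sklearn.metrics import confusion_matrix

def metrics(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # assumed Eq. (11)
    precision = tp / (tp + fp)                   # assumed Eq. (12)
    recall = tp / (tp + fn)                      # assumed Eq. (13)
    return accuracy, precision, recall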

Table 1: The report of the GOKF system based on the confusion matrix

The GO based on hierarchical clustering and CNN achieved a higher degree of accuracy in prediction than the other methods introduced in [25], which gives more practical optimal solutions, as shown in Tab. 2. In Tab. 2, QDF refers to the Quadratic Discriminant Analysis Function with K-means clusters [28], and GBT is the Gradient Boosting Tree with K-means clusters [29].

In [29], the authors introduced two unsupervised pattern recognition algorithms based on K-means clusters, called QDF and the Gaussian Mixture Model (GMM). An accuracy of 96% was achieved with QDF; the same method was applied to the data of this paper and produced similar results, but the GO-KF method is faster and more practical, reaching its accuracy in less time than the other method. Furthermore, in [30], the authors applied GBT to a diabetes mellitus diagnosis system and achieved an accuracy near 97%; when the same method was applied to our datasets, it achieved similar performance compared with the result of this paper. Tab. 2 shows the comparison between the previous methods and the method of this paper.

Table 2: Comparison between the method of this paper and other methods from previous work

    5 Conclusion

This paper introduced a technique for optimal classification and prediction of employees' performance. The technique is composed of a Grasshopper optimizer followed by a Kuwahara filter (KF). The employees' data are collected and then processed using a modified hierarchical K-means clustering method. The filter is used to obtain the best features of the employees, which match their best performance. This is done along two axes. First, data of employees have been collected; from these data, the performance of each employee is calculated and illustrated. Second, classification techniques are applied to classify the employee performance, and prediction techniques are carried out to predict this performance in the future. This is done by obtaining the employees' features. The Gradient Boosting Tree classifier is utilized for the purposes of feature classification and prediction. The model has been applied, and the percentage of employees who are expected to leave the company, after predicting their performance for the coming years, is calculated. The results were found to be highly accurate. A discussion of the results and a comparison with previous research methods are presented, and the proposed algorithm is found to be superior.

    Acknowledgement:The author would like to thank the editors and reviewers for their review and recommendations.

    Funding Statement:The author received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
