
    Imbalanced Classification in Diabetics Using Ensembled Machine Learning

2022-11-11 10:44:30
    Computers, Materials & Continua, 2022, Issue 9

    M. Sandeep Kumar, Mohammad Zubair Khan, Sukumar Rajendran, Ayman Noor, A. Stephen Dass and J. Prabhu

    1 School of Information Technology and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, 632014, India

    2 Department of Computer Science and Information, Taibah University, Medina, Saudi Arabia

    3 College of Computer Science and Engineering, Taibah University, Medina, Saudi Arabia

    Abstract: Diabetes is one of the world's most common diseases, caused by continued high levels of blood sugar. The risk can be lowered if diabetes is detected at an early stage. In recent years, several machine learning models have been developed to predict the presence of diabetes early. In this paper, we propose an ensemble-based machine learning model that combines a split-vote method and instance duplication to leverage an imbalanced dataset called PIMA Indian and improve the prediction of diabetes. The proposed method uses both over-sampling and under-sampling, together with model weighting, to increase classification performance. Measures such as Accuracy, Precision, Recall, and F1-Score are used to evaluate the model. The accuracies we obtained using K-Nearest Neighbor (kNN), Naïve Bayes (NB), Support Vector Machines (SVM), Random Forest (RF), Logistic Regression (LR), and Decision Trees (DT) were 89.32%, 91.44%, 95.78%, 89.3%, 81.76%, and 80.38% respectively. The SVM model is the most efficient, exceeding existing machine learning-based works by 21.38%.

    Keywords: Diabetes classification; imbalanced data; split-vote; instance duplication

    1 Introduction

    Classification-based models such as kNN, SVM, and RF suffer from a problem called class imbalance. Imbalanced classification is a situation where the numbers of instances across the various classes differ significantly. Specifically, in binary classification, if one class (the majority class) has many more instances than the other class (the minority class), the classification is imbalanced. When the frequency of instances is not equally distributed among the classes, the classifier learns a lot about a single class and very little about the others. A classifier may produce high False Negative rates in the imbalanced-data scenario [1] because it wrongly assigns instances of one class to another. Imbalanced data causes problems in many classification applications such as spam detection [2], bug prediction [3], sentiment analysis [4], credit card classification [5], and more.
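A short numerical sketch (with illustrative counts, not the paper's data) shows why accuracy alone hides the high false-negative rate described above: a degenerate classifier that always predicts the majority class still scores over 90% accuracy.

```python
# Hypothetical class counts: 500 majority (healthy), 50 minority (diseased).
majority, minority = 500, 50
labels = [0] * majority + [1] * minority
predictions = [0] * len(labels)   # degenerate "always predict majority" model

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
false_negatives = sum(y == 1 and p == 0 for p, y in zip(predictions, labels))
print(accuracy)          # ~0.909, despite missing every minority instance
print(false_negatives)   # 50: all diseased patients misclassified as healthy
```

This is exactly the failure mode that motivates looking beyond accuracy at Precision, Recall, and F1 on imbalanced data.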

    There are many ways researchers handle the imbalanced-data problem. One of them is the weighting-based approach [6]. In this approach, the classifier is allowed to learn from a training set with imbalanced instances; later, a weight is assigned to the classifier to reduce the classification error. This approach can sometimes be dangerous because it is highly error-prone. Consider an imbalanced-data scenario in which a classifier is deployed to determine whether a patient has heart disease. The cost of wrongly predicting a heart patient as a normal patient is far higher than that of predicting a normal patient as a heart patient.

    The second method for dealing with the imbalance problem is under-sampling [7], where instances of the majority class are removed one by one until instances are equally distributed among all classes. The important drawback of this approach is the loss of information [8]: key information that determines the important attributes of a feature may be lost, and the classifier may produce a high false rate during the testing phase. Many researchers avoid this method for that reason. In some cases, however, a few samples may be noisy or redundant; if used in the training process, they create problems such as increased computation cost, degraded performance, and high false rates. Such samples can be removed with under-sampling to eliminate noise in the dataset. A few research works like [9] apply under-sampling to the majority class to find the class boundary; once the class boundary of the minority class is found, the original dataset is used for classification.
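A minimal sketch of random under-sampling as just described (the function name and signature are illustrative, not from the paper): majority-class instances are discarded until both classes are the same size, which is precisely where the information loss occurs.

```python
import random

def undersample(instances, labels, seed=0):
    """Randomly drop majority-class instances until classes are balanced."""
    # Split instances by binary class label.
    pos = [x for x, y in zip(instances, labels) if y == 1]
    neg = [x for x, y in zip(instances, labels) if y == 0]
    major, minor, maj_lbl, min_lbl = (
        (neg, pos, 0, 1) if len(neg) > len(pos) else (pos, neg, 1, 0))
    # Keep only as many majority instances as there are minority ones;
    # the discarded samples are the "lost information" the text warns about.
    kept = random.Random(seed).sample(major, len(minor))
    xs = kept + minor
    ys = [maj_lbl] * len(kept) + [min_lbl] * len(minor)
    return xs, ys
```

Libraries such as imbalanced-learn provide production-grade versions of this idea, but the core operation is the simple discard shown here.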

    The third method is over-sampling, which increases the number of minority-class instances by adding new samples. The new samples are generated statistically so that they are not duplicates of existing instances. In recent years, over-sampling has been used in many research works [10,11]. Its main drawback is an increased chance of overfitting. A combination of under-sampling and over-sampling can also increase a classifier's performance [12,13]. The generated dummy instances should not alter the original dataset and should obey the distribution of the minority class; an unbiased classification is possible only if the distribution of the whole dataset is unaltered. If the data distribution is known, it is easy to generate samples; in most cases, however, the distribution of the dataset is unknown. In that case, it should be estimated in such a way that the estimated parameters more or less match the original dataset. If not, misleading samples will ruin the performance of the classification.
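As a sketch of distribution-preserving over-sampling for a single feature (assuming an approximately Gaussian minority distribution, since, as noted above, the true distribution is usually unknown and must be estimated):

```python
import random
import statistics

def oversample_gaussian(minority, n_new, seed=0):
    """Draw n_new synthetic 1-D minority samples from a normal distribution
    fitted to the observed minority values, so the generated samples match
    the estimated mean and spread rather than duplicating existing points."""
    rng = random.Random(seed)
    mu = statistics.mean(minority)
    sigma = statistics.stdev(minority)
    return [rng.gauss(mu, sigma) for _ in range(n_new)]
```

If the Gaussian assumption is wrong for a feature, the generated samples are exactly the "misleading samples" the text warns about, which is why the estimate must be checked against the original data.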

    One of the efficient methods for handling the imbalance problem is the ensemble approach, where multiple classifiers collectively classify an instance [14,15]. Despite many popular ensemble-based binary imbalanced classifiers, the performance of imbalanced classification still degrades, which over the years has attracted much attention in the research community toward building more powerful imbalanced classifiers. Many research works focus on dynamic classification, where classification is done by selecting subsets of the data and, finally, either the best classifier is selected or multiple classifiers are combined in the ensemble process. The main novelty of this ensemble process lies in how the merging is done. This paper uses an ensemble of two approaches, the split-vote method and instance generation, to handle the imbalance problem. In the split-vote method, the dataset is first split into multiple sub-datasets, and each subset is used to train various machine learning models. We pick the best machine learning model for each subset and finally perform a voting operation to predict the final class of an instance. During the splitting process there is a high chance that a set of important features is missed in a particular subset, hence we perform instance duplication on each subset: the subsets are generated with both unique and mixed instances, where the mixed instances ensure that the same instance is present in more than one subset. The next step is instance generation, where a clustering algorithm groups similar data points in the feature space and dummy instances are generated without affecting the characteristics of a cluster. Finally, voting is done across the two approaches together with the original dataset, and the final class is determined.

    In this paper, we propose an ensemble machine learning model that efficiently classifies an imbalanced diabetes dataset. The main contributions are listed below:

    • To develop a split-vote methodology for dividing an imbalanced dataset into a finite number of balanced datasets.

    • To generate dummy instances without affecting the statistical properties of an imbalanced dataset.

    • To use model weighting and feature selection to enhance the voting process.

    The above-mentioned contributions aim to convert an imbalanced dataset into a balanced one. The proposed method first applies under-sampling by duplicating the dataset a finite number of times and, in each copy, discarding random data samples so that all classes are evenly distributed. Then over-sampling is performed by generating dummy instances; these are not exact replicas of any original instance but share only its statistical properties. Finally, the performance of the proposed system is increased by assigning weights based on how well each model has learned.

    The rest of the paper is organized as follows. Section 2 reviews the literature related to imbalanced classification. Section 3 describes the working of the proposed algorithms. Section 4 presents the experimental results and comparisons with existing machine learning models and other existing works. Finally, the conclusion is presented in Section 5.

    2 Related Works

    A research work done by [16] proposes an ensemble classification approach. The authors aim to reduce the rate of overfitting by proposing implicit regularization, considering the binary imbalanced-data classification problem; 12 datasets were used to validate the proposed method. Generating two new virtual spaces along with the original dataset and feeding them to an SVM classifier can significantly reduce the imbalance problem, as per [17]. The authors incorporated fuzzy concepts into their proposed architecture and found that the search time is reduced.

    Random sampling is one of the most widely used methods to handle imbalanced data; however, it can lead to undesirable results. Hence [18] proposes a stable method for determining sampling ratios based on genetic algorithms, using 14 datasets to validate the performance of the proposed work.

    A work done by [19] focuses on optimizing the AdaBoost algorithm, proposing a new weighting approach that can boost the weak classifiers. Two synthetic datasets and four original datasets were used to test the performance of the proposed work.

    Reference [20] uses the Hellinger Distance Weighted Ensemble model for tackling imbalanced data. Feature drift is one of the problems in spam detection, which the authors address by generating appropriate features; using these features, spam detection is done efficiently.

    Many over-sampling techniques such as [21] add synthetic instances to the minority class so that the number of minority-class instances equals that of the majority class. However, there is a high chance that the synthetic instances create noise in the dataset; they can even modify the decision boundary of the classifiers [21].

    Data resampling can cause important instances to be lost forever and often leads to over-sampling; a work by [22] therefore focuses on gaining the advantages of both the data level and an ensemble of classifiers. The authors apply a few pre-processing steps to the training phase of each classifier and compare them using eight datasets.

    Fuzzy-based methods are used in [23], where the authors employ two families of classifiers: one purely bag-level and another instance-level. Using these two types, they solve the imbalance problem with the help of multiple instances.

    As many methods work by altering the original dataset [24], a research work proposed by [22] aims to build a balanced dataset from an imbalanced one and perform an ensemble to consolidate the result. This process prevents important data from being lost during classification. Tab. 1 shows a few existing research works in the field of imbalanced classification.

    A four-stage imbalance-handling method was proposed by [25], including component analysis, feature selection, SVM-based minor classification, and sampling. The authors used a coloring scheme to classify buildings and their connected components, and four machine learning models to classify 3D objects. They show that after using SVM, the classification performance increases by a promising amount.

    Another research work [26] provided cost-sensitive classification by assigning weights to majority and minority data. This creates a strong bias that helps reduce classification errors.

    Table 1: Comparison of recent works related to imbalanced classification


    From the above-mentioned literature, imbalanced classification needs considerable improvement in the areas of over-sampling and under-sampling. The proposed work introduces the split-vote method for subset creation and instance duplication for dataset balancing. We have used six machine learning models and compared the results with existing works.

    3 Imbalance Data Classification

    A classifier expects all the classes in the training set to be balanced. However, in practice, it is very difficult to find a dataset with balanced classes. Several techniques shown in Section 2 have been used by various researchers to overcome the imbalance problem. In this section, we present an ensemble-based machine learning model that works in three stages, as shown in Fig. 1.

    Figure 1: Architecture of the proposed ensemble-based model

    Good performance can be achieved if proper preprocessing is done before classification [32]. In this paper, preprocessing is done in several stages. In the first stage, the dataset is divided into multiple subsets such that each subset contains an equal number of instances of both the positive and negative classes. The next stage generates dummy instances without affecting the statistical properties of the dataset. The last stage is a normal machine learning model that uses the raw dataset for classification. The output of each stage is passed to a weighting step where the classification output is given priority based on the performance of each machine learning model. Each stage is explained in more detail in the following subsections.

    3.1 The Split-Vote Stage

    Let us consider D as the set of all instances, as defined by Eq. (1). X is the input feature defined by Eq. (2), and Y is a binary value in this research work, where C1 represents the first class and C2 represents the second class. An imbalance problem arises when the numbers of instances of C1 and C2 differ and the difference exceeds the tolerable amount IM, as defined by Eq. (3).

    The generation of subsets is defined by Eq. (4). Each subset is generated by balancing the number of instances of both classes. During training, there is a high chance that very important instances fall into only a few of the subsets, so the majority of the subsets may yield poor results. To tackle this problem, we perform instance shuffling, where a portion of random instances is duplicated across multiple subsets. This step largely reduces the risk of an important instance being missed by the majority of the subsets.
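A sketch of the split-vote stage under stated assumptions (the paper does not specify the number of subsets or the duplication fraction, so the chunking rule and the `overlap` parameter here are illustrative): the majority class is partitioned into minority-sized chunks, each chunk is paired with the full minority set, and a random fraction of each chunk is copied into the next one so key instances appear in more than one subset.

```python
import math
import random

def split_vote_subsets(major, minor, overlap=0.2, seed=0):
    """Build balanced subsets: each pairs one majority-class chunk
    (the size of the minority class) with all minority instances,
    with a random `overlap` fraction duplicated into the next chunk.
    Note: shuffles `major` in place, so pass a fresh list."""
    rng = random.Random(seed)
    rng.shuffle(major)
    k = math.ceil(len(major) / len(minor))
    chunks = [major[i * len(minor):(i + 1) * len(minor)] for i in range(k)]
    # Instance shuffling: copy a portion of chunk i into chunk i+1.
    for i in range(k - 1):
        dup = rng.sample(chunks[i], max(1, int(overlap * len(chunks[i]))))
        chunks[i + 1].extend(dup)
    # Each subset = one (possibly augmented) majority chunk + all minority.
    return [chunk + list(minor) for chunk in chunks]
```

Each returned subset is roughly class-balanced, and the duplicated instances mirror the instance-shuffling step described above.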

    3.2 Instance Generation

    Unlike the first stage, the dataset is not divided into sub-datasets; instead, dummy instances are generated in the minority class to match the number of instances in the majority class. To perform the instance generation, clustering is used to group instances into distinct clusters. The clustering procedure is given in Algorithm 1.

    Algorithm 1: Instance Clustering(N)
    I ← set of instances
    begin:
    1   H[N] ← randomly pick N instances and assign as heads
    2   for each instance i in I:
    3       dist ← infinity
    4       head ← -1
    5       for each instance j in H:
    6           d_j ← |i - j|^2
    7           if d_j < dist:
    8               dist ← d_j
    9               head ← j
    10      assign i to cluster group head
    11  for each i in H:
    12      i ← (1 / |H[i]|) · Σ_{j ∈ H[i]} j    // recompute head as cluster mean

    Algorithm 1 is used to cluster each instance into one group. Then instance duplication is done using the normal distribution as shown in Eq. (5); the generated instances ensure that the statistical parameters of each cluster are not affected. If there are 500 instances of the majority class and 200 instances of the minority class, then 300 instances are generated in the minority class to match the majority class. Here we use clustering to find the commonality between instances: if 5 clusters are formed within the 200 instances, then 60 dummy instances are created in each cluster to match the majority class.
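A sketch of this stage for one-dimensional features, assuming the per-cluster normal model implied by the text (the paper's Eq. (5) is not reproduced here, so the fitted-Gaussian form is an assumption): instances are assigned to randomly chosen heads as in Algorithm 1, then an equal number of dummy samples is drawn from each cluster's fitted distribution, matching the 300-from-5-clusters example above.

```python
import random
import statistics

def cluster_and_generate(minority, n_clusters, n_new, seed=0):
    """Single-pass head assignment (simplified Algorithm 1), then draw
    n_new // n_clusters synthetic samples per cluster from a normal
    distribution fitted to that cluster's members."""
    rng = random.Random(seed)
    heads = rng.sample(minority, n_clusters)
    clusters = {h: [] for h in heads}
    for x in minority:
        # Assign each instance to its nearest head by squared distance.
        head = min(heads, key=lambda h: (x - h) ** 2)
        clusters[head].append(x)
    per_cluster = n_new // n_clusters   # e.g. 300 new / 5 clusters = 60 each
    generated = []
    for members in clusters.values():
        mu = statistics.mean(members)
        sigma = statistics.pstdev(members) or 1e-6  # guard 1-member clusters
        generated += [rng.gauss(mu, sigma) for _ in range(per_cluster)]
    return generated
```

Because each cluster's samples are drawn from its own fitted mean and spread, the per-cluster statistics stay (approximately) unchanged, as the text requires.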

    3.3 Imbalanced Classification

    In the final step, the classifiers use the raw pre-processed dataset to classify diabetes. Six standard machine learning models are used for classification; each is briefly described in the following subsections.

    3.3.1 k Nearest Neighbor

    kNN is one of the simplest machine learning models and works on the concept of neighbors. When an instance needs to be classified, kNN considers the k closest neighbors, and the target class is fixed by majority voting. The value of k must be fixed before the classification process starts; based on investigational study, k is always chosen as an odd number. This classifier is called a lazy classifier because it does nothing during the training phase. The actual work happens only during the testing phase, where the distances to all existing points are calculated and sorted in ascending order; finally, the first k values are picked to determine the class of the instance.

    3.3.2 Naïve Bayes

    Naïve Bayes is one of the most frequently used ML models. It utilizes the concepts of probability to find the target class of an instance, grouping similar instances based on the Bayes probability theorem. Naïve Bayes is the second most used machine learning model after SVM for classifying diabetes.

    3.3.3 Support Vector Machines

    Support Vector Machines can classify both linear and non-linear data. SVM maps all instances into a hyperplane and then ideally finds a linear separation among them. The separation is decided by the boundary points, which are called support vectors because they are used to decide the separation.

    3.3.4 Random Forest

    RF is an ensemble model of multiple DTs. All the DTs are trained independently, hence better prediction can be achieved. Every DT selects a class based on its trained knowledge, and finally a bagging strategy picks the class with the highest frequency.

    3.3.5 Logistic Regression

    LR considers one or more independent features and tries to approximate the relationships among them. Several types of LR exist, such as the binary, multi-class, ordered, and conditional models. LR tends to produce lower performance than other models because it is highly error-prone.

    3.3.6 Decision Trees

    DT works on the concept of decision-making. It constructs a tree-like structure where each branch represents a decision; if there are multiple decisions, there are multiple branches. High-dimensional data can be easily processed using decision trees.
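The six classifiers above can be sketched with scikit-learn on a synthetic stand-in dataset of the PIMA shape (768 instances, 8 input features, roughly a 65/35 class split). The paper does not report its hyperparameters, so the default settings and the odd k = 5 below are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced data standing in for PIMA (not the real dataset).
X, y = make_classification(n_samples=768, n_features=8,
                           weights=[0.65, 0.35], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "kNN": KNeighborsClassifier(n_neighbors=5),  # odd k, as the text notes
    "NB": GaussianNB(),
    "SVM": SVC(),
    "RF": RandomForestClassifier(random_state=0),
    "LR": LogisticRegression(max_iter=1000),
    "DT": DecisionTreeClassifier(random_state=0),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te)
          for name, m in models.items()}
print(scores)
```

On real imbalanced data, `score` (accuracy) should be read alongside precision, recall, and F1, for the reason illustrated in the introduction.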

    3.4 Model Weighting

    After the classification is done, a weight is given to each stage for the ensemble to take place. The weight is simply the accuracy of the classifier: if a machine learning model produces 75% accuracy, its weight is 75, because the model is 75% effective. The higher the accuracy, the better the model's learning capacity. The cumulative weights across all stages are considered to predict the final class. For example, if the cumulative weight of the negative class is 400 and that of the positive class is 350, then the patient data is classified as diabetic negative. Two weights are given for each classifier: the first considers all the features, and the second considers only the 5 most important features, selected using correlation values. Fig. 2 shows how each model receives two sets of features.
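The accuracy-weighted vote just described can be sketched as follows (the function name and the per-model numbers are illustrative; the 400-vs-350 example in the text is the same computation with its own weights):

```python
def weighted_vote(predictions, weights):
    """Each model votes for a class with weight equal to its accuracy;
    the class with the larger cumulative weight wins."""
    totals = {}
    for cls, w in zip(predictions, weights):
        totals[cls] = totals.get(cls, 0) + w
    return max(totals, key=totals.get)

# Five hypothetical models voting, weights = accuracy percentages.
# neg total = 90 + 85 + 70 = 245, pos total = 75 + 80 = 155, so neg wins.
print(weighted_vote(["neg", "neg", "pos", "pos", "neg"],
                    [90, 85, 75, 80, 70]))   # → neg
```

With two weights per classifier (all features vs. the top 5), each model simply contributes two such weighted votes to the totals.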

    4 Results and Discussion

    We have used the PIMA Indian dataset to test the proposed model, and the analysis results were compared with existing machine learning models. We used k-fold cross-validation (k = 10) for the experiment.

    4.1 Dataset Description

    The dataset contains 768 instances with nine attributes.The details of the attributes are listed in Tab.2.

    Table 2: Dataset description

    The dataset contains many missing values, which we replace with mean values. Tab. 3 contains the missing-value counts and the mean value used for each feature.

    Table 3: Missing values and mean replacements
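Mean imputation as described above can be sketched for a single feature column (assuming, as is conventional for PIMA, that a value of 0 marks a missing entry in features such as Glucose or BMI; the function name is illustrative):

```python
def impute_mean(column, missing=0):
    """Replace entries equal to `missing` with the mean of the
    observed (non-missing) values in the column."""
    observed = [v for v in column if v != missing]
    mean = sum(observed) / len(observed)
    return [mean if v == missing else v for v in column]

# Two zeros replaced by the mean of the observed values, (120+140+100)/3.
print(impute_mean([120, 0, 140, 0, 100]))   # → [120, 120.0, 140, 120.0, 100]
```

Computing the mean only over observed values matters: including the zero placeholders would bias the imputed value downward.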

    4.2 Feature Selection

    For the second weighting value, five features are selected based on correlation with the output feature. The five selected features are # of pregnancies, Glucose, BP, Skin Thickness, and Insulin. The generated dummy samples are not duplicates of original instances; they just preserve the statistical properties of the original dataset. The proposed model does not use sample-generating methods such as interpolation because they have a high chance of disturbing the distribution of the dataset. The experimental results prove that the proposed method increases the performance of imbalanced classification.
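The correlation-based selection above can be sketched as ranking feature columns by their absolute Pearson correlation with the label (the paper does not state its exact correlation measure, so Pearson is an assumption; the helper names are illustrative):

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length
    sequences (assumes neither sequence is constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def top_k_features(columns, y, k=5):
    # Rank feature columns by |correlation| with the label, keep the top k.
    ranked = sorted(columns, key=lambda name: -abs(pearson(columns[name], y)))
    return ranked[:k]
```

With the PIMA columns as input and k = 5, this kind of ranking would yield a top-5 feature subset like the one listed above.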

    We have compared our work with two other recent works related to diabetes classification, as shown in Fig. 3; Tabs. 4 and 5 show the complete comparison in terms of accuracy. In some cases, the majority of the data points reside near the class borders, so it becomes difficult for the classifier to judge the boundary and decide the class of new samples. Models that do not consider hyperparameters face difficulty classifying samples that fall near the class borders, whereas models such as SVM handle this situation better and output better performance. The first step of the proposed model generates balanced subsets, which can eliminate many border data points and make it easy to separate the two classes; this is another reason why the accuracy of the proposed model exceeds that of the existing ones.

    Figure 3:Performance evaluation of proposed model

    Table 4: Performance evaluation of the proposed model

    Table 5: Accuracy comparison with other works

    5 Conclusion

    Diabetes is one of the worst diseases in the medical domain, affecting nearly 422 million people. The risk of diabetes can be reduced significantly if it is predicted at an early stage. In this paper, we focused on developing a machine learning model that can predict whether a patient has diabetes. We used both over-sampling and under-sampling at different stages, and the results show that the combination significantly increases classification performance on the imbalanced dataset. The over-sampling used in this work ensures that all classes have equal membership by selecting and generating samples from the minority class until the classes are balanced. As random sampling is not used, the risk of over-sampling is prevented; and since the samples are generated within the distribution, the statistical properties are preserved. The under-sampling generates many subsets, each with a high chance of removing border values, which makes it easy to place the classification margin. We used six machine learning models (kNN, SVM, RF, NB, LR, and DT) on the PIMA Indian dataset and measured the performance of all these models with and without the proposed algorithm, using the four standard metrics accuracy, precision, recall, and F1. The results revealed that the Support Vector Machine was more effective than the other models in predicting diabetes in terms of accuracy. In future work, we aim to develop a cross model that works on multiple datasets to overcome imbalanced classification.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that there is no conflict of interest to report regarding the present study.
