
    An Approach Using Fuzzy Sets and Boosting Techniques to Predict Liver Disease

    Computers, Materials & Continua, 2021, Issue 9

    Pushpendra Kumar and Ramjeevan Singh Thakur

    1Maulana Azad National Institute of Technology, Bhopal, India

    2Central University of Jharkhand, Ranchi, India

    3Maulana Azad National Institute of Technology, Bhopal, India

    Abstract: The aim of this research is to develop a mechanism to help medical practitioners predict and diagnose liver disease. Several systems have been proposed to help medical experts by reducing error and increasing accuracy in diagnosing and predicting diseases. Among the many existing methods, only a few have considered the class imbalance issues of liver disorder datasets. Not all samples of liver disorder datasets are useful for learning a classifier: some samples are redundant, which can increase the computational cost and affect the performance of the classifier. In this paper, a model is proposed that combines noise filtering, fuzzy sets, and boosting techniques (NFFBT) for liver disease prediction. Firstly, the noise filter (NF) eliminates outliers from the minority class and removes outlier and redundant pairs from the majority class. Secondly, the fuzzy set concept is applied to handle uncertainty in the datasets. Thirdly, the AdaBoost boosting algorithm is trained with several learners, viz. random forest (RF), support vector machine (SVM), logistic regression (LR), and naive Bayes (NB). The proposed NFFBT prediction system was applied to two datasets (ILPD and MPRLPD); AdaBoost with RF yielded 90.65% and 98.95% accuracy and F1 scores of 92.09% and 99.24% over the ILPD and MPRLPD datasets, respectively.

    Keywords: Fuzzy set; imbalanced data; liver disease prediction; machine learning; noise filter

    1 Introduction

    Liver diseases are a leading cause of death in India and across the world. Approximately two million people die annually because of liver disease throughout the world. In India alone, 216,865 people died from liver disease in 2014, representing 2.44% of all deaths in the country. In 2017, the number of deaths increased to 259,749, representing 2.95% of all deaths [1].

    Diagnosing liver disease in its early stages is a complicated task, as the liver continues to perform normally until it is severely damaged [2]. The diagnosis and treatment of liver disease are performed by medical experts. However, inappropriate treatment sometimes wastes time and money and causes the loss of life. Consequently, the development of an efficient and automatic liver disease prediction system is necessary for efficient and early diagnosis. Automated liver prediction systems take advantage of the data generated from the liver function test (LFT). Such a system can support the medical practitioner in diagnosing liver disease with less effort and more accuracy. The classification technique of a machine learning algorithm is applied when developing automated disease prediction systems [3,4]. The purpose of a classification algorithm is to predict the class label of an unknown instance [5], and it works adequately when the instances of the dataset are uniformly distributed among all the classes (balanced) [6]. Most healthcare datasets, such as those for breast cancer [7,8], heartbeat [9], diabetes [10-13], kidney [14], and liver disorders [15-17], involve class imbalance. Standard classification performs poorly when a dataset is not uniformly distributed among all the classes (imbalanced) because minority class data are classified as majority class data [18-20].

    Four procedures have been proposed to mitigate the issues related to class imbalance. These are (a) algorithm modifications, (b) sampling-based techniques, (c) cost-sensitive approaches, and (d) ensemble learning techniques.

    Algorithm modifications: This procedure adjusts the conventional algorithm by biasing the learning toward a solution to the imbalance problem [21]. This strategy does not disturb the original pattern of the data; however, it requires an understanding of the corresponding classifier and application [21,22].

    Sampling-based technique (SBT) [23-26]: Sampling can be accomplished by either oversampling or undersampling. Oversampling adds new or duplicate records to the minority class until the desired class proportion is obtained, whereas undersampling removes records from the majority class until the desired class ratio is achieved. The disadvantage of undersampling is that information may be lost if significant data are removed, while its advantage is that it decreases learning time by reducing the size of the learning data. Oversampling suffers from overfitting and increased model learning time. A minimal sketch of both strategies is given after this list.

    Cost-sensitive approach: This approach utilizes a variable cost matrix for instances that are misclassified by the model. The cost of misclassification needs to be defined in this approach, which is not usually given in datasets [24,25,27,28].

    Ensemble learning techniques (ELT): Ensemble learning (EL) [29] uses multiple learning algorithms to accomplish the same task. ELT has better classification and generalization ability than machine learners that use a single learner. In recent times, approaches that combine ELT and SBT have gained recognition for their ability to solve class imbalance issues.
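    As background for item (b), the following is a minimal sketch of random undersampling and oversampling. It is not the paper's own code; it assumes in-memory NumPy arrays and that the minority class is the smaller one.

    import numpy as np

    def random_resample(X, y, minority_label, strategy="under", seed=0):
        """Illustrative random resampling: 'under' drops majority rows,
        'over' duplicates minority rows until the two classes are balanced."""
        rng = np.random.default_rng(seed)
        min_idx = np.where(y == minority_label)[0]
        maj_idx = np.where(y != minority_label)[0]
        if strategy == "under":
            # keep as many majority rows as there are minority rows
            keep = rng.choice(maj_idx, size=len(min_idx), replace=False)
            idx = np.concatenate([min_idx, keep])
        else:
            # 'over': duplicate minority rows (with replacement) up to the majority count
            extra = rng.choice(min_idx, size=len(maj_idx) - len(min_idx), replace=True)
            idx = np.concatenate([maj_idx, min_idx, extra])
        return X[idx], y[idx]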

    The objective of this work is to develop a noise filter, fuzzy sets, and boosting technique (NFFBT) approach to predict liver disorder. The proposed NFFBT approach aids medical practitioners in interpreting the consequences of the LFT. Existing liver disorder detection techniques mostly apply only the boosting technique to handle the imbalance issues of LFT datasets. In contrast, the proposed NFFBT approach applies a noise filter to eliminate noise from both the majority and minority classes. This preserves the dataset's characteristics and reduces the model's training time. Then, the fuzzification system, which eliminates the uncertainty in the relationships among the features of the datasets, and the AdaBoost boosting algorithm are applied with different classifiers to handle the class imbalance issues. The architecture of the noise filter is shown in Fig. 1.

    Figure 1: Architecture of the noise filter

    The rest of this paper is arranged as follows. Section 2 discusses related works and the authors' motivation for this research. A description of the proposed NFFBT methodology is presented in Section 3. The results and discussion are presented in Section 4. Finally, a summary of the findings and the conclusions of this research are given in Section 5.

    2 Related Works

    In the last few years, many studies have been performed on liver disorder prediction using classification techniques. In these studies, the decisions made by the prediction systems and the input data from patients impacted liver disease diagnoses. The literature concerned with the proposed methodology is summarized in Tab. 1.

    Table 1: Summary of literature reviews concerned with the proposed methodology


    From the above studies, it is observed that there is still a need to develop an efficient and effective system for liver disease detection using a machine learning approach.

    Tab. 2 compares previous studies on liver disease prediction. From the comparison, it is observed that these studies have not considered outliers of the majority and minority classes and have neglected the class imbalance issues of LFT datasets. This paper addresses these issues.

    Table 2: Summary of literature reviews about liver disease prediction

    3 Proposed Methodology

    The proposed method consists of three stages: noise filtering, fuzzification, and the application of the AdaBoost boosting algorithm with different classifiers.

    3.1 Noise Removal

    The noise filter mechanism eliminates outliers from the dataset. It is an essential technique for noise removal, as real-world datasets are often noisy (LFT datasets are no exception). The KNN filter and redundancy-driven Tomek-link-based undersampling techniques are used to remove noise from the minority and majority classes, respectively.

    3.1.1 KNN Filter

    The KNN filter [21] eliminates outliers from the minority class. It categorizes minority instances into highly desirable samples, moderately desirable samples, and outliers. A sample from the minority class is labeled highly desirable if all the nearest neighbors of that instance belong to the minority class. A sample from the minority class is labeled moderately desirable if its nearest neighbors belong to both the minority and majority classes. A sample from the minority class is labeled an outlier (or noise) if all the nearest neighbors of that instance belong to the majority class. The procedure of the KNN filter is given in Algorithm 1.

    For a dataset D, let D_m ⊆ D and D_M ⊆ D, where D_m and D_M are the minority and majority class samples, respectively, in D.

    Algorithm 1: KNN filter
    Input: D_m, D_M, and K (number of nearest neighbors)
    Output: Outlier-free minority class dataset D'_m
    1. For each instance i = 1 to |D_m|:
       (1) Find the K nearest neighbors of instance i in the dataset D, excluding instance i itself.
       (2) Count the nearest neighbors belonging to the minority class (s_m) and to the majority class (s_M).
       (3) If K = s_M, then
       (4) instance i is considered an outlier and is marked with the label 'o'.
    2. End for
    3. Delete the instances marked as 'o'.
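    A minimal sketch of Algorithm 1 is given below. It assumes NumPy arrays and scikit-learn's NearestNeighbors (the paper's own implementation was written in MATLAB), and it relies on each query point being returned as its own first neighbor.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def knn_filter(X, y, minority_label, k=5):
        """Remove minority samples whose k nearest neighbours (excluding the
        sample itself) all belong to the majority class, per Algorithm 1."""
        nn = NearestNeighbors(n_neighbors=k + 1).fit(X)   # +1: the query point is its own neighbour
        _, neigh_idx = nn.kneighbors(X[y == minority_label])
        outlier_mask = np.array([
            np.all(y[idx[1:]] != minority_label)          # idx[0] is the sample itself
            for idx in neigh_idx
        ])
        keep_minority = np.where(y == minority_label)[0][~outlier_mask]
        keep = np.concatenate([np.where(y != minority_label)[0], keep_minority])
        return X[keep], y[keep]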

    3.1.2 Redundancy-Driven Tomek-Link-Based Undersampling (R_TLU)

    R_TLU [23,43] eliminates Tomek-link pairs and redundancy from the majority class. A pair of patterns p_m and p_n is called a Tomek-link pair if there exists no pattern p_k such that d(p_m, p_k) < d(p_m, p_n), where class(p_m) ≠ class(p_n). Essentially, p_m and p_n are boundary instances that promote misclassification. An instance is redundant if there exists another instance with an equal ability to perform the same classification task. Redundant pairs are detected based on a similarity measure. Based on the contribution factor (Contr_p), the redundant majority pattern can be eliminated from a majority redundant pair, where n is the number of instances, m is the number of attributes of each instance, lnf is the log-likelihood function, and C_1 is the class label of the majority class. Instances with many redundancies and a low contribution factor are eliminated as defined in Eq. (1).
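    For illustration, a sketch of the Tomek-link detection half of R_TLU is shown below (the redundancy and contribution-factor part is omitted). It assumes NumPy arrays and scikit-learn's pairwise_distances, and is not the paper's implementation.

    import numpy as np
    from sklearn.metrics import pairwise_distances

    def tomek_link_pairs(X, y):
        """Return index pairs (i, j) that form Tomek links: i and j are each
        other's nearest neighbour but carry different class labels."""
        d = pairwise_distances(X)
        np.fill_diagonal(d, np.inf)        # a point is not its own neighbour
        nn = d.argmin(axis=1)              # nearest neighbour of every instance
        links = []
        for i, j in enumerate(nn):
            if nn[j] == i and y[i] != y[j] and i < j:
                links.append((i, j))
        return links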

    3.2 Fuzzification Subsystem

    In 1965, Zadeh [44] introduced the concept of the fuzzy set, which deals with uncertainty arising due to the strength of the relationships among the elements of a set [37]. Let Y be a universal set, and let a fuzzy set A (over Y) be represented as A = {(y, μ_A(y)) : y ∈ Y}, where μ_A(y) represents the degree of membership of y. Each attribute of the liver disorder datasets is transformed into a fuzzy set with a specific membership value using a trapezoidal membership function [33]:

    μ_A(a) = 0,                        a ≤ n1
    μ_A(a) = (a − n1)/(n2 − n1),       n1 ≤ a ≤ n2
    μ_A(a) = 1,                        n2 ≤ a ≤ n3
    μ_A(a) = (n4 − a)/(n4 − n3),       n3 ≤ a ≤ n4
    μ_A(a) = 0,                        a ≥ n4                  (2)

    Here, n1, n2, n3, and n4 are applied to determine the membership value of the attribute value A.
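    The membership function of Eq. (2) can be coded directly; the sketch below assumes scalar inputs and the breakpoint ordering n1 ≤ n2 ≤ n3 ≤ n4.

    def trapezoidal_mu(a, n1, n2, n3, n4):
        """Trapezoidal membership value of attribute value a for the
        breakpoints n1 <= n2 <= n3 <= n4, as in Eq. (2)."""
        if a <= n1 or a >= n4:
            return 0.0
        if n1 < a < n2:
            return (a - n1) / (n2 - n1)    # rising edge
        if n3 < a < n4:
            return (n4 - a) / (n4 - n3)    # falling edge
        return 1.0                         # plateau: n2 <= a <= n3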

    3.3 Description and Fuzzification of Datasets

    Numerous studies have been performed using machine learning techniques; however, liver disease prediction remains underexplored. Therefore, the ILPD and MPRLPD datasets are used in the evaluation of this study. The ILPD dataset consists of 583 records from two classes of liver patients (416 patients suffering from a liver disorder and 167 without a liver disorder). This dataset was collected from the UCI repository [45], and it has 10 features. The MPRLPD dataset consists of 7865 liver patient records. Of these patients, 6282 had some kind of liver disease, and the other 1583 were healthy. This dataset consists of 12 features and was collected from Madhya Pradesh in the Bhopal region of India. The datasets' statistics (after eliminating noise, or outliers, from the minority and majority classes) are shown in Tab. 3.

    Table 3: Datasets' statistics

    3.3.1 Fuzzification of the ILPD Dataset

    The ILPD dataset [45] has nine attributes with a numerical datatype. During fuzzification, six features, namely age, AlkPhos, SGPT, SGOT, TP, and albumin, are represented by three fuzzy variables. Total bilirubin (TB) and direct bilirubin (DB) are represented by four variables. The remaining attribute (A/G ratio) is represented by two variables. Fig. 2 illustrates the fuzzification of the ILPD dataset using the membership function given in Eq. (2).

    Figure 2: Fuzzification of the numerical features of the ILPD dataset
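    As a usage example of the trapezoidal_mu sketch above, a numerical feature such as age can be mapped to three fuzzy variables. The breakpoints below are hypothetical and only illustrate the mechanism; the paper's actual breakpoints are those shown in Fig. 2.

    # Hypothetical breakpoints, for illustration only.
    AGE_SETS = {
        "young":  (0, 0, 25, 40),
        "middle": (25, 40, 50, 65),
        "old":    (50, 65, 120, 120),
    }

    def fuzzify_age(age):
        """Map a crisp age to membership degrees over three fuzzy variables."""
        return {name: trapezoidal_mu(age, *pts) for name, pts in AGE_SETS.items()}

    print(fuzzify_age(45))   # {'young': 0.0, 'middle': 1.0, 'old': 0.0}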

    3.3.2 Fuzzification of the MPRLPD Dataset

    The MPRLPD dataset has 11 attributes with a numerical datatype. During fuzzification, seven attributes, namely age, TB, IB, SGPT, SGOT, TP, and A/G ratio, are represented by three variables, whereas AlkPhos is represented by three and four variables for children and adults, respectively. The remaining attributes (DB and albumin) are represented by four and two variables, respectively.

    3.4 Classification Subsystem

    The classification subsystem implements the boosting technique to improve the performance of the classifier for imbalanced datasets. The boosting technique builds a strong classifier from several weak classifiers. Weak classifiers are algorithms whose error rate is less than that of random guessing (50%). In the proposed work, classification is done using the AdaBoost boosting algorithm [46,47]. The steps used in the AdaBoost algorithm are given below.

    Initialization step: for every pattern p ∈ D, set ω(p) = 1/P, where P is the total number of patterns.

    Iteration step: for k = 1 to K

    1) Based on the weights ω(p), find the best weak classifier h_k(p).

    2) Compute the total error ε_k as the sum of the weights ω(p) of the misclassified patterns.

    3) Compute the classifier weight α_k = (1/2)·ln((1 − ε_k)/ε_k).

    4) Update the weights of the misclassified patterns: ω(p) = ω(p)·e^(α_k).

    5) Normalize the weights so that Σ_p ω(p) = 1.

    6) Output of the final classifier: H(p) = sign(Σ_{k=1..K} α_k·h_k(p)).
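    A minimal from-scratch sketch of this loop is shown below. It uses a decision stump as a stand-in weak learner (the paper boosts RF, SVM, LR, and NB) and assumes labels in {−1, +1} and base learners that accept sample weights.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    def adaboost_train(X, y, n_rounds=50):
        """Minimal AdaBoost loop following the steps above; y must be in {-1, +1}."""
        P = len(y)
        w = np.full(P, 1.0 / P)                       # initialization: w(p) = 1/P
        learners, alphas = [], []
        for _ in range(n_rounds):
            h = DecisionTreeClassifier(max_depth=1)   # weak learner (decision stump)
            h.fit(X, y, sample_weight=w)
            pred = h.predict(X)
            err = np.sum(w[pred != y])                # total weighted error
            if err == 0 or err >= 0.5:                # no longer a useful weak learner
                break
            alpha = 0.5 * np.log((1 - err) / err)     # learner weight alpha_k
            w[pred != y] *= np.exp(alpha)             # up-weight misclassified patterns
            w /= w.sum()                              # normalize so the weights sum to 1
            learners.append(h)
            alphas.append(alpha)
        return learners, alphas

    def adaboost_predict(X, learners, alphas):
        """Final classifier: sign of the alpha-weighted vote of the weak learners."""
        votes = sum(a * h.predict(X) for h, a in zip(learners, alphas))
        return np.sign(votes)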

    4 Results and Discussion

    This section presents the evaluation of the NFFBT approach's performance. The proposed approach is evaluated on two datasets. One is a benchmark dataset collected from the UCI repository, and the other was collected from a local hospital in Bhopal, India. Both datasets have two classes. The RF [47], SVM [48], LR [49], and NB [6] machine learning algorithms are applied with the boosting technique on data prepared using the NFFBT approach (outlier-free datasets), as well as on the original datasets. MATLAB R2014a and Python were used to conduct the experiments. The NFFBT approach is implemented in MATLAB R2014a, and the classifications are performed in Python.

    The performance of the proposed model is validated according to measures that are calculated from the values of the confusion matrix. The confusion matrix [50] summarizes the predicted results of a classifier (Tab. 4). The performance measures, namely accuracy (Accu), specificity (Spec), sensitivity (Sens), precision (Prec), false positive rate (FP_rate), false negative rate (FN_rate), F1-score, G-mean, and area under the curve (AUC), are used to appraise the developed model (Tab. 5). The results are evaluated using 10-fold cross-validation over the mentioned measures.

    Table 4: Confusion matrix (CM)

    Table 5: Performance measures
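    For reference, the measures of Tab. 5 can be computed from the confusion-matrix counts of Tab. 4 as in the following sketch (positive denotes the liver-disease class; division-by-zero handling is omitted).

    import math

    def performance_measures(tp, tn, fp, fn):
        """Measures of Tab. 5 computed from the confusion-matrix counts of Tab. 4."""
        sens = tp / (tp + fn)                  # sensitivity / recall / TP rate
        spec = tn / (tn + fp)                  # specificity
        prec = tp / (tp + fp)                  # precision
        return {
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": sens,
            "specificity": spec,
            "precision":   prec,
            "fp_rate":     fp / (fp + tn),     # 1 - specificity
            "fn_rate":     fn / (fn + tp),     # 1 - sensitivity
            "f1_score":    2 * prec * sens / (prec + sens),
            "g_mean":      math.sqrt(sens * spec),
        }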

    Tabs. 6 and 8 show the results on the original datasets, whereas Tabs. 7 and 9 show the results on the outlier-free datasets. Tab. 6 contains the results for the original ILPD dataset. For this dataset, the best Accu (78.39%), Spec (64.34%), Prec (87.74%), FP_rate (35.66%), F1-score (85.28%), G-mean (73.05%), and AUC (73.65%) are obtained using AdaBoost with RF. Meanwhile, the best Sens (96.38%) and FN_rate (3.62%) are obtained using AdaBoost with NB.

    Table 6: Original ILPD dataset

    The ILPD dataset is processed using the NFFBT technique, for which AdaBoost is used along with RF, SVM, LR, and NB on the outlier-free ILPD dataset (Tab. 7). It is found that AdaBoost with RF produces better results than the other mentioned classifiers for Accu (90.65%), Spec (92.75%), Sens (89.30%), Prec (95.05%), FP_rate (7.25%), FN_rate (10.70%), F1-score (92.09%), G-mean (91.01%), and AUC (91.03%). Tab. 7 indicates better results than Tab. 6 because it contains results derived from the improved ILPD dataset.

    Table 7: Outlier-free ILPD dataset

    Accuracy is a valid metric of classifier efficiency for experiments performed on balanced datasets. In this study, both the ILPD and MPRLPD datasets are imbalanced; therefore, the F1-score, which balances precision and recall, is the more informative measure. The F1-scores of AdaBoost+RF were 92.09% and 99.21% in Tabs. 7 and 9, respectively. This confirms that the AdaBoost+RF technique performs better than the other three techniques for these two datasets.

    Tab. 8 shows the results for the original MPRLPD dataset. AdaBoost with RF produced the best results for Accu (91.21%), Spec (85.28%), Prec (97.04%), FP_rate (14.72%), F1-score (94.64%), G-mean (88.75%), and AUC (88.82%), whereas AdaBoost with NB produced the best results for Sens (99.70%) and FN_rate (0.30%).

    Table 8: Original MPRLPD dataset

    Tab. 9 shows the results on the MPRLPD dataset improved using the NFFBT approach. AdaBoost with RF produced the best results for Accu (98.98%), Spec (98.00%), Sens (99.42%), Prec (99.01%), FP_rate (2.00%), FN_rate (0.58%), F1-score (99.21%), G-mean (99.21%), and AUC (98.71%).

    The Prec value of 99.01% in Tab. 9 indicates that, of 100 patients the AdaBoost+RF combination predicts as diseased, about 99 truly have liver disease and one is healthy. Meanwhile, AdaBoost+SVM, AdaBoost+LR, and AdaBoost+NB predict 90.59%, 89.87%, and 92.23% of patients with a liver disorder, respectively.

    Table 9: Outlier-free MPRLPD dataset

    Figure 3: (a & b) The ROC curves for the ILPD dataset; (c & d) the ROC curves for the MPRLPD dataset

    Because liver disease is a significant cause of death in India and globally, patients need to be diagnosed accurately. If a healthy person is incorrectly diagnosed as diseased (a false positive), that person's health would be put at risk by unnecessary treatment. Hence, in cases where false positives are costly, Spec is the best evaluation metric. In Tab. 9, the Spec value for AdaBoost+RF was 98%, meaning that false positives are rare (2%).

    The ROC curve is formed by plotting TP_rate against FP_rate at various threshold levels. It gives a visual portrayal of the relative tradeoffs between the TP_rate (Sens) and FP_rate (1 − Spec) of classifications with respect to data distributions (FP_rate is on the x-axis, and TP_rate is on the y-axis).

    AUC is a measure of the separation capability of a classifier on a particular dataset. The ROC curves are drawn from the results of the proposed NFFBT on the ILPD and MPRLPD datasets. A comparison of Figs. 3a and 3b shows that all four techniques (i.e., AdaBoost+RF, AdaBoost+SVM, AdaBoost+LR, and AdaBoost+NB) presented comparatively better separability between the diseased and healthy classes for the outlier-free ILPD dataset than for the original ILPD dataset. Specifically, AdaBoost+RF produced the best disease predictions, and AdaBoost+SVM was the poorest performer. Similarly, these four techniques also showed more promising results on the outlier-free MPRLPD dataset (Fig. 3d) than on the original MPRLPD dataset (Fig. 3c). Specifically, AdaBoost+RF performed best regarding the separation of healthy patients from those with liver disease, as indicated by an AUC close to 1.
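    A sketch of how such ROC curves and AUC values can be produced with scikit-learn and matplotlib is given below; y_score is assumed to be the predicted probability of the diseased class, and the function is not the paper's own plotting code.

    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, auc

    def plot_roc(y_true, y_score, label):
        """Plot one ROC curve (FP rate on x, TP rate on y) and report its AUC."""
        fpr, tpr, _ = roc_curve(y_true, y_score)   # y_score: probability of the positive class
        roc_auc = auc(fpr, tpr)
        plt.plot(fpr, tpr, label=f"{label} (AUC = {roc_auc:.2f})")
        plt.xlabel("False positive rate")
        plt.ylabel("True positive rate")
        plt.legend()
        return roc_auc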

    5 Conclusion

    In this paper, an NFFBT approach is proposed. This approach works in two main phases. First, noise is eliminated using the KNN filter and R_TLU techniques: the KNN filter eliminates outliers from the minority class, and R_TLU eliminates outliers from the majority class. The datasets are then fuzzified so that uncertainty can be handled. In the second phase, the fuzzified datasets are classified using AdaBoost with RF, SVM, LR, and NB.

    The ILPD and MPRLPD datasets have been used in experiments to evaluate the performance of the NFFBT approach. Because these datasets are imbalanced, the AdaBoost algorithm, which can handle imbalanced data, is applied. The AdaBoost boosting algorithm is applied with different classifiers both without outlier removal (original datasets) and after noise removal and fuzzification (NFFBT).

    The results show improvements in Accu (12.26%), Spec (28.41%), Sens (6.35%), Prec (7.31%), FP_rate (28.41%), FN_rate (6.35%), F1-score (6.81%), G-mean (17.96%), and AUC (17.38%) using the NFFBT approach when compared to the original ILPD dataset. Meanwhile, improvements in Accu (7.74%), Spec (12.72%), Sens (7.07%), Prec (1.97%), FP_rate (12.72%), FN_rate (7.07%), F1-score (4.57%), G-mean (9.96%), and AUC (9.89%) were achieved using the NFFBT approach when compared with the original MPRLPD dataset.

    These results confirm the advantage of the proposed NFFBT approach over applying AdaBoost with RF directly to the original datasets. Based on the results, we argue that NFFBT can be used by healthcare organizations and liver research institutes to classify imbalanced LFT data. It can also be utilized as a screening tool by doctors to predict and diagnose liver disease.

    In the future, similar experiments can be performed on imbalanced datasets in other domains, such as finance, cyber forensics, and athlete doping tests, among many others.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
