
    Iterative Semi-Supervised Learning Using Softmax Probability

Computers, Materials & Continua, 2022, Issue 9

    Heewon Chung and Jinseok Lee

    Department of Biomedical Engineering,College of Electronics and Information,Kyung Hee University,Yongin-si,Gyeonggi-do,17104,Korea

Abstract: For classification problems in practice, one of the most challenging issues is obtaining enough labeled data for training. Moreover, even when labeled data has been sufficiently accumulated, most datasets exhibit a long-tailed distribution with heavy class imbalance, which biases the model towards the majority classes. To alleviate such class imbalance, semi-supervised learning methods using additional unlabeled data have been considered. However, their accuracy is, as a matter of course, much lower than that of supervised learning. In this study, under the assumption that additional unlabeled data is available, we propose iterative semi-supervised learning algorithms that iteratively correct the labeling of the extra unlabeled data based on softmax probabilities. The results show that the proposed algorithms provide accuracy as high as that of supervised learning. To validate the proposed algorithms, we tested two scenarios: one with a balanced unlabeled dataset and one with an imbalanced unlabeled dataset. Under both scenarios, our proposed semi-supervised learning algorithms provided higher accuracy than previous state-of-the-art methods. Code is available at https://github.com/HeewonChung92/iterative-semi-learning.

Keywords: Semi-supervised learning; class imbalance; iterative learning; unlabeled data

    1 Introduction

Image classification is the problem of categorizing images into one of multiple classes. It has been considered one of the most important tasks since it is the basis for other computer vision tasks such as image detection, localization and segmentation [1-6]. Since AlexNet [7] was introduced, deep neural networks (DNNs) have evolved remarkably via VGG-16 [8], GoogLeNet [9], ResNet [10] and Inception-V3 [11], especially for image classification tasks. DNNs have been widely used for a variety of tasks and have set new state-of-the-art results, sometimes even surpassing human performance on image classification tasks.

However, when dealing with classification problems in practice, we face many practical issues, and one of the most challenging is acquiring enough labeled data for training. The acquisition of labeled data often requires a lot of time as well as professional and delicate work. A recent study reported that physicians spent an average of 16 minutes and 14 seconds per encounter using electronic health records (EHRs), with chart review (33%), documentation (24%), and ordering (17%) functions accounting for most of that time [12]. The manual labeling of medical images also requires intensive labor [13,14]. In addition, even if enough labeled data is acquired, there is another challenging issue referred to as the imbalanced dataset. For instance, in the classification of data for a specific disease, there is much more data from healthy subjects than from patients.

To resolve these issues, semi-supervised learning methods using additional unlabeled data have been widely considered. Semi-supervised learning is a machine learning approach that combines a small amount of labeled data with a large amount of unlabeled data during training [15-17]. In this study, we propose novel semi-supervised learning algorithms that provide performance at the level of supervised learning by focusing on automatically and accurately labeling additional unlabeled data. More specifically, to accurately label the unlabeled data, we use the softmax probability as a confidence index and decide whether to assign a pseudo-label to each unlabeled sample. The data with assigned labels are then used continuously for training. Finally, the process is repeated until pseudo-labels have been assigned to all unlabeled data with high confidence. Our proposed approach is innovative because it effectively and accurately labels the unlabeled data using the simple mathematical softmax function. For classification problems, softmax is an essential part of a model, usually used in the last output layer. Thus, we can effectively label the unlabeled data without additional computational complexity.

This paper is organized as follows. Section 2 lists related works. Section 3 provides the specific motivation for dealing with unlabeled data. In Section 4, we introduce our proposed iterative semi-supervised learning using softmax probabilities. Section 5 describes the datasets and experimental setup, Section 6 verifies the performance of our algorithms through comparative experiments, and the conclusion and future work are given in Section 7.

    2 Related Works

The difficulty of acquiring labeled data and the imbalanced data issue have been investigated by many research groups [18-21]. One of the popular approaches to handling the imbalanced data issue is data-level techniques, including over-sampling and under-sampling [22-24]. Under-sampling is a technique to balance an imbalanced dataset by keeping all of the data in the minority group and decreasing the size of the majority group. This technique is mainly used when the amount of data belonging to both the minority and majority groups is large. Over-sampling is a technique to balance an imbalanced dataset by increasing the size of the minority group, mainly by duplicating randomly selected data from the minority group. A more advanced technique is the synthetic minority over-sampling technique (SMOTE), which generates a new data point by selecting a point on the line connecting a randomly chosen minority class sample and one of its k nearest neighbors [25]. Let us denote the synthetic data point by x_new, which can be expressed as

x_new = x + λ · (x_near − x),

where x is a random sample belonging to the minority group and x_near is one of the k nearest neighbors of x. The parameter λ is an independent and identically distributed random number uniformly distributed on [0,1]. SMOTE has the advantage of being able to increase the size of the minority group without duplicating data. Similar to SMOTE, the adaptive synthetic sampling (ADASYN) technique generates new data points based on the k nearest neighbors [26]. By considering the data distribution, it generates more data for samples that are harder to learn than for those that are easier to learn. Thus, it can adaptively shift the decision boundary to focus on hard-to-learn data. Since data-level over-sampling techniques balance out the number of samples in each group, the trained models have worked well in a variety of applications. However, such over-sampling techniques are only available when the data is represented as a vector.
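As a concrete illustration, here is a minimal NumPy sketch of the SMOTE interpolation above; the choice of k and the helper name `smote_sample` are illustrative, not taken from [25].

```python
# A minimal sketch of SMOTE-style interpolation, assuming vector-valued
# samples; k and the helper name are illustrative choices.
import numpy as np

def smote_sample(minority: np.ndarray, k: int = 5, rng=None) -> np.ndarray:
    """Generate one synthetic minority sample x_new = x + lam * (x_near - x)."""
    rng = rng or np.random.default_rng()
    i = rng.integers(len(minority))
    x = minority[i]
    # distances from x to all minority samples
    d = np.linalg.norm(minority - x, axis=1)
    neighbors = np.argsort(d)[1:k + 1]          # k nearest neighbors, excluding x itself
    x_near = minority[rng.choice(neighbors)]    # pick one neighbor at random
    lam = rng.uniform(0.0, 1.0)                 # lambda ~ U[0, 1]
    return x + lam * (x_near - x)
```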

Another approach to handling the imbalanced data issue is algorithmic. In the algorithmic approach, the learning process is adjusted in a way that emphasizes the importance of the minority group data. Most commonly, the cost or loss function is modified to weigh the minority group data more or the majority group data less [18,27,28]. Such sample weighting in the loss function weighs the loss computed for different samples differently based on whether they belong to the majority or the minority group. For the weight factors, the inverse of the number of samples or the inverse of the square root of the number of samples can be used. Recently, Cui et al. [29] introduced the effective number of samples E_nc, which can be defined as

E_nc = (1 − β^nc) / (1 − β),

where nc is the number of samples in class c and β is a hyperparameter on [0,1]. By using the effective number of samples, the weight factor 1/E_nc weighs the loss from each sample according to whether it belongs to the majority or the minority group. This algorithmic approach has also worked well in a variety of applications. Nevertheless, the imbalanced dataset issue is not completely solved; the fundamental solution is to increase the number of diverse samples by acquiring more new data.
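For illustration, a short sketch of class-balanced weighting with the effective number of samples; the value of β and the normalization convention below are illustrative assumptions, not values from [29].

```python
# Class-balanced weights via the effective number of samples
# E_nc = (1 - beta**n_c) / (1 - beta); beta and the normalization
# convention are illustrative.
import numpy as np

def class_balanced_weights(samples_per_class, beta: float = 0.999) -> np.ndarray:
    n_c = np.asarray(samples_per_class, dtype=np.float64)
    effective_num = (1.0 - np.power(beta, n_c)) / (1.0 - beta)
    weights = 1.0 / effective_num                 # rarer classes get larger weights
    return weights * len(n_c) / weights.sum()     # normalize to sum to the class count

# e.g., a 50:1 imbalance between two classes:
print(class_balanced_weights([5000, 100]))        # minority class receives the larger weight
```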

As mentioned above, the most challenging part of acquiring data is labeling new data. It not only takes a lot of time, but also requires professional and delicate work. Recently, Yang et al. [30] demonstrated that pseudo-labels on extra unlabeled data can improve classification performance, especially with imbalanced datasets. The method is based on the fact that unlabeled data is relatively easy to obtain while labeled data is difficult to obtain. Based on the model trained with the original data, extra unlabeled data was subsequently labeled. Accordingly, it was shown that the model trained with the additional unlabeled data provided better performance. However, the pseudo-labels can also be biased towards the majority of the data, so the improvement from the extra unlabeled data is limited. In our work, we focus on how to more correctly label the unlabeled data, which eventually provides better performance.

    3 Preliminaries and Motivation

Given a simple binary classification problem on data P_XY drawn from a mixture of two Gaussians, consider that each sample has the label Y: +1 or −1. Consider also that the distribution of X|Y is N(μ1, σ²) when Y = +1. Similarly, when Y = −1, the distribution of X|Y is N(μ2, σ²), where μ1 > μ2. Given one sample x, if x > (μ1 + μ2)/2, then x can be classified into +1; otherwise −1. Accordingly, the classifier can be expressed as f(x) = sign(x − (μ1 + μ2)/2), where the term (μ1 + μ2)/2 needs to be learned based on the data set X and the corresponding label set Y.

However, given imbalanced training data, the term (μ1 + μ2)/2 in the trained classifier will be shifted towards the mean value of the minority class. If the majority of the data has the label Y = +1, then the classifier can be derived as f(x) = sign(x − ((μ1 + μ2)/2 − α)), where α > 0. Fig. 1a illustrates an example of such a biased classifier, which focuses mainly on improving the classification performance of the majority class. This class imbalance issue can be resolved by balancing the data classes via a data sampling approach such as over-sampling or under-sampling, as shown in Fig. 1b: in this example, the predicted decision boundary is closer to the actual boundary after using the under- or over-sampling method. Similarly, sample weighting methods also move the predicted decision boundary towards the actual boundary.
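The boundary shift described above can be reproduced with a toy simulation; the means, variance and sample sizes below are illustrative choices, not values from the paper.

```python
# A toy simulation of the two-Gaussian example; means, variance and sample
# sizes are illustrative. Fitting the threshold that minimizes raw training
# error on imbalanced data shifts it from (mu1 + mu2)/2 = 0 towards the
# minority class mean (i.e., alpha > 0 in the text).
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 2.0, -2.0
x_pos = rng.normal(mu1, 1.0, size=5000)   # majority class (Y = +1)
x_neg = rng.normal(mu2, 1.0, size=100)    # minority class (Y = -1), 50:1 imbalance

def error(t):
    # classify x as +1 if x > t; count all misclassified samples
    return (np.sum(x_pos <= t) + np.sum(x_neg > t)) / (len(x_pos) + len(x_neg))

grid = np.linspace(-4, 4, 801)
t_hat = grid[np.argmin([error(t) for t in grid])]
print(f"actual boundary: 0.00, learned boundary: {t_hat:.2f}")  # learned boundary < 0
```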

Fig. 1c illustrates another example of a biased classifier, which again focuses on improving the performance of the majority class. In this example, however, the number of samples from the minority class is too small to generalize the data corresponding to the minority class. Since the minority class data does not generalize to its actual distribution, no sampling approach can improve the performance, as shown in Fig. 1d: in this example, the predicted decision boundary is almost unchanged even after using the under- or over-sampling method. Similarly, sample weighting methods also have little effect on the predicted decision boundary.

To alleviate the class imbalance issue, Yang et al. [30] recently demonstrated, theoretically and empirically, that pseudo-labels on extra unlabeled data can improve classification performance, especially with an imbalanced dataset. More specifically, a base classifier f_B was first trained on the original imbalanced training data. Subsequently, extra unlabeled data was labeled using f_B. Finally, by re-training f_B with the additional pseudo-labeled data, the classifier was shown to be improved. However, the pseudo-labels can also be biased towards the majority of the data, which results in incorrect labeling, especially for the minority of the data. Thus, the improvement from the extra unlabeled data is limited. In this study, we present algorithms that can improve the labeling accuracy, which eventually improves the overall classification performance.

Figure 1: Examples of a biased classifier and the effects of data-level techniques; (a) an example of a biased classifier, (b) the effect of the under- or over-sampling method (the predicted decision boundary moves closer to the actual boundary), (c) another example of a biased classifier, (d) the effect of the under- or over-sampling method (little effect on the predicted decision boundary)

    4 Iterative Semi-Supervised Learning Using Softmax Probability

    4.1 Algorithm Description

In this study, we propose semi-supervised learning algorithms that iteratively correct the labeling of the extra unlabeled data. Algorithm 1 presents the pseudo-code of our proposed algorithm, named iterative semi-supervised learning based on softmax probability (ISSL-SP). Let us denote the original labeled data and the extra unlabeled data by Data_ori and Data_un, respectively. From the instance perspective, let us denote the i-th extra unlabeled sample and its corresponding label by Data_un^i and Label_un^i, respectively. Let us also denote the i-th original labeled sample and its corresponding label by Data_ori^i and Label_ori^i, respectively. Before applying ISSL-SP, we first train a base classifier f_B using the original training data Data_ori. In the first stage, we consider the softmax probabilities corresponding to each class for Data_un^i, where i = 1, ..., n(Data_un) for the number of unlabeled samples. For each Data_un^i, if the maximum value of the softmax probabilities is equal to or greater than 0.99, we assign the corresponding class to Label_un^i. Here, the optimized threshold value of 0.99 was found throughout this study, and the trade-off between the accuracy metrics and the threshold value is described in the Results. On the other hand, if the maximum value of the softmax probabilities is less than 0.99, we assign the label Label_un^i as undefined. Every iteration, we update f_B to f_new using all available labeled data for training. Finally, we collect the data whose labels remain undefined and repeat the entire process until all the data is labeled with a specific class. In this way, ISSL-SP improves the overall classification performance by assigning labels only with high softmax probability. A condensed sketch of this loop is given after Algorithm 1 below.

Algorithm 1 Iterative semi-supervised learning based on softmax probability (ISSL-SP). This algorithm is given a base classifier f_B which was trained with the original training data Data_ori. We consider that the data has the labels 1, 2, ...

Require
1: Data_ori: original training data
2: Data_un: extra unlabeled data
3: f_B: base classifier providing softmax probabilities // f_B was trained with Data_ori
4: function ISSL-SP(f_B, Data_un, n(Data_un)) // n(Data_un): the number of samples in Data_un
5:   f_new = f_B
6:   while n(Data_un) > 0 do
7:     for i = 1 to n(Data_un) do
8:       // Data_un^i: i-th unlabeled sample
9:       probs = f_new(Data_un^i) // softmax probabilities for each class
10:      if max(probs) ≥ 0.99 then
11:        // 0.99 or higher is considered a correct label for Data_un^i
12:        Label_un^i = argmax(probs)
13:      else
14:        Label_un^i = -1 // undefined
15:      end if
16:    end for
17:    Update f_new based on all available data, including Data_ori and Data_un with Label_un > 0
18:    Update Data_un to the samples with Label_un^i = -1
19:  end while
20:  return f_new
21: end function
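Below is a condensed PyTorch-style sketch of the loop in Algorithm 1; `model`, `retrain` and the tensor containers are hypothetical stand-ins rather than the authors' released code, and the early `break` guards against the case where no remaining sample crosses the threshold.

```python
# A condensed PyTorch-style sketch of Algorithm 1 (ISSL-SP); `model`,
# `retrain` and the tensors are hypothetical stand-ins.
import torch
import torch.nn.functional as F

THRESHOLD = 0.99  # softmax confidence threshold used throughout the paper

def issl_sp(model, labeled_x, labeled_y, unlabeled_x, retrain):
    while len(unlabeled_x) > 0:
        with torch.no_grad():
            probs = F.softmax(model(unlabeled_x), dim=1)   # per-class probabilities
        conf, pseudo = probs.max(dim=1)                    # max probability and its class
        confident = conf >= THRESHOLD                      # confident -> accept pseudo-label
        if not confident.any():
            break                                          # nothing left to label confidently
        labeled_x = torch.cat([labeled_x, unlabeled_x[confident]])
        labeled_y = torch.cat([labeled_y, pseudo[confident]])
        unlabeled_x = unlabeled_x[~confident]              # keep only the undefined samples
        model = retrain(model, labeled_x, labeled_y)       # update f_new on all labeled data
    return model
```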

    4.2 Algorithm Insight

Based on Label_un^i with Data_un^i labeled by f_B, let us denote the data corresponding to Label_un^i = +1 by X^+. Similarly, let us denote the data corresponding to Label_un^i = −1 by X^−. As we mentioned above, our aim is to learn (μ1 + μ2)/2. Here, with X^+ and X^−, the estimator can be constructed by

θ̂ = (1/2) · [ (1/n^+) Σ_{x∈X^+} x + (1/n^−) Σ_{x∈X^−} x ],

where n^+ and n^− are the numbers of samples in X^+ and X^−, respectively. Given the distribution of X^+, N(μ1, σ²), and that of X^−, N(μ2, σ²), the estimator can be expressed by

θ̂ ~ N( (μ1 + μ2)/2, (σ²/4)(1/n^+ + 1/n^−) ),

so that, as the pseudo-labels become accurate, θ̂ concentrates around the actual decision boundary (μ1 + μ2)/2.

    4.3 A variant of ISSL-SP

The ISSL-SP algorithm can be extended into a variety of forms. Algorithm 2 presents the pseudo-code of ISSL-SP with re-labeling of all the initial unlabeled data (ISSL-SPR). As a variant of ISSL-SP, ISSL-SPR is the same as ISSL-SP except that all of the unlabeled data is labeled again every iteration: line 18 of ISSL-SP (Algorithm 1) is omitted. Since the updated classifier f_new is trained with ever-increasing data, it can provide better performance as the process is repeated; thus, it may be beneficial for the initial unlabeled data Data_un to be labeled over and over again. To sum up, ISSL-SP labels only the data assigned as undefined, while ISSL-SPR labels all of the initial unlabeled data again. A sketch of this variant is given after Algorithm 2 below.

Algorithm 2 A variant of ISSL-SP: ISSL-SPR. This algorithm is the same as ISSL-SP, except that all of the unlabeled data are labeled again every iteration.

Require
1: Data_ori: original training data
2: Data_un: extra unlabeled data
3: f_B: base classifier providing softmax probabilities // f_B was trained with Data_ori
4: function ISSL-SPR(f_B, Data_un, n(Data_un)) // n(Data_un): the number of samples in Data_un
5:   f_new = f_B
6:   while True do
7:     Same as lines 7 to 17 in Algorithm 1
8:     if n(Label_un == -1) == 0 then
9:       break
10:    end if
11:  end while
12:  return f_new
13: end function
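A matching sketch of the ISSL-SPR variant, under the same hypothetical helpers as the ISSL-SP sketch above; the only substantive change is that the whole initial unlabeled pool is re-labeled every iteration.

```python
# A sketch of ISSL-SPR: every iteration re-labels the entire initial
# unlabeled pool instead of only the still-undefined samples.
import torch
import torch.nn.functional as F

def issl_spr(model, labeled_x, labeled_y, unlabeled_x, retrain, threshold=0.99):
    while True:
        with torch.no_grad():
            probs = F.softmax(model(unlabeled_x), dim=1)
        conf, pseudo = probs.max(dim=1)
        confident = conf >= threshold
        # train on the original data plus ALL currently confident pseudo-labels;
        # unlabeled_x itself is never shrunk (line 18 of Algorithm 1 is dropped)
        model = retrain(model,
                        torch.cat([labeled_x, unlabeled_x[confident]]),
                        torch.cat([labeled_y, pseudo[confident]]))
        if confident.all():      # stop once no sample remains undefined
            return model
```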

    5 Dataset and Experiment Setup

    5.1 Dataset

To evaluate our proposed algorithms ISSL-SP and ISSL-SPR, we mainly used two datasets: CIFAR-10 [31] and the Street View House Numbers (SVHN) [32]. The two datasets include images and the corresponding class labels. In addition, both have additional unlabeled data with similar distributions: 80 Million Tiny Images [33] includes the unlabeled images for CIFAR-10, and extra SVHN [32] includes the unlabeled images for SVHN. Tab. 1 summarizes the four datasets: CIFAR-10, 80 Million Tiny Images, SVHN and extra SVHN. More specifically, for training, 80 Million Tiny Images includes 500,000 unlabeled images while CIFAR-10 includes 50,000 labeled images. Extra SVHN includes 531,131 unlabeled images while SVHN includes 73,257 labeled images.

Table 1: Summary of the four datasets: CIFAR-10, 80 Million Tiny Images, SVHN and extra SVHN. 80 Million Tiny Images are the unlabeled images for CIFAR-10; extra SVHN images are the unlabeled images for SVHN

    5.2 Experimental Setup

In this study, we conducted experiments on artificially created long-tailed data distributions from CIFAR-10 and SVHN. Tab. 2 summarizes the training data randomly drawn from CIFAR-10, 80 Million Tiny Images, SVHN and extra SVHN. The class imbalance ratio was defined as the number of samples in the most frequent class divided by that in the least frequent class [29-31].

Table 2: Summary of training data randomly drawn from CIFAR-10, 80 Million Tiny Images, SVHN, extra SVHN and CINIC-10. For the unlabeled data Data_un, we considered two scenarios with different imbalance ratios

For CIFAR-10 and SVHN, we randomly drew samples to obtain an imbalance ratio of 50; this data is denoted by Data_ori. For the unlabeled data Data_un, we considered two scenarios with different imbalance ratios. In Scenario 1, we assumed that the unlabeled data was balanced, with an imbalance ratio of 1. In Scenario 2, we assumed that the unlabeled data was imbalanced, with an imbalance ratio of 50. For both scenarios, we approximately balanced the numbers of labeled and unlabeled data: 13,996 Data_ori and 13,990 Data_un samples from CIFAR-10 and 80 Million Tiny Images, and 2,795 Data_ori and 2,790 Data_un samples from SVHN and extra SVHN. Finally, we evaluated each of the trained models on an isolated and balanced testing dataset [30,31,34,35]. A sketch of drawing such a long-tailed subset is given below.
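As referenced above, a small sketch of drawing a long-tailed subset with a target imbalance ratio; the exponential class-size profile is a common convention for long-tailed CIFAR benchmarks and is assumed here rather than taken from the paper.

```python
# Per-class sample counts for a long-tailed subset with a target imbalance
# ratio (most frequent class / least frequent class); the exponential
# profile is an assumed convention.
def long_tailed_counts(n_max: int, num_classes: int, ratio: float) -> list[int]:
    # class c keeps n_max * ratio**(-c / (num_classes - 1)) samples
    return [int(n_max * ratio ** (-c / (num_classes - 1))) for c in range(num_classes)]

print(long_tailed_counts(n_max=5000, num_classes=10, ratio=50))
# first class keeps 5000 samples, last class keeps 100 -> imbalance ratio 50
```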

We implemented and trained the models using PyTorch. For all experiments, we used the stochastic gradient descent (SGD) optimizer with a batch size of 256 and binary cross-entropy as the cost function. All experiments were performed on an NVIDIA GeForce GTX 1080 Ti GPU. A minimal sketch of this configuration follows.
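A minimal sketch of the training configuration described above; the model, learning rate and momentum are placeholders, since the paper does not specify them.

```python
# Sketch of the stated setup: SGD, batch size 256, binary cross-entropy on
# one-hot targets; the model, lr and momentum are illustrative placeholders.
import torch

model = torch.nn.Linear(3072, 10)  # placeholder classifier for 32x32x3 inputs
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = torch.nn.BCEWithLogitsLoss()  # binary cross-entropy on one-hot labels

# dummy data standing in for an image dataset
x = torch.randn(1024, 3072)
y = torch.eye(10)[torch.randint(0, 10, (1024,))]  # one-hot targets
loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x, y), batch_size=256, shuffle=True)

for xb, yb in loader:                 # one epoch over the dummy data
    optimizer.zero_grad()
    loss = criterion(model(xb), yb)
    loss.backward()
    optimizer.step()
```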

    5.3 Evaluation Metrics

To analyze the performance, the labeling percentage was defined as the number of labeled samples among Data_un divided by the total number of samples in Data_un:

Labeling percentage = n(labeled Data_un) / n(Data_un) × 100%.

To evaluate the performance, we used sensitivity (recall), specificity, precision, accuracy, balanced accuracy (BA) and F1 score, defined as

Sensitivity = TP / (TP + FN),
Specificity = TN / (TN + FP),
Precision = TP / (TP + FP),
Accuracy = (TP + TN) / (TP + TN + FP + FN),
BA = (Sensitivity + Specificity) / 2,
F1 = 2 · Precision · Sensitivity / (Precision + Sensitivity),

where TP, TN, FP and FN represent the numbers of true positives, true negatives, false positives and false negatives, respectively. In addition, we also used the top-1 error metric.
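These definitions translate directly into code; the following self-contained sketch computes all six metrics from the four counts.

```python
# Per-class metrics from TP/TN/FP/FN counts, matching the definitions above.
def metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    sensitivity = tp / (tp + fn)                 # recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    balanced_accuracy = (sensitivity + specificity) / 2
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "accuracy": accuracy,
            "BA": balanced_accuracy, "F1": f1}

print(metrics(tp=90, tn=85, fp=15, fn=10))
```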

    6 Results

    6.1 With Balanced Unlabeled Data:Scenario 1

Tab. 3 summarizes the results when the unlabeled data is balanced. It shows sensitivity, specificity, accuracy, BA, F1 score and top-1 error. Note that since the testing dataset is balanced, the F1 score can be read as both the macro average and the weighted average. For the CIFAR-10 dataset, if only Data_ori is used for training as a baseline, the top-1 error is 28.76%. If Data_un is additionally used for training without iteration [30], the top-1 error is 24.93%, a slight decrease. On the other hand, if Data_un is ideally given with 100% labeling accuracy and additionally used for training, the top-1 error drops significantly to 8.83%, which can be considered the lower bound. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 14.92% and 10.79%, respectively, which are much lower than that of the method in [30] and very close to the lower bound. Similarly, for the SVHN dataset, with Data_ori only, the top-1 error is 28.10%. If Data_un is additionally used for training without iteration [30], the top-1 error decreases to 25.73%. Under the ideal condition where Data_un has 100% labeling accuracy, the top-1 error is 9.17%, the lower bound. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 14.87% and 11.09%, respectively, which are also much lower than that of the method in [30] and very close to the lower bound. More detailed results are presented in Supplementary Tabs. 1 and 2. In addition, the results show that ISSL-SPR provides slightly higher accuracy than ISSL-SP, indicating that the updated classifier benefits from re-labeling the entire initial unlabeled dataset.

Table 3: With balanced unlabeled data from the CIFAR-10 and SVHN datasets

Fig. 2 plots the labeled percentages and top-1 errors of ISSL-SP and ISSL-SPR at each iteration. It shows that the labeled percentage increases and the top-1 error decreases as the labeling process is repeated. The same trend across iterations can be observed for both ISSL-SP and ISSL-SPR.

Figure 2: (Scenario 1: with balanced unlabeled data) Labeled percentages and top-1 errors of ISSL-SP and ISSL-SPR at each iteration

6.2 With Imbalanced Unlabeled Data: Scenario 2

Tab. 4 summarizes the results when the unlabeled data is imbalanced. It shows sensitivity, specificity, accuracy, BA, F1 score and top-1 error. For the CIFAR-10 dataset, with Data_ori only, the top-1 error is 28.76%. If Data_un is additionally used for training without iteration [30], the top-1 error decreases to 25.85%. As the lower bound, if Data_un is ideally given with 100% labeling accuracy and additionally used for training, the top-1 error is 11.62%. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 18.58% and 14.87%, respectively, which are much lower than that of the method in [30] and very close to the lower bound. Similarly, for the SVHN dataset, with Data_ori only, the top-1 error is 28.10%. If Data_un is additionally used for training without iteration [30], the top-1 error decreases to 25.25%. Under the ideal condition where Data_un has 100% labeling accuracy, the top-1 error is 11.47%, the lower bound. Our proposed algorithms ISSL-SP and ISSL-SPR provide top-1 errors of 14.14% and 13.62%, respectively, which are also much lower than that of the method in [30] and very close to the lower bound. More detailed results are presented in Supplementary Tabs. 3 and 4. In addition, as in Scenario 1, the results show that ISSL-SPR provides slightly higher accuracy than ISSL-SP, indicating that the updated classifier benefits from re-labeling the entire initial unlabeled dataset.

Table 4: With imbalanced unlabeled data from the CIFAR-10 and SVHN datasets

Fig. 3 plots the labeled percentages and top-1 errors of ISSL-SP and ISSL-SPR at each iteration. It also shows that the labeled percentage increases and the top-1 error decreases as the labeling process is repeated.

Figure 3: (Scenario 2: with imbalanced unlabeled data) Labeled percentages and top-1 errors of ISSL-SP and ISSL-SPR at each iteration

    6.3 Effect of Softmax Threshold Values

To investigate the effect of the softmax threshold value, we varied the threshold from 0.5 to 0.999: in increments of 0.01 from 0.5 to 0.9, and in increments of 0.001 from 0.9 to 0.999. Fig. 4 shows the F1 score, balanced accuracy and top-1 error as functions of the softmax threshold value. The results show that a threshold value of 0.99 provides the best accuracy values. Throughout this study, we have therefore used a softmax threshold value of 0.99. A sketch of this sweep is given below.
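The sweep can be sketched as follows; `evaluate` is a hypothetical routine returning (F1, BA, top-1 error) for a model trained and assessed with a given threshold.

```python
# Threshold sweep over the grid described above: 0.01 steps from 0.50 to
# 0.89, then 0.001 steps from 0.90 to 0.999. `evaluate` is a hypothetical
# routine returning (f1, ba, top1_error) for a given threshold.
import numpy as np

thresholds = np.concatenate([np.arange(0.50, 0.90, 0.01),
                             np.arange(0.90, 1.00, 0.001)])

def sweep(evaluate):
    results = {float(t): evaluate(float(t)) for t in thresholds}
    best = min(results, key=lambda t: results[t][2])   # lowest top-1 error wins
    return best, results

# In the paper's experiments, this kind of sweep selects a threshold of 0.99.
```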

Figure 4: F1 score, balanced accuracy and top-1 error according to the softmax threshold value

    7 Conclusion and Discussion

In this study, we propose new semi-supervised learning algorithms that iteratively correct the labeling of extra unlabeled data based on softmax probabilities. We first train a base classifier using the original labeled data, and then evaluate the unlabeled data using softmax probabilities. For each unlabeled sample, if the maximum value of the softmax probabilities is equal to or greater than 0.99, we assign the corresponding class to the sample. Every iteration, we update the classifier using all available labeled data for training. Regarding the labeling, ISSL-SP considers only the remaining unlabeled data, while ISSL-SPR considers the entire initial unlabeled dataset. To validate the proposed algorithms, we tested two scenarios: with a balanced unlabeled dataset and with an imbalanced unlabeled dataset. The results show that the two proposed algorithms, ISSL-SP and ISSL-SPR, provide accuracy approaching that of supervised learning, where the unlabeled data is given with 100% labeling accuracy.

Comparing the performance of the two algorithms, ISSL-SPR outperforms ISSL-SP regardless of the dataset and the imbalance ratio of the unlabeled data. The results indicate that the updated classifier benefits from re-labeling the entire initial unlabeled dataset. Furthermore, ISSL-SPR outperforms previous state-of-the-art methods. In future work, we plan to validate the algorithms' efficacy on more extensive datasets. In addition, we need to investigate an optimal strategy to reduce the lengthy training time caused by the iterative process.

    Supplementary Table 1: Results from Scenario 1 with CIFAR-10


    Supplementary Table 2: Results from Scenario 1 with SVHN


    Supplementary Table 3: Results from Scenario 2 with CIFAR-10


    Supplementary Table 4: Results from Scenario 2 with SVHN


Funding Statement: This work was supported by the National Research Foundation of Korea (No. 2020R1A2C1014829) and by the Korea Medical Device Development Fund grant funded by the Government of the Republic of Korea (the Ministry of Science and ICT; the Ministry of Trade, Industry and Energy; the Ministry of Health and Welfare; and the Ministry of Food and Drug Safety) (grant KMDF_PR_20200901_0095).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
