
Correspondence: Uncertainty-aware complementary label queries for active learning


Shengyuan LIU, Ke CHEN, Tianlei HU, Yunqing MAO

1 Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University, Hangzhou 310027, China

2 State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou 310027, China

3 City Cloud Technology (China) Co., Ltd., Hangzhou 310000, China

Many active learning methods assume that a learner can simply ask for the full annotations of some training data from annotators. These methods mainly try to cut annotation costs by minimizing the number of annotation actions. Unfortunately, annotating instances exactly in many real-world classification tasks is still expensive. To reduce the cost of a single annotation action, we tackle a novel active learning setting, named active learning with complementary labels (ALCL). ALCL learners ask only yes/no questions about some classes. After receiving answers from annotators, ALCL learners obtain a few supervised instances and many more training instances with complementary labels, which specify only one of the classes to which the pattern does not belong. There are two challenging issues in ALCL: one is how to sample instances to be queried, and the other is how to learn from these complementary labels and ordinary accurate labels. For the first issue, we propose an uncertainty-based sampling strategy under this novel setup. For the second issue, we upgrade a previous ALCL method to fit our sampling strategy. Experimental results on various datasets demonstrate the superiority of our approaches.

    1 Introduction

Traditional multi-class classification methods typically necessitate fully annotated data, a process that can be time-consuming and costly. To mitigate the extensive resources spent on annotation, researchers have explored various ways to learn from weakly supervised annotations (Zhou, 2018): active learning (AL) (Sinha et al., 2019; Yoo and Kweon, 2019), semi-supervised learning (Zhang T and Zhou, 2018), partial label learning (Wang et al., 2019), and others. Among these, AL assumes that different samples in the same dataset have different values for updating the current model, and tries to select the samples with the highest value from the training set. In this paper, we explore AL together with another weakly supervised learning framework named complementary-label learning (Ishida et al., 2017, 2019), as shown in Fig. 1.

Fig. 1 Example of an image with full annotation (a) and a complementary label (b). In the complementary-label setting, we know only that this image does not contain a wolf

Many classic AL methods assume that the ground-truth labels of unlabeled samples can be obtained from an oracle (Ren et al., 2021). The primary objective of these methods is to indirectly cut annotation costs by minimizing the number of required label queries (Settles, 2011). However, in several practical scenarios, such as multi-class classification tasks, acquiring full annotations for patterns from an oracle remains costly. Consequently, a question arises: is it feasible to directly decrease the total annotation cost by reducing the expense associated with a single annotation action?

To tackle this problem, a few active learning approaches have provided practicable ways to reduce annotation costs. For example, ALPF (Hu et al., 2019) learners query semantic information and acquire partial feedback from annotators. Nonetheless, ALPF needs additional prior knowledge to generate semantic questions, and post-pruning exists in its sampling process. WEAKAL (Gonsior et al., 2020) uses typical AL techniques and treats the outputs of a network as additional weak labels. Nevertheless, when instances are difficult to annotate, this algorithm is error-prone due to the low quality of the weak labels. Zhang CC and Chaudhuri (2015) used both strong and weak labelers to save a limited budget. Duo-Lab (Younesian et al., 2020) uses a similar method in online active learning. However, these two algorithms still need strong labelers. Inspired by ALPF, we aim to obtain weak annotations only from weak labelers, without extra prior knowledge. Thus, we combine AL with complementary-label learning.

In summary, we tackle a novel setting, ALCL (Liu et al., 2023), as shown in Fig. 2. There are two challenging issues in ALCL: the first is how to learn from complementary and ordinary labels, and the second is how to sample instances to be queried (Liu et al., 2023). To solve the first issue, we upgrade the method named weight redistribution based on the balance of category contribution (WEBB) from Liu et al. (2023), which treats candidate and complementary labels differently. This approach incorporates instance-wise reweighting into the loss function to account for candidate labels, thereby emphasizing instances with a reduced number of candidate labels. For the second issue, we design a new sampling strategy named uncertainty for sampling and deep learning (USD), built on the strengths of ALCL and uncertainty in deep learning. Comprehensive experimental results illustrate that the revised WEBB method outperforms state-of-the-art complementary-label learning algorithms. Furthermore, our USD sampling strategy indeed improves upon existing sampling strategies in ALCL.

Fig. 2 Workflow for an ALCL learner (ALCL: active learning with complementary labels)

    2 Preliminaries

Our work expands upon the AL framework (Settles, 2009), complementary-label learning (Ishida et al., 2017), and ALCL (Liu et al., 2023), and our proposed USD sampling strategy is based on uncertainty in deep learning. In this section, we briefly review uncertainty in deep learning.

There are two main kinds of uncertainty for deep neural networks: epistemic uncertainty and aleatoric uncertainty. Epistemic uncertainty accounts for uncertainty in the model parameters and can be reduced given enough data, so it is often referred to as model uncertainty. Aleatoric uncertainty can be further categorized into homoscedastic and heteroscedastic uncertainty. Homoscedastic uncertainty remains constant across different inputs, whereas heteroscedastic uncertainty depends on the input samples of the model. For example, in image recognition, some inputs can be more challenging to recognize. A model usually has high heteroscedastic uncertainty on inputs with low confidence.

Recently, many researchers have investigated how to estimate and use uncertainty in deep learning (Blundell et al., 2015; Gal and Ghahramani, 2016). With these techniques, uncertainty-based methods have been introduced into multi-view learning (Geng et al., 2021), few-shot classification (Zhang ZZ et al., 2020), and multi-task learning (Cipolla et al., 2018). Geng et al. (2021) assumed that inputs are sampled from different Gaussian distributions, and that the variance of these Gaussian distributions reflects the uncertainty of the inputs. Zhang ZZ et al. (2020) characterized uncertainty by the variance between inputs and prototypes. Cipolla et al. (2018) used the Boltzmann distribution to build the relationship between inputs and predictions, and conditional probability to handle different tasks. Arnab et al. (2020) introduced uncertainty to weigh samples with weak annotations in action recognition. Inspired by these studies, our method uses uncertainty in deep learning to guide the queries of active learning. In other words, uncertainty in our approach serves a dual purpose: deciding which samples to query and guiding how to learn from samples.

    3 The proposed framework

In this section, we first introduce two reasonable sampling strategies as baselines. Then, we present our USD strategy. Finally, we upgrade the previous ALCL method, WEBB.

    3.1 Baselines

In the ALCL problem, the number of complementary labels should be considered in sampling strategies. We use a weighting mechanism, Eq. (1), that accounts for this particularity, where $p(m|x_i)$ represents the conventional confidence level associated with instance $x_i$ for label $m$, and $\bar{Y}_i$ is the complementary label set of $x_i$. The hyperparameter $\alpha$ is a preference factor; when $\alpha > 0$, it indicates a preference for selecting instances with a smaller number of complementary labels.

In our previous study (Liu et al., 2023), we applied this weighting scheme to two conventional uncertainty-based sampling strategies: least confidence (LC) (Culotta and McCallum, 2005) and margin sampling (MS) (Scheffer et al., 2001). From these, two baselines were established: weighted least confidence (WLC) and weighted margin sampling (WMS) (Liu et al., 2023).
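As a rough illustration, the sketch below scores instances with least confidence and reweights the score so that $\alpha > 0$ favors instances with fewer complementary labels. The reweighting factor is a hypothetical stand-in, not the paper's exact Eq. (1); `probs` and `comp_counts` are assumed inputs.

```python
import torch

def wlc_scores(probs: torch.Tensor, comp_counts: torch.Tensor,
               alpha: float = 0.1) -> torch.Tensor:
    """Return one sampling score per instance; higher means query first.

    probs: (n, K) softmax confidences p(m | x_i).
    comp_counts: (n,) number of complementary labels collected so far.
    """
    K = probs.shape[1]
    least_conf = 1.0 - probs.max(dim=1).values           # classic LC score
    weight = (1.0 - comp_counts.float() / K) ** alpha    # hypothetical factor
    return weight * least_conf

# Usage: query the 64 highest-scoring instances.
probs = torch.softmax(torch.randn(1000, 10), dim=1)
comp_counts = torch.randint(0, 9, (1000,))
query_idx = wlc_scores(probs, comp_counts).topk(64).indices
```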

    3.2 Uncertainty for sampling and deep learning

In AL, many uncertainty-based methods use various types of data uncertainty to decide which samples to query. Although they indeed obtain valuable samples to query, they usually ignore the possible use of uncertainty in deep learning. Inspired by these methods, we use one kind of uncertainty both to decide which samples to query and to guide how to learn from samples.

Our primary purpose is for the network to learn from samples with low uncertainty, or to output high uncertainty and thereby avoid being penalized heavily on noisy samples where the information is insufficient. The samples with high uncertainty will be queried later. Based on the original loss, we define our uncertainty loss as

$$L_u(x,y) = \frac{1}{\sigma^2(x)}\,L(x,y) + \log\sigma^2(x),$$

where $L(x,y)$ is the original loss function and $\sigma^2(x)$ is the uncertainty of $x$. When $L(x,y)$ is the cross-entropy loss, this corresponds to assuming a Boltzmann distribution on the output of the network with a temperature of $\sigma^2$, and approximately minimizing its log-likelihood (Cipolla et al., 2018). For $\sigma$, we use an uncertainty prediction module to additionally predict $\sigma^2(x)$ for each sample. The number of complementary labels influences this uncertainty.

In practice, we use the uncertainty prediction module to directly predict $v := \log\sigma^2$ and obtain $1/\sigma^2 = \exp(-v)$ in the following experiments, because if we predicted $\sigma$ or $\sigma^2$ directly, division by zero could occur when computing $1/\sigma^2$.
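A minimal PyTorch sketch of this loss, assuming a hypothetical auxiliary head that predicts $v = \log\sigma^2(x)$ from intermediate features; only the loss form $\exp(-v)\,L(x,y) + v$ is taken from the text, and all module names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyHead(nn.Module):
    """Hypothetical module that predicts v = log sigma^2 per sample."""
    def __init__(self, feat_dim: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.fc(feats).squeeze(-1)  # shape (batch,)

def uncertainty_loss(logits, targets, v):
    # Per-sample cross entropy, attenuated by exp(-v); predicting v = log
    # sigma^2 keeps 1/sigma^2 = exp(-v) finite, avoiding division by zero.
    # The +v term penalizes claiming high uncertainty on every sample.
    ce = F.cross_entropy(logits, targets, reduction="none")
    return (torch.exp(-v) * ce + v).mean()
```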

    3.3 WEBB with USD

From Section 3.2, we know that different inputs have different uncertainties in USD, so we cannot obtain the loss $L_u$ from the candidate label classification loss in WEBB (Liu et al., 2023). When we use USD as the sampling strategy, the candidate label classification loss is rewritten in terms of per-instance, per-label weights, where $w_{ik}$ is the weight of instance $i$ with label $k$, $n$ is the size of the dataset, and $K$ is the number of labels.

Then we can calculate the final training loss by combining this with the complementary-label classification loss.
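A hedged sketch of a weighted candidate-label loss of the general form $L_c = -\frac{1}{n}\sum_{i=1}^{n}\sum_{k=1}^{K} w_{ik}\log p(k\,|\,x_i)$, matching the symbols above; how WEBB actually computes the weights $w_{ik}$ is not reproduced here.

```python
import torch
import torch.nn.functional as F

def weighted_candidate_loss(logits: torch.Tensor,
                            weights: torch.Tensor) -> torch.Tensor:
    """logits: (n, K) network outputs; weights: (n, K) per-instance,
    per-label weights w_ik (zero for labels ruled out by complementary
    labels under a WEBB-style scheme)."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(weights * log_probs).sum(dim=1).mean()
```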

    4 Experiments

    4.1 Experimental setup

1. Datasets and optimization. We use four widely used benchmark datasets: MNIST (LeCun et al., 1998), Fashion-MNIST (Xiao et al., 2017), Kuzushiji-MNIST (Clanuwat et al., 2018), and CIFAR-10 (Krizhevsky and Hinton, 2009).

In subsequent experiments, the allocated query limit for each dataset is 20 000. For MNIST, Fashion-MNIST, and Kuzushiji-MNIST, a multilayer perceptron (MLP) model (d-500-k) is trained as the primary framework, while ResNet-18 (He et al., 2016) is used for CIFAR-10.

The optimization process is conducted with Adam (Kingma and Ba, 2015), applying a weight decay of $1\times10^{-4}$ on the weight parameters. Learning rates for each dataset are evaluated from $\{1\times10^{-5}, 5\times10^{-5}, 1\times10^{-4}, 5\times10^{-4}, 1\times10^{-3}\}$ and halved every 50 epochs. The learning rates are selected according to performance on a validation subset extracted from the training dataset.
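A sketch of this optimization setup in PyTorch; the model is a stand-in and the initial learning rate shown is just one of the candidate values.

```python
import torch

model = torch.nn.Linear(784, 10)  # stand-in for the MLP / ResNet-18
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, weight_decay=1e-4)
# Halve the learning rate every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.5)

for epoch in range(200):
    # ... one training epoch over the currently queried data ...
    scheduler.step()
```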

2. Baselines and the proposed method. The efficacy of the updated WEBB is evaluated against two complementary-label learning algorithms, LOG and EXP (Feng et al., 2020). Each of them is parameterized using experimental results acquired on a validation dataset.

For the sampling strategies, we assess the performance of our USD approach against random sampling, LC (Culotta and McCallum, 2005), MS (Scheffer et al., 2001), WLC, WMS, sequence entropy (SE) (Settles and Craven, 2008), and the learning loss strategy (Yoo and Kweon, 2019). In the SE approach, the candidate labels are considered collectively for entropy calculation. The learning loss strategy introduces an additional auxiliary module, named the loss prediction module, to predict the loss of the target model; this auxiliary module is employed to identify challenging instances for the target model. In the following experiments, the parameter $\alpha$ is set to 0.1 for both WLC and WMS. The hidden dimension of the loss prediction module in learning loss is kept at 128, as recommended.
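For illustration, a hedged sketch of an SE-style score that computes entropy collectively over the remaining candidate labels; renormalizing the confidences over the candidates is an assumption of this sketch.

```python
import torch

def se_scores(probs: torch.Tensor, cand_mask: torch.Tensor) -> torch.Tensor:
    """probs: (n, K) softmax confidences; cand_mask: (n, K) with 1 for
    candidate labels and 0 for labels ruled out by complementary labels."""
    p = probs * cand_mask                                 # candidates only
    p = p / p.sum(dim=1, keepdim=True).clamp_min(1e-12)   # renormalize
    return -(p * p.clamp_min(1e-12).log()).sum(dim=1)     # higher = query first
```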

    4.2 Comparison among complementary-label learning approaches

We record the mean and standard deviation of the test accuracy after every training epoch. Fig. 3 depicts the mean and standard deviation of the test accuracy for LOG, EXP, and our revised WEBB during the last 250 training epochs.

Fig. 3 Experimental results across diverse datasets and models with varied sampling strategies: (a) CIFAR-10; (b) MNIST; (c) Fashion-MNIST; (d) Kuzushiji-MNIST. Dark colors represent the average accuracy over the five trials, while light colors denote the corresponding standard deviation

Fig. 3 shows that the performance of EXP is inconsistent due to its nearly equal treatment of instances with varying numbers of complementary labels, as mentioned previously. Equipped with each sampling strategy, the refined WEBB and LOG work well in ALCL, and the refined WEBB outperforms all other methods on various datasets.

    4.3 Comparison among active sampling strategies

Fig. 4 presents the mean and standard deviation of the accuracy enhancement over the random sampling baseline (based on five trials) against the number of queries on CIFAR-10. In the random sampling baseline, the model is trained to convergence with data from previous queries, and the data are sampled randomly. The improvements of the other methods over this baseline reflect the quality of sampling under the same number of queries.

Fig. 4 Mean accuracy enhancements with standard deviation (denoted by shading) of active learning methods compared to the random sampling baseline, against the number of queries on CIFAR-10. Dark colors represent the mean accuracy over the five trials, while light colors indicate the standard deviation. The loss function used in these trials is from WEBB

Fig. 4 shows that USD outperforms the other sampling strategies substantially, especially in the early stages, because the uncertainty in USD does not depend significantly on the accuracy of the current network. Specifically, USD uses an uncertainty prediction module to guide the sampling of instances. Training the uncertainty prediction module is easier than training the original network, so USD obtains more valuable instances in the early stages.

    5 Conclusions

In this paper, we tackle the problem of ALCL (Liu et al., 2023). The objective of ALCL is to directly reduce the cost of annotation actions in AL while providing a feasible approach for obtaining complementary labels. To solve ALCL, we design a sampling strategy, USD, which uses uncertainty in deep learning to guide the queries of active learning in this novel setup. Moreover, we upgrade the WEBB method to suit this sampling strategy. Comprehensive experimental results validate the performance of our proposed approaches. In the future, we plan to investigate the applicability of our approaches to large-scale datasets and to account for noise in annotator feedback.

    Contributors

Shengyuan LIU designed the research, processed the data, and drafted the paper. Yunqing MAO helped organize the paper. Ke CHEN and Tianlei HU revised and finalized the paper.

    Compliance with ethics guidelines

Shengyuan LIU, Ke CHEN, Tianlei HU, and Yunqing MAO declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are available from the corresponding author upon reasonable request.
