
    Label distribution expression recognition algorithm based on asymptotic truth value

2021-09-15

    HUANG Hao,GE Hongwei

(1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; 2. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Wuxi 214122, China)

Abstract: Ambiguous expression is a common phenomenon in facial expression recognition (FER). Because of the existence of ambiguous expressions, the effect of FER is severely limited. The reason may be that a single label cannot effectively describe the complex emotional intentions that are vital in FER. Label distribution learning contains more information and is a possible way to solve this problem. To apply label distribution learning to FER, a label distribution expression recognition algorithm based on asymptotic truth value is proposed. Under the premise of not incorporating extraneous quantitative information, the original information of the database is fully used to complete the generation and utilization of label distributions. Firstly, in the training stage, single-label learning is used to collect the mean value of the overall distribution of the data. Then, the true value of the data label is approached gradually at the granularity of data batches. Finally, the whole network model is retrained using the generated label distribution data. Experimental results show that this method can improve the accuracy of the network model obviously, and that it is competitive with advanced algorithms.

Key words: facial expression recognition (FER); label distribution learning; label smoothing; ambiguous expression

    0 Introduction

As one of the most important and accessible emotional expressions of human beings, facial expressions have been extensively studied in psychology. The facial expression recognition (FER) system in the field of computer vision mainly focuses on automatic FER. Its major task is to recognize the facial expressions in pictures or picture sequences (including videos). It is believed that this technology is of great significance to efficient human-computer interaction in the future world [1], as well as to the field of fatigue driving detection [2] and the treatment of mental diseases [3]. Li S et al. [4] concluded that there are two difficulties in deep expression recognition at this stage: network overfitting due to the lack of effective data, and difficulty in feature extraction due to a large amount of redundant information. A large number of recent studies have shown that it is not appropriate to simply regard deep expression recognition as an application of neural networks. Even with high-standard valid data and advanced network models, the accuracy of deep expression recognition is still limited by the characteristics of the expression itself. Especially in the past two years, the phenomenon of ambiguous expressions has received more and more attention from researchers [5-7].

What we study is the FER of static pictures. Usually, a picture corresponds to only one label. In fact, a picture may contain complex emotional intentions. From this perspective, the labels for static expression recognition are not accurate enough. To illustrate this problem intuitively, we use real data from the FER2013 database as an example, as shown in Fig.1.

Fig.1 Pictures from FER2013

Taking the above figures as an example, we give a brief description of what the network learns under one-hot labels, multi-labels and label distributions. These images are taken from the training set of the FER2013 database, and their labels in FER+ [8] are shown in Table 1.

    Table 1 Corresponding label of Fig.1

a1, a2, and a3 will be classified as Angry, Neutral, and Unknown, respectively, when label conversion is carried out by majority voting. As single-label data, a1, a2 and a3 lose the information of 2, 4 and 10 annotators' labels, respectively. For figure a1, the one-hot label in essence claims [0,0,0,0,10,0,0,0,0,0]. Obviously, if each annotator has the same weight, this classification is not quite fair. When figure a3 is converted into multiple labels, the form of the data is changed to [1,0,1,0,0,0,0,0,0], and the essence of what the network learns is [5,0,5,0,0,0,0,0], which also loses part of the information. The classifications of the ten annotators are partly accidental, and both are acceptable, but in practical applications it is not wise to ignore 20% or 10% of emotional preferences.
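
To make the information loss concrete, the following sketch contrasts the three label forms. The vote vector for a1 is a hypothetical example consistent with the description above (8 of 10 annotators choosing Angry), using the FER+ category order given in Section 2.1.1; the function names are ours.

```python
import numpy as np

# Hypothetical 10-annotator vote vector for a1, consistent with the text:
# 8 votes for Angry (index 4 in FER+ order), 2 votes "lost" to other classes.
votes_a1 = np.array([1, 0, 0, 1, 8, 0, 0, 0, 0, 0], dtype=float)

def to_one_hot(votes):
    """Majority voting: all mass collapses onto the plurality class,
    in essence claiming [0,0,0,0,10,0,0,0,0,0] for a1."""
    label = np.zeros_like(votes)
    label[np.argmax(votes)] = 1.0
    return label

def to_multi_label(votes, thresh=0.2):
    """Multi-label: mark every class above a vote-share threshold;
    the classes are kept, but their relative weights are lost."""
    return (votes / votes.sum() >= thresh).astype(float)

def to_distribution(votes):
    """Label distribution: normalized votes keep every annotator's preference."""
    return votes / votes.sum()

print(to_one_hot(votes_a1))       # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
print(to_distribution(votes_a1))  # [0.1 0.  0.  0.1 0.8 0.  0.  0.  0.  0. ]
```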

Label distribution learning is very promising for solving this problem. Label distribution learning (LDL) was clearly proposed by Geng et al. [9]. He believes that although multi-label learning can solve the problem of label ambiguity to a certain extent, for many practical problems the overall label distribution is more important. Subsequently, Gao et al. [10] proposed the deep label distribution learning (DLDL) model, which applies label distribution learning to tasks such as age prediction and head pose estimation. In the direction of FER, similar work is more influential. In addition to the FER+ database relabeling work used in the previous example, there is also the work of Chen et al. [5]. They proposed the label distribution learning on auxiliary label space graphs (LDL-ALSG) framework, which uses an approximate K-nearest-neighbor algorithm to calculate the distribution of the current data in the auxiliary task solution space of similar data through auxiliary tasks similar to FER, and then uses the auxiliary network as a judgment. Barsoum et al. [8] used crowdsourcing to convert single-label data into label distribution data, which improves the recognition effect of deep neural networks. However, this method requires multiple people to relabel the data, which costs a lot of manpower and money. The method of Chen et al. has the disadvantage that the amount of calculation during training is extremely large, and its effect depends on a good auxiliary task model.

As far as we know, the IPA2LT framework proposed by Zeng et al. [11] is the first attempt to solve the problem of ambiguous labels. Their work is based on the recognition that facial expression data have a potential true label. That is, for a certain face picture, the expression label contained therein has a certain truth value (they regard ambiguous expressions as inconsistent annotations of expression labels). Inspired by this idea, our work is based on the following idea: for certain facial expression data, there is a potential true emotional distribution. In single-label data, the label is a high-level generalization of this distribution. We use an approximation method to find the potential real expression label distribution of the data. The significant difference from the former work is that we regard the label distribution of expressions as the more appropriate way to describe them. Therefore, this study acknowledges the authenticity and representativeness of the annotators' labels.

    1 Proposed method

1.1 Overall framework

The latent truth is a classical assumption. In essence, assuming that the data can obtain a perfect label distribution is the same as assuming that the loss function of the network can be minimized, which is an ideal situation. Although the loss function of a neural network cannot be minimized in most cases, the characterization of the loss function makes it possible to optimize the network model step by step. In this process, the minimum value never actually appears, but the expectation that the loss function approaches 0 actually improves the network model. The algorithm presented in this paper tries to do the same kind of approximation work: although the true value of the data label distribution is not known, it can be approached through a better expectation.

The overall framework of the algorithm in this paper is shown in Fig.2. The training process is as follows: first, the training data are sent to the network model, which learns from the single labels of the data; then, the softmax outputs of the training data during the last training pass are collected as the initial values L_T of the true label distribution, while the class-average distribution L_M of each class of data is collected at the same time. The label distribution update process is as follows: consider the original label L_O of the data together with the L_T and L_M obtained during training, and update the label distribution of the training data. In the testing process, predictions are made on the test set of the database with the network retrained on the label distribution data.

    Fig.2 Overall framework of proposed method
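
As a concrete reading of Fig.2, the sketch below shows how the initial L_T and the class means L_M might be collected after single-label training, together with a soft-target cross-entropy loss for the retraining stage. The function names and the index-yielding data loader are our assumptions; the paper does not spell out the retraining loss.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def collect_LT_and_LM(model, loader, num_classes):
    """After single-label training, record each sample's softmax output as the
    initial L_T, and accumulate the class-average distribution L_M per class.
    `loader` is assumed to yield (image, label, sample_index) triples."""
    model.eval()
    L_T = {}
    class_sum = torch.zeros(num_classes, num_classes)
    class_cnt = torch.zeros(num_classes)
    for x, y, idx in loader:
        p = F.softmax(model(x), dim=1)
        for i, c, d in zip(idx.tolist(), y.tolist(), p):
            L_T[i] = d                 # per-sample predicted distribution
            class_sum[c] += d
            class_cnt[c] += 1
    L_M = class_sum / class_cnt.unsqueeze(1)   # one mean distribution per class
    return L_T, L_M

def distribution_loss(logits, target_dist):
    """Soft-target cross-entropy for retraining on the generated distributions
    (equivalent to KL divergence up to an entropy constant)."""
    return -(target_dist * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
```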

    1.1.1 Label update strategy

The algorithm adopts the class-average distribution L_M of every class of data in the training set as one of the references for the real label distribution, and 0.5 is taken as the lower limit of the label update strategy according to the principle of majority voting. Specifically, for a data sample in the training set, the network model's prediction falls into one of the following three situations:

1) The classification is correct, and the predicted probability of the correct class is greater than that class's average expectation in L_M, the overall average of the data. In this case, the classification of the data is very accurate, and further improvement can only come from raising the requirement. The label distribution of such data is set to the one-hot label of the data;

2) The classification is correct, but the predicted probability is lower than the class average. This is the ambiguous data label that this study tries to solve. The expectation expressed by the original single label of this data is too optimistic; in fact, the distribution of facial expressions contained in the data is not so clear-cut. The label distribution of the data is adjusted to the value of its class in L_M;

3) The classification is wrong, so we should not expect too much of the expression distribution label of the data; it is enough that the probability of the correct class in the next distribution approaches 0.5. It is unrealistic to require this kind of data to achieve the precise classification implied by the single label. The essence of this process is label smoothing with a larger threshold.

The formulation of the above description is given by Eq.(1). Let $p$ be the predicted distribution of a sample, $y$ its original single label with one-hot form $L_O$, $\hat{y}=\arg\max_j p_j$ the predicted class, $L_M^{(y)}$ the class-mean distribution of class $y$, and $C$ the number of classes; the uniform spreading of the residual mass in the third case follows the label-smoothing interpretation above:

$$
L_{\mathrm{new}}=\begin{cases}
L_O, & \hat{y}=y \ \text{and}\ p_y > \big(L_M^{(y)}\big)_y \\
L_M^{(y)}, & \hat{y}=y \ \text{and}\ p_y \le \big(L_M^{(y)}\big)_y \\
\ell,\quad \ell_y=0.5,\ \ \ell_{j\neq y}=\dfrac{0.5}{C-1}, & \hat{y}\neq y
\end{cases}\tag{1}
$$

    Fig.3 Label update of three situations
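
A minimal sketch of the three-case rule of Eq.(1), under the reconstruction given above (in particular, the uniform spreading of the residual mass in the third case is our assumption):

```python
import torch

def update_label(p, y, L_M):
    """Eq.(1): choose the new target distribution for one sample.

    p:   predicted distribution of the sample (its current L_T entry)
    y:   original single label of the sample (argmax of L_O)
    L_M: class-average distributions, shape (num_classes, num_classes)
    """
    C = p.numel()
    pred = int(torch.argmax(p))
    if pred == y and p[y] > L_M[y, y]:
        # Case 1: correct and above the class average -> keep the one-hot label.
        target = torch.zeros(C)
        target[y] = 1.0
    elif pred == y:
        # Case 2: correct but below the class average -> adopt the class mean.
        target = L_M[y].clone()
    else:
        # Case 3: misclassified -> smooth toward 0.5 on the true class
        # (the majority-voting lower bound), rest spread uniformly.
        target = torch.full((C,), 0.5 / (C - 1))
        target[y] = 0.5
    return target
```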

    1.1.2 Asymptotic bounds

The purpose of the label update is to generate the data label distribution reasonably. There are two key issues in this process. The first is how to ensure the representativeness of the class-average distribution L_M, that is, how to ensure that the model has learned enough knowledge without over-fitting the training data. There is no simple solution to this problem; a suitable point can only be found through a large amount of experiments. The second is when the network model should be used to initialize L_T in the label update process after fixing L_M. Ideally, this problem could be dealt with at the same time as the previous one, that is, the state of the network model after single-label training would be exactly the most reasonable state for the label update. However, in reality the network model does not learn in a single iteration; it learns knowledge from every batch in the training process. L_M considers the average over the entire training set of the database. If the model fixed for L_M is also used for the L_T initialization, it is almost certain that half of the data will fall below the L_M reference standard in the label distribution. Under the premise of authenticity and representativeness, it is obviously inappropriate to regard half of the data as accurately labeled.

In order to solve the above two problems at the same time, when fixing L_M, a certain compromise is made on the classification ability of the network model, and the network model after a certain batch is taken as the reference. In fact, as the number of iterations increases, the growth of the feature extraction ability of the network model slows down, so the loss in L_M caused by such processing is acceptable. When L_T is initialized, the network model after a certain batch is likewise taken as the reference. After multiple batches of training, the feature extraction ability of the network is further improved. If there is an expected threshold for the accuracy of the data, the network model can approach this threshold as closely as possible at the granularity of batches. One practical problem is that in batch processing, if the batch size is too small, the efficiency of updating the entire L_T is too low: the information of a batch is limited, and may not even update L_T at all, yet it is still necessary to traverse all the data, so the training time cost of such processing is undoubtedly huge. The proposed algorithm therefore designs two approximate processings: one updates L_T once after processing K batches, and the other updates the L_T entries of the current batch for each batch. These two approximations give priority to updating the earlier batches each time. However, since the order of the batches is random, averaging over multiple runs can offset this defect. Furthermore, after L_M is fixed, the network model is very close to fitting, and the difference between the L_T updated in an earlier batch and in a later batch is not large.
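
The two approximations amount to two refresh schedules for L_T during retraining. The sketch below is our reading: `targets` holds the evolving L_T, and `update_fn` is the Eq.(1) rule from the previous sketch, e.g. `update_fn = lambda p, y: update_label(p, y, L_M)`.

```python
import torch
import torch.nn.functional as F

def retrain_with_schedule(model, loader, optimizer, targets, update_fn, K=None):
    """One epoch of retraining with the two approximate L_T update schedules.

    targets:   dict sample_index -> current target distribution (the evolving L_T)
    update_fn: Eq.(1) rule, update_fn(pred_dist, label) -> new target distribution
    K=None:    approximate method 2 -- refresh only the current batch, every batch
    K=50:      approximate method 1 -- refresh the entire L_T every K batches
    """
    for step, (x, y, idx) in enumerate(loader):
        logits = model(x)
        t = torch.stack([targets[i] for i in idx.tolist()])
        # Soft-target cross-entropy against the generated label distribution.
        loss = -(t * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        p = F.softmax(logits.detach(), dim=1)
        if K is None:
            # Method 2: cheap per-batch refresh of the samples just seen.
            for i, c, d in zip(idx.tolist(), y.tolist(), p):
                targets[i] = update_fn(d, c)
        elif (step + 1) % K == 0:
            # Method 1: costly full traversal of the training set every K batches.
            with torch.no_grad():
                for xf, yf, idxf in loader:
                    pf = F.softmax(model(xf), dim=1)
                    for i, c, d in zip(idxf.tolist(), yf.tolist(), pf):
                        targets[i] = update_fn(d, c)
```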

    2 Experiments

    2.1 Experimental setup

    2.1.1 Datasets

To evaluate the performance of the algorithm designed for FER in the wild, we select several popular in-the-wild databases of recent years: FER+ [8], AffectNet [13], and RAF-DB [12]. The FER+ database, also written as Ferplus, is a relabeling of the FER2013 database. For the 35 887 pictures of FER2013, each picture carries 10 labels given by 10 annotators, in the order: neutral, happy, surprise, sadness, anger, disgust, fear, contempt, unknown, and no-face. Ferplus solves the much-criticized problem of the low credibility of FER2013's labels. The FER2013 database provides low-resolution grayscale images of 48×48 pixels; even today, such low resolution is still a challenge. The RAF-DB database is the database with the most stringent production standards so far. Li et al. provided single-label data with 40 annotators' labels for each picture (converted by majority voting). The database provides a total of 29 672 high-resolution images, of which 15 339 are single-label 7-category basic expressions (including neutral), and 3 954 are single-label 11-category compound expressions. Beyond these high-quality labels, their group also provides aligned face data uniformly processed to 100×100 pixels, so subsequent researchers can use the aligned data directly, simplifying preprocessing. AffectNet is currently the database with the largest amount of single-label data. It contains more than one million high-resolution pictures, of which about 450 000 are manually annotated as 11 types of single-label data (a "none" label is added relative to FER+). Due to time and money costs, each of these labels was given by only one annotator. The data set also provides valence and arousal labels for the manually annotated pictures, each marked by 12 professionals. For the basic emotion system, AffectNet's labels have low credibility, and making expression recognition predictions on this database is a huge challenge.

    2.1.2 Parameters and environment

Unless otherwise specified, the experimental parameters are set as follows: the network model accepts 224×224 RGB three-channel images; the SGD optimizer is used with momentum 0.5 and learning rate 0.01, and the number of iterations is 20. Weight decay and learning-rate decay are adopted: the weight decay coefficient is 10^-5, and the learning-rate decay is exponential. The network model is deployed on one NVIDIA 2080 Ti GPU using the PyTorch deep learning framework. Since the batch size needs to be set appropriately for the approximation, it is set to 64, which is relatively large, and the K value is set to 50. The following experiments all adopt the best result within 20 iterations as the final result.
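
These settings correspond roughly to the following PyTorch configuration. The exponential-decay factor is not reported in the paper, so the value below is a placeholder, and the class count depends on the database.

```python
import torch
import torchvision

NUM_CLASSES = 8   # e.g. FER+'s eight expression classes; database-dependent
model = torchvision.models.resnet34(num_classes=NUM_CLASSES)

optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.01,            # initial learning rate
    momentum=0.5,       # momentum as reported
    weight_decay=1e-5,  # weight decay coefficient 10^-5
)
# Exponential learning-rate decay; gamma=0.9 is a placeholder, not reported.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)

EPOCHS = 20       # training iterations
BATCH_SIZE = 64   # relatively large, to suit the batch-granular approximation
K = 50            # L_T refresh interval for approximate method 1
```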

    2.2 Analysis of hyper-parameter

In order to evaluate the influence of hyper-parameters, we first analyze the accuracy threshold and the number of single-label training iterations, and then test the method on different network models. The experimental design and analysis are as follows:

Approximate method 1: considering the relatively large amount of calculation of approximate method 1, experiments are performed only on RAF-DB. With the number of single-label training iterations set to 3, 4 and 5, tests are run on the ResNet18 and ResNet34 network models, with the accuracy threshold ranging from 0.7 to 1.0 in steps of 0.05. The experimental results are shown in Tables 2 and 3.

    Table 2 Experiment of ResNet18 on RAF-DB database (approximate method 1)

    Table 3 Experiment of ResNet34 on RAF-DB database (approximate method 1)

In order to analyse the data more intuitively, we plot the accuracy threshold on the abscissa and the accuracy on the ordinate, obtaining Fig.4, which illustrates the experimental results of ResNet18 and ResNet34.

Fig.4 (a) ResNet18 on RAF-DB

Obviously, although the overall test accuracy fluctuates slightly with the change of the accuracy threshold, the following information can still be obtained. Firstly, the experimental results on ResNet34 are significantly better than those on ResNet18. Secondly, the number of single-label training iterations should not be too low (refer to curve 3 in Fig.4). The fluctuation of the data is due to the characteristics of neural networks: the batches are randomly shuffled in each training run, the random parameters obtained in each run differ, and the fitting of the network parameters during training also differs slightly. The number of single-label training iterations cannot be too low because the network at that point is seriously under-fitted, and the initialization of L_T will then contain a lot of erroneous information. Especially when the accuracy threshold is set to a large value, the error caused by the under-fitted network model is further amplified in the L_T update process. According to the original intention of the scheme design, as the accuracy threshold changes, two accuracy peaks should appear in the experimental results, such as curve 5 and curve 4 in Fig.5. The first peak is due to the "correction" of the data label distribution by the label update algorithm, and the second is the joint effect of the further fitting of the network model parameters and the label update algorithm.

Fig.5 is a schematic diagram of the change of accuracy with the accuracy threshold.

Fig.5 (a) ResNet18 on RAF-DB

Approximate method 2: the basic parameters and experimental settings are the same as those of approximate method 1. The experimental results are listed in Tables 4 and 5.

    Table 4 Experiment of ResNet18 on RAF-DB database (approximate method 2)

    Table 5 Experiment of ResNet34 on RAF-DB database (approximate method 2)

It can be seen that for approximate method 2, the experimental results with 4 single-label training iterations have obvious advantages, especially on the ResNet34 network model. At the same time, these two result curves with four iterations also accord with the conjecture of two local best values in the previous analysis. An obvious difference from approximate method 1 is that the series of experiments with 5 single-label training iterations performs poorly. It is speculated that approximate method 2 is more sensitive to over-fitting and is more negatively affected by an over-fitted network.

In order to visually analyse the performance of the two approximation methods, we take the best results of each, shown in Fig.6, to compare them. For approximate method 1 we take the series of experiments with 5 single-label training iterations, and for approximate method 2 the series with 4 iterations.

Fig.6 (a) Comparison on RAF-DB (ResNet18)

It can be seen that in terms of both the best value and the overall test accuracy, approximate method 2 is better than approximate method 1, so the better-performing approximate method 2 is selected in the subsequent comparative experiments. In addition, because there is no longer any need to consider the huge amount of calculation caused by traversing the entire training set multiple times, the batch size of the network model in training is no longer limited. The experimental results with batch sizes of 16, 32 and 64 are shown in Table 6.

    Table 6 Impact of batch size

Fig.7 Influence of batch size on RAF-DB (ResNet34)

It can be seen that the algorithm performs better when the batch size is small. It is speculated that with a smaller batch size, the granularity of the label distribution update is finer, which also partially explains why approximate method 1 is worse than approximate method 2: obviously, the former updates more labels at each update.

All in all, the method proposed in this paper is sensitive to hyper-parameters. The hyper-parameters that achieve the best result on RAF-DB are as follows: the number of single-label training iterations is 4, the batch size is 16, the accuracy threshold is 0.9, and the network model is ResNet34. The best accuracy is 86.51%.

    2.3 Comparative experiment

First, to verify the effectiveness of the proposed method, we compare its experimental results on the three databases FER+, AffectNet, and RAF-DB with those of the benchmark method on ResNet34.

Table 7 shows that after applying the proposed method, the network model improves by 1.35% on AffectNet, 3.46% on RAF-DB, and 4.69% on FER+.

    Table 7 Comparison with baseline

Then, comparative experiments were conducted with some advanced algorithms that have performed well in recent years; the experimental results are shown in Table 8. On AffectNet, the proposed method is compared with DLP-CNN, EAU-Net, pACNN, and IPA2LT. The hyper-parameters that achieve the best result on AffectNet are as follows: the number of single-label training iterations is 4, the batch size is 32, the accuracy threshold is 1, and the network model is ResNet34.

Table 8 Comparison with advanced methods on AffectNet

As shown in Table 9, on the RAF-DB database, the proposed method is compared with DLP-CNN, EAU-Net, gACNN and DeepExp3D. Among these advanced methods, the proposed method achieves the best effect. The hyper-parameters that obtain the best result on RAF-DB are as follows: the number of single-label training iterations is 4, the batch size is 16, the accuracy threshold is 0.95, and the network model is ResNet34.

Table 9 Comparison with advanced methods on RAF-DB

As shown in Table 10, on FER+ the proposed method is compared with SHCNN, TFE-JL, VGG13-PLD, and ESR-9, and it achieves the best results. The best hyper-parameters on FER+ are as follows: the number of single-label training iterations is 4, the batch size is 16, the accuracy threshold is 0.95, and the network model is ResNet34.

Table 10 Comparison with advanced methods on FER+

    3 Conclusions

In this work, a label distribution expression recognition algorithm based on asymptotic truth value is proposed to solve the problem of ambiguous expressions. In order to accurately describe the emotional tendency in image data, we use label distributions to avoid the ambiguity problem caused by single-label data. We propose a simple label generation strategy and a set of corresponding training methods. Taking into account the rigor of the data, we use the overall inner-class mean and the lower bound introduced by the absolute-majority voting method as constraints. This is the highlight of this work and the focus of future improvement. In future research, many different attempts are needed to find a more reasonable reference standard for label generation.
