
    Label distribution expression recognition algorithm based on asymptotic truth value


HUANG Hao, GE Hongwei

(1. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China; 2. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Wuxi 214122, China)

Abstract: Ambiguous expression is a common phenomenon in facial expression recognition (FER). Because of the existence of ambiguous expressions, the effect of FER is severely limited. The reason may be that a single label cannot effectively describe the complex emotional intentions that are vital in FER. Label distribution learning contains more information and is a possible way to solve this problem. To apply label distribution learning to FER, a label distribution expression recognition algorithm based on asymptotic truth value is proposed. Without incorporating extraneous quantitative information, the original information of the database is fully used to complete the generation and utilization of label distributions. Firstly, in the training part, single-label learning is used to collect the mean value of the overall distribution of the data. Then, the true value of the data label is approached gradually at the granularity of a data batch. Finally, the whole network model is retrained using the generated label distribution data. Experimental results show that this method can obviously improve the accuracy of the network model, and that it is competitive with advanced algorithms.

Key words: facial expression recognition (FER); label distribution learning; label smoothing; ambiguous expression

    0 Introduction

As one of the most important and accessible emotional expressions of human beings, facial expressions have been extensively studied in psychology. The facial expression recognition (FER) system in the field of computer vision mainly focuses on automatic FER. Its major task is to recognize the facial expressions in pictures or picture sequences (including videos). It is believed that this technology is of great significance to efficient human-computer interaction in the future[1], as well as to the field of fatigue driving detection[2] and the treatment of mental diseases[3]. Li S et al.[4] concluded that there are two difficulties in deep expression recognition at this stage: network overfitting due to a lack of effective data, and difficulty in feature extraction due to a large amount of redundant information. A large number of recent studies have shown that it is not appropriate to simply regard deep expression recognition as an application of neural networks. Even with high-standard valid data and advanced network models, the accuracy of deep expression recognition is still limited by the characteristics of the expression itself. Especially in the past two years, the phenomenon of ambiguous expressions has received more and more attention from researchers[5-7].

We study the FER of static pictures. Usually, a picture corresponds to only one label. In fact, a picture may contain complex emotional intentions; from this perspective, the labels for static expression recognition are not accurate enough. To illustrate this problem intuitively, we use real data from the FER2013 database as an example, as shown in Fig.1.

Fig.1 Pictures from FER2013

Taking the above figures as an example, we briefly describe what is learned under three labeling schemes: one-hot labels, multi-labels, and label distributions. These images are taken from the training set of the FER2013 database, and their labels in FER+[8] are shown in Table 1.

    Table 1 Corresponding label of Fig.1

By majority voting, a1, a2 and a3 are classified as Angry, Neutral and Unknown, respectively. When this label conversion is carried out, the single-label versions of a1, a2 and a3 lose the information of 2, 4 and 10 annotations, respectively. For figure a1, the one-hot label is in essence [0,0,0,0,10,0,0,0,0,0]. Obviously, if each annotator has the same weight, this conversion is not quite fair. When figure a3 is converted into a multi-label, the form of the data changes to [1,0,1,0,0,0,0,0,0], and what the network learns is in essence [5,0,5,0,0,0,0,0], which also loses part of the information. The classifications of the ten annotators are partly accidental and all acceptable, but in practice it is not wise to ignore 20% or 10% of the emotional preferences.
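To make the three label forms concrete, the following minimal sketch converts a hypothetical FER+-style vote vector into the three kinds of training targets; the vote counts here are illustrative, not taken from Table 1:

```python
import numpy as np

# FER+-style vote counts from 10 annotators over the label set
# (neutral, happy, surprise, sadness, anger, disgust, fear, contempt,
# unknown, no-face). Hypothetical example: 8 votes Angry, 2 votes Sadness.
votes = np.array([0, 0, 0, 2, 8, 0, 0, 0, 0, 0], dtype=float)

# Majority voting collapses the votes to a one-hot label and
# discards the 2 dissenting annotations.
one_hot = np.zeros_like(votes)
one_hot[votes.argmax()] = 1.0

# A multi-label target keeps every class above a vote threshold,
# but flattens their relative strength.
multi_label = (votes >= 2).astype(float)

# A label distribution keeps the annotators' full emotional preference.
label_distribution = votes / votes.sum()

print(one_hot)             # [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
print(multi_label)         # [0. 0. 0. 1. 1. 0. 0. 0. 0. 0.]
print(label_distribution)  # [0.  0.  0.  0.2 0.8 0.  0.  0.  0.  0. ]
```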

Label distribution learning is very promising for solving this problem. Label distribution learning (LDL) was clearly proposed by Geng et al.[9], who believe that although multi-label learning can solve the problem of label ambiguity to a certain extent, for many practical problems the overall label distribution is more important. Subsequently, Gao et al.[10] proposed the deep label distribution learning (DLDL) model, which applies label distribution learning to tasks such as age prediction and head pose estimation. In the direction of FER, similar work is more influential. In addition to the FER+ relabeling work used in the previous example, there is also the work of Chen et al.[5]. They proposed the label distribution learning on auxiliary label space graphs (LDL-ALSG) framework, which uses an approximate K-nearest-neighbor algorithm to calculate the distribution of the current data in the solution space of similar data under auxiliary tasks related to FER, with the auxiliary network acting as a judge. Barsoum et al.[8] used crowdsourcing to convert single-label data into label distribution data, which improves the recognition effect of deep neural networks. However, this method requires multiple people to relabel the data, which takes a lot of manpower and high economic costs. The method of Chen et al. has the disadvantage that the amount of calculation during training is extremely large, and its effect depends on a good auxiliary task model.

As far as we know, the IPA2LT framework proposed by Zeng et al.[11] is the first attempt to solve the problem of ambiguous labels. Their work is based on the recognition that facial expression data have a potential true label; that is, for a certain face picture, the expression label contained therein has a certain truth value (they regard ambiguous expressions as inconsistent annotations of expression labels). Inspired by this idea, our work is based on the following assumption: for certain facial expression data, there is a potential true emotional distribution, and in single-label data the label is a high-level generalization of this distribution. We use an approximation method to find the potential real expression label distribution of the data. The significant difference from the former work is that we hold that the more appropriate way to describe expressions is the label distribution of expressions. Therefore, this study acknowledges the authenticity and representativeness of the annotators' labels.

    1 Proposed method

1.1 Overall framework

The latent truth is a classical assumption. In essence, assuming that the data can obtain a perfect label distribution is the same as assuming that the loss function of the network can be minimized, which is an ideal situation. Although the loss function of a neural network cannot be minimized in most cases, the characterization of the loss function makes it possible to optimize the network model step by step. In this process, the minimum value never actually appears, but the expectation that the loss function approaches 0 actually improves the network model. The algorithm presented in this paper attempts to do similar approximation work: although the true value of the data label distribution is not known, it can be approached through a better expectation.

The overall framework of the algorithm in this paper is shown in Fig.2. The training process is as follows: first, send the training data to the network model and use the single labels of the data to learn knowledge; then collect the softmax outputs of the training data during the last round of training as the initial value L_T of the true label distribution, and meanwhile collect the class-average distribution L_M of each class of data. The label distribution update process is as follows: consider the original label L_O of the data together with the L_T and L_M obtained during training, and update the label distribution of the training data. In the testing process, make predictions on the test set of the database with the network trained on the label distribution data.
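The collection step can be sketched as follows, assuming a PyTorch classifier and a non-shuffled training loader; the function and variable names (collect_LT_and_LM, LT, LM) are ours, not from the authors' code:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def collect_LT_and_LM(model, loader, num_classes):
    """Collect L_T (softmax output of every training sample from the last
    single-label epoch) and L_M (mean softmax output per class).
    The loader must iterate in a fixed order so that rows of L_T line up
    with sample indices."""
    model.eval()
    all_probs, all_labels = [], []
    for images, targets in loader:
        probs = F.softmax(model(images), dim=1)
        all_probs.append(probs)
        all_labels.append(targets)
    LT = torch.cat(all_probs)          # shape: [N, num_classes]
    labels = torch.cat(all_labels)     # shape: [N]
    # Class-average distribution; assumes every class appears at least once.
    LM = torch.stack([LT[labels == c].mean(dim=0) for c in range(num_classes)])
    return LT, labels, LM
```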

    Fig.2 Overall framework of proposed method

    1.1.1 Label update strategy

The algorithm adopts the class-average distribution L_M of every class of data in the training set as one of the references for the real label distribution, and takes 0.5 as the lower limit of the label update strategy according to the principle of majority voting. Specifically, for a data sample in the training set, the network model's prediction falls into one of the following three situations:

1) The classification is correct and the predicted probability of the correct class is greater than that class's average expectation in L_M, the overall average of the data. In this case, the classification of the data is very accurate, and only higher requirements can be put forward for further improvement. The label distribution of this sample is set to the one-hot label of the data;

2) The classification is correct, but the predicted probability is lower than the average. This is the ambiguous data that this study tries to handle: the original single label is too optimistic an expectation for this data, and in fact the distribution of facial expressions contained in the data is not so clear-cut. The label distribution of the data is adjusted to its class's value in L_M;

3) The classification is wrong, so we should not expect too much of the expression distribution label of this data; it is enough that the probability of the correct class in the next distribution approaches 0.5. It is unrealistic to require this kind of data to achieve the precise classification given by the single label. The essence of this step is label smoothing with a larger threshold.

The above description is formalized as

$$
\hat{L}=
\begin{cases}
L_O, & \hat{y}=y \ \text{and}\ p_y \ge L_M^{(y)}(y)\\
L_M^{(y)}, & \hat{y}=y \ \text{and}\ p_y < L_M^{(y)}(y)\\
\mathrm{smooth}(y,\,0.5), & \hat{y}\ne y
\end{cases}
\tag{1}
$$

where $p$ is the softmax output of the sample, $\hat{y}=\arg\max_c p_c$ is its predicted class, $y$ is its single label, $L_M^{(y)}$ is the class-average distribution of class $y$, and $\mathrm{smooth}(y, 0.5)$ assigns 0.5 to the correct class and distributes the remaining mass over the other classes.

    Fig.3 Label update of three situations
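A minimal sketch of this update rule follows our reading of Eq.(1); the uniform spreading in the third case is our assumption, modeled on standard label smoothing:

```python
import torch

def update_label(prob, y, LM, num_classes, eps=0.5):
    """Three-case label update of Eq.(1).
    prob: softmax output for one sample (tensor of size num_classes);
    y: its single label; LM: class-average distributions, shape [C, C]."""
    pred = prob.argmax().item()
    if pred == y and prob[y] >= LM[y][y]:
        # Case 1: correct and above the class average -> keep the one-hot label.
        new = torch.zeros(num_classes)
        new[y] = 1.0
    elif pred == y:
        # Case 2: correct but below the class average -> class-average distribution.
        new = LM[y].clone()
    else:
        # Case 3: wrong -> smoothed label with 0.5 on the true class and the
        # remaining 0.5 spread uniformly (our assumption) over the others.
        new = torch.full((num_classes,), eps / (num_classes - 1))
        new[y] = eps
    return new
```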

    1.1.2 Asymptotic bounds

The purpose of the label update is to generate the data label distribution reasonably. There are two key issues in this process. The first is how to ensure the representativeness of the class-average distribution L_M, that is, how to ensure that the model has learned enough knowledge without over-fitting to the training data. There is no simple solution to this problem; a reasonable setting can only be obtained through a large amount of experiments. The second is when the network model should be used to update L_T in the label update process after fixing L_M. Ideally, this problem could be dealt with at the same time as the previous one, that is, the state of the network model after single-label training would be exactly the most reasonable state for the label update. However, in reality the network model does not learn in a single iteration; it learns knowledge from every batch in the training process. L_M considers the average over the entire training set of the database. If the model that fixes L_M is also used to initialize L_T, it is almost certain that half of the data will fall below the reference standard of L_M in the label distribution. Under the premise of authenticity and representativeness, it is obviously inappropriate to deem only half of the data accurately labeled.

In order to solve the above two problems at the same time, when fixing L_M, a certain compromise is made on the classification ability of the network model, and the network model after a certain batch is taken as the standard. In fact, as the number of iterations increases, the growth of the feature extraction ability of the network model slows down, so the loss in L_M from such processing is acceptable. When L_T is initialized, the network model after a certain batch is likewise taken as the standard: after multiple batches of training, the feature extraction ability of the network is further improved, and if there is an expected threshold for the accuracy of the data, the network model can approach this threshold as closely as possible at the granularity of a batch. One practical problem is that if the batch size is too small, the efficiency of updating the entire L_T is too low: the information in one batch is limited and may not even update L_T at all, yet all the data must still be traversed. Undoubtedly, the training time cost of such processing is huge. The proposed algorithm therefore designs two approximations: one updates L_T once after processing K batches, and the other updates only the current batch's entries of L_T after each batch. Both approximations preferentially update the earlier batches each time; however, since the order of the batches is random, averaging over multiple runs can offset this defect. Furthermore, after L_M is fixed, the network model is very close to fitting, so the difference between an L_T entry updated in an earlier batch and one updated in a later batch is not large.
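One possible sketch of a training epoch under the two approximations, reusing the update_label rule above; it assumes the dataset also yields each sample's index so that rows of L_T can be addressed, which the paper does not specify:

```python
import torch
import torch.nn.functional as F

def train_epoch(model, loader, optimizer, LT, LM, num_classes, K=50, method=2):
    """One epoch of label-distribution training with asymptotic L_T updates.
    loader yields (images, targets, idx); update_label is the Eq.(1) sketch."""
    pending = []  # batches waiting for a method-1 refresh
    for step, (images, targets, idx) in enumerate(loader):
        logits = model(images)
        # Train against the current label distribution with a KL divergence loss.
        loss = F.kl_div(F.log_softmax(logits, dim=1), LT[idx],
                        reduction="batchmean")
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        probs = F.softmax(logits.detach(), dim=1)
        if method == 2:
            # Approximation 2: refresh only this batch's rows of L_T.
            for j in range(len(idx)):
                LT[idx[j]] = update_label(probs[j], targets[j].item(),
                                          LM, num_classes)
        else:
            # Approximation 1: accumulate K batches, then refresh them together.
            pending.append((idx, probs, targets))
            if (step + 1) % K == 0:
                for b_idx, b_probs, b_targets in pending:
                    for j in range(len(b_idx)):
                        LT[b_idx[j]] = update_label(b_probs[j],
                                                    b_targets[j].item(),
                                                    LM, num_classes)
                pending.clear()
```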

    2 Experiments

    2.1 Experimental setup

    2.1.1 Datasets

To evaluate the performance of the algorithm in in-the-wild environments, we select several popular wild databases from recent years: FER+[8], AffectNet[13] and RAF-DB[12].

The FER+ database (also written as Ferplus) is a relabeling of the FER2013 database. For the 35 887 pictures of FER2013, each picture carries 10 labels made by 10 annotators, in the order: neutral, happy, surprise, sadness, anger, disgust, fear, contempt, unknown and no-face. FER+ solves the much-criticized problem of the low credibility of FER2013's labels. The FER2013 database provides low-resolution grayscale images of 48×48 pixels; even today, such low resolution is still a challenge.

The RAF-DB database is the database with the most stringent production standards so far. Li et al. provide single-label data obtained by majority voting over 40 annotators' labels for each picture. The database provides 29 672 high-resolution images in total, of which 15 339 are single-label 7-category basic expressions (including neutral) and 3 954 are single-label 11-category compound expressions. For these high-quality labeled data, their group also provides aligned face data uniformly processed to 100×100 pixels, so subsequent researchers can use the aligned data directly, which simplifies preprocessing.

AffectNet is currently the database with the largest amount of single-label data. It contains more than one million high-resolution pictures, of which about 450 000 are manually annotated into 11 types of single-label data (a none label is added relative to FER+). Due to time and money costs, each of these labels is marked by only one annotator. The data set also provides valence and arousal labels for the manually annotated pictures, each marked by 12 professionals. For the basic emotion system, AffectNet's labels have low credibility, and making expression recognition predictions on this database is a huge challenge.

    2.1.2 Parameters and environment

Unless otherwise specified, the experimental parameters are set as follows: the network model accepts 224×224 RGB three-channel images; the SGD optimizer is selected with momentum 0.5 and learning rate 0.01, and the number of iterations is 20. Weight decay and learning rate decay are adopted: the weight decay coefficient is 10^-5, and the learning rate decays exponentially. The network model is deployed on one Nvidia 2080ti GPU using the PyTorch deep learning framework. Since the batch size needs to be set appropriately for the approximation, the batch size is set to 64, which is relatively large. The K value is set to 50. The following experiments all adopt the best result in 20 iterations as the final result.
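For reference, a minimal PyTorch sketch of this setup; the exponential decay rate (gamma) and the class count are placeholders, since they are not stated above:

```python
import torch
from torchvision import models

# Stated setup: 224x224 RGB input, SGD (momentum 0.5, lr 0.01),
# weight decay 1e-5, exponential lr decay, batch size 64, 20 epochs, K = 50.
model = models.resnet34(num_classes=7)  # 7 basic classes, e.g. on RAF-DB
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.5, weight_decay=1e-5)
# gamma is a placeholder: the paper gives only the decay type, not the rate.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.9)
batch_size, epochs, K = 64, 20, 50
```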

2.2 Analysis of hyper-parameters

In order to evaluate the influence of hyper-parameters, in the experimental part we first analyze the accuracy threshold and the number of single-label training iterations, and then test and apply the method on different network models. The experimental design and analysis are as follows:

Approximate method 1: considering the relatively large amount of calculation in approximate method 1, experimental tests are only performed on RAF-DB. With the number of single-label training iterations set to 3, 4 and 5, tests are run on the ResNet18 and ResNet34 network models, with the accuracy threshold varying from 0.7 to 1.0 in steps of 0.05. The experimental results are shown in Tables 2 and 3.

    Table 2 Experiment of ResNet18 on RAF-DB database (approximate method 1)

    Table 3 Experiment of ResNet34 on RAF-DB database (approximate method 1)

In order to analyze the data more intuitively, we plot the accuracy threshold on the abscissa and the accuracy on the ordinate, obtaining Fig.4, which illustrates the experimental results of ResNet18 and ResNet34.

Fig.4 Experimental results of ResNet18 and ResNet34 on RAF-DB: (a) ResNet18

Although the overall test accuracy fluctuates slightly as the accuracy threshold changes, the following information can still be obtained. Firstly, the experimental results on ResNet34 are significantly better than those on ResNet18. Secondly, the number of single-label training iterations should not be too low (refer to line 3 in Fig.4). The fluctuation of the data is due to the characteristics of neural networks: the batches are randomly shuffled in every training run, the randomized parameters obtained in each run are different, and the fitting of the network parameters during training also differs slightly. The number of single-label training iterations cannot be too low because the network at that point is seriously under-fitted, and the initialization of L_T will then contain a lot of erroneous information; especially when the accuracy threshold is set to a large value, the error caused by the under-fitted network model is further reflected in the L_T update process. According to the original intention of the design, as the accuracy threshold changes, two accuracy fluctuations should appear in the experimental results, such as line 5 and line 4 in Fig.5. The best result in the earlier part is due to the "correction" of the data label distribution by the label update algorithm, and the latter is the joint effect of the further fitting of the network model parameters and the label update algorithm.

Fig.5 Schematic diagram of accuracy change with accuracy threshold: (a) ResNet18 on RAF-DB

Approximate method 2: the basic parameters and experimental settings are the same as those of approximate method 1. The experimental results are listed in Tables 4 and 5.

    Table 4 Experiment of ResNet18 on RAF-DB database (approximate method 2)

    Table 5 Experiment of ResNet34 on RAF-DB database (approximate method 2)

It can be seen that for approximation method 2, the experimental results with the number of single-label training iterations set to 4 have an obvious advantage, especially on the ResNet34 network model. At the same time, the two result curves with four iterations are also in line with the earlier conjecture of two local best values. The obvious difference from approximation method 1 is that the series of experiments with five single-label training iterations performs poorly. It is speculated that approximation method 2 is more sensitive to over-fitting and is more negatively affected by an over-fitted network.

In order to visually compare the performance of the two approximation methods, we take the best results of each, shown in Fig.6. Approximation method 1 takes the series of experiments with five single-label training iterations, and approximation method 2 takes the series with four iterations.

Fig.6 Comparison of best results of the two approximation methods: (a) RAF-DB (ResNet18)

It can be seen that in both the best value and the overall test accuracy, approximation method 2 is better than approximation method 1, so the better-performing approximation method 2 is selected in the subsequent comparative experiments. In addition, because there is no longer any need to consider the huge amount of calculation caused by traversing the entire training set multiple times, the batch size of the network model is no longer limited. The experimental results for batch sizes of 16, 32 and 64 are shown in Table 6.

    Table 6 Impact of batch size

Fig.7 Influence of batch size on RAF-DB (ResNet34)

It can be seen that the algorithm performs better when the batch size is small. It is speculated that a smaller batch size gives a finer granularity of label distribution updates, which also partially explains why approximation method 1 is worse than approximation method 2: obviously, the former updates more labels each time.

All in all, the method proposed in this paper is sensitive to hyper-parameters. The hyper-parameters that achieve the best result on RAF-DB are as follows: the number of single-label training iterations is 4, the batch size is 16, the accuracy threshold is 0.9, and the network model is ResNet34. The best accuracy is 86.51%.

    2.3 Comparative experiment

First, to verify the effectiveness of the proposed method, we compare its experimental results on the three databases FER+, AffectNet and RAF-DB with those of the baseline method on ResNet34.

Table 7 shows that after applying the proposed method, the network model improves by 1.35% on AffectNet, 3.46% on RAF-DB and 4.69% on FER+.

    Table 7 Comparison with baseline

Then, comparative experiments are conducted with some advanced algorithms that have performed well in recent years; the experimental results are shown in Table 8. On AffectNet, the proposed method is compared with DLP-CNN, EAU-Net, pACNN and IPA2LT. The hyper-parameters that achieve the best result on AffectNet are as follows: the number of single-label training iterations is 4, the batch size is 32, the accuracy threshold is 1, and the network model is ResNet34.

Table 8 Comparison with advanced methods on AffectNet

As shown in Table 9, on the RAF-DB database the proposed method is compared with DLP-CNN, EAU-Net, gACNN and DeepExp3D, and it achieves the best result among these advanced methods. The hyper-parameters that obtain the best result on RAF-DB are as follows: the number of single-label training iterations is 4, the batch size is 16, the accuracy threshold is 0.95, and the network model is ResNet34.

Table 9 Comparison with advanced methods on RAF-DB

As shown in Table 10, on FER+ the proposed method is compared with SHCNN, TFE-JL, VGG13-PLD and ESR-9, and the best result is again achieved. The best hyper-parameters on FER+ are as follows: the number of single-label training iterations is 4, the batch size is 16, the accuracy threshold is 0.95, and the network model is ResNet34.

Table 10 Comparison with advanced methods on FER+

    3 Conclusions

In this work, a label distribution expression recognition algorithm based on asymptotic truth value is proposed to solve the problem of ambiguous expressions. In order to accurately describe the emotional tendency in image data, we use label distributions to avoid the ambiguity caused by single-label data. We propose a simple label generation strategy and a set of corresponding training methods. Taking into account the rigor of the data, we use the overall within-class mean and the lower bound introduced by the absolute-majority voting rule as constraints. This is the highlight of this work and also the focus of future improvement: further research will require many different attempts to find a more reasonable reference standard for label generation.
