
    A Dual Discriminator Method for Generalized Zero-Shot Learning

Computers, Materials & Continua, 2024, Issue 4

Tianshu Wei1 and Jinjie Huang2,*

1School of Computer Science and Technology, Harbin University of Science and Technology, Harbin, 150006, China

2School of Automation, Harbin University of Science and Technology, Harbin, 150006, China

ABSTRACT Zero-shot learning enables the recognition of new classes by transferring models learned from semantic features and existing sample features to things that have never been seen before. Maintaining consistency between different types of features and mitigating the domain shift problem are two of the critical issues in zero-shot learning. To address both issues, this paper proposes a new model structure. The traditional approach maps semantic features and visual features into the same feature space; building on this, the proposed model adds a dual discriminator. The dual discriminator further enhances the consistency between semantic and visual features, and at the same time aligns unseen-class semantic features with training-set samples, providing partial information about the unseen classes. In addition, a new feature fusion method is proposed. This method is equivalent to adding perturbations to the seen-class features, which reduces the degree to which the model's classification results are biased towards the seen classes. The feature fusion also supplies partial information about the unseen classes, improving classification accuracy in generalized zero-shot learning and reducing domain bias. The proposed method is validated and compared with other methods on four datasets, and the experimental results show that it achieves promising results.

KEYWORDS Generalized zero-shot learning; modality consistency; discriminator; domain shift problem; feature fusion

    1 Introduction

Traditional image classification methods need a large number of annotated images for model training, but for new things whose training images cannot be collected at scale, traditional methods cannot classify them directly. Zero-shot learning addresses this problem: it learns from existing samples and then infers the categories of new things. Zero-shot learning recognizes new things using linguistic descriptions of them, which we refer to as semantic features in this paper.

Two types of features are needed in zero-shot learning: sample features (visual features) and the semantic features mentioned above. These two types of features belong to different feature spaces, and aligning them is very important. Alignment is usually done by mapping both into the same feature space [1–5]; we refer to these as embedding methods [6,7]. However, these methods often consider only information from the seen classes, which reduces accuracy when classifying unseen-class samples.

To address the misclassification of unseen classes, some researchers add information about unseen classes to their models; generative models are the methods most commonly used today [8–11]. Although generative models can achieve good classification results, they must first be trained and then used to produce pseudo-samples of the unseen classes, after which a classifier is trained on the pseudo-samples. This makes the process more complicated than other methods. Incorporating unseen-class semantic features into the loss function [12] or adding a calibration term to the classification [13,14] is another technique to increase the classification accuracy on unseen-class samples. In addition, some literature has noted that similarity between features also reduces zero-shot classification accuracy. Zhang et al. [15] proposed imposing orthogonality constraints between semantic features to differentiate the semantic features of different classes. This approach increased the differences between categories and alleviated the domain shift problem.

We similarly add information about unseen classes to the model. Unlike the methods mentioned above, our model uses a new feature alignment method. In addition to the traditional mapping approach, we further use a dual discriminator to align the semantic and visual features. Instead of increasing the distance between the visual and semantic features of different categories, we increase the consistency between the hidden-space visual features and the semantic features of all classes. This not only aligns the features but also provides information about the unseen classes. A new feature fusion approach is also used for classifier training to alleviate the bias problem. Our contributions are as follows:

(1) We propose a new model structure that addresses the alignment problem of different modal features and the domain shift problem.

(2) To better align semantic and visual features, this paper proposes a dual discriminator module, and this dual discriminator can provide information about the unseen classes.

(3) We propose a new feature fusion method that perturbs the seen-class features to reduce the degree to which the model's classification results are biased toward the seen classes, and to provide information about the unseen classes.

(4) Our method was validated on four different datasets. The experimental results demonstrate that the proposed model obtains promising results, especially on the aPY dataset (a 5.1% improvement).

    2 Related Works

    2.1 Zero-Shot Learning

Semantic features and visual features belong to different feature spaces with different dimensions. A common choice is to map the two kinds of features into the same feature space. Figs. 1a and 1b show the two mapping directions: from semantic space to visual space, and from visual space to semantic space. Liu et al. [6] proposed a Low-Rank Semantic Autoencoder (LSA) to enhance zero-shot learning capability; before classification, they used a mapping matrix to project semantic features into the visual space. Tang et al. [4] mapped visual features to the semantic space and realized feature alignment and classification by computing the mutual information between semantic and visual features. Beyond the two mapping directions in Figs. 1a and 1b, some literature uses a common feature space. Hyperbolic spaces can maintain a hierarchy of features, and Liu et al. [16] proposed mapping both the visual features and the semantic features into hyperbolic space. Li et al. [17] applied direct sum decomposition to the semantic features, decomposing them into subspaces; the method in [17] embedded semantic features and visual features into a common space. Yet another family of methods maps semantic features to the visual space while simultaneously projecting visual features to the semantic space; this reduces the domain shift problem and allows better alignment of the two kinds of features [5,18,19]. The methods mentioned above consider only seen-class information when training the models and ignore the information provided by the unseen-class semantic features. The compression of unseen-class information leads to misclassification of unseen-class samples. Especially in generalized zero-shot learning, neglecting unseen-class information can bias most samples towards the seen classes.

    Figure 1: Embedding methods

    2.2 Domain Shift Problem

Since unseen-class samples appear only in the test set and their distribution differs from that of the seen-class samples, the model is biased when classifying unseen-class samples; this phenomenon is the domain shift problem. Especially for test sets that also contain seen-class categories, unseen-class samples are more likely to be misclassified as one of the seen classes. Adding information about unseen classes to the model has been proposed to address this problem. Some researchers proposed generative models to generate unseen-class samples [8–10,20]; these methods use pseudo-samples instead of real samples for training the classifier. Huynh et al. [12] proposed another approach: adding a term about the unseen-class information to the loss function so that this information is not overly compressed. In addition to these two methods, Jiang et al. [21] used class similarity as coefficients in the loss function to improve classification accuracy. To make semantic features more distinguishable, some researchers have imposed constraints on the semantic features of all classes, so that the features of different classes can be better categorized when mapped into the same feature space, which alleviates the domain shift problem. Wang et al. [22] proposed adding orthogonal constraints to all class prototypes. Zhang et al. [15] proposed bi-orthogonal constraints on the latent semantic features and used a discriminator to reduce the modality differences. Zhang et al. [23] proposed corrected attributes for both seen- and unseen-class semantic features; the corrected attributes are discriminative in zero-shot learning and alleviate the domain shift problem. Shen et al. [24] used a spherical embedding space to classify the unseen-class samples; this method used different radii and spherical alignment of angles to alleviate the prediction bias.

In the literature [15], the authors proposed using an adversarial network to distinguish semantic features from visual features. Our method also uses a discriminator on the semantic and visual features, but without the orthogonality restriction on the semantic features; instead, this paper employs a dual discriminator to align the features of the different modalities. The dual discriminator can provide part of the information about the unseen classes. To alleviate the problem that most unseen-class samples are classified into seen classes, we propose a feature fusion method that reduces the seen-class information and increases the unseen-class information to some extent.

    3 A Dual Discriminator Method for Generalized Zero-Shot Learning

    3.1 Definition of Problem

The training set can be denoted by $T=\{X_t, A_t, Y_t\}$. We use $U=\{X_u, A_u, Y_u\}$ to represent the unseen classes. $X$ represents the visual features, $A$ the semantic features, and $Y$ the labels. The subscripts $t$ and $u$ denote the seen classes and the unseen classes, respectively. In conventional zero-shot learning (CZSL), test samples are classified only into the unseen classes. In generalized zero-shot learning (GZSL), test samples are classified into all classes (both seen and unseen).
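Stated compactly, the two settings differ only in the label space available at test time:

$$f_{\mathrm{CZSL}}: X_u \rightarrow Y_u, \qquad f_{\mathrm{GZSL}}: X_t \cup X_u \rightarrow Y_t \cup Y_u.$$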

    3.2 The Architecture of the Proposed Method

The proposed method is shown in Fig. 2. We consider only GZSL in this paper. The visual features $X_t$ are encoded to obtain the hidden-space features $Z_{t1}$ and $Z_{t2}$, with $Z_{t1}=Z_{t2}$. The hidden-space features are aligned with the seen-class semantic features $A_t$ and the unseen-class semantic features $A_u$ through two discriminators. The hidden-space features are decoded to obtain new visual features $\hat{X}_{t1}$ and $\hat{X}_{t2}$, and the new visual features are fused with the original visual features to form the input features $f_1$ and $f_2$ of the classifier. We use lowercase letters to denote individual features. Each part of the model is described in detail below.

Semantic features and visual features belong to different feature spaces; mapping these two kinds of features into the same feature space while maintaining their consistency is an essential issue in zero-shot learning. Inspired by the literature [25], we use the latent-space visual features to make the different modality features consistent.

In the literature [15], the authors used a discriminator to distinguish the features of different modalities. Different from [15], we use two discriminators to enhance the consistency of the two modalities. We take one of the discriminators as an example; its structure is shown in Fig. 3. In a generative adversarial network [26], a discriminator is used to distinguish whether a sample is generated or real, which makes the generated samples more similar to the real ones. In this paper, we regard the hidden-space visual features produced by the encoder as generated samples and the semantic features as real samples, so that the discriminator pushes the hidden-space visual features to be more similar to the semantic features, enhancing the consistency between visual and semantic features. In addition, to reduce the domain shift problem and increase the information of the unseen classes, a second discriminator is applied to the unseen-class semantic features $A_u$ and the hidden-space visual features.

    Figure 2: The proposed method
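To make the data flow in Fig. 2 concrete, the following is a minimal PyTorch-style sketch of the building blocks, using the layer sizes reported in Section 4. The class names, the activation functions, and the choice of the semantic dimension as the hidden-space size are assumptions for illustration, not details confirmed by the paper.

```python
import torch.nn as nn

# Assumed dimensions: 2048-d visual features (Sec. 4); the semantic/attribute
# dimension is dataset-specific (85 for AWA1/AWA2 is used here as an example).
VIS_DIM, SEM_DIM = 2048, 85

class Encoder(nn.Module):
    """Maps visual features X_t into the hidden space; the first layer has
    512 units (Sec. 4). Returns the pair (Z_t1, Z_t2) with Z_t1 = Z_t2."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(VIS_DIM, 512), nn.ReLU(),
                                 nn.Linear(512, SEM_DIM))
    def forward(self, x):
        z = self.net(x)
        return z, z

class Decoder(nn.Module):
    """Maps hidden features back to the visual space; the first layer has
    256 units (Sec. 4)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SEM_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, VIS_DIM))
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """WGAN-style critic over the hidden/semantic space (Fig. 3); its fully
    connected layers output 1024 and 256 units (Sec. 4), followed by a
    scalar score with no sigmoid, as in WGAN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(SEM_DIM, 1024), nn.ReLU(),
                                 nn.Linear(1024, 256), nn.ReLU(),
                                 nn.Linear(256, 1))
    def forward(self, h):
        return self.net(h)
```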

The other discriminator has the same structure as shown in Fig. 3. Inspired by Wasserstein Generative Adversarial Nets (WGAN) [26], we write the loss function of the discriminators in the form of Eq. (1).

    Figure 3: The structure of the discriminator

Here, $\lambda_1$ and $\lambda_2$ represent the coefficients. $D_1$ and $D_2$ represent the two discriminators: $D_1$ denotes the discriminator associated with $Z_{t1}$ and $A_t$, and $D_2$ the discriminator associated with $Z_{t2}$ and $A_u$. The subscript $P$ represents the distribution of the data. In this paper, our calculation of the interpolate $\hat{Z}_1$ is slightly different from that in the literature [26]: we compute $\hat{Z}_1=\delta \cdot Z_{t1}+(1-\delta)\cdot A_t$ with $\delta \sim U(0,1)$, and $\hat{Z}_2$ is computed analogously from $Z_{t2}$ and $A_u$. These two discriminators align the semantic and visual features and add information about the unseen classes. The encoder in Fig. 2 can be seen as the generator, and mean(·) represents the mean value. The generator loss function is given in Eq. (2).
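For concreteness, a standard WGAN-GP-style pair of losses consistent with the quantities defined above (two critics $D_1$, $D_2$, gradient-penalty coefficients $\lambda_1$, $\lambda_2$, interpolates $\hat{Z}_1$, $\hat{Z}_2$, and the mean(·) operator) is sketched below; the paper's exact Eqs. (1) and (2) may differ:

$$
\begin{aligned}
L_{D} = {}& \mathrm{mean}\big(D_1(Z_{t1})\big)-\mathrm{mean}\big(D_1(A_t)\big)+\lambda_1\,\mathrm{mean}\big[(\lVert\nabla_{\hat{Z}_1}D_1(\hat{Z}_1)\rVert_2-1)^2\big]\\
& +\mathrm{mean}\big(D_2(Z_{t2})\big)-\mathrm{mean}\big(D_2(A_u)\big)+\lambda_2\,\mathrm{mean}\big[(\lVert\nabla_{\hat{Z}_2}D_2(\hat{Z}_2)\rVert_2-1)^2\big],\\
L_{G} = {}& -\mathrm{mean}\big(D_1(Z_{t1})\big)-\mathrm{mean}\big(D_2(Z_{t2})\big).
\end{aligned}
$$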

The hidden visual features $z_{t1}$ are passed through the decoder to obtain the new visual features $\hat{x}_{t1}$, which need to be consistent with the original visual features; this relationship is written as the loss in Eq. (3).

Similarly, the hidden-space features $z_{t2}$ pass through the decoder to obtain the new visual features $\hat{x}_{t2}$, and the corresponding loss function with respect to the original visual features is written as Eq. (4).

Here, $\Delta x = x_t - \hat{x}_{t1}$. We first compute $\Delta x$, and then we compute Eq. (4). We use $\Delta x$ instead of $x_t$ because $z_{t2}$ contains a portion of the information of the unseen classes, and we want to reduce the compression of unseen-class knowledge after the decoder. In the latent space, we also want the features of the different modalities to be consistent with each other.
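For concreteness, a plausible reading of Eqs. (3) and (4) under the definitions above is a pair of reconstruction losses; the squared-error form and the pairing of $\hat{x}_{t2}$ with $\Delta x$ are assumptions:

$$L_{R1}=\lVert x_t-\hat{x}_{t1}\rVert_2^2, \qquad L_{R2}=\lVert \Delta x-\hat{x}_{t2}\rVert_2^2, \quad \Delta x = x_t-\hat{x}_{t1},$$

so the second branch fits the residual left by the first, leaving room for the unseen-class information carried by $z_{t2}$.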

If only the features $X_t$ are employed as input to the classifier, the results will be biased toward the seen classes. So, before inputting the features into the classifier, feature fusion is applied, as shown in Eqs. (7) and (8).

Here, $\mu_1$ and $\mu_2$ are coefficients. Feature fusion is equivalent to adding perturbations to the original visual features, which compresses the information about the seen classes and provides information about the unseen classes. Cross-entropy is used as the loss function in the classifier, where $y_i$ represents the true label and $\hat{y}_i$ the predicted label.
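Given that $\mu_1$ and $\mu_2$ weight the decoded features against the originals (Section 4 sets $\mu_1=0.5$ and $\mu_2=1$), one natural form of the fusion in Eqs. (7) and (8), assumed here for illustration, is additive:

$$f_1 = x_t + \mu_1\,\hat{x}_{t1}, \qquad f_2 = x_t + \mu_2\,\hat{x}_{t2}.$$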

The total loss function is given in Eq. (10).

In Eq. (10), $\beta$ is the coefficient. The model proposed in this paper is optimized with an alternating method: the discriminators are trained first using Eq. (1), and then the other networks in the model are trained using Eq. (10).
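The alternating scheme can be summarized in a short training-loop sketch. It reuses the hypothetical Encoder/Decoder/Discriminator modules sketched in Section 3.2; the per-sample pairing of semantic features, the additive fusion, the squared-error reconstruction terms, and the way $\beta$ weights the adversarial term are all assumptions, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def train_epoch(enc, dec1, dec2, d1, d2, clf, loader, A_t, A_u,
                opt_d, opt_g, beta):
    """One epoch of the alternating optimization: discriminators first
    (Eq. (1)), then the remaining networks (Eq. (10))."""
    for x_t, y_t in loader:
        a_t = A_t[y_t]                                   # seen-class semantics per sample (pairing assumed)
        a_u = A_u[torch.randint(len(A_u), (len(x_t),))]  # unseen-class semantics drawn at random (assumed)

        # Step 1: update the two critics (gradient-penalty terms omitted).
        z1, z2 = enc(x_t)
        loss_d = (d1(z1.detach()).mean() - d1(a_t).mean()
                  + d2(z2.detach()).mean() - d2(a_u).mean())
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Step 2: update encoder, decoders, and classifier.
        z1, z2 = enc(x_t)
        xh1, xh2 = dec1(z1), dec2(z2)
        dx = (x_t - xh1).detach()    # Delta x, computed first and held fixed (Sec. 3.2)
        f1 = x_t + 0.5 * xh1         # Eq. (7) with mu1 = 0.5; additive form assumed
        f2 = x_t + 1.0 * xh2         # Eq. (8) with mu2 = 1.0; additive form assumed
        loss_rec = ((x_t - xh1) ** 2).mean() + ((dx - xh2) ** 2).mean()
        loss_adv = -(d1(z1).mean() + d2(z2).mean())      # encoder acts as the generator
        loss_cls = F.cross_entropy(clf(f1), y_t) + F.cross_entropy(clf(f2), y_t)
        loss = loss_cls + loss_rec + beta * loss_adv     # combination in Eq. (10) assumed
        opt_g.zero_grad(); loss.backward(); opt_g.step()
```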

    4 Experiments

We validate our model on four datasets: Animals with Attributes 1 (AWA1) [27], Animals with Attributes 2 (AWA2) [28], Attribute Pascal and Yahoo (aPY) [29], and Caltech-UCSD Birds-200-2011 (CUB) [30]. The details of these four datasets are shown in Table 1.

    Table 1: The details of the four datasets

In the proposed model, we use the RMSProp method to optimize the discriminator modules and the Adam method to optimize the other parts of the model. The learning rate is 0.001 for the AWA1 and AWA2 datasets and 0.006 for the CUB and aPY datasets. The output of the first layer in the encoder contains 512 units, and the output of the first layer in the decoder contains 256 units. The output dimensions of the fully connected layers in the discriminator are 1024 and 256. We set $\mu_1=0.5$ and $\mu_2=1$ in our model. The visual features and semantic features are taken from the literature [28]; the dimension of the visual features is 2048. The complexity of the model is as follows: the FLOPs for AWA1, AWA2, CUB, and aPY are 4.86 M, 4.86 M, 6.77 M, and 4.68 M, and the model sizes are 2.44 M, 2.44 M, 3.39 M, and 2.35 M bytes, respectively.
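As a concrete (assumed) instantiation of this setup, reusing the hypothetical modules from Section 3.2 with a linear classification head:

```python
import itertools
import torch
import torch.nn as nn

NUM_CLASSES = 50          # e.g., AWA1/AWA2: 40 seen + 10 unseen classes
enc, dec1, dec2 = Encoder(), Decoder(), Decoder()
d1, d2 = Discriminator(), Discriminator()
clf = nn.Linear(2048, NUM_CLASSES)   # linear head over 2048-d fused features (assumption)

# RMSProp for the discriminators, Adam for everything else (Sec. 4);
# lr = 0.001 for AWA1/AWA2 and 0.006 for CUB/aPY.
lr = 0.001
opt_d = torch.optim.RMSprop(itertools.chain(d1.parameters(),
                                            d2.parameters()), lr=lr)
opt_g = torch.optim.Adam(itertools.chain(enc.parameters(), dec1.parameters(),
                                         dec2.parameters(), clf.parameters()),
                         lr=lr)
```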

    4.1 Results of GZSL

The proposed method is compared with other methods in the GZSL setting. The evaluation protocol is taken from the literature [28]. We use $C$ to denote the average per-class top-1 accuracy and $H$ to denote the harmonic mean; the subscripts $s$ and $u$ denote the seen classes and the unseen classes. The equations are as follows:
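These are the standard GZSL evaluation quantities from [28], matching the notation above:

$$C=\frac{1}{\lVert Y\rVert}\sum_{c=1}^{\lVert Y\rVert}\frac{\#\,\text{correct predictions in class } c}{\#\,\text{samples in class } c}, \qquad H=\frac{2\cdot C_s\cdot C_u}{C_s+C_u}.$$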

The results of the proposed method are shown in Table 2. As seen from Table 2, the result of the proposed method on the AWA1 dataset is 2.2% lower than the best result. The method proposed in this paper achieves promising results on the AWA2 and aPY datasets. Especially on the aPY dataset, our method outperforms the Spherical Zero-Shot Learning (SZSL) [24] method by 5.1%. The methods Semantic Autoencoder + Generic Plug-in Attribute Correction (SAE+GPAC) [23], SZSL [24], Transferable Contrastive Network (TCN) [21], and Modality Independent Adversarial Network (MIANet) [15] all consider the unseen semantic features in their models. SAE+GPAC, SZSL, and MIANet impose constraints on the semantic features, making the features of different classes more distinguishable; TCN uses the relationship between unseen-class and seen-class semantic features as coefficients of the loss function. The method in this paper achieves better results than these four methods on the AWA1, AWA2, and aPY datasets, while SZSL and TCN outperform the proposed method on the CUB dataset. In summary, the method in this paper gives good results on the AWA2 and aPY datasets but is not as good as the other methods on the AWA1 and CUB datasets, especially on CUB. This is because CUB is a fine-grained image dataset: although the method in this paper can provide features about unseen classes, it is not sufficiently discriminative between features of different classes, which leads to a decrease in the classification results.

    Table 2: The results in GZSL

    4.2 Parameters Influences

Figs. 4–7 show the effects of $\beta$ in Eq. (10) on the generalized zero-shot classification results.

    Figure 4: The effects of β on AWA1

    Figure 5: The effects of β on AWA2

In Figs. 4–7, this paper uses 'tr' and 'ts' to denote the average per-class top-1 accuracy of the seen classes and the unseen classes, respectively. For the AWA1 and AWA2 datasets, as $\beta$ increases, the accuracy of the unseen classes and the harmonic mean increase while the accuracy of the seen classes decreases. For the aPY dataset, an increase in $\beta$ has little effect on the harmonic mean, while the accuracy decreases for the seen classes and increases for the unseen classes. For the CUB dataset, accuracy increases for unseen-class samples and decreases for seen-class samples. In summary, as $\beta$ increases, the accuracy of the unseen classes increases while the accuracy of the seen classes decreases.

    Figure 6: The effects of β on aPY

    Figure 7: The effects of β on CUB

    4.3 Ablation Experiments and tSNE

The results of the ablation experiments are shown in Table 3. The method without the discriminators and feature fusion is denoted as the baseline; in the baseline, the visual features are used directly as the input features $f_1$ and $f_2$ of the classifier. We use 'baseline+feature fusion' to indicate the model without discriminators in which $f_1$ and $f_2$ are calculated using Eqs. (7) and (8). 'baseline+feature fusion+one discriminator' denotes the method that adds one discriminator related to the semantic features of the seen classes.

    Table 3: Ablation experiments

Table 3 shows that for AWA1, AWA2, and CUB, feature fusion drastically improves the harmonic mean. 'baseline+feature fusion' improves the accuracy of the seen classes compared to the baseline without reducing the accuracy of the unseen classes too much, which indicates that it can improve seen-class accuracy while still preventing unseen-class samples from being massively biased toward the seen classes. On aPY, 'baseline+feature fusion' increases the accuracy of both the seen and the unseen classes compared to the baseline. Table 3 also shows that adding the discriminator increases the harmonic mean; this is because the discriminator not only adds information about the unseen classes but also makes the features of the different modalities more consistent.

Figs. 8a and 8b show the tSNE visualizations for the AWA2 dataset. Fig. 8a shows the unseen-class visual features in the AWA2 dataset, and Fig. 8b shows the visual features $f_2$ obtained using feature fusion. Since the training-set samples are used to obtain $f_2$, the number of samples obtained for each class differs. The figure shows that the method proposed in this paper can provide a distribution partially similar to that of the original sample features.

    Figure 8: The tSNE of AWA2

    4.4 The Influence of the Features ΔX

Fig. 9 shows the results of replacing $\Delta X$ in Eq. (4) with the original visual features $X_t$. From Fig. 9, although good results can be obtained using the original visual features, they are still lower than those of the method in this paper.

Fig. 10 shows the classification accuracy of each unseen class on the aPY dataset when $\Delta X$ is replaced with the original features $X_t$. From Fig. 10, the accuracy is lower than that of the method proposed in this paper, except for a very few classes whose accuracy increases when the original features are used.

Figure 9: The harmonic mean when the original training features are used in Eq. (4)

    Figure 10: The accuracy of the unseen class samples of aPY

    5 Conclusions

We propose a new model structure for the consistency problem of different modal features and the domain shift problem in generalized zero-shot learning. The dual discriminator structure in the proposed model leads to better alignment of semantic and visual features, and it can provide part of the information about the unseen classes. At the same time, this paper adopts a new feature fusion method that reduces the information about the seen classes and provides information about the unseen classes, so the model is not overly biased towards the seen classes in generalized zero-shot classification, improving the harmonic mean. We experimented with the proposed model on four datasets, and the results show the effectiveness of our approach, especially on the aPY dataset. In future work, we will explore attention mechanisms to extract more discriminative features, which will enable better alignment of features across modalities and improve the accuracy of zero-shot classification.

Acknowledgement: The authors sincerely appreciate the editors and reviewers for their valuable work.

Funding Statement: The authors received no specific funding for this study.

Author Contributions: Study design and draft manuscript preparation: Tianshu Wei; reviewing and editing the manuscript: Jinjie Huang.

Availability of Data and Materials: The datasets used in the manuscript are public datasets. They are available from https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/zero-shot-learning/zero-shot-learning-the-good-the-bad-and-the-ugly.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
