
    Unsupervised Domain Adaptation Based on Discriminative Subspace Learning for Cross-Project Defect Prediction

    Computers, Materials & Continua, 2021, Issue 9

    Ying Sun, Yanfei Sun2,*, Jin Qi, Fei Wu, Xiao-Yuan Jing3, Yu Xue and Zixin Shen

    1Nanjing University of Posts and Telecommunications, Nanjing, 210003, China

    2Jiangsu Engineering Research Center of HPC and Intelligent Processing, Nanjing, 210003, China

    3Wuhan University, Wuhan, 430072, China

    4Nanjing University of Information Science and Technology, Nanjing, 210044, China

    5Carnegie Mellon University, Pittsburgh, 15213, USA

    Abstract: Cross-project defect prediction (CPDP) aims to predict defects in a target project by using a prediction model built on source projects. The main problem in CPDP is the large distribution gap between the source project and the target project, which prevents the prediction model from performing well. Most existing methods also overlook the class discrimination of the learned features, so seeking an effective transferable model from the source project to the target project remains challenging. In this paper, we propose an unsupervised domain adaptation approach based on discriminative subspace learning (DSL) for CPDP. DSL treats the data from the two projects as coming from two domains and maps the data into a common feature space. It employs cross-domain alignment with discriminative information from different projects to reduce the distribution difference of the data between projects while incorporating class-discriminative information. Specifically, DSL first utilizes subspace-learning-based domain adaptation to reduce the distribution gap of the data between different projects. Then, it makes full use of the class label information of the source project and transfers the discrimination ability of the source project to the target project in the common space. Comprehensive experiments on five projects verify that DSL can build an effective prediction model and improve the performance over the related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC, respectively.

    Keywords: Cross-project defect prediction; discriminative subspace learning; unsupervised domain adaptation

    1 Introduction

    Software security is important in the software product development process. Defects are inevitable and degrade software quality. Software defect prediction (SDP) [1-3] is one of the most important steps in software development; it can detect potentially defective instances before the release of the software. Recently, SDP has received much research attention.

    Most SDP methods construct a prediction model based on historical defect data and then apply it to predict the defects of new instances within the same project, which is called within-project defect prediction (WPDP) [4-6]. The labeled historical data, however, are usually limited: new projects have few historical data, and the collection of labels is time-consuming. Because plenty of data are available from other projects, cross-project defect prediction (CPDP) [7,8] has emerged as an alternative solution.

    CPDP uses training data from external projects (i.e., source projects) to train the prediction model, and then predicts defects in the target project [9]. CPDP is challenging because a prediction model trained on one project might not work well on other projects. Large gaps exist between different projects because of differences in the programming language used, developer experience levels, and code standards. Zimmermann et al. [7] performed CPDP on 622 pairs of projects, and only 21 pairs performed well. In the machine learning literature [10], such models require different projects to have similar distributions. In WPDP, this assumption easily holds, but it does not hold for CPDP. Therefore, the ability to reduce the distribution differences between different projects is key to increasing the effectiveness of CPDP.

    To overcome this distribution difference problem, many methods have been proposed. Some researchers have used data filtering techniques. For example, Turhan et al. [11] used the nearest-neighbor technique to choose instances from the source project that were similar to instances from the target project. Other researchers have introduced transfer learning methods into CPDP. For example, Nam et al. [12] used transfer component analysis to reduce the distribution gaps between different projects. These works, however, usually ignore the conditional distributions of different projects.

    Unsupervised domain adaptation (UDA) [13,14] focuses on knowledge transfer from the source domain to the target domain, with the aim of decreasing the discrepancy of data distributions between the two domains; it is widely used in various fields, such as computer vision [15] and natural language processing [14]. The key to UDA is to reduce the domain distribution difference. Subspace learning is a main category of UDA; it performs subspace transformations to obtain better feature representations. In addition, most existing CPDP methods ignore the effective exploration of class label information from the source data [11].

    In this paper, to reduce the distribution gap across projects and make full use of the class label information of the source project, we propose a discriminative subspace learning (DSL) approach based on UDA for CPDP. DSL aims to align the source and target projects in a common space by feature mapping. In addition, DSL incorporates class-discriminative information into the learning process. We conduct comprehensive experiments on five projects to verify that DSL can build an effective prediction model and improve the performance over the related competing methods by at least 7.10% and 11.08% in terms of G-measure and AUC, respectively. Our contributions are as follows:

    1) We propose a discriminative subspace learning approach for CPDP. To reduce the distribution gap of the data between the source and target projects, DSL uses UDA-based subspace learning to learn a projection that maps the data from different projects into a common space.

    2) To fully use the label information and ensure that the model has better discriminative representation ability, DSL incorporates discriminative feature learning and accurate label prediction into the subspace learning procedure for CPDP.

    3) We conduct extensive experiments on 20 cross-project pairs. The results indicate the effectiveness and superiority of DSL compared with related baselines.

    The remainder of this paper is organized as follows. Related work is discussed in Section 2. The details of DSL are introduced in Section 3. The experimental settings are provided in Section 4. Section 5 gives the experimental results and analysis. Section 6 analyzes the threats to the validity of our study. The study is concluded in Section 7.

    2 Related Work

    In this section, we review related CPDP methods and subspace-learning-based unsupervised domain adaptation methods.

    2.1 Cross-Project Defect Prediction

    Over the past several years, CPDP has attracted many studies, and many new methods have been proposed. Briand et al. [16] first investigated whether a prediction model can be applied across systems. They developed a prediction model on one project and then used it on other projects. The poor experimental results showed that applying models across projects is not easy.

    Some methods focus on designing an effective machine learning model with improved learning or generalization ability. Xia et al. [8] proposed a hybrid model (HYDRA) combining a genetic algorithm and ensemble learning. In the genetic algorithm phase, HYDRA builds multiple classifiers and assigns weights to them to output an optimal composite model. In the ensemble learning phase, the weights of instances are updated iteratively to learn a composition of the previous classifiers. Wu et al. [17] employed dictionary learning in CPDP and proposed an improved dictionary learning method, CKSDL, to address the class imbalance problem and the semi-supervised setting. The authors utilize semi-supervised dictionary learning to improve the feature learning ability of the prediction model, and CKSDL combines cost-sensitive and kernel techniques to further improve it. Later, Sun et al. [18] introduced adversarial learning into CPDP and embedded a triplet sampling strategy into an adversarial learning framework.

    In addition, some methods focus on introducing transfer learning into CPDP. Ma et al. [19] proposed transfer naive Bayes for CPDP. They exploited the transfer information from different projects to construct weighted training instances and then built a prediction model on these instances. The experiments indicated that useful transfer knowledge is helpful. Nam et al. [12] employed transfer component analysis for CPDP, which learns transfer components in a kernel space to make the data distributions of different projects similar; experiments on two datasets indicated that TCA+ achieves good performance. Li et al. [20] proposed cost-sensitive transfer kernel canonical correlation analysis (CTKCCA) to mitigate the linear inseparability and class imbalance problems in heterogeneous CPDP. They introduced CCA into heterogeneous CPDP and improved it to obtain better performance. Liu et al. [21] proposed a two-phase method called TPTL based on transfer learning, which addresses the instability issue of traditional transfer learning. TPTL first selects source projects that are highly similar to the target project, and then uses TCA+ to build prediction models on the selected source projects.

    Hosseini et al. [22] evaluated 30 previous studies, including their data metrics, models, classifiers, and corresponding performance. The authors counted the common settings and concluded that CPDP is still a challenge with room for improvement. Zhou et al. [23] compared the performance of existing CPDP methods. In addition, they built two unsupervised models, ManualUp and ManualDown, which only use the size of a module to predict its defect proneness. They concluded that the module-size-based models achieve good performance for CPDP.

    Existing CPDP methods rarely consider the conditional distribution differences between projects and ignore the discriminative information from both the source project and the target project. Based on these considerations, we propose DSL to improve the performance of CPDP.

    2.2 Unsupervised Domain Adaptation

    Unsupervised domain adaptation has attracted much attention; it aims to transfer useful information from the source domain to the target domain effectively in an unsupervised scenario. Domain adaptation methods have been applied in various fields, such as computer vision [15] and natural language processing [14]. Subspace-learning-based unsupervised domain adaptation aims to learn a mapping that aligns the subspaces of two domains. Sun et al. [13] proposed a subspace distribution alignment method that performs subspace alignment to reduce distribution gaps. Long et al. [24] adapted both the marginal and conditional distributions between the source and target domains to reduce the distribution differences simultaneously. Zhang et al. [25] proposed guided subspace learning to reduce the discrepancy between different domains.

    In this paper, we introduce subspace-learning-based unsupervised domain adaptation into CPDP, which can effectively reduce the distribution gap between different projects. We also make full use of the class information from the source project and propose a discriminative subspace learning approach for CPDP.

    3 Methodology

    In this section, we present the details of DSL. Fig. 1 illustrates the overall framework of the proposed DSL approach for CPDP.

    3.1 Data Preprocessing

    To alleviate the class imbalance problem, we apply the SMOTE [26] technique to the source project; it is one of the most widely used data preprocessing methods in CPDP. Previous works [27,28] have demonstrated that dealing with the class imbalance problem is helpful for SDP.
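    As a concrete illustration, the following minimal sketch balances the source project with the imbalanced-learn package before training; the paper does not name its SMOTE implementation, so the library choice and function names here are assumptions:

```python
import numpy as np
from imblearn.over_sampling import SMOTE  # assumed implementation of SMOTE [26]

def preprocess_source(X_src, y_src, random_state=0):
    """Oversample the minority (defective) class of the source project.

    X_src: (n_s, d) metric matrix; y_src: binary labels (1 = defective).
    """
    smote = SMOTE(random_state=random_state)
    X_bal, y_bal = smote.fit_resample(X_src, y_src)
    return X_bal, y_bal
```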

    3.2 Notation

    Given the labeled source project S = {X_s, Y_s} = {(x_i^s, y_i^s)}_{i=1}^{n_s} and the unlabeled target project T = {X_t} = {x_i^t}_{i=1}^{n_t}, where X_s and X_t are the data sets and Y_s is the label set, n_s and n_t are the numbers of instances from the source and target projects, respectively. Here, x_i^s denotes the i-th instance in the source project, and x_i^t denotes the i-th instance in the target project. y_i^s is the corresponding label of x_i^s. a_{ij}^s denotes the value of the j-th metric of the i-th instance x_i^s in the source project; a_{ij}^t denotes the value of the j-th metric of the i-th instance x_i^t in the target project; and d is the metric dimension of the source or target project. In CPDP, the labels of the target project are unknown.

    3.3 Subspace Learning

    To minimize the distribution discrepancy between the projects, we seek an effective transformation P and construct a common subspace with the projection P. The mapped representations of the source and target projects under the projection matrix are z_s = P^T x_s and z_t = P^T x_t, respectively.

    To effectively reduce the distribution discrepancy between the different projects, we consider the distances between the conditional distributions across the projects. The maximum mean discrepancy (MMD) criterion [29] has been widely used to measure the difference between two distributions. We minimize the distance F_{mmd}(S, T) between the source and target projects as follows:

    F_{mmd}(S, T) = \sum_{c=1}^{C} \left\| \frac{1}{n_c^s} \sum_{x_i^s \in X_c^s} P^T x_i^s - \frac{1}{n_c^t} \sum_{x_j^t \in X_c^t} P^T x_j^t \right\|^2   (1)

    where p(y_s | x_s) and p(y_t | x_t) denote the conditional distributions of the source and target projects, and C denotes the number of label classes in CPDP. In this paper, the label of an instance is defective or non-defective, so C = 2. n_c^s and n_c^t are the numbers of instances of class c from the source and target projects, and X_c^s and X_c^t are the corresponding class-c instance sets. Specifically, the labels of the target instances are unknown in CPDP. A feasible method is to use target pseudo labels, which can be predicted by the source classifier. In this way, we can make full use of the intrinsic discriminative information of the target project during the training process. In addition, we employ multiple iterations to improve the accuracy of the predicted labels.

    Using matrix tricks, we rewrite Eq. (1) as follows:

    F_{mmd}(S, T) = \sum_{c=1}^{C} tr(P^T X M_c X^T P)   (2)

    where X = {X_s, X_t} combines the source project and target project data, and M_c denotes the conditional MMD matrix for class c, which is computed as

    (M_c)_{ij} = \begin{cases} \frac{1}{n_c^s n_c^s}, & x_i, x_j \in X_c^s \\ \frac{1}{n_c^t n_c^t}, & x_i, x_j \in X_c^t \\ \frac{-1}{n_c^s n_c^t}, & x_i \in X_c^s, x_j \in X_c^t \text{ or } x_i \in X_c^t, x_j \in X_c^s \\ 0, & \text{otherwise} \end{cases}   (3)
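    To make the construction of M_c in Eq. (3) concrete, the sketch below uses the standard outer-product trick from joint distribution adaptation [24]; the function name and array layout are our own assumptions:

```python
import numpy as np

def conditional_mmd_matrix(y_src, y_tgt_pseudo, c):
    """Build M_c over the stacked data X = [X_s, X_t] for class c.

    The outer product e @ e.T reproduces the entries of Eq. (3):
    1/(n_c^s)^2 for source-source pairs, 1/(n_c^t)^2 for target-target
    pairs, and -1/(n_c^s * n_c^t) for cross-domain pairs of class c.
    """
    ns, nt = len(y_src), len(y_tgt_pseudo)
    n = ns + nt
    src_idx = np.where(y_src == c)[0]              # class-c rows of X_s
    tgt_idx = ns + np.where(y_tgt_pseudo == c)[0]  # class-c rows of X_t
    if len(src_idx) == 0 or len(tgt_idx) == 0:
        return np.zeros((n, n))                    # class absent: term vanishes
    e = np.zeros((n, 1))
    e[src_idx] = 1.0 / len(src_idx)
    e[tgt_idx] = -1.0 / len(tgt_idx)
    return e @ e.T
```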

    3.4 Discriminative Feature Learning

    Minimizing Eq. (1) can align the source data and target data, but it cannot guarantee that the learned feature representations are sufficiently discriminative. To ensure that feature representations from the same class are closer than those from different classes in the common space, we design a triplet constraint that minimizes the distances between intra-class instances and maximizes the distances between inter-class instances. This constraint is based on the prior assumption of consistency, i.e., nearby instances are likely to have the same label.

    Specifically, we focus on the hardest cases, which are the most dissimilar instances within the same class and the most similar instances in different classes. For each instance z_i^s in the source project, we choose the farthest instance z_j^s of the same class as a positive match and the nearest instance z_k^s of a different class as a negative match. Finally, we construct the triplet (z_i^s, z_j^s, z_k^s) for each instance in the source project. Then, we design the following formulation:

    F_d^s = \sum_{i=1}^{n_s} \left( \|z_i^s - z_j^s\|^2 - \|z_i^s - z_k^s\|^2 \right)   (4)

    where \|z_i^s - z_j^s\|^2 and \|z_i^s - z_k^s\|^2 denote the distances between the instance pairs. Similarly, we define the triplets for the target instances in the same way, and the formulation can be represented as

    F_d^t = \sum_{i=1}^{n_t} \left( \|z_i^t - z_j^t\|^2 - \|z_i^t - z_k^t\|^2 \right)   (5)

    We obtain the whole formulation for the source and target projects as

    F_d = F_d^s + F_d^t   (6)
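    The hardest-case selection can be expressed compactly with pairwise distances. Below is a minimal sketch (our own naming, not the authors' code) that picks each instance's farthest same-class and nearest different-class match and evaluates the penalty of Eq. (6) within one project:

```python
import numpy as np
from scipy.spatial.distance import cdist

def hardest_triplets(Z, y):
    """Return, for each row of Z, the index of the farthest same-class
    instance (positive) and the nearest different-class instance (negative)."""
    D = cdist(Z, Z)                        # pairwise Euclidean distances
    same = y[:, None] == y[None, :]
    D_pos = np.where(same, D, -np.inf)     # only same-class candidates
    np.fill_diagonal(D_pos, -np.inf)       # exclude the instance itself
    D_neg = np.where(~same, D, np.inf)     # only different-class candidates
    return D_pos.argmax(axis=1), D_neg.argmin(axis=1)

def triplet_penalty(Z, y):
    """Sum of (intra-class distance - inter-class distance) over the hardest
    triplets; minimizing it pulls same-class instances together and pushes
    different classes apart."""
    pos, neg = hardest_triplets(Z, y)
    d_pos = np.linalg.norm(Z - Z[pos], axis=1) ** 2
    d_neg = np.linalg.norm(Z - Z[neg], axis=1) ** 2
    return np.sum(d_pos - d_neg)
```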

    We formulate the overall objective of DSL by incorporating Eqs. (2) and (6) as follows:

    \min_P \; \sum_{c=1}^{C} tr(P^T X M_c X^T P) + \alpha F_d + \beta \|P\|_F^2   (7)

    where \alpha is a balance factor and \beta is a regularization parameter.

    The optimization problem of DSL is represented as follows:

    \min_P \; tr\left(P^T X (M_{mmd} + \alpha M_D) X^T P\right) + \beta \|P\|_F^2, \quad \text{s.t.} \; P^T X H X^T P = I   (8)

    where M_{mmd} = \sum_{c=1}^{C} M_c, M_D is the matrix form of the discriminative term F_d in Eq. (6), I is an identity matrix, and H = I - (1/n)\mathbf{1} denotes a centering matrix similar to that in [24], with \mathbf{1} the n \times n matrix of all ones. The constraint ensures that P^T X preserves the inner attributes of the original data. The optimization problem of Eq. (8) can be solved as a generalized eigendecomposition problem. To solve Eq. (8), we denote the Lagrange multipliers as \Phi = diag(\phi_1, \ldots, \phi_k) and rewrite Eq. (8) as

    L = tr\left(P^T X (M_{mmd} + \alpha M_D) X^T P\right) + \beta \|P\|_F^2 + tr\left((I - P^T X H X^T P)\Phi\right)   (9)

    We set the derivative with respect to P to 0, and then we compute the eigenvectors of the generalized eigenvalue problem:

    \left(X (M_{mmd} + \alpha M_D) X^T + \beta I\right) P = X H X^T P \Phi   (10)

    By solving Eq. (10), we obtain the optimal transformation matrix P. We iteratively update P and the MMD matrices so that each iteration benefits the next in a positive feedback manner.
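    A minimal solver sketch for Eq. (10) with NumPy/SciPy follows; the small ridge added to the right-hand-side matrix is our own numerical-stability assumption, not something the paper specifies:

```python
import numpy as np
from scipy.linalg import eigh

def solve_projection(X, M, beta, k):
    """Solve (X M X^T + beta*I) p = phi * (X H X^T) p, i.e., Eq. (10), and keep
    the k eigenvectors with the smallest eigenvalues as the columns of P.

    X: (d, n) stacked source+target data (instances as columns);
    M: (n, n) combined matrix M_mmd + alpha * M_D.
    """
    d, n = X.shape
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    A = X @ M @ X.T + beta * np.eye(d)           # left-hand side of Eq. (10)
    B = X @ H @ X.T + 1e-6 * np.eye(d)           # ridge keeps B positive definite
    vals, vecs = eigh(A, B)                      # ascending generalized eigenvalues
    return vecs[:, :k]                           # projection matrix P, shape (d, k)
```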

    3.5 Label Prediction

    To further improve the prediction accuracy on the target data, we design a label prediction mechanism. Considering that predicting target labels directly with a source classifier may lead to overfitting, we utilize the label consistency between different projects and within the same project to obtain more accurate label predictions.

    In the common subspace, we expect the classifier to be more accurate for target instances that are close to source instances. Moreover, similar instances should have consistent labels. We therefore assign larger weights to target instances that are close to the source instances and smaller weights to target instances that are far from the source instances.

    The formula for label prediction is defined in terms of the following quantities: f_s(z_i^t) is the prediction label for z_i^t from the source classifier; f_t(z_i^t) is the expected label for z_i^t; \lambda_i is a weight factor for z_i^t; and w_{ij} is a binary matrix that measures the similarity between different instances from the target project. If the i-th and j-th instances are nearest neighbors, the value of w_{ij} is 1; otherwise, it is 0. In the formula for \lambda_i, the factor n_t / n_c^t is used to balance the effects of different classes, and l_i^c denotes the distance between the target instance with pseudo label c predicted by the source classifier and the instance with expected label c.

    Then, we obtain the solution of Eq. (13) and assign the label of z_i^t accordingly. We summarize the DSL approach in Algorithm 1.
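    Eqs. (11)-(13) themselves do not survive in this copy, so the following is only an illustrative stand-in for the neighbor-consistency idea rather than the paper's exact weighting: it smooths the source classifier's pseudo labels by a majority vote over each target instance's nearest neighbors in the common subspace:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def refine_target_labels(Z_t, pseudo, n_neighbors=5):
    """One smoothing pass: replace each target pseudo label with the majority
    label among its nearest neighbors, enforcing that similar instances in the
    common subspace share consistent labels (binary: 1 = defective)."""
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(Z_t)
    _, idx = nn.kneighbors(Z_t)          # idx[:, 0] is the instance itself
    neigh = pseudo[idx[:, 1:]]           # labels of the k nearest neighbors
    return (neigh.mean(axis=1) >= 0.5).astype(int)
```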

    Algorithm 1: The proposed DSL approach
    Input: Source data set X_s and corresponding label set Y_s; target data set X_t.
    Output: The prediction labels of X_t.
    1. Perform data preprocessing.
    2. Construct the initial MMD matrix by Eq. (3).
    Repeat:
    3. Obtain the mapping matrix P by solving Eq. (10).
    4. Obtain the feature representations Z_s and Z_t in the common subspace.
    5. Obtain the pseudo labels from the source classifier.
    6. Use label prediction to update the pseudo labels of the target data.
    7. Update M_mmd and M_D.
    Until the maximum number of iterations is reached.
    8. Obtain the predicted labels by using label prediction.

    4 Experiment

    4.1 Dataset

    We adopt five widely used projects from the AEEEM dataset [30] for CPDP. The AEEEM dataset was collected by D'Ambros et al. [30] and contains data from Java projects. The projects in AEEEM are Equinox (EQ), Eclipse JDT Core (JDT), Apache Lucene (LC), Mylyn (ML), and Eclipse PDE UI (PDE). Each project is described by 61 metrics, including entropy code metrics, source code metrics, etc. The data are publicly available online: http://bug.inf.usi.ch/. Tab. 1 shows the detailed information for each project.

    4.2 Evaluation Measures

    In the experiments, we employ two well-known measures, the G-measure and AUC, to evaluate the performance of the CPDP model; both have been widely used in previous SDP works [20,31]. The G-measure is the harmonic mean of recall (a.k.a. pd) and specificity, defined in Eq. (14) as

    G\text{-}measure = \frac{2 \times pd \times specificity}{pd + specificity}   (14)

    Recall is defined as TP/(TP + FN). Specificity is defined as TN/(TN + FP). TP, FN, TN, and FP mean True Positive, False Negative, True Negative, and False Positive, respectively; they are defined in Tab. 2.

    The AUC evaluates the performance of the classification model. The ROC curve is plotted in a two-dimensional plane of recall versus pf, where pf, the probability of false alarm, is defined as FP/(TN + FP). The AUC is the area under the ROC curve.

    The values of the G-measure and AUC range from 0 to 1, and a higher value means better prediction performance. AUC = 0.5 signifies a model that guesses randomly.
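    Both measures are straightforward to compute from predictions; a short sketch with scikit-learn (our library choice) follows:

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def g_measure(y_true, y_pred):
    """Harmonic mean of recall (pd) and specificity, per Eq. (14)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    pd_ = tp / (tp + fn)             # recall: TP / (TP + FN)
    spec = tn / (tn + fp)            # specificity: TN / (TN + FP)
    return 2 * pd_ * spec / (pd_ + spec)

# AUC is computed from continuous decision scores rather than hard labels:
# auc = roc_auc_score(y_true, decision_scores)
```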

    Table 1: Experimental data description

    Table 2: Four kinds of defect prediction results

    4.3 Research Question

    In this paper, we answer the following research question:

    RQ: Does DSL outperform previous related works?

    We compare DSL with defect prediction models including TCA+ [12], CKSDL [17], CTKCCA [20], and ManualDown [23]. TCA+, CKSDL, and CTKCCA are successful CPDP methods. ManualDown is an unsupervised method that was suggested in [23] as the baseline for comparison when developing a new CPDP method.

    4.4 Experimental Setup

    Similar to prior CPDP methods [17,20], we construct cross-project pairs to perform CPDP. For each target project, there are four prediction combinations. For example, when EQ is selected as the target project, JDT, LC, ML, and PDE are separately selected as the source project for training; the prediction combinations are JDT→EQ, LC→EQ, ML→EQ, and PDE→EQ. In total, we obtain 20 cross-project pairs for the five projects. In DSL, we empirically set α = 1 and β = 0.1 in Eq. (10), and we fix the maximum number of iterations to 10 in the experiments.
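    Enumerating the prediction combinations is mechanical; a small sketch of the 20 ordered source→target pairs:

```python
from itertools import permutations

projects = ["EQ", "JDT", "LC", "ML", "PDE"]
# Every ordered (source, target) pair with source != target: 5 * 4 = 20 runs.
for src, tgt in permutations(projects, 2):
    print(f"{src} -> {tgt}")   # e.g., JDT -> EQ, LC -> EQ, ML -> EQ, ...
```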

    5 Experimental Results and Analysis

    5.1 RQ:Does DSL Outperform Previous Related Works?

    Tabs. 3 and 4 show the G-measure and AUC results of DSL and the baselines for each cross-project pair, with the best result of each combination in boldface. Fig. 2 shows the boxplots of the G-measure and AUC for DSL and the baselines. From Tabs. 3 and 4 and Fig. 2, it is evident that DSL performs best in most cases and obtains the best average results in terms of both G-measure and AUC. In terms of overall performance, DSL improves the average G-measure by 7.10%-19.96% and the average AUC by 11.08%-15.21%.

    Table 3: Comparison results in terms of G-measure for DSL and baselines. The best values are in boldface

    DSL may obtain good performance for the following reasons. Compared with TCA+ and CTKCCA, which reduce the distribution gap based on transfer learning, DSL embeds discriminative representation into the feature learning process and obtains a discriminative common feature space that reduces the distribution gap between the data of different projects. Compared with ManualDown, DSL utilizes the class label information from the source project and the discriminative information from the target project, which provides more useful information for model training. Compared with CKSDL, DSL can align the data distributions between different projects and has a strong discriminative ability that it transfers from the source project to the target project. Thus, in most cases, DSL outperforms the baselines.

    Table 4: Comparison results in terms of AUC for DSL and baselines. The best values are in boldface

    5.2 Statistical Test

    We perform the Friedman test [32] with the Nemenyi test on the results of the two measures to analyze the statistical differences between DSL and the baselines; these tests have been widely used in SDP [33,34]. For each evaluation measure, we compute the ranking of the compared methods on each cross-project pair. We apply the Friedman test to determine whether the methods are significantly different, and then apply the Nemenyi test to compare each pair of methods. For a method pair, we compare their difference in average rank against the critical difference (CD) value; if the difference in rank exceeds the CD, we consider the methods significantly different. The CD is defined as

    CD = q_\alpha \sqrt{\frac{L(L+1)}{6N}}

    where q_\alpha denotes the critical value at significance level \alpha, N is the number of cross-project pairs, and L denotes the number of methods.
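    For reference, the CD computation is a one-liner; with L = 5 methods and N = 20 pairs, and taking the Nemenyi critical value q_0.05 ≈ 2.728 for five methods from standard tables:

```python
import math

def nemenyi_cd(q_alpha, L, N):
    """Critical difference: CD = q_alpha * sqrt(L * (L + 1) / (6 * N))."""
    return q_alpha * math.sqrt(L * (L + 1) / (6 * N))

print(nemenyi_cd(2.728, L=5, N=20))  # sqrt(30 / 120) = 0.5, so CD = 1.364
```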

    The Friedman test results on the G-measure and AUC are visualized in Figs. 3 and 4, respectively. A lower rank means better performance, and methods with significant differences are divided into different groups. As shown in Figs. 3 and 4, DSL performs better than the baselines, with lower ranks. For both the G-measure and AUC, DSL alone occupies the top-ranked group, which means that it achieves the best performance and differs significantly from the baselines.

    Figure 3: The results of the statistical test in terms of G-measure for DSL and the other methods

    Figure 4: The results of the statistical test in terms of AUC for DSL and the other methods

    5.3 Ablation Study

    To investigate the impact of the different parts of DSL in depth, we conduct an ablation study. We evaluate DSL and two variants of DSL:

    Subspace learning (DSL1): using only subspace learning based on the conditional distribution for CPDP.

    Subspace learning and discriminative feature learning (DSL2): using subspace learning and discriminative feature learning for CPDP.

    The experimental settings are the same as those in Section 4.4 over the 20 cross-project combinations, and the results are reported in Figs. 5 and 6. From the figures, each component improves the results of DSL on the different projects. Clearly, in the mapped feature space, the distribution difference is reduced. Discriminative learning helps explore the discrimination between different classes, and the pseudo labels provided by label prediction facilitate subspace learning and discriminative feature learning. Thus, discriminative feature learning and label prediction boost each other during the iteration process and play important roles in exploring discriminative information for CPDP.

    5.4 Effects of Different Parameter Values

    We conduct parameter sensitivity studies to verify the impact of the parameters α and β on DSL. The values of α and β range from 0.001 to 10. We report the experimental results in terms of the G-measure and AUC in Fig. 7.

    From Fig. 7, we observe that DSL achieves good performance over a wide range of values. When α ∈ [0.5, 5] and β ∈ [0.05, 0.5], DSL performs stably and outperforms the other settings. The results are not very sensitive to the parameters, which demonstrates the robustness of DSL. In the experiments, we set α = 1 and β = 0.1.

    Figure 7: The mean G-measure and AUC values with various values of α and β. (a) Parameter α. (b) Parameter β

    6 Threats to Validity

    Internal validity mainly relates to the implementation of the comparative baselines. For the baselines whose source code was provided by the original authors, we use the original implementation to avoid inconsistencies from re-implementation. For the baselines without public code, we follow their papers and implement the methods carefully.

    Construct validity in this work refers to the suitability of the evaluation measures. We employ two widely used measures, and the selected measures are comprehensive indicators for CPDP.

    External validity refers to the degree to which the results in this paper can be generalized to other tasks. We performed comprehensive experiments on five projects; however, we still cannot claim that our approach would be suitable for all other software projects.

    7 Conclusion

    In this paper, we propose a new model for CPDP based on unsupervised domain adaptation, called discriminative subspace learning (DSL). DSL first handles the problem of the large distribution gap in the feature mapping process. Furthermore, discriminative feature learning is embedded in the feature mapping to make good use of the discriminative information from the source and target projects. Extensive experiments on five projects are conducted, and the performance of DSL is evaluated with two widely used indicators, the G-measure and AUC. The comprehensive experimental results show the superiority of DSL in CPDP. In future work, we will focus on collecting more projects to evaluate DSL and on predicting the number of defects in each instance by utilizing defect count information.

    Funding Statement: This paper was supported by the National Natural Science Foundation of China (61772286, 61802208, and 61876089), the China Postdoctoral Science Foundation (Grant 2019M651923), and the Natural Science Foundation of Jiangsu Province of China (BK0191381).

    Conflicts of Interest: No conflicts of interest exist in the submission of this manuscript, and the manuscript is approved by all authors for publication.
