
    Robust Re-Weighted Multi-View Feature Selection

2019-08-13 05:55:08
    Computers, Materials & Continua, August 2019 issue

    Yiming Xue, Nan Wang, Yan Niu, Ping Zhong, Shaozhang Niu and Yuntao Song

Abstract: In practical applications, many objects are described by multi-view features, because multiple views can provide a more informative representation than a single view. When dealing with multi-view data, high dimensionality is often an obstacle, as it brings expensive time consumption and an increased chance of over-fitting. How to identify the relevant views and features is therefore an important issue. Matrix-based multi-view feature selection, which integrates multiple views to select a relevant feature subset, has attracted wide attention in recent years. Existing supervised multi-view feature selection methods usually concatenate all views into long vectors to design the models. However, this concatenation has no physical meaning and implies that different views play similar roles for a specific task. In this paper, we propose a robust re-weighted multi-view feature selection method that constructs the penalty term on the low-dimensional subspace of each view through the least-absolute criterion. The proposed model can fully consider the complementary property of multiple views and the specificity of each view. It not only induces robustness to mitigate the impact of outliers, but also learns the corresponding weights adaptively for different views without any preset parameter. In the process of optimization, the proposed model can be split into several small-scale sub-problems. An iterative algorithm based on iteratively re-weighted least squares is proposed to efficiently solve these sub-problems. Furthermore, the convergence of the iterative algorithm is theoretically analyzed. Extensive comparison experiments with several state-of-the-art feature selection methods verify the effectiveness of the proposed method.

Keywords: Supervised feature selection, multi-view, robustness, re-weighted.

    1 Introduction

In many applications, we need to deal with large amounts of data of high dimensionality. Handling high-dimensional data (such as image and video data) brings many challenges, including added computational complexity and an increased chance of over-fitting. How to effectively reduce the dimensionality has therefore become an important issue. As an effective way of selecting representative features, feature selection has attracted much attention. Feature selection methods [Fang, Cai, Sun et al. (2018)] can be grouped into filter methods, wrapper methods, and embedded methods. Filter methods select features according to the general characteristics of the data, without taking the learning model into consideration. Wrapper methods select features by taking the performance of some model as the criterion. Embedded methods incorporate feature selection and the classification process into a single optimization problem, which can achieve reasonable computational cost and good classification performance. Thus, embedded methods are in the dominant position in machine learning, and the least absolute shrinkage and selection operator (Lasso) [Tibshirani (2011)] is one of the most important representatives.
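Lasso performs embedded selection by shrinking some coefficients exactly to zero. The shrinkage step at the heart of most Lasso solvers (ISTA, coordinate descent) is the soft-thresholding operator; a minimal numpy illustration (not from the paper):

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1-norm, the core step of Lasso solvers:
    shrinks each coefficient toward zero and sets small ones exactly to zero,
    which is what makes Lasso act as an embedded feature selector."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

# Coefficients smaller than the threshold vanish; larger ones shrink.
print(soft_threshold(np.array([3.0, -0.5, 1.2]), 1.0))
```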

Recently, unlike the previous vector-based feature selection methods (such as Lasso) that are only used for binary classification, many matrix-based structured sparsity-inducing feature selection (SSFS) methods have been proposed to solve multi-class classification [Gui, Sun, Ji et al. (2016)]. Obozinski et al. [Obozinski, Taskar and Jordan (2006)] first introduced the l2,1-norm regularization term, an extension of the l1-norm in Lasso, for multi-task feature selection. The l2,1-norm regularizer can obtain a jointly sparse matrix, because minimizing the l2,1-norm makes the rows of the transformation matrix corresponding to the nonessential features become zero or close to zero. Thanks to its good performance, many SSFS methods based on the l2,1-norm regularizer have been proposed [Yang, Ma, Hauptman et al. (2013); Chen, Zhou and Ye (2011); Wang, Nie, Huang et al. (2011); Jebara (2011)]. In addition, Nie et al. [Nie, Huang, Cai et al. (2011)] utilized the l2,1-norm penalty to construct a robust SSFS method called RFS to deal with bioinformatics tasks. Unlike the frequently-used least-squares penalty, the residual in RFS is not squared, and thus outliers have less influence. With the aid of the l2,1-norm penalty, several robust SSFS methods have been proposed [Zhu, Zuo, Zhang et al. (2015); Du, Ma, Li et al. (2017)].
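The row-sparsity effect of the l2,1-norm described above follows directly from its definition as the sum of the l2-norms of the rows; a small numpy sketch (our own illustration):

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the l2-norms of the rows of W."""
    return np.sum(np.linalg.norm(W, axis=1))

# Minimizing the l2,1-norm drives entire rows of the transformation
# matrix to zero, discarding the corresponding features jointly
# across all classes. A matrix with one zero row:
W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 5.0]])
print(l21_norm(W))  # row norms 5 + 0 + 5 = 10
```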

As is known, the description of an object from multiple views is more informative than that from a single view, and a large amount of multi-view data has been collected. In order to describe this kind of data in a better way, many features are extracted by different feature extractors. How to integrate these views to select a more relevant feature subset is important for the subsequent classification model. Xiao et al. [Xiao, Sun, He et al. (2013)] first proposed a two-view feature selection method. However, many objects are described by more than two views. Wang et al. [Wang, Nie, Huang et al. (2012); Wang, Nie and Huang (2013); Wang, Nie, Huang et al. (2013)] proposed improved multi-view feature selection methods to handle the general case. In Wang et al. [Wang, Nie and Huang (2012)], they established a multi-view feature selection framework by adopting a G1-norm regularizer and an l2,1-norm regularizer to make both views and features sparse. In Wang et al. [Wang, Nie and Huang (2013); Wang, Nie, Huang et al. (2013)], SSMVFS and SMML were proposed using the same framework to induce structured sparsity. Specifically, SSMVFS employs the discriminative K-means loss for clustering, and SMML employs the hinge loss for classification. Zhang et al. [Zhang, Tian, Yang et al. (2014)] proposed a multi-view feature selection method based on a G2,1-norm regularizer by incorporating the view-wise structure information. Gui et al. [Gui, Rao, Sun et al. (2014)] proposed a joint feature extraction and feature selection method considering both the complementary property and the consistency of different views.

Multi-view feature selection methods have achieved good performance. Concatenating multiple views into new vectors is a common way of establishing multi-view feature selection schemes. However, the concatenated vectors have no physical meaning, and the concatenation implies that different views have similar effects on a specific class. In addition, the concatenated vectors are always high dimensional, which increases the chance of over-fitting. Noticing these limitations, some multi-view clustering methods [Xu, Wang and Lai (2016); Xu, Han, Nie et al. (2017)] have been proposed to learn the corresponding distributions of different views.

Inspired by the above work, we propose a robust re-weighted multi-view feature selection method (RRMVFS) without concatenation. For each view, we make the predicted values close to the real labels, and construct the penalty term by the least-absolute criterion, which not only induces robustness but also learns the corresponding view weights adaptively without any preset parameter. Based on the proposed penalty, the scheme is established by adding G1-norm and l2,1-norm regularization terms for structured sparsity. In the optimization procedure, the proposed model can be decomposed into several small-scale subproblems, and an iterative algorithm based on iteratively re-weighted least squares (IRLS) [Daubechies, Devore, Fornasier et al. (2008)] is proposed to solve the new model. Furthermore, a theoretical analysis of convergence is also presented.

    In a nutshell,the proposed multi-view feature selection method has the following advantages:

·It can fully consider the complementary property of multiple views as well as the specificity of each view, since it assigns all views of each sample to the same class while imposing the penalty separately on each view.

    ·It can effectively reduce the influence of outliers, because the least-absolute residuals of each view are combined as the penalty.

    ·It can learn the view weights adaptively in a re-weighted way, where the weights are updated according to the current weights and bias matrix without any preset parameter.

    ·It can be solved efficiently for two reasons: the objective function can be decomposed into several small-scale optimization subproblems, and IRLS can solve the least-absolute residual problem within a finite number of iterations.

    ·Extensive comparison experiments with several state-of-the-art feature selection methods show the effectiveness of the proposed method.

The paper is organized as follows. In Section 2, we present our feature selection model and algorithm in detail, and the convergence of the proposed algorithm is analysed. After presenting the extensive experiments in Section 3, we draw the conclusions in Section 4.

    Now, we give the notation used in this paper. Given N samples with V views belonging to P classes, the data matrix of the vth view is denoted as X^(v) = [x_1^(v), ..., x_N^(v)] ∈ R^(d_v×N), where x_n^(v) is the vth view of the nth sample and d_v is the feature number of the vth view, v = 1, ..., V. The data matrix of all views can be represented by X = [X^(1)T, ..., X^(V)T]^T ∈ R^(d×N), where d = Σ_v d_v.

    2 Robust re-weighted multi-view feature selection method

    2.1 Model formulation

In order to select the relevant views and feature subset from the original ones, we first use the label information of the multi-view data to build the penalty term through the loss minimization principle. We calculate the penalty term by the least-absolute criterion, which can induce robustness. Instead of concatenating all views into long vectors, the penalty term is established as the sum of the residuals calculated on the latent subspace of each view:

    Σ_{v=1}^{V} ‖(W^(v))^T X^(v) + b^(v) 1_N^T − Y‖_F,  (1)

    where W^(v) ∈ R^(d_v×P) is the transformation matrix of the vth view, b^(v) ∈ R^P is a bias vector of the vth view, and 1_N ∈ R^N is the vector of all ones. The residuals are not squared, and thus outliers have less effect than with squared residuals. Since different views have different effects for a specific class, we use the G1-norm regularizer to enforce sparsity between views. Meanwhile, we use the l2,1-norm regularizer, which imposes row-wise sparsity on the transformation matrix, to select the representative features. The formulation of the proposed multi-view feature selection method can be described as follows:

    min_{W,B} Σ_{v=1}^{V} ‖(W^(v))^T X^(v) + b^(v) 1_N^T − Y‖_F + γ1 Σ_{v=1}^{V} Σ_{p=1}^{P} ‖w_p^(v)‖_2 + γ2 ‖W‖_{2,1}.  (2)

    Formulation (2) assigns all views of each sample to the same class, and imposes the penalty separately on each view. It simultaneously considers the complementary property of different views and the specificity of each view.
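To make the model concrete, the objective — per-view least-absolute (unsquared Frobenius) residuals plus the G1-norm and l2,1-norm regularizers — might be evaluated as in this numpy sketch; the shapes and the helper name are our own assumptions, not from the paper:

```python
import numpy as np

def objective(Ws, bs, Xs, Y, gamma1, gamma2):
    """Evaluate the RRMVFS-style objective (a sketch).
    Assumed shapes: X[v] is (d_v, N), W[v] is (d_v, P), b[v] is (P,), Y is (P, N)."""
    N = Y.shape[1]
    # Least-absolute penalty: the per-view residual norms are NOT squared.
    penalty = sum(np.linalg.norm(W.T @ X + np.outer(b, np.ones(N)) - Y)
                  for W, b, X in zip(Ws, bs, Xs))
    # G1-norm: sum of the l2-norms of the per-view, per-class columns w_p^(v).
    g1 = sum(np.linalg.norm(W, axis=0).sum() for W in Ws)
    # l2,1-norm of the stacked W: sum of row norms over all views.
    l21 = sum(np.linalg.norm(W, axis=1).sum() for W in Ws)
    return penalty + gamma1 * g1 + gamma2 * l21
```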

    2.2 Optimization algorithm

Next, we give an iterative optimization algorithm to solve problem (2). Firstly, we transform the objective function of (2) as:

    J(W, B) = Σ_{v=1}^{V} J_v(W^(v), b^(v)), with J_v(W^(v), b^(v)) = ‖(W^(v))^T X^(v) + b^(v) 1_N^T − Y‖_F + γ1 Σ_{p=1}^{P} ‖w_p^(v)‖_2 + γ2 ‖W^(v)‖_{2,1}.  (3)

    Since J_v(W^(v), b^(v)) is only related to the vth view and is nonnegative, problem (2) can be decomposed into V subproblems:

    min_{W^(v), b^(v)} J_v(W^(v), b^(v)), v = 1, ..., V.  (4)

Note that problem (4) cannot be easily solved by the sophisticated optimization algorithms, since its objective function is nonsmooth. We utilize the IRLS method [Daubechies, Devore, Fornasier et al. (2008)] to solve it, and change it as follows:

1. Fix α_v, update W^(v), b^(v). For problem (5), we can solve the following problem iteratively with fixed α_v:

Taking the derivatives of (10) with respect to w_p^(v) and b_p^(v) and setting them to zero, we can obtain

So from (11), we have

where Y_p = (y_p1, ..., y_pN) ∈ R^N. Substituting (13) into (12), we have

2. Fix W^(v), b^(v), update α_v. The non-negative weight α_v for each view can be updated as α_v = 1 / (2‖(W^(v))^T X^(v) + b^(v) 1_N^T − Y‖_F).
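The term X^(v) H X^(v)T in the closed-form update involves the matrix H, which in the standard derivation of such least-squares steps is the centering matrix H = I_N − (1/N) 1_N 1_N^T; it removes row means, which is how the bias is eliminated from the normal equations. A quick numerical check (our own illustration):

```python
import numpy as np

# The centering matrix H = I_N - (1/N) 1_N 1_N^T subtracts each row's mean,
# so every row of X @ H sums to zero.
N = 4
H = np.eye(N) - np.ones((N, N)) / N
X = np.random.default_rng(0).normal(size=(3, N))
print(np.abs((X @ H).sum(axis=1)).max())  # all row sums are ~0
```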

The proposed multi-view feature selection algorithm (RRMVFS) is summarized in Algorithm 1.

    2.3 Convergence analysis

Theorem 1. The values of the objective function of (2) monotonically decrease in each iteration of Algorithm 1, and the algorithm converges.

Proof: From Eq. (3), in order to prove J(W^{t+1}, B^{t+1}) ≤ J(W^t, B^t), we only need to prove that J_v(W^(v), b^(v)) monotonically decreases in each iteration.

Algorithm 1: Robust Re-weighted Multi-view Feature Selection (RRMVFS)
    Input: data matrix of each view X^(v), v = 1, 2, ..., V; label matrix Y = [y_1, y_2, ..., y_N].
    Output: transformation matrices W^(v), v = 1, 2, ..., V.
    Initialization:
    1. Set t = 0, threshold ε = 10^-5, and the largest iteration number T = 20. Set all elements of (W^(v))^0 ∈ R^(d_v×P) (v = 1, ..., V) to 1, and compute (b_p^(v))^0 = (Y_p 1_N − ((w_p^(v))^0)^T X^(v) 1_N)/N (1 ≤ p ≤ P, 1 ≤ v ≤ V), as well as (α_v)^0 = 1 / (2‖((W^(v))^0)^T X^(v) + (b^(v))^0 1_N^T − Y‖_F) (1 ≤ v ≤ V).
    While not converged do
    2. Compute the diagonal matrices (D̄^(v))^t (1 ≤ v ≤ V) with the ith diagonal element 1 / (2‖(W_i:^(v))^t‖_2); compute the diagonal matrices (D̃_p^(v))^t = I_{d_v} / (2‖(w_p^(v))^t‖_2) (1 ≤ p ≤ P, 1 ≤ v ≤ V), with I_{d_v} being an identity matrix.
    3. For each w_p^(v) and b_p^(v) (1 ≤ p ≤ P, 1 ≤ v ≤ V), compute (w_p^(v))^{t+1} = ((α_v)^t X^(v) H X^(v)T + γ1 (D̃_p^(v))^t + γ2 (D̄^(v))^t)^{-1} (α_v)^t X^(v) H Y_p^T, and (b_p^(v))^{t+1} = (Y_p 1_N − ((w_p^(v))^{t+1})^T X^(v) 1_N)/N.
    4. Calculate (α_v)^{t+1} = 1 / (2‖((W^(v))^{t+1})^T X^(v) + (b^(v))^{t+1} 1_N^T − Y‖_F) (1 ≤ v ≤ V).
    5. Check the convergence condition J(W^t, B^t) − J(W^{t+1}, B^{t+1}) < ε (J(·,·) is the objective function of (2)) or t > T.
    6. t = t + 1.
    End While
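The iteration above can be sketched in numpy as follows. This is a minimal reading of Algorithm 1, not the authors' implementation; the small epsilons guarding the divisions are our own safeguard, and H is taken as the centering matrix:

```python
import numpy as np

def rrmvfs(Xs, Y, gamma1=1.0, gamma2=1.0, T=20, tol=1e-5, eps=1e-12):
    """Sketch of Algorithm 1 (RRMVFS). Xs: list of (d_v, N) view matrices;
    Y: (P, N) label matrix. Returns the per-view transformation matrices."""
    P, N = Y.shape
    H = np.eye(N) - np.ones((N, N)) / N                     # centering matrix
    Ws = [np.ones((X.shape[0], P)) for X in Xs]             # all-ones init
    bs = [(Y @ np.ones(N) - W.T @ X @ np.ones(N)) / N for W, X in zip(Ws, Xs)]
    alphas = [1.0 / (2 * np.linalg.norm(W.T @ X + np.outer(b, np.ones(N)) - Y) + eps)
              for W, b, X in zip(Ws, bs, Xs)]
    prev = np.inf
    for t in range(T):
        for v, X in enumerate(Xs):
            W, a = Ws[v], alphas[v]
            # Re-weighting matrix for the l2,1 term (row norms of W^(v)).
            Dbar = np.diag(1.0 / (2 * np.linalg.norm(W, axis=1) + eps))
            A = a * X @ H @ X.T
            for p in range(P):
                # Re-weighting matrix for the G1 term (norm of column w_p^(v)).
                Dtil = np.eye(X.shape[0]) / (2 * np.linalg.norm(W[:, p]) + eps)
                W[:, p] = np.linalg.solve(A + gamma1 * Dtil + gamma2 * Dbar,
                                          a * X @ H @ Y[p])
            bs[v] = (Y @ np.ones(N) - W.T @ X @ np.ones(N)) / N
            alphas[v] = 1.0 / (2 * np.linalg.norm(
                W.T @ X + np.outer(bs[v], np.ones(N)) - Y) + eps)
        # Objective of (2): least-absolute penalty + G1-norm + l2,1-norm.
        obj = sum(np.linalg.norm(W.T @ X + np.outer(b, np.ones(N)) - Y)
                  for W, b, X in zip(Ws, bs, Xs)) \
            + gamma1 * sum(np.linalg.norm(W, axis=0).sum() for W in Ws) \
            + gamma2 * sum(np.linalg.norm(W, axis=1).sum() for W in Ws)
        if prev - obj < tol:
            break
        prev = obj
    return Ws
```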

By Step 3 in Algorithm 1, we have

From (17)-(20), we obtain

So, we get J(W^{t+1}, B^{t+1}) ≤ J(W^t, B^t), that is, Algorithm 1 decreases the objective values in each iteration. Since J(W, B) ≥ 0, Algorithm 1 converges.

    3 Experiments

In this section, we evaluate the effectiveness of the proposed method by comparing it with several related feature selection methods.

    3.1 Experimental setup

We compare the performance of our method RRMVFS with several related methods: Single, CAT, RFS [Nie, Huang, Cai et al. (2011)], SSMVFS [Wang, Nie and Huang (2013)], SMML [Wang, Nie, Huang et al. (2013)], DSML-FS [Gui, Rao, Sun et al. (2014)], and WMCFS [Xu, Wang and Lai (2016)]. Single refers to using single-view features to find the best performance for classification. CAT refers to using the concatenated vectors for classification without feature selection. RFS is an efficient robust feature selection method, but it is not designed for multi-view feature selection. The other feature selection methods are designed for multi-view feature selection. The parameters γ1 and γ2 in the feature selection methods are tuned from the set {10^i | i = -5, -4, ..., 5}. The exponential parameter ρ in the WMCFS method is set to 5 according to Xu et al. [Xu, Wang and Lai (2016)].

The publicly available data sets, including the image data set NUS-WIDE-OBJECT (NUS), the handwritten numerals data set mfeat, and the Internet pages data set Ads, are employed in the experiments. For the NUS data set, we choose all 12 animal classes, comprising 8182 images. For Ads, there exist some incomplete data. We first discard the incomplete data and then randomly choose non-AD samples so that the number of non-AD samples equals the number of AD samples; the total number of samples employed in Ads is 918. For the mfeat data set, all of the data are employed. In each data set, the samples are randomly divided into two equal parts: one part is used for training and the other for testing. In the training set, we randomly choose 6 (9, or 12) samples from each class to learn the transformation matrices of the compared methods. In the test set, we employ 20% of the data for validation, and the parameters that achieve the best performance on the validation set are employed for testing. We arrange the features in descending order based on the values of ‖W_i:‖_2, i = 1, 2, ..., d, and select a certain number of top-ranked features. The numbers of selected features are {10%, ..., 90%} of the total number of features, respectively. The selected features are then taken as the inputs of a subsequent 1-nearest-neighbour (1NN) classifier. We conduct two kinds of experiments. First, we conduct experiments on all views to evaluate these methods. Then, we conduct experiments on subsets formed by two views and four views to evaluate the views. For all experiments, ten independent and random choices of the training samples are employed, and the averaged accuracies (AC) and F1 scores (macroF1) are reported.
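The ranking rule described above — order the features by ‖W_i:‖_2 and keep a top percentage — can be sketched as follows (the function name and fraction parameter are our own, for illustration):

```python
import numpy as np

def select_features(Ws, keep_frac=0.5):
    """Rank features by the row norms ||W_i:||_2 of the stacked transformation
    matrix and keep the top fraction, as in the experimental setup."""
    W = np.vstack(Ws)                    # stack per-view matrices into (d, P)
    scores = np.linalg.norm(W, axis=1)   # one relevance score per feature
    k = max(1, int(round(keep_frac * len(scores))))
    return np.argsort(scores)[::-1][:k]  # indices of the top-ranked features

# Rows with larger norms correspond to more relevant features.
Ws = [np.array([[1.0, 0.0], [0.0, 0.2]]), np.array([[2.0, 2.0]])]
print(select_features(Ws, keep_frac=2/3))
```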

    3.2 Evaluation of feature selection

Figs. 1, 2, and 3 show the performance of the compared methods w.r.t. different percentages of selected features on the NUS, mfeat, and Ads data sets, respectively. From these figures, we can see that the performance of these methods under AC is consistent with that under the macroF1 scores. CAT performs better than Single in all cases, which means that it is essential to combine different views when selecting features. The feature selection methods, including ours, achieve comparable or even better performance than CAT. Specifically, from Fig. 1, we can see that three feature selection methods, RFS, SSMVFS, and DSML-FS, show comparable performance on the NUS data set. SSMVFS and DSML-FS are feature selection methods designed for multi-view learning, while RFS is a robust feature selection method that is not specially designed for multi-view learning. This means that it is necessary to build a robust method, since data may be corrupted with noise. As the number of selected features increases, the ACs and macroF1 scores of the multi-view feature selection methods WMCFS, SMML, and the proposed RRMVFS are greatly improved. Moreover, RRMVFS achieves the best results in most cases. This might be attributed to the robust penalty, which may help RRMVFS select more representative features.

Figure 1: ACs and macroF1 scores of the compared methods vs. the percentage of selected features on the NUS data set

    Figure 2: ACs and macroF1 scores of the compared methods vs. the percentage of selected features on the mfeat data set

    Figure 3: ACs and macroF1 scores of the compared methods vs. the percentage of selected features on the Ads data set

From Fig. 2, we can see that RFS, SSMVFS, and DSML-FS show comparable performance on the mfeat data set. As the number of selected features increases, the ACs and macroF1 scores of SMML and WMCFS increase except at a few percentages, especially for WMCFS. The proposed RRMVFS can achieve the best performance even when only a small number of features (10% of the features) are selected.

From Fig. 3, we can see that four feature selection methods, RFS, SSMVFS, SMML, and DSML-FS, obtain comparable performance on the Ads data set. They achieve their best performance at 10%, and when the percentage of selected features changes from 20% to 90%, their ACs and macroF1 scores are substantially unchanged and comparable to those of CAT. As on the other two data sets, the proposed RRMVFS still achieves the best performance on the Ads data set.

    3.3 Evaluation of views

In order to evaluate the effect of the views, we test the ACs and macroF1 scores of the compared methods in terms of views on the NUS, mfeat, and Ads data sets, respectively. The experiments are conducted on subsets that contain 12 samples of each class. For each data set, we conduct two kinds of experiments. Firstly, we randomly choose two views for experiments. Secondly, we randomly choose two more views from the remaining views and combine them with the previous two views to form the subsets for experiments.

Table 1: ACs and macroF1 scores with standard deviations of the compared methods in terms of views on the NUS data set

The experimental results on these three data sets are shown in Tabs. 1-3, respectively. It can be seen that, generally speaking, the performance of all compared methods improves as the number of views increases. On the subsets consisting of two views, our method RRMVFS does not show the best performance, but on the subsets formed by four views and on the whole sets, our method is significantly superior to the others. This might be attributed to the learning of view weights, which may help RRMVFS select more relevant views.

Table 2: ACs and macroF1 scores with standard deviations of the compared methods in terms of views on the mfeat data set

    Table 3: ACs and macroF1 scores with standard deviations of the compared methods in terms of views on the Ads data set

    4 Conclusion

In this paper, we have proposed a robust re-weighted multi-view feature selection method that assigns all views of each sample to the same class while imposing the penalty on the latent subspace of each view through the least-absolute criterion. The method takes both the complementary property of different views and the specificity of each view into consideration, and induces robustness. The proposed model can be efficiently solved by decomposing it into several small-scale optimization subproblems, and the convergence of the proposed iterative algorithm is proved. Comparison experiments with several state-of-the-art feature selection methods verify the effectiveness of the proposed method. Many real-world applications, such as text categorization, are multi-label problems; our future work is to extend the proposed method to multi-label multi-view feature selection.

Acknowledgement: The work was supported by the National Natural Science Foundation of China (Grant Nos. 61872368, U1536121).
