
    Enrichment Procedures for Soft Clusters: A Statistical Test and its Applications


    R. D. Phillips, M. S. Hossain, L. T. Watson, R. H. Wynne, and Naren Ramakrishnan

    1 Introduction

    Clustering is an unsupervised process that models locality of data samples in attribute space to identify groupings: samples within a group are closer to each other than to samples from other groups. To assess whether the discovered clusters are meaningful, a typical procedure is to see if the groupings capture other categorical information not originally used during clustering. For instance, in microarray bioinformatics, data samples correspond to genes and their expression vectors, clusters capture locality in expression space, and they are evaluated to see if genes within a cluster share common biological function/annotations. (Observe that the functional annotations are not used during clustering.) In text mining, data samples correspond to documents and their text vectors, clusters capture locality in term space, and are evaluated for their correspondence with a priori domain information such as topics. In remote sensing, data samples correspond to pixels in an image, clusters capture locality of pixel intensities, and are evaluated for their correspondence with land cover classifications.

    All of the above applications are essentially determining whether locality in one space preserves correspondence with information in another space, also referred to as the cluster assumption [Chapelle, Schölkopf, and Zien (2006)]. While cluster evaluation is typically conducted as a distinct post-processing stage after mining, recently developed clustering formulations blur this boundary. For instance, in Wagstaff, Cardie, Rogers, and Schrödl (2001), locality information is used along with background knowledge to influence the clustering. Such background knowledge takes the form of constraints, some of which dictate that certain samples should appear in the same cluster, while others specify that two samples should be in different clusters. Similarly, in Tishby, Pereira, and Bialek (1999), clusters are designed using an objective function that balances compression of the primary random variable against preservation of mutual information with an auxiliary variable. With the advent of semisupervised clustering [Chapelle, Schölkopf, and Zien (2006)], more ways to integrate labeled and unlabeled information are rapidly being proposed.

    The design of both classical and newer clustering algorithms is predicated on the ability to evaluate clusters for enrichment and to use this information to drive the refinement and subsequent discovery of clusters. However, classical statistical enrichment procedures (e.g., using the hypergeometric distribution [Ewens and Grant (2001)]) assume a hard clustering formulation. The focus here is on soft clusters, where the groupings are defined by portions of individual samples. This paper presents a new statistical test to enrich soft clusters and demonstrates its application to several datasets.

    2 Clustering

    Clustering can be used to analyze and discover relationships in large datasets. Strictly unsupervised clustering is used in the absence of information about target clusters and variables of interest; however, clustering can be partially supervised or guided when additional information regarding target clusters is available.

    Clustering by itself does not correspond to classification, the process by which class labels are assigned to individual data elements, but clustering can be a useful tool in the classification of large datasets. When clusters are used to organize similar elements in a dataset, class labels can be assigned to entire clusters, allowing individual elements within a cluster to be assigned that class label. Because samples or elements in a particular cluster are similar or "close," they are assumed likely to share a class label; this is known as the cluster assumption. Assigning labels to a modest number of clusters is less time intensive than assigning labels to many individual samples, so if the cluster assumption holds, clustering is an efficient and powerful tool in classification. Unfortunately, the cluster assumption does not hold in all cases, as there is no rule dictating that "close" samples must share a label. Finally, the descriptions of clustering, semisupervised clustering, and cluster evaluation given above assume a specific type of clustering in which clusters are collections of individual elements, known as hard or crisp clustering. Alternatively, clusters can be defined by portions of individual samples, known as soft clustering. Soft cluster evaluation is less intuitive because clusters no longer "contain" individual samples, and a cluster cannot be composed primarily of samples belonging to one class in the same sense. The following subsections define hard and soft clustering and classification.

    2.1 Hard Clustering

    Hard clustering produces clusters that are collections of individual samples. Let the $i$th sample be denoted by $x^{(i)} \in \mathbb{R}^b$, where $i = 1, \ldots, n$. A cluster is typically represented by a prototype, such as the mean of the samples contained in the cluster; let the $j$th cluster prototype be $U^{(j)} \in \mathbb{R}^b$, where $j = 1, \ldots, K$. All clusters taken together form a partition of the data, defined by a partition matrix $w$ with $w_{ij} = 1$ indicating that the $i$th sample belongs to the $j$th cluster, $w_{ij} = 0$ otherwise, and $\sum_{j=1}^{K} w_{ij} = 1$ for all $i$. Each sample is a member of exactly one cluster.

    A classic example of a simple hard clustering method is the $K$-means clustering algorithm, which locates a local minimum point of the objective function

    $$J(w, U) = \sum_{i=1}^{n} \sum_{j=1}^{K} w_{ij} \, \rho_{ij}, \qquad (1)$$

    where $\rho_{ij} = \|x^{(i)} - U^{(j)}\|_2^2$ [MacQueen (1967)]. In this case, $\rho_{ij}$ is a measure of dissimilarity or distance between the $i$th sample and the $j$th cluster. The $K$-means clustering algorithm attempts to find the ideal partition that minimizes the sum of squared distances between each sample and the prototype of the cluster to which the sample belongs. The algorithm for $K$-means requires $K$ initial cluster prototypes and iteratively assigns each sample to the closest cluster using

    $$w_{ij} = \begin{cases} 1, & j = \arg\min_{1 \le k \le K} \rho_{ik}, \\ 0, & \text{otherwise}, \end{cases}$$

    for each $i$, followed by the cluster prototype (mean) recalculation

    $$U^{(j)} = \frac{\sum_{i=1}^{n} w_{ij} \, x^{(i)}}{\sum_{i=1}^{n} w_{ij}}$$

    once $w$ has been calculated. This process, guaranteed to terminate in a finite number of iterations, continues until no further improvement is possible, terminating at a local minimum point of (1).
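    To make the iteration concrete, the following is a minimal NumPy sketch of this procedure; the random initialization, convergence check, and iteration cap are illustrative choices, not prescribed above.

        import numpy as np

        def kmeans(x, K, max_iter=100, seed=0):
            # x: (n, b) array of samples; returns partition matrix w and prototypes U
            rng = np.random.default_rng(seed)
            U = x[rng.choice(len(x), size=K, replace=False)]  # initial prototypes
            for _ in range(max_iter):
                # rho[i, j] = squared Euclidean distance from sample i to prototype j
                rho = ((x[:, None, :] - U[None, :, :]) ** 2).sum(axis=2)
                labels = rho.argmin(axis=1)  # assignment step
                # prototype (mean) recalculation; keep old prototype if a cluster empties
                U_new = np.array([x[labels == j].mean(axis=0) if (labels == j).any()
                                  else U[j] for j in range(K)])
                if np.allclose(U_new, U):
                    break
                U = U_new
            w = np.eye(K)[labels]  # hard partition matrix, rows are one-hot
            return w, U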

    In hard clusters, such as those produced by $K$-means, the collection of samples that belong to a particular cluster can be evaluated to determine a cluster's eligibility to perform classification. The class memberships of the labeled samples in a particular cluster can be modeled using discrete random variables generated from binomial, multinomial, or hypergeometric distributions, for example. These random variables form the basis of statistical tests used to evaluate clusters for classification. For example, let $V_{ic}$ be a Bernoulli random variable where success ($V_{ic} = 1$) indicates the $i$th labeled sample is labeled with the $c$th class. The number of samples labeled with the $c$th class in a particular cluster would then be a binomial random variable $V_{c,j} = \sum_{i \in I_j} V_{ic}$, where $I_j$ is the index set of labeled samples belonging to the $j$th cluster. This binomial random variable can be used as the basis for a statistical hypothesis test to determine if the number of samples labeled with the $c$th class (as opposed to all other classes) in the $j$th cluster is significant. In practice, the class tested would be the class most represented in the $j$th cluster, or mathematically, $c = \arg\max_{1 \le c \le C} V_{c,j}$ for a particular $j$, where $C$ is the number of classes.
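    As an illustration, such a one-sided binomial test might be computed as follows. This is a sketch assuming SciPy; taking the overall class proportion $n_c/n$ as the null success probability is a natural choice implied by, but not spelled out in, the text.

        from scipy.stats import binom

        def enrichment_pvalue(v, m, n_c, n):
            # v: observed V_{c,j}, count of class-c labeled samples in cluster j
            # m: total labeled samples in cluster j; n_c / n: assumed null proportion
            return binom.sf(v - 1, m, n_c / n)  # one-sided P(V_{c,j} >= v) under H0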

    2.2 Soft Clustering

    Soft clusters are clusters that, instead of containing a collection of individual samples, contain portions of individual samples. Another view of soft clustering is that each sample has a probability of belonging to each cluster. Soft clustering has an advantage over hard clustering in that a sample is not simply assigned to the closest cluster; information is preserved about relationships to other clusters as well. Furthermore, these continuous assignments are less constrained than discrete assignments, resulting in a less constrained objective function. As in hard clustering, $w_{ij}$ indicates cluster membership, but instead of being either zero or one, $w_{ij} \in (0,1)$, and as in hard clustering, $\sum_{j=1}^{K} w_{ij} = 1$ for all $i$. Some versions of fuzzy clustering do not impose this requirement, but those nonprobabilistic methods are not considered here.

    An example of a soft clustering method analogous to $K$-means is fuzzy $K$-means, which locates a local minimum point of the objective function

    $$J_p(w, U) = \sum_{i=1}^{n} \sum_{j=1}^{K} w_{ij}^p \, \rho_{ij},$$

    where $\rho_{ij}$ is still the squared Euclidean distance between $x^{(i)}$ and $U^{(j)}$, and $p > 1$ [Bezdek (1980)]. The algorithm that minimizes this objective function is similar to that of $K$-means in that it first calculates

    $$w_{ij} = \left( \sum_{k=1}^{K} \left( \frac{\rho_{ij}}{\rho_{ik}} \right)^{\frac{1}{p-1}} \right)^{-1}$$

    for all $i$ and $j$, followed by calculating updated cluster prototypes

    $$U^{(j)} = \frac{\sum_{i=1}^{n} w_{ij}^p \, x^{(i)}}{\sum_{i=1}^{n} w_{ij}^p}.$$

    The cluster prototype is a weighted average. This iteration (recalculation of the weights followed by recalculation of cluster prototypes, followed by recalculation of the weights, etc.) is guaranteed to converge (with these definitions of $\rho_{ij}$, $U^{(j)}$, and $w_{ij}$) for $p > 1$ [Bezdek (1980)].
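    A minimal NumPy sketch of this fuzzy $K$-means iteration follows; the initialization and stopping rule are again illustrative, and a small constant guards against division by zero distances.

        import numpy as np

        def fuzzy_kmeans(x, K, p=2.0, max_iter=100, tol=1e-6, seed=0):
            # x: (n, b) array; returns soft weights w (n, K) and prototypes U (K, b)
            rng = np.random.default_rng(seed)
            U = x[rng.choice(len(x), size=K, replace=False)]
            for _ in range(max_iter):
                rho = ((x[:, None, :] - U[None, :, :]) ** 2).sum(axis=2) + 1e-12
                # w_ij = ( sum_k (rho_ij / rho_ik)^(1/(p-1)) )^(-1); rows sum to 1
                ratio = rho[:, :, None] / rho[:, None, :]
                w = 1.0 / (ratio ** (1.0 / (p - 1.0))).sum(axis=2)
                wp = w ** p
                U_new = (wp.T @ x) / wp.sum(axis=0)[:, None]  # weighted-mean prototypes
                if np.abs(U_new - U).max() < tol:
                    break
                U = U_new
            return w, U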

    3 Soft Cluster Evaluation

    Evaluation of soft clusters requires taking cluster weights into account when examining class memberships of the labeled samples. Each labeled sample will have some positive membership in each cluster, and a new type of evaluation will be necessary to directly evaluate soft clusters. Soft cluster memberships could be converted to hard cluster memberships for the purpose of cluster evaluation, but if soft clustering is warranted, those soft clusters should be evaluated directly.

    Hard cluster evaluation (for classification) is based on the composition of the cluster, or what type of samples make up the cluster. The question of whether a cluster should be used for classification can be answered when some of the samples within the cluster have labels and there are a sufficient number of samples to draw statistical conclusions. Because soft clusters no longer "contain" samples, the more important question is whether the relative magnitudes of memberships between samples of a particular class and the cluster are significantly different. In other words, if the magnitudes of cluster memberships for samples of a particular class appear to be significantly higher than memberships for other classes, then the cluster is demonstrating characteristics of that class. With hard clusters, a cluster is pure if only one class is contained in the cluster; no samples labeled with another class are present. This is impossible in soft clustering, as all types of samples will have positive memberships in all clusters, and in practice these memberships, although possibly small, will be nonnegligible.

    Just as hard clusters that are ideal for classification contain only one class, soft clusters that are ideal for classification will be representative of just one class. The goal in using soft clustering for classification is to assign a class label to an entire cluster (the same goal as for hard clusters), but just as each sample has a soft membership in a particular cluster, each sample will have a soft membership in a class. The samples demonstrate characteristics of multiple classes, justifying soft classification, but the clusters (logical groupings of similar data) should not contain or represent multiple classes. The goal of this work is to associate a soft cluster with one particular class if that class is clearly dominant within the cluster. Probability will determine how clearly a particular cluster is composed of one class, and if this probability passes a predetermined threshold test, the cluster will be associated with a class.

    3.1 Hypothesis Test

    The statistical tests used to evaluate clusters in this paper are statistical hypothesis tests, where a null hypothesis is proposed. If observed evidence strongly indicates the null hypothesis should be rejected, the alternate hypothesis is accepted. In the absence of compelling evidence to the contrary, the null hypothesis cannot be rejected.

    The first hypothesis test is based on the average cluster weights in the cluster of interest, the $j$th cluster. In order to associate the $j$th cluster with the $c$th class, the average cluster weight for the $c$th class,

    $$\overline{W}_{c,j} = \frac{1}{n_c} \sum_{i \in J_c} w_{ij},$$

    where $n_c$ is the number of samples labeled with the $c$th class and $J_c$ is the index set of samples labeled with the $c$th class, should be statistically significantly higher than the other cluster weights for the $j$th cluster. If the weights for samples labeled with the $c$th class are higher in general than those for samples from arbitrary classes, the cluster is demonstrating a tendency toward the $c$th class, and can be used to discriminate the $c$th class from other classes.

    The null hypothesis is that the average cluster weight for samples from the $c$th class in the $j$th cluster is not significantly different from the average cluster weight for samples from all classes in the $j$th cluster. The alternate hypothesis is that the average weight for samples from the $c$th class in the $j$th cluster is significantly different (higher) than the average cluster weight for all samples. Note that in practice, only the class with the highest average cluster weight for the $j$th cluster would be considered. Suppose that a test statistic derived for this test is normally distributed, and is in fact a standard normal random variable $Z$. Then if the observed value is $\hat{z}$ and $P(Z \ge \hat{z}) \le \alpha$ for some $0 < \alpha < 1$, the null hypothesis is rejected. The following sections derive appropriate test statistics to use in this hypothesis test.
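    In code, this rejection rule is a one-liner; a sketch assuming SciPy, whose norm.sf is the standard normal survival function:

        from scipy.stats import norm

        def reject_null(z_hat, alpha=0.05):
            # one-sided test: reject H0 when P(Z >= z_hat) <= alpha
            return norm.sf(z_hat) <= alpha  # sf(z) = P(Z >= z) for standard normal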

    3.2 Test Statistic 1

    Suppose a dataset $x$ contains $n$ samples $x^{(i)} \in \mathbb{R}^B$, $i = 1, \ldots, n$. For $K$ fixed cluster centers $U^{(k)} \in \mathbb{R}^B$, $k = 1, \ldots, K$, the assigned weight of the $i$th pixel to the $j$th cluster is, as in Section 2.2,

    $$w_{ij} = \left( \sum_{k=1}^{K} \left( \frac{\|x^{(i)} - U^{(j)}\|_2^2}{\|x^{(i)} - U^{(k)}\|_2^2} \right)^{\frac{1}{p-1}} \right)^{-1}.$$

    Theorem: Let $X^{(i)}$, $i = 1, 2, \ldots$, be $B$-dimensional random vectors having one of $Q$ distinct multivariate normal distributions. For $i = 1, 2, \ldots$ and $j = 1, \ldots, K$ define the random variables

    $$W_{ij} = \left( \sum_{k=1}^{K} \left( \frac{\|X^{(i)} - U^{(j)}\|_2^2}{\|X^{(i)} - U^{(k)}\|_2^2} \right)^{\frac{1}{p-1}} \right)^{-1},$$

    where $K$ is the number of clusters and $U^{(k)} \in \mathbb{R}^B$ is the $k$th cluster center (considered fixed for the weight calculation). Then for any $j = 1, \ldots, K$,

    $$\frac{\sum_{i=1}^{n} W_{ij} - \sum_{i=1}^{n} \mathrm{E}[W_{ij}]}{\sqrt{\sum_{i=1}^{n} \mathrm{Var}[W_{ij}]}} \xrightarrow{d} N(0, 1) \quad \text{as } n \to \infty.$$

    Figure 1: Distribution of sums of weights in one soft cluster out of two.

    Remark: The assumption that the $X^{(i)}$, $i = 1, 2, \ldots$, are generated from a finite number of normal distributions is stronger than necessary. The proof in Phillips, Watson, Wynne, and Ramakrishnan (2009a) holds if $X^{(i)}$, $i = 1, 2, \ldots$, are generated from a finite number of arbitrary distributions.

    Experimental clustering results using a dataset described in Section 4 of this paper match this theoretical result, as illustrated by one experiment in Fig. 1. This illustration shows the distribution of sums of cluster weights for one particular cluster (when $K = 2$).

    Starting with the normal approximation for the sum of the cluster weights, the standard normal test statistic would be

    $$z = \frac{\sum_{i \in J_c} w_{ij} - \sum_{i \in J_c} \mathrm{E}[W_{ij}]}{\sqrt{\sum_{i \in J_c} \mathrm{Var}[W_{ij}]}},$$

    where $\mathrm{E}[W_{ij}]$ is the expected value of $W_{ij}$ and $\mathrm{Var}[W_{ij}]$ is the variance of $W_{ij}$ for the $j$th cluster. $\mathrm{E}[W_{ij}]$ and $\mathrm{Var}[W_{ij}]$ are unknown, but can be reasonably approximated using the sample mean and sample variance

    $$\bar{w}_j = \frac{1}{n} \sum_{i=1}^{n} w_{ij}, \qquad s_j^2 = \frac{1}{n-1} \sum_{i=1}^{n} \left( w_{ij} - \bar{w}_j \right)^2.$$

    Since $z$ is generated (approximately) by the standard normal distribution, this test statistic can be used in the proposed hypothesis test.
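    A sketch of this first test statistic follows, under the stated approximation that the overall sample mean and variance of the cluster weights stand in for the unknown $\mathrm{E}[W_{ij}]$ and $\mathrm{Var}[W_{ij}]$; the exact form is reconstructed from the surrounding text.

        import numpy as np
        from scipy.stats import norm

        def test_statistic_1(w_j, class_idx):
            # w_j: weights w_ij of all n labeled samples in cluster j, shape (n,)
            # class_idx: indices J_c of samples labeled with class c
            n_c = len(class_idx)
            w_bar = w_j.mean()            # sample mean, stand-in for E[W_ij]
            s2 = w_j.var(ddof=1)          # sample variance, stand-in for Var[W_ij]
            z_hat = (w_j[class_idx].sum() - n_c * w_bar) / np.sqrt(n_c * s2)
            return z_hat, norm.sf(z_hat)  # statistic and one-sided p-value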

    3.3 Test Statistic 2

    One potential issue with the above statistic is that the sample mean and standard deviation calculations assume the sample is identically distributed, which is specifically not the assumption in this case (clustering assumes that the data are generated from a number of distributions, where the true number of clusters is equal to the number of distributions, which is unknown a priori). A better statistic acknowledges that the data are not identically distributed, but are generated from a finite number of distributions. Since the number of distributions and the distributions themselves are unknown, the number of classes and the individual class labels, which are assumed to correspond to inherent structure of the data, are used to approximate the true mean and variance of multiple clusters. Precisely, assume that all labeled sample indices $i$ with distribution index $\psi(i) = q$ correspond to the same class label $\varphi(i) = c$. If $i \in \psi^{-1}(q)$, then $i \in \varphi^{-1}(c)$, but $i \in \varphi^{-1}(c)$ does not imply $i \in \psi^{-1}(q)$ (more than one distribution can correspond to one class), and $J_c = \varphi^{-1}(c) = \{ i \mid \varphi(i) = c,\ 1 \le i \le n \}$. The above statistic requires modification to use class information. In the previous statistic, the centered quantity is

    $$\sum_{i \in J_c} w_{ij} - \sum_{i \in J_c} \mathrm{E}[W_{ij}],$$

    recalling that $\mathrm{E}[W_{ij}] = a_{ij} = \alpha_{qj}$ for $i \in I_q$. Assume that when $\varphi(i) = c$ and distribution index $q = \psi(i)$ corresponds to $c = \varphi(i)$, then $\alpha_{qj}$ can be approximated by $\gamma_{cj}$, the mean of class $c = \varphi(i)$. Ideally $\alpha_{qj}$ should be approximated directly, but there is no way to know $\psi^{-1}(q)$, so essentially $\psi^{-1}(q) \subseteq \varphi^{-1}(c)$ is being approximated by $\varphi^{-1}(c)$. Unfortunately, using the sample mean of the $c$th class and the $j$th cluster to approximate $\gamma_{cj}$ (and therefore $\alpha_{qj}$) breaks down, because the sample mean of the $c$th class and the $j$th cluster is both the random variable on the left side and the approximation of the expected value on the right side of the minus sign. This is illustrated below. Approximating $\gamma_{cj}$ (and $\alpha_{qj}$) with the sample mean for the $c$th class,

    $$\sum_{i \in J_c} w_{ij} - \sum_{i \in J_c} \hat{\gamma}_{cj} = \sum_{i \in J_c} w_{ij} - n_c \cdot \frac{1}{n_c} \sum_{i \in J_c} w_{ij} = 0.$$

    Thus this test statistic does not work, because the value being tested is the same as the estimated mean for the $c$th class.

    In order to make use of class information to estimate distribution statistics (mean and variance), it is necessary to modify the random variable to model class labels as well as cluster memberships. Consider each labeled sample's membership in a particular class, say the $c$th class, to be a Bernoulli trial $V_{ic}$, where $V_{ic} = 1$ indicates the $i$th sample is labeled with the $c$th class, and $W_{ij}$ is defined above. Define

    $$Y_{c,j} = \sum_{i=1}^{n} V_{ic} W_{ij},$$

    where $n$ is the total number of labeled samples, as the random variable for the sum of weights for samples in the $c$th class in the $j$th cluster. The Central Limit Theorem applies to this sum of bounded random variables with finite mean and variance (see Theorem 1), and $Y_{c,j}$ is approximately normal.

    Consider now the test statistic

    $$z = \frac{Y_{c,j} - \mathrm{E}[Y_{c,j}]}{\sqrt{\mathrm{Var}[Y_{c,j}]}}.$$

    Fixing $j$ and $c$, assuming $W_{ij}$ and $V_{ic}$ are independent, and defining $m_q = |I_q|$, the number of indices $i$ for which $X^{(i)}$ has the $q$th distribution, the mean $\mathrm{E}[Y_{c,j}]$ and variance $\mathrm{Var}[Y_{c,j}]$ can be written in terms of the counts $m_q$, the per-distribution weight means and variances, and the number of classes $C$. Using these expressions for the mean and variance of $Y_{c,j}$, the Wald statistic for the $c$th class and $j$th cluster is

    $$\hat{z} = \frac{y_{c,j} - \mathrm{E}[Y_{c,j}]}{\sqrt{\mathrm{Var}[Y_{c,j}]}},$$

    where $y_{c,j}$ is the observed value of $Y_{c,j}$, and the null hypothesis is rejected if $P(Z \ge \hat{z}) \le \alpha$.
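    The following sketch computes such a Wald statistic under the independence assumption, plugging class-wise sample moments in for the unknown per-distribution means and variances. This plug-in form approximates the paper's $m_q$/$\gamma_{cj}$ expressions, which are not fully recoverable here, so treat it as illustrative rather than the authors' exact estimator.

        import numpy as np
        from scipy.stats import norm

        def wald_statistic(w_j, labels, c):
            # w_j: weights w_ij of the n labeled samples in cluster j, shape (n,)
            # labels: class label phi(i) of each labeled sample, shape (n,)
            v = (labels == c).astype(float)     # Bernoulli indicators V_ic
            y = float(v @ w_j)                  # observed Y_cj = sum_i V_ic * W_ij
            p_c = v.mean()                      # estimate of E[V_ic]
            classes = np.unique(labels)
            gamma = {k: w_j[labels == k].mean() for k in classes}    # class means
            s2 = {k: w_j[labels == k].var(ddof=1) for k in classes}  # class variances
            g = np.array([gamma[k] for k in labels])
            v2 = np.array([s2[k] for k in labels])
            e_y = p_c * g.sum()                 # E[Y_cj] under V-W independence
            # Var[V_ic W_ij] = E[V] E[W^2] - (E[V] E[W])^2 for Bernoulli V_ic
            var_y = (p_c * (v2 + g ** 2) - (p_c * g) ** 2).sum()
            z_hat = (y - e_y) / np.sqrt(var_y)
            return z_hat, norm.sf(z_hat)        # reject H0 if p-value <= alpha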

    4 Experimental Results

    This section presents experimental results to demonstrate the functioning of the new statistical test. It is important to distinguish the nature of enrichments identified by the new test from the quality of clusters mined by a specific algorithm. The features evaluated are (i) whether the test is able to recognize clusters with partial memberships (soft assignments) as being significant, (ii) whether it leads to a higher number of assignments in soft clustering situations, and (iii) the variation in the number of enrichments as the entropy of clusters and significance levels are changed. For the purpose of this evaluation, consider the soft $k$-means algorithm, where membership probabilities at each stage of the iteration are nonzero across the clusters.
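    One way to impose a target entropy on hard memberships, used later in the entropy-variation experiments, is to mix each one-hot row with the uniform distribution; the paper does not specify its perturbation scheme, so this mixing rule is an assumption made for illustration.

        import numpy as np

        def impose_entropy(w_hard, lam):
            # Blend a one-hot membership matrix (n, K) toward uniform rows.
            # lam = 0 keeps hard assignments (entropy 0); lam = 1 gives uniform
            # rows (maximum entropy log K).
            K = w_hard.shape[1]
            w = (1.0 - lam) * w_hard + lam / K
            ent = -(w * np.log(np.clip(w, 1e-12, 1.0))).sum(axis=1).mean()
            return w, ent  # perturbed memberships and their average row entropy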

    Table 1: Datasets.

    Figure 2: Enrichments of synthetic data (Jaccard similarity between class labels and clusters = 1.0).

    Table 1 describes the datasets used in this study; with the exception of the synthetic dataset, all are taken from the UCI KDD/ML data repository. In each case, the number of clusters to be identified is set equal to the number of natural classes present in the dataset.

    Fig. 2 presents results on synthetic data involving four separable Gaussians in a two-dimensional layout. The enrichment $p$-values are shown for all 16 cluster-class combinations for the soft and hard versions of the Wald statistic, as well as for the hypergeometric test, which is commonly used for cluster evaluation. As can be seen, the qualitative trends are the same: for all stringent thresholds, the results yield four clusters enriched with four different class labels.

    Figure 3: Ionosphere data. (left) Soft assignments: $p$-values are derived from the Wald statistic for the $c$th class and $j$th cluster. (middle) Hard assignments: $p$-values are derived from the Wald statistic for the $c$th class and $j$th cluster. (right) Enrichment with the hypergeometric distribution. (Jaccard similarity between fuzzy $k$-means and the actual class labels: 0.5865.)

    Fig. 3 presents a more complicated situation with the ionosphere dataset. This dataset involves two classes, and there are more tangible differences between the three statistical tests. Note that the Jaccard similarity between the fuzzy $k$-means clusters and the class labels is not a perfect 1. As a result, for various values of the $p$-value threshold, it is possible to get one, two, three, or four cells enriched by the Wald statistic (soft assignment), whereas the hypergeometric distribution can lead to only two or four cells enriched. The Wald statistic (hard assignment) also performs better than the hypergeometric distribution.

    Fig. 4 plots the number of enriched cells as the $p$-value cutoff is varied, using the vehicle dataset. The Wald statistics lead to a consistently greater number of enrichments compared to the hypergeometric test. A similar plot can be seen in Fig. 5.

    A different type of evaluation is shown in Fig. 6(a), where the membership probabilities are artificially varied (from a hard membership) to impose a specified entropy on their distribution. As the entropy increases, the number of enrichments drops monotonically in the case of the Wald (soft) statistic, whereas the hypergeometric enrichment test does not account for the entropy in a smooth manner. Fig. 6(b) demonstrates the variation for a fixed value of the entropy but increasingly lax values of the $p$-value threshold. Again, the enrichments for the Wald (soft) statistic increase steadily. Similar plots for the breast tissue, steel plate faults, and glass datasets are shown in Figs. 7, 8, and 9, respectively. Finally, Fig. 10 superimposes the variation of $p$-value cutoff and entropy threshold to show how the variation seen in previous plots manifests at all $p$-value thresholds, whereas the hypergeometric distribution is uniformly unable to provide a richer variety of enrichments.

    Figure 5: Cardiotocography data. Number of enrichments at different $p$-value cutoffs with the three enrichment procedures (Jaccard similarity between fuzzy $k$-means and actual class labels: 0.7896).

    5 Conclusion

    This paper presented a new statistical test suitable for enrichment of soft clusters. It was shown how this test produces significantly more enrichments, tunable control of the number of enrichments, and smoother variation in enriched cells with entropy and $p$-value cutoffs. The method can be used as given here or embedded inside a cluster refinement algorithm for continuous evaluation and updating of clusters. Since few soft cluster enrichment methods exist, the framework here contributes a key methodology for clustering and cluster evaluation research.

    Figure 6: Cardiotocography data. (top) Number of enrichments with fixed $p$-value threshold but varying entropy. Note that the number of enrichments falls monotonically with increasing entropy. (bottom) Number of enrichments with fixed entropy and varying $p$-value threshold. Note that the number of enrichments increases monotonically with increasing $p$-value threshold.

    Figure 7: Breast tissue data. (Jaccard similarity between fuzzy $k$-means and the actual class labels: 0.7051.) (a) Number of enrichments at different $p$-value cutoffs with the three enrichment procedures. (b) Number of enrichments with fixed $p$-value threshold but varying entropy. Note that the number of enrichments falls monotonically with increasing entropy. (c) Number of enrichments with fixed entropy and varying $p$-value threshold. Note that the number of enrichments increases monotonically with increasing $p$-value threshold.

    Figure 8: Steel plates faults data. (Jaccard similarity between fuzzy $k$-means and the actual class labels: 0.6681.) (a) Number of enrichments at different $p$-value cutoffs with the three enrichment procedures. (b) Number of enrichments with fixed $p$-value threshold but varying entropy. Note that the number of enrichments falls monotonically with increasing entropy for the Wald statistic. (c) Number of enrichments with fixed entropy and varying $p$-value threshold. Note that the number of enrichments increases monotonically with increasing $p$-value threshold.

    Figure 9: Glass data. (a) Soft assignments: $p$-values are derived from the Wald statistic for the $c$th class and $j$th cluster. (b) Hard assignments: $p$-values are derived from the Wald statistic for the $c$th class and $j$th cluster. (c) Enrichment with the hypergeometric distribution. (Jaccard similarity between fuzzy $k$-means and the actual class labels: 0.7117.) (d) Number of enrichments at different $p$-value cutoffs with different enrichment procedures.

    Figure 10: Glass data. In this example, assignments are taken directly from the class labels. The entropy is changed by modifying the membership probability of the class of every instance. (a) Number of enrichments with different $p$-value thresholds and fixed entropy. (b) The plot at left shows how the number of enrichments changes over the $p$-value thresholds and entropy. Note that the $p$-value is fixed for each of the spikes in this plot. For example, $\alpha$ remains 0.0020 in the interval between 0.0020 and 0.0028. The plot at right shows the change in the number of enrichments with entropy where the $p$-value threshold is fixed.

    Acknowledgement: This work was supported in part by Department of Energy Grant DE-FG02-06ER25720 and NIGMS/NIH Grant 5-R01-GM078989.

    References

    Bezdek, J. C. (1974): Fuzzy mathematics in pattern classification. Ph.D. thesis, Cornell University, Ithaca, NY.

    Bezdek, J. C. (1980): A convergence theorem for the fuzzy ISODATA clustering algorithms. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 2, pp. 1–8.

    Chapelle, O.; Schölkopf, B.; Zien, A. (2006): Semi-Supervised Learning. MIT Press, Cambridge, MA.

    Derraz, F.; Peyrodie, L.; Pinti, A.; Taleb-Ahmed, A.; Chikh, A.; Hautecoeur, P. (2010): Semi-automatic segmentation of multiple sclerosis lesion based active contours model and variational Dirichlet process. Computer Modeling in Engineering & Sciences, vol. 67, no. 2, pp. 95–118.

    Ewens, W.; Grant, G. (2001): Statistical Methods in Bioinformatics. Springer.

    Gnedenko, B. (1997): Theory of Probability. Gordon and Breach Science Publishers, The Netherlands, sixth edition.

    Lin, Z.; Cheng, C. (2010): Creative design of multi-layer web frame structure using modified AHP and modified TRIZ clustering method. Computer Modeling in Engineering & Sciences, vol. 68, no. 1, pp. 25–54.

    MacQueen, J. B. (1967): Some methods for classification and analysis of multivariate observations. In Proc. of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, volume 1, pp. 281–297. L. M. Le Cam and J. Neyman, editors, University of California Press.

    Musy, R. F.; Wynne, R. H.; Blinn, C. E.; Scrivani, J. A.; McRoberts, R. E. (2006): Automated forest area estimation via iterative guided spectral class rejection. Photogrammetric Engineering & Remote Sensing, vol. 72, no. 8, pp. 949–960.

    Phillips, R. D.; Watson, L. T.; Wynne, R. H. (2007): Hybrid image classification and parameter selection using a shared memory parallel algorithm. Computers & Geosciences, vol. 33, pp. 875–897.

    Phillips, R. D.; Watson, L. T.; Wynne, R. H.; Ramakrishnan, N. (2009a): Continuous iterative guided spectral class rejection classification algorithm: Part 1. Technical report, Department of Computer Science, VPI&SU, Blacksburg, VA.

    Phillips, R. D.; Watson, L. T.; Wynne, R. H.; Ramakrishnan, N. (2009b): Continuous iterative guided spectral class rejection classification algorithm: Part 2. Technical report, Department of Computer Science, VPI&SU, Blacksburg, VA.

    Richards, J. A.; Jia, X. (1999): Remote Sensing Digital Image Analysis. Springer-Verlag, Berlin, third edition.

    Tishby, N.; Pereira, F. C.; Bialek, W. (1999): The information bottleneck method. In Proc. of the 37th Annual Allerton Conference on Communication, Control, and Computing, pp. 368–377.

    van Aardt, J. A. N.; Wynne, R. H. (2007): Examining pine spectral separability using hyperspectral data from an airborne sensor: An extension of field-based results. International Journal of Remote Sensing, vol. 28, pp. 431–436.

    Wagstaff, K.; Cardie, C.; Rogers, S.; Schrödl, S. (2001): Constrained k-means clustering with background knowledge. In ICML '01, pp. 577–584.
