
    Discriminative subgraphs for discovering family photos


Changmin Choi1, YoonSeok Lee1, and Sung-Eui Yoon1 (✉)

© The Author(s) 2016. This article is published with open access at Springerlink.com

DOI 10.1007/s41095-016-0054-4    Vol. 2, No. 3, September 2016, 257–266

Abstract  We propose to use discriminative subgraphs to discover family photos from group photos in an efficient and effective way. Group photos are represented as face graphs by identifying social contexts such as age, gender, and face position. The previous work utilized bag-of-word models and considered frequent subgraphs from all group photos as features for classification. This approach, however, produces numerous subgraphs, resulting in high dimensionality; furthermore, some of them are not discriminative. To solve these issues, we adopt a state-of-the-art frequent subgraph mining method that removes non-discriminative subgraphs. We also use TF-IDF normalization, which is more suitable for the bag-of-word model. To validate our method, we experiment on two datasets. Our method shows consistently better performance, i.e., higher accuracy in lower feature dimensions, compared to the previous method. We also integrate our method with the recent Microsoft face recognition API and release it on a public website.

Keywords  image classification; subgraph mining; social context; group photographs

    1 Introduction

Recent studies on image classification focus on object and scene classification. They show remarkable performance thanks to the improvement of image features such as convolutional neural networks (CNNs) [1]. These image features are built from pixel-level descriptors, and may not be enough to describe group photos, since classifying group photos requires more semantic information such as relations, events, or activities. Interestingly, humans can classify types (e.g., friends and family) of group photos without much training, because we can estimate a variety of social contexts such as age, gender, proximity, and place by observing faces, positions, clothing, and other objects.

Once we identify the social context of group photos, we can use this information for various applications. One application is to control the privacy of images shared on social websites (e.g., Facebook). People sometimes share images without much consideration of what information those images can deliver to other people. When we identify that a shared group photo is a family photo containing children, we may wish to share that image with a small circle of people, e.g., relatives, instead of publicly.

For classifying group photos, Chen et al. [2] proposed a method to categorize group photos into family and non-family types. This method assumes that annotations about age, gender, and face position are well estimated beforehand by using existing face detection and statistical estimation derived from the pixel context. On top of that, they proposed a social-level feature named Bag-of-Face-subGraph (BoFG) to represent group photos as graphs. For constructing BoFGs, a mining algorithm extracting frequent subgraphs is adopted, based on the assumption that prominent social subgroups captured in group photos can be identified by looking at frequently appearing subgraphs.

While the prior method points to an interesting research direction of classifying group photos, it has certain drawbacks. It first requires a user-specified threshold to determine the number of feature dimensions in the training phase. Furthermore, as more frequent subgraphs are kept to obtain more feature dimensions, the probability rises, as a side effect, that more non-discriminative subgraphs are selected due to repetitive and redundant patterns. In other words, thresholding the number of subgraphs with the frequency criterion alone can cause a scalability problem.

Main contributions. To overcome these issues, we survey state-of-the-art subgraph mining techniques, and propose to use a subgraph mining technique, CORK, that identifies discriminative subgraphs and culls out redundant subgraph generation. We also propose to apply TF-IDF, a widely used feature normalization for bag-of-word models, to our BoFG feature.

To validate the benefits of our method in terms of classifying family and non-family types of group photos, we have tested the prior method and ours on two different datasets (Fig. 1), including the public dataset [3]. Overall, our method shows higher accuracy with fewer dimensions than the prior method. Furthermore, our method does not require a manually tuned threshold for computing the dimensionality of our BoFG features.

Fig. 1  We test our method against a new, extended dataset consisting of (a) non-family and (b–e) different family types. Our method achieves the highest accuracy, 79.34%, with 90 dimensions, while the state-of-the-art method achieves 76.8% with 1000 dimensions.

We have also integrated our method with the face API of Microsoft Project Oxford (https://www.projectoxford.ai/face/) and released it at our demo site (http://is-fam.net/). In this system (Fig. 2), users can test their own group images and see how well our method performs on them.

    2 Related work and background

    We review prior approaches that are related to our method.

    2.1 Social context in photographs

    Fig.2 Our demo site using the proposed classification method.

Social contexts contain various information such as clothing, age, gender, absolute or relative position, face angle, gesture, body direction, and so on. They have been widely used to recognize people and groups [2, 4, 5]. Several works analyzed these contexts to study the structure of scenes in group photos [3, 6, 7]. Some researchers utilized them to classify group types [2, 4, 8, 9], retrieve similar group photos [10–12], discover social relations [5, 13], or predict occupations [14].

Pixel contexts have been used together with social contexts to recognize the type of a group photo [4]. Some well-known pixel-level features include SIFT [15], GIST [16], CNN [1], etc. Social-level features can be estimated by face detection, clothing segmentation, or partial body detection.

    2.2 Frequent subgraph mining

Our work is based on identifying subgraphs from a graph representing the relationships between people shown in a group photo. Frequently appearing subgraphs provide important cues for understanding graph structures and the similarity between different graphs. As a result, mining frequent subgraphs has been widely studied [17]. For various classification tasks, frequent subgraph mining has been used in the training and test phases to build a social-level feature, as in classifying family and non-family photo types [2].

We have found that the extracted subgraphs significantly affect classification accuracy. There are two simple strategies to explore subgraphs in a database: (1) BFS-based and (2) DFS-based approaches [17]. The BFS-based algorithm has been less used recently due to its technical challenges in generating candidates and pruning false positives. More advanced techniques focus on efficient candidate generation, since the subgraph isomorphism test is NP-complete [18]. Recent successful algorithms proceed based on depth-first search and pattern growth [17], i.e., subgraph growing. Our method is also based on the DFS-based strategy, and uses canonical labels to avoid the scalability issue. We additionally measure the discriminative power of each subgraph during the pattern growth.

    2.3 Graph-based image editing

In this work, we use graphs and histograms of their subgraphs for discovering family photos. Interestingly, there have been many graph-based approaches for image extrapolation [19], interpolation [20], image segmentation [21], representation [22], etc. While these applications are not directly related to our classification problem, utilizing histograms of subgraphs could be useful in these applications, e.g., better graph matching for extrapolation.

3 Background on social subgraphs

In this section, we give the background on using BoFG features for group photo classification.

Chen et al. [2] proposed BoFG features for group photo classification. This method constructs face graphs (Fig. 3) and uses their subgraphs to describe various social relationships. BoFG is analogous to the bag-of-word model of text retrieval. For example, a text corpus corresponds to a group photo album, a document to an image, and a word to a subgraph in a face graph, respectively. The main difference between these models is that the bag-of-word model performs clustering over all vectors in order to obtain a codebook, whereas BoFG performs frequent subgraph mining over all the face graphs.

Fig. 3  (a) Representing an image as a face graph using (b) 14 vertex types and (c) 4 edge types.

Attributes of group members enable us to discriminate the type of a group, even though we do not know their names or relationships. In addition, understanding each person's position is informative for inferring physical and relational closeness among people. Chen et al. [2] showed that knowing only gender, age, and face positions as attributes of group members works effectively for a binary classification of family and non-family photos. Our approach is also based on this approach, and represents a group photo as a face graph, elaborated below.

Face graphs. Figure 3 illustrates an example of representing a group photo as a face graph. Each node of the graph corresponds to a person in the group photo, and is associated with a vertex label describing age and gender. Each edge between two nodes encodes the relative position between two people.

There are 14 different types describing age and gender for each vertex label. The age ranges from 0 to 75 years old, and is categorized into seven age types. There are two gender types, male and female, visualized by squares and circles, respectively, in Fig. 3(b). The combinations of age and gender result in 14 different types. Identifying faces and their attributes has been well studied [23, 24], and APIs performing these operations are available, as mentioned in Section 1.
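For concreteness, the following minimal sketch maps an estimated (age, gender) pair to one of the 14 vertex labels. The specific bin boundaries and representative ages are illustrative assumptions, since the paper only states that ages from 0 to 75 are split into seven groups.

```python
# A minimal sketch: 7 age bins x 2 genders = 14 vertex labels (e.g., "5m", "28f").
# The bin boundaries and representative ages below are illustrative assumptions only.
AGE_BINS = [(0, 2, 1), (3, 7, 5), (8, 12, 10), (13, 19, 16),
            (20, 36, 28), (37, 65, 50), (66, 75, 70)]   # (low, high, representative age)

def vertex_label(age: int, gender: str) -> str:
    """Map an estimated (age, gender) pair to one of the 14 vertex labels."""
    for low, high, rep in AGE_BINS:
        if low <= age <= high:
            return f"{rep}{'f' if gender == 'female' else 'm'}"
    raise ValueError("age outside the modeled 0-75 range")

print(vertex_label(28, 'female'), vertex_label(5, 'male'))   # -> 28f 5m
```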

Most previous works used the Euclidean distance in image space, i.e., pixel distance, to measure the closeness between persons in group photos [3, 5, 12, 13]. Unfortunately, it is known not to be invariant to image scale, face scale, distance to the camera, or face orientation. Instead, we use an order distance that indicates how closely people stand to each other. The order distance has been demonstrated to be more stable than the pixel distance with respect to these factors [2]. The order distance is computed as the path length between vertices on a minimum spanning tree (MST) generated from a face graph. This order distance is used for each edge label, as in Fig. 3(c).
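The sketch below illustrates the order distance as the hop count between two faces on an MST. It assumes the networkx package and assumes that the MST is built over pairwise pixel distances of face centers, a detail the paper does not spell out.

```python
# A minimal sketch of the order distance, assuming networkx and an MST built from
# pairwise pixel distances of face centers (assumption, not stated in the paper).
import itertools
import networkx as nx

def order_distances(face_centers):
    """face_centers: list of (x, y) face positions; returns all pairwise order distances."""
    g = nx.Graph()
    for (i, p), (j, q) in itertools.combinations(enumerate(face_centers), 2):
        # Pixel distance is used only to build the MST, not as the final edge label.
        g.add_edge(i, j, weight=((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5)
    mst = nx.minimum_spanning_tree(g)
    # Order distance = number of hops between two vertices on the MST.
    return dict(nx.all_pairs_shortest_path_length(mst))

dists = order_distances([(10, 40), (60, 42), (200, 45), (230, 50)])
print(dists[0][3])   # hop count from face 0 to face 3 on the MST (3 for this layout)
```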

Bag-of-Face-subGraph (BoFG). Once we represent group photos as face graphs, we extract frequent subgraphs and regard them as BoFG features for classification. BoFG has been proposed as a useful feature for comparing the structures of group photos. It helps to infer the type of a group by using substructures of groups. For example, in Fig. 3, the edges between the two vertices 28f and 28m (i.e., a mother–father relationship), and between 28f and 5m (i.e., a mother–son relationship), provide additional information on social relationships beyond each node of those edges; f and m represent the female and male gender types, respectively.

Subgraph enumeration via gSpan. The prior work regarded frequent subgraphs as BoFG features, and generated such subgraphs by frequent subgraph mining, specifically the gSpan method [25]. Most prior approaches to frequent subgraph mining [17] initially generate candidates of frequent subgraphs and adopt a pruning process to remove false positives. The pruning process, unfortunately, has a heavy computational cost, because it requires subgraph isomorphism testing.

gSpan, adopted in the prior classification system [2], ameliorated this computational overhead by utilizing two techniques, DFS lexicographic order and minimal DFS codes. Specifically, we first traverse an input graph, G, in a depth-first search (DFS) and assign an incrementally increasing visiting order to each newly visited vertex. Whenever we traverse an edge from $v_m$ to $v_n$ of the graph G, we represent the traversed edge as a 5-tuple DFS code:

$$(m,\; n,\; L_m,\; L_{m,n},\; L_n),\qquad(1)$$

where m and n are vertex indices computed by the visiting order during the DFS traversal, $L_m$ and $L_n$ are the vertex labels of $v_m$ and $v_n$, respectively, and $L_{m,n}$ is the edge label associated with the edge.
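As a small illustration of Eq. (1) and the minimal-DFS-code idea explained next, the sketch below compares two DFS codes of the same one-edge subgraph. The explicit label ordering mirrors the one used in Fig. 4 and is otherwise an assumption.

```python
# A small sketch of the 5-tuple DFS code of Eq. (1). The label order is given
# explicitly (as in Fig. 4: 5f < 5m < 28f < 28m and edge labels 0 < 1 < 2),
# since plain string comparison would not respect it.
VERTEX_ORDER = {'5f': 0, '5m': 1, '28f': 2, '28m': 3}
EDGE_ORDER = {'0': 0, '1': 1, '2': 2}

def code_key(code):
    """Turn a DFS code (list of (m, n, L_m, L_mn, L_n) tuples) into a sortable key."""
    return [(m, n, VERTEX_ORDER[lm], EDGE_ORDER[le], VERTEX_ORDER[ln])
            for (m, n, lm, le, ln) in code]

def is_minimal(code, other_codes):
    """A subgraph is kept only if no isomorphic traversal yields a smaller DFS code."""
    return all(code_key(code) <= code_key(other) for other in other_codes)

code_b = [(0, 1, '5f', '0', '28f')]   # traversal starting at the 5f vertex
code_c = [(0, 1, '28f', '0', '5f')]   # isomorphic traversal starting at 28f
print(is_minimal(code_b, [code_c]))   # True  -> minimal DFS code, kept
print(is_minimal(code_c, [code_b]))   # False -> redundant, pruned
```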

A graph, however, can have multiple DFS codes depending on the traversal order of vertices and edges. gSpan defines a DFS lexicographic order computed from the labels of vertices and edges, and uses the DFS code corresponding to the minimal lexicographic order of the graph G. In this way, we can remove redundant subgraphs and keep only one subgraph among its isomorphic subgraphs.

To check subgraph isomorphism, we simply look at the DFS code of a subgraph, $G_s$, to see whether the code is equal to or bigger than ones generated by prior subgraphs. If so, this indicates that $G_s$ is a redundant subgraph, which is isomorphic to a prior subgraph. An illustration of generating DFS codes and the pruning process is shown in Fig. 4.

Fig. 4  This figure shows the process of generating all the subgraphs having one edge or more in gSpan. During the enumeration of subgraphs, gSpan prunes subgraphs once their DFS codes are equal to or bigger than prior ones. We highlight three subgraphs labelled (a), (b), and (c), and their DFS codes below. 5f is a 5-year-old female, while 5m is a 5-year-old male. Let the lexicographic orders of vertex and edge labels be 5f < 5m < 28f < 28m and 0 < 1 < 2. Note that subgraphs (b) and (c) are isomorphic to each other. However, the DFS code of (c) is not a minimal DFS code because it is bigger than that of (b). In this manner, the search space can be pruned; the dotted subgraphs are pruned during the DFS-based expansion.

To define frequently appearing subgraphs, gSpan requires a user-defined parameter, known as the minimum frequency. We consider all distinct subgraphs whose frequency counts are bigger than the minimum frequency to be features of the BoFG.
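A minimal sketch of this thresholding step, assuming each enumerated subgraph has already been reduced to a canonical (minimal) DFS code that can serve as a dictionary key:

```python
# Keep every distinct subgraph whose count exceeds the user-defined minimum frequency.
from collections import Counter

def frequent_subgraphs(subgraph_counts: Counter, min_freq: int) -> set:
    """subgraph_counts maps a canonical DFS code (a tuple of 5-tuples) to its count."""
    return {code for code, count in subgraph_counts.items() if count > min_freq}

# Hypothetical counts over a face-graph database.
counts = Counter({((0, 1, '28f', '0', '28m'),): 120,
                  ((0, 1, '28f', '0', '5m'),): 85,
                  ((0, 1, '5f', '1', '5m'),): 4})
print(len(frequent_subgraphs(counts, min_freq=10)))   # -> 2; the rare pattern is dropped
```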

The aforementioned method focuses on extracting frequency-based subgraphs and has some limitations for graph classification. The extracted frequent subgraphs may not reflect structural differences between classes. A similar problem arises in text classification: for instance, “a” and “the” are among the most commonly appearing words, but they are not discriminative words for document classification. Moreover, the minimum frequency of subgraphs for defining BoFGs has to be picked through a tedious trial-and-error process to achieve high accuracy.

To address these drawbacks of using frequently appearing subgraphs, we propose to use discriminative subgraphs: we adopt a recent subgraph mining method, CORK [26], which extracts such discriminative subgraphs, and apply it to our classification problem of group photos. Additionally, we further improve the classification accuracy by adopting and tailoring the TF-IDF normalization scheme to our problem.

    4 Our approach

In this section, we explain our approach for classifying group photos into family and non-family types.

    4.1 Overview

Figure 5 shows the overview of our method. As an offline process, we first generate face graphs from the group photos in a training set and extract discriminative subgraphs as Bag-of-Face-subGraph (BoFG) features from the face graphs. We utilize the family and non-family labels associated with the training images. We then extract a BoFG feature for each photo and normalize the feature by using the TF-IDF weighting. Through discriminative learning, we finally construct an SVM classifier.

When a query image is provided, we represent it as a face graph, and extract and normalize a BoFG feature from the graph. We then estimate the query's label by utilizing the pre-trained classifier.

Our work adopts face graphs and their subgraphs as BoFG features for the classification problem (Section 3). To achieve higher accuracy in an efficient manner, we additionally propose using discriminative subgraphs (Section 4.2), inspired by a recent near-optimal selection method [26]. We also normalize BoFG features using the term frequency and inverse document frequency, i.e., the TF-IDF weighting scheme (Section 4.3).
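The offline stage can be summarized with the following sketch; extract_bofg and tfidf_normalize are hypothetical placeholders for the mining (Section 4.2) and weighting (Section 4.3) steps, and only the SVM part calls a real library (scikit-learn, an assumed choice).

```python
# A high-level sketch of the offline training pipeline under stated assumptions.
import numpy as np
from sklearn.svm import SVC

def train_classifier(face_graphs, labels, extract_bofg, tfidf_normalize):
    """face_graphs: training face graphs; labels: 1 for family, 0 for non-family."""
    vocabulary = extract_bofg(face_graphs, labels)          # discriminative subgraphs (CORK)
    X = np.array([tfidf_normalize(g, vocabulary) for g in face_graphs])
    clf = SVC(kernel='linear')                              # linear-kernel SVM (Section 5)
    clf.fit(X, np.asarray(labels))
    return clf, vocabulary
```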

4.2 Discriminative subgraph mining

We would like to identify discriminative subgraphs that are characteristic features of each category. We have identified similar issues in data mining, and found that CORK [26] works well for our problem.

    Fig.5 The overview of our approach.The red boxes indicate the main contributions of our method.

CORK considers statistical significance to select discriminative subgraphs. It defines a new measure that counts the number of features that are not helpful for classification among the candidate features. This measure can be integrated into gSpan as a culling method. It can reduce the number of features while preserving classification performance, and can prune the search space without relying upon a manually tuned frequency threshold.

The near-optimality of CORK comes from a submodular quality function, q(·), used with greedy forward feature selection. The function q(·) considers the presence or absence of each subgraph in each class. q(·) for the set containing a subgraph, S, is defined as follows:

$$q(\{S\}) = -\left(A_{S_0}\cdot B_{S_0} + A_{S_1}\cdot B_{S_1}\right),\qquad(2)$$

where A and B are the two classes in a dataset. $A_{S_0}$ is the number of images of class A that do not contain the subgraph set {S}, and $A_{S_1}$ is the number of images of class A that contain the subgraph set {S}. The subscripts $S_0$ and $S_1$ are used in the same manner for the other class B.

When a subgraph appears, or does not appear, in both classes simultaneously, it can be considered a non-discriminative feature between the two classes. To capture this observation, $A_{S_0}$ and $B_{S_0}$ are multiplied together; the same reasoning applies to the product of $A_{S_1}$ and $B_{S_1}$. In this context, a feature becomes more discriminative as the quality function q(·) becomes higher. Figure 6 shows examples of the quality function for two subgraphs in classes A and B.
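A tiny sketch that evaluates Eq. (2) from per-class hit counts and reproduces the numbers of Fig. 6:

```python
# Evaluate q({S}) of Eq. (2) for a subgraph with the given per-class occurrence counts.
def quality(a_hits, a_total, b_hits, b_total):
    """q({S}) = -(A_S0 * B_S0 + A_S1 * B_S1)."""
    a_miss, b_miss = a_total - a_hits, b_total - b_hits
    return -(a_miss * b_miss + a_hits * b_hits)

print(quality(a_hits=3, a_total=3, b_hits=2, b_total=3))   # q({S}) = -(0*1 + 3*2) = -6
print(quality(a_hits=3, a_total=3, b_hits=0, b_total=3))   # q({T}) = -(0*3 + 3*0) = 0
```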

While generating subgraphs, we commonly expand a subgraph S into another one, T, by adding a neighboring edge. During this incremental process, suppose that we have already decided to include S in our feature set. We then need to check the quality of adding the newly expanded feature T on top of S. As a result, we need to evaluate q({T}).

Fig. 6  A and B are two different classes in a given dataset. a1–a3 and b1–b3 are images in classes A and B, respectively. Each indicator is 1 if its corresponding subgraph appears in the image, and 0 otherwise. Referring to Eq. (2), q({S}) = −(0·1 + 3·2) = −6 and q({T}) = −(0·3 + 3·0) = 0. As a result, the subgraph T has higher discriminative power than S.

Unfortunately, this process can require an excessive amount of running time, since as the number of features increases to N, the number of possible feature combinations can grow exponentially, up to $2^N$.

To accelerate this process, CORK relies on a pruning criterion. Specifically, an upper bound of the quality function is derived based on three possible cases when we consider a supergraph T of a subgraph S. One such case is that images from class A no longer contain the supergraph T, while images in the other class still do, so only their indicator values are affected. The second case is the first case applied in the reverse way to classes A and B. The third case is that nothing changes. By considering these three cases, the upper bound of the quality function is derived as follows [26, Theorems 2.2, 2.3]:

$$q(\{T\}) \leq q(\{S\}) + \max\!\left\{\,A_{S_1}\!\left(B_{S_1}-B_{S_0}\right),\; B_{S_1}\!\left(A_{S_1}-A_{S_0}\right),\; 0\,\right\}.\qquad(3)$$

While expanding subgraphs, we prune the children of a supergraph T expanded from the subgraph S when the quality function of T is equal to this upper bound. This culling criterion is adopted because it is then guaranteed that we cannot find any supergraph of T whose quality function is higher than the upper bound in the aforementioned inequality. This approach has been proven to identify discriminative subgraphs whose quality function values are bigger than a certain lower bound [26, Theorem 2.1]. Furthermore, unlike gSpan, users do not need to provide manually tuned parameters for identifying discriminative subgraphs.
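The sketch below evaluates this bound for a subgraph S from its per-class hit and miss counts; note that the closed form used here is our reconstruction of Eq. (3) from the three cases above.

```python
# Evaluate the pruning bound for a subgraph S from per-class hit/miss counts.
def quality(a_hits, a_miss, b_hits, b_miss):
    """q({S}) of Eq. (2) from per-class hit/miss counts."""
    return -(a_miss * b_miss + a_hits * b_hits)

def quality_upper_bound(a_hits, a_miss, b_hits, b_miss):
    """Best q achievable by any supergraph T grown from S (reconstructed Eq. (3))."""
    return quality(a_hits, a_miss, b_hits, b_miss) + max(
        a_hits * (b_hits - b_miss),   # case 1: all of S's hits in class A drop out for T
        b_hits * (a_hits - a_miss),   # case 2: all of S's hits in class B drop out for T
        0)                            # case 3: nothing changes

# Counts of subgraph S in Fig. 6 (class A: 3 hits / 0 misses, class B: 2 hits / 1 miss):
print(quality_upper_bound(3, 0, 2, 1))   # -6 + max(3*(2-1), 2*(3-0), 0) = 0
```

For the counts of S in Fig. 6, the bound evaluates to 0, which is consistent with q({T}) = 0 computed there.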

    4.3 TF-IDF normalization

Once we extract features, we normalize them. TF-IDF [27] is one of the commonly adopted normalization schemes, mainly for document classification. We apply this normalization to our feature, which resembles the bag-of-word model. Inspired by the TF-IDF normalization scheme, we give higher weights to features that occur more frequently in an image, and deemphasize features that appear in more images.

In particular, our TF-IDF weighting scheme for a subgraph s occurring in an image i, given an image database D, is defined as follows:

$$\mathrm{tfidf}(s,i,D) = \log\left(1 + f_{s,i}\right)\cdot\log\frac{N}{1 + n_s},\qquad(4)$$

where $f_{s,i}$ is the number of occurrences of the subgraph s in the image i, N is the number of images in the database D, and $n_s$ is the number of images containing the subgraph s. If $f_{s,i}$ were zero, the TF term would be undefined; to prevent this, a small constant, 1, is added. Similarly, to avoid division by zero, we also add the small constant 1 to the denominator of the IDF term.
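A minimal sketch of this weighting, following the log-based reconstruction of Eq. (4) above (so the exact functional form is an assumption consistent with the stated +1 terms):

```python
# Weight of subgraph s in image i, given its in-image count and document frequency.
import math

def tfidf_weight(f_si, n_s, n_images):
    tf = math.log(1 + f_si)                 # +1 keeps the term defined when f_{s,i} = 0
    idf = math.log(n_images / (1 + n_s))    # +1 avoids division by zero when n_s = 0
    return tf * idf

# e.g., with the 3503 images of the extended dataset (1613 family + 1890 non-family)
print(tfidf_weight(f_si=2, n_s=30, n_images=3503))
```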

    5 Results

We implemented the prior method and ours for discovering family photos on a machine with a 3.47 GHz Xeon CPU and 192 GB of main memory. We evaluate the effectiveness of computing and using discriminative feature selection along with TF-IDF normalization. For classification, we use a support vector machine (SVM) with a linear kernel and 5-fold cross validation.
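A sketch of this evaluation protocol using scikit-learn (an assumed library choice), where X holds the TF-IDF-normalized BoFG features and y the family/non-family labels:

```python
# Mean accuracy of a linear-kernel SVM under 5-fold cross validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def evaluate(X: np.ndarray, y: np.ndarray) -> float:
    scores = cross_val_score(SVC(kernel='linear'), X, y, cv=5, scoring='accuracy')
    return float(scores.mean())
```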

    5.1 Datasets

To validate our approach, we use the existing dataset provided by Chen et al. [2]. We additionally test the different methods against a new, larger, and more diverse dataset, which is rearranged from the public dataset [3], as was also done in the previous work. Based on the protocol laid out in the prior work, we obtain a “soft” ground truth containing 1613 family photos and 1890 non-family photos for our new, extended dataset. The “soft” ground truth for the new dataset is generated without any prior knowledge such as looking at the labels of those images.

The new extended dataset shares images with Chen et al.'s dataset, since both datasets are arranged from the public dataset. We also measure the images common to both or either one of the two datasets (Table 1). The difference from the previous dataset is that the new dataset has 1073 more photos and includes a wider set of family types such as siblings, single parent, nuclear family, and extended family, as shown in Fig. 1.

Note that these images from the public dataset have labels, namely group, wedding, and family types. Our methods independently predict the family types of these images, and we measure accuracy by comparing the predicted labels with the original labels associated with the public dataset.

Table 1  The composition of Chen et al.'s dataset ((a)+(b)) and ours ((b)+(c)). (b) indicates the number of images co-occurring in both Chen et al.'s dataset and ours. Many family and non-family images co-occur in ours and Chen et al.'s, although we prepared the extended dataset without looking at their original labels

    We have also considered other datasets related to group photos[4,8].Unfortunately,these datasets do not contain labels directly for family and non-family types.As a result,we were unable to use them for our problem.

    5.2 Effects of discriminative subgraphs

We test the accuracy of different methods, including ours and the gSpan method [2]. We have implemented the prior method by following the guidelines of the original paper [2]. For gSpan, we generate up to 10,000 frequent subgraphs, sort them in order of document frequency, and select them as the BoFG. To achieve the best accuracy for the gSpan method, users are required to specify the number of subgraphs. This approach relies on many trial-and-error procedures, while our method automatically constructs a set of discriminative subgraphs.

It was unclear to us how the prior method uses the document frequency (DF) term, because there is an ambiguity in whether the DF term is evaluated after or during the running of gSpan (we have consulted the authors of the gSpan technique for a faithful re-implementation of the gSpan method). We thus experiment with both cases. gSpan+DF(1) and gSpan+DF(2) in Table 2 correspond to applying DF after and during gSpan, respectively. In Table 2, our method finds the maximal number of subgraphs without using the minimum frequency.

Our methods with and without the TF-IDF scheme on Chen et al.'s dataset identify a small set of discriminative subgraphs (i.e., 76 subgraphs), and achieve 80.61% and 78.65% accuracy, respectively.

    Table 2 The accuracy of different methods in Chen et al.'s and our datasets

    Table 3 The accuracy of DF vs.TF-IDF in Chen et al.'s and our extended datasets

Our method on the extended dataset achieves 77.26%, and 79.34% with the TF-IDF scheme. The gSpan+DF(1) and gSpan+DF(2) methods show inferior results to our method in most cases. Interestingly, the prior methods show even lower accuracy as they use higher dimensions. This is mainly because frequent subgraphs may not be discriminative.

    5.3 Effects of TF-IDF normalization

We measure the accuracy of the different methods with and without TF-IDF normalization. Since gSpan+DF(2) achieves higher accuracy than gSpan+DF(1), we show the results of gSpan+DF(2) and ours for this test.

For both gSpan+DF(2) and ours, using TF-IDF instead of DF improves the classification accuracy in most cases. In particular, our method using TF-IDF achieves the highest accuracy, 79.34%, on the extended dataset.

    5.4 Comparison of subgraphs

We check the number of subgraphs co-occurring in the BoFG features generated by both gSpan and our method. This investigation helps us understand how many dimensions the prior methods require in order to obtain the discriminative features extracted by our method. Even with hundreds of thousands of dimensions extracted by gSpan, some of the discriminative subgraphs extracted by our method are not identified (Table 4).

We also measure how well the query images used in the test phase are represented by the extracted features. For this, we count how many query images are represented by a null vector, indicating that they are not represented by any features extracted by gSpan or our method (Table 5). As a result, we can conclude that the feature extraction of our method performs better than the other tested methods (gSpan+DF(1) and gSpan+DF(2)).
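The null-vector check of Table 5 amounts to counting all-zero feature rows, as in this small sketch:

```python
# Count query images whose BoFG feature vector is all zeros (i.e., unrepresented).
import numpy as np

def count_null_vectors(features: np.ndarray) -> int:
    """features: (num_query_images, num_subgraphs) BoFG matrix for the test set."""
    return int(np.sum(~features.any(axis=1)))

print(count_null_vectors(np.array([[0, 2, 0], [0, 0, 0], [1, 0, 3]])))   # -> 1
```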

    6 Conclusions and future work

We have proposed a novel classification system utilizing discriminative subgraph mining to achieve high accuracy. We represent group photos as graphs with age, gender, and face position, and then extract discriminative subgraphs and construct BoFG features. For extracting discriminative subgraphs, we proposed to use a recent discriminative subgraph mining method, CORK, that adopts a quality function with near-optimal guarantees. We additionally proposed to use TF-IDF normalization to better support the characteristics of BoFG features. To validate the benefits of our approach, we have tested different methods, including ours, against two different datasets, including our new, extended dataset. Our method achieves higher accuracy at the same dimensionality than the prior methods. Furthermore, our method achieves higher or similar accuracy to the prior work, which relies on manual tuning and requires higher dimensionality.

    Table 4 The number of common subgraphs between gSpan and ours in Chen et al.'s and our datasets

Table 5  The number of query images that are represented by the null vector in Chen et al.'s and our extended datasets

There are many interesting future directions. Since our work is based on the concept of social relationships, we consider subgraphs consisting of at least two nodes. However, even a single node can provide useful social cues. Incorporating single nodes in BoFGs and investigating their effects would be interesting. We would also like to investigate recent deep learning techniques that learn low-level features and classification functions. Due to the lack of sufficient training data, we did not consider recent deep learning techniques, but this approach should be worthwhile for achieving higher accuracy.

    Acknowledgements

We are thankful to our lab members for their valuable feedback, and to Dr. Yan-Ying Chen for sharing her dataset. This work was supported in part by MSIP/IITP (Nos. R0126-16-1108, R0101-16-0176) and MSIP/NRF (No. 2013-067321).

References

[1] Krizhevsky, A. Learning multiple layers of features from tiny images. Technical Report. University of Toronto, 2009.

[2] Chen, Y.-Y.; Hsu, W. H.; Liao, H.-Y. M. Discovering informative social subgraphs and predicting pairwise relationships from group photos. In: Proceedings of the 20th ACM International Conference on Multimedia, 669–678, 2012.

    [3]Gallagher,A.C.;Chen,T.Understanding images of groups of people.In:Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,256–263,2009.

    [4]Murillo,A.C.;Kwak,I.S.;Bourdev,L.;Kriegman, D.;Belongie,S.Urban tribes:Analyzing group photos from a social perspective.In:Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops,28–35,2012.

    [5]Wang,G.;Gallagher,A.;Luo,J.;Forsyth,D.Seeing people in social context:Recognizing people and social relationships.In:Lecture Notes in Computer Science, Vol.6315.Daniilidis,K.;Maragos,P.;Paragios,N. Eds.Springer Berlin Heidelberg,169–182,2010.

    [6]Chiu,Y.-I.;Li,C.;Huang,C.-R.;Chung,P.-C.; Chen,T.Efficient graph based spatial face context representation and matching.In:Proceedings of IEEE International Conference on Acoustics,Speech and Signal Processing,2001–2005,2013.

    [7]Gallagher,A.C.;Chen,T.Finding rows of people in group images.In:Proceedings of IEEE International Conference on Multimedia and Expo,602–605,2009.

    [8]Choi,W.;Chao,Y.-W.;Pantofaru,C.;Savarese,S. Discovering groups of people in images.In:Lecture Notes in Computer Science,Vol.8692.Fleet,D.; Pajdla,T.;Schiele,B.;Tuytelaars,T.Eds.Springer International Publishing,417–433,2014.

    [9]Shu,H.;Gallagher,A.;Chen,H.;Chen,T.Face–graph matching for classifying groups of people.In: Proceedings of IEEE International Conference on Image Processing,2425–2429,2013.

[10] Chiu, Y.-I.; Hsu, R.-Y.; Huang, C.-R. Spatial face context with gender information for group photo similarity assessment. In: Proceedings of the 22nd International Conference on Pattern Recognition, 2673–2678, 2014.

    [11]Shimizu,K.;Nitta,N.;Nakai,Y.;Babaguchi,N. Classification based group photo retrieval with bag of people features.In:Proceedings of the 2nd ACM International Conference on Multimedia Retrieval, Article No.6,2012.

    [12]Zhang,T.;Chao,H.; Willis,C.; Tretter,D. Consumer image retrieval by estimating relation tree from family photo collections.In:Proceedings of the ACM International Conference on Image and Video Retrieval,143–150,2010.

[13] Singla, P.; Kautz, H.; Luo, J.; Gallagher, A. Discovery of social relationships in consumer photo collections using Markov logic. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 1–7, 2008.

    [14]Song,Z.;Wang,M.;Hua,X.-s.;Yan,S.Predicting occupation via human clothing and contexts.In: Proceedings of International Conference on Computer Vision,1084–1091,2011.

    [15]Lowe, D.G.Distinctive image features from scale-invariant keypoints.International Journal of Computer Vision Vol.60,No.2,91–110,2004.

    [16]Oliva,A.;Torralba,A.Modeling the shape of the scene:A holistic representation of the spatial envelope. International Journal of Computer Vision Vol.42,No. 3,145–175,2001.

    [17]Jiang,C.;Coenen,F.; Zito,M.A survey of frequent subgraph mining algorithms.The Knowledge Engineering Review Vol.28,No.1,75–105,2013.

    [18]Cook,S.A.The complexity of theorem-proving procedures.In:Proceedings of the 3rd Annual ACM Symposium on Theory of Computing,151–158,1971.

    [19]Wang,M.;Lai,Y.-K.;Liang,Y.;Martin,R.R.;Hu, S.-M.BiggerPicture:Data-driven image extrapolation using graph matching.ACM Transactions on Graphics Vol.33,No.6,Article No.173,2014.

[20] Chen, X.; Zhou, B.; Guo, Y.; Xu, F.; Zhao, Q. Structure guided texture inpainting through multi-scale patches and global optimization for image completion. Science China Information Sciences Vol. 57, No. 1, 1–16, 2014.

    [21]Li,H.; Wu,W.; Wu,E.Robustinteractive image segmentation via graph-based manifold ranking. Computational Visual Media Vol.1,No.3,183–195, 2015.

[22] Hu, S.-M.; Zhang, F.-L.; Wang, M.; Martin, R. R.; Wang, J. PatchNet: A patch-based image representation for interactive library-driven image editing. ACM Transactions on Graphics Vol. 32, No. 6, Article No. 196, 2013.

[23] Taigman, Y.; Yang, M.; Ranzato, M.; Wolf, L. DeepFace: Closing the gap to human-level performance in face verification. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1701–1708, 2014.

    [24]Zhao,W.;Chellappa,R.;Phillips,P.J.;Rosenfeld,A. Face recognition:A literature survey.ACM Computing Surveys Vol.35,No.4,399–458,2003.

    [25]Yan,X.;Han,J.gSpan:Graph-based substructure pattern mining.In:Proceedings of the 2002 IEEE International Conference on Data Mining,721–724, 2002.

    [26]Thoma,M.;Cheng,H.;Gretton,A.;Han,J.;Kriegel, H.-P.;Smola,A.;Song,L.;Yu,P.S.;Yan,X.; Borgwardt,K.M.Discriminative frequent subgraph mining with optimality guarantees.Statistical Analysis and Data Mining:The ASA Data Science Journal Vol. 3,No.5,302–318,2010.

    [27]Jones,K.S.A statistical interpretation of term specificity and its application in retrieval.Journal of Documentation Vol.28,No.1,11–21,1972.

Changmin Choi is in a start-up company. He received his M.S. degree from the School of Computing at Korea Advanced Institute of Science and Technology (KAIST), and his B.A. degree from the Business School at Hanyang University. His research interest is understanding group photos in social media.

YoonSeok Lee is an M.S. student in the School of Computing at Korea Advanced Institute of Science and Technology (KAIST); he received his B.S. degree in computer science from KAIST in 2014. His research interests lie in image classification, image representation, and hashing techniques.

Sung-Eui Yoon is currently an associate professor at Korea Advanced Institute of Science and Technology (KAIST). He received his B.S. and M.S. degrees in computer science from Seoul National University in 1999 and 2001, respectively. His main research interests are designing scalable graphics, image search, and geometric algorithms. He has given numerous tutorials on proximity queries and large-scale rendering at various conferences including ACM SIGGRAPH and IEEE Visualization. Some of his work received a distinguished paper award at Pacific Graphics, invitations to IEEE TVCG, an ACM student research competition award, and other domestic research-related awards. He is a senior member of IEEE, and a member of ACM and KIISE.

    Open Access The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License(http:// creativecommons.org/licenses/by/4.0/), which permits unrestricted use,distribution,and reproduction in any medium,provided you give appropriate credit to the original author(s)and the source,provide a link to the Creative Commons license,and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript,please go to https://www. editorialmanager.com/cvmj.

1 Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon, Republic of Korea. E-mail: sungeui@gmail.com (✉).

    Manuscript received:2016-02-01;accepted:2016-04-13
