
    Rethinking random Hough Forests for video database indexing and pattern search

    Computational Visual Media, 2016, Issue 2


    Research Article

    Rethinking random Hough Forests for video database indexing and pattern search

    Craig Henderson1, Ebroul Izquierdo1
    © The Author(s) 2016. This article is published with open access at Springerlink.com

    Abstract Hough Forests have demonstrated effective performance in object detection tasks, which has potential to translate to exciting opportunities in pattern search. However, current systems are incompatible with the scalability and performance requirements of an interactive visual search. In this paper, we pursue this potential by rethinking the method of Hough Forests training to devise a system that is synonymous with a database search index and can yield pattern search results in near real-time. The system performs well on simple pattern detection, demonstrating the concept is sound. However, detection of patterns in complex and crowded street-scenes is more challenging. Some success is demonstrated in such videos, and we describe future work that will address some of the key questions arising from our work to date.

    Keywords Hough Forests; pattern detection; pattern search; machine learning

    1 Multimedia and Vision Research Group, School of Electronic Engineering and Computer Science, Queen Mary University of London, London, E1 4NS, UK. E-mail: C. Henderson, c.d.m.henderson@qmul.ac.uk; E. Izquierdo, e.izquierdo@qmul.ac.uk.

    Manuscript received: 2015-11-24; accepted: 2015-12-18

    1 Introduction

    Randomised Hough Forests were introduced in 2009 and demonstrated to detect instances of an object class, such as cars or pedestrians [1]. This, and subsequent works, have shown that Hough Forests can perform effectively in object detection tasks, and we see potential to translate their use to exciting opportunities in pattern search in large-scale data corpora.

    Current systems are, however, incompatible with the scalability and performance requirements of an interactive visual search system. Contemporary research trains a forest using variants of an object, and regression is performed using an unseen image in which instances of the pattern are probabilistically identified. We propose to conceptually invert the use of the forest by using the corpus data that is to be searched as training data, and a query image of an unseen pattern to be searched for (Fig. 1). The new schema results in a very fast search of a large set of data, whereby a single pass of a small query image through the forest yields probabilistic hypotheses of pattern detection in every image in the training data. Conventionally, regression is performed on each image, and while each one can take only a few hundred milliseconds, the time to run over a large corpus invalidates its use for search at large scale.

    We make the following contributions:

    1. a new approach to Hough Forests training and pattern detection is described, suitable for large-scale pattern search in an interactive system;

    2. a technique to train forests without explicit negative training images, eliminating the need for an otherwise arbitrary set of negative training images to be used to counter positive training images;

    3. a method to select positive and negative training patches to filter noise from the background of images;

    4. a flexible and scalable pattern search index that data can be added to or removed from at any time, without need for any re-training of the existing forests.

    1.1 Motivation

    We are interested in the problem of searching a large corpus of video and still images for a previously unknown distinctive pattern such as a fashion or corporate logo, tattoo, or bag for left-luggage search. The genericity of the problem definition eliminates the opportunity to restrict the search, for example, by reducing the problem using a person detector to first find a suitable search area. The mixture of video and still images in the database also restricts the general use of spatio-temporal information available only in the video subset of data. We therefore seek a generic solution to a search problem that is flexible and fast enough to be an interactive search, with user input defining a previously unknown pattern for which to search.

    Fig. 1 (a) A typical forest training and regression process and (b) our novel process for video indexing.

    1.2 Nomenclature

    Literature to date describes Hough Forests used in object detection. We are interested in a more general pattern detection regardless of the structure of the pattern, and therefore refer to a pattern where other literature refers to an object. Our data corpus consists of video sequences and collections of related still images. Each video or collection of images is treated as a unit of data for indexing. For brevity, we refer to the unit as a video, where it can also mean a collection of images.

    2 Background

    Hough Forests [2] use a random forest framework [3] that is trained to learn a mapping from densely-sampled D-dimensional feature cuboids in the image domain to their corresponding votes in a Hough space H ⊂ ℝ^H. The Hough space accumulates votes for a hypothesis h(c, x, s) of an object belonging to class c ∈ C, centred on x ∈ ℝ^D, and with size s. Object hypotheses are then formed by local maxima in the Hough space when voting is complete. Votes contributing to a hypothesis are called support. The term cuboid refers to a local image patch (D = 2) or video spatio-temporal neighbourhood (D = 3) depending on the task. Introduced in 2009 [1], Hough Forests have gained interest in many areas such as object detection (D = 2) [4-8], object tracking [9], segmentation [5,10], and feature grouping [7]. With D = 3, action recognition is an active research area that has used Hough Forests [11,12], and in Ref. [13], more general randomised forests were used as part of a solution to index short video sequences using spatio-temporal interest points.

    Hough Forests have been shown to be effective in class-specific object detection [1], multi-class object detection [14], tracking object instances [2], and pattern detection [15], and their performance is suitable for real-time tasks, with sub-second regression time on 800×600 dimension images. The high performance and effective accuracy make Hough Forests an attractive proposition for large-scale video database pattern searching. However, the conventional use of forests in object detection tasks does not scale well; a forest is trained with example images of the object or class of object to be found, and then a regression is run on an unseen image to detect the object. In an interactive search system, the query pattern is unknown until runtime, when it is specified by the user. In this paper, we seek to address this conflict, rethinking the method and use of a randomised Hough Forest for high-performance visual search.

    3 Rethinking forests

    In a departure from the established method, we conceptually invert the use of the forest and consider the image domain to consist not of instances of an object class or variations of the pattern to be detected, but of the corpus to be searched. In our case, a forest's image domain is a set of frame images from a single video within the corpus.

    We use a collection of forests to build a complete index of our video and image corpus. Each forest is trained using a single video (Fig. 2), which can be just a few frames or 90,000 frames for a one-hour sequence at 25 fps, and can therefore be considered a sub-index of the database relating exclusively to a single video. Each forest F = {T1, T2, T3, ..., TN} consists of N trees, where each tree Ti is independent of all other trees for training and testing.
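The forest-per-video layout described above can be sketched as follows; this is an illustrative data-structure skeleton, not the authors' implementation, and all class and variable names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class HoughTree:
    # Each tree is trained and tested independently of its siblings.
    max_depth: int = 5

@dataclass
class HoughForest:
    # One forest per video: a self-contained sub-index of the database.
    video_id: str
    n_trees: int = 5
    trees: list = field(default_factory=list)

    def __post_init__(self):
        self.trees = [HoughTree() for _ in range(self.n_trees)]

# The database index is simply a collection of independent forests.
index = {vid: HoughForest(vid) for vid in ["video_a", "video_b"]}
```

Because each forest depends only on its own video, adding or removing an entry in `index` never touches the other forests.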

    Forests are trained using a novel scheme to identify positive and negative training samples (Section 3.2) from the frames of a video, thus removing the usual need for an explicit set of negative training images. Training is the most time-consuming function, and can be done as an offline activity for each video, independently of all other videos in the index. Training each forest is therefore consistent with building a video index in a more conventional retrieval system. A trained forest provides fast access to the patterns contained within the video, such that it is searchable on-demand for unseen patterns of any size and dimension.

    To perform a search, patches are extracted from a query image (or sub-image defined by a query region) using a sliding window, and passed through the forest. Rather than accumulating votes in the two dimensions of the query image, we accumulate votes in three dimensions: the width and height of the training dataset, and the depth of the number of training images. The support of the leaf is used to trace the contributing vote back to the source frames, and the vote is accumulated in each of them (Section 3.3).
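The three-dimensional vote accumulation can be illustrated with a small sketch; the array shapes and the leaf-support format are assumptions made for the example, not details from the paper:

```python
import numpy as np

# Votes are accumulated over (frame index, height, width) of the training
# video rather than over the 2D query image.
n_frames, height, width = 4, 32, 32
hough = np.zeros((n_frames, height, width))

# Suppose a query patch reaches a leaf whose support records the source
# frame and patch position of each contributing training patch.
leaf_support = [(0, 10, 12), (0, 11, 12), (2, 5, 7)]  # (frame, y, x)
leaf_weight = 1.0 / len(leaf_support)  # vote mass split over the support

for frame, y, x in leaf_support:
    hough[frame, y, x] += leaf_weight

# Per-frame evidence for the pattern is the accumulated vote mass.
frame_scores = hough.sum(axis=(1, 2))
```

A single pass of the query patches thus scores every frame of the video at once, which is what makes the search interactive.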

    The independence of the components within a collection of forests is important for large-scale searching, providing scalability and flexibility.

    Scalability. To support large video database search, the index must be highly scalable. The independence of components in the forest collection means that it is massively scalable, and processing can be extended across many cores, processors, and machines. Training trees can be performed in parallel, as there is no dependence between individual trees. Pattern search is less time-consuming, but similarly scalable: each forest can pass the query image patches through all of its trees simultaneously and accumulate the results as they complete.
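The tree-level independence means training parallelises trivially; a minimal sketch using a thread pool, where `train_tree` is a stand-in for the real (CPU-heavy) training routine:

```python
from concurrent.futures import ThreadPoolExecutor

def train_tree(args):
    # Placeholder for training one tree of one forest; no tree depends on
    # any other, so each job is fully independent.
    video_id, tree_index = args
    return (video_id, tree_index, "trained")

# Five trees per forest, one forest per video: all 10 jobs run in parallel.
jobs = [(vid, t) for vid in ["video_a", "video_b"] for t in range(5)]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(train_tree, jobs))
```

In practice a process pool or a cluster scheduler would be used for genuine parallelism, but the job decomposition is the same.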

    Fig. 2 The video database index is a collection of independent forests. Each row represents a forest of five independent trees, trained using frames from a single video sequence.

    Flexibility. New videos can be added easily without need for any re-training of existing forests. A new forest is simply created, trained with the new data, and then added to the collection. If a video is no longer required to be searched, then the relevant forest can simply be removed from the collection and will no longer be included in future searches. No re-training is necessary. Where available, the date of the video can be added as a property of the forest to further increase performance and search result relevance. A user can specify a time frame, and forests containing videos from outside of the date range can easily be excluded from the search.

    3.1 Hough Forests algorithm

    We make some small amendments to the Hough Forests algorithm to achieve our goals. First, an offset vector is conventionally used [14] to translate the centre of the object in the training set to the centre in the regression step. With the inversion of training and query data, voting is accumulated at the patch position in the original frame dimension (Section 3.2); the centre-point offset into a 2D Hough space with the dimensions of the query is not used, so we omit storing the vector. Second, we do not scale images such that the longest spatial dimension of the bounding box is a known length. We work in the original frame dimensions of the video to avoid image data loss caused by resizing. Third, at each leaf, we record the training image to which each patch belongs, and finally, voting weights are accumulated in a multi-dimensional Hough space that includes the frame number.

    3.2 Training the forest

    A forest is trained using sets of patches that are extracted from the training images at random positions. Two sets of training images are conventionally used: a positive set of images containing examples of the object class or pattern to be detected, and a negative training set consisting of images that do not contain any examples. Patches are extracted from each set, yielding positive and negative training patches, respectively. The original authors of Hough Forests object detection [1] and subsequent research use a pre-determined patch size of 16×16 pixels, and extract 50 patches from each training image (resized such that the longest side is 100 pixels) to use in training the forest.

    From each image, 32 feature channels are created: three colour channels of the Lab colour space, the absolute values of the first- and second-order derivatives in the x- and y-directions, and nine HOG [16] channels. Each HOG channel is obtained as the soft bin count of gradient orientations in a 5×5 neighbourhood around a pixel. To increase the invariance under noise and articulations of individual parts, these 16 channels are processed by applying min and max filtration with a 5×5 filter size, yielding 32 feature channels (16 for the min filter and 16 for the max filter).
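The min/max filtration step that doubles 16 base channels to 32 can be sketched as follows; the base channels here are random stand-ins rather than real Lab, derivative, and HOG responses:

```python
import numpy as np
from scipy.ndimage import minimum_filter, maximum_filter

# 16 base channels (Lab, first- and second-order derivatives, 9 HOG bins)
# are expanded to 32 by applying 5x5 min and max filters to each channel.
rng = np.random.default_rng(0)
base = rng.random((16, 64, 64))  # 16 stand-in feature channels

min_ch = np.stack([minimum_filter(c, size=5) for c in base])
max_ch = np.stack([maximum_filter(c, size=5) for c in base])
channels = np.concatenate([min_ch, max_ch])  # 32 feature channels
```

The filtered channels vary more smoothly than the originals, which is what gives the features their tolerance to noise and small articulations.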

    Patches are extracted from random positions within a training image, overlapping to build up a patchwork representation of the image (Fig. 4(b)). It therefore follows that the quantity and size of patches, as well as the size of the training image, determine the coverage. Our recent study [15] demonstrated that some increase in detection accuracy can be achieved by dynamically selecting the size and quantity of training patches without resizing the training images. A large number of patches in a small training image will cause a lot of overlap, leading to redundancy in the forest. Although random forests do not suffer from over-training [3], bloating the trees with excessive patches does impede runtime performance. In an image of dimension 704×625, a saturated coverage using 16×16 dimension patches with one patch in each position on a grid would require 1760 patches, with some patches overlapping because the height is not divisible by the patch height.
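The saturated-coverage count for a frame can be checked with a short worked computation; the function name is illustrative:

```python
import math

def saturated_patch_count(width, height, patch=16):
    # One patch per grid cell; a partial final row/column still needs a
    # (overlapping) patch, hence the ceiling division.
    cols = math.ceil(width / patch)   # 704 / 16 = 44 exactly
    rows = math.ceil(height / patch)  # 625 / 16 = 39.06 -> 40 rows
    return cols * rows

n = saturated_patch_count(704, 625)
```

For the 704×625 frame, the columns divide exactly but the last row of patches must overlap, since 625 is not divisible by 16.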

    However, saturated coverage with little or no overlap is a poor solution for us; the pattern is unlikely to align neatly to a grid, and with each patch casting a single weighted vote, the accumulation to indicate the presence of the pattern will be weak. Positioning patches is therefore a critical step in the algorithm.

    Randomness is used extensively in computer science as a means to avoid bias. Consider the training images stacked one on top of another. If patches were always placed in the same position, then each patch would represent a tunnel view through the images. All image data outside of these tunnels would be ignored. Furthermore, algorithms are usually trained using a corpus of images containing neatly cropped and centred views of the objects. In such cases, if the patch extracted from every image was in the same position, then the model would be trained only on a subset of tunnel-vision views of an object, resulting in over-training and ineffective detection. Variation is therefore important, and randomness is used to achieve variation, in this case to avoid over-training. However, randomness is non-deterministic, and repeated execution of the same algorithm will produce different results, which makes it impossible to measure the effect on accuracy of incremental adjustments. To overcome this, the random number generator can be seeded with a fixed value, to yield a repeatable sequence of pseudorandom numbers. This is the approach adopted in experiments in this paper.
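Seeded patch placement can be sketched in a few lines; the seed value and function name are arbitrary choices for the example:

```python
import numpy as np

def random_patch_positions(width, height, n, patch=16, seed=42):
    # A fixed seed yields a repeatable pseudorandom sequence, so two runs
    # of the algorithm extract patches from identical positions.
    rng = np.random.default_rng(seed)
    xs = rng.integers(0, width - patch, size=n)
    ys = rng.integers(0, height - patch, size=n)
    return list(zip(xs.tolist(), ys.tolist()))

a = random_patch_positions(704, 625, 200)
b = random_patch_positions(704, 625, 200)
# a == b: incremental algorithm changes can now be compared fairly.
```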

    Selecting patches. Patches are extracted from training images at random positions, thus leaving to chance the amount of coverage of the image that is used for training. If the number and size of patches are large relative to the dimensions of the training images, and the random number generator has a uniform distribution, then the chances of the coverage being spread evenly over the image, and being sufficient to capture the image detail, are increased.

    We are interested in indexing video sequences from street-scenes such as closed circuit television (CCTV) images, often filmed from high in the air and, although cluttered with respect to large crowds, still containing non-distinct areas for which searching is unlikely. Random sampling from the image is too hit-and-miss (Fig. 3) to be reliable in extracting patches of use for a search system. Patches are therefore selected based on a statistical measure of how distinctive they may be in any subsequent search. The grey-level co-occurrence matrix (GLCM) is a well-known statistical method of examining texture that considers the spatial relationship of pixels. We use the derived GLCM-Contrast statistic, which measures the local variations within the matrix. For brevity, we refer to GLCM-Contrast simply as the contrast of a patch. For each of the randomly positioned 16×16 patches, the normalised contrast τi of the patch is calculated. A patch is selected as a positive training patch if τi > λ; otherwise the patch is considered too low-contrast for positive training. This selection restricts the random placement of positive training patches to be within high-contrast areas, which are more distinctive and therefore more searchable (see green patches in Fig. 4(b)).

    Fig. 3 Extracting patches at random is unreliable for selecting sufficient training patches to yield good search results. Only 1 of 200 randomly placed patches will extract any of the Adidas® logo on the black hoodie.

    Training the forest with the corpus leaves no sensible choice for negative training images to counter the positive training images. However, the use of the contrast measure has rejected a number of random patches that will not be used for positive training. We therefore use these rejected patches as negative training patches, and overcome a significant problem by providing a contextual set of negative training patches. Each patch is therefore selected as a positive or negative training patch based on the normalised GLCM-Contrast τi of the patch, thus:

    τi > λ ⇒ patch i is positive; τi ≤ λ ⇒ patch i is negative    (2)

    where λ := 0.015 for 16×16 patches in our experiments.
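A minimal sketch of this selection rule, assuming a simple single-offset GLCM (horizontal neighbour pairs, 8 grey levels); the paper's exact normalisation of τ is not reproduced here, so the threshold is applied to the raw contrast statistic purely for illustration:

```python
import numpy as np

def glcm_contrast(patch, levels=8):
    # Quantise grey levels and count horizontal co-occurrence pairs.
    q = (patch * (levels - 1)).astype(int)
    i, j = q[:, :-1].ravel(), q[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (i, j), 1)
    glcm /= glcm.sum()                    # normalise counts to probabilities
    ii, jj = np.indices((levels, levels))
    return float((glcm * (ii - jj) ** 2).sum())  # contrast statistic

lam = 0.015
flat = np.full((16, 16), 0.5)                      # uniform patch: no texture
noisy = np.random.default_rng(1).random((16, 16))  # high local variation

# Positive training patch iff contrast exceeds the threshold (Eq. (2)).
is_positive = [glcm_contrast(p) > lam for p in (flat, noisy)]
```

The uniform patch scores zero contrast and becomes a negative sample; the textured patch passes the threshold and becomes a positive one.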

    Figure 4 shows extracted patch regions that are selected positive (green) and negative (red) in the example image of Fig. 3, when searching for 200 (Fig. 4(a)) and 1760 (Fig. 4(b)) positive patches. The positive patches are clustered in the high-contrast areas of the image, which are those that contain searchable distinctive patterns. Light clothing is selected because the patch contrast is affected by shadows caused by creases, etc. Low-contrast areas are ignored as they are not distinctive patterns. The high density of 1760 patches in Fig. 4(b) shows so much overlapping of patches that the low-contrast and high-contrast regions become visible.

    3.3 Detection

    A forest, trained from all of the frames of a video, is synonymous with an index of patterns for searching the video. A single pass of patches from the query image through the tree finds a probabilistic hypothesis for the pattern's occurrence in each frame. The interactive visual search is therefore very fast. A query region is defined by the user by drawing a rubber band around a pattern of interest, for example, a distinctive pattern or logo on a piece of clothing. A sliding window of size 16×16 (the same as the training patch size) is then passed over the query image, and votes are accumulated for each image in the training set.
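Query-time patch extraction can be sketched as a standard sliding window; the stride is an assumption (the paper specifies only the 16×16 window size):

```python
import numpy as np

def sliding_windows(image, patch=16, stride=4):
    # Every 16x16 window over the query region; each patch would be passed
    # through all trees of every forest in the index.
    h, w = image.shape[:2]
    return [(y, x, image[y:y + patch, x:x + patch])
            for y in range(0, h - patch + 1, stride)
            for x in range(0, w - patch + 1, stride)]

query = np.zeros((48, 64))  # stand-in for a user-drawn query region
patches = sliding_windows(query)
```

A smaller stride yields more overlapping query patches, and hence more votes, at the cost of proportionally more tree traversals.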

    To visualise the results, we back-project votes to the support. Back-projection is an established method of tracing the leaves of a forest backwards through the tree to establish the source data contributing to the leaf, and has previously been used for verification [17], meta-data transfer [18], and visualisation [19]. Back-projected votes are aggregated at the patch positions to create a heat-map of votes overlaid onto the image domain (Fig. 8). This is not an integral part of the algorithm, but enables a visual inspection of the accuracy of the pattern detection.

    4 Experiments

    We ran four experiments to examine the feasibility and accuracy performance of the new Hough Forests video database index. Three experiments were conceived to validate the concept without being distracted by the complexity of the search domain. In these, very simple synthetic images were constructed to train a forest and perform pattern detection in uncluttered images. We feel this is important as, although a toy example in pattern detection terms, it is used as a proof of concept for the new method of training a forest using the search domain and detection of an unseen query image. The first is a very simple anti-aliased black text on a white background. Patches extracted from all the images shown in Fig. 5(a) were used as positive training samples, and negative training images came from cartoon images from the Garfield and Snoopy categories of Ref. [20] and 104.homer-simpson from Ref. [21]. Second, using the same images (Fig. 5(a)), training patches were extracted from all frames and selected as positive or negative training samples based on our algorithm (Eq. (2)). No additional negative training images were used. Third, a simple anti-aliased black text on a red-and-white high-contrast checkerboard background (Fig. 6(a)). While the first three experiments demonstrate the viability of the new use of Hough Forests and the negative training method using synthetic images, the fourth experiment tests a real-world use case. The forest is trained with 256 frames of a video sequence, and detection is performed of a pattern, an Adidas® logo on a jumper (Fig. 7), during the London riots of 2011.

    For each experiment, a forest of five trees was trained using the dataset as positive training images, extracting 200 positive training patches per image. The efficacy of pattern detection was then assessed by using a pattern that is known to be in the training data. This protocol mirrors our motivating use case.

    Fig. 5 (a) 10 images in the search corpus; (b) detection results using random patches through the image and negative training images from Refs. [20] and [21]; (c) detection results using a forest trained with random patches from the images and GLCM-Contrast selected positive/negative patches. Right: query image (shown with a border).

    Fig. 6 (a) 10 images in the search corpus; (b) pattern detection results using a forest of five trees, each with a maximum depth of five. The coloured borders represent successful (green) and unsuccessful (red) detection of the pattern. All patterns are detected successfully if the forest shape is changed to twenty trees, each with a maximum depth of ten.

    Fig. 7 The video frame (left) from which a query region (right, actual size) is selected to perform our fourth experiment.

    Fig. 8 Two frames of a video showing correctly identified positions of the search pattern. The offset hot-spot may be related to the training of the forest (see text).

    5 Empirical results

    In each of our result figures, coloured squares highlight the patch positions that contribute to a vote in favour of a pattern position. The colour reflects the accumulated vote from the patch, or series of overlaid patches, from blue (few votes) to red (many votes) on a heat-map scale.

    Detection of a piece of text on a plain white background is a simple enough task, but valuable as a baseline experiment for a novel method such as ours. Figure 5(b) shows the correct detection of the text with some noise in the background (a perfect detection is a hot-spot in the centre of the text). We repeat the exercise using our low-contrast filtering method to select negative training patches from low-contrast areas of the images, without using arbitrary negative training images, and the result is improved considerably (Fig. 5(c)), with hot-spots more central and no votes at all accumulated in the background away from the pattern area.

    Results from the third test are shown in Fig. 6. The red-and-white checkerboard background is high-contrast, and the background is therefore not eliminated using our patch selection technique. Accuracy is measured by the position of the most intense red region, not by the size of the region. The intensity represents the number of votes cast, so a small intense red region depicts a higher probability of the centre of the pattern than a larger, less intense colour. This is evident in the first result image (top-left). A larger red/orange area has accumulated to the north-east of the pattern ground truth, but a smaller, more intense region shows on the right-hand edge of the letter x. The result is therefore determined a successful detection. In total, the pattern has been detected in seven out of the ten images. A successful detection in ten out of the ten images is achieved when the number of trees in the forest is increased to twenty and the maximum depth of each tree is ten.

    Figure 8 shows the successful detection of a sportswear company logo on clothing in two frames of a video sequence, from a realistically trained forest. The images have been darkened to highlight the results. The scatter of patches is restricted to high-contrast regions because of the training patch selection algorithm. The hot-spot (intense red) is the algorithm's predicted centre of the logo. In both frames, the centre point is within the logo, but off-centre. However, consider the patch positions of the training image in Fig. 4(a); these randomly positioned patches only cover the top-left of the logo that was searched for. Although this is only one training frame, it indicates that the detection algorithm has not received enough good training patches for the selected query, and this is affecting the detection performance. Detection in some other frames performed less well (Fig. 9). The voting hot-spot is outside the ground truth position, accumulating on a different jumper's logo. Patches in the ground truth area are very sparse, indicating that the forest has not been trained sufficiently.

    Fig. 9 An example of a false detection in an image where the query pattern is clearly visible. The voting hot-spot is on a jumper of another person, but the accumulated vote is lower, showing less confidence in this detection.

    6 Conclusions

    This paper reports on new techniques for applying an established framework of randomised Hough Forests to large-scale pattern detection. By rethinking how a forest is trained and used in pattern detection, we have shown some initial results that validate the approach, and show promise of future success in applying the method not only at large scale, but also to sequences of videos of complex scenes.

    Training the forest effectively to achieve good general pattern detection accuracy in cluttered street-scene images remains an open research area. Training with negative patches from low-contrast areas of the image works well in our experiments to date, but experience from textual retrieval systems suggests this may not always hold.

    Studies of[textual]retrieval effectiveness show that all terms should be indexed...any visible component of a page might reasonably be used as a query term...Even stopwords—which are of questionable value for bag-of-words queries—have an important role in phrase queries[22].

    Relating this experience to the image search domain may suggest that low-contrast, and even background, patches should be included in the searchable forest and therefore used as positive patches for training. Further experimentation is needed in this area. Open questions remain:

    1. How can the shape of the forest (number of trees, maximum depth, etc.) be determined per-video to maximise pattern detection accuracy?

    All our experiments in this paper have used a consistent forest structure of five trees with a maximum depth of 5 levels, each trained using 200 positive training patches from each image. This structure generally performs well, but could theoretically be optimised based on the video image contents [15]. For example, videos containing more complex scenes should benefit from a larger forest (more trees) or more complex trees (greater depth), as experiment four demonstrates. The balance of runtime complexity, memory consumption, and accuracy will be a trade-off consideration.

    2. Can a Hough Tree be learned incrementally without visibility of all its training data together?

    One aspect of the scalability of the system to very large videos remains an open research area: to train a forest, the patches for all training data have to be accessible. This is by algorithm design, and could present a scalability limit for videos with a large number of frames. The Hoeffding algorithm [23] describes a method to train a tree without visibility of all data, which may benefit our system. Some recent work has been published on incremental learning of forests for classification [24], and an opportunity exists to extend this to the problem of pattern detection.

    3. Can the granularity of forest and training data be better chosen to improve pattern detection accuracy?

    Thus far we have used one forest per video in the corpus. This choice is made with prior knowledge of properties of street-scene CCTV videos, for example that a video will be a sequence from a single camera, without any shot change that alters the content significantly between two consecutive frames of video [25]. While this choice is valid, in our view, it may not be optimal. We would like to investigate whether a single forest can be used to index multiple (perhaps related) videos, or whether videos can be temporally segmented and distributed to multiple forests. In such a system, each forest then becomes an index for a pre-classified set of video segments, which may yield better detection performance.

    As well as investigating these questions, a full assessment of the system, measuring accuracy against a ground truth, is required in future work to produce a quantitative evaluation of our method.

    Acknowledgements

    This work is funded by the European Union's Seventh Framework Programme, specific topic “framework and tools for (semi-)automated exploitation of massive amounts of digital data for forensic purposes”, under grant agreement number 607480 (LASIE IP project). The authors extend their thanks to the Metropolitan Police at Scotland Yard, London, UK, for the supply of, and permission to use, CCTV images.

    References

    [1] Gall, J.; Lempitsky, V. Class-specific Hough forests for object detection. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1022-1029, 2009.

    [2] Gall, J.; Yao, A.; Razavi, N.; Van Gool, L.; Lempitsky, V. Hough forests for object detection, tracking, and action recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 33, No. 11, 2188-2202, 2011.

    [3] Breiman, L. Random forests. Machine Learning Vol. 45, No. 1, 5-32, 2001.

    [4] Barinova, O.; Lempitsky, V.; Kholi, P. On detection of multiple object instances using Hough transforms. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 34, No. 9, 1773-1784, 2012.

    [5] Payet, N.; Todorovic, S. Hough forest random field for object recognition and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 35, No. 5, 1066-1079, 2013.

    [6] Razavi, N.; Alvar, N. S.; Gall, J.; Van Gool, L. Sparsity potentials for detecting objects with the Hough transform. In: Proceedings of British Machine Vision Conference, 11.1-11.10, 2012.

    [7] Srikantha, A.; Gall, J. Hough-based object detection with grouped features. In: Proceedings of IEEE International Conference on Image Processing, 1653-1657, 2014.

    [8] Yokoya, N.; Iwasaki, A. Object detection based on sparse representation and Hough voting for optical remote sensing imagery. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing Vol. 8, No. 5, 2053-2062, 2015.

    [9] Godec, M.; Roth, P. M.; Bischof, H. Hough-based tracking of non-rigid objects. Computer Vision and Image Understanding Vol. 117, 1245-1256, 2013.

    [10] Rematas, K.; Leibe, B. Efficient object detection and segmentation with a cascaded Hough Forest ISM. In: Proceedings of IEEE International Conference on Computer Vision Workshops, 966-973, 2011.

    [11] Waltisberg, D.; Yao, A.; Gall, J.; Van Gool, L. Variations of a Hough-voting action recognition system. In: Lecture Notes in Computer Science, Vol. 6388. Ünay, D.; Çataltepe, Z.; Aksoy, S. Eds. Springer Berlin Heidelberg, 306-312, 2010.

    [12] Yao, A.; Gall, J.; Van Gool, L. A Hough transform-based voting framework for action recognition. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 2061-2068, 2010.

    [13] Yu, G.; Yuan, J.; Liu, Z. Unsupervised random forest indexing for fast action search. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 865-872, 2011.

    [14] Gall, J.; Razavi, N.; Van Gool, L. An introduction to random forests for multi-class object detection. In: Proceedings of the 15th International Conference on Theoretical Foundations of Computer Vision: Outdoor and Large-scale Real-world Scene Analysis, 243-263, 2012.

    [15] Henderson, C.; Izquierdo, E. Minimal Hough forest training for pattern detection. In: Proceedings of International Conference on Systems, Signals and Image Processing, 69-72, 2015.

    [16] Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In: Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 886-893, 2005.

    [17] Leibe, B.; Leonardis, A.; Schiele, B. Robust object detection with interleaved categorization and segmentation. International Journal of Computer Vision Vol. 77, No. 1, 259-289, 2008.

    [18] Thomas, A.; Ferrari, V.; Leibe, B.; Tuytelaars, T.; Van Gool, L. Using multi-view recognition and meta-data annotation to guide a robot's attention. The International Journal of Robotics Research Vol. 28, No. 8, 976-998, 2009.

    [19] Razavi, N.; Gall, J.; Van Gool, L. Backprojection revisited: Scalable multi-view object detection and similarity metrics for detections. In: Lecture Notes in Computer Science, Vol. 6311. Daniilidis, K.; Maragos, P.; Paragios, N. Eds. Springer Berlin Heidelberg, 620-633, 2010.

    [20] Li, F.-F.; Fergus, R.; Perona, P. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Computer Vision and Image Understanding Vol. 106, No. 1, 59-70, 2007.

    [21] Griffin, G.; Holub, A.; Perona, P. Caltech-256 object category dataset. Technical Report. California Institute of Technology, 2007. Available at http://authors.library.caltech.edu/7694/1/CNS-TR-2007-001.pdf.

    [22]Zobel,J.;Moffat,A.Inverted files for text search engines.ACM Computing Surveys Vol.38,No.2,Article No.6,2006.

    [23]Domingos,P.;Hulten,G.Mining high-speed data streams.In:Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,71-80,2000.

    [24]Ristin,M.;Guillaumin,M.;Gall,J.;Van Gool,L.Incremental learning of random forests for largescale image classification.IEEE Transactions on Pattern Analysis and Machine IntelligenceDOI: 10.1109/TPAMI.2015.2459678,2015.

    [25]Henderson,C.;Blasi,S.G.;Sobhani,F(xiàn).;Izquierdo,E.On the impurity of street-scene video footage.In: Proceedings of the 6th International Conference on Imaging for Crime Prevention and Detection,1-7,2015.

    Craig Henderson received his B.S. degree in computing for real-time systems from the University of the West of England in Bristol, UK, in 1995. From 1995 to 2014 he worked in a variety of organisations as a software engineer and engineering manager.

    Since 2014, he has been a Ph.D. candidate in the Multimedia and Computer Vision Laboratory, School of Electronic Engineering and Computer Science at Queen Mary University of London, UK. His research interests include computer vision, machine learning, and scalable systems.

    Ebroul Izquierdo received his M.S. and Ph.D. degrees, and holds the C.Eng., FIET, SMIEEE, and MBMVA designations. For his thesis on the numerical approximation of algebraic-differential equations, he received the Dr. Rerum Naturalium (Ph.D.) degree from Humboldt University, Berlin, Germany.

    He is the head of the Multimedia and Vision Group, School of Electronic Engineering and Computer Science at Queen Mary University of London, UK. He has published over 500 technical papers including book chapters, and holds several patents.

    Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.