
BING: Binarized normed gradients for objectness estimation at 300fps

Computational Visual Media, 2019, Issue 1

Ming-Ming Cheng, Yun Liu, Wen-Yan Lin, Ziming Zhang, Paul L. Rosin, and Philip H. S. Torr

Abstract Training a generic objectness measure to produce object proposals has recently become of significant interest. We observe that generic objects with well-defined closed boundaries can be detected by looking at the norm of gradients, with a suitable resizing of their corresponding image windows to a small fixed size. Based on this observation and for computational reasons, we propose to resize the window to 8×8 and use the norm of the gradients as a simple 64D feature to describe it, for explicitly training a generic objectness measure. We further show how the binarized version of this feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation, which requires only a few atomic operations (e.g., add, bitwise shift, etc.). To improve the localization quality of the proposals while maintaining efficiency, we propose a novel fast segmentation method and demonstrate its effectiveness for improving BING's localization performance when used in multi-thresholding straddling expansion (MTSE) postprocessing. On the challenging PASCAL VOC2007 dataset, using 1000 proposals per image and an intersection-over-union threshold of 0.5, our proposal method achieves a 95.6% object detection rate and 78.6% mean average best overlap in less than 0.005 seconds per image.

Keywords object proposals; objectness; visual attention; category agnostic proposals

    1 Introduction

As suggested by pioneering research [1, 2], objectness is usually taken to mean a value which reflects how likely an image window is to cover an object of any category. A generic objectness measure has great potential as a pre-filter for many vision tasks, including object detection [3–5], visual tracking [6, 7], object discovery [8, 9], semantic segmentation [10, 11], content aware image retargeting [12], and action recognition [13]. Especially for object detection, proposal-based detectors have dominated recent state-of-the-art performance. Compared with sliding windows, objectness measures can significantly improve computational efficiency by reducing the search space, and system accuracy by allowing the use of complex subsequent processing during testing. However, designing a good generic objectness measure is difficult; such a measure should:

• achieve a high object detection rate (DR), as any undetected objects rejected at this stage cannot be recovered later;

• possess high proposal localization accuracy, measured by average best overlap (ABO) for each object in each class and mean average best overlap (MABO) across all classes;

• be highly computationally efficient so that it is useful in realtime and large-scale applications;

• produce a small number of proposals, to reduce the amount of subsequent processing;

• possess good generalization to unseen object categories, so that the proposals can be used in various vision tasks without category biases.

To the best of our knowledge, no prior method can satisfy all of these ambitious goals simultaneously. Research from cognitive psychology [14, 15] and neurobiology [16, 17] suggests that humans have a strong ability to perceive objects before identifying them. Based on observed human reaction times and estimated biological signal transmission times, human attention theories hypothesize that the human visual system processes only parts of an image in detail, while leaving others nearly unprocessed. This further suggests that before identifying objects, simple mechanisms in the human visual system select possible object locations.

In this paper, we propose a surprisingly simple and powerful feature, which we call "BING", to help search for objects using objectness scores. Our work is motivated by the concept that objects are stand-alone things with well-defined closed boundaries and centers [2, 18, 19], even if the visibility of these boundaries depends on the characteristics of the background and of occluding foreground objects. We observe that generic objects with well-defined closed boundaries share surprisingly strong correlation in terms of the norm of their gradients (see Fig. 1 and Section 3), after resizing their corresponding image windows to a small fixed size (e.g., 8×8). Therefore, in order to efficiently quantify the objectness of an image window, we resize it to 8×8 and use the norm of the gradients as a simple 64D feature for learning a generic objectness measure in a cascaded SVM framework. We further show how the binarized version of the norm of gradients feature, namely binarized normed gradients (BING), can be used for efficient objectness estimation of image windows, using only a few atomic CPU operations (add, bitwise shift, etc.). The BING feature's simplicity, while using advanced speed-up techniques to make the computational time tractable, contrasts with recent state-of-the-art techniques [2, 20, 21] which seek increasingly sophisticated features to obtain greater discrimination.

Fig. 1 Although object (red) and non-object (green) windows vary greatly in image space (a), at proper scales and aspect ratios which correspond to a small fixed size (b), their corresponding normed gradients (NG features) (c) share strong correlation. We learn a single 64D linear model (d) for selecting object proposals based on their NG features.

The original conference presentation of BING [22] has received much attention. Its efficiency and high detection rates make BING a good choice in a large number of successful applications that require category-independent object proposals [23–29]. Recently, deep neural network based object proposal generation methods have become very popular due to their high recall and computational efficiency, e.g., RPN [30], YOLO9000 [31], and SSD [32]. However, these methods generalize poorly to unseen categories, and rely on training with many ground-truth annotations for the target classes. For instance, the detected object proposals of RPN are highly related to the training data: after training on the PASCAL VOC dataset [33], the trained model will aim to detect only the 20 classes of objects therein, and performs poorly on other datasets like MS COCO (see Section 5.4). This poor generalization ability has restricted its usage, so RPN is usually only used in object detection. In comparison, BING is based on low-level cues concerning enclosing boundaries and thus can produce category-independent object proposals, which has demonstrated applications in multi-label image classification [23], semantic segmentation [25], video classification [24], co-salient object detection [29], deep multi-instance learning [26], and video summarisation [27]. However, several researchers [34–37] have noted that BING's proposal localization is weak.

This manuscript further improves proposal localization over the method described in the conference version [22] by applying multi-thresholding straddling expansion (MTSE) [38] as a postprocessing step. Standard MTSE would introduce a significant computational bottleneck because of its image segmentation step. Therefore we propose a novel image segmentation method, which generates accurate segments much more efficiently. Our approach starts with a GPU version of the SLIC method [39, 40] to quickly obtain initial seed regions (superpixels) by performing oversegmentation. Region merging is then performed based on average pixel distances. We replace the method from Ref. [41] in MTSE with this novel grouping method [42], and dub the new proposal system BING-E.

We have extensively evaluated our objectness methods on the PASCAL VOC2007 [33] and Microsoft COCO [43] datasets. The experimental results show that our method efficiently (at 300 fps for BING and 200 fps for BING-E) generates a small set of data-driven, category-independent, high-quality object windows. BING is able to achieve a 96.2% detection rate (DR) with 1000 windows and an intersection-over-union (IoU) threshold of 0.5. At the increased IoU threshold of 0.7, BING-E can obtain 81.4% DR and 78.6% mean average best overlap (MABO). Feeding the proposals to the fast R-CNN framework [4] for an object detection task, BING-E achieves 67.4% mean average precision (MAP). Following Refs. [2, 20, 21], we also verify the generalization ability of our method. When training our objectness measure on the VOC2007 training set and testing on the challenging COCO validation set, our method still achieves competitive performance. Compared to the most popular alternatives [2, 20, 21, 34, 36, 44–50], our method achieves competitive performance using a smaller set of proposals, while being 100–1000 times faster. Thus, our proposed method achieves significantly higher efficiency while providing state-of-the-art generic object proposals. This performance fulfils a key previously stated requirement for a good objectness detector. Our source code is published with the paper.

2 Related work

Being able to perceive objects before identifying them is closely related to bottom-up visual attention (saliency). According to how saliency is defined, we broadly classify related research into three categories: fixation prediction, salient object detection, and objectness proposal generation.

    2.1 Fixation prediction

Fixation prediction models aim to predict human eye movements [51, 52]. Inspired by neurobiological research on early primate visual systems, Itti et al. [53] proposed one of the first computational models for saliency detection, which estimates center-surround differences across multi-scale image features. Ma and Zhang [54] proposed a fuzzy growing model to analyze local contrast based saliency. Harel et al. [55] proposed normalizing center-surrounded feature maps to highlight conspicuous parts. Although fixation point prediction models have developed remarkably, the prediction results tend to highlight edges and corners rather than entire objects. Thus, these models are unsuitable for generating generic object proposals.

    2.2 Salient object detection

Salient object detection models try to detect the most attention-grabbing objects in a scene, and then segment the whole extent of those objects [56–58]. Liu et al. [59] combined local, regional, and global saliency measurements in a CRF framework. Achanta et al. [60] localized salient regions using a frequency-tuned approach. Cheng et al. [61] proposed a salient object detection and segmentation method based on region contrast analysis and iterative graph based segmentation. More recent research has also tried to produce high-quality saliency maps in a filtering-based framework [62]. Such salient object segmentation has achieved great success for simple images in image scene analysis [63–65] and content aware image editing [66, 67]; it can be used as a cheap tool to process a large number of Internet images or to build robust applications [68–73] by automatically selecting good results [61, 74]. However, these approaches are less likely to work for complicated images in which many objects are present but are rarely dominant (e.g., PASCAL VOC images).

2.3 Objectness proposal generation

These methods avoid making decisions early on, by proposing a small number (e.g., 1000) of category-independent proposals that are expected to cover all objects in an image [2, 20, 21]. Producing rough segmentations [21, 75] as object proposals has been shown to be an effective way of reducing the search space for category-specific classifiers, whilst allowing the use of strong classifiers to improve accuracy. However, such methods [21, 75] are very computationally expensive. Alexe et al. [2] proposed a cue integration approach to get better prediction performance more efficiently. Broadly speaking, two main categories of object proposal generation methods exist: region based methods and edge based methods.

Region based object proposal generation methods mainly look for sets of regions produced by image segmentation and use the bounding boxes of these sets of regions as object proposals. Since image segmentation aims to cluster pixels into regions expected to represent objects or object parts, merging certain regions is likely to find complete objects. A large literature has focused on this approach. Uijlings et al. [20] proposed a selective search approach, which combined the strengths of both exhaustive search and segmentation, to achieve higher prediction performance. Pont-Tuset et al. [36] proposed a multiscale method to generate segmentation hierarchies, and then explored the combinatorial space of these hierarchical regions to produce high-quality object proposals. Other well-known algorithms [21, 45–47, 49] fall into this category as well.

Edge based object proposal generation approaches use edges to explore where complete objects occur in an image. As pointed out in Ref. [2], complete objects usually have well-defined closed boundaries in space, and various methods have achieved high performance using this intuitive cue. Zitnick and Dollár [34] proposed a simple box objectness score measuring the number of contours wholly enclosed by a bounding box, generating object bounding box proposals directly from edges in an efficient way. Lu et al. [76] proposed a closed contour measure defined by a closed path integral. Zhang et al. [44] proposed a cascaded ranking SVM approach with an oriented gradient feature for efficient proposal generation.

Generic object proposals are widely used in object detection [3–5], visual tracking [6, 7], video classification [24], pedestrian detection [28], content aware image retargeting [12], and action recognition [13]. Thus a generic objectness measure can benefit many vision tasks. In this paper, we describe a simple and intuitive object proposal generation method which generally achieves state-of-the-art detection performance, and is 100–1000 times faster than most popular alternatives [2, 20, 21] (see Section 5).

    3 BING for objectness measure

    3.1 Preliminaries

Inspired by the ability of the human visual system to efficiently perceive objects before identifying them [14–17], we introduce a simple 64D norm-of-gradients (NG) feature (Section 3.2), as well as its binary approximation, i.e., the binarized normed gradients (BING) feature (Section 3.4), for efficiently capturing the objectness of an image window.

To find generic objects within an image, we scan over a predefined set of quantized window sizes (scales and aspect ratios)①. Each window is scored with a linear model w ∈ R⁶⁴ (Section 3.3):

s_l = ⟨w, g_l⟩ (1)
l = (i, x, y) (2)

where s_l, g_l, l, i, and (x, y) are the filter score, NG feature, location, size, and position of a window, respectively. Using non-maximal suppression (NMS), we select a small set of proposals from each size i. Zhao et al. [37] showed that this choice of window sizes along with the NMS is close to optimal. Some sizes (e.g., 10×500) are less likely than others (e.g., 100×100) to contain an object instance. Thus we define the objectness score (i.e., the calibrated filter score) as

o_l = v_i · s_l + t_i (3)

where v_i, t_i ∈ R are learnt coefficient and bias terms for each quantized size i (Section 3.3). Note that calibration using Eq. (3), although very fast, is only required when re-ranking the small set of final proposals.

① In all experiments, we test 36 quantized target window sizes {(W_o, H_o)}, where W_o, H_o ∈ {16, 32, 64, 128, 256, 512}. We resize the input image to 36 sizes so that 8×8 windows in the downsized images (from which we extract features) correspond to the target windows.
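To make the scanning concrete, here is a minimal NumPy sketch of the per-size scoring and calibration (the function names are ours, per-size NMS is omitted, and the released implementation is optimized C++ rather than Python):

```python
import numpy as np

def filter_scores(ng_map, w):
    """Slide an 8x8 window over the normed-gradient map of one quantized
    size i and score each position with the linear model: s_l = <w, g_l>."""
    w = np.asarray(w, dtype=np.float32).reshape(8, 8)
    H, W = ng_map.shape
    s = np.empty((H - 7, W - 7), dtype=np.float32)
    for y in range(H - 7):
        for x in range(W - 7):
            s[y, x] = float((w * ng_map[y:y + 8, x:x + 8]).sum())
    return s

def calibrated_scores(s, v_i, t_i):
    """Eq. (3): per-size calibration of filter scores, o_l = v_i * s_l + t_i."""
    return v_i * s + t_i
```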

3.2 Normed gradients (NG) and objectness

Objects are stand-alone things with well-defined closed boundaries and centers [2, 18, 19], although the visibility of these boundaries depends on the characteristics of the background and occluding foreground objects. When resizing windows corresponding to real world objects to a small fixed size (e.g., 8×8, chosen for computational reasons that will be explained in Section 3.4), the norms (i.e., magnitudes) of the corresponding image gradients become good discriminative features, because of the limited variation that closed boundaries could present in such an abstracted view. As demonstrated in Fig. 1, although the cruise ship and the person have huge differences in terms of color, shape, texture, illumination, etc., they share clear similarity in normed gradient space. To utilize this observation to efficiently predict the existence of object instances, we firstly resize the input image to the different quantized sizes and calculate the normed gradients of each resized image. The values in an 8×8 region of these resized normed gradient maps are defined as the 64D normed gradients (NG) feature① of its corresponding window.

① The normed gradient is the Euclidean norm of the gradient.
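As an illustration, a per-window sketch of NG feature extraction in Python (assuming an 8-bit grayscale window; in practice the paper computes gradient maps once per resized image rather than per window, and uses RGB gradients by default, see Section 3.4):

```python
import numpy as np
from PIL import Image

def ng_feature(window_gray_u8):
    """64D NG feature: resize a window to 8x8, then take gradient norms."""
    g = np.asarray(Image.fromarray(window_gray_u8).resize((8, 8)),
                   dtype=np.float32)
    gx = np.zeros_like(g)
    gy = np.zeros_like(g)
    gx[:, 1:-1] = g[:, 2:] - g[:, :-2]   # [-1, 0, 1] kernel, horizontal
    gy[1:-1, :] = g[2:, :] - g[:-2, :]   # [-1, 0, 1] kernel, vertical
    return np.minimum(np.abs(gx) + np.abs(gy), 255).ravel()  # g_l
```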

Our NG feature, as a dense and compact objectness feature for an image window, has several advantages. Firstly, no matter how an object changes its position, scale, and aspect ratio, its corresponding NG feature will remain roughly unchanged, because the region for computing the feature is normalized. In other words, NG features are insensitive to changes of translation, scale, and aspect ratio, which is very useful for detecting objects of arbitrary categories. Such insensitivity is a property that a good objectness proposal generation method should have. Secondly, the dense compact representation of the NG feature allows it to be calculated and verified very efficiently, giving it great potential for realtime applications.

The cost of introducing such advantages to the NG feature is loss of discriminative ability. However, this is not a problem as BING can be used as a pre-filter, and the resulting false positives can be processed and eliminated by subsequent category-specific detectors. In Section 5, we show that our method results in a small set of high-quality proposals that cover 96.2% of the true object windows in the challenging VOC2007 dataset.

    3.3 Learning objectness measurement with NG

To learn an objectness measure for image windows, we follow the two-stage cascaded SVM approach [44].

Stage I. We learn a single model w for Eq. (1) using a linear SVM [77]. NG features of ground truth object windows and randomly sampled background windows are used as positive and negative training samples respectively.

Stage II. To learn v_i and t_i in Eq. (3) using a linear SVM [77], we evaluate Eq. (1) at size i for training images and use the selected (NMS) proposals as training samples, using their filter scores as 1D features, and check their labeling using training image annotations (see Section 5 for evaluation criteria). As can be seen in Fig. 1(d), the learned linear model w (see Section 5 for experimental settings) looks similar to the multi-size center-surround patterns [53] hypothesized as a biologically plausible architecture in primates [15, 16, 78]. The large weights along the borders of w favor a boundary that separates an object (center) from its background (surround). Compared to manually designed center-surround patterns [53], our learned w captures a more sophisticated natural prior. For example, lower object regions are more often occluded than upper parts. This is represented by w placing less confidence in the lower regions.
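A compact sketch of the two training stages using scikit-learn (our own simplification: `X_pos`/`X_neg` are assumed NG feature matrices, and `scores_i`/`labels_i` are assumed to hold the per-size filter scores and ±1 labels of NMS-selected proposals):

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_stage1(X_pos, X_neg):
    """Stage I: learn the 64D linear model w of Eq. (1)."""
    X = np.vstack([X_pos, X_neg])
    y = np.hstack([np.ones(len(X_pos)), -np.ones(len(X_neg))])
    return LinearSVC(C=1.0).fit(X, y).coef_.ravel()  # w

def train_stage2(scores_i, labels_i):
    """Stage II: per-size calibration terms v_i, t_i of Eq. (3),
    learned from 1D filter-score features."""
    svm = LinearSVC(C=1.0).fit(np.asarray(scores_i).reshape(-1, 1), labels_i)
    return svm.coef_.item(), svm.intercept_.item()  # v_i, t_i
```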

3.4 Binarized normed gradients (BING)

To take advantage of recent advances in binary model approximation [79, 80], we describe an accelerated version of the NG feature, namely binarized normed gradients (BING), to speed up the feature extraction and testing process. Our learned linear model w ∈ R⁶⁴ can be approximated by a set of basis vectors using Algorithm 1:

w ≈ Σ_{j=1}^{N_w} β_j a_j (4)

where N_w denotes the number of basis vectors, a_j ∈ {−1, 1}⁶⁴ denotes a single basis vector, and β_j ∈ R denotes its corresponding coefficient. By further representing each a_j using a binary vector and its complement, a_j = a_j⁺ − ā_j⁺, where a_j⁺ ∈ {0, 1}⁶⁴, a binarized feature b can be tested using fast bitwise and and bit count operations (see Ref. [79]):

⟨w, b⟩ ≈ Σ_{j=1}^{N_w} β_j (2⟨a_j⁺, b⟩ − |b|) (5)

The key challenge is how to binarize and calculate NG features efficiently. We approximate the normed gradient values (each saved as a byte value) using the top N_g binary bits of the byte values. Thus, a 64D NG feature g_l can be approximated by N_g binarized normed gradients (BING) features as

g_l ≈ Σ_{k=1}^{N_g} 2^{8−k} b_{k,l} (6)

Algorithm 1 Binary approximation of w [79]
Input: w, N_w
Output: {β_j}_{j=1}^{N_w}, {a_j}_{j=1}^{N_w}
Initialize residual: ε = w
for j = 1 to N_w do
  a_j = sign(ε)
  β_j = ⟨a_j, ε⟩ / ‖a_j‖² (project ε onto a_j)
  ε ← ε − β_j a_j (update residual)
end for
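A direct Python transcription of Algorithm 1 (a sketch; `n_w` plays the role of N_w):

```python
import numpy as np

def binary_approximate(w, n_w):
    """Greedily approximate w by n_w binary (+1/-1) basis vectors,
    w ~= sum_j beta_j * a_j, by repeated residual projection."""
    betas, bases = [], []
    eps = np.asarray(w, dtype=np.float64).copy()  # residual
    for _ in range(n_w):
        a = np.sign(eps)
        a[a == 0] = 1.0                # keep entries of a in {-1, +1}
        beta = a.dot(eps) / a.dot(a)   # project residual onto a
        eps -= beta * a                # update residual
        betas.append(beta)
        bases.append(a)
    return betas, bases
```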

Notice that these BING features have different weights according to their corresponding bit positions in the byte values.

Naively determining an 8×8 BING feature requires a loop accessing 64 positions. By exploiting two special characteristics of an 8×8 BING feature, we develop a fast BING feature calculation algorithm (Algorithm 2), which uses atomic updates (bitwise shift and bitwise or) to avoid the loop. Firstly, a BING feature b_{x,y} and its last row r_{x,y} are saved in a single int64 variable and a byte variable, respectively. Secondly, adjacent BING features and their rows have a simple cumulative relation. As shown in Fig. 2 and Algorithm 2, the bitwise shift operator shifts r_{x−1,y} by one bit, automatically discarding the bit which does not belong to r_{x,y}, and makes room to insert the new bit b_{x,y} using the bitwise or operator. Similarly, bitwise shift shifts b_{x,y−1} by 8 bits, automatically discarding the bits which do not belong to b_{x,y}, and makes room to insert r_{x,y}.

Our efficient BING feature calculation shares its cumulative nature with the integral image representation [81]. Instead of calculating a single scalar value over an arbitrary rectangular range [81], our method uses a few atomic operations (e.g., add, bitwise operations, etc.) to calculate a set of binary patterns over an 8×8 fixed range.

Algorithm 2 Get BING features for W × H positions
Comments: see Fig. 2 for an explanation of variables
Input: binary normed gradient map b^{W×H}
Output: BING feature matrix b^{W×H}
Initialize: b^{W×H} = 0, r^{W×H} = 0
for each position (x, y) in scan-line order do
  r_{x,y} = (r_{x−1,y} ≪ 1) | b_{x,y}
  b_{x,y} = (b_{x,y−1} ≪ 8) | r_{x,y}
end for

Fig. 2 Variables: a BING feature b_{x,y}, its last row r_{x,y}, and last element b_{x,y}. Notice that the subscripts i, x, y, l, k, introduced in Eq. (2) and Eq. (5), are locations within the whole vector rather than indices of vector elements. We can use a single atomic variable (int64 and byte) to represent a BING feature and its last row, respectively, enabling efficient feature computation.
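The cumulative update can be sketched in Python as follows (a simplification using masked Python integers in place of the paper's int64 and byte variables; `bitmap[y][x]` holds one binary NG bit):

```python
MASK64 = (1 << 64) - 1   # stands in for the int64 BING feature variable
MASK8 = (1 << 8) - 1     # stands in for the byte last-row variable

def bing_features(bitmap):
    """For every position, pack the 8x8 binary window ending there into a
    64-bit pattern using two shifts and two ORs per position (Algorithm 2)."""
    H, W = len(bitmap), len(bitmap[0])
    b = [[0] * W for _ in range(H)]  # packed BING features
    r = [[0] * W for _ in range(H)]  # packed last rows
    for y in range(H):
        for x in range(W):
            r_left = r[y][x - 1] if x > 0 else 0
            b_up = b[y - 1][x] if y > 0 else 0
            r[y][x] = ((r_left << 1) | bitmap[y][x]) & MASK8  # r_{x,y}
            b[y][x] = ((b_up << 8) | r[y][x]) & MASK64        # b_{x,y}
    return b
```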

The filter score (Eq. (1)) of an image window corresponding to BING features b_{k,l} can then be efficiently computed using:

s_l ≈ Σ_{j=1}^{N_w} β_j Σ_{k=1}^{N_g} 2^{8−k} (2⟨a_j⁺, b_{k,l}⟩ − |b_{k,l}|) (7)
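In code, Eq. (7) reduces to bitwise ANDs and bit counts (a sketch; `bases_pos[j]` is assumed to pack a_j⁺ into a 64-bit integer, and `bing_bits[k-1]` to hold b_{k,l}):

```python
def popcount(x):
    """Number of set bits: both |b| and <a+, b> reduce to this."""
    return bin(x).count("1")

def filter_score(bing_bits, betas, bases_pos):
    """Eq. (7): approximate s_l from N_g bit planes and N_w binary bases."""
    s = 0.0
    for beta, a_pos in zip(betas, bases_pos):
        for k, b_kl in enumerate(bing_bits, start=1):
            s += beta * 2 ** (8 - k) * (2 * popcount(a_pos & b_kl)
                                        - popcount(b_kl))
    return s
```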

To implement these ideas, we use the 1D kernel [−1, 0, 1] to find the image gradients g_x and g_y in the horizontal and vertical directions, calculate normed gradients using min(|g_x| + |g_y|, 255), and save them as byte values. By default, we calculate gradients in RGB color space.

    4 Enhancing BING with region cues

BING is not only very efficient, but can also achieve a high object detection rate. However, its performance in terms of ABO and MABO is disappointing. When further applying BING in object detection frameworks which use object proposals as input, like fast R-CNN, the detection rate is also poor. This suggests that BING does not provide good proposal localization.

Two reasons may cause this. On one hand, given an object, BING tries to capture its closed boundary by resizing it to a small fixed size and setting larger weights at the most probable positions. However, as the shapes of objects vary, the closed boundaries of objects will be mapped to different positions in the fixed-size windows. The learned model of NG features cannot adequately represent this variability across objects. On the other hand, BING is designed to test only a limited set of quantized window sizes, while the sizes of objects are variable. Thus, to some extent, bounding boxes generated by BING are unable to tightly cover all objects.

In order to improve this unsatisfactory localization, we use multi-thresholding straddling expansion (MTSE) [38], which is an effective method to refine object proposals using segments. Given an image and corresponding initial bounding boxes, MTSE first aligns boxes with potential object boundaries preserved by superpixels, and then performs multi-thresholding expansion with respect to the superpixels straddling each box. In this way, each bounding box tightly covers a set of internal superpixels, significantly improving the localization quality of proposals. However, the MTSE algorithm is too slow; the bottleneck is its segmentation step [41]. Thus, we use a new fast image segmentation method [42] to replace the segmentation method in MTSE.

Recently, SLIC [40] has become a popular superpixel generation method because of its efficiency; gSLICr, the GPU version of SLIC [39], can achieve a speed of 250 fps. SLIC aims to generate small superpixels and is not good at producing large image segments. In the MTSE algorithm, large image segments are needed to ensure accuracy, so it is not straightforward to use SLIC within MTSE. However, the high efficiency of SLIC makes it a good starting point for developing new segmentation methods. We first use gSLICr to segment an image into many small superpixels. Then, we view each superpixel as a node whose color is denoted by the average color of all its pixels, and the distance between two adjacent nodes is computed as the Euclidean distance of their color values. Finally, we feed these nodes into a graph-based segmentation method to produce the final image segmentation [42].
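A Python sketch of this superpixel-graph segmentation, substituting scikit-image's CPU SLIC for gSLICr and a simplified Felzenszwalb-style merge for the graph-based grouping of Refs. [41, 42] (the parameter choices and the omission of the minimum-segment-size step are our simplifications):

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_graph_segment(image, k=120.0):
    """Oversegment with SLIC, then merge adjacent superpixel nodes whose
    mean-color distance passes an adaptive threshold (k / component size)."""
    h, w, _ = image.shape
    labels = slic(image, n_segments=(h * w) // 16, start_label=0)
    labels = np.unique(labels, return_inverse=True)[1].reshape(h, w)
    n = labels.max() + 1
    colors = np.array([image[labels == i].mean(axis=0) for i in range(n)])
    # Edges between horizontally and vertically adjacent superpixels
    pairs = {(min(p, q), max(p, q))
             for a, b in ((labels[:, :-1], labels[:, 1:]),
                          (labels[:-1, :], labels[1:, :]))
             for p, q in zip(a.ravel(), b.ravel()) if p != q}
    edges = sorted((np.linalg.norm(colors[p] - colors[q]), p, q)
                   for p, q in pairs)
    parent, size, thresh = list(range(n)), [1] * n, [k] * n
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for wgt, p, q in edges:  # merge in order of increasing color distance
        rp, rq = find(p), find(q)
        if rp != rq and wgt <= min(thresh[rp], thresh[rq]):
            parent[rq] = rp
            size[rp] += size[rq]
            thresh[rp] = wgt + k / size[rp]
    return np.vectorize(find)(labels)  # final segment label per pixel
```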

We employ the full MTSE pipeline, modified to use our new segmentation algorithm, reducing the computation time from 0.15 s down to 0.0014 s per image. Incorporating this improved version of MTSE as a postprocessing enhancement step for BING gives our new proposal system, which we call BING-E.

    5 Evaluation

    5.1 Background

We have extensively evaluated our method on the challenging PASCAL VOC2007 [33] and Microsoft COCO [43] datasets. PASCAL VOC2007 contains 20 object categories, and consists of training, validation, and test sets, with 2501, 2510, and 4952 images respectively, with corresponding bounding box annotations. We use the training set to train our BING model and test on the test set. Microsoft COCO consists of 82,783 images for training and 40,504 images for validation, with about 1 million annotated instances in 80 categories. COCO is more challenging because of its large size and complex image contents.

We compared our method to various competitive methods: EdgeBoxes [34]①, CSVM [44]②, MCG [36]③, RPN [30]④, Endres [21], Objectness [2], GOP [48], LPO [49], Rahtu [45], RandomPrim [46], Rantalankila [47], and SelectiveSearch [20], using publicly available code [82] downloaded from https://github.com/Cloud-CV/object-proposals. All parameters for these methods were set to default values, except for Ref. [48], in which we employed (180, 9) as suggested on the author's homepage. To make the comparison fair, all methods except the deep learning based RPN [30] were tested on the same device with an Intel i7-6700K CPU and an NVIDIA GeForce GTX 970 GPU, with data parallelization enabled. For RPN, we utilized an NVIDIA GeForce GTX TITAN X GPU for computation.

① https://github.com/pdollar/edges
② https://zimingzhang.wordpress.com/
③ http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/
④ https://github.com/rbgirshick/py-faster-rcnn

Since objectness is often used as a preprocessing step to reduce the number of windows considered in subsequent processing, too many proposals are unhelpful. Therefore, we only used the top 1000 proposals for comparison.

In order to evaluate the generalization ability of each method, we tested them on the COCO validation dataset using the same parameters as for VOC2007, without retraining. Since at least 60 categories in COCO differ from those in VOC2007, COCO is a good test of the generalization ability of the methods.

5.2 Experimental setup

    5.2.1 Discussion of BING

As shown in Table 1, by using the binary approximation to the learned linear filter (Section 3.4) and BING features, computing the response score for each image window needs only a fixed small number of atomic operations. It is easy to see that the number of positions at each quantized scale and aspect ratio is O(N), where N is the number of pixels in the image. Thus, computing response scores at all scales and aspect ratios also has computational complexity O(N). Furthermore, extracting the BING feature and computing the response score at each potential position (i.e., an image window) can be calculated using information from its two neighbors to the left and above. This means that the space complexity is also O(N).

For training, we flip the images and the corresponding annotations. Positive samples are boxes that have IoU overlap of at least 0.5 with a ground truth box, while negative samples are boxes whose maximum IoU overlap with any ground truth box is less than 0.5.
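For reference, the sampling rule in code (a minimal sketch; boxes are (x1, y1, x2, y2) tuples):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / float(union)

def label_sample(box, gt_boxes, thr=0.5):
    """+1 if the box overlaps some ground truth with IoU >= thr, else -1."""
    best = max((iou(box, g) for g in gt_boxes), default=0.0)
    return 1 if best >= thr else -1
```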

Table 1 Average number of atomic operations for computing the objectness of each image window at different stages: calculating normed gradients, extracting BING features, and getting the objectness score

Some window sizes whose aspect ratios are too large are ignored, as there are too few training samples (fewer than 50) in VOC2007 for each of them. Our training on 2501 VOC2007 images takes only 20 seconds (excluding XML loading time).

We further illustrate in Table 2 how different approximation levels influence the result quality. From this comparison, we decided to use N_w = 2, N_g = 4 in all further experiments.

    5.2.2 Implementation of BING-E

In BING-E, removing small BING windows, with W_o < 30 or H_o < 30, hardly degrades the proposal quality while halving the runtime spent on BING processing. When using gSLICr [39] to segment images into superpixels, we set the expected size of superpixels to 4×4. In the graph-based segmentation system [41, 42], we use the scale parameter k = 120, and the minimum number of superpixels in each produced segment is set to 6. We utilize the default multi-thresholds of MTSE: {0.1, 0.2, 0.3, 0.4, 0.5}. After refinement, non-maximal suppression (NMS) is performed to obtain the final boxes, with the NMS IoU threshold set to 0.8. All experiments used these settings.

    5.3 PASCAL VOC2007

Table 2 Average result quality (DR using 1000 proposals) of BING at different approximation levels, measured by N_w and N_g (Section 3.4). N/A represents unbinarized

5.3.1 Results

As demonstrated by Refs. [2, 20], a small set of coarse locations with high detection recall (DR) is sufficient for effective object detection, and it allows expensive features and complementary cues to be used in subsequent detection to achieve better quality and higher efficiency than traditional methods. Thus, we first compare our method with some competitors using detection recall metrics. Figure 3(a) shows detection recall when varying the IoU overlap threshold, using 1000 proposals. EdgeBoxes and MCG outperform many other methods in all cases. RPN achieves very high performance when the IoU threshold is less than 0.7, but then drops rapidly. Note that RPN is the only deep learning based method amongst these competitors. BING's performance is not competitive when the IoU threshold increases, but BING-E provides close to the best performance. It should be emphasized that both BING and BING-E are more than 100 times faster than most popular alternatives [20, 21, 34, 36] (see details in Table 3). The performance of BING and CSVM [44] almost coincides in all three subfigures, but BING is 100 times faster than CSVM. The significant improvement from BING to BING-E illustrates that BING is a strong basis that can be extended and improved in various ways. Since BING is able to run at about 300 fps, its variants can still be very fast. For example, BING-E can generate competitive candidates at over 200 fps, far faster than most other detection algorithms.

Figures 3(b)–3(d) show detection recall and MABO versus the number of proposals (#WIN) respectively. When the IoU threshold is 0.5, both BING and BING-E perform very well; when the number of candidates is sufficient, BING and BING-E outperform all other methods. In Fig. 3(c), the recall curve of BING drops significantly, as it does in the MABO evaluation. This may be because the proposal localization quality of BING is poor. However, the performance of BING-E is consistently close to the best, indicating that it overcomes BING's localization problem.

Table 3 Detection recall (%) using different IoU thresholds and #WIN on the VOC2007 test set

Fig. 3 Testing results on the PASCAL VOC2007 test set: (a) object detection recall versus IoU overlap threshold; (b, c) recall versus the number of candidates at IoU thresholds 0.5 and 0.7 respectively; (d) MABO versus the number of candidates, using at most 1000 proposals.

We show a numerical comparison of recall vs. #WIN in Table 3. BING-E consistently performs better than most competitors, and BING and BING-E are clearly faster than all of the other methods. Although EdgeBoxes, MCG, and SelectiveSearch perform very well, they are too slow for many applications. In contrast, BING-E is more attractive. It is also interesting to find that the detection recall of BING-E increases by 46.1% over BING using 1000 proposals with IoU threshold 0.7, which suggests that the accuracy of BING has much room for improvement through postprocessing. Table 4 compares ABO & MABO scores with the competitors. MCG always outperforms the others by a big gap, but BING-E is competitive with all other methods.

Since proposal generation is usually a preprocessing step in vision tasks, we fed candidate boxes produced by the objectness methods into the fast R-CNN [4] object detection framework to test the effectiveness of proposals in practical applications. The CNN model of fast R-CNN was retrained using boxes from the respective methods. Table 5 shows the evaluation results. In terms of MAP (mean average precision), the overall detection rates of all methods are quite similar. RPN performs slightly better, while our BING-E method gives very close to the best performance. Although MCG almost dominates the recall, ABO, and MABO metrics, it does not achieve the best performance on object detection, and is worse than BING-E. In summary, we may say that BING-E provides state-of-the-art generic object proposals at a much higher speed than other methods. Finally, we illustrate sample results of varying complexity provided by our improved BING-E method for VOC2007 test images in Fig. 4, demonstrating our high-quality proposals.

Table 4 ABO & MABO (%) using at most 1000 proposals per image on the VOC2007 test set

    5.3.2 Discussion

In order to perform further analysis, we divided the ground truths into different sets according to their window sizes, and tested some of the most competitive methods on these sets. Table 6 shows the results. When the ground truth area is small, BING-E performs much worse than the other methods. As the ground truth area increases, the gap between BING-E and other state-of-the-art methods gradually narrows, and BING-E outperforms all of them on the recall metric when the area is larger than 2¹². Figure 5 shows some failing examples produced by BING-E. Note that almost all falsely detected objects are small. Such small objects may have blurred boundaries, making them hard to distinguish from the background.

Table 5 Detection average precision (%) using fast R-CNN on the VOC2007 test set with 1000 proposals

Table 6 Recall/MABO (%) vs. area on the VOC2007 test set with 1000 proposals and IoU threshold 0.5

    Fig.4 True positive object proposals for VOC2007 test images using BING-E.

Note that MCG achieves much better performance on small objects, and this may be the main cause of the drop in detection rate when using MCG in the fast R-CNN framework. Fast R-CNN uses the VGG16 [83] model, in which the convolutional feature maps are pooled several times. A feature map will be just 1/2⁴ the size of the original object when it arrives at the last convolutional layer of VGG16, and will be too coarse to classify such small instances. Thus, using MCG proposals to retrain the CNN model may confuse the network because of the small object proposals detected. As a result, MCG does not achieve the best performance in the object detection task although it outperforms the others on the recall and MABO metrics.

Fig. 5 Some failure examples of BING-E. Failure means that the overlap between the best detected box (green) and the ground truth (red) is less than 0.5. All images are from the VOC2007 test set.

5.4 Microsoft COCO

In order to test the generalization ability of the various methods, we extensively evaluated them on the COCO validation set using the same parameters as for the VOC2007 dataset, without retraining. As this dataset is so large, we only compared against some of the more efficient methods.

Fig. 6 Testing results on the COCO validation dataset: (a) object detection recall versus IoU overlap threshold; (b, c) recall versus the number of candidates at IoU thresholds 0.5 and 0.7 respectively; (d) MABO versus the number of candidates, using at most 1000 proposals.

Figure 6(a) shows object detection recall versus IoU overlap threshold using different numbers of proposals. MCG always dominates the performance, but its low speed makes it unsuited to many vision applications. EdgeBoxes performs well when the IoU threshold is small, and LPO performs well for large IoU thresholds. The performance of BING-E is slightly below the state of the art. BING, Rahtu, and Objectness all struggle on the COCO dataset, suggesting that these methods may not be robust in complex scenes. RPN performs very poorly on COCO, which means it is highly dependent on the training data. As noted in Ref. [82], a good object proposal algorithm should be category independent. Although RPN achieves good results on VOC2007, it is not consistent with the goal of designing a category independent object proposal method.

Figures 6(b)–6(d) show recall and MABO when varying the number of proposals. Clearly, RPN suffers a big drop in performance compared with VOC2007. Its recall at IoU 0.5 and its MABO are even worse than those of BING. BING and BING-E are very robust when transferring to different object classes. Table 7 shows a statistical comparison. Although BING and BING-E do not achieve the best performance, they obtain very high computational efficiency with only a moderate drop in accuracy. The significant improvement from BING to BING-E suggests that BING would be a good basis for combination with other, more accurate bounding box refinement methods in cases where the increased computational load is acceptable.

Table 7 Detection recall (%) using different IoU thresholds and #WIN on the COCO validation set

    6 Conclusions and future work

    6.1 Conclusions

We have presented a surprisingly simple, fast, and high-quality objectness measure using 8×8 binarized normed gradients (BING) features. Computing the objectness of each image window at any scale and aspect ratio needs only a few atomic (add, bitwise, etc.) operations. To improve the localization quality of BING, we further proposed BING-E, which incorporates an efficient image segmentation strategy. Evaluation results using the most widely used benchmarks (VOC2007 and COCO) and evaluation metrics show that BING-E can generate state-of-the-art generic object proposals at a significantly higher speed than other methods. Our evaluation demonstrates that BING is a good basis for object proposal generation.

    6.2 Limitations

BING and BING-E predict a small set of object bounding boxes. Thus, they share similar limitations with all other bounding box based objectness measure methods [2, 44] and classic sliding window based object detection methods [84, 85]. For some object categories (snakes, wires, etc.), a bounding box might not localize object instances as well as a segmentation region [21, 47, 75].

    6.3 Future work

The high quality and efficiency of our method make it suitable for many realtime vision applications and for uses based on large scale image collections (e.g., ImageNet [86]). In particular, the binary operations and memory efficiency make our BING method suitable for low-power devices [79, 80]. Our speed-up strategy of reducing the number of tested windows is complementary to other speed-up techniques which try to reduce the subsequent processing time required for each location. The efficiency of our method removes the computational bottleneck of proposal based vision tasks such as object detection [4, 87], enabling real-time high-quality object detection.

We have demonstrated how to generate a small set (e.g., 1000) of proposals to cover nearly all potential object regions, using very simple BING features and a postprocessing step. It would be interesting to introduce other additional cues to further reduce the number of proposals while maintaining a high detection rate [88, 89], and to explore more applications [23–27, 29, 90] using BING and BING-E. To encourage future work, the source code will be kept up-to-date at http://mmcheng.net/bing.

    Acknowledgements

This research was supported by the National Natural Science Foundation of China (Nos. 61572264, 61620106008).
