
    Robust tracking-by-detection using a selection and completion mechanism

Computational Visual Media, 2017, Issue 3


Ruochen Fan1, Fang-Lue Zhang2, Min Zhang3 (✉), and Ralph R. Martin4

© The Author(s) 2017. This article is published with open access at Springerlink.com.

It is challenging to track a target continuously in videos with long-term occlusion, or when objects leave and later re-enter the scene. Existing tracking algorithms combined with online-trained object detectors perform unreliably in complex conditions, and can only provide discontinuous trajectories with jumps in position when the object is occluded. This paper proposes a novel tracking-by-detection framework using selection and completion to solve the above problems. It has two components, tracking and trajectory completion. An offline-trained object detector, based on a highly accurate deep learning model, localizes objects in the same category as the object being tracked. An object selector determines which detected object should be used to re-initialize a traditional tracker; as the selector is trained online, the framework remains adaptable. During completion, a predictive non-linear autoregressive neural network completes any discontinuous trajectory. The tracking component is an online real-time algorithm, while the completion component is an after-the-event mechanism. Quantitative experiments show a significant improvement in robustness over prior state-of-the-art methods.

Keywords: object tracking; detection; proposal selection; trajectory completion

    1 Introduction

Object tracking aims to acquire the moving trajectories of objects of interest in video, and is a fundamental problem in computer vision. It plays a key role in applications like surveillance analysis [1, 2] and traffic monitoring [3, 4]. Decades of research have led to tremendous progress in this field. However, there is still a long way to go to achieve satisfactory results in many challenging videos with, e.g., violent shaking, long-term occlusion, or objects which leave then re-enter the scene. Traditional tracking methods achieve high accuracy in experimental tests, but perform poorly on practical problems. In most methods, object features are extracted in each frame and used to search for the object in the subsequent frame [5, 6]. Errors can accumulate in this process. If occlusion or frame skipping occurs, tracking will fail because of the rapid change of appearance features in local windows.

Combining detection with tracking is a feasible solution to these problems [7]. In the tracking process, errors can accumulate, but a detector can be used to localize the object being tracked and re-initialize the tracker. Detection accuracy is essential, so a high decision threshold is set for the detector. This means that detection results are accurate but frequently unavailable. In recent years, deep learning has made significant strides in object detection. However, adaptive online training is still an open problem. The computational requirements of training and the lack of training data make it hard to recognize a specific target amongst other objects of the same category in a scene. Furthermore, most tracking frameworks can only provide a discontinuous trajectory with jumps in position when an object is occluded. However, in application scenarios such as safety monitoring, reliable analysis requires inferring the missing parts of occluded trajectories.

For these reasons, we have designed a novel framework that decomposes the tracking task into two parts: tracking and trajectory completion. During the tracking stage, three steps are invoked for every frame, using a simple tracker, a detection module, and a selection module. This allows the object to be tracked throughout the video. The tracker attempts to follow the object from one frame to the next. In this process, tracking errors may accumulate, and if the object is occluded for a long time, the tracker will fail to follow it. We therefore use the object detector and the object selector to determine the accurate location of the object and re-initialize the tracker. The object detector's job is to localize objects of the same kind as the object being tracked. The task of the object selector is to discriminate between them and determine which object should be used to re-initialize the tracker. For accuracy, we set a high decision threshold for both the object detector and the object selector, so the recall rate is low. Thus, the detector and selector cannot provide the object's location in every frame. However, once the object is localized, the tracker is re-initialized; a sketch of this per-frame loop is given below.
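To make the control flow concrete, here is a minimal sketch of the per-frame loop, assuming hypothetical `tracker`, `detector`, and `selector` objects standing in for the compressive tracker, the Faster R-CNN detector, and the online SVM selector described later; only the loop structure comes from the paper.

```python
def track(frames, init_box, tracker, detector, selector):
    """Per-frame loop: track, detect, select, and re-initialize on success.

    tracker  -- follows the target frame to frame (e.g., compressive tracking)
    detector -- returns candidate boxes of the target's category (low recall)
    selector -- picks the box matching the specific target, or None
    """
    trajectory = []
    box = init_box
    for frame in frames:
        box = tracker.update(frame, box)        # may drift under occlusion
        proposals = detector.detect(frame)      # high decision threshold
        chosen = selector.select(frame, proposals, box)
        if chosen is not None:                  # confident match to the target:
            box = chosen                        # re-initialize the tracker here
            tracker.reset(frame, box)
            selector.update(frame, box, proposals)   # online training step
        trajectory.append(box)
    return trajectory
```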

During the completion phase, we use a predictive neural network to complete the discontinuous trajectory. While the missing parts of the trajectory could be interpolated by a simple curve, e.g., a Hermite cubic spline, this is not a good approach, as the missing trajectory may not be smooth or regular. We instead use a neural network, which is capable of learning the more complex behaviour of a real trajectory.

Our experimental results show that our method outperforms previous methods in cases in which the target objects are occasionally occluded, and can generate reliable trajectories for such objects.

    2 Related work

    2.1 Object tracking

Object tracking is the task of estimating the trajectory of a moving target in video. Traditional tracking algorithms start from object initialization, in which the target is manually specified using a bounding box or ellipse. Motion estimation is the key phase in tracking. After the object has been modeled, particle filters [8] can be used to estimate object motion. There are two kinds of object modeling approaches: global object representations and local object representations. A variety of global visual representation methods are used for object tracking. Santner et al. [9] adapted an optical-flow-based representation and built a tracker using a single template model. Hedayati et al. [10] combined optical flow with mean shift of color signature to track multiple objects. Optical flow can provide spatiotemporal features of an object, but it cannot be applied to scenes with rapid changes in illumination. Zhao et al. [11] represented objects by color distribution; a differential earth mover's distance algorithm was used to calculate the distance between two distributions. Sun et al. [12] used fragment-based features, and handled occlusion by solving a two-stage optimization problem. Hu et al. [13] proposed an active contour-based visual tracking framework, in which colors, shapes, and motions are combined to evolve the contour. Jepson et al. [14] used object representations based on filter responses from a steerable pyramid. Beyond traditional methods, neural networks can be used to perform object tracking without depending on hand-crafted features. Wang et al. [15] proposed an online training network to transfer pre-trained deep features for tracking.

In contrast to global visual representations, local visual representations based on local appearance structures can be more robust to object deformation and illumination changes. Wang et al. [16] segmented superpixel regions surrounding the target, and then represented each superpixel by a feature vector; an appearance model based on superpixels was used to distinguish the object from its background. The scale-invariant feature transform (SIFT) [17] is a widely used local feature extraction algorithm; some approaches [18–20] use it to match regions of interest between frames in a tracking framework. Static and motion saliency features [21, 22] and corner features [23] have also been commonly used in object tracking. However, local representations rely on rich texture, and are unstable for low-resolution images. Simple motion estimation tracking suffers from error accumulation and cannot deal with object occlusion or re-entry. Combining tracking with detection is therefore worthwhile.

    2.2 Tracking with detection

Some work has applied object detection to tracking systems, and these approaches are most similar to ours. In Ref. [24], the identity of the tracked object was verified by a validator; if verification failed, an offline-trained object detector searched the entire image exhaustively. Li et al. [25] used a probabilistic model combining conventional tracking and object detection to track objects in low frame rate (LFR) video, using a cascade particle filter. Okuma et al. [26] focused on tracking multiple objects which can leave and enter the scene, using a combination of mixture particle filters and Adaboost; however, there is no discrimination between the objects tracked. Pedestrian detectors can also be used to improve robustness in multi-object tracking [27]. All of the detectors used in the above papers were trained offline. Although offline-trained classifiers may perform better than real-time detectors due to ample training samples and sufficient training time, they cannot distinguish between objects of the same category. For example, a detection mechanism can localize all pedestrians in a frame, but it is unable to distinguish a specific person. Thus, it is hard for the detectors in the above papers to rectify a tracker following a specific object.

Grabner and Bischof [28] used a real-time Adaboost feature selection model for object detection. This work reduced the computational complexity of Adaboost significantly, but because of the limited number of weak classifiers, the accuracy of the detector was low. Babenko et al. [29] trained an online object classifier which was updated by the output of the tracker; a multiple-instance learning approach was used to reduce ambiguities in the detection phase. Tang et al. [30] treated tracking as a foreground–background classification problem; online support vector machines were built to recognise different features using a co-training framework. Online detectors are more adaptable, and are able to track a specific target amongst many objects of the same class. However, these classifiers perform worse than offline detectors in terms of accuracy, and training data extracted from real-time video have limited reliability. In this paper, we integrate a pre-trained model with a classifier which is updated in real time, to overcome this problem.

    3 Tracking by detection and selection

Our framework has two phases: tracking and trajectory completion. The former can track the target even in the presence of frequent and long-term occlusion, or object absences, while the latter can complete trajectories with missing segments. The tracking part of our tracking-by-detection using selection and completion (TDSC) framework can track a specific target in videos with long-term occlusion. The user labels the object to be tracked in the first frame, and our tracking algorithm then produces the location of this target in every frame. If the target is occluded or goes out of sight, the algorithm outputs the location where the target last appeared. After the target reappears, the algorithm finds the target and outputs its correct location. The TDSC framework is able to distinguish a specific object amongst others of the same kind, for example, a specific pedestrian among many people. So, even though TDSC is designed to track a single object, we can also use it to deal with the multiple object tracking problem by running multiple simultaneous instances.

A block diagram of our framework is shown in Fig. 1. In this section, we consider the tracking phase, including the object detector, object selector, and tracker. The following section considers trajectory completion.

Both the detector and the tracker receive video frames as input. The object detector can localize objects in the same category as the target being tracked. However, as a classifier, the detector has two main shortcomings: (i) it will at times return false positives, and (ii) it cannot discriminate between objects of the same kind. Thus, any objects detected are next filtered by the object selector to remove false positives and objects other than the specific desired target.

Our work aims to build a robust framework for tracking objects with long-term occlusions; this paper does not focus on the design of the tracker itself. We thus simply use compressive tracking [6] in our implementation; it is often employed as a benchmark in comparative experiments because of its effectiveness and efficiency.

Fig. 1 Data flow between the components of TDSC.

A detector produces coordinates and categories of objects. Object detection is a fundamental problem in computer vision. To obtain high accuracy, our framework uses an offline-trained detector which has been exposed to abundant training samples, without restriction on training time. In recent years, convolutional neural networks (CNNs) have become widely used in this field, as they give higher detection performance than methods based on low-level features such as histograms of oriented gradients (HoG) [31] or SIFT features [32]. In this paper, we employ Faster R-CNN [33] as our object detector. Region proposal computation is a bottleneck for Fast R-CNN [34]; Faster R-CNN overcomes this problem by using a region proposal network which shares convolutional features with the detection network. It achieves near real-time detection rates and detects multiple objects in specific classes with high accuracy.
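The paper does not give its detector implementation; as a hedged illustration, the pre-trained Faster R-CNN shipped with torchvision can play this role. The COCO-pretrained weights and the 0.9 score threshold below are our assumptions, not values from the paper.

```python
import torch
import torchvision

# COCO-pretrained Faster R-CNN with a ResNet-50 FPN backbone (torchvision >= 0.13)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(frame_tensor, category_id, score_threshold=0.9):
    """Return high-confidence boxes of one category (the 'object proposals').

    frame_tensor: float tensor of shape (3, H, W) with values in [0, 1].
    category_id:  COCO label id of the tracked object's class (e.g., 1 = person).
    """
    with torch.no_grad():
        out = model([frame_tensor])[0]   # dict with 'boxes', 'labels', 'scores'
    keep = (out["labels"] == category_id) & (out["scores"] > score_threshold)
    return out["boxes"][keep]            # (N, 4) boxes as (x1, y1, x2, y2)
```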

The approach used by our object selector is to extract feature vectors from objects found by the detector, which we call object proposals, and use a categorization model to find positive proposals. If an object proposal is recognized as being of the same category as the object being tracked, we call it a positive proposal. The feature vector is based on the color and shape of the object. A color histogram represents the distribution of colors in an object, while HoG features represent shape and contextual information. The first step in calculating the HoG descriptor is to compute image gradient values. The region of interest containing an object proposal is divided into 10×10 cells. Each pixel within a cell casts a weighted vote for an orientation-based histogram channel, based on the magnitude and orientation of the gradient vector. To counter any changes in illumination over space, cells are grouped into blocks in which we locally normalize the gradient strengths. The HoG descriptor is then extracted by concatenating the normalized cell histograms from all blocks. For each object proposal, the feature vector x is calculated by combining the color histogram and the HoG descriptor. This is used by a classifier to assign a label y to each object, either +1 or −1, indicating the target object or some other object respectively.
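As a sketch of this feature extraction, the following uses OpenCV for the color histogram and scikit-image for the HoG descriptor. The patch size, histogram bin counts, and HoG cell/block parameters are illustrative assumptions; the paper only specifies a 10×10 cell grid.

```python
import cv2
import numpy as np
from skimage.feature import hog

def proposal_features(frame_bgr, box):
    """Concatenate a color histogram and a HoG descriptor for one proposal."""
    x1, y1, x2, y2 = [int(v) for v in box]
    patch = frame_bgr[y1:y2, x1:x2]
    patch = cv2.resize(patch, (64, 64))            # fixed size so vectors align

    # Color histogram over the three BGR channels (8 bins each, assumed)
    hist = cv2.calcHist([patch], [0, 1, 2], None,
                        [8, 8, 8], [0, 256, 0, 256, 0, 256]).flatten()
    hist /= hist.sum() + 1e-8                      # normalize for scale invariance

    # HoG over the gray patch; block normalization counters illumination change
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), block_norm="L2-Hys")

    return np.concatenate([hist, hog_vec])         # feature vector x
```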

For simplicity and speed, we use a linear support vector machine (SVM) as the classifier. An SVM provides a method to calculate the hyperplane that optimally separates two high-dimensional classes of objects. The hyperplane is given by
$$\boldsymbol{\omega}^{\top}\mathbf{x} + b = 0 \tag{1}$$
where ω is the normal vector to the hyperplane and b is the hyperplane offset from the origin. Finding this hyperplane is a convex optimization problem. To be able to handle data which are not linearly separable, we introduce a soft margin, whereupon the objective function to be minimized is
$$\frac{1}{2}\|\boldsymbol{\omega}\|^{2} + C\sum_{i}\xi_{i}, \quad \text{subject to } y_{i}(\boldsymbol{\omega}^{\top}\mathbf{x}_{i} + b) \ge 1 - \xi_{i},\ \xi_{i} \ge 0 \tag{2}$$
where the ξ_i are slack variables and C controls the trade-off between margin width and classification error.

Doing so gives a hyperplane which can be used for classifying the feature vectors. However, this does not yet take into account temporal coherence.

In the case of continuous successful detection, the current target location should be close to the previous one. If an object proposal is far from the target's location in the previous frame, it is unlikely to be the correct proposal, so distance should also be considered by our object selector. However, if the object has been absent for a while, it is likely to be further away.

In a standard SVM, a new feature vector x is classified by computing
$$f(\mathbf{x}) = \operatorname{sign}(\boldsymbol{\omega}^{\top}\mathbf{x} + b) \tag{3}$$
Taking distance as a penalty factor, given an absence of detection output for T frames, the classification formula can now be written as
$$f(\mathbf{x}) = \operatorname{sign}\!\left(\boldsymbol{\omega}^{\top}\mathbf{x} + b - \left(\frac{d}{\mu T}\right)^{2}\right) \tag{4}$$
where
$$d = \sqrt{(x_{c}-x_{p})^{2} + (y_{c}-y_{p})^{2}}$$
is the distance between the current object proposal location (x_c, y_c) and the previous target location (x_p, y_p), and the constant μ is a distance threshold set by experimental experience. We can see from Eq. (4) that we use a quadratic form of distance as the penalty factor. If the current object proposal is far from the previous object, it is highly unlikely to be the correct location. As the distance increases, the penalty should become dominant, so the penalty factor is made a quadratic function of distance.

In reality, the appearance of the object can change during the tracking process; for example, a pedestrian may slowly turn around. Although only the initial bounding box that the user draws is completely reliable, we cannot train our selector using only the initial data, because of this appearance change. To provide greater adaptability, the object selector is trained online. In the initial stage, the user draws a rectangle to specify the object to track, and draws another rectangle as a negative sample, to initialize the SVM. Once the initial SVM model has been established, positive and negative samples are extracted during the selection process. Afterwards, online training is carried out continuously in order to adapt to changes in the appearance of the target.
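The paper does not name an online SVM solver; as a minimal sketch, scikit-learn's SGDClassifier with a hinge loss behaves as an online linear SVM via partial_fit. The distance penalty of Eq. (4) is applied on top of the decision value; the penalty form follows our reconstruction above and the value of μ is an assumption.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

class OnlineSelector:
    """Online linear SVM selector with a distance penalty (sketch)."""

    def __init__(self, mu=50.0):
        self.mu = mu                               # distance threshold (assumed)
        self.svm = SGDClassifier(loss="hinge")     # hinge loss = linear SVM
        self.initialized = False

    def update(self, X, y):
        """Train incrementally on feature vectors X with labels y in {+1, -1}."""
        self.svm.partial_fit(X, y, classes=np.array([-1, 1]))
        self.initialized = True

    def select(self, feats, centers, prev_center, frames_absent):
        """Return the index of the accepted proposal, or None."""
        if not self.initialized:
            return None
        scores = self.svm.decision_function(feats)     # ω·x + b per proposal
        d = np.linalg.norm(np.asarray(centers) - prev_center, axis=1)
        T = max(frames_absent, 1)
        penalized = scores - (d / (self.mu * T)) ** 2  # Eq. (4), as reconstructed
        best = int(np.argmax(penalized))
        return best if penalized[best] > 0 else None   # high decision threshold
```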

    4 Trajectory completion

When the object being tracked is occluded, the tracker cannot produce correct coordinates, and a sudden, significant change in object coordinates is inevitable when the object becomes visible again after occlusion. We can therefore determine that the object has been occluded by detecting an abrupt position change. Our framework does not rely on the tracker itself to detect occlusion, because the tracker may fail to recognize occlusion when the object is indistinct and hard to recognize. A small check of the kind sketched below suffices.
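A hedged sketch of this occlusion test follows; the jump threshold is an assumption, as the paper does not give one.

```python
import math

def occlusion_boundary(prev_center, cur_center, max_jump=40.0):
    """Flag an occlusion boundary when the target's position jumps abruptly.

    prev_center, cur_center: (x, y) target centers in consecutive frames.
    max_jump: displacement in pixels treated as physically implausible (assumed).
    """
    dx = cur_center[0] - prev_center[0]
    dy = cur_center[1] - prev_center[1]
    return math.hypot(dx, dy) > max_jump
```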

When occlusion has been detected, a trajectory completion mechanism is used to correct the discontinuous trajectory. Trajectory completion is a temporal extrapolation problem. Artificial neural networks are among the most accurate and widely used forecasting models, capable of identifying complex nonlinear relationships between input and output data. Non-linear autoregressive (NAR) neural networks have proved useful for forecasting data with complicated patterns [35]. Completing the tracked object's coordinates is also a data forecasting problem, so we employ a three-layer NAR neural network for trajectory completion. Compared with an interpolation method such as spline fitting, an NAR neural network can produce a predicted trajectory without making any assumption about the type of movement the object is undergoing, and need not assume a smooth trajectory.

    An NAR network has linear activation functions for the output layer and non-linear logistic activation functions for the hidden layer.Thus our network performs a non-linear functional mapping from the past object coordinates to future locations.

Let x_t be the horizontal coordinate of the target at time t. The mapping performed by the network is
$$x_{t} = f(x_{t-1}, x_{t-2}, \ldots, x_{t-l}; \mathbf{w}) \tag{5}$$
where w are the connection weights of this network and l is the maximum time delay for the input data.

We can see that this network is a nonlinear autoregressive model. The structure of this autoregressive neural network is shown in Fig. 2.

Fig. 2 Autoregressive neural network structure.

In the majority of cases, trajectories of objects are continuous and smooth, such as when tracking cars and pedestrians. In order to make full use of the known information and improve the continuity and smoothness of the trajectories, forecasting is carried out in two directions. The coordinates before occlusion are input to the NAR neural network as training data in time sequence, and the prediction output is denoted by {x_t^+}; the coordinates after occlusion are also input to the neural network as training data in the reverse direction, and the prediction output is denoted by {x_t^-}. The final forecast result {x_t} blends the two predictions, weighting each by its proximity to the corresponding end of the gap so that the completed trajectory remains continuous at both ends:
$$x_{t} = \frac{(T_{2}-t)\,x_{t}^{+} + (t-T_{1})\,x_{t}^{-}}{T_{2}-T_{1}}, \qquad T_{1} < t < T_{2} \tag{6}$$
where T_1 and T_2 are the last frame before and the first frame after the occlusion.
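As a sketch of this bidirectional completion, the following uses scikit-learn's MLPRegressor as the three-layer NAR network (logistic hidden units, linear output, matching the description above). The lag length, hidden size, and the proximity-weighted blend are assumptions; each coordinate (horizontal and vertical) is completed independently.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_nar(series, lag=5):
    """Fit a NAR model x_t = f(x_{t-1}, ..., x_{t-lag}) on a 1-D series."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                       max_iter=5000)
    net.fit(X, y)
    return net

def forecast(net, history, steps, lag=5):
    """Recursively predict `steps` future values from the last `lag` samples."""
    window = list(history[-lag:])
    out = []
    for _ in range(steps):
        nxt = net.predict(np.array([window]))[0]
        out.append(nxt)
        window = window[1:] + [nxt]
    return out

def complete_gap(before, after, gap_len, lag=5):
    """Blend a forward and a time-reversed backward forecast across the gap."""
    fwd = forecast(fit_nar(before, lag), before, gap_len, lag)
    bwd = forecast(fit_nar(after[::-1], lag), after[::-1], gap_len, lag)[::-1]
    w = np.linspace(1.0, 0.0, gap_len)        # proximity weights (assumed blend)
    return w * np.array(fwd) + (1.0 - w) * np.array(bwd)
```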

    5 Experimental results

    5.1 Dataset preparation

Several datasets exist for benchmarking visual tracking, such as VOT (http://www.votchallenge.net/vot2016/dataset.html), VTB (http://cvlab.hanyang.ac.kr/tracker_benchmark/index.html), and MOT (http://motchallenge.net/data/MOT16/). However, most sequences in these datasets have no object occlusion, and even in the video samples in which objects are blocked, the occlusion spans are short. In order to evaluate tracking performance in sophisticated circumstances such as long and frequent occlusion, we introduce a more challenging dataset. Two sequences in this dataset are selected from the VTB benchmark, and we have captured four more difficult samples with significant occlusion.

For quantitative evaluation, we use a protocol similar to that used in the MOT benchmark. Tracking starts from an initial bounding box in the first frame. Both the ground truth and the tracking results are sequences of bounding boxes. If the overlapping area between a tracking bounding box and a ground truth bounding box is larger than an overlap threshold, the tracking result in that frame is deemed successful. For every tracking framework, we plot a success rate curve against the overlap threshold. The overall performance of a tracker can be measured by the AUC (area under the curve) criterion. In order to measure the ability to handle occlusion, re-initialization is not performed during tracking. The sketch below illustrates the computation.
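A minimal sketch of this evaluation protocol follows; overlap is computed here as intersection-over-union, a common reading of "overlapping area larger than a threshold", and the 101-point threshold grid is an assumption.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def success_curve(pred_boxes, gt_boxes, thresholds):
    """Fraction of frames whose overlap exceeds each threshold."""
    overlaps = np.array([iou(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return np.array([(overlaps > t).mean() for t in thresholds])

# AUC criterion: area under the success curve over overlap thresholds in [0, 1]
thresholds = np.linspace(0.0, 1.0, 101)     # 101-point grid (assumed)
# curve = success_curve(pred_boxes, gt_boxes, thresholds)
# auc = np.trapz(curve, thresholds)
```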

    5.2 Experiments on the tracking phase

We compare our proposed TDSC framework with six trackers: CT [6], KCF [36], CSK [37], IMT [38], SORT [39], and LCT [40]. All trackers were run with the same parameters on our dataset. Figure 3 shows tracking results for some test samples. From top to bottom these are: David, Jogging, Occlusion 1, Occlusion 2, Occlusion 3, and Frameskip. The first two are relatively simple scenes with short-term occlusion, and come from the VTB dataset. Occlusion 1 is a multi-object occlusion scene. Occlusion 2 includes long-term occlusion and shaking. Occlusion 3 is a long-term, multi-object occlusion scene. The last sample has missing frames in its sequence. The results reveal that while state-of-the-art tracking methods are able to handle short-term occlusion, they lose their targets under long-term occlusion. Our TDSC framework, however, can continue tracking through re-initialization.

Figure 4 presents the performance curves for these seven trackers on our dataset. The results indicate that the proposed tracker performs better than the other trackers in our experiments. To provide a quantitative analysis, Table 1 gives the AUC values for these seven trackers on our test dataset, for three specific conditions. The proposed TDSC framework shows a significant improvement over prior state-of-the-art methods.

The proposed framework achieves real-time processing. For a resolution of 576×432, traditional tracking and selection take only 3.9 ms and less than 1 ms respectively on an Intel Core i7 CPU.

    5.3 Experiments on the completion phase

Building a dataset is the first difficulty in trajectory completion experiments. For a video sample in which an object is physically occluded, we cannot acquire a complete trajectory, so it is hard to annotate the ground truth. Therefore, we capture video samples without occlusion, annotate the object trajectories as ground truth, and then draw synthetic obstacles to occlude the moving objects, as sketched below.
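A hedged sketch of this dataset construction follows, using OpenCV to paint an opaque rectangle over every frame; the obstacle geometry and color are illustrative assumptions.

```python
import cv2

def occlude(frames, obstacle=(200, 100, 320, 400), color=(40, 40, 40)):
    """Draw a filled rectangle (x1, y1, x2, y2) on each frame to hide the target.

    The original, un-occluded frames retain their annotated trajectories as
    ground truth; the occluded copies are what the tracker actually sees.
    """
    occluded = []
    for frame in frames:
        out = frame.copy()
        x1, y1, x2, y2 = obstacle
        cv2.rectangle(out, (x1, y1), (x2, y2), color, thickness=-1)  # filled
        occluded.append(out)
    return occluded
```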

We conducted experiments on two kinds of cases: straight trajectories and curved trajectories. Results are illustrated in Fig. 5. Blue lines are trajectories extracted by our tracking–detection–selection mechanism; because of occlusion, these trajectories are discontinuous. Yellow lines are trajectories predicted by the completion mechanism. Red crosses indicate trajectory ground truth. We use the average distance between points in the predicted trajectories and the ground truth trajectories for quantitative evaluation. The average distance in the straight-trajectory cases is 5 pixels (3% of the target height), and in the curved-trajectory cases, 10 pixels (14% of the target height). Our experiments furthermore show that our TDSC framework can output continuous trajectories in cases with occlusion.

    Table 1 Tracking performance measured by the AUC criterion

Fig. 3 Tracking results. Samples, top to bottom: David, Jogging, Occlusion 1 (multiple objects), Occlusion 2 (long-term occlusion and shaking), Occlusion 3 (long-term occlusion), and Frameskip. Yellow, blue, green, red, and purple rectangles represent the tracking output of TDSC, KCF, CSK, CT, and IMT respectively.

    6 Conclusions

In this paper, we have designed a novel framework to solve the problem of object tracking when long-term occlusion interferes with the tracking process. Continuous tracking is necessary for some realistic, difficult problems, especially safety monitoring. Our framework decomposes the task into two parts: tracking and trajectory completion. The object detector is a deep neural network model which localizes objects in the same category as the object being tracked. The object selector is based on an online-trained SVM model, and discriminates between the outputs of the object detector to determine which object should be used to re-initialize the tracker. Offline-trained and online-trained classifiers are combined for accuracy and flexibility. To obtain a continuous trajectory, we utilize a non-linear autoregressive neural network to complete the missing parts of trajectories extracted by the tracking component of TDSC. Quantitative experiments show our proposed framework improves upon prior state-of-the-art tracking methods and is able to output continuous trajectories.

Fig. 4 Overall performance curve.

Fig. 5 Trajectory completion results.

    Acknowledgements

This work was supported by the National Natural Science Foundation of China (Project No. 61521002), the General Financial Grant from the China Postdoctoral Science Foundation (Grant No. 2015M580100), a Research Grant of Beijing Higher Institution Engineering Research Center, and an EPSRC Travel Grant.

References

[1] Collins, R. T.; Lipton, A. J.; Fujiyoshi, H.; Kanade, T. Algorithms for cooperative multisensor surveillance. Proceedings of the IEEE Vol. 89, No. 10, 1456–1477, 2001.

[2] Greiffenhagen, M.; Comaniciu, D.; Niemann, H.; Ramesh, V. Design, analysis, and engineering of video monitoring systems: An approach and a case study. Proceedings of the IEEE Vol. 89, No. 10, 1498–1517, 2001.

[3] Kanhere, N. K.; Birchfield, S. T.; Sarasua, W. A. Vision based real time traffic monitoring. U.S. Patent 8,379,926. 2013.

[4] Morris, B. T.; Tran, C.; Scora, G.; Trivedi, M. M.; Barth, M. J. Real-time video-based traffic measurement and visualization system for energy/emissions. IEEE Transactions on Intelligent Transportation Systems Vol. 13, No. 4, 1667–1678, 2012.

[5] Rui, Y.; Chen, Y. Better proposal distributions: Object tracking using unscented particle filter. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 2, II-786–II-793, 2001.

[6] Zhang, K.; Zhang, L.; Yang, M.-H. Real-time compressive tracking. In: Computer Vision – ECCV 2012. Fitzgibbon, A.; Lazebnik, S.; Perona, P.; Sato, Y.; Schmid, C. Eds. Springer-Verlag Berlin Heidelberg, 864–877, 2012.

[7] Li, X.; Hu, W.; Shen, C.; Zhang, Z.; Dick, A.; van den Hengel, A. A survey of appearance models in visual object tracking. ACM Transactions on Intelligent Systems and Technology Vol. 4, No. 4, Article No. 58, 2013.

[8] Isard, M.; Blake, A. CONDENSATION—Conditional density propagation for visual tracking. International Journal of Computer Vision Vol. 29, No. 1, 5–28, 1998.

[9] Santner, J.; Leistner, C.; Saffari, A.; Pock, T.; Bischof, H. PROST: Parallel robust online simple tracking. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 723–730, 2010.

[10] Hedayati, M.; Cree, M. J.; Scott, J. Combination of mean shift of colour signature and optical flow for tracking during foreground and background occlusion. In: Image and Video Technology. Bräunl, T.; McCane, B.; Rivera, M.; Yu, X. Eds. Springer International Publishing Switzerland, 87–98, 2016.

[11] Zhao, Q.; Yang, Z.; Tao, H. Differential earth mover's distance with its applications to visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 32, No. 2, 274–287, 2010.

[12] Sun, C.; Wang, D.; Lu, H. Occlusion-aware fragment-based tracking with spatial-temporal consistency. IEEE Transactions on Image Processing Vol. 25, No. 8, 3814–3825, 2016.

[13] Hu, W.; Zhou, X.; Li, W.; Luo, W.; Zhang, X.; Maybank, S. Active contour-based visual tracking by integrating colors, shapes, and motions. IEEE Transactions on Image Processing Vol. 22, No. 5, 1778–1792, 2013.

[14] Jepson, A. D.; Fleet, D. J.; El-Maraghi, T. F. Robust online appearance models for visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 25, No. 10, 1296–1311, 2003.

[15] Wang, L.; Ouyang, W.; Wang, X.; Lu, H. STCT: Sequentially training convolutional networks for visual tracking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1373–1381, 2016.

[16] Wang, S.; Lu, H.; Yang, F.; Yang, M.-H. Superpixel tracking. In: Proceedings of the IEEE International Conference on Computer Vision, 1323–1330, 2011.

[17] Lowe, D. G. Object recognition from local scale-invariant features. In: Proceedings of the 7th IEEE International Conference on Computer Vision, Vol. 2, 1150–1157, 1999.

[18] Chen, A.-h.; Zhu, M.; Wang, Y.-h.; Xue, C. Mean shift tracking combining SIFT. In: Proceedings of the 9th International Conference on Signal Processing, 1532–1535, 2008.

[19] Fazli, S.; Pour, H. M.; Bouzari, H. Particle filter based object tracking with SIFT and color feature. In: Proceedings of the 2nd International Conference on Machine Vision, 89–93, 2009.

[20] Zhou, H.; Yuan, Y.; Shi, C. Object tracking using SIFT features and mean shift. Computer Vision and Image Understanding Vol. 113, No. 3, 345–352, 2009.

[21] Mahapatra, D.; Saini, M. K.; Sun, Y. Illumination invariant tracking in office environments using neurobiology-saliency based particle filter. In: Proceedings of the IEEE International Conference on Multimedia and Expo, 953–956, 2008.

[22] Zhang, G.; Yuan, Z.; Zheng, N.; Sheng, X.; Liu, T. Visual saliency based object tracking. In: Computer Vision – ACCV 2009. Zha, H.; Taniguchi, R.; Maybank, S. Eds. Springer-Verlag Berlin Heidelberg, 193–203, 2010.

[23] Kim, Z. W. Real time object tracking based on dynamic feature grouping with background subtraction. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–8, 2008.

[24] Williams, O.; Blake, A.; Cipolla, R. Sparse Bayesian learning for efficient visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 27, No. 8, 1292–1304, 2005.

[25] Li, Y.; Ai, H.; Yamashita, T.; Lao, S.; Kawade, M. Tracking in low frame rate video: A cascade particle filter with discriminative observers of different life spans. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 30, No. 10, 1728–1740, 2008.

[26] Okuma, K.; Taleghani, A.; de Freitas, N.; Little, J. J.; Lowe, D. G. A boosted particle filter: Multitarget detection and tracking. In: Computer Vision – ECCV 2004. Pajdla, T.; Matas, J. Eds. Springer-Verlag Berlin Heidelberg, 28–39, 2004.

[27] Leibe, B.; Schindler, K.; van Gool, L. Coupled detection and trajectory estimation for multi-object tracking. In: Proceedings of the IEEE 11th International Conference on Computer Vision, 1–8, 2007.

[28] Grabner, H.; Bischof, H. On-line boosting and vision. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 260–267, 2006.

[29] Babenko, B.; Yang, M.-H.; Belongie, S. Visual tracking with online multiple instance learning. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 983–990, 2009.

[30] Tang, F.; Brennan, S.; Zhao, Q.; Tao, H. Co-tracking using semi-supervised support vector machines. In: Proceedings of the IEEE 11th International Conference on Computer Vision, 1–8, 2007.

[31] Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 886–893, 2005.

[32] Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Region-based convolutional networks for accurate object detection and segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 38, No. 1, 142–158, 2016.

[33] Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. In: Proceedings of the Advances in Neural Information Processing Systems 28, 91–99, 2015.

[34] Girshick, R. Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision, 1440–1448, 2015.

[35] Chow, T. W. S.; Leung, C. T. Nonlinear autoregressive integrated neural network model for short-term load forecasting. IEE Proceedings - Generation, Transmission and Distribution Vol. 143, No. 5, 500–506, 1996.

[36] Henriques, J. F.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 37, No. 3, 583–596, 2015.

[37] Henriques, J. F.; Caseiro, R.; Martins, P.; Batista, J. Exploiting the circulant structure of tracking-by-detection with kernels. In: Computer Vision – ECCV 2012. Fitzgibbon, A.; Lazebnik, S.; Perona, P.; Sato, Y.; Schmid, C. Eds. Springer-Verlag Berlin Heidelberg, 702–715, 2012.

[38] Yoon, J. H.; Yang, M. H.; Yoon, K. J. Interacting multiview tracker. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 38, No. 5, 903–917, 2016.

[39] Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In: Proceedings of the IEEE International Conference on Image Processing, 3464–3468, 2016.

[40] Ma, C.; Yang, X.; Zhang, C.; Yang, M.-H. Long-term correlation tracking. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 5388–5396, 2015.

Ruochen Fan is a master candidate in the Department of Computer Science and Technology, Tsinghua University. He received his bachelor degree from Beijing University of Posts and Telecommunications in 2016. His research interest is computer vision.

Fang-Lue Zhang is a lecturer at Victoria University of Wellington. He received his doctoral degree from Tsinghua University in 2015 and his bachelor degree from Zhejiang University in 2009. His research interests include image and video editing, computer vision, and computer graphics.

Min Zhang is a postdoctoral fellow in the Center of Mathematical Sciences and Applications, Harvard University. She received her Ph.D. degree in computer science from Stony Brook University and another Ph.D. degree in mathematics from Zhejiang University. She is an expert in the fields of geometric modeling, medical imaging, graphics, visualization, machine learning, 3D technologies, etc.

Ralph R. Martin is a professor at Cardiff University. He obtained his Ph.D. degree from Cambridge University in 1983. He has published more than 300 papers and 15 books, covering such topics as solid and surface modeling, intelligent sketch input, geometric reasoning, reverse engineering, and computer graphics. He is a Fellow of the Learned Society of Wales, the Institute of Mathematics and its Applications, and the British Computer Society. He has served on the editorial boards of Computer-Aided Design, Computer Aided Geometric Design, and Geometric Models. He was recently awarded a Friendship Award, China's highest honor for foreigners.

Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.

1 Tsinghua University, Beijing 100084, China. E-mail: frc16@mails.tsinghua.edu.cn.

2 School of Engineering and Computer Science, Victoria University of Wellington, Wellington, New Zealand. E-mail: z.fanglue@gmail.com.

3 Center of Mathematical Sciences and Applications, Harvard University, Cambridge, Massachusetts, USA. E-mail: mzhang@math.harvard.edu (✉).

4 School of Computer Science and Informatics, Cardiff University, Cardiff, Wales, UK. E-mail: ralph.martin@cs.cardiff.ac.uk.

Manuscript received: 2017-02-05; accepted: 2017-04-07.
