
    Object tracking using a convolutional network and a structured output SVM

Computational Visual Media, 2017, Issue 4


© The Author(s) 2017. This article is published with open access at Springerlink.com


Junwei Li 1, Xiaolong Zhou 1, Sixian Chan 1, and Shengyong Chen 2


Object tracking has been a challenge in computer vision. In this paper, we present a novel method to model target appearance and combine it with structured output learning for robust online tracking within a tracking-by-detection framework. We take both convolutional features and hand-crafted features into account to robustly encode the target appearance. First, we extract convolutional features of the target by kernels generated from the initial annotated frame. To capture appearance variation during tracking, we propose a new strategy to update the target and background kernel pool. Secondly, we employ a structured output SVM for refining the target's location to mitigate uncertainty in labeling samples as positive or negative. Compared with existing state-of-the-art trackers, our tracking method not only enhances the robustness of the feature representation, but also uses structured output prediction to avoid relying on heuristic intermediate steps to produce labelled binary samples. Extensive experimental evaluation on the challenging OTB-50 video sequences shows competitive results in terms of both success and precision rate, demonstrating the merits of the proposed tracking method.

object tracking; convolutional network; structured learning; feature extraction

    1 Introduction

Visual tracking is a fundamental research problem in computer vision and robotics, with wide applications such as intelligent video surveillance, transportation monitoring, and robot–human interaction. In recent years, many excellent tracking algorithms have been proposed, but it remains a challenging problem for a tracker to handle occlusion, abrupt motion, appearance variation, and background clutter.

In this paper, we propose a novel tracking method which utilizes a discriminative convolutional network [1] and HOG descriptors [2] to encode target appearance, together with a structured output support vector machine (SO-SVM) to jointly estimate the target state. In the proposed method, tracking is formulated as binary classification and structured output tasks, to select the most likely target candidate and reject background patches. It uses an online trained structured output classifier within a particle filter framework. The convolutional filters for modeling target appearance are generated from the initial frame (annotated manually). We perform a soft-shrink operation on the output convolutional feature maps to enhance their robustness. One of the most significant advantages is that the convolutional filters are generated from both the target and its surrounding area, thus fully exploiting the local structure and internal geometric layout of the target. Additionally, our method employs an SO-SVM to overcome the drawback that samples used for training the classifier are all equally weighted, meaning that a negative example which overlaps significantly with the tracker's bounding box is treated the same as one which overlaps very little. Another advantage of the SO-SVM is that the labeler no longer labels samples as positive or negative based on intuition and heuristics.

The remainder of this paper is organized as follows. Section 2 gives a brief overview of related work. Details of our tracking method are described in Section 3. Section 4 reports qualitative and quantitative experimental results. We finally conclude this paper in Section 5.

    2 Related work

Most existing tracking methods belong to two categories: generative models and discriminative models. The generative approach formulates the tracking problem as minimizing reconstruction error, while the discriminative approach considers tracking as a binary classification task that separates the target from the background. From another perspective, a tracker can be decomposed into two components: an online-updated appearance model for feature extraction, and an observation model to find the most probable target transformation.

Recent tracking methods mainly focus on designing a robust appearance model [3] to capture target appearance variation. The most popular discriminative approach casts tracking as a foreground–background separation problem, performing tracking by learning a classifier using multiple instance learning [4], P-N learning [5], online boosting [6, 7], SVMs [8], structured output SVMs [9], CRFs [10], probability hypothesis density methods [11, 12], etc. These tracking methods first train a classifier online, inspired by statistical machine learning methods, to separate the target from the background surrounding the target location in the previous frame. Generative methods describe the target's appearance using generative models and search for target regions that best fit the model. Various generative target appearance modeling algorithms have been proposed using sparse representation [13, 14], density estimation [15, 16], and incremental subspace learning [17]. In Ref. [18], generative and discriminative models were combined for more accurate online tracking. Some efficient trackers have been proposed using hand-crafted features, including Haar-like feature histograms [4, 6, 9, 19], HOG descriptors [2], binary features [20], and covariance descriptors [21]. However, such trackers do not adapt well to target appearance variation.

To overcome the shortcomings of hand-crafted features in modeling object appearance, deep networks have been employed to directly learn features from raw data without resorting to manual intervention. Convolutional features have been used in many applications such as Ref. [22]. In Ref. [23], Li et al. used a convolutional neural network (CNN) for visual tracking with multiple image cues as inputs. In Ref. [24], Zhou et al. used an ensemble of deep networks in combination with an online boosting method for visual tracking. Reference [25] presented a human tracking algorithm that learns a specific feature extractor with CNNs. Large amounts of auxiliary data are required to train such deep networks offline; the pre-trained model is then used for online visual tracking. Wang and Yeung [26] developed a deep learning tracking method that uses stacked de-noising auto-encoders to learn generic features from a large number of auxiliary images. Reference [27] used a two-layer CNN to learn hierarchical features from auxiliary video sequences; it takes into account complicated motion transformations and appearance variations in visual tracking. A drawback of all the above frameworks is that they need a large amount of auxiliary data to pre-train a deep network model; such models can be highly specific and have poor adaptive ability.

In Ref. [1], Zhang et al. incorporated convolutional networks (with convolutional filters defined as normalized image patches from the first frame) which do not require auxiliary data to train the filters, and achieved state-of-the-art precision. Several tracking algorithms based on hand-crafted features have been developed within a multiple instance learning framework, aiming to improve the poor ability of hand-crafted features to represent semantic-level information. Grabner et al. proposed an online boosting algorithm to select features for tracking. However, these trackers [28, 29] use one positive sample (i.e., the current tracker location) and a few negative samples when updating the classifier. As the appearance model is updated with noisy and potentially misaligned examples, this often leads to tracking drift. A semi-supervised learning approach can be used in which positive and negative samples are selected via an online classifier with structural constraints. Yang et al. [30] presented a discriminative appearance model based on superpixels, which is able to handle heavy occlusion and recover from drift. In Ref. [9], Hare et al. used an online structured output support vector machine (SVM) for robust tracking; it can mitigate the effect of wrongly labeled samples. Reference [31] introduced a fast tracking algorithm which exploits the circulant structure of the kernel matrix in SVM classifiers so that it can be efficiently computed by the fast Fourier transform algorithm.

3 Method

In this section, we describe our proposed tracking method in detail. The tracking problem is formulated as a detection task, and the pipeline of the proposed approach is shown in Fig. 1. We assume that the tracking target is manually annotated in the first frame. To model target appearance, we sample various background and foreground convolutional kernels to encode target and background structural information. When a new frame arrives, we first extract its convolutional feature map to estimate the target transformation. Secondly, we incorporate structured output learning and HOG descriptors to predict the target location and scale variation. Lastly, the tracking results are combined to jointly determine the target transformation and scale variation.

    3.1 Feature extraction by convolutional network

The convolutional network includes two separate layers. Firstly, a set of background and foreground convolutional kernels is generated from a bank of filters which sample the input frame using a sliding window. Secondly, to enhance the robustness of the convolutional feature representation, all feature maps are stacked together, and the final feature vector is determined by solving a sparse representation equation.

In the initial frame, a set of samples is warped to a canonical size of n × n in grayscale color space. Each sample is then pre-processed by subtracting the mean and performing L2 normalization, to remove local brightness differences and achieve contrast normalization, respectively. A sliding window strategy is employed to generate a bank of patches with receptive field size w × w. This results in a total of (n − w + 1)² image patches sampled from the initial frame.
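As an illustration, the sliding-window sampling with per-patch mean subtraction and L2 normalization can be sketched as follows. The NumPy implementation and concrete sizes are our own assumptions; only n, w, and the (n − w + 1)² patch count come from the text.

```python
import numpy as np

def extract_patches(sample, w):
    """Sample every w-by-w patch from an n-by-n grayscale image with a
    sliding window, applying zero-mean and L2 normalization to each
    patch, as described in the pre-processing step."""
    n = sample.shape[0]
    patches = []
    for i in range(n - w + 1):
        for j in range(n - w + 1):
            p = sample[i:i + w, j:j + w].astype(float).ravel()
            p -= p.mean()                 # remove local brightness differences
            norm = np.linalg.norm(p)
            if norm > 0:
                p /= norm                 # contrast normalization
            patches.append(p)
    return np.array(patches)              # (n - w + 1)^2 rows of length w*w

patches = extract_patches(np.random.rand(32, 32), 6)
print(patches.shape)  # (729, 36) since (32 - 6 + 1)^2 = 729
```

Each row is one normalized patch; the row count matches the (n − w + 1)² total stated above.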

Fig. 1 Architecture of the proposed tracking algorithm.

Following the pre-processing step, the k-means algorithm is used to select multiple convolutional filters from the filter bank of the initial frame as the representative target filters. The remaining object filter kernels are selected dynamically to capture target appearance variation, and are generated by clustering the target filter bank of the t-th frame; the two filter banks are thus obtained from the initial and t-th frames, respectively. This strategy of generating dynamic convolutional filters is the most significant difference from Ref. [1]. One obvious advantage is that our filters have the ability to adapt in the face of target occlusion, deformation, and illumination changes, which cause target appearance variation. In other words, the proposed convolutional filters are more robust in dynamic environments.
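A minimal sketch of the filter-selection step, assuming a plain Lloyd-style k-means over the normalized patch bank; the cluster centers serve as the d convolutional kernels. The patch bank, d, and iteration count here are placeholders, not values from the paper.

```python
import numpy as np

def kmeans_filters(patches, d, iters=20, seed=0):
    """Pick d representative filters from a patch bank via plain k-means:
    the returned cluster centers act as convolutional kernels."""
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), d, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest center (squared Euclidean distance)
        dist = ((patches[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = dist.argmin(1)
        # move each center to the mean of its assigned patches
        for k in range(d):
            members = patches[labels == k]
            if len(members):
                centers[k] = members.mean(0)
    return centers

bank = np.random.rand(200, 36)        # 200 flattened 6x6 patches (illustrative)
filters = kmeans_filters(bank, d=8)
print(filters.shape)  # (8, 36)
```

The same routine can be re-run on the t-th frame's patch bank to realize the dynamic filter update described above.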

Given a target candidate image and the i-th object filter kernel, convolving the candidate with the filters yields a simple cell feature map, where ⊗ denotes the convolution operator. The local filters encode stable object visual information from both the initial frame and the previous frame, even though the object may experience significant appearance change from the initial frame. Thereby, we can extract more discriminative features and effectively handle the drift problem.
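The simple cell feature map can be sketched as a valid 2D correlation of the candidate with one filter. This naive loop implementation is for clarity only (cross-correlation is used in place of flipped-kernel convolution, as is common for learned filters); a practical tracker would use an FFT-based implementation.

```python
import numpy as np

def simple_cell_map(image, kernel):
    """Valid 2D cross-correlation of an n-by-n candidate with a w-by-w
    filter, producing an (n-w+1)-by-(n-w+1) simple cell feature map."""
    n, w = image.shape[0], kernel.shape[0]
    out = np.empty((n - w + 1, n - w + 1))
    for i in range(n - w + 1):
        for j in range(n - w + 1):
            out[i, j] = (image[i:i + w, j:j + w] * kernel).sum()
    return out

img = np.random.rand(32, 32)
ker = np.random.rand(6, 6)
fmap = simple_cell_map(img, ker)
print(fmap.shape)  # (27, 27)
```

One such map is produced per filter; stacking d of them yields the 3D tensor used in the next subsection.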

The background context surrounding the object provides useful information to discriminate the target. The convolutional network selects m background samples surrounding the object, and then the same clustering algorithm is used to generate background filters from the i-th background sample. An average pooling strategy is applied to summarize each filter. Next, the background kernels that encode the visual information and geometric layout surrounding the object are generated.

To further enhance the strength of this representation and eliminate the influence of noise, a complex cell feature map, a 3D tensor C, is constructed, which stacks d different simple cell feature maps built with the filter set.

A sparse vector c is set to approximate vec(C) by minimizing the following objective function:

c* = argmin_c (1/2) ||c − vec(C)||_2^2 + λ ||c||_1    (4)

where vec(C) is a column vector concatenating all the elements of C, of length (n − w + 1)² d. The optimization problem in Eq. (4) has a closed-form soft-shrinkage solution, as explained in Ref. [32]:

c = sign(vec(C)) ⊙ max(|vec(C)| − median(vec(C)), 0)    (5)

where the median(vec(C)) threshold is robust to target appearance variation and noise interference.
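The soft-shrinkage step can be sketched as below. Here the median of the magnitudes of vec(C) is used as the threshold, which is one reasonable reading of the data-driven median threshold described in the text; by construction it zeroes at least half of the entries.

```python
import numpy as np

def soft_shrink(C):
    """Sparse approximation of the stacked complex cell tensor:
    element-wise soft-shrinkage with a median-based threshold, the
    closed-form minimizer of the l2 + l1 objective."""
    v = C.ravel()                              # vec(C)
    thresh = np.median(np.abs(v))              # robust, data-driven threshold
    return np.sign(v) * np.maximum(np.abs(v) - thresh, 0.0)

C = np.random.randn(27, 27, 8)                 # d = 8 stacked feature maps
c = soft_shrink(C)
print(c.shape)  # (5832,)
```

Entries whose magnitude falls below the median are suppressed, which is what makes the representation robust to noise interference.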

    3.2 HOG feature extraction

Hand-crafted features are morphological, shape-based, statistical, or textural representations that attempt to encode object appearance at a low level, and are the fundamental elements of object representation. In contrast to the high-level semantic features found by convolutional networks, which can be treated as a black-box object representation, hand-crafted features encode object appearance and effectively preserve structural information, which is very important in object tracking. In this paper, we use HOG (histograms of oriented gradients) features [33] as complementary features to jointly encode target appearance. HOG descriptors have several advantages. For example, their gradient structure is very characteristic of local shape, they are computed in local cells with an easily controllable degree of invariance, and they are invariant to local geometric and photometric transformations: translations or rotations make little difference as long as they are much smaller than the local spatial or orientation bin size. All of these advantages of HOG features play a key role in target location and scale estimation.
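To make the cell-pooling idea concrete, here is a highly simplified HOG sketch: per-cell orientation histograms weighted by gradient magnitude, without the block normalization of the full descriptor [33]. Cell size and bin count follow common defaults and are not taken from the paper.

```python
import numpy as np

def hog_cells(img, cell=8, bins=9):
    """Minimal HOG sketch: histogram of unsigned gradient orientations per
    cell, weighted by gradient magnitude. Pooling gradients over whole
    cells is what makes HOG tolerant to small translations/rotations."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180          # unsigned orientation
    H, W = img.shape
    hist = np.zeros((H // cell, W // cell, bins))
    bin_idx = np.minimum((ang / (180 / bins)).astype(int), bins - 1)
    for i in range(H // cell * cell):
        for j in range(W // cell * cell):
            hist[i // cell, j // cell, bin_idx[i, j]] += mag[i, j]
    return hist

feat = hog_cells(np.random.rand(32, 32))
print(feat.shape)  # (4, 4, 9)
```

A pixel may move anywhere within its 8 × 8 cell without changing which histogram it votes into, which is the shift tolerance the paragraph above relies on.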

    3.3 Structured output learning

Traditional tracking-by-detection approaches employ a classifier trained online to distinguish the target object from its surrounding background. In the tracking process, the classifier is used to estimate the object's transformation by searching for the maximum classification score amongst a set of target candidates around the target's location in the previous frame, typically using a sliding window or another motion model to generate the candidates. Given the estimated target location, traditional tracking methods generate a set of binary-labelled training samples to update the classifier online. This tracking framework raises a number of issues. Firstly, it is not clear how to label the training samples in a principled manner. One popular way is to utilize predefined rules, such as the distance between a sample and the estimated target candidate, to determine whether a sample should be labelled as positive or negative. Secondly, the goal of a classifier is to predict a binary label instead of a structured output; however, the objective for a tracker is to estimate the object's transformation accurately. In Ref. [9], Hare et al. formulated the tracking problem as structured output prediction to bridge the gap between binary classification and accurate target transformation estimation.

When a new frame arrives, the ultimate goal for a tracker is to estimate the target position (a 2D rectangle) in the current frame. To capture target appearance variation, the classifier is updated online based on the newly estimated target position and the corresponding samples. The classifier is trained on example pairs (x, z), where z = ±1 is a binary label, and makes its prediction according to z = sign(h(x)), where h: X → R is the classification confidence function which maps from feature space X to a real confidence value. Let p_{t−1} denote the estimated bounding box at time t − 1. The objective is to estimate a transformation y_t ∈ Y, where Y is the search space, so that the new position of the object is approximated by the composition p_t = p_{t−1} ∘ y_t. Mathematically, the estimation process is converted to searching for the position change relative to the previous frame by solving:

y_t = argmax_{y ∈ Y} h(x_t^{p_{t−1} ∘ y})

To overcome the above two issues arising from traditional classifiers, we utilize the structured output SVM (SO-SVM) framework to estimate object location changes. The output space is thus the space of all transformations Y instead of binary confidence labels. Thus we introduce an SO-SVM based discriminant function F, and predict the transformation by:

y_t = argmax_{y ∈ Y} F(x_t^{p_{t−1}}, y)

The SO-SVM performs a maximization step in order to predict the object transformation, while the discriminant function F includes the label y explicitly, meaning it can be incorporated into the learning algorithm. The model update procedure is performed on the labelled example pair (x_t, y_t).

Function F measures the compatibility between (x, y) pairs, and gives high scores to those which are well matched. We restrict F to be of the form F(x, y) = ⟨w, φ(x, y)⟩, where φ(x, y) is a joint kernel map. The parameters can be learned in a large-margin framework from a set of example pairs {(x_1, y_1), ..., (x_n, y_n)} by minimizing a convex objective function:

min_{w, ξ} (1/2) ||w||² + C Σ_i ξ_i
s.t. ξ_i ≥ 0, ∀i
     ⟨w, δφ_i(y)⟩ ≥ Δ(y_i, y) − ξ_i, ∀i, ∀y ≠ y_i    (8)

where δφ_i(y) = φ(x_i, y_i) − φ(x_i, y), and Δ(y_i, y) is a loss function. The value of Δ(y_i, y) decreases towards 0 as y and y_i become more similar. The optimization aims to ensure the value of F(x_i, y_i) is greater than F(x_i, y) for any y ≠ y_i, by a margin which depends on the loss function Δ. The loss function plays an important role in our approach, as it allows us to address the issue raised previously of all samples being treated equally.
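One common instantiation of Δ for bounding-box outputs, used by Struck [9], is one minus the box overlap (intersection-over-union); a sketch:

```python
def box_overlap_loss(a, b):
    """Structured loss Delta(a, b) = 1 - IoU(a, b) for boxes (x, y, w, h):
    0 when the boxes coincide, growing towards 1 as they diverge, so
    near-misses are penalized less than gross mislocalizations."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))   # intersection width
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))   # intersection height
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return 1.0 - inter / union

print(box_overlap_loss((0, 0, 10, 10), (0, 0, 10, 10)))  # 0.0
print(box_overlap_loss((0, 0, 10, 10), (5, 0, 10, 10)))  # 0.6666666666666667
```

This graded loss is exactly what lets the margin in Eq. (8) treat a slightly-shifted box differently from a completely wrong one.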

Using standard Lagrangian duality techniques, Eq. (8) can be converted into an equivalent dual form,

which leads to a much simpler expression for the dual problem (Eq. (11)) and the corresponding discriminant function (Eq. (12)).

    3.4 Tracking algorithm

The proposed tracking algorithm is formulated within a particle filter framework. Given the observation set z_{1:t}, the goal is to determine the maximum a posteriori estimate of the state s_t using Bayes' theorem:

p(s_t | z_{1:t}) ∝ p(z_t | s_t) ∫ p(s_t | s_{t−1}) p(s_{t−1} | z_{1:t−1}) d s_{t−1}

where s_t denotes the target state, comprising the translation (x_t, y_t) and scale, and p(s_t | s_{t−1}) and p(z_t | s_t) are the motion model, which predicts the state s_t based on the previous state s_{t−1}, and the observation likelihood, respectively.
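A particle filter step under these two models can be sketched as a Gaussian random-walk proposal followed by likelihood-weighted resampling. The state layout (x, y, scale) matches the text; the noise scales and particle count are illustrative assumptions.

```python
import numpy as np

def propagate(particles, sigma=(4.0, 4.0, 0.01)):
    """Motion model p(s_t | s_{t-1}): a Gaussian random walk over
    translation (x, y) and scale; sigma values are illustrative."""
    rng = np.random.default_rng(0)
    return particles + rng.normal(0.0, sigma, particles.shape)

def resample(particles, weights):
    """Importance resampling: redraw particles in proportion to the
    observation likelihood p(z_t | s_t) evaluated per particle."""
    rng = np.random.default_rng(1)
    w = np.asarray(weights, float)
    idx = rng.choice(len(particles), len(particles), p=w / w.sum())
    return particles[idx]

parts = np.tile([100.0, 80.0, 1.0], (50, 1))   # 50 particles: x, y, scale
parts = propagate(parts)
parts = resample(parts, np.ones(50))           # uniform weights as a stand-in
print(parts.shape)  # (50, 3)
```

In the full tracker, the weights would come from scoring each particle's image patch with the learned appearance model rather than the uniform stand-in used here.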

The sparse feature vector c in Eq. (5) is used as the object feature template. It is updated incrementally to accommodate appearance changes over time for robust visual tracking. We use temporal low-pass filtering to update the tracking model:

c_t = (1 − ρ) c_{t−1} + ρ ĉ_{t−1}

where ρ is a learning parameter, c_t is the target template in the t-th frame, and ĉ_{t−1} is the sparse representation of the tracked object in frame t − 1. Note that a significant innovation compared with the strategy in Ref. [1] is that the convolutional filters for extracting the object template are updated based on the newly tracked target, with the updated target filter bank given by cluster_d(p_t),


where cluster_d(·) denotes a clustering operation with d classes, and p_t are the image patches generated by a sliding window within the tracked object region. One advantage of this operation is that we both preserve the original target appearance and capture new object variation, preventing target drift.
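The temporal low-pass template update described above is a one-line blend; a sketch with an illustrative learning rate:

```python
import numpy as np

def update_template(c_prev, c_obs, rho=0.05):
    """Temporal low-pass template update,
        c_t = (1 - rho) * c_{t-1} + rho * c_obs,
    blending the old template with the newly tracked object's sparse
    representation; rho = 0.05 is an illustrative learning rate, not a
    value stated in the paper."""
    return (1.0 - rho) * c_prev + rho * c_obs

template = np.ones(4)                       # toy previous template c_{t-1}
observed = np.zeros(4)                      # toy new sparse representation
template = update_template(template, observed, rho=0.1)
print(template)  # [0.9 0.9 0.9 0.9]
```

A small ρ makes the template change slowly, which damps out transient occlusions while still following gradual appearance change.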

    4 Experiments

We have evaluated our proposed tracking algorithm on a public dataset [34] which includes 50 video sequences categorized with 11 attributes based on different challenging factors: illumination variation (IV), out-of-plane rotation (OPR), scale variation (SV), occlusion (OCC), deformation (DEF), motion blur (MB), fast motion (FM), in-plane rotation (IPR), out-of-view (OV), background clutter (BC), and low resolution (LR). We compared the proposed tracking algorithm on OTB-50 against other methods including SCM [35], Struck [9], TLD [20], MIL [4], and CT [19]. In addition, we also compared our method with a state-of-the-art convolutional network based tracker (CNT) [1]. For quantitative evaluation, we used success and precision plots under the one-pass evaluation (OPE) protocol. All 50 videos were processed using the same parameter values, without modification.

Fig. 2 One-pass evaluation. Top: average precision plot. Bottom: success rate plot.

The results in Fig. 2 compare our tracking framework with CNT, Struck, TLD, MIL, SCM, and CT. It is clear that the combination of convolutional features and HOG features plays an important role in robust object tracking: for both success rate and precision on the OTB-50 dataset, our method achieves the maximum area under the curve. Unlike the CNT tracker, our tracking method updates the convolutional filters during the tracking process to capture target appearance variation, taking into account both the original target appearance and any target variation. In addition, the combination of HOG features and a structured output SVM improves the success and precision rates of our tracker by 13.4% and 12.2% respectively. In turn, by adding convolutional features, our tracker improves on Struck by 7.6% and 2.0% in terms of overall success rate and precision rate respectively.

To analyze the strengths and weaknesses of the proposed algorithm, we further evaluated the trackers on videos grouped by the 11 attributes. Figure 3 shows success rate plots for videos with different attributes, while Fig. 4 shows the corresponding precision plots. In the success rate evaluation, our tracking algorithm ranks first in 8 of the 11 attributes. For video sequences with occlusion, deformation, and fast motion, our tracking method ranks second, with the SCM and Struck trackers achieving the best performance; they employ useful background information to train discriminative classifiers. In the precision plots in Fig. 4, our tracking algorithm ranks first in 6 of the 11 attributes, namely scale variation, out-of-plane rotation, in-plane rotation, illumination variation, motion blur, and background clutter, while our tracker ranks second for the other 5 attributes.

Figures 5 and 6 show some tracking results on challenging image sequences for the 7 trackers. The basketball, deer, and soccer video sequences contain illumination change, pose variation, and fast motion. In these 3 sequences, the CNT tracker fails around frames 134, 6, and 79, respectively. At basketball frame 531, all other trackers (CT, MIL, SCM, Struck, TLD) lose the target. The coke and freeman4 sequences contain significant out-of-plane rotation, occlusion, and pose variation. Tracking results on the freeman4 sequence show that most trackers drift away from the target when it is heavily occluded. These tracking results demonstrate the effectiveness and robustness of the proposed feature representation (a combination of HOG and convolutional features) and structured output learning. The proposed tracking method can cope with target appearance variation in the tracking process by updating the object kernels over time. To make the tracker robust to target scale variation, we employ a combination of HOG descriptors and the SO-SVM to capture mid-level object cues. Nevertheless, the time consumed is only 1.2 times that of the CNT tracker, and our tracker runs at 4.1 fps.

Fig. 3 Tracker success rates for videos with different attributes, annotated with the area under the curve. The number in each title indicates the number of video sequences with a given attribute.

Fig. 4 Tracker precision for videos with different attributes, annotated with the area under the curve. The number in each title indicates the number of video sequences with a given attribute.

Fig. 5 Qualitative results using the proposed method on various challenging sequences (basketball, deer, liquor, soccer) having illumination variation. Frame number is shown at the top left of each frame in green.

Fig. 6 Qualitative results using the proposed method on various challenging sequences (coke, doll, freeman4) having out-of-plane rotation. Frame number is shown at the top left of each frame in green.

    5 Conclusions

In this paper, we have proposed a novel method to model target appearance with background and foreground convolutional filters for online tracking. To further improve tracking performance, we exploit the combination of hand-crafted features and structured output learning within a particle filter framework to jointly estimate target transformation and scale variation. Experimental results show that the proposed tracking method achieves excellent results in terms of both success rate and precision when compared to several state-of-the-art methods on public datasets. In the future, we hope to further exploit the convolutional feature representation at the superpixel level and use sparse representation to encode target appearance.

    Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 61403342, 61273286, U1509207, 61325019), and the Hubei Key Laboratory of Intelligent Vision Based Monitoring for Hydroelectric Engineering (No. 2014KLA09).

References

[1] Zhang, K.; Liu, Q.; Wu, Y.; Yang, M.-H. Robust visual tracking via convolutional networks without training. IEEE Transactions on Image Processing Vol. 25, No. 4, 1779–1792, 2016.

[2] Henriques, J. F.; Caseiro, R.; Martins, P.; Batista, J. High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 37, No. 3, 583–596, 2015.

[3] Smeulders, A. W. M.; Chu, D. M.; Cucchiara, R.; Calderara, S.; Dehghan, A.; Shah, M. Visual tracking: An experimental survey. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 36, No. 7, 1442–1468, 2014.

[4] Babenko, B.; Yang, M.-H.; Belongie, S. Robust object tracking with online multiple instance learning. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 33, No. 8, 1619–1632, 2011.

[5] Kalal, Z.; Mikolajczyk, K.; Matas, J. Tracking-learning-detection. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 34, No. 7, 1409–1422, 2012.

[6] Grabner, H.; Grabner, M.; Bischof, H. Real-time tracking via on-line boosting. In: Proceedings of the British Machine Vision Conference, Vol. 1, 47–56, 2006.

[7] Grabner, H.; Leistner, C.; Bischof, H. Semi-supervised on-line boosting for robust tracking. In: Computer Vision – ECCV 2008. Forsyth, D.; Torr, P.; Zisserman, A. Eds. Springer Berlin Heidelberg, 234–247, 2008.

[8] Ma, Y.; Chen, W.; Ma, X.; Xu, J.; Huang, X.; Maciejewski, R.; Tung, A. K. H. EasySVM: A visual analysis approach for open-box support vector machines. Computational Visual Media Vol. 3, No. 2, 161–175, 2017.

[9] Hare, S.; Golodetz, S.; Saffari, A.; Vineet, V.; Cheng, M.-M.; Hicks, S. L.; Torr, P. H. Struck: Structured output tracking with kernels. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 38, No. 10, 2096–2109, 2016.

[10] Ren, X.; Malik, J. Tracking as repeated figure/ground segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1–8, 2007.

[11] Zhou, X.; Li, Y.; He, B.; Bai, T. GM-PHD-based multi-target visual tracking using entropy distribution and game theory. IEEE Transactions on Industrial Informatics Vol. 10, No. 2, 1064–1076, 2014.

[12] Zhou, X.; Yu, H.; Liu, H.; Li, Y. Tracking multiple video targets with an improved GM-PHD tracker. Sensors Vol. 15, No. 12, 30240–30260, 2015.

[13] Mei, X.; Ling, H. Robust visual tracking using l1 minimization. In: Proceedings of the IEEE 12th International Conference on Computer Vision, 1436–1443, 2009.

[14] Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

[15] Han, B.; Comaniciu, D.; Zhu, Y.; Davis, L. S. Sequential kernel density approximation and its application to real-time visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 30, No. 7, 1186–1197, 2008.

[16] Jepson, A. D.; Fleet, D. J.; El-Maraghi, T. F. Robust online appearance models for visual tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 25, No. 10, 1296–1311, 2003.

[17] Ross, D. A.; Lim, J.; Lin, R.-S.; Yang, M.-H. Incremental learning for robust visual tracking. International Journal of Computer Vision Vol. 77, Nos. 1–3, 125–141, 2008.

[18] Zhong, W.; Lu, H.; Yang, M.-H. Robust object tracking via sparse collaborative appearance model. IEEE Transactions on Image Processing Vol. 23, No. 5, 2356–2368, 2014.

[19] Zhang, K.; Zhang, L.; Yang, M.-H. Real-time compressive tracking. In: Computer Vision – ECCV 2012. Fitzgibbon, A.; Lazebnik, S.; Perona, P.; Sato, Y.; Schmid, C. Eds. Springer Berlin Heidelberg, 864–877, 2012.

[20] Kalal, Z.; Matas, J.; Mikolajczyk, K. P-N learning: Bootstrapping binary classifiers by structural constraints. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 49–56, 2010.

[21] Gao, J.; Ling, H.; Hu, W.; Xing, J. Transfer learning based visual tracking with Gaussian processes regression. In: Computer Vision – ECCV 2014. Fleet, D.; Pajdla, T.; Schiele, B.; Tuytelaars, T. Eds. Springer Cham, 188–203, 2014.

[22] Zhu, Z.; Liang, D.; Zhang, S.; Huang, X.; Li, B.; Hu, S. Traffic-sign detection and classification in the wild. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2110–2118, 2016.

[23] Li, H.; Li, Y.; Porikli, F. Robust online visual tracking with a single convolutional neural network. In: Computer Vision – ACCV 2014. Cremers, D.; Reid, I.; Saito, H.; Yang, M.-H. Eds. Springer Cham, 194–209, 2014.

[24] Zhou, X.; Xie, L.; Zhang, P.; Zhang, Y. An ensemble of deep neural networks for object tracking. In: Proceedings of the IEEE International Conference on Image Processing, 843–847, 2014.

[25] Fan, J.; Xu, W.; Wu, Y.; Gong, Y. Human tracking using convolutional neural networks. IEEE Transactions on Neural Networks Vol. 21, No. 10, 1610–1623, 2010.

[26] Wang, N.; Yeung, D.-Y. Learning a deep compact image representation for visual tracking. In: Proceedings of the Advances in Neural Information Processing Systems, 809–817, 2013.

[27] Wang, L.; Liu, T.; Wang, G.; Chan, K. L.; Yang, Q. Video tracking using learned hierarchical features. IEEE Transactions on Image Processing Vol. 24, No. 4, 1424–1435, 2015.

[28] Avidan, S. Support vector tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 26, No. 8, 1064–1072, 2004.

[29] Collins, R. T.; Liu, Y.; Leordeanu, M. Online selection of discriminative tracking features. IEEE Transactions on Pattern Analysis and Machine Intelligence Vol. 27, No. 10, 1631–1643, 2005.

[30] Yang, F.; Lu, H.; Yang, M.-H. Robust superpixel tracking. IEEE Transactions on Image Processing Vol. 23, No. 4, 1639–1651, 2014.

[31] Henriques, J. F.; Caseiro, R.; Martins, P.; Batista, J. Exploiting the circulant structure of tracking-by-detection with kernels. In: Computer Vision – ECCV 2012. Fitzgibbon, A.; Lazebnik, S.; Perona, P.; Sato, Y.; Schmid, C. Eds. Springer Berlin Heidelberg, 702–715, 2012.

[32] Elad, M.; Figueiredo, M. A. T.; Ma, Y. On the role of sparse and redundant representations in image processing. Proceedings of the IEEE Vol. 98, No. 6, 972–982, 2010.

[33] Dalal, N.; Triggs, B. Histograms of oriented gradients for human detection. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Vol. 1, 886–893, 2005.

[34] Wu, Y.; Lim, J.; Yang, M.-H. Online object tracking: A benchmark. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2411–2418, 2013.

[35] Zhong, W.; Lu, H.; Yang, M.-H. Robust object tracking via sparsity-based collaborative model. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1838–1845, 2012.

1 Zhejiang University of Technology, Hangzhou 310023, China.

2 Tianjin University of Technology, Tianjin 300384, China. E-mail: csy@tjut.edu.cn

Manuscript received: 2017-02-27; accepted: 2017-04-27

Junwei Li, Ph.D., is with the College of Computer Science and Technology, Zhejiang University of Technology. He is a member of the China Computer Federation. His main research interests include object tracking, machine learning, convolutional neural networks, and object detection.

Xiaolong Zhou, Ph.D. and associate professor, is with the College of Computer Science and Technology, Zhejiang University of Technology. He is a member of the China Computer Federation, IEEE, and ACM. His main research interests are in visual tracking, gaze estimation, and pattern recognition.

Sixian Chan, Ph.D., is with the College of Computer Science and Technology, Zhejiang University of Technology. His main research interests include visual tracking, image processing, pattern recognition, robotics, and image understanding.

Shengyong Chen, Ph.D., professor. He is an IET Fellow, an IEEE senior member, and a senior member of the China Computer Federation. His main research interests include computer vision, pattern recognition, and robotics.

    Open Access  The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

    Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
