

    Adaptive Human Tracking Across Non-overlapping Cameras in Depression Angles

Shao Quan (邵荃)1*, Liang Binbin (梁斌斌)1, Zhu Yan (朱燕)1, Zhang Haijiao (張海蛟)1, Chen Tao (陳濤)2

1. College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211100, P. R. China;

2. Institute of Public Safety Research, Tsinghua University, Beijing 100084, P. R. China

To track humans across non-overlapping cameras in depression angles for applications such as multi-airplane visual human tracking and urban multi-camera surveillance, an adaptive human tracking method is proposed, focusing on both feature representation and the human tracking mechanism. The feature representation describes an individual by using both improved local appearance descriptors and statistical geometric parameters. The improved feature descriptors can be extracted quickly and make the human feature more discriminative. The adaptive human tracking mechanism is based on the feature representation and arranges the human image blobs in the field of view into a matrix. Primary appearance models are created to include the maximum inter-camera appearance information captured from different visual angles. The persons appearing in a camera are first filtered by the statistical geometric parameters. Then the one among the filtered persons who has the maximum matching scale with the primary models is determined to be the target person. Subsequently, the image blobs of the target person are used to update and generate new primary appearance models for the next camera, thus being robust to visual angle changes. Experimental results prove the excellence of the feature representation and show the good generalization capability of the tracking mechanism as well as its robustness to condition variables.

adaptive human tracking; appearance features; geometric features; non-overlapping camera; depression angle

    0 Introduction

Multi-camera visual surveillance systems are now widely deployed in many areas for applications such as continuously tracking objects of interest and early warning of abnormal events. In particular, in cases such as multi-airplane visual human tracking, aerial photography, and urban multi-camera surveillance from tall buildings, all cameras of the surveillance system are installed at high places with wide visual range and large visual depression angles. A fundamental task for these multi-camera surveillance systems with visual depression angles is to associate people across different camera views at different locations and times. This is known as the human tracking problem in visual depression angles.

Human tracking in visual depression angles requires visually matching a target person across cameras distributed over disjoint scenes separated in distance and time. In this case, classical human tracking algorithms fail since the cameras do not overlap. Hence, non-overlapping camera human tracking in this paper refers to algorithms that track humans across non-overlapping camera views. These techniques build upon single camera human tracking techniques, for a person needs to be tracked within one camera field of view (FOV) before being tracked in that of another. The problem of non-overlapping camera human tracking therefore becomes how to match an individual from one independent surveillance area to another. Among its many challenges, feature representation and the tracking mechanism are the most difficult. The tracked people must be differentiated from numerous visually similar but different people in those views, which requires a sufficiently discriminative feature representation to distinguish the target person from similar yet different candidates. Moreover, different views may be taken from various shooting angles, with dissimilar backgrounds, diverse illumination conditions, or other view variables. A robust tracking mechanism is thus required that can resist inter-camera and intra-camera shooting angle changes as well as illumination changes.

Designing a suitable feature representation for human tracking is a critical and challenging problem. Ideally, the features should be robust to illumination changes, visual angle changes, foreground errors, occlusion, and low image resolution. Contemporary approaches typically exploit low-level features such as appearance[1-3], spatial structure[4-5], or their combinations[6-12], because these features can be measured relatively easily and reliably. Moreover, they provide a reasonable level of inter-person discrimination and can thus distinguish different people clearly.

Doretto et al.[1] and Chae et al.[2] used appearance to re-identify people and proved that appearance features perform well in identifying individuals. However, they could not deal with illumination change very well. Generally, under a single visual angle, individuals can be discriminated based on their appearance. However, appearance features alter with the change of visual angle, which occurs frequently across non-overlapping cameras; in this case, appearance alone is quite limited in distinguishing individuals. Related research compensated for this limitation by combining geometric features with appearance[6-10]. Madden et al.[6] proposed a framework based on robust shape and appearance features. However, their proposal solely employed height as the shape feature, without considering that height is limited in distinguishing human beings due to the close resemblance of human heights. A good way to amend this limitation is to combine gait features with height to compose a shape feature. Hori et al.[8] proposed a method that tracked a walking human using gait features to compute a robust motion signature. Despite the good performance of gait features in that method, a high accuracy rate and low computational cost were still far from being achieved[9]. Moreover, gait features are difficult to adapt to visual angle changes. In short, further exploration is needed to obtain a better feature representation.

Once a suitable feature representation has been obtained, previous studies typically used nearest-neighbor[4] or model-based matching algorithms such as support-vector ranking[11] for human tracking. In each case, a distance metric must be chosen to measure the similarity between two samples. In a single camera, both the model-based matching approaches[12-13] and the nearest-neighbor distance metrics[14-15] can be optimized to maximize tracking performance. However, despite their excellent performance in a single camera, they are still limited in coping with the intractable challenges of non-overlapping cameras. The first challenge is to overcome inter-camera and intra-camera variations. These variations include changes of appearance features, spatial structure, illumination conditions and other parameters, which make non-overlapping camera human tracking hard to perform well in various scenes from different visual angles. Furthermore, such variations between non-overlapping cameras are in general complex and multimodal, and therefore hard for an algorithm to learn. The second challenge is how to achieve a good generalization capability. Previously, once trained for a specific pair of cameras, most models could not generalize well to other cameras from different visual angles[16] because there was no connection between them. It is therefore necessary to establish a tracking mechanism with good generalization, so that models can be established once and then adaptively applied to different camera configurations.

To solve the problems mentioned above, improved appearance and geometric features are explored, and an adaptive tracking mechanism is designed. The improved appearance and geometric features make up a discriminative and robust human feature. The appearance feature includes color and texture information. Color is analyzed in the hue-saturation-value (HSV) space. The HSV space is evenly partitioned and generates fewer color histogram bins than in previous work, thus cutting the computational cost. Texture histograms are generated through an improved direction coded local binary pattern descriptor and describe the local texture distribution better. As for the geometric feature, the mean value and standard deviation of the height estimates of multi-shot blobs are calculated. Superior to single-shot geometric analysis, these two statistical parameters are computed from multi-shot blobs and can suppress the disturbance from noise blobs. Besides, these two geometric parameters easily reflect height and gait movement simultaneously. To our knowledge, extracting such geometric features in this statistical way is original to this paper.

For the human tracking process, an adaptive tracking mechanism is designed. It aims to automatically match and track individuals based on both retrospective and on-the-fly information. The image blobs are divided into two groups, namely the gallery group gathered from source images and the probe group gathered from target images. The gallery group trains the computation parameters of the target person and then tests the image blobs in the probe group. On one hand, in the gallery group the appearance feature from each visual angle is described by a primary appearance model; this paper creates primary appearance models to represent the unique appearance feature seen from each unique visual angle. On the other hand, the probe blobs are divided into groups based on state-of-the-art single camera human tracking techniques. Each group corresponds to a unique person and includes his/her appearance information. The geometric features of the groups are first compared with those of the gallery group. The geometrically similar groups are subsequently described and arrayed in an appearance model matrix, and then matched with the gallery primary appearance models. The group that has the maximum label scale with the gallery primary appearance models is determined to be the target person. After the person is targeted, his/her blobs are automatically collected into source blob sequences and used to update and obtain new gallery primary appearance models. The mechanism thus obtains a powerful generalization capability among different cameras. Unlike existing methods, its generalization performance improves over time.

    1 Motion Detection

Fig.1 shows the configuration of the proposed method. This paper employs ViBe to detect and segment moving objects[18]. After an object is segmented, an external bounding box is used to contain it. The object segmentation actually yields a foreground blob. Then the height to width (H/W) ratio of the foreground blob is computed. If the H/W ratio is between 5 and 10, the foreground blob is recognized as a human foreground blob; otherwise, it is deleted.
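As a minimal illustration of this gate, the sketch below filters foreground blobs by their aspect ratio, assuming a ViBe-style background subtractor has already produced a binary mask; the function name and signature are hypothetical, not the authors' code, while the 5—10 H/W band comes from the text.

```python
import cv2

def human_blobs(fg_mask, hw_min=5.0, hw_max=10.0):
    """Keep foreground blobs whose height/width ratio looks person-like."""
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)   # external bounding box of the blob
        if w > 0 and hw_min <= h / w <= hw_max:
            boxes.append((x, y, w, h))     # recognized as a human blob
    return boxes
```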

    2 Appearance Feature Extraction

    2.1 Partial body analysis

Instead of focusing on global appearance features, this paper analyzes appearance features from human body parts. The areas around the chest, thigh and foot are chosen as the three parts that carry the most critical information about an individual, so feature extraction becomes faster without losing important appearance information. In this paper, the chest corresponds to the 15%—40% region of the external bounding box, the thigh to 50%—70%, and the foot to 85%—100%. In terms of importance, the chest is weighted 60%, the thigh 25% and the foot 15%, as sketched below.
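The region bands and weights in this sketch come from the text; the helper function and its signature are illustrative, assuming a bounding box (x, y, w, h) with y increasing downwards.

```python
REGIONS = {            # (top fraction, bottom fraction, importance weight)
    "chest": (0.15, 0.40, 0.60),
    "thigh": (0.50, 0.70, 0.25),
    "foot":  (0.85, 1.00, 0.15),
}

def body_parts(image, box):
    """Slice the three body-part regions out of one human bounding box."""
    x, y, w, h = box
    parts = {}
    for name, (top, bot, weight) in REGIONS.items():
        region = image[y + int(top * h): y + int(bot * h), x: x + w]
        parts[name] = (region, weight)
    return parts
```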

Fig.1 Process flow diagram for the proposed method

    2.2 Appearance model

The appearance model contains color and texture information. Color expresses chromatic information, while texture gives the spatial distribution of pixels. Combining color and texture enhances the ability to distinguish different people.

    2.2.1 Color analysis

Color often varies with illumination change. To cope with this, the HSV color space is employed, in which the effect of illumination change can be suppressed by decreasing the weight of the Value characteristic. An initial red-green-blue (RGB) image is first converted into HSV space[19]. After the conversion, H takes values from 0 to 360, S from 0 to 1, and V from 0 to 255.

In most previous appearance models, each characteristic of the color space generated a histogram, producing massive numbers of bins and making computation time-consuming. To save time, this paper partitions the HSV color space into rough segments. The Hue characteristic is partitioned evenly into Q_H segments, Saturation into Q_S segments and Value into Q_V segments. The values of the H, S, V components are then converted into segment levels H_C, S_C, V_C, respectively.

Then H_C, S_C and V_C are integrated into a single color descriptor γ_HSV as a weighted combination

$$\gamma_{HSV}=\mathrm{round}\!\left(\gamma_H H_C+\gamma_S S_C+\gamma_V V_C\right)$$

where γ_H, γ_S, γ_V denote the weight coefficients of H_C, S_C, V_C, and round(·) represents a rounding function. The weights γ_H, γ_S, γ_V are computed from the segment numbers Q_H, Q_S and Q_V.

Since Q_S, Q_V and Q_H are all larger than 1, the weight of Value, γ_V, is always less than γ_H and γ_S. This suppresses the impact of brightness and strengthens robustness to illumination change. To avoid losing too much Saturation information, the segment numbers are sorted in the descending order Q_H > Q_S > Q_V. Here, the values of Q_H, Q_S and Q_V are 250, 50 and 5, respectively.
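As one plausible realization of this weighting, the sketch below assumes γ_H = Q_S·Q_V, γ_S = Q_V and γ_V = 1, a mixed-radix scheme that satisfies the ordering γ_V < γ_S, γ_H described above; the paper's exact weight formula may differ.

```python
import numpy as np

Q_H, Q_S, Q_V = 250, 50, 5          # segment counts from the text

def hsv_descriptor(h, s, v):
    """h in [0, 360), s in [0, 1], v in [0, 255] -> one integer per pixel."""
    h = np.asarray(h, dtype=float)
    s = np.asarray(s, dtype=float)
    v = np.asarray(v, dtype=float)
    hc = np.minimum((h / 360.0 * Q_H).astype(int), Q_H - 1)   # segment level H_C
    sc = np.minimum((s * Q_S).astype(int), Q_S - 1)           # segment level S_C
    vc = np.minimum((v / 256.0 * Q_V).astype(int), Q_V - 1)   # segment level V_C
    # assumed weights: gamma_H = Q_S*Q_V, gamma_S = Q_V, gamma_V = 1
    return hc * (Q_S * Q_V) + sc * Q_V + vc                   # gamma_HSV index
```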

    2.2.2 Texture analysis

The local binary pattern (LBP) descriptor is one of the most widely used texture descriptors. Compared with the general LBP, the direction coded LBP (dLBP) descriptor presented in Ref.[20] considers the relations between the center pixel and its neighboring pixels, as well as the relations among border pixels along one direction, and thus better describes the texture in regions of interest. Different from Ref.[20], we convert the eight-bit dLBP binary code into a decimal descriptor, as given in Eq.(4),

where NS stands for the neighbor size with NS = 2NS′, v_p is the value of the p-th pixel regularly spaced on the circle, and v_c is the value of the center pixel.

The dLBP in Eq.(4) involves not only comparisons between border pixels and the center pixel but also comparisons among border pixels themselves along one direction, so it can rank the three pixel values along a direction. Therefore, it better discriminates a person's appearance.
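A hedged reading of this descriptor is sketched below for the eight-neighbor case: for each of the four directions through the center, one bit records whether the two opposite differences share a sign and one bit ranks their magnitudes, ordering the three pixels along that direction. The bit layout is an assumption based on the idea in Ref.[20], not the exact Eq.(4).

```python
def dlbp_code(neighbors, vc):
    """neighbors: 8 circle samples in order; vc: center value -> 8-bit integer."""
    code = 0
    ns = len(neighbors)                    # NS = 8, so NS' = NS/2 is the
    for p in range(ns // 2):               # offset to the opposite pixel
        d1 = neighbors[p] - vc
        d2 = neighbors[p + ns // 2] - vc   # diametrically opposite pixel
        code = (code << 1) | (1 if d1 * d2 >= 0 else 0)   # same-sign bit
        code = (code << 1) | (1 if abs(d1) > abs(d2) else 0)  # magnitude bit
    return code                            # decimal dLBP descriptor
```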

    2.2.3 Appearance modeling

A person's appearance feature is modeled by the pixels from the chest, thigh and foot regions. Each pixel has its color descriptor and dLBP_{P,R} value, and each region thus generates a color histogram and a dLBP texture histogram. The appearance model is constructed by concatenating the six histograms from left to right in the following order: the color histograms of chest, thigh and foot, then the texture histograms of chest, thigh and foot.

    3 Geometric Feature Extraction

    3.1 Height estimation

Before calculating the height estimate, the apparent height is measured first. It is computed as the length from the middle top to the middle bottom of the bounding box. This paper then calculates the height estimate in the same way as Ref.[21].

    3.2 Statistical geometric features

Statistical geometric features are obtained by computing statistical parameters of the height estimates. To suppress the impact of noisy height estimates, the heights of the blobs are sorted in descending order, and the tallest and shortest 5% are deleted. The remaining height estimates are used to compute the statistical geometric parameters, namely their mean and standard deviation.

Suppose there are N blobs in the training sample and the height estimate of blob i is denoted as h_i. After deleting the tallest and shortest 5% of blobs, the remaining blobs can be ranked in descending order as h_m, h_{m+1}, …, h_n, where n − m + 1 = 90%·N. The statistical geometric parameters are then calculated as

$$\mu_h=\frac{1}{n-m+1}\sum_{i=m}^{n}h_i,\qquad \sigma_h=\sqrt{\frac{1}{n-m+1}\sum_{i=m}^{n}\left(h_i-\mu_h\right)^2}$$

where μ_h measures the stature of a person and σ_h reflects the rhythmic up-and-down displacement of the upper body. These two parameters connect the height feature with the gait feature in a simple way.
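The trimming and the two statistics translate directly into code; the sketch below assumes the height estimates are already available as a list.

```python
import numpy as np

def geometric_params(heights):
    """Trim the tallest and shortest 5%, then return (mu_h, sigma_h)."""
    h = np.sort(np.asarray(heights, dtype=float))[::-1]   # descending order
    k = int(round(0.05 * len(h)))
    kept = h[k: len(h) - k]            # middle 90% of the height estimates
    return kept.mean(), kept.std()     # (mu_h, sigma_h)
```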

    3.3 Geometric similarity

Once the statistical geometric parameters have been obtained, the similarity between the geometric features of the source sequence (μ_s, σ_s) and the target sequence (μ_t, σ_t) is computed as

$$\mathrm{Sim}_{s,t}=\omega_\mu\,\mathrm{Sim}_\mu+\omega_\sigma\,\mathrm{Sim}_\sigma$$

where ω_μ = ω_σ = 0.5 represent the weights of the mean value and the standard deviation, respectively, and Sim_μ and Sim_σ denote the similarities of (μ_s, μ_t) and (σ_s, σ_t), respectively, with μ_0 a constant related to the real situation. If Sim_{s,t} is greater than a threshold T_geo, the source sequence and the target sequence are considered geometrically similar.
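A sketch of this geometric gate follows. The exponential kernels and the scale constants mu0 and sig0 are assumptions for illustration (the paper defines Sim_μ via the constant μ_0), while the 0.5/0.5 weights and the 75% threshold quoted in the experiments of Section 5 come from the text.

```python
import math

def geometric_similarity(mu_s, sig_s, mu_t, sig_t, mu0=10.0, sig0=2.0):
    """Weighted similarity of (mean, std) height parameters; kernels assumed."""
    sim_mu = math.exp(-abs(mu_s - mu_t) / mu0)
    sim_sig = math.exp(-abs(sig_s - sig_t) / sig0)
    return 0.5 * sim_mu + 0.5 * sim_sig     # omega_mu = omega_sigma = 0.5

def geometrically_eligible(sim, t_geo=0.75):  # 75% threshold used in Sec.5
    return sim >= t_geo
```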

    4 Human Tracking

From different visual angles, the same person may appear strikingly different, especially when the colors and textures of the front, side and back clothing are totally diverse. But as long as a person appears under the same visual angle, his or her appearance models will be pair-camera correlated. This correlation helps identify the same person from frame to frame and track him/her in different views. Accordingly, we collect the maximum number of appearance models from different visual angles and compare the models in the incoming camera view with the collected ones, so as to immediately target the person as soon as he/she appears under a collected visual angle.

When a person walks into a specified visual angle, he/she produces a blob sequence, which mirrors the appearance feature in that specific visual angle. Ideally, each sequence corresponds to a visual angle, and the number of sequences equals the number of visual angles. However, noise blob sequences are unavoidable and lead to noise appearance models, so primary appearance models are selected by an adaptive mechanism to exclude the noise appearance models.

    4.1 Primary appearance models

For a person, each of his/her blobs has an appearance model, and all the appearance models are constructed by the method described in Section 2.2.3. The appearance models are denoted as m_i (i = 1, 2, …, n) and are divided into classes. Those classes with an over-threshold model population are selected as primary appearance model classes.

    4.1.1 Appearance model classification

Appearance model classification is based on the pairwise distance of appearance models. The appearance model m_i has six histograms, including three color histograms and three dLBP histograms (j = 1, 2, 3 corresponding to chest, thigh and foot). The distance between m_i and m_i′ is computed from the correlations of the six histograms.

The Bhattacharyya distance[10] is used to measure the correlation between histograms H_1 and H_2. Then Dis(m_i, m_i′) is compared with a set threshold T_dis. If Dis(m_i, m_i′) is less than T_dis, m_i and m_i′ are collected into the same class; otherwise, they are classified into different classes.
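The classification step can be sketched as follows, assuming each appearance model is a list of six numpy histograms. Taking Dis(m_i, m_i′) as the mean of the six Bhattacharyya distances is an assumption about how the six correlations are combined.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Bhattacharyya distance between two histograms (normalized internally)."""
    p, q = h1 / h1.sum(), h2 / h2.sum()
    return np.sqrt(max(0.0, 1.0 - np.sqrt(p * q).sum()))

def model_distance(m1, m2):
    """Mean Bhattacharyya distance over the six histograms of two models."""
    return float(np.mean([bhattacharyya(a, b) for a, b in zip(m1, m2)]))

def classify(models, t_dis):
    """Greedy clustering: join the first class whose exemplar is within T_dis."""
    classes = []
    for m in models:
        for cls in classes:
            if model_distance(m, cls[0]) < t_dis:
                cls.append(m)
                break
        else:
            classes.append([m])    # no close class: start a new one
    return classes
```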

    4.1.2 Primary appearance model class

All the appearance models are classified into N classes, denoted as MC_i. For MC_i, the number of its inner models is S_i. Considering that noise blobs will not exceed 10% of the total blobs in most cases, a primary appearance model class is one whose inner models number more than 10% of the total models.

    4.1.3 Primary appearance model

Each primary appearance model class has a primary appearance model, which represents the appearance feature of the model class. The primary appearance model comes from arithmetic operations over all the inner models. As each appearance model has six histograms (three color histograms and three texture histograms), the primary appearance model is co-determined by the arithmetic operations on these six histograms: each of the six histograms yields an arithmetic mean histogram, described quantitatively in Eq.(11).
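Continuing the sketch above, primary classes and their mean-histogram models can be computed as below; the 10% population threshold follows Section 4.1.2, and the element-wise mean is the reading of Eq.(11) given in the text.

```python
import numpy as np

def primary_models(classes, total, min_frac=0.10):
    """Select classes holding > min_frac of all models; average their histograms."""
    primaries = []
    for cls in classes:
        if len(cls) > min_frac * total:
            # arithmetic mean of histogram j over all inner models of the class
            primaries.append([np.mean([m[j] for m in cls], axis=0)
                              for j in range(6)])
    return primaries
```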

    4.2 Person re-identification

Relying on state-of-the-art tracking techniques based on spatial-temporal relations[23], the blobs in a single camera can be classified into different groups, each corresponding to a unique person. According to the temporal relation, an individual's blobs can be arranged as a sequence in time order. This sequence includes all the appearance information of an individual under all visual angles as he/she passes through a camera. Accordingly, each camera has several blob sequences, and the number of sequences equals the number of individuals "seen" by the camera. Each blob sequence consists of blobs, and each blob has an appearance model. All the appearance models in camera C can be arrayed in a matrix M_C

$$M_C=\begin{bmatrix} m^C_{11} & m^C_{12} & \cdots & m^C_{1L}\\ m^C_{21} & m^C_{22} & \cdots & m^C_{2L}\\ \vdots & \vdots & & \vdots\\ m^C_{k1} & m^C_{k2} & \cdots & m^C_{kL} \end{bmatrix}$$

Row i lists the blob sequence of person i, composed of l_i blobs; m^C_{ij} stands for the appearance model of blob j in the blob sequence of person i. The column count L of M_C equals the length of the largest blob sequence, and shorter rows are padded with zeros.

The statistical geometric parameters of the target person are initially computed from the gallery blobs. A sequence in matrix M_C whose geometric parameters are similar to those of the target person is deemed geometrically eligible.

The geometrically eligible blob sequences are further tested against the primary appearance models, where each primary appearance model represents its class and is associated with the number of inner models of that class.

To determine whether an appearance model m^C_{ij} in matrix M_C can be gathered into a primary appearance model class, a sign function D_sign is designed, taking the value 1 when the distance between m^C_{ij} and the primary appearance model of a class is less than T_dis, and 0 otherwise.

The class label determination matrix reads how many class labels each blob has when matched with the primary appearance model classes M_CP. The accumulation of row i, denoted as LS_i and calculated in Eq.(15), defines the label scale of the blob sequence of person i when matched with the primary appearance models

$$LS_i=\sum_{j=1}^{l_i}\sum_{k} D_{\mathrm{sign}}\!\left(m^C_{ij},k\right)$$

where l_i is the length of the blob sequence of person i. Sometimes, if a blob has below-threshold distances with several different primary appearance models simultaneously, it can be classified into several classes and will therefore carry several different labels. To indicate whether a blob has at least one label, a sign function lb_sign is introduced; the number of labeled blobs in the sequence of person i is denoted LBN_i.

A blob sequence with a larger LBN means that more of its blobs are gathered into primary appearance model classes, but it does not mean the sequence is more likely to be the target sequence. In fact, although sequences containing more blobs tend to have larger LBNs, they are also likely to have more unlabeled blobs that do not belong to any primary appearance model class. In light of this, an appearance-based blob sequence sign function FS_sign is designed to determine whether a sequence is appearance eligible, where FS_sign(LBN_i) = 1 indicates that the blob sequence of person i is an appearance eligible blob sequence.

The appearance eligible blob sequence of person t in matrix M_C which has the maximum label scale is re-identified as the target sequence, and person t is re-identified as the target person, namely

$$t=\arg\max_{i:\,FS_{\mathrm{sign}}(LBN_i)=1} LS_i$$
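Pulling the pieces together, a sketch of the re-identification step is shown below, reusing model_distance from the classification sketch. It is a hedged reading of the sign functions above; the 30% labeled-blob fraction for FS_sign is taken from the experiments in Section 5.

```python
def reidentify(sequences, primaries, t_dis, min_label_frac=0.30):
    """sequences: {person: [appearance models]}; primaries: primary models."""
    best, best_ls = None, -1
    for person, seq in sequences.items():
        # D_sign summed over primary classes gives each blob's label count
        labels = [sum(model_distance(m, p) < t_dis for p in primaries)
                  for m in seq]
        ls = sum(labels)                           # label scale LS_i
        lbn = sum(1 for n in labels if n > 0)      # labeled blobs (lb_sign)
        # FS_sign gate, then keep the maximum label scale
        if lbn >= min_label_frac * len(seq) and ls > best_ls:
            best, best_ls = person, ls
    return best                                    # target person t (or None)
```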

    4.3 Update of primary appearance models

The blobs of target person t captured from the newest camera carry newer information than those from foregoing cameras. The primary appearance models should therefore be updated to ensure the accuracy of continuous disjoint tracking.

For a blob in the sequence of target person t in the new camera, if its minimum distance to the old primary appearance models is shorter than the threshold, its appearance model is collected into the closest primary appearance model class. Otherwise, it is not collected into any existing class but generates a new appearance model class by itself, with its own label and inner model count.

The appearance models in the old primary model classes and the new model classes then renew the primary appearance models: primary classes are re-selected by the population criterion of Section 4.1.2, and their mean-histogram models are recomputed as in Eq.(11).
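A sketch of this update follows, reusing model_distance and primary_models from the earlier sketches; the re-selection of primary classes by the population criterion is an assumption consistent with Section 4.1.

```python
def update_classes(classes, new_models, t_dis):
    """Fold the target's new blobs into the class structure, then re-select."""
    for m in new_models:
        dists = [model_distance(m, cls[0]) for cls in classes]
        if dists and min(dists) < t_dis:
            classes[dists.index(min(dists))].append(m)   # join closest class
        else:
            classes.append([m])                          # new class of its own
    total = sum(len(c) for c in classes)
    return primary_models(classes, total)                # renewed primaries
```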

    5 Experimental Results

    5.1 Experimental setup

The experiments are conducted successively on the well-known VIPeR benchmark dataset and in real complex scenarios. Firstly, this paper uses the images in VIPeR to test the effectiveness of human re-identification based on the improved appearance feature descriptors. The images in the dataset are randomly split into two sets: Camera A and Camera B. VIPeR is the most challenging dataset currently available for single-shot pedestrian re-identification. Secondly, the effectiveness of human tracking is tested in real complex scenarios composed of seven non-overlapping views, which involve large visual angle changes and illumination variations. These scenarios are closer to real life and reveal the difficulty of disjoint video surveillance. Each view in the complex scenarios captures many continuous images and enables multi-shot analysis.

    5.2 Experiments in VIPeR dataset

In these experiments, the dataset is split into 10 random sets of 200 pedestrians, and the average of the 10 experimental results is compared with those of two excellent benchmark methods, namely Gray's ELF 200[22] and Chen's Adaboost classifier based on multiple features[23]. The cumulative matching characteristic (CMC) curves are depicted in Fig.2.

Fig.2 CMC comparison of performance on VIPeR dataset

Fig.2 demonstrates that the proposed feature representation has stronger discriminative power than the other two state-of-the-art methods. It shows the excellent performance of single-shot people re-identification based on the improved uniformly-partitioned local HSV color descriptor and the improved dLBP texture descriptor.

Further experiments on the VIPeR dataset test the computation time of the proposed appearance features. The result is compared with two other human tracking approaches, namely Chen's[23] and Chae's[2]. All 632 normalized images in Camera A are used to test the computation time of each approach. The comparison, illustrated in Fig.3, indicates that the proposed method extracts appearance features faster than the other two excellent algorithms. This real-time feature extraction underpins multi-shot human tracking in terms of computational speed.

Fig.3 Comparison of computation time for feature extraction

    5.3 Experiments in complex scenarios

The complex scenario experiments aim to verify continuous tracking across non-overlapping cameras. At the beginning of tracking, the gallery group needs to be initialized first.

    5.3.1 Gallery group initialization

The gallery group initialization chooses initial source images to train the parameters of the target person. Three experiments are conducted here. The first two use images captured from a single shooting angle, while the third uses images captured from multiple visual angles. The three experiments are intended to prove the importance of collecting feature information from multiple visual angles.

In the first two experiments, Camera 2 is the gallery camera, and its first 168 images complete the initialization of the gallery group with only one visual angle.

The first experiment selects Camera 3 as the probe camera, which has a visual angle similar to that of the gallery Camera 2. Persons 1, 2 and 3 appear in Camera 3; the statistical geometric parameters of their blob sequences are listed in Table 1. The similarity of the sequence of Person 1 with the gallery group is 48.8%, less than the threshold of 75%, indicating that Person 1 differs from the gallery target in terms of geometric features, so the sequence is deleted. The label amount of each frame in the sequences of the remaining Person 2 and Person 3 is illustrated in Fig.4(a), where Person 2 has a larger label scale (the sum of the label amounts of all frames) and its LBN is also clearly larger than 30% of the total sequence. Therefore, Person 2 is determined to be the target person.

    Table 1 Geometric parameters of blob sequences in Camera 3

Camera 6 is chosen as the probe camera in the second experiment; it monitors the same three persons as Camera 3 but has a large angle difference with the gallery Camera 2. Fig.4(b) shows that when Person 2 and Person 3 come into Camera 6, they have almost no label scale, so no one can be reliably tracked. This means tracking fails in Camera 6.

In the third experiment, Camera 4 is used as the gallery camera. It initializes the gallery group with multiple shooting angles using the first 245 images. Fig.5 shows the FOV of Camera 4, displaying a wide view field, and lists four primary appearance models of the target Person 2.

As in the second experiment, Camera 6 is selected as the probe camera. In contrast, the label scales in the third experiment (Fig.6) dramatically outperform those of the second in Fig.4(b).

Fig.4 Label scales of Persons 2, 3 in Cameras 3 and 6 when Camera 2 initializes the gallery group

Fig.5 FOV of Camera 4 with multiple visual angles and primary appearance models of Person 2

    5.3.2 Continuous tracking

Another group of experiments is conducted on six non-overlapping cameras to validate continuous human tracking across non-overlapping cameras. Fig.7 lists the six FOVs of these experimental cameras. Each camera shoots from a unique angle and under different illumination conditions. Fig.7 demonstrates the effectiveness of human tracking across the six non-overlapping cameras.

Fig.6 Label scales of Persons 2, 3 in Camera 6 when Camera 4 initializes the gallery group

Fig.7 Human tracking across six disjoint cameras

Fig.8 shows the label amount of each frame from Camera 2 to Camera 7, with the frames arranged in time order. The label amount of each frame tends to increase from 0 to 4, which means more and more appearance features have been captured and stored in the gallery group.

Fig.9 shows the performance parameters from Camera 2 to Camera 7, where the matching rate represents the ratio of matched frames to all analyzed images, and the erroneous matching rate is the ratio of incorrectly matched frames to all matched frames. The accurate matching rate increases over time, which indicates that the method can accurately track the target person appearing under different visual angles and view conditions. This upward trend implies that the generalization capability carries across different cameras over time. The erroneous matching rate remains nearly unchanged, and even rises slightly over time, mainly because an incorrect blob may be wrongly matched with the primary appearance models, and the growing number of primary appearance models raises the risk of erroneously re-identifying a wrong blob. However, as the erroneous matching rate stays around 10%, it is still acceptable.

Fig.8 Label scale of each frame from Camera 2 to Camera 7

Fig.9 Performance parameters of human tracking from Camera 2 to Camera 7

The experimental results in complex scenarios prove the feasibility of this adaptive human tracking mechanism across non-overlapping cameras. In the mechanism, gallery group initialization is important: when the gallery group is initialized at multiple angles, the person is more likely to be tracked successfully in the probe camera, since more appearance information is captured under different visual angles. Fig.8 depicts the increase of the label amount of each frame over time; essentially, it proves that the re-identification ability improves as more primary appearance models are stored in the system. The update of the gallery group renews the primary appearance models camera by camera, thus establishing an adaptive human tracking mechanism in non-overlapping cameras.

    6 Conclusions

An adaptive human tracking approach based on primary appearance models and statistical geometric features is proposed to track humans across disjoint cameras in depression angles. All the extracted features remain robust to illumination variations, foreground errors, and visual angle changes. The local uniformly-partitioned HSV color features are extracted in real time. The combination of appearance and statistical geometric features produces a discriminative and robust feature representation. The human tracking mechanism is the main contribution. It uses both retrospective and on-the-fly information to collect the maximum appearance information captured from different visual angles. The update of primary appearance models enables the human tracking mechanism to renew itself adaptively; in this adaptive mechanism, later cameras gain higher hit rates and the generalization capability improves. The experiments conducted on the benchmark dataset show the excellent accuracy and real-time extraction of the feature representation. The experiments conducted in complex scenarios prove the good generalization capability of the proposed mechanism and show good performance in resisting inter-camera and intra-camera variations.

    Acknowledgements

This work was funded by the Natural Science Foundation of Jiangsu Province (No. BK2012389), the National Natural Science Foundation of China (Nos. 71303110, 91024024), and the Foundation of Graduate Innovation Center in NUAA (Nos. kfjj201471, kfjj201473).

[1] Doretto G, Sebastian T, Tu P, et al. Appearance-based person re-identification in camera networks: Problem overview and current approaches[J]. J Ambient Intelligence and Humanized Computing, 2011(2): 127-151.

[2] Chae Hyun-Uk, Jo Kang-Hyun. Appearance feature based human correspondence under non-overlapping views[C]∥Proceedings of 5th International Conference on Intelligent Computing, Emerging Intelligent Computing Technology and Applications. Ulsan: Springer, 2009: 635-644.

[3] Zeng Fanxiang, Liu Xuan, Huang Zhitong, et al. Robust and efficient visual tracking under illumination changes based on maximum color difference histogram and min-max-ratio metric[J]. J Electron Imaging, 2013, 22(4): 043022.

[4] Lee Seok-Han. Real-time camera tracking using a particle filter combined with unscented Kalman filters[J]. J Electron Imaging, 2014, 23(1): 013029.

[5] Wu Yiquan, Zhu Li, Hao Yabing, et al. Edge detection of river in SAR image based on contourlet modulus maxima and improved mathematical morphology[J]. Transactions of Nanjing University of Aeronautics and Astronautics, 2014, 31(5): 478-483.

[6] Madden C, Piccardi M. A framework for track matching across non-overlapping cameras using robust shape and appearance features[C]∥Proceedings of IEEE Conference on Advanced Video and Signal Based Surveillance. London: IEEE, 2007: 188-193.

[7] Zhang Chao, Wang Daobo, Farooq M. Real time tracking for fast moving object on complex background[J]. Transactions of Nanjing University of Aeronautics and Astronautics, 2010(4): 321-325.

[8] Hori Takayuki, Ohya Jun, Kurumisawa Jun. Identifying a walking human by a tensor decomposition based approach and tracking the human across discontinuous fields of views of multiple cameras[C]∥Proceedings of Conference on Computational Imaging VIII. San Jose: SPIE, 2010: 75330.

[9] Lin Yu-Chih, Yang Bing-Shiang, Lin Yu-Tzu, et al. Human recognition based on kinematics and kinetics of gait[J]. J Medical and Biological Engineering, 2010(31): 255-263.

[10] Montcalm T, Boufama B. Object inter-camera tracking with non-overlapping views: A new dynamic approach[C]∥Proceedings of 2010 Canadian Conference on Computer and Robot Vision. Ottawa: IEEE, 2010: 354-361.

[11] Prosser B, Zheng W, Gong S, et al. Person re-identification by support vector ranking[C]∥Proceedings of British Machine Vision Conference. [S.l.]: BMVA Press, 2010: 21.1-21.11.

[12] Bazzani L, Cristani M, Perina A, et al. Multiple-shot person re-identification by chromatic and epitomic analyses[J]. Pattern Recognition Letters, 2012(33): 898-903.

[13] Zheng W, Gong S, Xiang T. Person re-identification by probabilistic relative distance comparison[C]∥Proceedings of IEEE Conference on Computer Vision and Pattern Recognition. Colorado: IEEE, 2011: 649-656.

[14] Avraham T, Gurvich I, Lindenbaum M, et al. Learning implicit transfer for person re-identification[C]∥Proceedings of European Conference on Computer Vision. Florence: Springer, 2012: 381-390.

[15] Hirzer M, Beleznai C, Roth P, et al. Person re-identification by descriptive and discriminative classification[C]∥Proceedings of 17th Scandinavian Conference on Image Analysis. Ystad: Springer, 2011: 91-102.

[16] Zheng W, Gong S, Xiang T. Re-identification by relative distance comparison[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013(35): 653-668.

[17] Layne R, Hospedales T M, Gong S. Domain transfer for person re-identification[C]∥Proceedings of ARTEMIS Workshop at ACM Multimedia. Barcelona: [s.n.], 2013: 25-32.

[18] Yang Chenhui, Kuang Weixiang. Robust foreground detection based on improved ViBe in dynamic background[J]. International Journal of Digital Content Technology and Its Applications (JDCTA), 2013(7): 754-763.

[19] Gonzalez R C, Woods R E. Digital image processing[M]. 3rd ed. Beijing: Publishing House of Electronics Industry, 2010.

[20] Trefny J, Matas J. Extended set of local binary patterns for rapid object detection[C]∥Proceedings of the Computer Vision Winter Workshop. Nove Hrady: Czech Pattern Recognition Society, 2010: 37-43.

[21] Dai Xiaochen, Payandeh Shahram. Geometry-based object association and consistent labeling in multi-camera surveillance[J]. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2013, 3(2): 175-184.

[22] Gray D, Tao H. Viewpoint invariant pedestrian recognition with an ensemble of localized features[C]∥Proceedings of the 10th European Conference on Computer Vision. Marseille: Springer, 2008: 262-275.

[23] Chen Xiaotang, Huang Kaiqi, Tan Tieniu. Object tracking across non-overlapping cameras using adaptive models[C]∥Proceedings of ACCV 2012 International Workshops on Computer Vision. Daejeon: Springer, 2012: 464-477.

(Executive editor: Zhang Tong)

TP391.41    Document code: A    Article ID: 1005-1120(2015)01-0048-13

*Corresponding author: Shao Quan, Associate Professor, E-mail: shaoquan@nuaa.edu.cn.

How to cite this article: Shao Quan, Liang Binbin, Zhu Yan, et al. Adaptive human tracking across non-overlapping cameras in depression angles[J]. Trans. Nanjing U. Aero. Astro., 2015, 32(1): 48-60.

http://dx.doi.org/10.16356/j.1005-1120.2015.01.048

(Received 13 November 2014; revised 7 January 2015; accepted 12 January 2015)
