
    Template-guided frequency attention and adaptive cross-entropy loss for UAV visual tracking

    2023-10-25 12:12:48
    CHINESE JOURNAL OF AERONAUTICS, 2023, Issue 9

    Yuanliang XUE, Guodong JIN, Tao SHEN, Lining TAN, Lianfeng WANG

    School of Nuclear Engineering, PLA Rocket Force University of Engineering, Xi’an 710025, China


    Abstract This paper addresses the problem of visual object tracking for Unmanned Aerial Vehicles (UAVs). Most Siamese trackers regard object tracking as a classification and regression problem. However, it is difficult for these trackers to classify accurately in the face of similar objects, background clutters, and other challenges common in UAV scenes, so a reliable classifier is the key to improving UAV tracking performance. In this paper, a simple yet efficient tracker following the basic architecture of the Siamese neural network is proposed, which improves the classification ability at three stages. First, a frequency channel attention module is introduced to enhance the target features via frequency domain learning. Second, a template-guided attention module is designed to promote information exchange between the template branch and the search branch, which yields reliable classification response maps. Third, an adaptive cross-entropy loss is proposed to make the tracker focus on hard samples that contribute more to the training process, solving the data imbalance between positive and negative samples. To evaluate the proposed tracker, comprehensive experiments are conducted on two challenging aerial datasets, UAV123 and UAVDT. Experimental results demonstrate that the proposed tracker achieves favorable tracking performance on aerial benchmarks at beyond 41 frames/s. We also conducted experiments in real UAV scenes to further verify the efficiency of our tracker in the real world.

    1.Introduction

    With the rapid development of Unmanned Aerial Vehicles (UAVs) and their aerial photography equipment, UAVs with visual tracking capability are widely used in traffic patrolling, wildlife protection, disaster response, and military reconnaissance due to their flexible motion, low cost, and high safety.1 Because of its important role in video intelligence processing, object tracking technology has received extensive attention.2 Despite the demonstrated successes, it remains a challenge to design a tracker that is robust to various UAV scenes with small objects, similar distractors, background clutters, scale variation, and frequent occlusions. Therefore, an accurate and robust tracker is of great value for the wide application of UAVs.

    Correlation Filter (CF) trackers achieve acceptable performance at high speed thanks to hand-crafted features and the fast Fourier transform. MOSSE3 (Minimum Output Sum of Squared Error) was the first to compute similarity in the frequency domain, which greatly improves tracking speed. Henriques et al.4 design a Kernelized Correlation Filter (KCF), incorporating multiple feature channels and adopting circular-shift sampling to enhance target representation. Building on Refs. 3-4, a series of extensions5-8 show state-of-the-art performance.

    Recently, deep features provided by Convolutional Neural Networks (CNNs) have demonstrated powerful object characterization capabilities and gradually replaced traditional hand-crafted features in computer vision tasks such as object detection and tracking. HCF9 (Hierarchical Convolutional Feature tracker), C-COT10 (Continuous Convolution Operators Tracker), and DeepSRDCF11 (Spatially Regularized Discriminative Correlation Filter with Deep features) all make preliminary explorations of combining CF trackers with existing CNNs, complementarily improving performance. Tao et al.12 propose a novel Siamese tracker called SINT, the first tracker to apply a Siamese architecture. SiamFC13 (Fully-Convolutional Siamese networks) and SiamRPN14 (Siamese Region Proposal Network) improve tracking speed and accuracy by introducing a new similarity calculation and the Region Proposal Network (RPN),15 respectively. However, the feature extraction capability of AlexNet16 used in Refs. 13-14 is not powerful enough for complex challenges. SiamRPN++17 achieves leading performance on several benchmarks by employing ResNet-50.18 Since SiamRPN++ delivers high tracking performance with a simple structure, plenty of Siamese trackers based on it, including Refs. 19-22, have shown outstanding performance.

    Numerous trackers have been proposed for UAV tracking in recent years. Li et al.23 develop a new CF-based tracker for UAVs, called AutoTrack (Tracking with Automatic regularization), which dynamically and automatically adjusts the hyperparameters of spatio-temporal regularization terms. Yang et al.2 train a pruned classifier via least-squares transformation in the spatial and Fourier domains, achieving real-time tracking but suboptimal performance. SiamAPN24 (Siamese Anchor Proposal Network) adopts the no-prior structure of the Anchor Proposal Network (APN) to endow the anchors with adaptivity, enhancing object perception. By virtue of attention modules, SiamAPN++25 further raises the expressive ability of features. CIFT26 (Contextual Information Fusion Tracker) inserts different attentions into the template branch and the search branch, improving the tracker's classification ability. Coincidentally, Cui et al.27 introduce spatial attention and channel attention into the two branches, respectively, to enhance target features and suppress distractor information. The above literature mainly focuses on promoting classification ability by designing constraint terms or adding attention modules. However, these trackers ignore the importance of the template in finding the target and overlook the imbalance of training samples.

    This paper proposes a template-guided frequency attention tracker (referred to as TGFAT) and introduces an adaptive cross-entropy loss for training a high-performance classifier.

    The main contributions of this work can be summarized as follows:

    (1) We insert a Frequency Channel Attention (FCA) module into the backbone to filter the features, which can suppress distractor background information via frequency domain learning.

    (2) To enhance the tracker classification capability, we design the Template-Guided Attention (TGA) module between the template branch and search branch, thus utilizing the given template features to guide the generation of search features.

    (3) We find that positive samples are far fewer than negative samples in UAV scenes, leading to inefficient training of the Siamese tracker. To eliminate the class imbalance between positive and negative examples, we design the Adaptive Cross-Entropy (ACE) loss by introducing a hyperparameter.

    (4) We present a real-time tracker that achieves outstanding tracking performance on several challenging aerial benchmark datasets. In addition, real-world tests demonstrate impressive practicability and performance at a real-time speed of ~41.3 FPS (Frames Per Second).

    2.Related work

    Visual tracking is a popular research topic and plays an eminently important role in many fields. An exhaustive survey of object tracking is given in Ref. 28; the following is a compendious review of the most representative Siamese trackers, as well as related issues on attention modules and loss functions.

    2.1.Trackers with Siamese architecture

    Siamese architecture-based trackers formulate visual object tracking as similarity embedding between a target template and a search region. SINT12 (Siamese Instance Search Tracker) creatively applies Siamese architectures to find the target in the search region by the highest similarity response. Though SINT performs well, its speed is only 4 FPS. Therefore, SiamFC13 calculates the similarity between the whole candidate image and the template at one time by cross-correlation (Xcorr), improving the speed to 86 FPS.

    The emergence of SiamFC shifted object tracking from correlation filters to Siamese architectures. CFNet29 (Correlation Filter Network) integrates the correlation filter into SiamFC, giving CF a stronger representation ability. SA-Siam30 (Semantic and Appearance twofold branch Siamese network) adds a semantic branch to the original appearance branch, complementarily improving the robustness of SiamFC. SiamRPN14 employs the RPN module to treat object tracking as local object detection and to realize classification and regression, outperforming most trackers and running at 160 FPS. Based on SiamRPN, numerous works have been done for further improvement. SiamMask21 (Siamese Mask tracker) finds some consistency between object tracking and semi-supervised object segmentation, and therefore augments the anti-interference ability of SiamFC with binary segmentation masks. SiamRPN++17 is the first tracker to take advantage of deeper features, successfully resolving the negative impact of training with ResNet-50. Besides, SiamDW31 (Deeper and wider Siamese networks) stacks cropping-inside residual modules to deepen the network while preventing the learning of positional bias. Moreover, anchor-free trackers19,22,32-33 show outstanding advantages in scale adaptation and generalizability due to pixel-by-pixel prediction.

    2.2.Attention modules in Siamese trackers

    Attention modules, such as SE34 (Squeeze-and-Excitation), CBAM35 (Convolutional Block Attention Module), and ECA36 (Efficient Channel Attention), have been demonstrated to offer great potential in improving the performance of deep CNNs. Therefore, plenty of trackers explore the application of attention modules in object tracking, achieving high performance.

    SA-Siam30 inserts channel attention in the semantic branch so that the channels where the target is located are highly activated, suppressing distractor semantic information. SiamBM37 (Siamese Better Match) finds that when the aspect ratio of the target is large, significant background clutters easily occur, and that applying a spatial mask to the feature map suppresses background more strongly and stably than channel attention. RASNet38 (Residual Attentional Siamese Network) superimposes a residual attention module and a general attention module on the template branch to learn the common characteristics and differences of targets, and integrates a channel attention module to adapt to appearance changes. SiamAPN++25 aggregates and models the self-semantic interdependencies and the cross-interdependencies with two attentional aggregation networks. CIFT26 establishes long-range relationships using an attention information fusion module and learns the appearance features of the detection frame with a multi-spectral attention module. DAFSiamRPN39 (Distance attention fusion SiamRPN) employs a convolutional attention module and a frequency channel attention module, respectively, to refine spatial semantic information and channel feature information.

    Much work has been done to apply attention modules to Siamese trackers,40-43 achieving satisfactory performance. However, the above literature all adopts self-contained attention modules, meaning that attention weights are learned only from a branch's own features. Our template-guided attention module learns attention from the template features and uses the learned attention weights to enhance target features in the search features, promoting cross-branch information exchange.

    2.3.Loss functions for Siamese trackers

    Following SiamRPN, Siamese trackers regard object tracking as classification and regression problems. Thus, a classification loss function and a regression loss function are employed in training these Siamese trackers. The classification loss, such as cross-entropy loss, helps trackers learn to distinguish the target from the background, and the regression loss, such as IoU (Intersection over Union) loss,44 trains the tracker's ability to accurately locate the target based on the optimal classification results.

    Some work focuses on improving the above loss functions to find losses suitable for Siamese trackers. Focal loss45 is adopted in SiamFC++32 to alleviate the overfitting problem of cross-entropy loss. However, while focal loss reduces the loss of easy samples, the loss of hard samples is also punished. Thus an adaptive focal loss40 for Siamese trackers is proposed to adaptively punish easy samples without reducing the loss of hard samples. Considering that IoU loss only works well when the bounding boxes overlap and provides no gradient for non-overlapping cases, LWTransTracker46 (Layer-Wise Transformer Tracker) replaces IoU loss with C-IoU (Complete-IoU) loss for faster convergence and better regression accuracy. Besides the distance between the target and the bounding box, Refs. 26,39 further consider the overlap rate and the scale variation as factors by employing D-IoU (Distance-IoU) loss in regression. In this paper, we find that data imbalance easily occurs in UAV object tracking, and neither cross-entropy loss nor focal loss solves this problem effectively. Therefore, we redesign the cross-entropy loss with only a single hyperparameter and accomplish better results.

    3.Proposed method

    In this section, we describe the proposed TGFAT framework in detail. Following the basic architecture of the Siamese neural network, TGFAT maintains the two-branch design strategy. The frequency channel attention module in the backbone network adaptively suppresses distractor information and enhances the target features via the Two-Dimensional Discrete Cosine Transform (2D DCT). The template-guided attention module collects template feature information to efficiently guide the search feature, strengthening the classification ability with negligible computational effort. Then the adaptive cross-entropy loss is introduced to adaptively punish easy samples and reduce meaningless training, solving the data imbalance in the training process. The overall pipeline of our proposed tracker is depicted in Fig. 1. We choose ResNet-50 as the backbone, into which the Frequency Channel Attention (FCA) module is inserted. The Discrete Cosine Transform (DCT) is performed in the FCA module. Then, the template-guided attention (TGA) module is integrated between the template branch and the search branch, and the ACE loss serves as the classification loss. Finally, similarity response maps are calculated in the RPN modules. The target position is located according to the maximum classification response, and then bounding box regression is performed at the predicted position.

    3.1.Architecture of Siamese trackers

    As shown in Eq.(1), tracking is modeled as a similarity learning problem by introducing two identical CNNs with shared parameters. Hence, Siamese trackers contain two branches and RPN modules. The two branches, a template branch and a search branch, are utilized to collect the template feature and the search feature. One branch processes the template of the tracking target, which is initialized by the manually labeled target area in the first frame of the video sequence and is generally represented by z. The other branch processes the search area of the video sequence. The search area for each frame is selected based on the location of the tracking result in the previous frame; that is, taking the coordinates of this location as the center, a fixed-size area is cropped out as the search area, generally represented by x. After being processed by the backbone networks, z and x yield the feature maps φ(z) and φ(x), respectively. Since φ(z) is smaller than φ(x), φ(z) is adopted as a sliding window that slides over φ(x). The whole process is similar to a convolution kernel sliding over an image in a CNN, starting from the top-left and ending at the bottom-right. After the sliding window operation is completed, the final response map, represented by f(z, x), indicates the similarity between z and x.

    Fig.1 Pipeline of TGFAT.

    f(z, x) = φ(z) * φ(x) + bI   (1)

    where φ(·) is the backbone network; * is the cross-correlation; b ∈ R is the bias of the convolution layer, and I is the identity matrix, so bI denotes a signal which takes value b at every location.
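    Eq.(1) amounts to using the template feature φ(z) as a convolution kernel slid over the search feature φ(x). A minimal, framework-free NumPy sketch (channel count and feature sizes are chosen for illustration only):

```python
import numpy as np

def xcorr(z_feat, x_feat, b=0.0):
    """Eq.(1)-style similarity: slide template feature phi(z) over phi(x).

    z_feat: (C, Hz, Wz) template features, used as the sliding kernel.
    x_feat: (C, Hx, Wx) search features, Hx >= Hz, Wx >= Wz.
    Returns f(z, x), an (Hx-Hz+1, Wx-Wz+1) response map plus bias b.
    """
    C, Hz, Wz = z_feat.shape
    _, Hx, Wx = x_feat.shape
    out = np.empty((Hx - Hz + 1, Wx - Wz + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # inner product of the template with one window of the search area
            out[i, j] = np.sum(z_feat * x_feat[:, i:i + Hz, j:j + Wz])
    return out + b

resp = xcorr(np.ones((8, 6, 6)), np.ones((8, 22, 22)))
print(resp.shape)  # (17, 17)
```

    In practice this loop is realized as a single batched `conv2d` call on the GPU; the sketch only illustrates the sliding-window semantics.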

    3.2.Frequency channel attention module

    Compared with ground scenes, the works23-25 indicate that UAV scenes contain more abundant and complex background information, easily misleading trackers into tracking the wrong target. Hence, we examine the difference between UAV scenes and ground scenes from the perspective of frequency-domain processing. As shown in Fig. 2, three groups of representative tracking scenes, covering two common target types (vehicles and people), are selected for DCT (Discrete Cosine Transform) processing. In the DCT, |F(ω)| is the amplitude and ω is the frequency. The results in Fig. 2 show that videos captured from UAVs have richer high-frequency components than those captured from the ground. Fourier analysis theory assumes that noise is mostly high-frequency, which indicates that UAV scenes contain more noise information. Therefore, filtering the feature information extracted by the backbone is key. Some trackers40-41 insert traditional spatial and channel attention modules into the backbone, lacking effective processing of high-frequency components. So this paper introduces the FCA (Frequency Channel Attention) module proposed in FcaNet,47 fully considering the suppression of noise information.

    FcaNet47 points out that using Global Average Pooling (GAP) in channel attention preserves only the lowest-frequency information, while all components from other frequencies are discarded. FcaNet further proves that GAP is a special case of the 2D DCT, so the FCA module is proposed to generalize GAP to more frequency components of the 2D DCT.

    Second, the cosine basis function of the 7 × 7 2D DCT is defined as

    B(ui, vi)[h, w] = cos(π ui (2h + 1)/14) cos(π vi (2w + 1)/14),  h, w = 0, 1, ..., 6   (2)

    where [ui, vi] are the 2D frequency-component indices corresponding to Xi.

    Fig. 4 shows the whole set of cosine basis functions. Note that the 2D indices of the top-left block are ui = 0, vi = 0, and the other blocks are indexed in the same way. To enhance the learning of frequency components, each group is assigned a corresponding 2D DCT frequency component. Then the 2D DCT is performed on each group to obtain the compression result Freqi, which can be viewed as the representation vector of each group. The compression process can be written as

    Freqi = Σ(h=0..6) Σ(w=0..6) Xi[h, w] B(ui, vi)[h, w]   (3)

    As shown in Eq.(4), the whole representation vector Freq of the input is obtained by concatenation:

    Freq = concat([Freq0, Freq1, ..., Freqn-1])   (4)

    Freq is the multi-spectral vector that contains the importance of different frequency components.

    Next, the FCA weight is collected through Eq.(5), which assigns different weights to different frequency components according to their importance:

    weight = δ(fc(Freq))   (5)

    Fig.2 Spectrum analysis of ground and UAV images.

    Fig.3 Illustration of FCA module.

    Fig.4 Visualization of 7 × 7 DCT basis functions.

    where δ is the sigmoid activation function, and fc is the fully connected layer.

    To ensure that each position on the input feature map X has a corresponding weight, the attention weights are expanded to the same size as X. Finally, the attention feature map X′ is generated by

    X′ = weight ⊗ X   (6)

    which suppresses the unimportant frequency components. The FCA module decomposes the input features into combinations of different frequency components by the channel split and adjusts the proportion of each channel by the DCT. FCA obtains better frequency-domain energy concentration and condenses the relatively important information in the image, paying more attention to the tracking target.
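    The split-and-compress step of Eqs.(2)-(4) can be sketched in a few lines of NumPy. The example below builds the 2D DCT cosine basis, compresses each channel group with its assigned frequency component, and verifies FcaNet's observation that GAP equals the (0, 0) frequency component (up to the H × W factor). The frequency assignment and group count here are illustrative, not the paper's learned selection:

```python
import numpy as np

def dct_basis(u, v, H, W):
    # Cosine basis function of the 2D DCT at frequency indices (u, v), Eq.(2)
    i = np.arange(H)[:, None]
    j = np.arange(W)[None, :]
    return np.cos(np.pi * u * (2 * i + 1) / (2 * H)) * \
           np.cos(np.pi * v * (2 * j + 1) / (2 * W))

def fca_compress(X, freqs):
    """Multi-spectral compression (Eqs.(3)-(4) style): channels are split into
    len(freqs) groups and each group is reduced with its own DCT component."""
    C, H, W = X.shape
    groups = np.split(X, len(freqs))          # C must be divisible by len(freqs)
    Freq = []
    for g, (u, v) in zip(groups, freqs):
        basis = dct_basis(u, v, H, W)
        Freq.append((g * basis).sum(axis=(1, 2)))  # one scalar per channel
    return np.concatenate(Freq)               # length-C representation vector

# GAP is the (0, 0) frequency component of the 2D DCT (up to the H*W factor)
X = np.random.rand(4, 7, 7)
gap = X.mean(axis=(1, 2))
dct00 = fca_compress(X, [(0, 0)]) / (7 * 7)
print(np.allclose(gap, dct00))  # True
```

    In the full FCA module, the concatenated vector Freq would then pass through the fully connected layer and sigmoid of Eq.(5) to produce the per-channel weights.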

    3.3.Template-guided attention module

    Since the template used in Siamese trackers is usually fixed and not updated, effectively leveraging the template features is key. Most Siamese trackers26,38-42 choose to independently enhance the representation ability of feature information in the search branch and the template branch, ignoring the great potential of template features in guiding the generation of search features. A UAV has a wide field of view, so similar object distractors, background clutters, and fast motion are more likely during tracking. Therefore, the tracking object's feature is not salient in the search features, and distractor features around the tracking object further hinder the tracker from distinguishing it.

    Following Ref. 48, and aiming to exploit the great potential of template features, we propose the Template-Guided Attention (TGA) module. As shown in Fig. 5, the overall process is that template attention weights are collected from the template feature, and then the target feature in the search image is strengthened under the guidance of these weights. Given the template feature φ(z), we employ Efficient Channel Attention (ECA)36 to collect the template feature information. Firstly, Eq.(7) uses GAP to compress the template feature information on each channel and generate the global spatial representations of size C × 1 × 1. Each of them squeezes the spatial dimension from H × W to 1 × 1. The process is formulated as

    sc = (1/(H × W)) Σ(i=1..H) Σ(j=1..W) uc(i, j)   (7)

    where φ(z) is the input template feature map with C channels; uc(i, j) is the single-channel feature map, c ∈ C; and (i, j) is the position coordinate.

    ω = σ(C1Dk(s))   (8)

    where σ is the sigmoid activation function; C1Dk is the 1D-Conv whose kernel covers k adjacent channels, and s is the vector of global spatial representations from Eq.(7). One thing to note is that our parameter k does not need to vary with the channel dimension, which differs from the ECA module.

    Secondly, as shown in Eq.(8), a One-Dimensional sparse Convolution (1D-Conv) is introduced to learn the relationships between the current channel and adjacent channels, instead of the 2 FC (Fully Connected) layers used in SE.34 Then the attention weight of each channel is predicted according to the classes of objects. Considering that the global spatial representations describe each channel independently, the template attention weights are further obtained by the 1D-Conv. Involving few parameters, the 1D-Conv is a simple yet efficient way to learn local cross-channel interaction whose coverage equals the kernel size.

    Feature maps are nearly orthogonal and different channels represent different object classes,17 so it is not necessary to treat all channels equally. The purpose of single object tracking is to track the specific target, while the others are background and distractors. Channel attention can help trackers concentrate on 'what' is useful for tracking an object.49 By reducing the attention weight on unimportant channels, the ECA module enhances the model's learning of template features.

    Thirdly, to ensure that each position in the input feature map φ(x) has a corresponding weight, the attention weights ω are expanded to the same dimension as φ(x). Then the search attention features φ′(x) are obtained by

    φ′(x) = ω ⊗ φ(x)   (9)

    which embeds template feature relationships into the search features and selectively suppresses the channels where distractor targets are located.

    Finally, to adapt to the large scale variations in UAV scenes, the bounding box regression and classification are completed in Anchor-Free RPN (AF-RPN) modules, as done in SiamBAN19 (Siamese box adaptive network). As shown in Eq.(10), the similarity between the search attention features and the template features is calculated in the AF-RPNs, and the similarity response maps B, S represent the matching degree.
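    The TGA steps above (GAP on the template, a k-tap 1D-Conv across channels, sigmoid, then re-weighting the search feature) can be sketched as follows. This is a framework-free illustration: the 1D-Conv kernel is a fixed averaging stand-in, whereas in the tracker it is learned:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def template_guided_attention(z_feat, x_feat, k=3):
    """TGA sketch: channel weights learned from the TEMPLATE feature
    (GAP + k-tap 1D conv, ECA-style) re-weight the SEARCH feature."""
    # Eq.(7): global average pooling of the template, one scalar per channel
    s = z_feat.mean(axis=(1, 2))                    # (C,)
    # Eq.(8): local cross-channel interaction with a k-tap 1D convolution
    kernel = np.ones(k) / k                         # stand-in for learned weights
    mixed = np.convolve(s, kernel, mode='same')     # (C,)
    w = sigmoid(mixed)                              # template attention weights
    # Eq.(9): broadcast the per-channel weights over the search feature
    return w[:, None, None] * x_feat

z = np.random.rand(16, 7, 7)
x = np.random.rand(16, 31, 31)
out = template_guided_attention(z, x)
print(out.shape)  # (16, 31, 31)
```

    The key design point is that the sigmoid weights come entirely from the template branch, so the search feature is modulated by what the template says the target looks like rather than by its own statistics.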

    3.4.Adaptive cross-entropy loss

    Fig.5 TGA module.

    Training a good classifier always needs a sufficient number of high-quality samples, both positive and negative. However, object tracking in UAV scenes has an imbalanced distribution of training samples50-51: (A) positive samples are far fewer than negative samples, leading to inefficient training of Siamese trackers; and (B) most negative samples are easy negatives (non-similar, non-semantic background) that contribute little useful information to learning a discriminative classifier.45 As a consequence, the classifier is dominated by easily classified background samples and degrades when encountering difficult similar semantic distractors.

    Leng et al.52 prove that the cross-entropy loss easily overfits the easy samples, which causes the model to ignore the hard but important samples. Focal loss tries to avoid the overfitting problem by reducing the loss of easy samples, but the loss of hard samples is also punished, which cannot effectively deal with data imbalance. So neither the cross-entropy loss nor the focal loss is an optimal classification loss for object tracking in UAV scenes. Referring to Ref. 52, this paper further mines the great potential of the cross-entropy loss function in object tracking and proposes an Adaptive Cross-Entropy (ACE) loss.

    First, the cross-entropy loss is defined by

    CE(pt) = -log(pt)   (11)

    where pt is the model's prediction probability of the target ground-truth class, and p is the probability that the object is predicted to be a positive sample (pt = p for positive samples and pt = 1 - p for negative samples).

    Then, the Taylor expansion of the cross-entropy loss in the bases of (1 - pt)^j, j ∈ N* can be written as

    CE(pt) = -log(pt) = Σ(j=1..∞) (1/j)(1 - pt)^j = (1 - pt) + (1/2)(1 - pt)^2 + ...   (12)

    Using the gradient descent algorithm to optimize the cross-entropy loss in the training process requires taking the gradient with respect to pt. Thus, the gradient of the cross-entropy loss is

    -dCE(pt)/dpt = 1/pt = Σ(j=1..∞) (1 - pt)^(j-1) = 1 + (1 - pt) + (1 - pt)^2 + ...   (13)

    The overfitting problem can be seen from Eq.(13): the leading gradient term is 1, which provides a constant gradient regardless of the value of pt. On the contrary, the jth gradient term is strongly suppressed when j ≫ 1 and pt gets closer to 1. Note that when pt gets closer to 1, the model is more confident in predicting the target class, which usually represents an easy sample. Therefore, the focal loss adds a coefficient to the cross-entropy loss to reduce the loss of easy samples, as shown in

    FL(pt) = -(1 - pt)^γ log(pt)   (14)

    The focal loss is proven effective because hard examples contribute more to model training than before. But note that the coefficient also punishes valuable hard samples (1 - pt > 0.5) and hinders their training to some extent, lacking adaptability.40 Feng et al.53 find that tuning the first polynomial term can improve model robustness and performance. Therefore, different from focal loss, only the first polynomial coefficient of the cross-entropy loss is modified in the ACE loss.
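    The behavioral difference between the three losses can be seen numerically. The `ace_loss` below is a Poly-1-style sketch consistent with the description (only the first Taylor coefficient of CE is changed, by a hyperparameter eps); the exact form used in the paper is not reproduced here and may differ:

```python
import numpy as np

def ce_loss(pt):
    # Eq.(11): cross-entropy on the ground-truth class probability pt
    return -np.log(pt)

def focal_loss(pt, gamma=2.0):
    # Eq.(14): focal loss scales CE by (1 - pt)^gamma, shrinking easy AND hard losses
    return -((1 - pt) ** gamma) * np.log(pt)

def ace_loss(pt, eps=2.0):
    """Poly-1-style sketch of an adaptive CE: only the FIRST term of the
    Taylor expansion in Eq.(12) has its coefficient changed, from 1 to
    (1 + eps). Assumed form for illustration, not the paper's exact loss."""
    return -np.log(pt) + eps * (1 - pt)

easy, hard = 0.95, 0.30
for name, fn in [("CE", ce_loss), ("focal", focal_loss), ("ACE", ace_loss)]:
    print(f"{name}: easy={fn(easy):.3f} hard={fn(hard):.3f}")
# Focal nearly zeroes the easy sample but also shrinks the hard one;
# the Poly-1-style modification keeps the full CE gradient on hard samples.
```

    Unlike focal loss, whose multiplicative factor dampens every term of the expansion, an additive first-term modification leaves the constant leading gradient intact for hard samples while still reshaping the loss on easy ones.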

    4.Experiment

    4.1.Implementation details

    Our experiments are implemented under the PyTorch 0.4.1 framework on an Intel Core i7-9700 CPU (3.6 GHz) along with two NVIDIA GeForce RTX 2080Ti GPUs. The GOT-10k,54 YouTube-BoundingBoxes,55 ImageNet VID,56 DET,56 and COCO57 datasets are used to train TGFAT, together comprising 760 thousand video sequences and more than ten million annotated pictures. Each group of data fed to the model comes from two different frames containing the same target in an annotated video, which simulates the movement of the target and helps the model capture robust features. The template image has a size of 127 × 127 and slides over the 255 × 255 search images with a stride of 8.

    Parameter settings: The number of adjacent channels k in the TGA module is 3, and the hyperparameter ε in the ACE loss is 2. Stochastic Gradient Descent (SGD) is utilized to train the model for a total of 20 epochs. A total of 1 × 10^6 pairs of training samples are sampled in each epoch, and the batch size is set to 22. We use a warm-up learning rate of 1 × 10^-3 to 5 × 10^-3 for the first 5 epochs, which then decays exponentially from 5 × 10^-3 to 5 × 10^-5 with a momentum of 0.9 for the last 15 epochs. In the first 10 epochs, we only train the heads with the pre-trained ResNet-50 parameters frozen. Then the conv3 and conv4 layers of the backbone are fine-tuned in the last 10 epochs. The total loss is the sum of the ACE loss for classification and the IoU loss for regression.
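    The described schedule (5-epoch warm-up from 1e-3 to 5e-3, then exponential decay to 5e-5 over the last 15 epochs) can be sketched as below. The linear shape of the warm-up ramp is an assumption; the paper does not state it:

```python
def lr_schedule(epoch, total=20, warmup=5,
                warm_start=1e-3, warm_end=5e-3, final=5e-5):
    """Warm-up then exponential decay, matching the described schedule:
    epochs 0..4 ramp 1e-3 -> 5e-3, epochs 5..19 decay 5e-3 -> 5e-5."""
    if epoch < warmup:
        # linear warm-up (the ramp shape is an assumption)
        t = epoch / max(warmup - 1, 1)
        return warm_start + t * (warm_end - warm_start)
    # exponential (geometric) decay over the remaining epochs
    t = (epoch - warmup) / (total - warmup - 1)
    return warm_end * (final / warm_end) ** t

lrs = [lr_schedule(e) for e in range(20)]
print(f"{lrs[0]:.0e} {lrs[4]:.0e} {lrs[5]:.0e} {lrs[19]:.0e}")
# 1e-03 5e-03 5e-03 5e-05
```

    A geometric decay keeps the per-epoch ratio constant, which is what "decays exponentially" between two endpoint values implies.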

    Tracking performance is mainly measured by two evaluation criteria: success rate and precision rate. The precision rate is based on the Center Location Error (CLE), i.e., the Euclidean distance between the tracking result of each frame (given by the tracker) and the corresponding ground-truth; it is the ratio of frames whose CLE is less than 20 pixels. The success rate is the ratio of frames whose overlap rate (IoU) between the tracking result and the ground-truth is greater than a specified threshold (usually 0.5).
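    Both per-frame quantities behind these criteria are straightforward to compute from (x, y, w, h) boxes; a minimal sketch:

```python
import numpy as np

def center_error(pred, gt):
    # CLE: Euclidean distance between predicted and ground-truth box centers,
    # boxes given as (x, y, w, h)
    cp = np.array([pred[0] + pred[2] / 2, pred[1] + pred[3] / 2])
    cg = np.array([gt[0] + gt[2] / 2, gt[1] + gt[3] / 2])
    return np.linalg.norm(cp - cg)

def iou(pred, gt):
    # overlap rate between two (x, y, w, h) boxes
    x1 = max(pred[0], gt[0]); y1 = max(pred[1], gt[1])
    x2 = min(pred[0] + pred[2], gt[0] + gt[2])
    y2 = min(pred[1] + pred[3], gt[1] + gt[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter
    return inter / union

pred, gt = (10, 10, 40, 40), (20, 20, 40, 40)
print(center_error(pred, gt))  # ~14.14 -> counts toward precision (< 20 px)
print(iou(pred, gt))           # ~0.39  -> not a success at threshold 0.5
```

    The precision and success rates are then the fractions of frames passing the 20-pixel CLE and the IoU-threshold tests, respectively, averaged over a sequence.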

    4.2.Visualization of classification

    Fig. 6 shows a visualization of different trackers' classification performance, comparing two leading trackers (SiamRPN++ and SiamBAN) with our tracker, TGFAT.

    These images are also called heatmaps, where the temperature represents the prediction confidence of the object class. In other words, the higher the temperature, the stronger the classification ability. It is clear that SiamRPN++ and SiamBAN are challenged by similar objects and fast motion. The tracking object's feature is more salient in the TGFAT similarity response map, which is conducive to separating the object from distractors. Therefore, TGFAT achieves better performance with the help of the FCA module and the TGA module, which filter backbone features and guide the precise localization of tracking targets, respectively. Besides, the ACE loss adaptively enhances the training of hard samples, which solves the data imbalance in UAV scenes without additional computing overhead.

    4.3.Evaluation on UAV datasets

    To verify the effectiveness of the proposed TGFAT, experiments and comparisons are conducted using test sequences from two prestigious UAV benchmarks: UAV12358 and UAVDT.59 During UAV tracking, small objects, similar objects, background clutters, and scale variations often occur, making tracking challenging. Please note that the trackers' results used in this paper are taken from the officially provided datasets, from the authors, and from the test results in the survey.1

    Fig.6 Classification response maps.

    4.3.1.Results on UAV123

    The UAV123 dataset58 consists of videos captured from the shooting angle of UAVs. Specifically, it contains 123 sequences and more than 110 thousand frames in total, with 12 diverse attributes. The video sequences captured by UAVs feature complex backgrounds and large scale variations, putting forward higher requirements for tracking algorithms. In addition, the videos of UAV123 are longer than those of other datasets, which tests long-term tracking ability. To fully demonstrate the overall performance of our tracker, TGFAT is compared with 17 current leading trackers, including CIFT,26 SiamTPN60 (Siamese Transformer Pyramid Networks), LDSTRT61 (Learning Dynamic Spatial-Temporal Regularization Tracker), HiFT62 (Hierarchical Feature Transformer), DAFSiamRPN,39 BASCF63 (Background cues and Aberrances response Suppression mechanism Correlation Filters), AutoTrack,23 GlobalTrack64 (Global Tracker), SiamRPN++,17 LST2 (Least Squares Transformation), ARCF65 (Aberrance Repressed Correlation Filters), DaSiamRPN66 (Distractor-aware SiamRPN), SiamRPN,14 ECO-gpu67 (Efficient Convolution Operators), BACF68 (Background Aware Correlation Filters), C-COT,10 and SiamFC,13 where the UAV trackers are CIFT, SiamTPN, LDSTRT, HiFT, DAFSiamRPN, BASCF, AutoTrack, LST, and ARCF.

    As shown in Table 1, TGFAT shows superior performance compared with other leading trackers: the best success rate (61.7%), the highest precision rate (82.7%), and a speed of 41.3 FPS. These results can be attributed to two major reasons. First, the TGA and FCA modules enhance the tracker's ability to extract target feature information and suppress irrelevant information. Second, the distraction caused by easy samples is alleviated by the ACE loss, which helps the tracker adaptively pay attention to the most discriminative samples. Compared with other UAV trackers such as CIFT and DAFSiamRPN, TGFAT achieves favorable success rate and precision rate at real-time speed (≥ 30 FPS), which means the robustness and speed of TGFAT are competent for UAV object tracking tasks.

    As illustrated in Table 2, TGFAT is compared with the other trackers across a variety of attributes, where background clutters, similar objects, scale variations, fast motion, and full occlusion are common attributes in UAV scenes. The results in Table 2 show that, benefiting from the TGA module, the FCA module, and the ACE loss, TGFAT achieves the best performance on similar objects, scale variations, fast motion, and full occlusion. Moreover, TGFAT is comparable with DaSiamRPN on background clutters, which shows the reliable discrimination and anti-interference ability of TGFAT.

    As displayed in Fig. 7, we select five real-time scenes from the UAV123 dataset to demonstrate the effectiveness of the proposed tracker: bike2, car6_2, car18, group1_2, and wakeboard6. In bike2 and group1_2, trackers such as HiFT, AutoTrack, and SiamRPN++ drift due to the target's small size and similar-object interference, whereas TGFAT still tracks robustly and accurately. In car6_2, the target partly disappears from the field of view, which badly affects the performance of trackers such as SiamFC and HiFT, but TGFAT adapts well to the disappearances and scale variations. The car18 scene mainly features fast target motion, and the results show that our tracker remains effective in this situation, computing the bounding box accurately. The results on wakeboard6 show that TGFAT can robustly recognize and accurately locate the target in the face of fast motion and background clutters.

    Table 1 Experimental results on UAV123. The best two performances are displayed in bold and underline, respectively. Suc. and Pre. mean success rate and precision rate, respectively.

    Table 2 Comparisons of algorithms for the attributes common in UAV scenes. The best two performances are displayed in bold and underline, respectively. Suc. and Pre. mean success rate and precision rate, respectively.

    Fig. 7 Qualitative evaluation on UAV123. The sequences from left to right and top to bottom are bike2, car6_2, car18, group1_2, and wakeboard6.

    In addition, Fig. 8 presents three evaluations in typical UAV scenes, where the IoU between the ground-truth and predicted bounding boxes reflects tracker performance. The main challenges in these sequences are partial occlusion and small object (the first row), low resolution and scale variation (the second row), and camera motion and background clutter (the third row). At frame 175 of sequence bike3, when the tracking target reappears after occlusion, HiFT's discriminative ability is disturbed and it begins tracking the wrong target; after frame 283, HiFT loses the target. However, our TGFAT is not affected by the partial occlusion and small object size, and keeps tracking the target accurately. When occlusions occur in sequence truck2, HiFT is likewise disturbed by background noise and starts to lose the target, leading to subsequent tracking failure. By contrast, TGFAT still tracks the target stably under occlusion and low resolution, which exhibits its reliable classification ability. Background noise and camera motion in sequence wakeboard6 are unique and common challenges in UAV scenes. From frame 484 to frame 807, HiFT handles the camera motion unsatisfactorily; moreover, when HiFT finds the target again, its bounding box contains too much background due to an insufficient ability to distinguish the target from the background. In contrast, TGFAT achieves trustworthy performance under camera motion and background noise. Therefore, we conclude that TGFAT is more competent for aerial object tracking tasks.
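The IoU used in this comparison is the standard intersection-over-union between two axis-aligned boxes. A self-contained sketch, assuming boxes are given in (x, y, w, h) format with (x, y) the top-left corner:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap extent along each axis, clamped at zero when disjoint.
    inter_w = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
    inter_h = max(0.0, min(ay + ah, by + bh) - max(ay, by))
    inter = inter_w * inter_h
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

Identical boxes yield 1.0, disjoint boxes 0.0, and a tracking failure shows up as a run of frames with IoU near zero.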

    4.3.2. Results on UAVDT

    The UAVDT [29] dataset is a large-scale and challenging UAV detection and tracking benchmark that mainly focuses on vehicles. It consists of 100 video sequences and about 8×10⁴ frames, selected from more than 10 h of video shot by UAV platforms at multiple urban locations, representing various common scenes including squares, main roads, highways, and intersections. Frames are manually annotated with bounding boxes and 14 challenging attributes, such as weather changes, flight altitude, vehicle category, and occlusion. Videos are recorded at 30 FPS with an image resolution of 1080×540 pixels and an average of 10.5 targets per frame. The targets in UAVDT show the characteristics of small size, dense distribution, and fast motion, which thoroughly tests the overall performance of these trackers.

    As displayed in Fig. 9, TGFAT is compared with 15 other advanced trackers, including SiamCAR [22] (Siamese fully convolutional Classification and Regression), SiamRPN++ [17], MobileTrack [69], GFSDCF [70] (Group Feature Selection and Discriminative Correlation Filter), MDNet [71] (Multi-Domain convolutional neural Networks), ARCF [65], ARCF-H [65], AutoTrack [23], DSiam [72] (Dynamic Siamese network), ECO-gpu [67], BACF [68], SiamFC [13], SRDCF [7], CSR-DCF [73] (Discriminative Correlation Filter with Channel and Spatial Reliability), and C-COT [10], where AutoTrack, ARCF, ARCF-H, and MobileTrack are UAV trackers. TGFAT has competitive advantages attributed to the TGA module, FCA module, and ACE loss, achieving a success rate of 60.6% and a precision rate of 84.4%. The precision rate of TGFAT ranks first, surpassing the second-best SiamCAR (82.3%) by 2.1% and the third-best SiamRPN++ (82.0%) by 2.4%. Compared with the other trackers, TGFAT achieves better performance in success rate, precision rate, and speed, showing its stable overall ability in the face of complex challenges.

    Fig. 9 Results on UAVDT.

    To intuitively show the performance of our tracker, we use TGFAT for a more qualitative comparison with some of the trackers mentioned above on UAVDT. From Fig. 10, we can see that TGFAT performs better in the face of challenging scenes including small object, occlusion, similar objects, fast motion, and background clutters. For example, the S0103 and S1310 sequences contain serious background clutters and blurred target information; TGFAT successfully distinguishes between background and target, accomplishing better performance than HiFT and SiamRPN++. After the target in the S0601 sequence is fully occluded, some trackers lose the tracking target.

    However, TGFAT can still accurately track the target, which shows its reliable re-tracking ability. In the S0301 and S1310 sequences, the background presents low illumination, low resolution, and many similar targets. Most of the trackers are affected by the blurred target features, but TGFAT still accurately locates the target and adapts to the scale variations. Besides, TGFAT adaptively adjusts the aspect ratio of the target as the camera rotates in the S1702 sequence. It is obvious that TGFAT is better equipped for aerial object tracking tasks than the other trackers.

    4.4. Ablation experiments

    In this section, we implement ablation experiments on the UAV123 and UAVDT datasets to evaluate the contribution of the different components to our TGFAT. Table 3 reports the precision rate (Pre.) and the success rate (Suc.) of the different variations. Please note that the baseline is SiamBAN trained with the same datasets as TGFAT.

    Comparing the first two rows, the FCA module enhances backbone robustness on UAV platforms, raising success and precision to about 61.2% and 80.5% on UAV123, and 60.5% and 81.6% on UAVDT. This shows that TGFAT benefits from the enhanced features provided by the FCA module. Besides, the TGA module is introduced to guide the generation of the search feature, improving success to 61.6% on UAV123 and 60.6% on UAVDT, and precision to 81.1% on UAV123 and 83.0% on UAVDT. Despite these improvements, the data imbalance in the training process remains. When TGFAT is trained with the ACE loss, the precision rates increase (+2.0% and +1.7% on UAV123 and UAVDT, respectively), and the success rates are promoted to 61.7% and 60.6% on UAV123 and UAVDT, respectively. This means the ACE loss can adaptively enhance the training of hard samples, solving the data imbalance in UAV scenes. In summary, each component of TGFAT helps boost performance without introducing a notable computation burden.
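The ACE loss improves over plain cross-entropy by down-weighting easy samples so that hard, discriminative samples dominate the gradient. Its exact formulation is given in the method section of the paper; the sketch below is a generic hard-example-weighted cross-entropy in the same spirit as focal loss, where the modulation exponent `gamma` and the weighting scheme are illustrative assumptions, not the paper's formulation:

```python
import math

def weighted_ce(prob_true_class, gamma=2.0):
    """Cross-entropy modulated by (1 - p)^gamma: well-classified (easy)
    samples with p near 1 contribute almost nothing to the loss, while
    hard samples with small p keep near-full weight.  `gamma` is an
    illustrative choice, not the value used in the paper."""
    p = min(max(prob_true_class, 1e-12), 1.0 - 1e-12)  # clamp for log stability
    return -((1.0 - p) ** gamma) * math.log(p)
```

Averaged over a batch dominated by easy negatives, this kind of modulation keeps the classifier's training signal focused on the ambiguous samples that actually decide the target/background boundary.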

    4.5. Real-world tests

    To verify the tracking performance of TGFAT in real UAV scenes, TGFAT is applied to UAV aerial videos for evaluation, with a resolution of 3840 × 2160 pixels, a frame rate of 30 FPS, and a shooting height of 120 m. Some tracking results are shown in Fig. 11. The tracking targets are people and vehicles on the road. The small size, fast motion, limited feature information, and background clutter of the targets fail to affect the tracking performance of TGFAT. It can be seen that TGFAT has excellent small-target recognition and anti-interference ability in practical applications.

    Fig. 10 Qualitative evaluation on UAVDT. The sequences from left to right and top to bottom are S0103, S0601, S0301, S1310, and S1702.

    Table 3 Ablation experiments of TGFAT. Suc. and Pre. mean success rate and precision rate, respectively.

    Fig. 11 Tracking results of TGFAT in real UAV videos.

    5.Conclusions

    In this work, a novel Siamese tracker, referred to as TGFAT, is proposed for accurate and efficient aerial tracking. First, we introduce the FCA module into the backbone network to make full use of frequency feature information and channel correlation, adaptively suppressing distractor information and enhancing the target features. Next, the TGA module is applied to guide the search feature using the template feature in a cross-branch manner, strengthening the tracker's classification capability. Finally, to avoid the tracker being misled by easy samples, we employ the ACE loss to emphasize the learning of hard samples, adaptively solving the data imbalance common in UAV scenes. During the experiment stage, we evaluate TGFAT against several state-of-the-art trackers on two benchmarks, UAV123 and UAVDT. Besides, the practical application ability of TGFAT is also verified in real-world scenes. Abundant experiments show that TGFAT performs favorably against these advanced works while maintaining a real-time speed (~41.3 FPS).

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

    This study was co-supported by the National Natural Science Foundation of China (Nos. 61673017 and 61403398).
