
    A Novel Tensor Decomposition-Based Efficient Detector for Low-Altitude Aerial Objects With Knowledge Distillation Scheme

IEEE/CAA Journal of Automatica Sinica, 2024, Issue 2

Nianyin Zeng, Xinyu Li, Peishu Wu, Han Li, and Xin Luo

Abstract—Unmanned aerial vehicles (UAVs) have gained significant attention in practical applications, and low-altitude aerial (LAA) object detection in particular imposes stringent requirements on recognition accuracy and computational resources. In this paper, the LAA images-oriented tensor decomposition and knowledge distillation-based network (TDKD-Net) is proposed, where the TT-format TD (tensor decomposition) and equal-weighted response-based KD (knowledge distillation) methods are designed to minimize redundant parameters while ensuring comparable performance. Moreover, some robust network structures are developed, including the small object detection head and the dual-domain attention mechanism, which enable the model to leverage the learned knowledge from small-scale targets and selectively focus on salient features. Considering the imbalance of bounding box regression samples and the inaccuracy of regression geometric factors, the focal and efficient IoU (intersection over union) loss with optimal transport assignment (F-EIoU-OTA) mechanism is proposed to improve the detection accuracy. The proposed TDKD-Net is comprehensively evaluated through extensive experiments, and the results demonstrate the effectiveness and superiority of the developed methods in comparison to other advanced detection algorithms, as well as high generalization and strong robustness. As a resource-efficient and precise network, TDKD-Net also handles the complex detection of small and occluded LAA objects well, and provides useful insights on handling imbalanced issues and realizing domain adaptation.

I. INTRODUCTION

UNDER the impetus of computer vision, especially object detection, unmanned aerial vehicles (UAVs) have been endowed with the ability to perceive, analyze, and make decisions, which enables efficient and flexible collection of images as well as accurate and rapid target recognition and localization [1], [2]. In particular, object detection from the perspective of UAVs holds great potential in various domains such as intelligent transportation systems, smart city construction, and major disaster relief [3]-[5]. For instance, in the event of natural disasters such as earthquakes or floods, UAVs can fly freely in the air without being constrained by ground transportation, quickly arriving at the disaster site to collect wide-angle images and conducting intelligent analysis based on deep learning algorithms, so as to provide effective support for emergency rescue and management.

Images captured by UAVs are typically low-altitude aerial (LAA) images. In LAA images, due to the jitter and rotation of UAVs, variations in target illumination and scale are conspicuous, coupled with uneven spatial distribution, a prevalence of small-sized and densely clustered objects, and blurriness of the obtained samples. Simultaneously, the hardware resources of UAVs are significantly constrained, rendering them inadequate for accommodating the computational demands of large-scale models. Hence, despite the remarkable advancements in general machine vision technologies, object detection methodologies specifically targeting LAA images have not yet been explored in sufficient depth, and many performance bottlenecks remain, such as poor recognition accuracy, low positioning precision, and high latency.

For example, the two-stage algorithm Faster R-CNN (region convolutional neural network) [6] effectively incorporates contextual knowledge to enhance feature representation, but its large parameter size results in low recognition speed, making Faster R-CNN fail to meet the lightweight deployment requirements of LAA image detection tasks. The well-known YOLO (you only look once) series algorithms [7]-[9], based on a one-stage framework, leverage a single neural network for object classification and bounding box regression simultaneously to enhance detection efficiency, which sacrifices localization accuracy and may present unsatisfactory recognition performance for small-sized targets in UAV-based tasks. Furthermore, RefineDet [10] inherits the advantages of both one-stage and two-stage detectors by designing an anchor refinement module to filter negative anchors and coarsely adjust their position and size, followed by a second regression and multi-class classification on the refined anchors based on the object detection module. However, RefineDet exhibits unsatisfactory results for small and dense objects in LAA images due to unreasonable receptive field settings.

In recent years, some efforts have been carried out to improve the recognition accuracy and computational efficiency of models for UAV-based target detection. However, existing LAA image detection methods exhibit certain limitations. Some approaches focus on enhancing prediction accuracy without considering deployment practicality for UAVs, while others utilize fewer parameters but fail to demonstrate effective performance on LAA-based targets. In particular, inadequate attention is given to addressing the performance trade-off and sample imbalance problems in complex scenes. For instance, EdgeYOLO [11] incorporates a data augmentation technique and a hybrid random loss function to address over-fitting issues, but EdgeYOLO might not adequately address the issue of imbalanced scenes in LAA images, potentially resulting in biased detection that overlooks certain classes of objects. Similarly, the parallel residual bi-fusion network (PRBNet) [12] combines top-down and bottom-up paths for bi-directional feature fusion, thereby retaining high-quality features and enhancing LAA image target localization accuracy, but its large computational complexity presents challenges in practical deployment. Furthermore, there are some innovative studies focusing on trustworthy UAV-based models and intelligent systems, such as scenarios engineering and synthesis methods [13]-[15], which provide robust and dependable solutions for enhancing situational awareness, optimizing resource allocation, and increasing overall performance.

To tackle the aforementioned challenges, this study addresses the dual objective of attaining both accuracy and computational efficiency in LAA-based detection models. Therefore, we propose a novel framework, termed TDKD-Net, which leverages tensor decomposition (TD) and knowledge distillation (KD) techniques. TDKD-Net (TD and KD-based network) is built upon the widely adopted YOLOv7 network [16], aiming to optimize both accuracy and lightweight characteristics in LAA image object detection tasks. On one hand, in terms of improving detection precision, firstly, the performance bottleneck on tiny-sized targets is alleviated by designing a small object detection head (SODH), which leverages higher-resolution feature maps and a smaller receptive field to extract fine-grained details from targets. Secondly, to handle the issues arising from uneven spatial distribution and blurred targets, a dual-domain attention (DA) mechanism is introduced, which is integrated into the efficient layer aggregation network (ELAN) to propose the DA-ELAN. To be specific, in the spatial domain, an adaptive convolution kernel is employed to generate feature responses from diverse positions, which enables the ELAN to effectively address the uneven spatial distribution of targets in both sparse and dense areas; in the channel domain, a self-attention mechanism is adopted to learn the inter-channel correlations, facilitating the extraction of more comprehensive and resilient feature representations, which significantly boosts the complementarity between features and contributes to the overall robustness of the model. Furthermore, in order to tackle the difficulties posed by inaccurate bounding box regression (BBR) and imbalanced positive and negative samples, the improved focal and efficient IoU (intersection over union) loss with optimal transport assignment (F-EIoU-OTA) is developed. The F-EIoU-OTA mechanism explicitly measures the differences of three geometric factors in BBR: overlap area, center point, and bounding box side length.

On the other hand, a thorough analysis of the redundancy in the ELAN structure is conducted, whose multiple stacked layers redundantly extract similar or identical features. To effectively handle the conflict between the high-resolution property of LAA images and the limited computational resources of embedded devices, the ideas of tensor decomposition (TD) and knowledge distillation (KD) are adopted to compress the model at both the parameter and structure levels, which facilitates reducing model redundancy and improving detection efficiency. The TD-ELAN reduces the complexity of the model while preserving important information, which is particularly beneficial for UAV-based resource-constrained environments and real-time applications. Moreover, by leveraging the knowledge learned from a larger and more complex model, the designed TDKD-Net gains insights into challenging patterns and generalizes better on unseen data, leading to enhanced performance in various scenarios. Furthermore, ELAN, combined with TD and KD, promotes the extraction of more robust feature representations. The fusion of knowledge from a larger model enriches feature learning, enhancing the model's capability to recognize complex patterns and improve object detection accuracy.

    The major contributions of this paper are outlined as follows:

    1) A novel LAA target detection framework TDKD-Net is proposed, which can effectively balance the detection accuracy and model size.

2) Based on the ELAN, dual-domain attention and tensor decomposition modules are developed. Through the channel and spatial attention mechanisms, TDKD-Net can extract robust feature representations at a slight cost in additional parameters.

3) The F-EIoU-OTA method is proposed to resolve the performance bottleneck caused by the imbalance of bounding box regression samples and inaccurate regression geometric factors, thereby improving the detection accuracy.

    4) An equal-weighted response knowledge distillation method is proposed, which uses the output response of the large model to improve the generalization ability of the small model and compensate for the loss of accuracy caused by tensor decomposition.

The remainder of this paper is organized as follows. In Section II, related representative work is reviewed, and the proposed TDKD-Net is introduced in Section III. Experimental results and discussions are presented in Section IV, and finally, conclusions are drawn in Section V.

II. RELATED WORK

In this section, relevant LAA object detection algorithms are reviewed. Due to the performance bottlenecks caused by the unique nature of images captured from UAVs, methods for improving detection accuracy and compressing model size are briefly reviewed as well.

    A. Low-Altitude Aerial Object Detection Algorithms

In the field of computer vision (CV), object detection methods based on deep learning have achieved great success in natural scenes. The success of CV techniques can be drawn upon and referenced for UAV-based detection tasks, for which many detection algorithms specifically designed for LAA images have been proposed. For example, in [17], a UAV object detection method, ComNet, based on thermal images is proposed for pedestrian and vehicle targets under different lighting conditions in both daytime and nighttime. ComNet employs a boundary-aware salient object detection network to extract maps of thermal images, and enhances thermal images using corresponding saliency maps through channel replacement and pixel-wise weighted fusion, which achieves a trade-off between average precision and inference time. In [18], a feature fusion module has been proposed to enhance the expression capability of small objects by facilitating the interaction and fusion of features across multiple levels. Additionally, to address the problem of discontinuous information in occluded objects, an efficient convolutional transformer block with a multi-head self-attention mechanism has been introduced. To address the problems that object scales change dramatically and that high-speed motion brings object blur, [19] adds a prediction head to detect targets at different scales and replaces the original one with a transformer structure, which explores the detection potential for complex environments with the self-attention mechanism. Inspired by trident networks, [20] presents an improved ResNet module that utilizes dilated convolutions to effectively capture contextual information, particularly for small-sized objects. By incorporating this module, the ResNet-based model becomes robust to scale variations in LAA objects. Moreover, in [21], small object detection accuracy is enhanced by incorporating feature maps from a shallow layer that contains fine-grained information for precise location prediction. This improvement is achieved by fusing local and global features from both shallow and deep feature maps within the pyramid network, resulting in an enhanced ability to extract more representative features, and the proposed methods have shown impressive performance and interpretability for LAA images.

In this work, YOLOv7 is used as the baseline detection framework of the proposed TDKD-Net, which incorporates strategies such as ELAN [22], coarse-to-fine guided label assignment, and planned re-parameterized [23] convolutions. It is worth noting that, in addition to designing effective feature integration methods and accurate detection strategies for the network architecture, YOLOv7 also places great emphasis on optimizing the model training process. Specifically, it discusses optimization modules that can improve accuracy without increasing the inference cost. Therefore, the improvement of YOLOv7 in model size and detection accuracy is significant.

    B. Methods for Performance Improvement and Model Compression

Generic target detection algorithms are unable to overcome the performance bottleneck caused by remote shots, background occlusion, and tiny target sizes in the UAV view, because their feature extraction and information abstraction structures are designed for natural scene images. Meanwhile, most deep models are computationally and memory-intensive, which makes them difficult to deploy in embedded systems. Therefore, in response to performance improvement and model compression issues, many improved approaches for LAA-based tasks have recently been proposed.

On one hand, the improvement of target detection performance from the perspective of drones mainly includes multi-scale feature fusion, regional focusing strategies, and loss function optimization. For example, in view of the insufficient visual information for small objects, a novel enhanced multi-scale feature fusion method is proposed in [24], where rich receptive field information combined with contextual features is fully exploited. In addition, considering the issue of imbalanced positive and negative samples, [25] adopts the VariFocal loss to address the detection of targets that require heightened attention, which selectively reduces the loss contribution of negative samples without uniformly decreasing the weight of positive ones in the same manner.

On the other hand, in order to meet the deployment requirements of embedded or edge devices and achieve real-time applications, some lightweight methods have been applied to LAA image analysis. For instance, to enable inference on resource-constrained edge devices, [26] leverages the lightweight MobileNet V3 as a replacement for the original YOLOv4 backbone, resulting in a significant reduction in parameters. Meanwhile, [27] introduces network sparsity by incorporating L1 regularization on the convolutional layers and implements channel or layer pruning techniques to eliminate redundant structures. Furthermore, facing the real-time detection of embedded systems using UAVs, [28] develops an improved CNN model by employing the KD scheme on pruned models.

In this work, for the purpose of achieving a compromise between recognition precision and computational resources in the context of network structure optimization, the ELAN combined with the DA mechanism, and a new object detection head for small-sized targets, are constructed. At the same time, an F-EIoU-OTA mechanism is proposed to resolve the performance bottleneck caused by the imbalance of bounding box regression samples and inaccurate regression geometric factors. Moreover, TD and KD techniques are applied to reduce network structure redundancy and compress model size. Specifically, CNNs can predict results with only a small number of parameters, which indicates that there is a large amount of redundant information in the convolutional kernels. The idea of TD is to decompose the original CNN tensors into several low-rank ones, which is beneficial for reducing the number of convolution operations and accelerating the computation process; the mainstream TD approaches include CANDECOMP/PARAFAC (CP), Block-Term Tucker (BTT), Tucker-2, Tensor Train (TT), Tensor Ring (TR) [29]-[32] and typical non-negative matrix factorization methods [33]-[35]. Furthermore, KD is a parameter optimization and model compression method based on transfer learning [36], which accomplishes model compression and acceleration by transferring the relevant domain knowledge of a teacher network to a student network and guiding the training of the latter. Generally, teacher networks are often complex structures with strong generalization abilities, while student networks are small-sized models, and the performance of student networks can be significantly improved under controlled parameter quantities through the KD scheme. The mainstream KD methods can be divided into three types: response-based, feature-based, and relation-based KD frameworks [37]-[39].

Fig. 1. Framework of the proposed tensor decomposition and knowledge distillation-based network (TDKD-Net).

III. PROPOSED METHOD

In this section, the proposed TDKD-Net is elaborated with implementation details, including the design principles of DA-ELAN and TD-ELAN, as well as the developed equal-weighted response-based KD scheme and the F-EIoU-OTA mechanism.

    A. Overall Framework of TDKD-Net

To begin with, the overall framework of TDKD-Net is illustrated in Fig. 1. First, the resolution of all LAA images is adjusted to 640 × 640 with data augmentation operations such as geometric transformation, color adjustment, noise addition, and morphological operations, and then the input images are down-sampled by 4 times through the Conv_1 to Conv_4 operations, where each operation consists of a 3 × 3 convolution with batch normalization and SiLU activation. Afterwards, four designed TD-ELAN modules are interspersed in the down-sampling operations of Conv_5 to Conv_7, which are responsible for obtaining rich gradient flow information and continuously enhancing contextual learning capabilities. It is worth noting that the convolution operation combined with tensor decomposition is embedded in TD-ELAN, which can effectively reduce redundant computational parameters. Furthermore, the spatial pyramid pooling-based cross stage partial convolution (SPPCSPC) module avoids the dilemma of data distortion caused by cutting and scaling the image region, and further solves the problem of repeated feature extraction with less computational cost.
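As a concrete illustration, a minimal PyTorch sketch of such a convolution block is given below; the channel sizes and strides are illustrative assumptions rather than the exact TDKD-Net configuration.

```python
import torch
import torch.nn as nn

class ConvBNSiLU(nn.Module):
    """3 x 3 convolution followed by batch normalization and SiLU,
    matching the Conv_1-Conv_4 pattern described in the text."""
    def __init__(self, c_in, c_out, stride=2):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, stride=stride,
                              padding=1, bias=False)  # BN makes a bias redundant
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

# Example: a 640 x 640 input halved spatially by one stride-2 block.
x = torch.randn(1, 3, 640, 640)
y = ConvBNSiLU(3, 32)(x)   # -> (1, 32, 320, 320)
```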

Next, the multi-scale feature maps are fused, and in the following Conv_8 to Conv_11 operations, six successive DA-ELAN modules are applied to feature maps at different scales, where the dual-domain attention mechanism is used for modeling target locations and category characteristics. At last, four prediction heads are designed to recognize and locate objects at a multi-scale level, covering from tiny to large targets. Noticeably, re-parameterized convolution (RepConv) is employed prior to the prediction head for channel adjustment, while simultaneously expediting the inference process through equivalent structure fusion. RepConv [23], as an efficient convolutional module, reinforces feature maps by replicating input or output within the channel dimension, without introducing additional parameters or computations. Substituting traditional convolutional layers with RepConv significantly enhances the detection performance for LAA-based objects.

Fig. 2. The structure of DA-ELAN (left) and the mechanism of dual-domain attention (right).

    B. Dual-Domain Attention-Efficient Layer Aggregation Network

In order to improve the target focusing ability in the multi-scale fusion process, a dual-domain attention mechanism based on the spatial and channel domains is embedded in ELAN to develop DA-ELAN. In the proposed DA mechanism, the spatial attention focuses on selectively attending to specific spatial locations, assigning higher weights to regions that are deemed more important for the detection task at hand, which effectively concentrates computational resources on informative regions. Moreover, the channel attention aims to highlight critical feature maps by distributing distinct weights to each channel, where informative channels are amplified and unimportant ones are suppressed, so that the model can enhance the discriminative power of feature representations. As shown on the left of Fig. 2, the dual-domain attention mechanism is introduced into the ELAN structure. By combining the two types of attention mechanisms, the proposed DA-ELAN can exploit both spatial and channel dependencies in LAA images, so that context-aware focused information is efficiently utilized.

    As shown in Fig.2, the DA mechanism inspired by [40] is further developed with the ELAN structure.In the DA module, the input featureFis firstly executed channel attention operations, where the max and average poolings are used to aggregate and refine target features.Subsequently, it is imperative to highlight that substituting the multilayer perceptron with a duo of 1 × 1 convolution operations engenders several noteworthy advantages.This entails a marked reduction in both parameter quantity and computational workload, while simultaneously preserving the spatial structure and local correlation of the feature map.Consequently, this leads to an augmented spatial information capacity, thereby elevating the network’s efficiency and generalization ability.Moreover, the Add operation and Sigmoid activation synergistically facilitate the generation of channel domain attention.Next, the channel weight distribution of different pixel regions is realized by element-wise multiplication, and spatial attention operations are further carried out.Compression is performed using max and average poolings over each channel, and 1 × 1 convolution is applied for parameter learning.Similarly, after the Sigmoid activation, the spatial domain weight assignment of different pixel regions is finally achieved by element-wise multiplication and obtain the refined featureF′.The working principle of the DA module is expressed as follows:
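In a CBAM-style form consistent with the above description (the exact formulation follows [40]; the arrangement of the pooled branches here is an assumption),

F′_c = σ(C2(α(C1(AP(F)))) + C2(α(C1(MP(F))))) ⊗ F    (1)

F′ = σ(C3([AP(F′_c); MP(F′_c)])) ⊗ F′_c

in which ⊗ denotes element-wise multiplication and [ · ; · ] denotes channel-wise concatenation.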

where α and σ denote the ReLU and Sigmoid functions, respectively; AP and MP represent average pooling and max pooling, respectively; and C1, C2, and C3 refer to the convolution operations. F is the original input feature map, F′_c denotes the feature map after channel attention, and, combined with spatial attention, the final output F′ is obtained.
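For illustration, the following is a minimal PyTorch sketch of such a dual-domain attention module; the reduction ratio and exact layer shapes are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DualDomainAttention(nn.Module):
    """Sketch of the DA module: channel attention via pooled descriptors and
    a pair of shared 1 x 1 convolutions (C1, C2), then spatial attention via
    per-channel pooling and a 1 x 1 convolution (C3)."""
    def __init__(self, channels, reduction=16):  # reduction ratio is assumed
        super().__init__()
        self.c1 = nn.Conv2d(channels, channels // reduction, 1)
        self.relu = nn.ReLU()
        self.c2 = nn.Conv2d(channels // reduction, channels, 1)
        self.c3 = nn.Conv2d(2, 1, 1)  # mixes the two pooled spatial maps
        self.sigmoid = nn.Sigmoid()

    def forward(self, f):
        # Channel attention: add responses of average- and max-pooled features.
        avg = self.c2(self.relu(self.c1(f.mean(dim=(2, 3), keepdim=True))))
        mx = self.c2(self.relu(self.c1(f.amax(dim=(2, 3), keepdim=True))))
        f_c = self.sigmoid(avg + mx) * f
        # Spatial attention: pool over channels, then 1 x 1 conv + sigmoid.
        pooled = torch.cat([f_c.mean(dim=1, keepdim=True),
                            f_c.amax(dim=1, keepdim=True)], dim=1)
        return self.sigmoid(self.c3(pooled)) * f_c
```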

    C. Tensor Decomposition-Efficient Layer Aggregation Network

The original ELAN [22] considers the shortest and longest gradient paths of each layer and the longest gradient path of the whole network, where the transition layer is appropriately removed to alleviate the performance degradation caused by model scaling. Through observation, it is evident that the ELAN architecture employs an extensive array of 1 × 1 convolutions for channel transformation or dimension reduction. Moreover, it integrates numerous parallel convolutional branches to bolster the expressive capacity of features. While these convolutional branches contribute to heightened feature diversity and detailed information, they also incur an escalation in parameters and computations, potentially introducing redundant and noisy information.

Fig. 3. The structure of TD-ELAN (left) and TT-format TD principles for convolutions (right).

Therefore, in order to minimize the redundancy of convolutional operations, TD is introduced into the ELAN structure. Among TD principles, the Tucker and CP decompositions are the most well-known methods; in comparison to these two conventional approaches, the TT decomposition possesses stronger inherent low-rank properties and provides a more accurate information representation, so the TD-ELAN structure is proposed, as illustrated in Fig. 3.

Due to its balanced unfolding matrices, the TT decomposition can make more effective use of the information contained in the original tensors; it decomposes an N-th order tensor into a contraction form of N−1 second-order or third-order tensors. The TT decomposition scheme [41] is performed by the following formula to represent the operation mode of the convolution kernel:
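In standard form, consistent with the definitions below, this reads

F′(x, y, c′) = Σᵢ Σⱼ Σ_c W(i, j, c, c′) F(x + i − 1, y + j − 1, c) + B(c′)    (2)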

where F and F′ refer to the original feature map and the one refined by the convolution W, respectively, and B is the bias.

    For ease of expression, we rewrite (2) as follows:

    To simplify the notation, the formula of the TT decomposition for the convolutional kernel can be expressed as
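the canonical TT factorization of each kernel entry into a product of small core matrices,

W(i_1, i_2, …, i_N) = G_1[i_1] G_2[i_2] ⋯ G_N[i_N]

where each G_k[i_k] is an r_{k−1} × r_k matrix (a slice of the k-th TT core) and the boundary TT-ranks satisfy r_0 = r_N = 1, so that the matrix product collapses to a scalar.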

As shown in the blue box of Fig. 3, the specific convolutional layer in the original ELAN structure can be represented in the following form:
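Although the exact decomposed layer form depends on the chosen kernel reshaping, the essential low-rank idea can be sketched in PyTorch as a chain of small factors replacing one dense kernel (a Tucker-2-style channel factorization standing in for the full TT-format scheme; the ranks r1 and r2 are illustrative assumptions):

```python
import torch.nn as nn

def lowrank_conv(c_in, c_out, k=3, r1=6, r2=6):
    """Approximate a dense k x k convolution (c_in -> c_out) with a chain of
    small factors: 1 x 1 rank reduction, k x k core, 1 x 1 rank expansion.
    The parameter count drops from c_in*c_out*k*k to
    c_in*r1 + r1*r2*k*k + r2*c_out."""
    return nn.Sequential(
        nn.Conv2d(c_in, r1, kernel_size=1, bias=False),               # compress channels
        nn.Conv2d(r1, r2, kernel_size=k, padding=k // 2, bias=False), # small spatial core
        nn.Conv2d(r2, c_out, kernel_size=1, bias=True),               # restore channels
    )
```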

    D. Equal-Weighted Response-Based Knowledge Distillation

Although the TT decomposition reduces the redundant parameters of convolutional operations through low-rank approximation, it inevitably incurs a bottleneck of precision degradation. As a result, the equal-weighted response-based KD method is further applied to the proposed TDKD-Net, in which the knowledge transfer process from a teacher model to a student model is based on equal weights assigned to the responses generated by both models. The aim of KD is to encourage the student model to learn knowledge from the teacher while maintaining a balanced consideration of its own predictions, leading to improved performance and generalization capabilities. The distillation scheme is illustrated in Fig. 4. In the designed KD scheme, an enhanced TDKD-Net with increased input resolution or more layer channels is used as the teacher network for pre-training. Subsequently, a distillation process is initiated, computing the distillation loss between the teacher network and the student model, which facilitates the optimization of the training parameters of the student model.

Fig. 4. Equal-weighted response-based distillation framework.

Subsequently, the principles of equal-weighted response-based KD are analyzed through the training loss functions. The loss function Loss_stu of the student model TDKD-Net is the weighted sum of the classification loss, confidence loss, and bounding box loss, which can be expressed as follows:
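In generic form (the weighting coefficients λ below are placeholders, as the exact values are not reproduced here),

Loss_stu = λ_cls · L_cls + λ_obj · L_obj + λ_box · L_box

where L_cls, L_obj, and L_box denote the classification, confidence (objectness), and bounding box losses, respectively.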

Furthermore, we observe that the original OTA (optimal transport assignment) mechanism [42] employs the relative proportion of width and height within the CIoU loss function, rather than their absolute values. Consequently, when the prediction box's width and height meet specific conditions, the additional penalty term related to the relative proportion becomes ineffective. This situation hinders simultaneous increment or decrement of both width and height, thereby hampering synchronized optimization. To enhance the precision of object identification and localization, the EIoU loss [43] is adopted for accurate bounding box regression. The EIoU loss explicitly quantifies discrepancies in three geometric factors of BBR, namely the overlap area, center point, and side length, which are defined as follows:
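Following the definition in [43],

L_EIoU = L_IoU + L_dis + L_asp = 1 − IoU + ρ²(b, b^gt)/(w_c² + h_c²) + ρ²(w, w^gt)/w_c² + ρ²(h, h^gt)/h_c²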

where L_EIoU mainly contains the IoU loss L_IoU, the distance loss L_dis, and the aspect loss L_asp; w_c and h_c are the width and height of the smallest enclosing box covering the two boxes, respectively. ρ²(b, b^gt) represents the center distance, and ρ²(w, w^gt), ρ²(h, h^gt) denote the width and height differences, respectively.

It is noticeable that the problem of imbalanced training examples always exists in BBR; that is, due to the sparsity of target objects in the image, the number of high-quality examples (anchors) with small regression error is much less than that of low-quality examples (outliers). Outliers produce gradients that are too large and harmful to the training process. Therefore, it is crucial to make high-quality examples contribute more gradients to the network training process, and the focal mechanism [43] is introduced to enhance the contribution of high-quality anchors with large IoU in BBR model optimization, while suppressing irrelevant anchors. The focal loss introduces a focusing parameter to re-weight the contribution of each sample during training, thereby amplifying the importance of hard-to-classify samples and enhancing the model's performance on minority classes. As part of the bounding box loss in (8), the F-EIoU loss is defined as follows:
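Again following [43], the focal re-weighting is applied multiplicatively:

L_F-EIoU = IoU^γ · L_EIoU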

where IoU = |A ∩ B| / |A ∪ B| and γ is a parameter to control the degree of inhibition of outliers.
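For illustration, a minimal PyTorch sketch of the F-EIoU loss for axis-aligned boxes follows; it mirrors the formulas above but is not the authors' implementation (detaching the focal weight is a common design choice and an assumption here):

```python
import torch

def focal_eiou_loss(pred, target, gamma=0.5, eps=1e-7):
    """Focal-EIoU loss for boxes in (x1, y1, x2, y2) format, shape (N, 4).
    gamma controls how strongly low-IoU (outlier) anchors are suppressed."""
    # Intersection and union areas.
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    # Smallest enclosing box (width w_c, height h_c).
    c_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    cw, ch = c_wh[:, 0].clamp(min=eps), c_wh[:, 1].clamp(min=eps)
    # Center distance and width/height differences.
    pc = (pred[:, :2] + pred[:, 2:]) / 2
    tc = (target[:, :2] + target[:, 2:]) / 2
    rho2_center = ((pc - tc) ** 2).sum(dim=1)
    rho2_w = ((pred[:, 2] - pred[:, 0]) - (target[:, 2] - target[:, 0])) ** 2
    rho2_h = ((pred[:, 3] - pred[:, 1]) - (target[:, 3] - target[:, 1])) ** 2
    eiou = 1 - iou + rho2_center / (cw**2 + ch**2) + rho2_w / cw**2 + rho2_h / ch**2
    # Focal re-weighting: high-IoU (high-quality) anchors contribute more.
    return (iou.detach() ** gamma * eiou).mean()
```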

In order to prevent the student model from mislearning the background predictions of the teacher model, the objectness scaling strategy [44] is further applied in the above distillation process, in which the student model TDKD-Net employs the distillation mechanism only when encountering high-confidence outputs from the teacher model. Inspired by the notion of mutual learning, we believe that teacher knowledge is equally important as student information; therefore, equal weights are utilized to optimize the student model. In addition, the temperature coefficient T is introduced to control comparable gradient contributions from soft and hard targets [37], and the distillation loss, covering the classification, confidence, and bounding box aspects, can be expressed as follows:
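The classification component of such a response-based distillation loss, together with objectness scaling and temperature softening, can be sketched in PyTorch as follows; the confidence threshold and the treatment of the teacher objectness as a logit are assumptions:

```python
import torch
import torch.nn.functional as F

def response_kd_loss(student_cls, teacher_cls, teacher_obj,
                     temperature=20.0, obj_thresh=0.5):
    """Sketch of equal-weighted response-based distillation on class logits.
    teacher_obj gates distillation (objectness scaling): only predictions the
    teacher is confident about are imitated."""
    t = temperature
    # Temperature-softened distributions (soft targets).
    soft_teacher = F.softmax(teacher_cls / t, dim=-1)
    log_soft_student = F.log_softmax(student_cls / t, dim=-1)
    # KL divergence per prediction, scaled by t^2 so gradient magnitudes stay
    # comparable to the hard-target loss (Hinton et al. [37]).
    kl = F.kl_div(log_soft_student, soft_teacher, reduction='none').sum(-1) * t * t
    # Objectness scaling: mask out low-confidence teacher outputs.
    mask = (teacher_obj.sigmoid() > obj_thresh).float()
    return (kl * mask).sum() / mask.sum().clamp(min=1.0)

# Equal weighting: the total objective adds the distillation term and the
# hard-target student loss with the same weight, e.g. loss = loss_stu + loss_kd.
```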

IV. RESULTS AND DISCUSSIONS

In this section, the proposed TDKD-Net is comprehensively evaluated on the VisDrone [45], SeaDronesSee [46], UAVOD10 [47], and COCO2017 [48] datasets. Furthermore, substantial comparison experiments and ablation studies have been carried out to validate the effectiveness and superiority of the developed methods. First, the experimental datasets and environment are briefly introduced.

    TABLE I COMPARISON RESULTS BETWEEN YOLOV7 AND TDKD-NET IN VISDRONE, SEADRONESSEE, COCO2017 AND UAVOD10 VALIDATION SETS

    A. Dataset and Experimental Settings

In order to facilitate an equitable comparison of the suggested enhancements, we conducted all experiments using the PyTorch deep learning framework with Python 3.8, and trained from scratch on a single NVIDIA 3090Ti GPU. Throughout all experimental configurations, we maintained uniformity in the input image size, data augmentation approach, learning rate, and batch size.

The evaluation encompasses four datasets: VisDrone-2023, SeaDronesSee-v2, UAVOD10, and COCO2017. Specifically, VisDrone-2023 serves as the principal dataset and constitutes a large-scale UAV aerial image benchmark. It encompasses 6471 training images (1.44 GB) and 548 validation images (0.07 GB) with 2.6 million annotations across 10 object categories, including pedestrians, bicycles, and cars. The majority of objects are notably small, with 74.7% measuring below 32 × 32 pixels. On average, each image contains 61 objects, with certain images containing over 900 objects, thereby presenting significant complexity and computational challenges for detection algorithms. The prevalent object categories are pedestrians (29.4%), people (10.2%), and cars (23.5%), which hold paramount importance in UAV detection scenarios. Although other categories are relatively infrequent, they remain representative within the dataset.

The remaining datasets serve as supplementary resources for performance validation. SeaDronesSee-v2 is tailored for UAV-based search and rescue operations in oceanic scenarios, encompassing five categories: swimmer, boat, jet-ski, life-saving appliances, and buoy. We employ a compressed version of a subset of SeaDronesSee-v2, comprising 1082 training images and 464 validation images, randomly sampled from the original dataset. UAVOD10, comprising 10 UAV target detection categories, including building, pool, vehicle, and so on, is divided randomly into 590 training images and 254 testing images. Lastly, COCO2017, a large-scale dataset and one of the most popular object detection benchmarks, comprises 80 categories, with 118 000 training images and 5000 testing images.

Furthermore, preprocessing is performed on the above datasets before training the model, where the input images are first resized to 640 × 640, and other data augmentation operations are only performed on the training samples, including HSV augmentation, translation, scale, flip, mosaic, mixup, and copy-paste operations for LAA images. Online data augmentation is adopted, which allows the diversity of data fed into network training to be enriched without actually increasing the local training images. Regarding the VisDrone, SeaDronesSee, UAVOD10, and COCO2017 datasets, we have established batch sizes of 8, 12, 8, and 8 for training the models, respectively. The number of training epochs is set to 500, 800, 1000, and 120, correspondingly. For TDKD-Net, the initial learning rate is configured to 0.01, employing the OneCycleLR policy with a maximum value of 0.1, while stochastic gradient descent is utilized as the optimizer. Meanwhile, the temperature factor T for equal-weighted response-based KD is set to 20. The learning rates for other models remain unchanged from their original settings.
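A minimal sketch of this optimization setup in PyTorch (momentum, weight decay, and the step count are assumptions; div_factor=10 makes OneCycleLR start at 0.1/10 = 0.01, matching the reported initial learning rate):

```python
import torch

model = torch.nn.Conv2d(3, 16, 3)           # placeholder for TDKD-Net
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.937, weight_decay=5e-4)
epochs, steps_per_epoch = 500, 809           # e.g., 6471 VisDrone images / batch 8
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=0.1, div_factor=10,
    epochs=epochs, steps_per_epoch=steps_per_epoch)

for epoch in range(epochs):
    for step in range(steps_per_epoch):
        optimizer.step()                     # after loss.backward() in practice
        scheduler.step()                     # one scheduler step per batch
```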

    B. Performance Evaluation

To completely evaluate the performance of the proposed TDKD-Net, three groups of experiments are carried out, which aim at verifying the generalization ability, the superiority against other typical CNN models, and the competitiveness in comparison to state-of-the-art UAV-based object detection algorithms, respectively. The metrics Precision, Recall, mAP50, mAP50:95, Params, GFLOPs, FPS, and Training-time are adopted for performance evaluation, where AP is the abbreviation of average precision. Specifically, Precision gauges the model's accuracy in identifying true positive instances among all positive predictions, while Recall indicates its ability to capture all relevant positive samples within the dataset. mAP50 represents AP over IoU at 0.5, and mAP50:95 represents AP over IoU at [0.5:0.95:0.05] (from 0.5 to 0.95 with an interval of 0.05). The remaining metrics are as follows: Params denotes the number of model parameters, GFLOPs represents the computational complexity of the model, FPS signifies the frame rate of the model detection, and Training-time indicates the time required for the model to complete a certain number of epochs.
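For reference, the standard definitions behind these metrics are

Precision = TP / (TP + FP),    Recall = TP / (TP + FN),    mAP = (1/K) Σ_k AP_k

where TP, FP, and FN count true positives, false positives, and false negatives at a given IoU threshold, and K is the number of object categories.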

In the following, the precision and recall indicators in all experimental data are shown in percentage form.

Fig. 5. Visualization results of baseline models and TDKD-Net for VisDrone and SeaDronesSee validation sets.

1) Comparison With the Baseline Models: Initially, we validate the performance and effectiveness of the proposed TDKD-Net alongside the baseline model YOLOv7 using the VisDrone, SeaDronesSee, UAVOD10, and COCO2017 datasets. The comparison results are presented in Table I, and visualized predictions are shown in Fig. 5. The results reveal the superior performance of TDKD-Net compared to the baseline model across all four datasets. Notably, TDKD-Net achieves remarkable improvements in mAP50:95 of 3.0%, 2.6%, and 2.8% on the first three LAA image datasets, attributed to its small object detection head and attention mechanism, which significantly enhance Precision and Recall. Particularly on UAVOD10, the Recall is boosted by 8.5%. Moreover, the F-EIoU mechanism contributes to more accurate bounding box regression and higher precision. The performance superiority of our model over the baseline extends to all metrics on COCO2017. While maintaining stable values of precision and recall, TDKD-Net reduces the parameter count by 2.097 M compared to YOLOv7, making it more suitable for the lightweight deployment needs of UAV-based LAA images. For an intuitive view, experimental results of the proposed TDKD-Net framework and baseline models are visualized in Fig. 5, where the first three rows are predictions obtained on the VisDrone dataset and the last row is from SeaDronesSee. The comparison results with gradient-weighted class activation mapping (Grad-CAM) show that the introduction of the small object detection head (SODH) and DA mechanism makes the model accurately locate the target region and suppress useless information. In particular, under complex scenes with severe light changes and tiny-sized or blurred objects, the developed TDKD-Net demonstrates strong robustness. It should be highlighted that, for the LAA image in the first row, our TDKD-Net can recognize the tiny-scale crowds at long distances, which are missed by the baseline model. For the maritime vessel instances in the fourth row, the baseline model outputs a redundant prediction bounding box, whereas TDKD-Net does not. By using the proposed TDKD-Net, better overall performance is achieved in terms of both localization and recognition of small-sized and densely distributed LAA objects.

2) Comparison With Typical CNN-Based Methods: In order to further validate the competitiveness of the proposed TDKD-Net, 13 other representative CNN-based algorithms are adopted for comparison in this experimental setting, including RetinaNet [49], fully convolutional one-stage object detection (FCOS) [50], CenterNet [51], TridentNet [52], adaptive training sample selection (ATSS) [53], the feature selective anchor-free (FSAF) method [54], Faster R-CNN [6], the VariFocal network (VFNet) [55], the disentangled dense object detector (DDOD) [56], YOLOX [57], Cascade R-CNN [58], the task-aligned one-stage object detector (TOOD) [59], and the improved YOLOv3 [9]. For fairness, all models share the same datasets, and the comparative results on the validation sets are reported in Table II.

    TABLE II COMPARISON WITH TYPICAL CNN-BASED MODELS ON VISDRONE AND SEADRONESSEE VALIDATION SETS

As can be seen from Table II, the developed TDKD-Net achieves the best results on all metrics, which further demonstrates the superiority of TDKD-Net for UAV-based object detection tasks. In particular, TDKD-Net outperforms the second-best results on mAP50 and mAP50:95 by 10.2% and 6.3%, respectively, on the VisDrone dataset, while for the SeaDronesSee dataset, the proposed TDKD-Net also achieves satisfactory results of 87.8% and 57.2% on mAP50 and mAP50:95, respectively.

Through this group of experiments, it is demonstrated that the proposed TDKD-Net has overwhelming precision advantages against other advanced CNN-based models, which may be owed to the meticulously designed SODH for tiny targets, the DA mechanism for focusing on key information, the improved loss functions for stable training, and the KD scheme for robust knowledge acquisition.

3) Comparison With State-of-the-Art UAV-Based Detection Algorithms: This section presents a comprehensive comparison between the proposed TDKD-Net and other state-of-the-art UAV-based detection algorithms, including the Transformer prediction head (TPH)-YOLOv5 [19], the parallel residual bi-fusion network (PRBNet) [12], YOLOv8 [60], EdgeYOLO [11], and the PaddlePaddle evolved version of YOLO (PP-YOLOE) along with its improved variants: PP-YOLOE with a learnable parameter α for the second output layer of the backbone (PP-YOLOE-P2-Alpha) and with scale optimization and a dedicated detection head (PP-YOLOE-SOD) [61]. It is important to note that all experiments are conducted under a fair comparison, utilizing the same local experimental setup, which may differ from the reported results of the advanced models. For instance, the best results of TPH-YOLOv5 in the original paper are based on high-resolution input, which increases the model's memory consumption, neglecting the deployment factor. Therefore, we maintain consistency by employing 640 × 640 images for training in all experiments.

Tables III and IV demonstrate the superiority of the proposed TDKD-Net over well-known LAA image detection algorithms in terms of recognition accuracy, reduced parameter count, and controlled computational complexity. The Params and GFLOPs values of TDKD-Net, measuring spatial and computational complexity, are 34.433 M and 105.6, respectively. For the challenging VisDrone task, TDKD-Net exhibits significant performance advantages, achieving a remarkable 9.8% and 6.1% increase in the mAP50 and mAP50:95 metrics compared to TPH-YOLOv5, respectively. In the SeaDronesSee task, which focuses on UAV detection of maritime targets, TDKD-Net demonstrates exceptional robustness and generalization, achieving commendable mAP50 and mAP50:95 metrics of 87.8% and 57.2%, respectively. As shown in Table IV, TDKD-Net achieves a high FPS of 68.03, ranking second and satisfying real-time detection requirements, but a limitation lies in the TD and KD schemes, which have a negative impact on the Training-time efficiency. YOLOv7-sea [62] demonstrates remarkable performance in maritime UAV-based target detection using the SeaDronesSee dataset, whose effective and inspiring strategy yields impressive results. In short, TDKD-Net achieves an efficient balance between recognition accuracy and computational complexity, making it suitable for practical UAV LAA image detection tasks, while trading a longer training phase for reduced computing-resource dependency during deployment.

    TABLE III COMPARISON WITH STATE-OF-THE-ART UAV-BASED DETECTION ALGORITHMS ON VISDRONE AND SEADRONESSEE VALIDATION SETS

    C. Ablation Study

To validate the effectiveness of the core components in the proposed TDKD-Net, substantial ablation studies have been conducted, where the VisDrone dataset is adopted for verification. The results are reported in Fig. 6 and Table V. As can be seen in Table V, in comparison to the baseline model YOLOv7, the mAP50, mAP50:95, and Recall are improved to a certain extent after introducing the SODH and DA, while leading to a noticeable increase in both Params and GFLOPs. After the TD method in TT-format is implemented, the model size and computational cost are significantly reduced, resulting in enhanced detection performance compared to the original YOLOv7 while maintaining similar computational complexity. Further, the utilization of the F-EIoU loss function, coupled with the KD method, demonstrates a remarkable capability for higher precision and recall values, which alleviates the performance degradation caused by the low-rank approximation of TD, resulting in a 3.2% and 3.7% improvement of mAP50 and Recall, respectively, compared to the baseline model. Furthermore, although the stacking of core components leads to a slight decrease in FPS, the resulting value of 68.03 still meets the real-time detection requirement. These notable enhancements of TDKD-Net highlight the efficacy of the employed methodologies and strategies in capturing fine-grained tiny object details and handling challenging UAV-based scenarios, and further excellent performance is gained in terms of recognition and localization with strong robustness.

    TABLE IV COMPARISON WITH STATE-OF-THE-ART UAV-BASED DETECTION ALGORITHMS ON VISDRONE VALIDATION SET

Fig. 6. Precision-Recall (P-R) curves on the VisDrone validation set for the developed models.

Moreover, the precision-recall (P-R) curves of the six models listed in Table V are further presented in Fig. 6, which offers insights into the ability to accurately identify positive instances while capturing all relevant samples. Fig. 6 demonstrates that TDKD-Net with the combination of SODH, DA, TD, F-EIoU, and KD is able to keep a high precision value with growing recall, which reflects that our model consistently achieves accurate and comprehensive detection of UAV-based objects across a range of thresholds.

    D. Parameter Selection Study

Additionally, in order to select the most suitable parameters for the developed TT decomposition and equal-weighted response-based KD scheme of TDKD-Net, two sets of experiments are carried out to further analyze the impact of the hyperparameters in TD and KD, and to aid in the selection of the optimal parameter combinations.

1) Rank Selection for TT Decomposition: Firstly, the optimal combination of ranks in the TT decomposition is explored, where the range of initializing TT-ranks includes [2, 2, 2], [4, 4, 4], and [6, 6, 6]. From the results of Table VI and Fig. 7, when using the setting of [6, 6, 6] for the TT-rank, a compression rate of 86.15 times is achieved in the selected layers, and the overall Params is decreased by 2.697 M at the cost of only a 0.3% loss in mAP50 and mAP50:95. When the TT-rank is set to [2, 2, 2] or [4, 4, 4], only an inconspicuous further parameter reduction is brought to TDKD-Net, while information transmission is blocked at the expense of an unacceptable decrease in precision. Considering that the rank combination of [6, 6, 6] exhibits a substantial reduction in model parameter count with minimal compromise in accuracy, while lower ranks prove less advantageous, the rank of [6, 6, 6] has been adopted by TDKD-Net.

It can be concluded that a lower rank corresponds to fewer parameters and worse precision, because the rank determines the number of latent factors or components used to represent the tensor data. Conversely, by employing a higher rank in the TT decomposition, the complex relationships and structures of tensors can be captured with a more accurate representation, while potentially resulting in increased memory requirements. Therefore, the selection of the TD rank is crucial, as it involves a trade-off between model performance and parameter efficiency.
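As a back-of-the-envelope illustration of this trade-off, the snippet below counts the parameters of the low-rank chain sketched in Section III-C against a dense kernel; the layer sizes are hypothetical, and the resulting ratios are not the paper's measured 86.15× figure:

```python
def lowrank_chain_params(c_in, c_out, k, r1, r2):
    """Parameter counts of a dense k x k kernel versus the low-rank chain."""
    dense = c_in * c_out * k * k
    chain = c_in * r1 + r1 * r2 * k * k + r2 * c_out
    return dense, chain, dense / chain

# Hypothetical 512 -> 512 3 x 3 layer under the three candidate ranks.
for rank in (2, 4, 6):
    dense, chain, ratio = lowrank_chain_params(512, 512, 3, rank, rank)
    print(f"rank {rank}: {dense} -> {chain} params, {ratio:.1f}x compression")
```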

2) Parameter Configurations for KD Methods: For the implementation of the equal-weighted response-based KD mechanism, a larger input size and wider channels for the teacher model are studied and further applied to the KD process. As shown in Table VII and Fig. 8, the term "Student" refers to the student framework obtained by removing the KD scheme from TDKD-Net, "T-1.25Input size" denotes the teacher network obtained by increasing the resolution of the student's input images by 1.25 times, and "TS-1.25Input size" represents the new model obtained by KD training on the student framework with the guidance of that teacher network. Similarly, "TS-1.5Width channel" is the new model obtained through KD, where the teacher network is generated by enlarging the number of channels in convolution operations (model width) by 1.5 times.

From the results of Table VII and Fig. 8, it can be observed that utilizing a larger LAA image resolution in the teacher network facilitates the transfer of richer prior knowledge to the student model, but the corresponding performance improvement is marginal. By increasing the model width to obtain the teacher network and further conducting KD training, the resulting new model exhibits an improvement of 0.2% in mAP50:95 and 1.0% in Recall, while reducing the parameter count by 42.848 M compared to the teacher network. These findings demonstrate the effectiveness and robustness of employing the teacher network obtained through channel expansion for the KD process. Furthermore, the proposed KD achieves considerable recognition performance without increasing the parameters, showcasing the capacity to enhance robustness and generalization for LAA-based detection tasks.

    TABLE V ABLATION STUDIES OF TDKD-NET ON VISDRONE VALIDATION SET

    TABLE VI TENSOR DECOMPOSITION IN DIFFERENT CONFIGURATIONS OF TDKD-NET ON VISDRONE VALIDATION SET

    TABLE VII PERFORMANCE OF TDKD-NET WITH DIFFERENT KD CONFIGURATIONS ON VISDRONE VALIDATION SET

Fig. 7. Different TT-rank combinations of TDKD-Net on the VisDrone validation set.

Fig. 8. The P-R curves of TDKD-Net in different KD configurations on the VisDrone validation set.

V. CONCLUSION

In this article, a novel TDKD-Net has been put forward for the detection of LAA objects, which aims to achieve an efficient trade-off between recognition accuracy and model size, catering to the practical deployment requirements of UAVs. In the proposed TDKD-Net, a TT-format tensor decomposition has been designed to extract compact yet informative representations from high-dimensional input data, and the equal-weighted response-based KD scheme has been developed to distill the knowledge from a sophisticated teacher model to a compact student model with comparable performance. Meanwhile, based on the YOLOv7 framework, the aforementioned principles are incorporated, and further enhancements are developed, including the SODH, the DA mechanism, and F-EIoU-OTA. These modifications make TDKD-Net selectively focus on salient regions and the crucial information of small-sized targets, thereby improving the recognition and localization precision for LAA objects.

The proposed TDKD-Net has been evaluated on four challenging datasets, and the obtained results show the effectiveness and superiority of our method, which also exhibits the potential of TDKD-Net for real-world UAV-based applications. In the future, it is promising to 1) study more efficient lightweight algorithms based on TD, KD, and pruning with smaller model size and less training time; 2) investigate the impact of virtual image generation on the performance of TDKD-Net; and 3) explore neural network architecture search and particle swarm optimization (PSO)-based automatic hyperparameter tuning methods [63], [64].
