
    DAAPS:A Deformable-Attention-Based Anchor-Free Person Search Model

Computers, Materials & Continua, 2023, Issue 11

    Xiaoqi Xin,Dezhi Han and Mingming Cui

    School of Information Engineering,Shanghai Maritime University,Shanghai,201306,China

ABSTRACT Person Search is a task involving pedestrian detection and person re-identification (re-ID), aiming to retrieve person images matching a given objective attribute from a large-scale image library. Person Search models need to understand and capture the detailed features and context information of smaller objects in the image more accurately and comprehensively. The current popular Person Search models, whether end-to-end or two-step, are based on anchor boxes. However, due to the limitations of the anchor itself, such models inevitably have disadvantages, such as the imbalance of positive and negative samples and redundant calculation, which affect their performance. To address the problem of fine-grained understanding of target pedestrians in complex scenes and at small sizes, this paper proposes a Deformable-Attention-based Anchor-free Person Search model (DAAPS). Fully Convolutional One-Stage (FCOS), a classic Anchor-free detector, is chosen as the model's infrastructure. The DAAPS model is the first to combine an Anchor-free Person Search model with the Deformable Attention Mechanism, which guides the model in adaptively adjusting its receptive field. The Deformable Attention Mechanism helps the model focus on critical information and effectively compensates for the accuracy lost through the absence of anchor boxes. The experiments demonstrate the adaptability of the Attention Mechanism to the Anchor-free model. Besides, with an improved ResNeXt+ network frame, the DAAPS model adopts the Triplet-based Online Instance Matching (TOIM) Loss function to achieve a more precise end-to-end Person Search task. Simulation experiments demonstrate that the proposed model has higher accuracy and better robustness than most Person Search models, reaching 95.0% mean Average Precision (mAP) and 95.6% Top-1 on the CUHK-SYSU dataset, and 48.6% mAP and 84.7% Top-1 on the Person Re-identification in the Wild (PRW) dataset, respectively.

    KEYWORDS Person Search;anchor-free;attention mechanism;person detection;pedestrian re-identification

    1 Introduction

Person Search aims to locate and detect all pedestrian targets in a given image or video, providing their positions and related information. A Person Search model usually includes two tasks: person detection and person re-identification (re-ID) [1]. The primary purpose of pedestrian detection is to automatically detect and locate pedestrians in images or videos. Person re-ID refers to the task of matching different images of the same pedestrian to their corresponding identity embeddings through deep learning. The Person Search model is more complex and has higher practical value due to its involvement in detecting, recognizing, and inferring relationships among multiple individuals [2].

In practical applications, the difficulty of improving the accuracy of the Person Search task lies in the fine-grained understanding of the image. Distinguishing similar targets requires detailed analysis and comparison of their fine-grained features while minimizing interference from complex background factors such as changes in the appearance of pedestrian targets and crowds. Therefore, algorithms with strong robustness, high accuracy, and high efficiency need to be designed for Person Search tasks to cope with these challenges.

    Currently,mainstream deep learning-based Person Search models typically utilize neural networks to learn image features and then perform detection through object classification and position regression.This type of model can be further divided by model characteristics into one-stage,two-stage,and one-step two-stage Person Search models[3-8],as shown in Fig.1.

    Figure 1:Classification and comparison of three person search models

(a) The one-step Person Search model [9], also known as the end-to-end Person Search model, directly outputs the position and size of pedestrian targets from the input image. This type of model is usually faster and can achieve real-time detection, but its detection accuracy is relatively lower because it lacks an explicit candidate-region generation process. (b) The two-step Person Search model first generates candidate boxes, then performs classification and position regression to obtain the final detection results. This type usually has higher detection accuracy but must process many candidate regions, resulting in high computational complexity and relatively slow speed. (c) The one-step two-stage model employs a Region of Interest Align (RoI-Align) layer to aggregate features from the detected bounding boxes, allowing detection and re-ID to share these features; it typically builds on a two-stage detector such as Faster Region-based Convolutional Neural Networks (R-CNN) [10].

Anchors [10] are commonly used prior information in object detection tasks: a few fixed sizes and aspect ratios of anchor boxes are set to predict the position and size of the targets. Anchor boxes can improve the accuracy of Anchor-based models, but they also suffer from over-parameterization, which requires manual tuning. In contrast, Anchor-free models [3,4] do not require predefined anchors but directly detect, extract, and recognize the human regions in the image. A model based on Anchor-free does not rely on prior boxes and can directly regress the position and size of the target, thus improving computational efficiency.

The model based on Anchor-free has received much research attention due to its simple and fast construction. Some Anchor-free models introduce an Aligned Feature Aggregation (AFA) module built on the FCOS object detection framework. The FCOS architecture adopts the basic Residual Network (ResNet) backbone and a Feature Pyramid Network (FPN) to fuse multi-scale features, then deploys a decoupled detection head to detect targets at each scale separately. The network structure is shown in Fig. 2. AFA reshapes some modules of the FPN by utilizing deformable convolutional kernels for feature fusion to generate more robust re-identification embeddings, overcoming the inability of Anchor-free models to learn and detect aligned features for specific regions.

    Figure 2:FCOS network architecture

The Attention Mechanism is widely applied in Computer Vision tasks to improve the utilization of crucial information by dynamically assigning importance, or weight, to different parts of the input in deep learning models. Common Attention Mechanisms include Spatial Attention, Channel Attention, and Self-Attention. The Deformable Attention [11] mechanism, an extension of the Self-Attention mechanism, can learn feature deformations and adaptively adjust feature sampling positions to better handle target deformations and pose changes.

    Besides,the design of loss functions is a crucial aspect in improving Person Search models.Common loss functions include Triplet Loss,Online Instance Matching (OIM) [1] Loss,etc.The TOIM Loss function deployed in the DAAPS model combines the above two loss functions to match instances across frames or videos online accurately[12].

    This paper proposes a Deformable-Attention-based Anchor-free Person Search(DAAPS)model inspired by the above research methods.Our main contributions are as follows:

This paper proposes a novel Person Search model based on an Anchor-free detector that incorporates the Deformable Attention mechanism for the first time. The proposed model first extracts image features by combining the Deformable Attention mechanism and convolutional layers, aligns the features with the AFA module, uses the FCOS detection head for target detection, and then feeds the results into the pedestrian re-identification module, which combines the person features with labels to produce the search results.

    The improved anchor-free detection feature extraction network,ResNeXt+,adds network branches and enhances the model scalability.The group convolution structure of ResNeXt+can better extract multi-scale features,making the model more adaptable to complex Person Search tasks.Furthermore,the TOIM Loss function,a more suitable function,is chosen to better adapt to target variations,thus improving the model’s detection accuracy.

To demonstrate that the optimization helps the model understand images at a finer granularity, the paper conducts extensive experiments, in which mAP and Top-1 reach 95.0% and 95.6% on the CUHK-SYSU dataset, and 48.6% and 84.7% on the PRW dataset, respectively. The experimental results show that the DAAPS model outperforms the current best Anchor-free models, fully demonstrating its rationality and effectiveness. In addition, the study conducted ablation experiments on various parts of the model and proved that the proposed modifications and optimizations are more suitable for the Anchor-free model, thus illustrating the robustness and superiority of the present model.

    The remainder of this paper is structured as follows.Section 2 reviews related work on the Person Search model based on Anchor-free and Attention Mechanisms.In Section 3,the implementation of the DAAPS model is depicted in detail.Section 4 analyzes and compares the experimental results,demonstrating the effectiveness of our proposed model.Finally,the paper is summarized,and future research is discussed in Section 5.Table 1 contains common abbreviations used in the paper for reference purposes.

    Table 1: Table of common abbreviations

    2 Related Work

This section reviews the existing Person Search models, split by their use of anchors and by their use of Attention Mechanisms, respectively, to highlight the proposed model.

In this paper, databases such as Institute of Electrical and Electronics Engineers (IEEE) Xplore, Web of Science, the Engineering Index, ScienceDirect, Springer, and arXiv are used as the main sources for Person Search literature from the last five years, retrieved through Google Scholar. We use various combinations of terms as search queries, e.g., "Person Search model", "Anchor-free", "pedestrian re-ID", and "Attention Mechanism". After screening, 54 suitable papers, whose sources include conferences, journals, and online publications, were used as references for this paper. Deep learning models are gradually becoming one of the main targets of cyber attacks, including adversarial attacks, model spoofing attacks, backdoor attacks [13-15], and so on. How to reduce the impact of attacks and enhance robustness is also one of the focuses of model design.

    2.1 Person Search Models Split by Anchor

    The Person Search,object detection,and person recognition models have developed dramatically with the in-depth study of deep learning.Faster R-CNN is a classic two-step target detection model with Anchor,which can also be used for Person Search[10,16,17].Chen et al.[18]combined Faster R-CNN and Mask R-CNN to search for people using two parallel streams.One stream is used for object detection and feature extraction,and the other is used to generate semantic segmentation masks for pedestrians to improve the accuracy of pedestrian searches further.He et al.[19,20]implemented a Siamese architecture instead of one stream for an end-to-end training strategy.The detection module is optimized based on Faster-RCNN.However,when the human object is occluded or deformed,the anchor point cannot accurately capture the shape and position information of the object,thus affecting the detection performance of the Anchor-based models.

Anchor-free detection is widely used in image detection [3,4,21-25], but it has only recently been applied to the Person Search model. Law et al. [26] proposed the earliest Anchor-free target detection model, CornerNet, which does not rely on anchor boxes but converts the target detection problem into a task of detecting object corners. Subsequently, many classic Anchor-free detection models were proposed [3,4,9,21]. Yan et al. [4] proposed the AlignPS model based on FCOS, which introduces Anchor-free detection into the Person Search task for the first time. In the AlignPS model, the AFA module addresses the issues of scale, region, and task misalignment caused by the Anchor-free design.

    Nevertheless,as models based on Anchor-free usually rely on the prediction of critical points or center points,the precise positioning of targets is limited to some extent,and other methods are needed to help improve the model’s accuracy.There is still room for optimization in the accuracy of character detection and recognition and in the model architecture.

    2.2 Person Search Models Based on Attention Mechanism

    The application of the Attention Mechanism in the Person Search task can help improve the accuracy of detection and matching [21,24,27-34].Chen et al.[21] introduced the channel attention module into the model based on Anchor-free to express different forms of occlusion and make full use of the spatial attention module to highlight the target area of the occlusion-covered objects.Zhong et al.propose an enhancement to feature extraction in their work by incorporating a position-channel dual attention mechanism [33].This mechanism aims to improve the accuracy of feature representation by selectively attending to important spatial and channel-wise information.Zheng et al.introduce a novel hierarchical Gumbel attention network[34],which utilizes the Gumbel top-k re-parameterization algorithm.This network is designed for text-based person search and focuses on selecting semantically relevant image regions and words/phrases from images and texts.It enables precise matching by aligning and calculating similarities between these selected regions.Ji et al.develop a Description Strengthened Fusion-Attention Network (DSFA-Net) [35],which employs an end-to-end fusion-attention structure.DSFA-Net consists of a fusion and attention subnetwork,leveraging three attention mechanisms.This architecture addresses the challenges in Person Search by enhancing the fusion of multimodal features and applying attention mechanisms to capture relevant information.

However, according to the experiments in this paper, Deformable Attention brings higher detection accuracy and is more suitable for the Anchor-free mechanism than channel attention and spatial attention. Cao et al. [36] and Chen et al. [37] proposed adding the Deformable Attention Mechanism to the Transformer [38] for the Person Search model. Although the Transformer works well for tasks such as long text or images, it has high computing and memory requirements because the self-attention mechanism must compute associations between all locations. Especially for longer input sequences, model training and inference become more time-consuming and resource-intensive. All of the above are the reasons why the proposed model adopts the Deformable Attention Mechanism in cooperation with an Anchor-free FCOS structure. Previous research has been limited to improving model performance by changing the detector or optimizing the re-identification algorithm. Such work focuses only on the mechanisms it adds, without considering the effects of other attention mechanisms or validating the model's performance under them. This paper is the first to propose combining a deformable attention mechanism with an Anchor-free Person Search model and comparing it against other attention mechanisms, filling the gap in understanding the impact of attention mechanisms on the performance of pedestrian detection models. In addition, most previous studies have considered only the anchor box and the attention mechanism themselves, without considering how they are combined and what structures are needed for the two to reinforce each other, which is one of the considerable differences between the studies.

    3 Method

    In this section,the network architecture of DAAPS,the improved ResNeXt+structure,the implementation of the deformable Attention Mechanism,and the calculation of the loss function are introduced in detail.

    3.1 Network Architecture

The infrastructure of the proposed DAAPS model is designed based on FCOS. As shown in Fig. 3, for an input image of size I ∈ R^(3×H×W), the DAAPS model can simultaneously locate multiple target pedestrians in the image and learn re-ID embeddings. Specifically, the proposed model first extracts image features and obtains three levels of features according to the feature pyramid. A Deformable Attention Mechanism then processes them to better handle objects of different scales, orientations, and shapes. Feature maps {P3, P4, P5} are obtained by down-sampling and weighting with strides of 8, 16, and 32. Subsequently, an AFA module is utilized to fuse features of different scales into a single embedding vector. The AFA module has multiple branches, each performing a weighted fusion of features at different scales and producing a fused and flattened feature vector. Then, an FCOS detection head is employed for object detection. It comprises two branches, namely the classification and regression branches, each including four 3 × 3 deformable convolutional layers. The classification branch classifies each pixel's position, determines whether it belongs to a queried object, and predicts the object's category, while the regression branch regresses each pixel's position and predicts the object's position and size.

    Figure 3:Network architecture of DAAPS

    3.2 ResNeXt+Optimization

    The proposed DAAPS model is based on the classic model based on Anchor-free FCOS,which incorporates a single-stage object detection method and multi-scale feature fusion techniques.Unlike its predecessor,DAAPS introduces group convolution layers with more channels on top of the ResNet backbone to achieve ResNeXt[39]and deeper feature extraction.Subsequently,a pruning algorithm removes unimportant connections,reducing network complexity and improving the model’s speed and accuracy.This improved network structure is referred to as the ResNeXt+architecture in this paper.

For given input data of D dimensions x = [x1, x2, ..., xD], the corresponding filter weights are w = [w1, w2, ..., wD]. A linearly activated neuron without bias can be expressed as:

$$\sum_{i=1}^{D} w_i x_i$$

That is, the data is split into individual features with low-dimensional embeddings. Each low-dimensional embedding undergoes a linear transformation and is then aggregated by unit addition. This split-transform-merge structure can be generalized by replacing each transform with a more general function, so that every branch shares the same topology. The aggregated transformation is:

$$\mathcal{F}(x) = \sum_{i=1}^{C} \mathcal{T}_i(x)$$

Here, C is the size of the set of transformations to be aggregated, namely the cardinality, and T_i(x) is an arbitrary transformation, such as a series of convolution operations. ResNeXt+ is based on group convolutions, a strategy between regular convolution and depthwise-separable convolution; by controlling the cardinality, a balance between the two is achieved. Combined with the robust residual network, the complete ResNeXt+ structure is obtained. That is, a shortcut connection is added to the simplified Inception-style architecture, which is expressed as:

$$y = x + \sum_{i=1}^{C} \mathcal{T}_i(x)$$
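The split-transform-merge aggregation with a shortcut can be sketched in a few lines of plain Python. The scaling functions below stand in for the grouped-convolution branches and are toy assumptions, not learned weights:

```python
# Toy sketch of y = x + sum_{i=1}^{C} T_i(x): each T_i is a simple
# low-dimensional transform, aggregated by element-wise addition,
# with the input added back through the residual shortcut.
def aggregate_with_shortcut(x, transforms):
    """x: list of D floats; transforms: C functions mapping x to a D-vector."""
    d = len(x)
    total = [0.0] * d
    for t in transforms:
        out = t(x)
        total = [a + b for a, b in zip(total, out)]
    # residual shortcut: add the input back to the aggregated result
    return [xi + ti for xi, ti in zip(x, total)]

# Cardinality C = 2, with scaling transforms standing in for the branches.
transforms = [lambda v: [0.5 * e for e in v], lambda v: [0.25 * e for e in v]]
y = aggregate_with_shortcut([1.0, 2.0], transforms)  # -> [1.75, 3.5]
```

Because every branch shares the same topology, increasing the cardinality C adds capacity without introducing new hyperparameters per branch.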

ResNeXt+ adopts a Visual Geometry Group-like block-stacking method, and the stacking process follows two principles: (a) if blocks produce spatial maps of the same size, they share the same hyperparameters; (b) whenever the spatial map is down-sampled by a factor of 2, the width of the convolutional kernels is multiplied by 2. The structure is shown in Fig. 4.

    Figure 4:ResNeXt+structure

That is, the input channels are reduced from 256 to 128 by a 1 × 1 convolutional layer, processed by group convolution with a 3 × 3 kernel and 32 groups, and then up-dimensioned by another 1 × 1 convolutional layer. The output is added to the input to obtain the final output. ResNeXt+ employs group convolution to increase the width of the model. Compared to a traditional convolution layer, group convolution splits the input features into several small groups, performs convolution on each group separately, and finally concatenates the results of all groups.

Pruning is then implemented through the filter pruning algorithm [40] to optimize the network. Specifically, the L1 norm is first used as the filter metric to determine which filters are more critical, and it is normalized to give the average importance of each filter. A global pruning threshold is determined by setting the percentage of filter importance across the entire network, and the filters are pruned according to this threshold. After pruning, the remaining filters are reattached to the network. Finally, fine-tuning is performed with a lower learning rate to recover performance, and the fine-tuned network is initialized from the weights before pruning. This approach allows the model to significantly reduce the number of parameters and the computational complexity of ResNeXt+ without losing much performance, resulting in a more efficient network.
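The ranking-and-threshold step of this pruning scheme can be illustrated with a minimal sketch. The layer shapes and the 40% pruning ratio below are illustrative assumptions, not the paper's actual settings:

```python
# Hedged sketch of global L1-norm filter pruning: score every filter in
# the network by its size-normalized L1 norm, drop the globally weakest
# fraction, and report which filter indices survive per layer.
def prune_filters(layers, prune_ratio):
    """layers: {name: list of filters, each filter a flat list of weights}."""
    scores = []
    for name, filters in layers.items():
        for idx, f in enumerate(filters):
            l1 = sum(abs(w) for w in f)
            scores.append((l1 / len(f), name, idx))  # normalize by filter size
    scores.sort()  # ascending: least important filters first
    n_prune = int(len(scores) * prune_ratio)
    pruned = {(name, idx) for _, name, idx in scores[:n_prune]}
    kept = {name: [i for i, _ in enumerate(fs) if (name, i) not in pruned]
            for name, fs in layers.items()}
    return kept

layers = {
    "conv1": [[0.9, -0.8], [0.01, 0.02], [0.5, 0.4]],
    "conv2": [[0.03, -0.01], [1.2, 1.1]],
}
kept = prune_filters(layers, prune_ratio=0.4)
# The two filters with the smallest mean |w| are removed globally.
```

In the real algorithm the surviving filters would then be reattached and the network fine-tuned at a reduced learning rate, as described above.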

    ResNeXt+has advantages over ResNet in terms of better performance and higher accuracy.Due to the ability of ResNeXt+to simultaneously increase the depth and width of the model,it can better adapt to complex visual tasks.In addition,the more complex residual block is also employed in the ResNeXt+structure,further improving the model’s nonlinear expression ability.Therefore,the application of ResNeXt+can improve the receptive field of FCOS and boost the effectiveness of the network.At the same time,ResNeXt+has better generalization performance,making FCOS training and inference faster and more resource-efficient.

    3.3 Deformable Attention Mechanism

For a given input feature map x ∈ R^(C×H×W), where C is the number of channels and H and W are the height and width, respectively: first, for each deformable convolution kernel m, the method computes the sampling offset Δp_mqk and attention weight A_mqk for each sampled key k from linear projections of the query vector z_q at position vector p_q:

$$\Delta p_{mqk} = W^{\Delta p}_{m} z_q, \qquad A_{mqk} = \mathrm{softmax}_{k}\!\left(W^{A}_{m} z_q\right)$$

where A_mqk is a scalar attention weight with a value range of [0,1]. W^Δp_m and W^A_m are the weights used to compute the offsets and the attention distribution in the deformable convolutional kernels, and both are learnable parameters. Subsequently, the method multiplies the feature vector x_mqk sampled at position p_q + Δp_mqk in x by A_mqk to obtain a weighted feature vector y_mqk:

$$y_{mqk} = A_{mqk} \cdot W'_{m}\, x\!\left(p_q + \Delta p_{mqk}\right)$$

Finally, the y_mqk of all deformable convolution kernels are summed to obtain the final output feature vector:

$$y_q = \sum_{m=1}^{M} W_m \sum_{k=1}^{K} y_{mqk}$$

To sum up, the calculation process of the Deformable Attention module includes three steps: computing the sampling offsets and attention distributions, sampling the input feature map, and weighted summation to obtain the final output. The advantage of the Deformable Attention Mechanism lies in its ability to accurately capture long-range dependencies and geometric structures of the target and to exchange information between multi-scale feature maps, thus improving the accuracy of object detection and recognition.
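The three steps can be illustrated with a deliberately simplified, single-head sketch. It uses nearest-neighbor sampling where the real mechanism uses bilinear interpolation, and the offsets and weights are fixed toy values rather than outputs of the learned projections:

```python
# Simplified single-head sketch of deformable attention: sample the
# feature map at reference point + predicted offsets, then take the
# attention-weighted sum of the sampled values.
def deformable_attention(feat, ref, offsets, weights):
    """feat: H x W grid of scalar features; ref: (row, col) reference point;
    offsets: K (dr, dc) pairs; weights: K attention weights summing to 1."""
    h, w = len(feat), len(feat[0])
    out = 0.0
    for (dr, dc), a in zip(offsets, weights):
        r = min(max(int(round(ref[0] + dr)), 0), h - 1)  # clamp to the map
        c = min(max(int(round(ref[1] + dc)), 0), w - 1)
        out += a * feat[r][c]  # weighted sampled feature
    return out

feat = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0],
        [7.0, 8.0, 9.0]]
# Reference point at the centre; K = 3 sampling points.
y = deformable_attention(feat, (1, 1), [(0, 0), (-1, 0), (0, 1)],
                         [0.5, 0.25, 0.25])  # -> 4.5
```

Note how the cost grows with the number of sampling points K rather than with H × W, which is what makes the mechanism cheaper than full self-attention over all positions.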

    3.4 Optimization Program

The TOIM Loss function can be expressed as:

$$L_{TOIM} = L_{OIM} + L_{tri}$$

where L_OIM is the OIM [1] Loss function. OIM aims to match the predicted instances in the image with the real instances. It first generates matching scores between the predicted and real instances and then applies the matching scores to calculate the loss, which can be expressed as:

$$L_{OIM} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{k=1}^{K} \mathbb{I}(y_i = k)\, \log \frac{\exp\!\left(w_{k}^{\top} f_i / \tau\right)}{\sum_{j=1}^{K} \exp\!\left(w_{j}^{\top} f_i / \tau\right)}$$

Here, N is the batch size, K is the total number of classes, y_i represents the class label of sample i, f_i represents the feature vector of sample i, w_yi represents the weight vector of class y_i, I represents the indicator function, and τ is the temperature parameter. OIM is designed to maximize the score of each predicted instance for its true class and minimize its scores for the other classes.
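For a single sample, the OIM term reduces to a temperature-scaled softmax cross-entropy against the class lookup-table vectors. The following sketch uses toy feature and prototype values, not the model's actual lookup table:

```python
import math

# Minimal single-sample sketch of the OIM loss: softmax over similarities
# to the K class weight vectors w_k, scaled by temperature tau, followed
# by the negative log-likelihood of the true class.
def oim_loss(f, lut, y, tau=0.1):
    """f: feature vector; lut: K class weight vectors; y: true class index."""
    logits = [sum(wi * fi for wi, fi in zip(w, f)) / tau for w in lut]
    m = max(logits)                       # stabilized softmax
    exps = [math.exp(l - m) for l in logits]
    p_y = exps[y] / sum(exps)
    return -math.log(p_y)

lut = [[1.0, 0.0], [0.0, 1.0]]           # two toy identity prototypes
loss_correct = oim_loss([0.9, 0.1], lut, y=0)
loss_wrong = oim_loss([0.9, 0.1], lut, y=1)
assert loss_correct < loss_wrong         # the matching identity scores lower loss
```

Lowering τ sharpens the softmax, so a feature only slightly closer to its own prototype already yields a much smaller loss.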

L_tri stands for the Triplet Loss function, mainly deployed to address the association problem among multiple images of the same identity in pedestrian re-identification. L_tri works by encouraging the embeddings of positive instances to move closer together while pushing the embeddings of negative instances away from the query target. Its computation is as follows: for a training sample, we select two further images, one belonging to the same class as the sample and the other belonging to a different class.

Given a training set containing m triplets (A, P, N), where A represents the targeted (anchor) sample, P represents a Positive sample of the same class, and N represents a Negative sample that does not belong to the same class as the targeted sample, the Triplet Loss function can be represented as follows:

$$L_{tri} = \sum_{i=1}^{m} \left[\, \left\| f\!\left(A^{(i)}\right) - f\!\left(P^{(i)}\right) \right\|_2^2 - \left\| f\!\left(A^{(i)}\right) - f\!\left(N^{(i)}\right) \right\|_2^2 + \alpha \,\right]_{+}$$

where f(x) represents the embedding that maps the input sample x into the feature space, A^(i), P^(i), and N^(i) represent the anchor, positive, and negative samples of the i-th triplet, respectively, ‖·‖₂ is the L2 norm, and α is the margin, which indicates the minimum gap required between the positive and negative distances. By minimizing the Triplet Loss function, a re-identification model can be trained to map different images of the same person to nearby points in the feature space.
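The hinge form of this loss is easy to sketch directly from the formula. The 2-D embeddings and margin below are toy values for illustration, not outputs of the actual model:

```python
# Hedged sketch of the Triplet Loss over a batch of (A, P, N) embedding
# triplets: hinge on the squared-L2 distance gap with margin alpha,
# averaged over the batch.
def triplet_loss(triplets, alpha=0.3):
    def sqdist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    total = 0.0
    for a, p, n in triplets:
        # pull the positive closer than the negative by at least alpha
        total += max(sqdist(a, p) - sqdist(a, n) + alpha, 0.0)
    return total / len(triplets)

triplets = [
    ([0.0, 0.0], [0.1, 0.0], [1.0, 1.0]),  # easy: negative already far away
    ([0.0, 0.0], [0.5, 0.0], [0.4, 0.0]),  # hard: negative closer than positive
]
loss = triplet_loss(triplets)
```

The easy triplet contributes zero loss because the margin is already satisfied; only the hard triplet produces a gradient, which is why triplet-based training usually pairs this loss with hard-example mining.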

    4 Experiment

This section describes the experimental process and results from eight aspects: the datasets used in the experiments, the model's implementation details, the evaluation indicators, the attention ablation experiments, the comparison of experimental results, the loss function effect, the ResNeXt+ effect, and the visualization results.

    4.1 Dataset

    CUHK-SYSU [1] dataset is a large-scale pedestrian detection dataset with 1.14 gigabytes (GB)shared by the author Prof.Shuang Li.The images are sourced from two data types,authentic street snapshots and movies or TV shows.12,490 images and 6,057 people to be detected are collected using hundreds of street scenes.Moreover,5,694 images and 2,375 people to be detected are selected from movies or TV shows.Unlike the re-ID datasets that manually crop images of the queried person,the CUHK-SYSU is more closely related to real-world scenarios.The data is divided into training and testing sets,with the training set consisting of 11,206 images and 5,532 people to be queried and the testing set covering 6,978 images and 2,900 people to be queried.The images and people in the training and testing set are distinct.

The PRW [41] dataset is an extension of the Market1501 dataset, typically employed for end-to-end pedestrian detection and person re-identification in raw video frames, and for evaluating Person Search and pedestrian re-ID in the wild. The PRW dataset includes 11,816 video frames captured by six synchronized cameras, with corresponding mat-file annotations. Each mat file records the positions of the bounding boxes within the frame along with their IDs, and the dataset also contains 2,057 query boxes. The PRW dataset, available as an open-source repository on GitHub, is 2.67 GB. It comprises a training set of 5,704 images with 18,048 corresponding annotations and a test set of 6,112 images with 250,062 annotations.

    4.2 Implementation Details

The DAAPS model is implemented using PyTorch and the MMDetection toolkit. ResNeXt101+ serves as the backbone of our model. An FPN with 3 × 3 deformable convolutions is applied as the neck, with the Deformable Attention Mechanism enabled by default. DAAPS is trained with the Stochastic Gradient Descent (SGD) optimizer, with an initial learning rate of 0.0015, momentum of 0.9, and weight decay of 0.0005. Training and testing are conducted on an NVIDIA V100-SXM2-32 GB GPU. By default, the model is trained for 24 epochs. Linear learning-rate warm-up is used during training, with 1141 warm-up iterations and a ratio of the warm-up learning rate to the initial learning rate of 1/200. The learning rate is adjusted at the 16th and 22nd epochs; the remaining epochs are trained with the adjusted learning rate. During training, the longer side of each image is resized to a random value between 667 and 2000, while test images are resized to 1500 × 900.
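The warm-up and step schedule described above can be sketched as a small function. The 10× decay factor at each adjustment epoch is an assumption for illustration; the paper states only where the rate is adjusted, not by how much:

```python
# Sketch of the training schedule: linear warm-up from base_lr/200 over
# the first 1141 iterations, then (assumed) 10x step decay at epochs 16
# and 22. Values mirror the settings quoted in the text.
def learning_rate(iteration, epoch, base_lr=0.0015,
                  warmup_iters=1141, warmup_ratio=1 / 200):
    if iteration < warmup_iters:
        # linear ramp from base_lr * warmup_ratio up to base_lr
        frac = iteration / warmup_iters
        return base_lr * (warmup_ratio + (1 - warmup_ratio) * frac)
    decay = 0
    if epoch >= 16:
        decay += 1
    if epoch >= 22:
        decay += 1
    return base_lr * (0.1 ** decay)

lr_start = learning_rate(0, 0)      # base_lr / 200 at the first iteration
lr_mid = learning_rate(5000, 10)    # full base rate after warm-up
lr_late = learning_rate(5000, 23)   # after both decay steps
```

Warm-up of this kind stabilizes the early iterations, when the randomly initialized deformable offsets would otherwise produce large, noisy gradients.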

    4.3 Evaluation Index

The performance of the Person Search model is evaluated through mAP and Top-1 accuracy. mAP is one of the most commonly utilized evaluation metrics in object detection. First, for each category, all detection results are sorted by confidence from high to low. Then, based on the matching relationship between the actual and predicted categories of the detection results, the Precision-Recall curve is calculated at different confidence thresholds, and the area under the curve is obtained for each category. Finally, the AP values of all categories are averaged to obtain the mAP of the model. The calculation formula is as follows:

$$\mathrm{mAP} = \frac{1}{C}\sum_{c=1}^{C} \frac{1}{R_c} \sum_{i=1}^{R_c} \frac{TP(\mathrm{rank}(i))}{\mathrm{rank}(i)}$$

where C is the number of categories, R_c is the number of positive examples of class c, rank(i) is the ranking of the i-th true-positive detection, and TP(j) represents the number of positive cases correctly detected among the first j detection results.
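The per-class computation described above can be sketched as follows. The detection scores and true/false labels are illustrative values, not experimental data:

```python
# Toy sketch of AP/mAP: sort detections by confidence, walk down the
# ranking, accumulate precision at each true positive, and normalize by
# the number of ground-truth positives; mAP averages AP over classes.
def average_precision(detections, num_positives):
    """detections: (confidence, is_true_positive) pairs for one class."""
    ranked = sorted(detections, key=lambda d: -d[0])
    tp, ap = 0, 0.0
    for rank, (_, is_tp) in enumerate(ranked, start=1):
        if is_tp:
            tp += 1
            ap += tp / rank          # precision at this recall point
    return ap / num_positives

def mean_average_precision(per_class):
    """per_class: list of (detections, num_positives) tuples per category."""
    return sum(average_precision(d, n) for d, n in per_class) / len(per_class)

dets = [(0.9, True), (0.8, False), (0.7, True), (0.6, False)]
ap = average_precision(dets, num_positives=2)   # (1/1 + 2/3) / 2
```

A missed ground-truth box lowers AP even if every returned detection is correct, because the sum is normalized by the number of positives, not by the number of detections.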

Top-1 refers to the top-1 accuracy of prediction results in classification tasks, namely, whether the model's highest-probability prediction is the correct result. ΔmAP stands for the change, or delta, in mean Average Precision: a positive ΔmAP indicates a performance improvement, while a negative ΔmAP suggests a decrease in performance. Likewise, ΔTop-1 represents the change in the Top-1 accuracy metric and quantifies the improvement or degradation in classification performance. The above four indexes are selected as evaluation indicators during the experiments.

    4.4 Ablation Studies of Attention Mechanism

    To demonstrate the need for the Deformable Attention Mechanism,the ablation experiments are carried out on the CUHK-SYSU dataset,using a model based on Anchor-free without attention mechanisms as the Base.The impact of attention mechanisms on the algorithm is investigated,and the effectiveness of deformable attention mechanisms on DAAPS is verified.The experimental results are shown in Table 2.

    Table 2: Effects of different attention mechanisms on the CUHK-SYSU dataset

Adding the Convolutional Block Attention Module (CBAM), a combination of the Spatial and Channel Attention Mechanisms, on top of the Base resulted in a slight increase in the mAP and Top-1 indexes, of 0.6% and 0.9%, respectively. Both mAP and Top-1 also perform well when the combination is changed to the Attention Mechanism of the Dual Attention Network (DANet). Nevertheless, none of these improvements compare to DAAPS with the Deformable Attention Mechanism, which boosts the mAP and Top-1 indexes by 1.9% and 2.2% on the CUHK-SYSU dataset. This suggests that the Deformable Attention Mechanism is better suited to Anchor-free models.

    To demonstrate the effectiveness and irreplaceability of the Deformable Attention Mechanism on the DAAPS model,this paper adds the CBAM Attention Mechanism to the model to work together with the Deformable Attention Mechanism.Theoretically,the superimposition of attention mechanisms can help the model learn the importance of different parts in the input and adjust the model’s attention based on their importance to better capture relevant features.However,experimental data on the CUHK-SYSU dataset show that including CBAM results in a 0.4%decrease in both mAP and Top-1.Although the model’s performance is only slightly reduced,it fully proves the robustness of the proposed model,as shown in Table 3.

    Table 3: Prove the robustness of DAAPS on the CUHK-SYSU dataset

    4.5 Comparison to State-of-the-Art

    The results compared with state-of-the-art models are shown in Table 4; our model outperforms most existing one-step and two-step Person Search models. Compared to the previous best model, AlignPS+, DAAPS improves mAP by 0.5% and Top-1 by 2.1% on the CUHK-SYSU dataset. This advantage also holds on the PRW dataset, where the mAP is 1.7% higher than that of the previous best task-consistent two-stage framework (TCTS) model. Moreover, our model is built on an Anchor-free network architecture, which runs faster than other models. Additionally, more efficient optimization algorithms and hyperparameter tuning techniques enable the proposed model to achieve better performance within 24 training epochs.

    Table 4: Comparison of experimental results on the CUHK-SYSU and PRW datasets

    Because the training data in the PRW dataset is limited and its images are taken from six different camera viewpoints, the performance of all models on this dataset is constrained. Our model achieves the best mAP among all models. Although DAAPS's Top-1 accuracy on the PRW dataset is 0.2% lower than that of ABOS, the current best-performing one-step model, its mAP is 2.1% higher. This indicates an improvement in the model's overall performance in terms of precision and recall, which are critical factors in tasks such as object detection and Person Search. The trade-off is reasonable because it leads to a more effective and comprehensive overall performance.

    It can be seen that TCTS [43] achieves the highest Top-1 accuracy on PRW, but it is a two-step model that requires a dedicated re-ID model to receive the detection results for re-identification. As an Anchor-free model, DAAPS adaptively learns the feature representation of the target without predefined fixed anchor points and is therefore unaffected by the selection and distribution of anchors. Moreover, it requires no additional computation to interpolate feature representations between anchor points, making it more robust and efficient.

    4.6 Ablation Studies of Loss Function

    To justify the choice of the TOIM Loss function, the proposed model's performance is further evaluated with several different loss functions. As shown in Table 5, the composite TOIM Loss function yields better DAAPS performance than the Logarithmic Loss function or the Triplet Loss function alone. Compared to applying the Triplet Loss function with a Look-Up Table (LUT), TOIM increases mAP and Top-1 by 1.7% and 1.8% on the CUHK-SYSU dataset, and by 0.3% and 0.4% on the PRW dataset.
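To make the comparison concrete, a TOIM-style objective combines an OIM term, computed against a Look-Up Table of identity prototypes, with a triplet hinge term. The sketch below is a simplified NumPy illustration under assumed hyperparameters (temperature, margin, and weighting alpha), not the exact loss used in DAAPS:

```python
import numpy as np

def oim_logits(embedding, lut, temperature=0.1):
    """Cosine similarities of an embedding against LUT identity prototypes."""
    e = embedding / np.linalg.norm(embedding)
    protos = lut / np.linalg.norm(lut, axis=1, keepdims=True)
    return protos @ e / temperature

def triplet_term(anchor, positive, negative, margin=0.3):
    """Hinge on the gap between positive and negative pair distances."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def toim_loss(anchor, positive, negative, label, lut, alpha=1.0):
    """TOIM-style objective: OIM cross-entropy plus a weighted triplet hinge."""
    logits = oim_logits(anchor, lut)
    # Numerically stable log-softmax, then pick the true identity's log-prob
    log_probs = logits - np.log(np.exp(logits - logits.max()).sum()) - logits.max()
    ce = -log_probs[label]
    return ce + alpha * triplet_term(anchor, positive, negative)
```

The LUT rows act as running class centers for labeled identities, so the cross-entropy term pulls embeddings toward their identity prototype while the triplet term enforces a relative margin between matched and unmatched pairs.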

    Table 5: Comparison of DAAPS under different loss functions on the CUHK-SYSU and PRW datasets

    4.7 Ablation Studies of ResNeXt+

    The study compares pedestrian detection models based on the ResNet, ResNeXt, and ResNeXt+ network structures while holding other factors fixed; the experimental results are shown in Table 6. The accuracy and overall performance of the ResNeXt+-based model are higher than the others. Although the improvements in metric values are small, the results cannot simply be attributed to the ResNet baseline: the optimized ResNeXt+ overcomes the adverse effects of plain ResNeXt on the model and instead gives better results than ResNet.

    Table 6: Comparison of DAAPS based on different networks on the CUHK-SYSU dataset

    4.8 Visualization Results

    The task of Person Search is challenging because various factors can affect people's posture, clothing, facial expression, and other characteristics in real-world scenes, making the detection and recognition of individuals difficult, especially in low-light environments and occluded areas.

    Due to various factors (such as camera distance and resolution), the size of a pedestrian in a natural scene can vary greatly. Some pedestrians appear at relatively large distances, making them small in the image; these are known as "small targets". For evaluating pedestrian detection algorithms, achieving high recall and accuracy on small targets is essential. DAAPS addresses these issues by utilizing the Deformable Attention Mechanism to guide the model to adaptively focus on the salient features of the target person. This allows the model to identify small targets better and to perform well in complex and occluded environments. The effectiveness of the DAAPS model is further demonstrated through visualization, with results shown in Fig. 5.

    The paper chooses images with heavy occlusion, complex backgrounds, and dim lighting, which pose a major challenge for Person Search models. Given the query person on the left, the DAAPS model is asked to find other images containing the same target in the large gallery, and the detection results are shown in blue boxes. The results show that the DAAPS model successfully recognizes and accurately locates the queried target, demonstrating the proposed model's effectiveness.

    Figure 5:Visualization results

    5 Conclusion

    To reduce the impact of complex scenes and varying levels of occlusion on model accuracy in Person Search, this paper proposes the DAAPS model, which combines the Deformable Attention Mechanism with an Anchor-free architecture for the first time. In addition, the detection network ResNeXt+ of DAAPS, with enhanced scalability, extracts multi-scale features for improved adaptability in complex Person Search tasks. Moreover, applying the more effective TOIM Loss function in the re-ID module improves the discriminative ability of the embedding vectors. The model's generalization ability and robustness are enhanced, and better performance in practical applications is demonstrated by simulation experiments, with mAP and Top-1 of 95.0% and 95.6% on the CUHK-SYSU dataset and 48.6% and 84.7% on the PRW dataset, respectively. The DAAPS model outperforms current Anchor-free models, demonstrating its rationality and effectiveness. In the study, extensive ablation experiments test the essential modules of the model. The experimental results fully demonstrate the adaptability of the Deformable Attention Mechanism and the remaining components to the Anchor-free model, offering a strong accuracy gain for the detector and providing ideas for later research on Anchor-free Person Search models. Due to hardware limitations, the model proposed in this paper does not achieve its optimal performance. In the future, more refined algorithms and better hardware devices will be employed to enhance the real-time efficiency of the Person Search model.

    Acknowledgement: Not applicable.

    Funding Statement: We would like to express our sincere gratitude to the Natural Science Foundation of Shanghai under Grant 21ZR1426500 and the Top-Notch Innovative Talent Training Program for Graduate Students of Shanghai Maritime University under Grant 2021YBR008 for their generous support and funding. This funding has played a pivotal role in the successful completion of our research, and we deeply appreciate their invaluable contribution to our research efforts.

    Author Contributions: Study conception and design: X. Xin, D. Han; data collection: X. Xin; analysis and interpretation of results: X. Xin, D. Han, M. Cui; draft manuscript preparation: X. Xin, D. Han, M. Cui. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The data in this study are available from the corresponding author upon request at xinxiaoqi@stu.shmtu.edu.cn.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
