
DTHN: Dual-Transformer Head End-to-End Person Search Network

Computers, Materials & Continua, 2023, Issue 10

Cheng Feng, Dezhi Han and Chongqing Chen

School of Information Engineering, Shanghai Maritime University, Shanghai 201306, China

ABSTRACT Person search consists of two sub-tasks, person detection and person re-identification (re-ID). Existing approaches are primarily based on Faster R-CNN and Convolutional Neural Networks (CNNs) (e.g., ResNet). While these structures may detect high-quality bounding boxes, they tend to degrade re-ID performance. To address this issue, this paper proposes a Dual-Transformer Head Network (DTHN) for end-to-end person search, which contains two independent Transformer heads: a box head for detecting bounding boxes and extracting efficient bounding box features, and a re-ID head for capturing high-quality re-ID features for the re-ID task. Specifically, after an image passes through the ResNet backbone to extract features, the Region Proposal Network (RPN) proposes candidate bounding boxes. The box head then extracts more efficient features within these bounding boxes for detection. Following this, the re-ID head computes the occluded attention of the features within these bounding boxes and distinguishes them from other persons and the background. Extensive experiments on two widely used benchmark datasets, CUHK-SYSU and PRW, achieve state-of-the-art performance: 94.9 mAP and 95.3 top-1 on CUHK-SYSU, and 51.6 mAP and 87.6 top-1 on PRW, demonstrating the advantages of our approach. An efficiency comparison also shows that our method is highly efficient in both time and space.

KEYWORDS Transformer; occluded attention; end-to-end person search; person detection; person re-ID; Dual-Transformer Head

    1 Introduction

Person search aims to localize a specific target person within a gallery set, and thus comprises two sub-tasks: person detection and person re-ID. Depending on how these two sub-tasks are handled, existing work can be divided into two-step and end-to-end methods. Two-step methods [1–6] treat them separately, conducting re-ID [7–10] on cropped person patches found by a standalone person box detector. They trade time and resource consumption for better performance, as shown in Fig. 1a.

By comparison, end-to-end methods [11–17] tackle detection and re-ID simultaneously within a multi-task framework, as seen in Fig. 1b. These approaches commonly utilize a person detector (e.g., Faster R-CNN [18], RetinaNet [19], or FCOS [20]) for detection and then feed the features into re-ID branches. To address the issue caused by the parallel structure of Faster R-CNN, Li et al. [12] proposed SeqNet, which performs detection and re-ID sequentially to extract high-quality features and achieve superior re-ID performance. Yu et al. [17] introduced COAT to resolve the imbalance between detection and re-ID by learning pose/scale-invariant features in a coarse-to-fine manner, achieving improved performance. However, end-to-end methods still suffer from several challenges:

■Handling occlusions caused by background objects or partial appearance poses a significant challenge. Detecting and correctly re-identifying persons becomes more difficult when they are obscured by objects or positioned at the edges of the captured image. While current models may perform well in general person search, they are prone to failure in complex occlusion situations.

■Large pose and scale variations complicate re-ID. Since current models mainly rely on CNNs to extract re-ID features, their inconsistent receptive fields make them suffer under large pose and scale variations, which degrades re-ID performance.

■Efficient re-ID feature extraction remains a thorny problem. Existing methods perform either re-ID first or detection first, but leave unsolved the question of how to efficiently extract re-ID features for better performance.

Figure 1: Classification and comparison of two types of person search networks

For such cases, we propose a Dual-Transformer Head End-to-End Person Search Network (DTHN) to address the above limitations. First, inspired by SeqNet, an additional Faster R-CNN head is used as an enhanced RPN to provide high-quality bounding boxes. Then a Transformer-based box head is utilized to efficiently extract box features for high-accuracy detection. Next, a Transformer-based re-ID head is employed to efficiently obtain re-ID representations from the bounding boxes. Moreover, we randomly mix up partial tokens of instances within a mini-batch to learn cross-attention. Compared with previous works, which struggle to balance detection and re-ID, DTHN achieves high detection accuracy without degrading re-ID performance.

    The main contributions of this paper are as follows:

■We propose a Dual-Transformer Head End-to-End Person Search Network, refining the box and re-ID feature extraction that limited previous end-to-end frameworks. Performance is improved by a Dual-Transformer Head structure containing two independent Transformer heads that handle high-quality bounding box feature extraction and high-quality re-ID feature extraction, respectively.

■We improve end-to-end person search efficiency by using a Dual-Transformer Head instead of a traditional CNN head, reducing the number of parameters while maintaining comparable accuracy. By employing the occluded attention mechanism, the network learns person features under occlusion, which substantially improves re-ID performance for small-scale persons and occlusion situations.

■We validate the effectiveness of our approach by achieving state-of-the-art performance on two widely used datasets, CUHK-SYSU and PRW: 94.9 mAP and 95.3 top-1 on CUHK-SYSU, and 51.6 mAP and 87.6 top-1 on PRW.

The remainder of this paper is organized as follows: Section 2 reviews related research from recent years; Section 3 presents the relevant preparatory knowledge and describes the proposed DTHN design in detail; Section 4 presents the experimental setup and verifies the effectiveness of the proposed method through experiments; Section 5 summarizes this work and provides an outlook on future work.

    2 Related Work

    2.1 Person Search

Person search has received increasing attention since the release of CUHK-SYSU and PRW, two large-scale datasets. This marked a shift in how researchers approach person search, viewing it as a holistic task instead of treating it as two separate ones. Early solutions were two-step methods that used a person detector or manually constructed person boxes, then built a person re-ID model to search for targets in the gallery. Their high performance comes with high time and resource consumption: two-step methods tend to consume more computational resources and time to perform at the same level as end-to-end methods. End-to-end person search has attracted extensive interest because it solves the two sub-tasks jointly. Li et al. [12] shared the stem representations of person detection and re-ID, solving the two sub-tasks sequentially. Yan et al. [14] proposed the first anchor-free person search method to address the misalignment problem at different levels. Furthermore, Yu et al. [17] presented a three-stage cascade framework for progressively balancing person detection and re-ID.

    2.2 Vision Transformer

The Transformer [21] was initially designed for natural language processing. Since the release of the Vision Transformer (ViT) [22], it has become popular in computer vision (CV) [23–26]. This pure Transformer backbone achieves state-of-the-art performance on many CV problems and has been shown to extract multi-scale features that traditional CNNs struggle with. Since re-ID relies heavily on fine-grained features, the Transformer is a promising technology in this field. Several efforts have explored applying ViT to person re-ID. Li et al. [27] proposed a part-aware Transformer to perform occluded person re-ID through diverse part discovery. Yu et al. [17] performed person search with multi-scale convolutional Transformers, learning discriminative re-ID features and distinguishing people from the background in a cascade pipeline. Our paper proposes a Dual-Transformer Head for an end-to-end person search network to efficiently extract high-quality bounding box and re-ID features.

    2.3 Attention Mechanism

The attention mechanism plays a crucial role in the operation of the whole Transformer. Since the proposal of ViT, numerous variants have brought different capabilities to the Transformer by changing the attention mechanism. Among them, in object detection, combining artificial token transformations has become a mainstream approach to detecting occluded targets. Based on this, Yu et al. [17] proposed an occluded attention module in which both positive and negative samples in the same mini-batch are randomly partially swapped to simulate background occlusion of a person, achieving good performance. This is also the main attention mechanism used in this paper.

To give the reader further insight into the work in this paper, Table 1 summarizes the related work and our own.

Table 1: A summary of related person search works and our work

    3 Methods

As previously mentioned, existing end-to-end person search works still struggle with the conflict between person detection and person re-ID. Prior studies have indicated that, despite a potential decrease in detection precision, re-ID precision can be maintained or even improved through serialization. At the same time, high detection precision yields accurate bounding box features, which benefit re-ID. Thus, we propose the Dual-Transformer Head Person Search Network (DTHN) to obtain both high-quality detection and refined re-ID accuracy.

    3.1 End-to-End Person Search Network

As shown in Fig. 2, our network is based on the Faster R-CNN object detector with a Region Proposal Network (RPN). We first pre-process the image to be searched, resizing it to 800 × 1500 as the standard input. We then use the ResNet-50 [28] backbone to extract a 1024-dim backbone feature of size 1024 × 58 × 76 and feed it into the RPN to obtain region proposals. During training, RoI-Align is performed on the proposals generated by the RPN to obtain the features of the regions of interest for bounding box search, whereas during the re-ID phase RoI-Align is performed on the ground-truth bounding boxes. Note that instead of using ResNet-50 stage 5 (res5) as our box head, we utilize a Transformer to extract high-quality box features and achieve high detection accuracy, and use the predictor head of Faster R-CNN to obtain high-confidence detection boxes. The RoI-Align operation pools an $h \times w$ region of interest, which we use as the stem feature $F \in \mathbb{R}^{h \times w \times c}$, where $h$ is the height, $w$ the width, and $c$ the number of channels. We set the intersection-over-union (IoU) threshold to 0.5 in the training phase to distinguish positive from negative samples, and to 0.8 in the testing phase to obtain high-confidence bounding boxes. A Transformer re-ID head is then utilized to extract discriminative features from $F$. In each Transformer head, the features are supervised by two regression losses $L_{reg1}$ and $L_{reg2}$, whose expressions are identical and take the form of $L_{reg}$ below:

$$L_{reg} = \frac{1}{N_p} \sum_{i=1}^{N_p} L_{loc}(r_i, \Delta_i),$$

where $N_p$ denotes the number of positive samples, $r_i$ the calculated regression of the $i$-th positive sample, $\Delta_i$ the corresponding ground-truth regression, and $L_{loc}$ the Smooth-L1 loss.
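As a minimal PyTorch sketch of this regression supervision (the function name and tensor layout are our own illustration, not code from the released implementation):

```python
import torch
import torch.nn.functional as F

def reg_loss(pred_reg: torch.Tensor, gt_reg: torch.Tensor,
             pos_mask: torch.Tensor) -> torch.Tensor:
    """Smooth-L1 regression loss averaged over the N_p positive samples.

    pred_reg: (N, 4) predicted box regressions r_i
    gt_reg:   (N, 4) ground-truth regressions Delta_i
    pos_mask: (N,)   boolean mask of positive samples (IoU > 0.5 at training)
    """
    n_pos = pos_mask.sum().clamp(min=1)  # guard against an all-negative batch
    loss = F.smooth_l1_loss(pred_reg[pos_mask], gt_reg[pos_mask], reduction="sum")
    return loss / n_pos
```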

Figure 2: Structural framework of the DTHN; the dotted line indicates operations that occur only in the testing phase

In addition, we also calculate the classification losses $L_{cls1}$ and $L_{cls2}$ after the two Transformer heads:

$$L_{cls} = -\frac{1}{N} \sum_{i=1}^{N} \left[ c_i \log p_i + (1 - c_i) \log (1 - p_i) \right],$$

where $N$ denotes the number of samples, $p_i$ the predicted classification probability of the $i$-th sample, and $c_i$ the ground-truth label.

Note that $L_{cls2}$ and the re-ID loss $L_{reid}$ are two different losses computed by the Norm-Aware Embedding (NAE) $L_{nae}(\cdot)$, where $f$ denotes the extracted 256-dim feature.

    3.2 Occluded Attention

The attention mechanism plays a crucial role in the Transformer. In our application, where we aim to extract high-quality bounding box and re-ID features, we must address the issue of occlusion. To this end, we use occluded attention in the DTH to prompt the model to learn occlusion features and handle them in real applications, as shown in Fig. 3. First, we build the token bank $X = \{x_1, x_2, \dots, x_p\}$, where $p$ denotes the number of box proposals and $x_i$ denotes a token within one mini-batch. We then exchange part of the tokens with other tokens from the bank according to their index, using a Token-Mix-Up (TMU) function:

$$\mathrm{TMU}(x_i, x_j) = \begin{cases} x_j, & R < T \\ x_i, & \text{otherwise}, \end{cases}$$

where $x_i$ and $x_j$ denote the tokens to be handled, $R$ denotes a random value generated by the system, and $T$ denotes the exchange threshold.
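The following is a minimal PyTorch sketch of this token exchange, assuming the swap is applied token-wise against a randomly chosen partner whenever a uniform draw $R$ falls below the threshold $T$; the function name and tensor layout are illustrative:

```python
import torch

def token_mix_up(tokens: torch.Tensor, threshold: float = 0.1) -> torch.Tensor:
    """Randomly exchange partial tokens between instances in a mini-batch.

    tokens: (p, l, c) token bank for p box proposals with l tokens each.
    For each proposal a partner is drawn from the bank, and individual
    tokens are swapped wherever the random value R falls below T.
    """
    p = tokens.size(0)
    perm = torch.randperm(p, device=tokens.device)              # partner j for each i
    rand = torch.rand(tokens.shape[:2], device=tokens.device)   # R per token
    mask = (rand < threshold).unsqueeze(-1)                     # exchange where R < T
    return torch.where(mask, tokens[perm], tokens)
```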

Figure 3: The occluded attention mechanism in DTHN

After random swapping, we transform the tokenized features into three matrices through three fully connected (FC) layers: a query matrix $Q$, a key matrix $K$, and a value matrix $V$, and then compute the multi-head self-attention (MSA) as

$$\mathrm{MSA}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{T}}{\sqrt{\bar{c}/m}}\right)V,$$

where $\bar{c}$ denotes the channel scale of the token and equals $c/n$, $n$ is the number of slices during tokenization, and $m$ denotes the number of heads in the MSA.
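For concreteness, here is a minimal PyTorch sketch of this computation, following the standard MSA formulation with three FC layers for $Q$, $K$, and $V$; the class name and defaults are illustrative rather than taken from the released code:

```python
import torch.nn as nn

class MultiHeadSelfAttention(nn.Module):
    """Standard multi-head self-attention over a token sequence."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5  # 1 / sqrt(c_bar / m)
        self.q = nn.Linear(dim, dim)             # three FC layers for Q, K, V
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                        # x: (B, L, dim)
        B, L, D = x.shape
        h = self.num_heads
        q = self.q(x).reshape(B, L, h, D // h).transpose(1, 2)
        k = self.k(x).reshape(B, L, h, D // h).transpose(1, 2)
        v = self.v(x).reshape(B, L, h, D // h).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) * self.scale   # scaled dot product
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, L, D)
        return self.proj(out)
```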

After MSA, we apply a Feed-Forward Network (FFN) to output features for regression, classification, and re-ID.

    3.3 Dual-Transformer Head

The Dual-Transformer Head (DTH) consists of two individual Transformer heads designed for detection and re-ID. Although they work in different parts of the network, the detection and re-ID heads share the same mechanism. The Transformer box head takes box proposals as input and outputs processed features. The Transformer re-ID head, in contrast, takes ground-truth boxes as input during the training phase but proposals during the testing phase. Therefore, we hypothesize that the quality of detection can positively impact re-ID performance. The structure of the DTH is visualized in Fig. 4, and a sketch of its data flow is given after the figure.

Figure 4: The structure of DTH and how it works
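To make the data flow concrete, here is a minimal PyTorch sketch of the DTH forward pass; the module and argument names are hypothetical placeholders for the components described above:

```python
import torch.nn as nn

class DualTransformerHead(nn.Module):
    """Sketch of the DTH: the pooled stem feature F feeds a Transformer box
    head for detection and a Transformer re-ID head for embedding; at
    training time the re-ID head sees ground-truth boxes, at test time
    proposals."""

    def __init__(self, box_head, reid_head, detector_predictor, nae_embedding):
        super().__init__()
        self.box_head = box_head             # Transformer box head
        self.reid_head = reid_head           # Transformer re-ID head
        self.predictor = detector_predictor  # Faster R-CNN cls/reg predictor
        self.nae = nae_embedding             # Norm-Aware Embedding

    def forward(self, stem_feature):
        proposal_feat = self.box_head(stem_feature)    # detection branch
        cls_scores, box_deltas = self.predictor(proposal_feat)
        box_feat = self.reid_head(stem_feature)        # re-ID branch
        embeddings, box_cls = self.nae(box_feat)       # 256-dim features
        return cls_scores, box_deltas, embeddings, box_cls
```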

First, the pooled stem feature $F \in \mathbb{R}^{h \times w \times c}$ is fed into the Transformer box head to obtain the proposal feature, which is fed into the Faster R-CNN predictor to calculate the proposal regression and proposal classification. After that, $F$ is re-fed into the Transformer re-ID head to obtain the box feature, which is fed into the bounding box regressor and the Norm-Aware Embedding (NAE) to calculate the box regression and box classification. The NAE loss for the box classification $L_{cls2}$ is

$$L_{cls2} = -\left[ y \log \sigma(\mathrm{BN}(r)) + (1 - y) \log\left(1 - \sigma(\mathrm{BN}(r))\right) \right],$$

where $y \in \{0, 1\}$ denotes whether the box is a person or background, $r = \|f\| \in [0, \infty)$ is the norm of the embedding, $\sigma$ denotes the sigmoid activation function, and $\mathrm{BN}$ is a batch normalization layer. The OIM loss is calculated using the features processed by NAE. OIM considers only the labeled and unlabeled identities, leaving the other proposals untouched. OIM maintains two auxiliary structures: a Look-Up Table (LUT) that stores the feature vectors of all tagged identities, and a Circular Queue (CQ) that stores untagged identities detected in recent mini-batches. Based on these two structures, the probabilities of $x$ being recognized as the identity with class id $i$, or as the $i$-th unlabeled identity, are computed by two softmax functions:

$$p_i = \frac{\exp(v_i^{T} x / \tau)}{\sum_{j=1}^{L} \exp(v_j^{T} x / \tau) + \sum_{k=1}^{Q} \exp(u_k^{T} x / \tau)}, \qquad
q_i = \frac{\exp(u_i^{T} x / \tau)}{\sum_{j=1}^{L} \exp(v_j^{T} x / \tau) + \sum_{k=1}^{Q} \exp(u_k^{T} x / \tau)}.$$

The OIM loss, used as our re-ID loss, is

$$L_{reid} = -E_x\left[\log p_t\right],$$

where $v_i$ denotes the $i$-th column of the LUT, $u_i$ the $i$-th column of the CQ, $L$ and $Q$ the sizes of the LUT and CQ, $\tau$ a temperature that produces a softer probability distribution, $E_x$ the expectation, and $p_t$ the probability of $x$ being recognized as its ground-truth identity $t$.
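As an illustration of these two losses, here is a minimal PyTorch sketch under the definitions above; the function names, tensor layouts, and single-sample treatment are simplifications of the full NAE/OIM implementations:

```python
import torch
import torch.nn.functional as F

def nae_cls_loss(f: torch.Tensor, y: torch.Tensor,
                 bn: torch.nn.BatchNorm1d) -> torch.Tensor:
    """NAE box classification: the embedding norm r is batch-normalized
    and squashed by a sigmoid into a person/background score.

    f: (N, 256) embeddings, y: (N,) labels in {0, 1}, bn: BatchNorm1d(1).
    """
    r = f.norm(dim=1, keepdim=True)          # r in [0, inf)
    score = torch.sigmoid(bn(r)).squeeze(1)  # sigma(BN(r))
    return F.binary_cross_entropy(score, y.float())

def oim_loss(x: torch.Tensor, t: torch.Tensor, lut: torch.Tensor,
             cq: torch.Tensor, tau: float = 1.0 / 30) -> torch.Tensor:
    """OIM re-ID loss -log p_t for one labeled, L2-normalized feature x.

    lut: (L, d) Look-Up Table rows v_i;  cq: (Q, d) Circular Queue rows u_i;
    t: scalar LongTensor holding the ground-truth identity index.
    """
    logits = torch.cat([lut @ x, cq @ x]) / tau  # similarities to v_i and u_i
    return F.cross_entropy(logits.unsqueeze(0), t.unsqueeze(0))
```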

We take the Transformer re-ID head as an example to demonstrate the process. After the feature has been pooled into $F \in \mathbb{R}^{h \times w \times c}$, $F$ goes through tokenization. We split $F$ into $n$ slices channel-wise, obtaining $F_i \in \mathbb{R}^{h \times w \times (c/n)}$. We utilize a series of convolutional layers to generate token maps $\hat{F}_i \in \mathbb{R}^{\hat{h} \times \hat{w} \times \hat{c}}$ from each slice, and flatten each $\hat{F}_i$ into tokens $x \in \mathbb{R}^{\hat{h}\hat{w} \times \hat{c}}$. After TMU, the tokens go through the MSA and FFN described above, which transform each token to enhance its representational ability. The enhanced features are projected back to the size of the input slice, and the features of the $n$ Transformer scales are concatenated back to the original size $h \times w \times c$. A residual connection wraps each Transformer. After a global average pooling (GAP) layer, the Transformer head output is pooled and delivered to different loss functions according to the type of Transformer head. The internal structure of the Transformer head is shown in Fig. 5.
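The following is a minimal PyTorch sketch of this pipeline, assuming for simplicity that the convolutional tokenization preserves spatial size and that a standard Transformer encoder layer plays the role of MSA + FFN; all names and defaults are illustrative:

```python
import torch
import torch.nn as nn

class TransformerHeadSketch(nn.Module):
    """Split F channel-wise into n slices, tokenize each slice with a
    convolution, enhance the tokens with a Transformer (MSA + FFN),
    concatenate the n slices back to h x w x c, and add a residual."""

    def __init__(self, c: int = 1024, n: int = 4, heads: int = 8):
        super().__init__()
        self.n = n
        cs = c // n  # channel scale c_bar of each slice
        self.tokenize = nn.ModuleList(
            nn.Conv2d(cs, cs, kernel_size=3, padding=1) for _ in range(n))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=cs, nhead=heads, batch_first=True)
            for _ in range(n))

    def forward(self, feat):                              # feat: (B, c, h, w)
        B, _, h, w = feat.shape
        outs = []
        for conv, block, slice_ in zip(self.tokenize, self.blocks,
                                       feat.chunk(self.n, dim=1)):
            t = conv(slice_)                              # conv tokenization
            x = t.flatten(2).transpose(1, 2)              # (B, h*w, c/n) tokens
            x = block(x)                                  # MSA + FFN
            outs.append(x.transpose(1, 2).reshape(B, -1, h, w))
        return feat + torch.cat(outs, dim=1)              # residual connection
```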

    4 Experiment

All training is conducted in PyTorch on one NVIDIA A40 GPU, while testing is conducted on one NVIDIA 3070Ti GPU. The original image is resized to 900 × 1500 and passed through ResNet-50 up to stage 4 as the input. The source code and implementation details can be found at https://github.com/FitzCoulson/DTHN/tree/master.

Figure 5: The internal structure of the Transformer head

    4.1 Datasets and Metrics

We conduct our experiments on two widely used datasets. The CUHK-SYSU dataset [13] contains images of 18184 scenes with 8432 identities and 96143 bounding boxes. The default gallery setting contains 2900 testing identities in 6978 images with a gallery size of 100. The PRW dataset [6] collects 11816 video frames from 6 cameras, divided into a training set with 5704 frames and 482 identities and a testing set with 2057 query persons in 6112 frames.

We evaluate our model following the standard evaluation metrics. Following the Cumulative Matching Characteristic (CMC), a detected box is considered correct only when its IoU with the ground truth exceeds 0.5. We therefore use Recall and Average Precision (AP) as the performance metrics for person detection, and mean Average Precision (mAP) and top-1 score for person re-ID. For all metrics, higher is better.

$$AP = \sum_{n} (R_n - R_{n-1})\, P_n, \qquad mAP = \frac{1}{C} \sum_{c=1}^{C} AP_c,$$

where $R_n$ and $P_n$ denote the recall and precision at the $n$-th confidence threshold, and $C$ denotes the number of classes. The top-1 score denotes the accuracy of the highest-ranked match.
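A small sketch of these metrics, assuming the recall/precision pairs are already ordered by confidence threshold so that recall is non-decreasing:

```python
import numpy as np

def average_precision(recalls: np.ndarray, precisions: np.ndarray) -> float:
    """AP as the precision-weighted sum of recall increments:
    AP = sum_n (R_n - R_{n-1}) * P_n."""
    r = np.concatenate(([0.0], recalls))
    return float(np.sum((r[1:] - r[:-1]) * precisions))

def mean_average_precision(ap_per_class: list) -> float:
    """mAP: the mean of AP over all C classes (queries in person search)."""
    return float(np.mean(ap_per_class))
```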

    4.2 Implementation Detail

We take a ResNet-50 pre-trained on ImageNet as the backbone. The batch size is set to 5 during training and 1 during testing. The size of $F$ is set to 14 × 14 × 1024. The number of heads $m$ in MSA is set to 8. The loss weight $\lambda_1$ is set to 10, and the others are set to 1. We use the SGD optimizer with a momentum of 0.9 and train for 20 epochs. The learning rate warms up to 0.003 during the first epoch and decays by a factor of 10 after the 16th epoch. The CQ size of OIM is set to 5000 for CUHK-SYSU and 500 for PRW. The IoU threshold is set to 0.4 in the testing phase.
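A minimal sketch of this schedule in PyTorch; the model variable is a placeholder, and the warm-up is approximated per epoch rather than per iteration:

```python
import torch
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(8, 2)  # placeholder for the DTHN model

# SGD with momentum 0.9; base lr 0.003 with a warm-up during the first
# epoch and a 10x decay after the 16th of 20 epochs.
optimizer = torch.optim.SGD(model.parameters(), lr=0.003, momentum=0.9)

def lr_factor(epoch: int) -> float:
    if epoch == 0:
        return 0.1                 # crude stand-in for the warm-up epoch
    return 1.0 if epoch < 16 else 0.1

scheduler = LambdaLR(optimizer, lr_lambda=lr_factor)

for epoch in range(20):
    # one training epoch over the dataset would run here
    scheduler.step()
```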

    4.3 Ablation Study

We conducted several experiments on the PRW dataset to analyze our proposed method. As shown in Table 2, we test several combinations of box heads and re-ID heads and evaluate their performance.

We set the default box head and re-ID head to ResNet-50 (stage 5) and conduct one experiment, followed by two experiments that set either the box head or the re-ID head to the corresponding Transformer head, and finally one experiment with both heads set to Transformer heads. As Table 2 shows, when using ResNet-50 (stage 5) as both the box head and the re-ID head, both detection and re-ID are at a moderate level. When we change only the box head to a Transformer, detection accuracy does not improve and re-ID accuracy is slightly reduced, so a Transformer box head alone is not effective. When we keep the box head as ResNet-50 (stage 5) and replace the re-ID head with a Transformer, re-ID accuracy increases significantly, which shows that the Transformer can maximize the information extracted from the feature for re-ID. Finally, when we replace both heads with Transformers, detection accuracy is slightly reduced but re-ID accuracy improves significantly with the support of the DTH. Although the Transformer box head reduces detection accuracy, it efficiently extracts valid information and, together with the Transformer re-ID head, improves overall re-ID performance. The Transformer re-ID head clearly enhances re-ID performance in various occlusion scenarios and significantly increases overall re-ID performance.

Therefore, we believe that our DTHN design can fully extract both the box features and the distinctive features of each person for efficient re-ID.

    4.4 Comparison with State-of-the-Art Models

We compare our DTHN with state-of-the-art methods on CUHK-SYSU and PRW, including two-step and end-to-end methods. The results are shown in Table 3.

Table 3: Comparison with SOTA models

Context Bipartite Graph Matching (CBGM) is an algorithm used in the test phase to integrate context information into the matching process. It compares the two most similar targets and uses the Kuhn-Munkres (K-M) algorithm to find the optimal matching with the largest weight.
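A minimal sketch of the maximum-weight matching step using SciPy's Hungarian (Kuhn-Munkres) solver; the function name and matrix layout are illustrative:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_matching(weights: np.ndarray):
    """Maximum-weight bipartite matching, as CBGM uses to match context
    persons in the query image against detections in a gallery image.

    weights: (n_query, n_gallery) similarity matrix.
    Returns matched (query, gallery) index pairs and the total weight.
    """
    row, col = linear_sum_assignment(-weights)  # negate to maximize weight
    total = float(weights[row, col].sum())
    return list(zip(row.tolist(), col.tolist())), total
```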

    The results of using CBGM are shown in Table 4.

Table 4: Comparison with SOTA models using CBGM

The graphical representations of the results on each dataset are shown in Figs. 6 and 7, with mAP on the horizontal axis and top-1 on the vertical axis.

Figure 6: Comparison with SOTA end-to-end models on CUHK-SYSU

    4.4.1 Result on CUHK-SYSU

As shown in the table, we achieve the same 93.9 mAP as the state-of-the-art two-step method TCTS, with a comparable 94.3 top-1 score. Compared with recent end-to-end works, our mAP outperforms AlignPS, SeqNet, and AGWF, and our top-1 score outperforms AlignPS and AGWF. Additionally, with the post-processing operation CBGM, our mAP and top-1 scores improve to 94.9 and 95.3, achieving the best mAP among all methods with a highly competitive top-1 score.

    4.4.2 Result on PRW

The PRW dataset is well known to be more challenging. We achieve 50.7 mAP and 85.1 top-1 scores. Our mAP outperforms all two-step methods. Among end-to-end methods, our mAP and top-1 score outperform AlignPS and SeqNet, while a 2.5 gap remains with AGWF and COAT. Due to its structural advantage, COAT retains state-of-the-art status on the PRW dataset, but the DTHN proposed in this paper still achieves respectable results with fewer parameters and less computation. Moreover, by applying CBGM as a post-processing operation, we obtain a slight gain of 0.9 mAP and a significant gain of 2.5 in top-1 score, further improving the performance of our method and reducing the gap with COAT. This shows that our proposed DTHN is effective on the challenging PRW dataset.

Figure 7: Comparison with SOTA end-to-end models on PRW

    4.4.3 Efficiency Comparison

We compare our efficiency with two end-to-end networks, SeqNet and COAT. All experiments are conducted on an RTX 3070Ti GPU with the PRW dataset. As shown in Table 5, the comparison includes the number of parameters, the multiply-accumulate operations (MACs), and the running speed in frames per second (FPS).

Table 5: Efficiency comparison

Compared with SeqNet and COAT, we significantly reduce the number of parameters while maintaining equivalent MACs and comparable accuracy. In terms of FPS, SeqNet is the fastest at 9.43 because it does not need to compute attention, while we hold a slight speed advantage over COAT, which also computes attention. In summary, our model runs efficiently while delivering good performance.

    4.5 Visualization Analysis

To show the recognition accuracy of DTHN in different scenes, several scenes are selected for demonstration in Fig. 8. A green bounding box indicates a detection whose similarity exceeds 0.5.

Person search is difficult for several reasons, such as camera distance, occlusion, resolution, complex backgrounds, and lighting conditions. Thanks to the DTH structure, DTHN extracts the features of the target well. The visualization demonstrates the model's ability to make sound judgments in a variety of difficult situations, confirming its effectiveness.

The network takes the query picture as the target and searches for the person in the gallery. In case (1), the target is a girl dancing on a dance floor. Despite the dim lighting and dance movements that may make the target difficult to recognize, the model still finds the target among the many dancers in the scene. In case (2), the target is a young man with a suitcase that covers the lower half of his body. Despite the missing lower-body information, the model can still locate the target in a crowded scene based on the available information, even with the target's back toward the camera. In case (3), the target is a man with his back to the camera. In the absence of frontal information, the model identifies the target well from other cues such as clothing. In a similar back-view scene where the target has removed an outer layer of clothing, the model is still able to recognize the target correctly.

    5 Conclusion and Outlook

Having identified the challenges of occlusion and efficiency in end-to-end person search, we propose DTHN to address them. We use two Transformer heads to handle box detection and re-ID separately, performing high-quality bounding box feature extraction and high-quality re-ID feature extraction. DTHN outperforms existing methods on the CUHK-SYSU dataset and achieves competitive results on the PRW dataset, demonstrating the method's superior structural design and effectiveness.

Our method is slightly slower than traditional CNN methods because the scaled dot-product attention used by the Transformer consumes more computational resources. However, thanks to the Transformer's compact size, we reduce the number of parameters compared with traditional CNNs, which gives us hope for deployment on terminal devices. Despite the good results, we believe there is still room for improvement, whether through better and more convenient attention computation or through adaptive attention mechanisms. Eventually, we may be able to create a pure Transformer model that uses different attention heads of a single Transformer to accomplish different tasks. This is the main focus of our future work. We believe that the deployment of person search on terminal devices is just around the corner.

Acknowledgement: We thank our laboratory colleagues for their support of this work.

Funding Statement: This research is supported by the Natural Science Foundation of Shanghai under Grant 21ZR1426500, and the National Natural Science Foundation of China under Grant 61873160.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Cheng Feng; data collection: Cheng Feng; analysis and interpretation of results: Cheng Feng; draft manuscript preparation: Cheng Feng, Dezhi Han, Chongqing Chen. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data that support the findings of this study are available from the corresponding author, Cheng Feng, upon reasonable request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
