
    Full Scale-Aware Balanced High-Resolution Network for Multi-Person Pose Estimation

Computers, Materials & Continua, 2023, Issue 9

Shaohua Li, Haixiang Zhang, Hanjie Ma, Jie Feng and Mingfeng Jiang

School of Computer Science and Technology, Zhejiang Sci-Tech University, Hangzhou 310018, China

ABSTRACT Scale variation is a major challenge in multi-person pose estimation. In scenes where persons appear at various distances, models tend to perform better on larger-scale persons, while performance on smaller-scale persons often falls short of expectations. Effectively balancing persons of different scales therefore poses a significant challenge. This paper proposes a new multi-person pose estimation model, called FSA Net, to improve performance in complex scenes. Our model uses the High-Resolution Network (HRNet) as the backbone and feeds the outputs of the last stage's four branches into the dilated convolution-based (DCB) module. The DCB module employs a parallel structure that incorporates dilated convolutions with different rates to expand the receptive field of each branch. Subsequently, the attention operation-based (AOB) module performs attention operations at both the branch and channel levels to enhance high-frequency features and reduce the influence of noise. Finally, predictions are made using the heatmap representation. The model can recognize images with diverse scales and more complex semantic information. Experimental results demonstrate that FSA Net achieves competitive results on the MSCOCO and MPII datasets, validating the effectiveness of our proposed approach.

KEYWORDS Computer vision; high-resolution network; human pose estimation

    1 Introduction

Human pose estimation is a hot topic in computer vision, which involves localizing the human body's joints by predicting key points. In recent years, it has been extensively applied in various fields, including but not limited to autonomous driving [1–3], motion tracking [4–6], and intelligent surveillance [7–9].

Two mainstream methods currently prevail in human body key point detection: top-down and bottom-up. The former is a multi-stage approach that typically detects each person with an object detector and then performs single-person pose estimation. The latter is a single-stage method used primarily for multi-person pose estimation, where the standard approach is to directly predict all key points in the image and subsequently classify and aggregate them. The bottom-up method is more challenging: larger-scale person features are typically more prominent and easier for the network to learn during training, while smaller-scale person features are inherently more ambiguous and susceptible to surrounding noise. As such, a key focus of research is enabling the network to better perceive targets of varying scales, which is also the primary focus of our study.

In multi-person pose estimation tasks, the design of the backbone network plays a crucial role, owing to the prevalent use of single-stage structures in mainstream multi-person pose estimation models. The depth, downsampling rate, and receptive field size of the network significantly impact its performance. Although HRNet [10] is a popular choice as the backbone, its greater network depth enhances its ability to handle large targets but potentially neglects small-scale targets. Moreover, an excessive number of downsampling layers can impair the network's capability to detect small-scale targets, affecting HRNet's performance and making it difficult to balance the detection of targets of varying scales.

In summary, we propose a bottom-up approach with HRNet as the backbone network, coupled with our carefully designed post-processing modules, which we refer to as FSA Net. Fig. 1 illustrates the overall structure of FSA Net, which encompasses the DCB and AOB modules within the full-scale aware module (FSAM). In contrast to the previous practice of merging the four branches of the backbone network into one branch by simple upsampling and addition, our method addresses the semantic differences between them and considers the importance of each feature. Specifically, we feed the outputs of the four branches into the DCB module, which comprises four parallel branches with dilated convolutions having dilation rates of 9, 7, 5, and 3, respectively, to enlarge the receptive fields of the various branches. After processing by the DCB module, the four branches' feature maps are cross-stitched to form new feature maps for the four branches. Furthermore, attention operations are performed on the feature maps of each branch along both the branch and channel dimensions to enhance essential features and suppress redundant ones. Finally, we employ the heatmap representation to predict the final key points. The main contributions of this paper are summarized as follows:

• We present FSA Net, a high-resolution human pose estimation network that achieves scale-aware balance by attending to the performance of targets at different scales and extracting more complex semantic information during training. In contrast to other networks that tend to ignore the performance of small-scale targets, FSA Net can achieve a full-scale perception balance.

• We propose the DCB module, which covers targets at all scales through parallel structures and controls the receptive fields of different branches through dilated convolutions, thereby better perceiving targets at different scales.

• We propose the AOB module, which performs attention operations on the feature maps of different branches after concatenating them. Unlike other models that simply add the feature maps of different branches without considering their semantic differences, the AOB module enhances the fusion of multi-scale features by strengthening important features and suppressing noise, thus improving the detection of small-scale targets.

• We evaluated our proposed method on the widely used COCO Validation and test-dev datasets and attained remarkable results. Moreover, the feasibility of our approach was validated through ablation experiments.

    2 Related Studies

In single-person pose estimation, each person is detected separately and there is no scale variation problem, making it much simpler than multi-person pose estimation. This section focuses on the development of multi-person pose estimation. It first provides an overview of the field's overall development, followed by an exploration of the use of high-resolution networks in this task. Finally, it introduces attention mechanisms in multi-person pose estimation.

Figure 1: Simplified overall framework diagram of FSA Net

2.1 Multi-Person Pose Estimation

Multi-person pose estimation involves predicting all the key points in an image and grouping them. Hourglass [11] proposed a cascading pyramid network that performs pose estimation by regressing key points directly. In DeepCut [12], the authors used Faster RCNN [13] for human detection, followed by an integer linear programming method for pose estimation. DeeperCut [14] employed ResNet [15] for body part extraction and image-conditioned pairwise terms for key point prediction, improving accuracy and speed. PersonLab [16] detected all the key points in an image using a box-free approach and then used greedy decoding to cluster the key points by combining the predicted offsets. OpenPose [17] used VGG [18] as the backbone network and proposed Part Affinity Fields to connect the key points. PifPaf [19] outperformed previous algorithms in low-resolution and heavily occluded scenes; its PIF component predicts body parts, while PAF represents the relationships between them.

    2.2 High-Resolution Network

Experimental results have shown that high-resolution feature maps are crucial for achieving superior detection performance in complex tasks such as human pose estimation and semantic segmentation. HRNet attracted considerable attention upon its proposal: the authors employed a parallel connection of multiple sub-networks with varying resolutions, maintaining a high-resolution branch and performing repeated multi-scale fusion so that each high-resolution feature map repeatedly receives information from the other parallel branches of different resolutions, thus generating rich high-resolution representations. Subsequently, the authors proposed HigherHRNet [20], which adopted a bottom-up approach using transposed convolution to expand the last layer's output feature map to half the original image size, leading to improved performance. BalanceHRNet [21] introduced a balanced high-resolution module (BHRM), along with a branch attention module for branch fusion that captures the importance of different branches. Multi-Stage HRNet [22] used a bottom-up approach to parallelize multiple levels of HRNet and employed cross-stage feature fusion to further refine key point prediction. Wang et al. [23] enhanced the network's ability to capture global context information using switchable convolution operations. NHRNet [24] optimized HRNet by cutting high-level features in low-resolution branches and adding attention mechanisms in residual blocks. Khan et al. [25] proposed a network to address the challenges of multi-scale and global context feature loss and applied it to flood disaster response in high-resolution images.

    2.3 Attention Mechanism

The attention mechanism has become a cornerstone of many computer vision tasks, including image classification, object detection, instance segmentation, and pose estimation, owing to its remarkable simplicity and effectiveness. It enables networks to selectively focus on relevant information while ignoring irrelevant noise, resulting in improved performance. In recent years, several attention-based models have been proposed to enhance the feature fusion capability of HRNet. For instance, SaMr-Net [26] introduced dilated convolutions and attention modules to promote multi-scale feature fusion. HR-ARNet [27] added a refinement attention module to the end of HRNet for improved feature fusion. Improved HRNet [28] added a dual attention mechanism to parallel sub-networks to minimize interference caused by unrelated information. Meanwhile, Zhang et al. [29] enhanced the feature expression ability of each branch of HRNet by adding attention modules to different branches. X-HRNet [30] employed a one-dimensional spatial self-attention module that projects the two-dimensional heatmap onto the horizontal and vertical axes, generating two one-dimensional heatmaps for key point prediction.

    3 Proposed Method

Figure 2: Overall framework diagram of FSA Net

3.1 Dilated Convolution-Based Module

Multi-person images are fed into our network, and after feature extraction by the backbone network, the feature maps of four branches are output. These feature maps are then individually passed through the DCB module, which consists of four parallel branches with dilated convolutions of different dilation rates, as illustrated in Fig. 3. Specifically, each branch's feature maps undergo bottleneck processing similar to that of ResNet [15], consisting of a 1 × 1 convolution, a dilated convolution with a branch-specific dilation rate, and another 1 × 1 convolution. The final bottleneck uses a residual operation instead of an activation function.

In HRNet, the limited receptive field and excessive downsampling severely affect the detection of small objects. We therefore propose the DCB module, which precisely controls the receptive field of each branch through a parallel structure. For the branch with the highest resolution, we use dilated convolution with a dilation rate of 9 to expand the receptive field of each pixel. For the branch with the lowest resolution, we limit its receptive field to a smaller range to better perceive small-scale targets, enhancing the perception of targets of various scales.
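To make the branch design concrete, below is a minimal PyTorch sketch of one DCB branch. It is a sketch under stated assumptions: the HRNet-W32 channel widths (32, 64, 128, 256) and the bottleneck reduction factor of 4 are illustrative choices, not values given in the paper; only the 1 × 1 / dilated 3 × 3 / 1 × 1 bottleneck, the residual in place of a final activation, and the dilation rates 9, 7, 5, and 3 come from the text.

```python
import torch
import torch.nn as nn

class DCBBranch(nn.Module):
    """One parallel branch of the DCB module: a ResNet-style bottleneck
    (1x1 conv -> dilated 3x3 conv -> 1x1 conv) that ends with a residual
    addition instead of an activation function."""
    def __init__(self, channels: int, dilation: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            # padding=dilation keeps the spatial size unchanged
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation,
                      dilation=dilation, bias=False),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # residual instead of a final activation

# Dilation rates 9, 7, 5, 3 for the highest- to lowest-resolution branches.
dcb = nn.ModuleList(DCBBranch(c, d)
                    for c, d in zip((32, 64, 128, 256), (9, 7, 5, 3)))
```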

Figure 3: Schematic diagram of the DCB module

    3.2 Attention Operation-Based Module

After processing by the DCB module, the network's perception of objects at all scales is enhanced. However, because their features are relatively blurred, small-scale persons are easily affected by noise. To address this issue, we introduce the AOB module, which performs attention operations on each branch and channel to emphasize important features and suppress noise. This yields significant improvements in detection performance, particularly for small-scale objects. Attention operations on each branch also highlight the semantic differences between branches, enabling the network to better attend to targets of different scales.

Specifically, as shown in Fig. 4, the AOB module takes the output feature maps of the DCB module, denoted as w1, w2, w3, w4, as input. These feature maps are first cross-spliced to generate branch_w1, branch_w2, branch_w3, branch_w4, as shown in Eq. (1):

where i denotes the i-th branch and j denotes the number of branches used. The attention operation is then performed on branch_wi along both the branch and channel dimensions. First, each branch is processed by adaptive average pooling and adaptive max pooling, and the results are summed to obtain an attention score, which is then applied to branch_wi to readjust its weights; the calculation process is shown in Eq. (2):

where AAP stands for the adaptive average pooling operation and AMP stands for the adaptive max pooling operation. Feature fusion is then performed on the attention-weighted feature maps to regenerate the four branches' feature maps; the calculation process is shown in Eq. (3):

where i denotes the i-th branch and j denotes the number of branches used.
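Since Eqs. (1)–(3) did not survive extraction, the following is a minimal PyTorch sketch of the attention step of Eq. (2) as described in the text: the AAP and AMP outputs are summed into an attention score that reweights branch_wi. The sigmoid normalization of the score is our assumption; the paper states only that the two pooled results are summed and applied.

```python
import torch
import torch.nn as nn

class AOBAttention(nn.Module):
    """Per-branch attention of Eq. (2): adaptive average pooling (AAP)
    and adaptive max pooling (AMP) are summed into an attention score
    that rescales the branch feature map channel-wise."""
    def __init__(self):
        super().__init__()
        self.aap = nn.AdaptiveAvgPool2d(1)  # AAP in the text
        self.amp = nn.AdaptiveMaxPool2d(1)  # AMP in the text

    def forward(self, branch_w: torch.Tensor) -> torch.Tensor:
        # Attention score per channel; sigmoid normalization is assumed.
        score = torch.sigmoid(self.aap(branch_w) + self.amp(branch_w))
        return branch_w * score  # readjust the weights of branch_w

x = torch.randn(1, 32, 64, 48)      # (batch, channels, H, W)
print(AOBAttention()(x).shape)      # torch.Size([1, 32, 64, 48])
```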

Figure 4: Schematic diagram of the AOB module

    3.3 Decoder Module

In the last stage of HRNet, only the branch with the highest resolution is selected as the final output, while the contributions of the other branches are ignored. This approach may hurt performance on targets at other scales. To address this issue, our decoding stage fuses the outputs of all branches by upsampling, generating a heatmap at 1/4 the resolution of the original image. This heatmap integrates features from all resolutions and can perceive the key points of persons at different scales. The predicted coordinates are then decoded, and the L2 loss function is used to compare the estimated heatmap with the ground-truth heatmap. During loss calculation in training, since the output has one channel per key point, we compute the loss as the L2 norm at the corresponding key point indices in each channel. As the labels are generated as heatmaps representing a small region, the loss measures the L2 distance between the predicted key points and the key points in the labeled region.
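As a sketch of this decoding scheme, the following PyTorch fragment fuses the four branch outputs into a single 1/4-resolution heatmap and computes the L2 loss. The 1 × 1 projection of each branch to one channel per key point and the bilinear upsampling mode are our assumptions about the fusion details; the channel widths again assume HRNet-W32.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

K = 17  # number of COCO key points

class Decoder(nn.Module):
    """Fuses all branch outputs by upsampling them to the highest
    resolution (1/4 of the input image) and summing, producing one
    heatmap channel per key point."""
    def __init__(self, widths=(32, 64, 128, 256)):
        super().__init__()
        # 1x1 heads projecting each branch to K heatmap channels (assumed).
        self.heads = nn.ModuleList(nn.Conv2d(c, K, kernel_size=1)
                                   for c in widths)

    def forward(self, feats):
        target = feats[0].shape[-2:]  # highest-resolution branch
        maps = [F.interpolate(head(f), size=target, mode="bilinear",
                              align_corners=False)
                for head, f in zip(self.heads, feats)]
        return sum(maps)

def l2_loss(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    # Per-channel L2 distance between predicted and ground-truth heatmaps.
    return ((pred - gt) ** 2).mean()
```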

    4 Experiments

    4.1 Experimental Details

For FSA Net, we adopted a bottom-up approach for multi-person pose estimation. We set the initial learning rate to 0.001 and used the Adam optimizer. The total number of epochs was set to 300, and we trained and evaluated the network on the widely used COCO dataset. Because of the heatmap-based method, a significant amount of memory was required. The experimental hardware and software environment is presented in Table 1.
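A skeleton of this training configuration is sketched below; only the Adam optimizer, the 0.001 learning rate, the 300 epochs, and the L2 heatmap loss come from the text, while the model and the data are dummy stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 17, kernel_size=3, padding=1)  # stand-in for FSA Net
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(300):  # 300 epochs, as stated above
    images = torch.randn(2, 3, 256, 256)    # dummy batch (COCO loader assumed)
    targets = torch.randn(2, 17, 256, 256)  # dummy ground-truth heatmaps
    loss = ((model(images) - targets) ** 2).mean()  # L2 heatmap loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```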

Table 1: The hardware and software configuration environment for the experiments

    4.2 Datasets and Evaluation Metrics

The dataset employed in this experiment is the MSCOCO dataset, which is widely used in human pose estimation. MSCOCO is a large-scale, multi-purpose dataset developed by Microsoft that plays an essential role in mainstream visual tasks. It contains over 200k images, with 250k annotated instances of human key points. The MSCOCO training set consists of 118k images. The test set comprises two subsets: COCO Validation, with 5k images mainly used for quick testing and ablation experiments, and COCO test-dev, with 20k images mainly used for online testing and fair comparison with state-of-the-art models. The primary evaluation metrics on the COCO dataset are average precision (AP) and average recall (AR). AP is calculated as follows (AR is defined analogously over recall):

$AP = \dfrac{\sum_{p} \delta(OKS_p > t)}{\sum_{p} 1}$

where p indexes the detected human instances, t is the threshold used to refine the evaluation metric, and δ(·) equals 1 when its condition holds and 0 otherwise.

When t is taken as 0.5 or 0.75, the metric is denoted AP50 or AP75, respectively. OKS is the object key point similarity, calculated as follows:

$OKS = \dfrac{\sum_{i} \exp\left(-d_i^2 / (2 s^2 \sigma_i^2)\right) \, \delta(v_i > 0)}{\sum_{i} \delta(v_i > 0)}$

where i denotes the i-th key point, d_i denotes the Euclidean distance between the true value and the predicted value, s is the scale of the human instance (s² is its area), v_i indicates whether the key point of the instance is visible, and σ_i is the regularization parameter of the key point. Instances with areas between 32² and 96² are counted as medium (AP_M), and those larger than 96² as large (AP_L).
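For reference, a small NumPy sketch of the OKS computation as reconstructed above; the per-keypoint regularization parameters are passed in as an array.

```python
import numpy as np

def oks(pred, gt, vis, s, sigma):
    """Object key point similarity for one instance.
    pred, gt: (N, 2) predicted / ground-truth coordinates
    vis:      (N,) visibility flags (v_i > 0 means labeled)
    s:        instance scale, with s**2 equal to the instance area
    sigma:    (N,) per-keypoint regularization parameters"""
    d2 = np.sum((pred - gt) ** 2, axis=1)          # squared distances d_i^2
    sim = np.exp(-d2 / (2 * s ** 2 * sigma ** 2))  # per-keypoint similarity
    labeled = vis > 0
    return sim[labeled].sum() / max(labeled.sum(), 1)
```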

In addition, we constructed the Tiny Validation dataset, a subset of the COCO Validation dataset. We labeled images containing persons with an area smaller than 80² as "images with small individuals." After filtering the 5000 images, we found that only 361 images met this criterion, and we curated these images to form the new dataset. The purpose of this dataset is to evaluate the network's detection performance specifically on small individuals.
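A sketch of how such a subset can be selected with pycocotools is shown below. The annotation path is an assumption, and the paper does not say whether the filter uses the segmentation area field or the bounding-box area; this version uses the former.

```python
from pycocotools.coco import COCO

# Keep images containing at least one person whose annotated area < 80^2.
coco = COCO("annotations/person_keypoints_val2017.json")  # path assumed
person_cat = coco.getCatIds(catNms=["person"])
tiny_ids = {ann["image_id"]
            for ann in coco.loadAnns(coco.getAnnIds(catIds=person_cat))
            if ann["area"] < 80 ** 2}
print(len(tiny_ids))  # the paper reports 361 such images
```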

    4.3 Quantitative Experimental Results

In the study of full-scale aware balance, detecting the poses of small-scale persons is the most significant challenge. Traditional pose estimation networks tend to focus more on medium- to large-scale persons and may overlook the performance on small-scale persons. Therefore, we first evaluated FSA Net's detection capability for small persons on the Tiny Validation dataset. Table 2 shows that HigherHRNet [20] exhibits significant performance degradation on the Tiny Validation dataset, with an average loss of 20% in AP, confirming the inference mentioned earlier. Additionally, FSA Net shows a substantial performance improvement: FSANet-W48 achieves an AP of 60.3%, surpassing HigherHRNet-W48 and other bottom-up models. This demonstrates the significant enhancement of our proposed FSA Net in detecting small-scale persons.

Table 2: Comparison with mainstream bottom-up models on the Tiny Validation dataset

In Table 3, we present the results of testing FSA Net on the widely used COCO Validation dataset, where we achieved significant improvements across multiple metrics. Specifically, with HRNet-W32 as the backbone network, FSA Net outperformed HigherHRNet by 1.5% and achieved even greater improvement over networks such as that of [15]. Furthermore, with HRNet-W48 as the backbone, the network's AP reached 71.2, which is 1.3% higher than HigherHRNet. Notably, we observed varying degrees of improvement in both AP_M and AP_L, underscoring the enhanced performance of our method across different scales.

Table 3: Comparison with mainstream bottom-up models on the COCO Validation dataset. Bold marks the best result in each column. HG is Hourglass, DLA is deep layer aggregation, W32 is HRNet-W32, and W48 is HRNet-W48

In Table 4, we further evaluated our model on the COCO test-dev dataset to strengthen the persuasiveness of our results. Our experiments show that with HRNet-W48 as the backbone network, our method outperformed other mainstream bottom-up models, achieving an AP of 70.8. It is worth noting that HigherHRNet and our work share the similar idea of enlarging the feature map area to better perceive small-scale objects: HigherHRNet upsamples the last output of HRNet through transposed convolution, whereas FSA Net focuses on the performance of each branch, expands the receptive field through parallel-structured dilated convolutions, and suppresses noise through branch attention operations. Our experimental results demonstrate the clear superiority of our proposed approach. In addition, since we process all four branches, the parameter count and computational complexity (GFLOPs) of FSA Net are slightly higher than HigherHRNet's but significantly better than those of other models such as Hourglass.

    4.4 Qualitative Experimental Results

To visually demonstrate the effectiveness of our method, we provide experimental results in Figs. 5 and 6. Specifically, we conducted tests on the COCO Validation dataset with HRNet-W48 as the backbone network. As shown in Fig. 5, our model can easily handle scenarios with rich scale information. Moreover, as illustrated in Fig. 6 (left: original image; middle: HigherHRNet; right: FSA Net), in complex scenes involving self-occlusion, foreground occlusion, very small targets, and complex semantic information, our model predicts more accurately and reasonably than other models.

Figure 5: Visualization of the results of FSA Net on the COCO Validation dataset

Figure 6: Qualitative comparison of FSA Net with other state-of-the-art bottom-up models on the COCO Validation dataset

    4.5 Ablation Experiments

Our method builds upon HRNet by incorporating the DCB and AOB modules. To evaluate the impact of each module, we performed extensive ablation experiments on the COCO Validation dataset, reported in Table 5, adopting the identical training strategy as in previous work. The results reveal that the DCB module boosts AP by 2.5% over the baseline model, while the AOB module achieves a significant increase of 3.1% in AP. Both modules contribute to the overall performance improvement, with the AOB module contributing more. When the two modules are applied jointly, the overall AP is further boosted by 4.2%, attesting to the effectiveness of our proposed approach.

Table 5: Ablation experiments on COCO Validation; Ba is Baseline

    5 Conclusion

This study introduces a novel multi-person high-resolution pose estimation network with full-scale perception balance. Our approach improves upon the HRNet baseline by incorporating the DCB and AOB modules. The DCB module expands the receptive field of each branch's output, while the AOB module performs attention operations to enhance feature representation. We evaluated our method on popular datasets and outperformed the baseline model and similar networks. Additionally, we visualized the experimental results to provide a more intuitive understanding of FSA Net's performance improvements. In our initial attempts, we drew inspiration from [33] and used dilation rates of 4, 3, 2, and 1 for the dilated convolutions in the DCB module. However, the experimental results were not satisfactory. Upon analysis, we realized that precise localization of key points is crucial in pose estimation, so larger dilation rates were needed to achieve a larger receptive field. Ultimately, we achieved success with dilation rates of 9, 7, 5, and 3. Although FSA Net has achieved impressive results, it still faces some challenges. As shown in Table 2, FSA Net achieves a better scale-aware balance than other bottom-up networks, but it has not fully achieved such a balance. There is still significant room for improvement on small-scale targets, which will be the focus of our future research.

Acknowledgement: We thank Associate Professor Haixiang Zhang for his guidance and supervision, Professor Mingfeng Jiang for financial support, and the esteemed editors and reviewers for their invaluable contributions to this article.

Funding Statement: This work is supported in part by the National Natural Science Foundation of China under Grants 61672466 and 62011530130, and by the Joint Fund of the Zhejiang Provincial Natural Science Foundation under Grant LSZ19F010001.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Shaohua Li; data collection: Shaohua Li; analysis and interpretation of results: Haixiang Zhang, Hanjie Ma, Jie Feng; draft manuscript preparation: Mingfeng Jiang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: This experiment uses the public COCO dataset, which is directly available. Because subsequent algorithm improvements are planned, the code can be obtained from the corresponding author upon reasonable request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present research.
