
PanopticUAV: Panoptic Segmentation of UAV Images for Marine Environment Monitoring


Yuling Dou, Fengqin Yao, Xiandong Wang, Liang Qu, Long Chen, Zhiwei Xu, Laihui Ding, Leon Bevan Bullock, Guoqiang Zhong and Shengke Wang★

1School of Computer Science and Technology, Ocean University of China, Qingdao, 266100, China

2North China Sea Environmental Monitoring Center, State Oceanic Administration, Qingdao, 266000, China

3Department of Informatics, University of Leicester, Leicester, LE1 7RH, UK

4Research and Development Department, Shandong Willand Intelligent Technology Co., Ltd., Qingdao, 266102, China

    ABSTRACT

UAV-based marine monitoring plays an essential role in marine environmental protection because of its flexibility, low cost and easy maintenance. In marine environmental monitoring, the visual similarity between objects such as oil spills and the sea surface, or Spartina alterniflora and algae, is high, and general segmentation algorithms perform poorly, which poses new challenges for the segmentation of UAV marine images. Panoptic segmentation performs object detection and semantic segmentation simultaneously and can therefore handle the polymorphism of objects in UAV ocean images. Currently, there are few studies on UAV marine image recognition with panoptic segmentation. In addition, there are no publicly available panoptic segmentation datasets for UAV images. In this work, we collect and annotate UAV images to build a panoptic segmentation UAV dataset named UAV-OUC-SEG and propose a panoptic segmentation method named PanopticUAV. First, to deal with the large intraclass variability in scale, deformable convolution and the CBAM attention mechanism are employed in the backbone to obtain more accurate features. Second, because marine images are complex and diverse, boundary masks derived from the ground truth with the Laplacian operator are merged into the feature maps to improve boundary segmentation precision. Experiments demonstrate the advantages of PanopticUAV over most other advanced approaches on the UAV-OUC-SEG dataset.

    KEYWORDS

Panoptic segmentation; UAV marine monitoring; attention mechanism; boundary mask enhancement

    1 Introduction

Deep learning has caused a stir in various fields [1–4] and has, in recent years, gradually been applied to marine environmental monitoring. Panoptic segmentation based on deep learning [5] was first proposed by Facebook AI Research and Heidelberg University in Germany. Combined with UAV technology, it compensates for the deficiencies of remote sensing images and can accurately identify the distribution range of marine objects by using different flight altitudes and angles. As a result, the demand for intelligent analysis of the data collected by drones is increasing. As shown in Fig. 1, UAV images containing polymorphic objects pose a major challenge to UAV marine monitoring, and traditional object detection algorithms cannot identify such targets accurately.

Panoptic segmentation is a full-pixel visual analysis task that divides the input image into different regions [6,7] according to certain criteria, unifying instance segmentation [8,9] and semantic segmentation [10,11]. The task distinguishes thing classes from stuff classes. Things are countable foreground objects, such as cars, pedestrians, animals and tools, whereas stuff refers to uncountable background, such as grass, sky and roads. Existing methods mainly target realistic scenarios, such as autonomous driving datasets [12,13], and have achieved outstanding results. Panoptic segmentation can identify the shapes of different objects well [14]. Therefore, applying a panoptic segmentation algorithm to image analysis may have broader application and greater significance in UAV scenes. However, most existing panoptic segmentation methods are designed for real-world scenes, such as driverless technology, and those images differ greatly from ocean images captured by drones [15]. UAV visual data [16] contain polymorphic objects with large scale variation, and many small objects may be missed during panoptic segmentation. Therefore, we need to design stronger feature representation networks to improve the segmentation accuracy of UAV marine images.

Because of the above problems, we propose a new method called PanopticUAV. First, we select ResNet50 [17] to extract features. The polymorphic objects and large intraclass variability require scale and receptive field adaptation; because a standard CNN extracts features with a fixed receptive field, we employ deformable convolution [18] in ResNet50 to obtain feature maps with a flexible receptive field. Second, we use FPN [19] to fuse features. The objects are complex and diverse, which leads to inaccurate boundary information, so we generate boundary masks from the ground truth and merge them into the P5 feature maps to improve boundary segmentation precision. Then, we combine context information to fuse features in UAV scenarios, where targets are small and features are few; specifically, we use the CBAM [20] attention mechanism for further feature fusion and to reduce false detections. Finally, we obtain a high-resolution feature map from FPN [19] and generate encoded features by convolution. Additionally, the kernel generator produces a kernel weight for each object instance or stuff category, and the kernel weight is multiplied by the encoded feature to obtain the segmentation result for each thing and stuff. Our method performs well on the collected UAV dataset, UAV-OUC-SEG, and extensive experiments verify the effectiveness of PanopticUAV for UAV sea images. In short, the main contributions of this article are as follows:

1. We collect and annotate UAV sea images, forming a dataset, UAV-OUC-SEG.

2. We propose a panoptic segmentation method for UAV marine image recognition, named PanopticUAV. To some extent, it addresses the problem that traditional object detection algorithms cannot accurately identify polymorphic objects.

3. Boundary mask enhancement and the CBAM attention mechanism are integrated into PanopticUAV for better application to UAV marine image recognition.

    2 Related Work

This section presents work related to panoptic segmentation and UAV image parsing. In Section 2.1, we introduce the role of panoptic segmentation and existing panoptic segmentation models, together with their advantages and disadvantages. In Section 2.2, we introduce and analyze research related to UAV image parsing.

    2.1 Panoptic Segmentation

Panoptic segmentation is a global, uniform, pixel-level approach that assigns a semantic label to each pixel and distinguishes individual IDs within the same semantic class [14]; its evaluation metric was proposed by [5]. Owing to its comprehensiveness and unified formulation, panoptic segmentation has attracted many researchers and produced much excellent work.

In terms of deep learning [21,22] framework design, existing methods can be split into three forms: top-down, bottom-up, and unified. Most advanced approaches address the panoptic segmentation problem from a top-down perspective. Specifically, PanopticFPN [23] uses Mask R-CNN [8] to extract overlapping instances and then adds a semantic segmentation branch; postprocessing is used to resolve mask overlaps and fuse the semantic and instance segmentation results. Because the two sets of results overlap and require postprocessing, such two-stage methods are generally slower. UPSNet [7] attaches a panoptic head to the end of PanopticFPN [23] to better integrate the stuff and thing branches. In contrast, bottom-up methods add an instance segmentation branch to a semantic segmentation method. DeeperLab [6] puts forward a single-shot, bottom-up approach for whole-image parsing: a fully convolutional network extracts features for instance and semantic segmentation, and a fusion step merges the two kinds of information. As the instance information is category-agnostic, the category corresponding to each mask is selected by voting. Panoptic-DeepLab [24] uses an encoder backbone shared by semantic and instance segmentation, adds atrous convolution to the last block of the backbone to obtain denser feature maps, and adopts a dual-ASPP, dual-decoder structure for feature fusion; finally, the semantic and instance segmentation results are fused to produce the final result. These methods are faster while maintaining accuracy. PanopticFCN [25], a method that handles things and stuff simultaneously, enables accurate end-to-end panoptic segmentation. DETR [26] applies the transformer to computer vision tasks and achieves good panoptic segmentation results. MaskFormer [27] regards panoptic segmentation as a mask classification task and adopts a transformer decoder module for mask prediction.

These methods aim at image parsing in natural scenes, such as COCO, and driverless scenes, such as Cityscapes. UAV sea images, with their morphological diversity, large scale variation, and small targets, present new challenges for panoptic segmentation, and new networks need to be designed to solve the problems that exist in UAV scenes.

    2.2 UAV Image Parsing

Image parsing decomposes images into constituent visual patterns, such as textures and detected targets, covering segmentation, detection, and recognition tasks. Image parsing was first performed with a Bayesian framework [28]. With the instance-aware panoptic quality (PQ) metric introduced into several benchmarks, panoptic segmentation has become a task of comprehensive image understanding, which is also of great importance for practical applications.

Drones play an essential role in marine environmental monitoring because of their flexibility and convenience, and how to efficiently and accurately understand UAV sea images is a pressing issue. Existing image parsing for UAV images mainly focuses on object detection [29], crowd counting [30], object tracking [31], etc. In the popular drone dataset VisDrone [32], images are acquired in different cities and are mainly gathered for detection and target tracking tasks in computer vision. Some semantic segmentation datasets, such as FloodNet [33] and UAVid [34], are primarily concerned with specific scenarios. Therefore, there is no UAV sea dataset for panoptic segmentation. We collect and label UAV images, forming a dataset, UAV-OUC-SEG, which will be published shortly.

    3 PanopticUAV

We provide a detailed description of the PanopticUAV approach. First, we summarize the overall structure of PanopticUAV, then describe each component: the backbone network, boundary mask enhancement, the attention mechanism, and the prediction head. Finally, we define the loss function.

    3.1 Overview

Fig. 2 shows the structure of PanopticUAV. First, we use ResNet50 as the backbone to extract features. Because a standard CNN extracts features with a fixed receptive field, we employ deformable convolution in ResNet50 to obtain a flexible receptive field. Second, we use FPN to fuse the features P2, P3, P4, and P5. We then generate the P5 boundary mask from the ground truth with the Laplacian operator and merge it with the P5 feature maps to improve boundary segmentation precision. After that, we use the CBAM attention mechanism to combine context information in the P2, P3, P4, and P5 features. Finally, we obtain high-resolution feature maps from AP2, AP3, AP4, and AP5, and generate encoded features by convolution. In addition, the kernel generator produces a kernel weight for each object instance or stuff category. We multiply the kernel weight by the encoded feature to obtain the segmentation result for each thing and stuff.

    3.2 Network Pipeline

    We introduce the network pipeline for PanopticUAV,including the details of the implementation of each part and the corresponding loss functions.

3.2.1 Backbone

We use ResNet50 as the common backbone, but UAV sea images present rich morphological diversity and many small objects. Traditional CNNs use a fixed-size kernel to extract features, which limits the shape of the receptive field. Therefore, we use deformable convolution in ResNet50, which adds an extra offset parameter to each element of the convolution kernel, allowing the kernel to cover a wider range during training.
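As a concrete illustration (a minimal sketch rather than the authors' released code, assuming PyTorch and torchvision), the pattern is an auxiliary convolution that predicts a 2D offset for every position of a 3×3 kernel, which torchvision's DeformConv2d then uses to sample the input at irregular locations:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformConvBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # A plain convolution predicts a 2D offset (dx, dy) for each of the 3x3 kernel positions.
        self.offset_conv = nn.Conv2d(in_ch, 2 * 3 * 3, kernel_size=3, padding=1)
        nn.init.zeros_(self.offset_conv.weight)   # start from the regular sampling grid
        nn.init.zeros_(self.offset_conv.bias)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        offset = self.offset_conv(x)           # (N, 18, H, W): learned sampling offsets
        return self.deform_conv(x, offset)     # samples the input at irregular, learned positions

features = DeformConvBlock(256, 256)(torch.randn(1, 256, 64, 64))  # e.g., one mid-level feature map
```

In PanopticUAV the offsets are learned inside the ResNet50 stages; the block above only illustrates the offset-prediction pattern.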

Deformable convolution performs both the deformable convolution and pooling operations in two dimensions on the same channel. The regular convolution operation can be divided into two main parts: (1) sampling the input feature map using a regular grid R and (2) performing a weighted summation, where R determines the size of the receptive field.

For position $P_0$ on the output feature map, the calculation is performed by the following equation:
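Following the standard formulation of deformable convolution [18], and writing $y$ for the output feature map,

$$ y(P_0)=\sum_{P_n\in R} W(P_n)\cdot X(P_0+P_n) $$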

In this equation, $R$ denotes the 3×3 sampling grid, $X$ the input feature map, $W$ the corresponding weight, and $P_n$ enumerates the positions of the kernel grid around $P_0$. Deformable convolution differs from conventional convolution in that it adds an offset $\Delta P_n$ to each sampling point. The offset is generated from the input feature map by an additional convolution and changes the shape of the receptive field, as shown in the following equation:
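Adding the learned offset to each sampling position, the deformable convolution becomes

$$ y(P_0)=\sum_{P_n\in R} W(P_n)\cdot X(P_0+P_n+\Delta P_n) $$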

With deformable convolution, the sampled positions become irregular because of the introduced offset $\Delta P_n$, which is usually fractional. We therefore sample the feature map via bilinear interpolation, which is given by the following equation:
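In the standard bilinear-interpolation form of [18],

$$ X(p)=\sum_{q} G(q,p)\cdot X(q) $$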

Here, $p$ represents an arbitrary fractional position ($p = P_0 + P_n + \Delta P_n$), $q$ traverses all integer spatial positions of the feature map $X$, and $G(\cdot,\cdot)$ is the two-dimensional bilinear interpolation kernel, which can be separated into two one-dimensional kernels.

3.2.2 Boundary Mask Enhancement

We obtain feature maps P2, P3, P4, and P5 from the ResNet50 backbone and FPN. However, because the images are complex and diverse, the predicted boundaries are inaccurate. We therefore obtain a boundary mask from the ground truth using the Laplacian operator in Eq. (5) and merge the boundary information into a feature map by computing a boundary loss. Each of P2, P3, P4, and P5 can be supervised with the boundary mask; our experiments show that fusing the boundary information with the P5 feature map yields the best results.

Inspired by BiSeNetV2 [10], we use a Laplacian kernel L to generate the boundary mask. We select three different strides (2×, 4×, and 8×) to obtain boundary maps at different scales, which are then upsampled to a uniform size and fused with a 1×1 convolution. P5 is processed by a 3×3 convolution, batch normalization, ReLU, and another 3×3 convolution, and its shape is matched to that of the boundary mask through bilinear interpolation. Finally, we jointly optimize the predicted P5 boundary against the boundary mask with binary cross-entropy and Dice losses.
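Taking the boundary loss as the plain sum of the two terms (a simple combination consistent with the description above),

$$ L_{bd}=L_{bce}(p_d,g_d)+L_{dice}(p_d,g_d) $$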

where $p_d$ denotes the predicted P5 boundary feature, $g_d$ the corresponding ground-truth boundary mask, $L_{dice}$ the Dice loss, and $L_{bce}$ the binary cross-entropy loss.

During the training phase, we use the boundary mask enhancement module to obtain better weights. However, we remove this module during the inference phase.
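A minimal sketch of the boundary-mask generation described above, assuming single-channel ground-truth masks and PyTorch; the paper's 1×1 fusion convolution is replaced here by a simple average for brevity:

```python
import torch
import torch.nn.functional as F

LAPLACIAN = torch.tensor([[-1., -1., -1.],
                          [-1.,  8., -1.],
                          [-1., -1., -1.]]).view(1, 1, 3, 3)

def boundary_mask(gt: torch.Tensor) -> torch.Tensor:
    """gt: (N, 1, H, W) ground-truth mask; returns a soft boundary map in [0, 1]."""
    maps = []
    for stride in (2, 4, 8):                                # the three strides of Section 3.2.2
        down = F.avg_pool2d(gt, kernel_size=stride)         # stride-s view of the mask
        edge = F.conv2d(down, LAPLACIAN, padding=1).abs()   # Laplacian response highlights edges
        maps.append(F.interpolate(edge, size=gt.shape[-2:],
                                  mode='bilinear', align_corners=False))
    fused = torch.cat(maps, dim=1)          # the paper fuses the scales with a 1x1 convolution;
    return fused.mean(1, keepdim=True).clamp(0, 1)  # a simple average stands in for it here
```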

3.2.3 Attention Module

UAV sea images contain objects with rich morphological diversity, and some targets are not always segmented accurately. To reduce the error rate, it is important to incorporate context information. An attention module can make objects more distinctive, which is why we use the CBAM network for further feature fusion. The CBAM attention mechanism consists of a channel attention module and a spatial attention module, as shown in Fig. 2. The channel attention mechanism (green) first weights the input features, and the result is then fed into the spatial attention mechanism (purple), which applies a final spatial weighting. By applying CBAM, the feature map is reweighted in both channel and space, which greatly improves the interconnection of features across channels and spatial positions and is more conducive to extracting effective features of the target.

In the channel attention module, the input features F0 are subjected to global max pooling and global average pooling over the spatial dimensions (width and height). The two pooled descriptors are then fed to a shared MLP, summed, and activated by a sigmoid. The resulting channel attention map is multiplied elementwise by the input features F0 to generate the features Fc required by the spatial attention module.
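In the standard CBAM formulation [20], this channel attention step can be written as

$$ F_c=\sigma\big(\mathrm{MLP}(\mathrm{AvgPool}(F_0))+\mathrm{MLP}(\mathrm{MaxPool}(F_0))\big)\otimes F_0 $$

where $\sigma$ is the sigmoid function and $\otimes$ denotes elementwise multiplication.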

The spatial attention module takes the features Fc as input. First, global max pooling and global average pooling are performed along the channel dimension, and the two resulting maps are concatenated along the channel axis. A convolution followed by a sigmoid then generates the spatial attention map, which is multiplied by the input features Fc to obtain the final features Fs.

Following the CBAM module, we obtain different feature maps, AP2, AP3, AP4, and AP5.
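For reference, a compact CBAM block matching the description above can be sketched as follows (layer sizes such as the reduction ratio are illustrative assumptions, not the authors' exact configuration):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: a shared MLP over the global average- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))
        # Spatial attention: a 7x7 convolution over the channel-wise average and max maps.
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, f0: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = f0.shape
        avg = self.mlp(f0.mean(dim=(2, 3)))                    # (N, C) from average pooling
        mx = self.mlp(f0.amax(dim=(2, 3)))                     # (N, C) from max pooling
        fc = torch.sigmoid(avg + mx).view(n, c, 1, 1) * f0     # channel-refined features Fc
        spatial_in = torch.cat([fc.mean(dim=1, keepdim=True),
                                fc.amax(dim=1, keepdim=True)], dim=1)
        return torch.sigmoid(self.spatial(spatial_in)) * fc    # final features Fs

ap5 = CBAM(256)(torch.randn(1, 256, 32, 32))  # e.g., refining the P5 feature map into AP5
```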

3.2.4 Prediction Head

We integrate the high-resolution features from AP2, AP3, AP4, and AP5 and generate encoded features through a convolution layer. Then, a kernel generator produces kernel weights for both things and stuff from each feature level APi. To identify each object class and position in the features APi, we use the object center to locate each individual thing. Stuff, however, cannot be separated into individual instances, so we represent each location by the entire stuff region for stuff and by the object center for things. Finally, we use a focal loss to compute the position loss.
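Written in the per-level form of PanopticFCN [25] (with $L_i$ and $Y_i$ denoting the predicted and ground-truth position maps of level $i$, and the superscripts th and st distinguishing things and stuff), the position loss is

$$ L_{pos}=\frac{1}{N}\sum_{i}\mathrm{FL}\big(L_i^{th},Y_i^{th}\big)+\sum_{i}\frac{1}{W_iH_i}\mathrm{FL}\big(L_i^{st},Y_i^{st}\big) $$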

where $\mathrm{FL}(\cdot,\cdot)$ represents the focal loss, $N$ is the number of things, and $W_iH_i$ is the area of the stuff region at level $i$.

Then, we fuse the kernel weights belonging to the same object. Finally, each kernel weight is multiplied by the encoded feature to obtain the mask of the corresponding instance. The predicted masks are compared against the ground truth using a Dice loss.
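Averaging the Dice loss over the predicted masks gives

$$ L_{seg}=\frac{1}{N}\sum_{j=1}^{N}\mathrm{Dice}\big(P_j,Y_j^{seg}\big) $$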

where $Y_j^{seg}$ refers to the ground truth for the $j$-th predicted mask $P_j$, and $N$ denotes the number of objects.

3.2.5 Loss Function

We calculate the position loss, boundary mask loss, and segmentation loss in the training stage. Therefore, the total loss function is as follows:
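Writing the boundary-mask weight as $\lambda_{bd}$, the total loss is the weighted sum

$$ L_{total}=\lambda_{pos}L_{pos}+\lambda_{bd}L_{bd}+\lambda_{seg}L_{seg} $$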

where $\lambda_{pos}$, $\lambda_{bd}$, and $\lambda_{seg}$ are set to the constants 1, 2, and 1, respectively, in our work.

    4 Experiments

We first present the UAV-OUC-SEG dataset and the experimental settings for PanopticUAV. Then, we report an ablation study on UAV-OUC-SEG to reveal the effect of each component. Finally, comparisons with previous methods on UAV-OUC-SEG are presented.

    4.1 Dataset

At present, there are no existing UAV datasets available for panoptic segmentation of ocean scenarios. To address this gap, we collected and produced a UAV sea image dataset called UAV-OUC-SEG, which contains a total of 813 images: 609 in the training set, 103 in the validation set, and 101 in the testing set. The dataset includes 18 categories, consisting of 7 thing classes (person, bicycle, car, hydrant, trash bin, wheelchair, and boat) and 11 stuff classes (sakura, road, sidewalk, vegetation, building, enteromorpha, sea, sky, land, oil, and seaweed). The dataset covers various scenarios, including campus, enteromorpha, oil spill, and seagrass, which are particularly important for marine environmental monitoring.

    4.2 Experimental Setting

Our training and evaluation were run on four NVIDIA RTX 3090 GPUs. The software versions were Python 3.7, PyTorch 1.8.0, CUDA 11.1, and NumPy 1.21.2. ResNet50 with the deformable convolution strategy was used as our backbone, initialized with weights pretrained on ImageNet, and FPN was used to fuse the features. We trained with an SGD optimizer and an initial learning rate of 0.001, using the polynomial learning rate policy, in which the current learning rate equals the initial one multiplied by (1 − iter/max_iter)^power; the power was 0.9, and max_iter was 90000. Due to GPU memory limitations, the batch size was 4; momentum was 0.9, and weight decay was 0.0001. We used the panoptic segmentation metric PQ to evaluate the model.
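For illustration, the optimizer and polynomial learning-rate policy described above can be set up as follows (the Conv2d model is only a stand-in for the actual network):

```python
import torch

model = torch.nn.Conv2d(3, 16, 3)            # stand-in for the PanopticUAV network
max_iter, power = 90000, 0.9
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0001)
# Poly policy: lr(iter) = initial_lr * (1 - iter / max_iter) ** power
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lr_lambda=lambda it: (1.0 - it / max_iter) ** power)
```

scheduler.step() is then called once per training iteration, so the learning-rate factor decays from 1 toward 0 over the 90000 iterations.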

    4.3 Ablation Study

We conducted several experiments to evaluate the effectiveness of our model. First, we tested the baseline method, PanopticFCN, on the UAV-OUC-SEG dataset. We then added the deformable convolution module, the CBAM module, and the boundary mask enhancement to demonstrate the effectiveness of these components. The results are shown in Table 1. PanopticFCN+D denotes the addition of deformable convolution, which improved PQ and PQst by 1.10% and 2.11%, respectively; however, PQth decreased by 1.32%, possibly because of the small foreground objects in drone scenes. PanopticFCN+B denotes the boundary mask enhancement strategy, which brought additional improvements of 3.15%, 0.25%, and 2.71% in PQ, PQth, and PQst, respectively. These results show that the boundary mask enhancement strategy is effective.

Table 1: Panoptic quality (PQ), segmentation quality (SQ), and recognition quality (RQ). Ablation study: PanopticFCN is the baseline, PanopticFCN+D denotes adding deformable convolution, PanopticFCN+B denotes adding the boundary mask enhancement strategy, and PanopticFCN+A denotes adding the CBAM attention mechanism

Furthermore, PanopticFCN+A represents the CBAM attention mechanism. We observed improvements in PQ, PQth, and PQst over the baseline of 0.85%, 1.52%, and 0.65%, respectively. In particular, the attention mechanism improved the segmentation accuracy for small targets by incorporating contextual information. Overall, our approach, PanopticUAV, achieved the best results on the UAV-OUC-SEG dataset, with PQ, PQth, and PQst of 52.07%, 47.45%, and 52.41%, respectively.

We also experimented with adding the boundary mask enhancement to different feature layers separately; the detailed results are shown in Table 2. The best results are obtained when the boundary mask is added to the P5 feature layer, where we observe improvements of +3.15% PQ, +0.25% PQth, and +2.71% PQst over PanopticFCN, although AP improves by only 0.31%.

Table 2: Boundary mask experiments; P2, P3, P4, and P5 represent feature maps from FPN

Fig. 3 shows the qualitative results of our approach. The first column displays UAV sea images, the second column displays visualization images of PanopticFCN, the third column displays our detection images, and the last column displays the ground truth. In the first row, we observe that the baseline method fails to detect land and buildings, while our method successfully detects them; moreover, our method detects scattered Ulva prolifera, which the baseline fails to detect. In the second row, the baseline method erroneously detects Ulva prolifera as a vehicle, leading to false detections; our method addresses this issue. In the third row, we notice that our approach produces more precise boundaries compared to the baseline.

Figure 3: Visualization of detection results on UAV-OUC-SEG. The first column shows the UAV sea images, the second column shows the visualization results of PanopticFCN, the third column shows our results, and the last column shows the ground truth

    4.4 Comparison Experiments

Table 3 shows the main comparative experiments of our proposed method, PanopticUAV, against other panoptic segmentation methods, including UPSNet [7], PanopticFPN [23], and PanopticFCN [25]. In particular, our approach outperforms UPSNet, a top-down method, by 3.13% PQ, although UPSNet's PQth of 56.86% is higher than that of our approach, demonstrating that the top-down approach performs well on things. Compared to the top-down method PanopticFPN, our method achieves a 4.79% improvement in PQ. PanopticFCN is a unified panoptic segmentation method that performs well in driverless scenes, and we added our strategies on top of it; ultimately, our approach, PanopticUAV, achieves a 2.91% improvement in PQ over this baseline.

Table 3: Comparison of PQ between PanopticUAV and UPSNet, Panoptic-DeepLab, and PanopticFCN on UAV-OUC-SEG

    5 Discussion

    UAV-based marine image detection is an excellent complement to traditional marine environmental monitoring,and achieving precise segmentation of sea images acquired by drones is an urgent issue.To improve the segmentation accuracy for UAV sea images,we designed a robust feature representation network that utilizes deformable convolutions to obtain a more flexible receptive field suitable for handling the intraclass variability and small objects commonly found in UAV sea images.Additionally,we incorporated the CBAM attention module to parse images more accurately.However,the segmentation results often suffer from imprecise boundary information due to the complex and diverse nature of the marine scenes.To address this issue,we utilized Laplace operator boundary enhancement to obtain more accurate boundary information that can be fused into the feature maps.

One concern regarding our study is that our method fails to identify some scattered and polymorphic objects in the image. Furthermore, misdetection still occurs in high-resolution drone images with small objects. Fig. 4 illustrates some segmentation failures of our method: the first image divides the objects inaccurately, and the second image mistakes a boat for a road. One possible reason for these limitations is the unevenness and complexity of the dataset; another is that the algorithm needs further improvement. Despite these limitations, our proposed approach, PanopticUAV, outperforms most other advanced approaches on the UAV-OUC-SEG dataset.

    Figure 4 : For some complex scenes and sporadic objects,the segmentation results by PanopticUAV are poor,with problems of missed and false detection

    6 Conclusion

    UAV-based marine monitoring has significant advantages and plays a vital role in marine environmental protection.However,due to the high similarity between some objects in the sea,traditional segmentation algorithms perform poorly,and panoptic segmentation provides a promising solution to this problem.Despite its potential,there is limited research on UAV-based marine image recognition using panoptic segmentation,and no publicly available panoptic segmentation dataset for UAV images exists.To fill this gap,we collected and annotated drone ocean images to create a new dataset called UAV-OUC-SEG.

UAV images containing polymorphic objects pose significant challenges to traditional object detection algorithms, which often fail to identify such objects accurately. To address this issue, we propose a panoptic segmentation method called PanopticUAV, which can identify polymorphic objects accurately to some extent. Our approach incorporates the CBAM attention module to improve the accuracy of image parsing. However, due to the complex and diverse nature of marine scenes, the segmentation results often suffer from imprecise boundary information. To overcome this issue, we utilize Laplacian boundary enhancement to obtain more accurate boundary information that can be fused into the feature maps.

From our observations, PanopticUAV performs well on the UAV-OUC-SEG dataset, but problems remain with sporadic targets and complex scenes, which need further study.

    Acknowledgement: We want to thank “Qingdao AI Computing Center” and “Eco-Innovation Center” for providing inclusive computing power and technical support of MindSpore during the completion of this paper.

Funding Statement: This work was partially supported by the National Key Research and Development Program of China under Grant No. 2018AAA0100400, the Natural Science Foundation of Shandong Province under Grants Nos. ZR2020MF131 and ZR2021ZD19, and the Science and Technology Program of Qingdao under Grant No. 21-1-4-ny-19-nsh.

Author Contributions: The authors confirm their contributions to the paper as follows: Study conception and design: Yuling Dou; Data collection: Fengqin Yao, Liang Qu; Analysis and interpretation of results: Xiandong Wang, Yuling Dou; Draft manuscript preparation: Yuling Dou; Investigation: Long Chen, Zhiwei Xu; Visualization: Laihui Ding; Writing, review and editing: Leon Bevan Bullock; Supervision: Guoqiang Zhong, Shengke Wang. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The datasets used or analysed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
