
    A Road Segmentation Model Based on Mixture of the Convolutional Neural Network and the Transformer Network

    2023-02-26 10:18:22

    Fenglei Xu, Haokai Zhao, Fuyuan Hu, Mingfei Shen and Yifei Wu

    Suzhou University of Science and Technology, Suzhou, 215009, China

    ABSTRACT Convolutional neural networks (CNN) based on U-shaped structures and skip connections play a pivotal role in various image segmentation tasks. Recently, the Transformer has started to lead new trends in image segmentation. A Transformer layer can construct relationships between all pixels, so the two architectures can complement each other well. On the basis of these characteristics, we combine a Transformer pipeline and a convolutional neural network pipeline to gain the advantages of both. The image is fed into a U-shaped encoder-decoder architecture based on an empirical combination of self-attention and convolution, in which skip connections are utilized for local-global semantic feature learning. At the same time, the image is also fed into a convolutional neural network architecture. The final segmentation result is formed by a Mix block that combines both outputs. The mixture model of the convolutional neural network and the Transformer network for road segmentation (MCTNet) achieves effective segmentation results on the KITTI dataset and on the Unstructured Road Scene (URS) dataset built by ourselves. Code, the self-built dataset and trainable models will be available at https://github.com/xflxfl1992/MCTNet.

    KEYWORDS Image segmentation; transformer; mix block; U-shaped structures

    1 Introduction

    Accurate and robust road image segmentation plays a cornerstone role in computer-assisted driving and visual navigation. Thanks to the deep learning revolution, segmentation accuracy has reached impressive levels.

    Motivated by the success of CNN-based classifiers [1–3] and aided by various optimization algorithms [4–6], the task of semantic segmentation has overcome many difficulties. Early researchers used image patches to eliminate redundant computation [7]. The well-known fully convolutional networks [8] extend image-level classification to pixel-level classification. Dilated convolutions were introduced in [9] to perform multi-scale information fusion. The typical U-shaped network, UNet [10], obtained great success in a variety of medical imaging applications. These techniques demonstrate the excellent learning ability of CNNs.

    Currently, although CNN-based methods lead the trend in the field of image segmentation, they still leave room for improvement. Meanwhile, the Transformer is showing revolutionary performance improvements in the CV field. In [11], the vision transformer (ViT) was proposed to perform the image recognition task. Taking image patches as input and using the self-attention mechanism, ViT can even achieve better performance than CNN-based methods. LeViT [12] designed a patch descriptor to improve calculation efficiency while preserving accuracy. CaiT [13] optimized the Transformer architecture, which significantly improved the accuracy of deep Transformers. These methods suggest that CV and NLP may be unified under the Transformer structure, so that the modeling and learning experience of the two fields can be deeply shared, accelerating progress in both.

    Motivated by the Transformer’s success, we take an approach based on a U-shaped encoder-decoder and design a network architecture that combines the convolutional structure and the Transformer structure (MCTNet). This method performs pixel-level road detection segmentation well, for the following reasons:

    • We propose a fusion structure that combines the results of the CNN structure and the Transformer structure. This method gathers the advantages of the CNN’s ability to establish relationships between neighboring pixels and the Transformer’s ability to establish relationships between all pixels.

    • We identify the respective characteristics of the Transformer and the CNN: the Transformer focuses on the main area of the road image, while the CNN focuses on the details of the edge area. Therefore, we design a post-processing step incorporating prior knowledge of the road scene, which improves the accuracy of the road area to a certain degree.

    • We built a harder task on structured and unstructured road area detection to test our method. The dataset we built contains 2000 road scenes, including gravel pavement, soil-covered pavement, water-covered pavement and highways. The experimental evaluation on the KITTI and URS datasets proves our model’s validity.

    2 Related Work

    Road Detection In recent years, increasing research on autonomous driving at home and abroad has accumulated a solid research foundation. OFA-Net [14] used a strategy called “1-N Alternation” to train the model, which fuses features from detection and segmentation data. RoadNet-RT [15] sped up inference by optimizing depthwise separable convolution and non-uniform kernel size convolution. ALO-AVG-MM [16] extracted multiple side-outputs and used filtering to improve network performance. Volpi et al. [17] proposed a new evaluation framework for the online study of segmentation.

    Transformer In the field of NLP, transformer-based methods have achieved state-of-the-art performance on various tasks. The vision transformer (ViT) [11], motivated by the success of the Transformer, achieved an impressive speed-accuracy trade-off in image recognition tasks. DeiT [18] introduced several training strategies to make ViT perform better. Building on these methods, Swin Transformer [19] restricted self-attention calculations to non-overlapping local windows while allowing cross-window connections, which brought greater efficiency and flexibility. Compared with previous work, Swin Transformer significantly reduces computation. Besides, Swin-Unet [20], based on U-net and Swin Transformer, stood out in medical image segmentation tasks.

    Self-Attention to Mix CNN Given their ability to leverage long-range dependence, transformers are expected to help typical convolutional neural networks overcome the inherent shortcomings of their spatial inductive bias. However, most recently proposed Transformer-based segmentation methods use the Transformer only as an auxiliary module to help encode the global context into a convolutional representation, and have not studied how best to combine self-attention (the core of the Transformer) with convolution. nnFormer [21] has an interleaved architecture based on an empirical combination of self-attention and convolution. Reference [22] also presented an interleaved architecture based on self-attention and convolution experience; it achieved SOTA performance on ImageNet-1k classification without bells and whistles.

    For now, studies in the field of autonomous driving have achieved considerable results, and the development of transformer-based methods provides a research foundation for autonomous driving road detection. Therefore, this paper focuses on road detection tasks based on U-net, the Transformer and CNN.

    3 Methodology

    3.1 Architecture Overview

    The overall architecture is presented in Fig. 1. Two main pipelines are trained together; their details are presented in Fig. 2. We follow the base of [10]. The CNN pipeline is divided into an encoding block and a fusion block. The backbone of the CNN pipeline encoder is ResNet-101. The image is down-sampled by convolution, the features are mapped to different scales, and then the image is up-sampled and restored by deconvolution. The extracted context features are fused with multi-scale features from the encoder via the fusion block to complement spatial information.

    Figure 1: Network structure. The pipeline at the top is the transformer-based U-net, and the pipeline at the bottom is the CNN-based U-net

    Our second pipeline is based on Swin-Unet [20], which consists of an encoder, a hall block, a decoder and skip connections. The backbone is inspired by the Swin Transformer block. For the encoder, the images are split into non-overlapping patches to transform the inputs into sequence embeddings. The decoder is composed of Transformer blocks and a patch expanding layer. The skip connections serve the same function as the fusion block. In contrast to the patch merging layers, a patch expanding layer is designed to perform up-sampling. A linear projection layer is then applied to these up-sampled features to output the pixel-level segmentation predictions. The outputs of the two pipelines are sent to the Mix block for further processing to obtain higher accuracy. The hall blocks of both pipelines learn the deep feature representation while keeping the feature dimension and resolution unchanged; the difference is that one uses convolution and the other uses the Swin Transformer block.

    Figure 2: Details of the network structure. In this figure, blue rectangles indicate the down-sampling process, yellow rectangles the up-sampling process, and green rectangles the hall block for deep feature representation

    3.2 Embedding Block

    The embedding block is the common patch processing method for Transformer structures. In this block, patches are encoded as spatial information. Position embedding preserves the location information of the image patches. ViT encodes the position embedding at the input; this is optional for the Swin Transformer, because a relative position encoding is applied when calculating attention. As a result, our embedding block only contains a convolutional layer.
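As a concrete illustration, patch embedding can be implemented as a strided convolution or, equivalently, as a reshape plus linear projection. The NumPy sketch below uses an illustrative patch size of 4 and embedding dimension of 96 (the Swin-T defaults); the random projection weights stand in for learned parameters:

```python
import numpy as np

def patch_embed(image, patch=4, dim=96, seed=0):
    """Split an HxWxC image into non-overlapping patches and linearly
    project each flattened patch to an embedding vector (equivalent to
    a conv layer with kernel size = stride = patch)."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    # (H/p, p, W/p, p, C) -> (num_patches, p*p*C)
    patches = image.reshape(H // patch, patch, W // patch, patch, C)
    patches = patches.transpose(0, 2, 1, 3, 4).reshape(-1, patch * patch * C)
    # Random placeholder for the learned projection matrix.
    W_proj = np.random.default_rng(seed).standard_normal((patch * patch * C, dim)) * 0.02
    return patches @ W_proj  # (num_patches, dim) token sequence

tokens = patch_embed(np.zeros((512, 512, 3), dtype=np.float32))
print(tokens.shape)  # (16384, 96): a 512x512 input yields (512/4)^2 tokens
```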

    3.3 Swin Transformer Block

    Swin transformer block is constructed based on shifted windows. In Fig. 3, two consecutive Swin transformer blocks are presented. With the shifted window partitioning approach, consecutive Swin transformer blocks are computed as:

    $$\hat{z}^{l} = \text{W-MSA}(\text{LN}(z^{l-1})) + z^{l-1},$$
    $$z^{l} = \text{MLP}(\text{LN}(\hat{z}^{l})) + \hat{z}^{l},$$
    $$\hat{z}^{l+1} = \text{SW-MSA}(\text{LN}(z^{l})) + z^{l},$$
    $$z^{l+1} = \text{MLP}(\text{LN}(\hat{z}^{l+1})) + \hat{z}^{l+1},$$

    where W-MSA and SW-MSA denote window-based multi-head self-attention under the regular and shifted window partitioning, respectively. Self-attention within each window is computed as

    $$\text{Attention}(Q, K, V) = \text{SoftMax}\left(QK^{T}/\sqrt{d} + B\right)V,$$

    where $Q, K, V \in \mathbb{R}^{M^2 \times d}$ denote the query, key and value matrices, and $M^2$ and $d$ represent the number of patches in a window and the dimension of the query or key, respectively. The values in $B$ are taken from the bias matrix $\hat{B} \in \mathbb{R}^{(2M-1) \times (2M-1)}$.
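The windowed attention over Q, K, V with relative position bias B can be sketched numerically. The NumPy snippet below uses an illustrative 7×7 window and d = 32, with a random placeholder for B rather than a learned bias:

```python
import numpy as np

def window_attention(Q, K, V, B):
    """Attention(Q, K, V) = SoftMax(QK^T / sqrt(d) + B) V, computed
    inside a single M x M window. Q, K, V have shape (M^2, d); B is
    the (M^2, M^2) relative position bias for this window."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d) + B
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # row-wise softmax
    return attn @ V

M, d = 7, 32                                      # 7x7 window, toy head dim
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((M * M, d)) for _ in range(3))
B = rng.standard_normal((M * M, M * M)) * 0.01    # placeholder bias
out = window_attention(Q, K, V, B)
print(out.shape)  # (49, 32): one output token per patch in the window
```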

    Figure 3: Transformer structure

    3.4 Skip-Connection Block/Fusion Block

    The fusion of deep and shallow information in FCN [8] is performed through the addition of corresponding pixels; we follow this method and apply it as the fusion block of the CNN pipeline. The U-net structure instead fuses through splicing (concatenation). In the addition method, the dimension of the feature map does not change, but each dimension contains more features; for ordinary classification tasks, which do not need to restore the feature map to the original resolution, this is an efficient choice. Splicing retains more dimensional/location information, which allows the subsequent layers to freely choose between shallow and deep features and benefits semantic segmentation tasks. Thus, the skip-connection block of the Transformer pipeline is based on splicing.
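The difference between the two fusion styles can be seen directly from the array shapes. A minimal NumPy sketch with illustrative feature-map sizes:

```python
import numpy as np

# Two feature maps of shape (C, H, W) from corresponding network stages.
shallow = np.ones((64, 32, 32))
deep = np.full((64, 32, 32), 2.0)

# FCN-style fusion (CNN pipeline): element-wise addition keeps the
# dimensions unchanged, but each channel now carries more information.
added = shallow + deep
print(added.shape)    # (64, 32, 32)

# U-net-style fusion (Transformer pipeline skip connection): channel
# concatenation preserves both sources, letting subsequent layers
# freely choose between shallow and deep features.
spliced = np.concatenate([shallow, deep], axis=0)
print(spliced.shape)  # (128, 32, 32)
```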

    3.5 Mix Block

    As shown in Fig. 4, the Transformer pipeline focuses on the main area of road detection. The results of the CNN pipeline have a remarkable road edge detection effect, but contain quite a few false detections in non-road areas. Thus, we design the mix method below:

    where |·| means the absolute value, ω_f and ω_F are weight coefficients with values between 0 and 1, and I_C(x, y) and I_T(x, y) respectively represent the road confidence maps from the CNN pipeline and the Transformer pipeline.

    Figure 4: The results of different pipelines. The first row is the input; the second row is the result of the CNN pipeline; the third row is the result of the Transformer pipeline; and the fourth row is the result of the mix block

    The purpose of the above formula is to fuse the two road confidence maps and enhance the pixels with similar probabilities. The first half of the formula is a general probability fusion formula; the second half adds a consistency term, as marked in formula (6). After the consistency item is strengthened, at any point (x, y), if the two confidence maps have similar values, the consistency item is approximately equal to 1.386. This means that if the judgments of the two confidence maps at point (x, y) are both road, I_C(x, y) will be enlarged. Conversely, if the two confidence maps both show that point (x, y) is a background pixel, the road surface probability of the pixel remains 0 after fusion, because the weighted fusion item on the left is 0. In other cases, when the two confidence maps give two completely different confidence values at (x, y), the value of I_T(x, y) will be suppressed by the consistency item after normalization. A constant factor of 1 is added before the consistency term to preserve the influence of the weighted fusion term, instead of yielding a zero value when fusing two completely different confidence values. Finally, we use prior knowledge to remove some irrelevant pixel predictions, such as areas at the top of the image.
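Since formula (6) itself is not reproduced in this text, the following NumPy sketch is only a hypothetical reconstruction matching the properties described above: the consistency term equals ln 4 ≈ 1.386 when both maps agree on road, the output stays exactly 0 when both maps are background, and disagreement is damped. The functional form here is our assumption, not the paper's actual formula:

```python
import numpy as np

def mix(I_C, I_T, w_f=0.2, w_F=0.15):
    """Hypothetical mix rule (NOT the paper's formula (6)): a weighted
    fusion term scaled by (1 + consistency). The consistency term is
    ln((1+I_C)(1+I_T)) = ln 4 ~= 1.386 when I_C = I_T = 1 (both road),
    0 when both are background, and damped by |I_C - I_T| otherwise."""
    consistency = np.log((1 + I_C) * (1 + I_T)) / (1 + np.abs(I_C - I_T))
    return (w_f * I_C + w_F * I_T) * (1 + consistency)

# Both pipelines agree on road: the fused confidence is enlarged
# (weighted term 0.35 scaled by roughly 2.386).
print(mix(1.0, 1.0))
# Both agree on background: the fused value stays exactly 0.
print(mix(0.0, 0.0))  # 0.0
```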

    4 Experiments

    4.1 Datasets

    KITTI Dataset Our model is mainly evaluated on the KITTI-ROAD dataset, which includes 289 training and 290 testing structured road scenes. We split the original training set for supervised training: the new training and validation sets contain 230 and 59 road scenes, respectively. The ablation study is conducted on this new data split. The final comparison with other existing models is based on the KITTI testing set.

    URS Dataset To further verify our model’s efficiency, we examine the results on a self-built dataset (the Unstructured Road Scene dataset). As shown in Fig. 5, the dataset we built contains 2000 road images of different types, including gravel pavement, soil-covered pavement, water-covered pavement and cement pavement. These images cover almost all types of roads. Unlike other open datasets such as KITTI, this dataset contains a large number of unconventional road images, which provides more abundant and diverse challenges for road segmentation tasks; its applicable scenarios are also wider. We use 1400 images for training, 400 images for validation, and 200 images for testing. The distribution of the number of image types is shown in Table 3.

    Figure 5: Different kinds of images in the self-built dataset. The first row is the RGB image; the second row is the ground truth

    4.2 Implementation Details

    On the KITTI dataset, we train the two main pipelines separately. During network training, the input image size is set to 512 in both pipelines. Data augmentation is realized by rotating or cropping images.

    We train MCTNet with a batch size of 1 on one GPU (GeForce GTX 1080, 8 GB), and use the SGD optimizer with momentum = 0.9 and weight decay = 0.0005. The learning rate is set to 0.001 and decays by a factor of 10 every 6 epochs, over 30 epochs in total. For the Transformer pipeline, the number of encoder and decoder stages is 4 and the numbers of layers are [2, 6, 8, 16]. In the Mix block, ω_f is set to 0.2 and ω_F is set to 0.15 to generate the evaluation result. The loss function is the common cross-entropy function used for semantic segmentation.
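The step schedule described above (initial rate 0.001, divided by 10 every 6 epochs over 30 epochs) can be written as a small helper; the function name is ours:

```python
def lr_at(epoch, base_lr=1e-3, decay=0.1, step=6):
    """Step learning-rate schedule: base_lr multiplied by `decay`
    once per completed block of `step` epochs."""
    return base_lr * decay ** (epoch // step)

# The rate is constant within each 6-epoch block and drops from
# 1e-3 toward 1e-7 in factor-of-10 steps over the 30 epochs.
print([lr_at(e) for e in range(0, 30, 6)])
```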

    For verification on the self-built dataset, the same network settings are used. In the Mix block, ω_f is set to 0.15 and ω_F is set to 0.1 to generate the evaluation result.

    4.3 Ablation Study

    To explore the influence of different factors on accuracy, we conducted ablation studies on the KITTI dataset. Specifically, the optimizer, input sizes, and model scales are discussed below.

    Effect of optimizers, model scales and image sizes: For the CNN pipeline, we explore the effect of different optimizers and weight decay (WD). The experimental results in Table 1 indicate that SGD combined with weight decay obtains better segmentation accuracy. For the Transformer pipeline, we discuss the effect of network deepening on model performance. Similar to [19], we try different scales of the network. Table 1 shows that increasing the model scale improves performance; considering accuracy, we adopt the large-size model to perform road image segmentation. The testing results of the network with 224 and 512 input resolutions are presented in Table 1. As the input size increases from 224 to 512 while the patch size remains 4, the input token sequence of the Transformer becomes larger, and more semantic information is used to improve the ability of the Transformer pipeline. The CNN pipeline shows the same effect, so the experiments in this paper use the 512 resolution scale as the input.
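The growth of the token sequence with input resolution is easy to quantify: with patch size 4, a square input of side `size` yields (size/4)² tokens. A quick check:

```python
def num_tokens(size, patch=4):
    """Number of patch tokens for a square input of the given side length."""
    return (size // patch) ** 2

print(num_tokens(224))  # 3136 tokens at 224x224
print(num_tokens(512))  # 16384 tokens at 512x512, ~5x more context
```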

    Table 1: Ablation study on different pipelines (%)

    Effect of fusion degree: The experimental results in Table 2 indicate that the results of the Transformer pipeline do improve detection accuracy, although most of the effect is contributed by the CNN pipeline. The fusion degree balances the contributions of both and reflects their respective advantages.

    Table 2: Ablation study of the fusion degree (ω_F & ω_f). MIoU is used to evaluate the effect of the mix block

    Table 3: The distribution of the number of road image types

    4.4 Main Results

    Tables 4 and 6 show the performance of our method (MCTNet) on the KITTI test set, compared with the other listed methods. According to Table 4, the proposed model has obvious advantages in road segmentation over other models on the KITTI dataset. Table 5 shows the result of mixing the CNN pipeline and the Transformer pipeline. It can be noted that the MCTNet method is more effective because of the combination of CNN and Transformer.

    Table 4: Main indicator results of BEV segmentation on the KITTI test set (%)

    Table 5: Best indicator results on the KITTI train set (%)

    Table 6: Segmentation accuracy of different methods on the KITTI dataset (%)

    Table 7 shows the performance of our method (MCTNet) on the self-built dataset. As shown in Fig. 6, the performance proves the ability of the network. Obviously, the blurred boundaries, the brightness interference of field light, and the absence of artificial dividing lines make this task even more challenging. Table 8 shows the performance of our method and some classical segmentation models on the URS dataset. Fig. 7 also shows that our method outperforms the others and basically detects the main passable areas of the road. In the future, we will continue experiments to test the effectiveness of our method and the challenge of this dataset.

    Table 7: Segmentation accuracy on the URS dataset. ω_F and ω_f represent the degree of mixing

    Figure 6: The results of different pipelines on the URS dataset. The first row is the input; the second row is the result of the CNN pipeline; the third row is the result of the Transformer pipeline; and the fourth row is the result of the mix block

    Table 8: Segmentation accuracy of different methods on URS dataset

    Figure 7: The results of different methods on the URS dataset. The first row is the input; the second row is the test result of the FCN method on the self-built dataset; the third row is the test result of the PSPNet method on the self-built dataset; and the fourth row is the result of our method (MCTNet)

    4.5 Summary

    Through the above experiments, we naturally find that the convolution module of the CNN and the Swin Transformer block have different representation effects. Combining them in a single network is a promising idea, and some researchers are already experimenting with it, such as [21]. We have tried to make the convolutional layer and the Swin Transformer block appear in the same network at the same time, but have not yet obtained excellent results. We will continue to explore their potential for road segmentation tasks.

    5 Conclusion

    We present a mixed CNN-Transformer method for road segmentation and achieve high performance on the KITTI and self-built datasets. The method makes full use of the CNN’s performance and the Transformer’s advantages in the road segmentation task. To meet the demands of autonomous driving, a post-processing step incorporating prior knowledge of the road scene is also vital. Besides, the Unstructured Road Scene dataset we built presents a unique road segmentation scenario, which brings diversity and applicability. In future research, we will investigate more methods of combining CNN and Transformer for better performance.

    Funding Statement: This work is supported by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX21_1427) and the General Program of Natural Science Research in Jiangsu Universities (21KJB520019).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
