
    Image Semantic Segmentation for Autonomous Driving Based on Improved U-Net

    2023-02-17 03:13:52

    Chuanlong Sun, Hong Zhao, Liang Mu, Fuliang Xu and Laiwei Lu

    College of Mechanical and Electrical Engineering, Qingdao University, Qingdao, 266071, China

    ABSTRACT Image semantic segmentation has become an essential part of autonomous driving. To further improve the generalization ability and robustness of semantic segmentation algorithms, a lightweight network based on the Squeeze-and-Excitation Attention Mechanism (SE) and Depthwise Separable Convolution (DSC) is designed. Meanwhile, Adam-GC, an Adam optimization algorithm based on Gradient Compression (GC), is proposed to improve the training speed, segmentation accuracy, generalization ability and stability of the network. To verify and compare the effectiveness of the proposed network, the trained model is evaluated and compared on the Cityscapes semantic segmentation dataset. The validation and comparison results show that the network achieves 78.02% MIoU on the Cityscapes validation set, which is better than the basic network and other recent semantic segmentation networks. Besides meeting the stability and accuracy requirements, this work has particular significance for the development of image semantic segmentation.

    KEYWORDS Deep learning; semantic segmentation; attention mechanism; depthwise separable convolution; gradient compression

    1 Introduction

    With the combination of Artificial Intelligence (AI) and automobile transportation, autonomous driving [1] has become one of the development strategies of many countries; it involves the Global Positioning System (GPS), Computer Vision (CV) [2] and other advanced technologies. The perception system [3,4] is one of the indispensable parts of an autonomous driving vehicle: its perceptual adaptability and real-time performance directly affect the safety and reliability of the vehicle. Since image semantic segmentation is one of the main tasks of the perception system, its effectiveness directly affects the decision quality of the autonomous driving vehicle.

    In recent years, thanks to large datasets, powerful computing hardware, complex network architectures and optimization algorithms, the application of deep learning to image semantic segmentation has achieved major breakthroughs [5]. At present, semantic segmentation methods for autonomous driving mainly include traditional methods and deep learning-based methods.

    Most traditional semantic segmentation methods are early methods, first applied to the medical field, where scenes are simple and the differences between background and objects are obvious. The main approaches are: threshold-based segmentation methods [6-8], which classify the image gray histogram by setting different gray thresholds; pixels whose gray values fall in the same gray range are considered to belong to the same class and to share a certain similarity, achieving semantic segmentation. Edge-based image segmentation methods [9] compare the gray-value differences between adjacent pixels, regard points with large differences as boundary points and detect them; the pixels at the boundary are connected to form edge contours, achieving segmentation of different regions. Region-based image segmentation methods [10] segment the image by obtaining its spatial information; they classify pixels by their similarity features and group them into regions. Image segmentation methods based on graph theory [11-13] convert the segmentation problem into graph partitioning and complete the segmentation by optimizing an objective function. Most traditional methods only use the surface, shallow-level information of images, which is unsuitable for segmentation tasks that require rich semantic information and cannot meet current research needs. At present, they are often used as a preprocessing step to extract key feature information from the image and improve the efficiency of image analysis.

    In the field of deep learning-based semantic segmentation, the convolutional neural network has become an important means of image processing, as it can fully utilize the semantic information of images. To cope with increasingly complex segmentation scenarios, a series of deep learning-based image semantic segmentation methods have been proposed to achieve more accurate and efficient segmentation and to further extend the application scope of image segmentation. Semantic segmentation based on region classification and semantic segmentation based on pixel classification are the current mainstream deep learning-based methods. The former divides the image into a series of target candidate regions and classifies each target region with a deep learning algorithm, which avoids generating superpixels and effectively improves segmentation efficiency; it is represented by MPA [14], DeepMask [15], etc. The latter directly uses end-to-end deep neural networks to classify pixels, which avoids the problems caused by the defects of candidate-region algorithms; it is represented by DeepLab [16], ICNet [17], U-Net [18], etc.

    As one of the representative semantic segmentation algorithms, U-Net uses an "encoder-decoder" structure to perform feature fusion between feature maps, so that shallow convolutions can focus on texture features and deep convolutions on essential image features. This paper selects the U-Net semantic segmentation algorithm as the basic algorithm for research. In recent years, in studies of the U-Net semantic segmentation network, Huang et al. [19] used full-scale skip connections to replace the long connections of the U-Net model, combining high-level and low-level semantic information to obtain higher segmentation accuracy. Zhong et al. [20] introduced the DenseNet module into the convolutional layers to improve the network's ability to extract features in small areas and to avoid the vanishing-gradient problem. The CRF 3D-U-Net network proposed by Hou et al. [21] used 3D-U-Net and a fully connected conditional random field to segment images coarsely and finely, respectively, which improves the correlation between pixels. Although these improvements to the U-Net model have positive effects, they increase the complexity and running cost of the model and do not exploit the relationships between feature maps. Meanwhile, because of the redundancy of U-Net itself, low segmentation and positioning accuracy easily occur in autonomous driving scenarios. Therefore, to meet the complex environment of autonomous driving scenes and real-time requirements, this paper does the following work based on U-Net:

    (1) Change the convolution method by replacing standard convolution with depthwise separable convolution, which reduces the number of parameters and realizes the separation of channels and regions.

    (2) Introduce the attention mechanism, which enables the network to learn weight information along the feature-channel dimension: the weights of feature channels that benefit network performance are increased and the weights of channels that harm performance are suppressed, improving training efficiency.

    (3) To improve the segmentation accuracy and generalization ability of the semantic segmentation algorithm, operate on the gradient directly and smooth the gradient curve with a suitable gradient compression method. Meanwhile, gradient compression regularizes the weight space and the output features, thereby improving the performance of the algorithm.

    2 Network Structure

    2.1 Lightweight Feature Extraction Network

    Fig. 1 shows the structure of the improved U-Net lightweight feature extraction network in this paper. The feature extraction network is mainly composed of the Depthwise Separable Convolution block DSC-R (Depthwise Separable Convolution-ReLU), the SE Attention block, the Max Pooling layer, the Upsampling block, and the Skip Connection. Among these components, compared with the basic U-Net structure, the improvements made in this paper are: the original standard convolution is replaced by depthwise separable convolution, and the SE attention mechanism module is introduced, which improves the accuracy of feature extraction while making the model lightweight.

    Figure 1: Improved U-Net feature extraction network

    2.2 Activation Function

    The activation function plays an essential role in enabling neural network models to learn and understand complex, nonlinear input characteristics. In this paper, the widely used Leaky ReLU function is used as the activation function.

    Although the traditional ReLU activation function has fast calculation and convergence speeds, when the input is negative the neuron cannot update its parameters because the output is constantly 0. As shown in Fig. 2, compared with the traditional ReLU function, the Leaky ReLU function introduces a small leak slope on the negative half of the input, avoiding the zero-valued derivatives that prevent neurons from updating their parameters when the input is negative.
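To make the difference concrete, here is a minimal NumPy sketch of both functions; the leak slope `alpha = 0.01` is an assumed value, since the paper does not state the Leaky coefficient it uses.

```python
import numpy as np

def relu(x):
    # negative inputs are clamped to 0, so their gradient is 0 as well
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # alpha is a hypothetical leak slope; negative inputs keep a small,
    # nonzero slope so the neuron can still update its parameters
    return np.where(x >= 0, x, alpha * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))        # negative entries become 0
print(leaky_relu(x))  # negative entries are scaled by alpha instead
```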

    Figure 2: Image and formula of the Leaky ReLU activation function

    3 Algorithm Improvement

    3.1 Depthwise Separable Convolution

    The basic assumption of Depthwise Separable Convolution [22] is that the spatial and channel (depth) dimensions of feature maps in convolutional neural networks can be decoupled. Standard convolution uses weight matrices to jointly map spatial and channel-dimensional features, but at the cost of high computational complexity, high memory overhead, and a large number of weight coefficients. Conceptually, Depthwise Separable Convolution reduces the number of weight coefficients while basically retaining the representation-learning ability of the convolution kernel, by mapping the spatial dimension and the channel dimension separately and combining the results.

    Fig. 3 shows the process of Depthwise Separable Convolution. The convolution is divided into depthwise convolution and pointwise convolution. The former applies one convolution kernel to each channel of the input feature map and concatenates the per-channel outputs; a 1 × 1 pointwise convolution then produces the final output. Compared with standard convolution, depthwise separable convolution reduces the operation cost and improves the calculation speed. At the same time, the spatial feature relationships of the image and the feature relationships between channels are computed independently, thereby improving the performance of the semantic segmentation network.
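The saving in weight coefficients can be checked with simple arithmetic. The sketch below compares the parameter counts of a standard convolution and a depthwise separable convolution (biases ignored); the kernel size and channel counts are example values, not taken from the paper.

```python
def standard_conv_params(k, c_in, c_out):
    # a standard k x k convolution jointly maps space and channels
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise part: one k x k filter per input channel
    # pointwise part: a 1 x 1 convolution mixing channels
    return k * k * c_in + c_in * c_out

std = standard_conv_params(3, 64, 128)        # 73728 weights
dsc = depthwise_separable_params(3, 64, 128)  # 576 + 8192 = 8768 weights
print(std, dsc, round(dsc / std, 3))          # ratio is roughly 1/c_out + 1/k^2
```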

    Figure 3: Depthwise separable convolution

    3.2 Attention Mechanism Module

    The relationships between feature-map channels are particularly important in image semantic segmentation, especially in autonomous driving. Therefore, this paper introduces the SE [23] lightweight attention mechanism module. When the up-sampled feature map and the down-sampled feature map are fused, the SE attention mechanism module lets the fusion result focus on the relationships between feature channels; starting from global information, the network increases the weights of feature channels that benefit performance and suppresses the weights of channels that do not. This achieves dynamic calibration of channel information and improves the performance of the semantic segmentation network. Fig. 4 shows the structure of the SE Attention Mechanism module.

    Figure 4: SE attention mechanism module

    The main operations of the SE module are Squeeze and Excitation. The SE module first compresses the input feature map to obtain channel-level global features, then performs the excitation operation on the global features. While learning the relationships between feature channels, it obtains the weight of each feature channel; finally, the weights are multiplied with the input feature map to get the final output. The Squeeze operation can be expressed as follows:

    z = F_sq(f) = (1 / (H × W)) Σ_{i=1..H} Σ_{j=1..W} f(i, j)

    In the formula, F_sq is the Squeeze operation function; f ∈ R^{H×W} is a two-dimensional feature map and f(i, j) is one of its elements; H and W are the height and width of the feature map, respectively; and z is the output of the compression operation.

    The Excitation operation can be expressed as follows:

    s = F_ex(z, W) = σ(W_2 δ(W_1 z))

    In the formula, F_ex is the excitation operation function; σ and δ represent the Sigmoid and ReLU activation functions, respectively; W_1 ∈ R^{(C/r)×C} and W_2 ∈ R^{C×(C/r)} are the weights of the two fully connected layers; C is the number of channels of the feature map; r is the dimensionality-reduction coefficient; and s is the output of the excitation operation.

    After the above operations, the output weight of the excitation operation is multiplied by the original input feature and the output of the SE module is:

    x_c = F_scale(f_c, s_c) = s_c · f_c

    In the formula, F_scale is the channel-wise scale operation; x_c is one element of X, the final output of the SE module, X = [x_1, x_2, ..., x_C].
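Putting the three operations together, a simplified NumPy forward pass of the SE module might look as follows; the weights here are random placeholders for illustration, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(f, w1, w2):
    """SE forward pass for one feature map f of shape (C, H, W).
    w1: (C//r, C) reduction weights; w2: (C, C//r) expansion weights."""
    z = f.mean(axis=(1, 2))                  # Squeeze: global average pool -> (C,)
    s = sigmoid(w2 @ np.maximum(0, w1 @ z))  # Excitation: FC -> ReLU -> FC -> Sigmoid
    return f * s[:, None, None]              # Scale: reweight each channel by s_c

C, H, W, r = 8, 4, 4, 2
f = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = se_block(f, w1, w2)
print(out.shape)  # same shape as the input; channels rescaled by weights in (0, 1)
```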

    3.3 Gradient Compression

    Optimization techniques are of great significance for improving the performance of neural networks. Currently, the optimization methods used in the field of semantic segmentation mainly include BN (Batch Normalization), which operates on the activations, and WS (Weight Standardization), which operates on the weights [24]. In addition to these two aspects, this paper considers directly improving the gradient to make the training process more effective and stable, thereby improving the generalization ability and segmentation accuracy of the semantic segmentation network.

    Among the optimization algorithms that operate on gradients in semantic segmentation, the most common approach is to compute gradient momentum. The main optimization algorithms are Stochastic Gradient Descent with Momentum (SGDM) [25] and Adaptive Moment Estimation (Adam) [26]. According to the literature, the Adam optimization algorithm can dynamically adjust the update step size by using the first-order moment estimate of the gradient (that is, the mean of the gradient) and the second-order moment estimate (the uncentered variance of the gradient), making it more efficient than the SGDM algorithm. To further improve performance and make the method easy to apply, a method is proposed that automatically updates the gradient according to the training epoch on top of the Adam optimizer. It is called Gradient Compression, and the improved optimizer is referred to as Adam-GC for short.

    The formula for Gradient Compression is as follows:

    In the above formula, w_i represents the weight vector, ∇_{w_i}L represents the gradient of the loss function with respect to the weight vector, and μ is the ratio of the current training step t to the total number of training epochs. The gradient smoothing curve in the formula is used to smooth the update process of the weight parameters; its image when σ = 0.4 is shown in Fig. 5.

    Figure 5:Gradient smoothing curve

    The network only needs to obtain the mean of the gradient matrix, subtract this mean from each column vector of the gradient, and then multiply the result by the gradient smoothing coefficient to get the update direction of the optimal weights. The calculation is relatively simple and adds little computational cost to the Adam optimization algorithm. Experiments show that it takes only about 0.5 s more per epoch when training the MNIST handwritten digit recognition dataset with the LeNet convolutional neural network model.
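The centering step described above can be sketched in a few lines of NumPy; the fixed `smooth` value stands in for the paper's epoch-dependent smoothing curve, whose exact form is not reproduced here.

```python
import numpy as np

def compress_gradient(grad, smooth):
    """Center the gradient of one weight matrix (subtract the per-column mean)
    and scale it by the smoothing coefficient. `smooth` is a stand-in for the
    paper's epoch-dependent gradient smoothing curve."""
    centered = grad - grad.mean(axis=0, keepdims=True)
    return smooth * centered

g = np.array([[1.0, 4.0],
              [3.0, 2.0]])
gc = compress_gradient(g, smooth=0.8)
print(gc)              # each column now sums to zero, magnitudes shrunk by 0.8
print(gc.sum(axis=0))  # [0. 0.]
```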

    The above formula can be written in matrix form as follows:

    In the above formula, P represents the projection matrix of the hyperplane whose normal vector in weight space is e; i is the N-dimensional unit vector, I is the N-dimensional identity matrix, and P∇_w L is the gradient projected onto the hyperplane [27]. Projecting the gradient onto the hyperplane compresses the weight space, and since the gradient smoothing curve ranges between 0.6 and 1, it further reduces the projected gradient and compresses the weight space. The gradient compression method in this paper can be simply implemented in the Adam optimization algorithm; Table 1 shows the process of the algorithm.

    In Table 1, ε is a small constant, and the weight update direction is based on the projected gradient. The gradient compression method in this paper can also be explained from the perspective of the projected gradient. Fig. 6 shows the geometric explanation of the Adam optimization algorithm using gradient compression: the gradient is first projected onto the hyperplane determined by iᵀ(w − w_0) = 0, and the weights are then updated along the direction determined by the projected gradient.

    Table 1: The process of the GC algorithm
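Since Table 1 survives here only as a caption, the following NumPy sketch illustrates how one Adam update step could incorporate the compressed gradient. Applying the compression before the moment estimates is our reading of the method, and the fixed `smooth` value again stands in for the epoch-dependent smoothing curve.

```python
import numpy as np

def adam_gc_step(w, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8, smooth=0.8):
    """One Adam update where the raw gradient is first compressed
    (centered per column and scaled), in the spirit of Adam-GC."""
    g = smooth * (grad - grad.mean(axis=0, keepdims=True))  # gradient compression
    m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * g * g    # second-moment estimate
    m_hat = m / (1 - b1 ** t)        # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.zeros((2, 2)); m = np.zeros_like(w); v = np.zeros_like(w)
g = np.array([[1.0, 4.0], [3.0, 2.0]])
w, m, v = adam_gc_step(w, g, m, v, t=1)
print(w)  # first step moves each weight by about lr against the compressed gradient's sign
```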

    Figure 6:Geometric interpretation of Adam-GC

    4 Algorithmic Network Training Settings

    4.1 Semantic Segmentation Dataset

    To better verify the effectiveness of our algorithm, this paper uses the Cityscapes dataset, an urban-scene dataset containing stereoscopic video sequences recorded in the streets of 50 different cities. Besides 20,000 weakly (coarsely) annotated frames, it provides 5,000 frames with high-quality pixel-level annotations. The Cityscapes dataset has two sets of evaluation criteria: fine and coarse. The former uses the 5,000 finely annotated images, while the latter uses the 5,000 finely annotated images plus the 20,000 coarsely annotated images.

    The Cityscapes dataset is designed to: (1) evaluate the performance of vision algorithms on the main tasks of semantic urban scene understanding: pixel-level, instance-level and panoptic semantic labeling; (2) support research aimed at leveraging large amounts of (weakly) annotated data, for example for training deep neural networks. Fig. 7 shows its data files.

    Figure 7: Cityscapes dataset

    4.2 Evaluation Metric

    MIoU (Mean Intersection over Union) is the average intersection-over-union ratio and is the current standard metric for semantic segmentation. It calculates the ratio of the intersection to the union of two sets; in the semantic segmentation problem, the two sets are the ground truth and the prediction. The formula is as follows:

    MIoU = (1 / (k + 1)) Σ_{i=0..k} p_ii / (Σ_{j=0..k} p_ij + Σ_{j=0..k} p_ji − p_ii)

    In the formula, k is the number of semantic segmentation categories, i is the true class, j is the predicted class, and p_ij represents the number of pixels of class i that are predicted as class j.
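The metric can be computed directly from a confusion matrix. Here is a small NumPy example with a hypothetical 2-class matrix:

```python
import numpy as np

def miou(conf):
    """Mean IoU from a (k x k) confusion matrix where conf[i, j] counts
    pixels of true class i predicted as class j."""
    inter = np.diag(conf).astype(float)                  # p_ii
    union = conf.sum(axis=1) + conf.sum(axis=0) - inter  # sum_j p_ij + sum_j p_ji - p_ii
    return (inter / union).mean()

conf = np.array([[8, 2],
                 [1, 9]])
# class 0: 8 / (10 + 9 - 8) = 8/11; class 1: 9 / (10 + 11 - 9) = 9/12
print(round(miou(conf), 4))
```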

    5 Test Verification and Result Analysis

    The algorithms in this paper are built under the PyTorch 1.2 framework. Training and testing are performed on a Windows 10 platform with the following hardware configuration: an Intel(R) Core(TM) i7-9700 CPU @ 3.00 GHz, an NVIDIA GeForce RTX 2070 SUPER GPU with 8 GB of video memory and 2560 CUDA cores, and 16 GB of RAM.

    5.1 Loss Function

    According to the above-mentioned improvements, the algorithms are combined to verify their effectiveness, and the change in the loss during training is recorded. At the same time, the early-stopping method in PyTorch is used to prevent overfitting of the training model, which would harm its generalization ability. The maximum number of training iterations is set to 1000, and the model weights are saved every 50 epochs. The loss during training is shown in Fig. 8:

    Figure 8: Training loss curves

    In Fig. 8, U-DA represents the basic U-Net network with Depthwise Separable Convolution and the SE Attention Mechanism added, and U-DA-GC represents the U-DA network with GC added. Fig. 8 shows that, compared with the basic U-Net algorithm, both U-DA and U-DA-GC have a more stable training process with a lower loss; U-DA-GC in particular also has a faster loss-convergence speed.

    5.2 Validation Test

    In order to verify the effectiveness of the improved algorithms and the training method in this paper, each experiment is performed on the Cityscapes training set, and each accuracy index is then tested on the validation set. The parameter settings are consistent with the overall accuracy test experiment. A visual comparison of some segmentation results is shown in Fig. 9.

    By comparing the segmentation results of the basic U-Net model and the U-DA-GC model, it is clear that the latter performs better overall in classification accuracy and positioning accuracy, with a significant improvement over the former. The segmentation of categories such as pedestrians, trees, vehicles, and roads is excellent, and the category of each object can basically be identified; at the same time, the segmentation edges are relatively smooth and accurate. However, because the maximum number of iterations is set relatively low, neither model segments small objects or object edges well in long-distance and complex scenes, which is an unavoidable problem in the segmentation area.

    Meanwhile, because multiple improvements were made to the basic U-Net semantic segmentation network, it is necessary to verify the effectiveness of each part, so that its effect on the overall performance of the model can be observed quantitatively. The resulting data are shown in Table 2.

    Figure 9: Validation of efficiency

    Table 2: Validation of efficiency

    Table 2 shows that the MIoU of the U-DA network (the original network with DA added) can reach 76.68%, which is 4.1% higher than the performance of the basic network. Because the SE attention mechanism module and depthwise separable convolution are introduced into the basic U-Net network, the improved network can focus on the relationships between feature channels and start from global information, achieving better performance.

    Meanwhile, the MIoU of the U-DA-GC network can reach 78.02%, which is 5.9% higher than the basic U-Net and 1.7% higher than U-DA. It can be concluded that, since the GC optimization algorithm makes the training process more stable and effective, the trained model's ability to learn image features is enhanced, and the improved U-Net algorithm achieves better results.

    5.3 Comparative Test

    In order to further verify the effectiveness of the improved algorithms and training method in this paper, U-DA-GC is compared with other recent semantic segmentation networks, Deeplabv3 and SegNet. All experiments are carried out in the same experimental environment: each experiment is performed on the Cityscapes training set, and each accuracy index is then tested on the validation set. The parameter settings are consistent with the overall accuracy test experiment. A visual comparison of some segmentation results and the result data are shown in Fig. 10 and Table 3.

    Figure 10: Comparison of efficiency

    Table 3: Comparison of efficiency

    As shown in Fig. 10, the overall performance of U-DA-GC is excellent in classification accuracy and positioning accuracy, and it is significantly better than Deeplabv3 and SegNet, especially for vehicles and roads, which are important categories in autonomous driving scenarios.

    Table 3 shows that the U-DA-GC network achieves 78.02% MIoU on the validation set, which is 4.6% and 3.8% higher than Deeplabv3 and SegNet, respectively.

    6 Conclusion

    In this paper, depthwise separable convolution and an attention mechanism are introduced on the basis of the basic U-Net network, and a new training adjustment strategy, gradient compression, is proposed. Through a series of experimental verifications, the following conclusions are obtained:

    (1) The improvements in this paper meet the demand for a lightweight semantic segmentation network in the autonomous driving perception system, reduce the operation cost and improve the operation speed. They also support road-condition analysis and real-time segmentation in the autonomous driving perception system.

    (2) The training optimization algorithm proposed in this paper not only improves the generalization ability and segmentation accuracy of the trained model but also has strong adaptability and can be easily added to other optimization algorithms.

    (3) Compared with the basic algorithm and other recent semantic segmentation algorithms, the improved method in this paper considerably improves the segmentation accuracy of common road objects, especially the drivable area, which is an important segmentation target in autonomous driving systems.

    (4) The dataset used in this paper has limited training data, all collected in daytime traffic with good visibility. The segmentation performance in other weather conditions or at night needs further research.

    (5) During segmentation, low segmentation accuracy easily occurs in more complex driving scenes. To solve this problem, deeper research on the feature extraction network is necessary.

    Funding Statement: This work is supported by the Qingdao People's Livelihood Science and Technology Plan (Grant 19-6-1-88-nsh).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
