
LDNet: structure-focused lane detection based on line deformation

2022-10-22 02:23:54  ZHANG Jun, WANG Xingbin, GUO Binglei
    High Technology Letters, 2022, No. 3

    ZHANG Jun (張 軍), WANG Xingbin, GUO Binglei

    (*School of Computer Engineering, Hubei University of Arts and Science, Xiangyang 411053, P.R.China)

(**Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, P.R.China)

Abstract Lane detection is a fundamental task for autonomous driving. Conventional methods mainly treat lane detection as a pixel-wise segmentation problem, which suffers from the challenge of uncontrollable driving road environments and needs post-processing to abstract the lane parameters. In this work, a series of lines is used to represent traffic lanes and a novel line deformation network (LDNet) is proposed to directly predict the coordinates of lane line points. Inspired by the dynamic behavior of classic snake algorithms, LDNet uses a neural network to iteratively deform an initial lane line to match the lane markings. To capture the long and discontinuous structures of lane lines, 1D convolution in LDNet is used for structured feature learning along the lane lines. Based on LDNet, a two-stage pipeline is developed for lane marking detection: (1) initial lane line proposal to predict a list of lane line candidates, and (2) lane line deformation to obtain the coordinates of lane line points. Experiments show that the proposed approach achieves competitive performance on the TuSimple dataset while being efficient enough for real-time applications on a GTX 1650 GPU. In particular, the accuracy of LDNet with the annotated starting and ending points is up to 99.45%, which indicates that an improved initial lane line proposal method can further enhance the performance of LDNet.

    Key words: autonomous driving, convolutional neural networks (CNNs), lane detection, line deformation

    0 Introduction

With the rapid development of high-precision optics, electronic sensors, and computing capability, autonomous driving has received much attention in both academia and industry. In these systems, camera-based lane detection plays a critical role in the semantic understanding of the world around a vehicle[1-2]. Accurate lane detection is challenging for the following reasons. First, lanes are usually thin and long curves travelling through the entire scene, and have diverse patterns, such as solid, broken, splitting, and merging. Furthermore, driving road scenarios are complex, highly variable, and uncontrollable due to lighting and weather conditions.

Traditional lane detection methods adopt hand-crafted features to identify lane markings. Features such as color, intensity, gradient, edge, geometric shape, and texture are widely used to describe the segments of lane markings. As these hand-crafted features are usually based on strong assumptions (e.g., flat ground planes) and lack high-level semantic information, traditional methods have difficulty detecting lanes in complex situations. Thanks to their strong high-level semantic representation learning ability, recent convolutional neural networks (CNNs) have pushed lane detection to a new level. Most of these methods treat lane detection as a two-stage semantic segmentation task. In the first stage, a network classifies whether each pixel in an image belongs to one of the lanes. Post-processing in the second stage usually uses some curve-fitting strategy to filter noise points or cluster intermittent lane segments. Although the state-of-the-art methods have already achieved great progress in lane detection, there are still some important and challenging problems to be addressed. Firstly, segmentation-based methods with square-shaped kernels can hardly capture the thin and long curve property of lanes[3-5]. Because they ignore high-level global semantic features (or contextual information), these methods often suffer from discontinuous and noisy detection results. Secondly, the desired output for autonomous driving is control-related parameters, i.e., vehicle lateral offset, turning angle, and curvature. However, the outputs of most CNN-based lane detection methods are pixel-level lane instance masks in the image view. To fill this gap, some post-processing procedures are required, e.g., inverse perspective mapping (IPM) and lane model fitting[6].

In this paper, a structure-focused lane marking detection network, named line deformation network (LDNet), is proposed to address these problems. LDNet can better capture the long and discontinuous structures of lane lines and predict the coordinates of lane line points in an end-to-end manner. Inspired by the dynamic behavior of previous snake methods[7-10], LDNet takes initial lane lines as input and deforms them by regressing vertex-wise offsets. N_p = 64 points are picked along each line as the features of the line, and standard 1D convolutions are applied on the point features. The 1D convolution kernel captures not only the features of each point but also the relationship with neighboring points. This enhances feature representation and focuses learning on the long and discontinuous structures of lane markings. Based on LDNet, a pipeline is developed for lane detection. Given a set of initial lane lines, LDNet iteratively deforms them to match the lane markings and obtains the coordinates of lane line points. In this paper, the straight line between the starting point and ending point of each lane is used as its initial lane line. The starting point of a lane is defined as the lane point closest to the bottom boundary of the image. As the starting points of different lanes in an image are usually far apart from each other according to traffic rules, a heatmap-based keypoint estimation method is used for starting point detection. Different from the starting points, the ending points converge around the farthest point of the visible lane. Inspired by VPGNet[6] and CHEVP[8], which use the vanishing point as a global geometric context to infer the location of lanes, a vanishing point prediction task is designed to estimate the locations of the ending points.

    In summary, this work has the following contributions.

(1) A novel LDNet is proposed, which focuses on the long and discontinuous structure learning of lane markings and directly predicts the coordinates of lane line points.

(2) Based on LDNet, the lane detection method is implemented by proposing the initial lanes with two branches, in which a heatmap and a probability map are used to predict the starting points and ending points of lane lines, respectively.

(3) The proposed method achieves comparable performance with state-of-the-art methods in terms of both accuracy and speed on the TuSimple dataset. In particular, the accuracy of LDNet with ideal starting and ending points is up to 99.45%, which indicates that an improved initial lane line proposal method can further enhance the performance of the method.

    1 Related work

    1.1 Deep learning based lane detection

Due to their strong representation learning ability, CNN-based approaches have been used for lane marking detection. VPGNet[6] proposed a multi-task network guided by vanishing points for lane and road marking detection. LaneNet[11] used a lane edge proposal network for pixel-wise lane edge classification in the first stage, and a lane line localization network in the second stage for lane line localization prediction. Neven et al.[12] regarded lane detection as an instance segmentation problem, in which an embedding branch disentangled the segmented lane pixels into different lane instances. Zhang et al.[13] established a framework that accomplished lane boundary segmentation and road area segmentation simultaneously. Ko et al.[14] first obtained the exact lane points with a confidence branch and an offset branch, then clustered the generated points with a point cloud instance segmentation method. To remove the perspective effect in the image, inverse perspective mapping (IPM) was used by several methods[11,15-18]. Although these methods have already achieved great progress in lane detection, there are still some important and challenging problems to be addressed. These methods often suffer from discontinuous and noisy detection results due to the thinness of traffic lanes[19-22]. Furthermore, post-processing is needed to filter noise points and extract lane line information from pixel-level segmentation results[6,12,14,18].

To deal with the first problem, several schemes have been proposed to capture richer scene features (or contextual information). Zhang et al.[23] added a convolutional gated recurrent unit (ConvGRU) in the encoder phase to learn more accurate low-level features. Hou et al.[24] adopted a self-attention distillation (SAD) approach to allow the network to exploit attention maps within the network itself during the training stage. Zou et al.[25] combined a CNN and a recurrent neural network (RNN) to infer lanes from multiple frames of a continuous driving scene. SALMNet[4] used a semantic-guided channel attention module to enhance feature representations of lane marking structures and suppress noisy features from the background, together with a pyramid deformable convolution module to capture the structures of long and discontinuous lane markings. Message passing between pixels can help capture the spatial structures of objects that have long structured regions and may be occluded. SCNN[3] and RESA[20] utilized slice-by-slice convolutions to enable message passing between pixels across rows and columns in the feature map. Though message passing improves segmentation results, dense pixel-wise communication requires more computational cost. To collect more information concerning a whole lane boundary, SpinNet[26] designed a novel convolution layer that allows convolutions to be applied in multiple directions. Line-CNN[27] utilized a line proposal unit (LPU) to propose potential lanes, which forced the system to learn the global feature representation of entire traffic lanes.

To avoid post-processing and predict the coordinates of lane line points directly, Chougule et al.[19] treated the lane detection and classification problems as a CNN regression task, which relaxed the per-pixel classification requirement to a few points along the lane boundary. Qin et al.[5] and Yoo et al.[28] translated the lane detection problem into a row-wise classification task using global features. PRNet[22] and PolyLaneNet[21] use polynomial curves to represent traffic lanes and then use a polynomial regression network to predict the polynomial coefficients of lanes. Gansbeke et al.[29] proposed a deep neural network that predicts a weights map, like a segmentation output, for each lane marking and uses a differentiable least-squares fitting module to directly regress the lane parameters. Instead of representing traffic lanes by curves, PointLaneNet[30] considered a lane line as a series of points, and proposed the ConvLaneNet network to predict the lane line offset, start point, and confidence. RONELD[31] first extracted lane points from the probability map outputs, then detected curved and straight lanes before using weighted least-squares linear regression on straight lanes to fix broken lane edges resulting from fragmentation of edge maps in real images.

    1.2 Snake algorithms

The proposed method is inspired by the dynamic behavior of classic snake algorithms[7], which have been used for contour-based instance segmentation. Snake algorithms treat the coordinates of the vertices as a set of variables. By applying proper forces at the contour coordinates, the algorithms push the contour to the object boundary. The implementation of these algorithms contains two stages. Firstly, the contour vertices for image representation are initialized. Then, the contour is deformed to the object boundary by learning-based methods. Recently, Ling et al.[9] followed the pipeline of traditional snake algorithms and used a graph convolutional neural network to predict vertex-wise offsets for contour deformation. Instead of treating the contour as a general graph, deep snake[10] leveraged the cycle graph topology and introduced the circular convolution for efficient feature learning on a contour. Wang et al.[8] proposed a B-snake based lane model to describe a wider range of lane structures. In this work, a learning-based snake algorithm, LDNet, is implemented to deform the initial lane line to match the lane markings. As LDNet utilizes the features along the lane line to directly predict the coordinates of lane line points, it can solve the two problems mentioned above.

    2 Methodology

In this paper, lane detection is performed by deforming initial lane lines to match lane markings in an image. Specifically, LDNet takes initial lane lines as input and predicts per-vertex offsets pointing to the lane boundary. The vertex offsets are predicted based on the features of lane line points extracted from the input image with a CNN backbone.

    2.1 Lane representation

In general, lanes are drawn on roads in the shape of lines or curves. It is improper to represent a lane with a circular contour, which is used to represent compact objects[9-10]. A line or curve can be accurately represented by a series of points, which can be obtained by spline interpolation between points. A lane line can be determined by the following elements: starting point, ending point, and lane line center points. To facilitate the operation of LDNet, the number of lane points is kept fixed. As a result, the lane lines in an image are transformed into a learnable structured representation, as illustrated in Eq.(1).

$\{\text{lane}_l\}_{l=1}^{L}, \quad \text{lane}_l = \{(x, y)_{l,n}\}_{n=1}^{N} \tag{1}$

    where N is the number of points for each lane line, L is the number of lanes in an image, and (x, y) is the coordinate of a lane point.
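To make this fixed-size representation concrete, each annotated lane polyline can be resampled to N points evenly spaced by arc length. Below is a minimal sketch using linear interpolation in place of spline interpolation; the helper name `resample_lane` is illustrative, not from the paper.

```python
import numpy as np

def resample_lane(points, n=64):
    """Resample an ordered list of lane points to exactly n points,
    evenly spaced by arc length along the polyline."""
    pts = np.asarray(points, dtype=float)
    # cumulative arc length along the polyline
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    # target arc-length positions for the n output points
    t = np.linspace(0.0, s[-1], n)
    x = np.interp(t, s, pts[:, 0])
    y = np.interp(t, s, pts[:, 1])
    return np.stack([x, y], axis=1)  # shape (n, 2)

# A 3-point vertical polyline resampled to 5 evenly spaced points
lane = resample_lane([(0, 0), (0, 2), (0, 4)], n=5)
```

With N fixed (N_p = 64 in the paper), every lane has the same shape, which is what allows the 1D convolutions in LDNet to operate along the point dimension.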

    Fig.1 An overview of the network architecture

Fig.1 illustrates the overall network architecture of this work. It contains three modules: (1) a feature extraction backbone (subsection 2.2) that takes a single image as input and provides shared intermediate feature maps for the successive modules; (2) an initial lane proposal module (subsection 2.3) that outputs the candidate initial lane lines; (3) the learning-based snake algorithm module LDNet (subsection 2.4), which predicts vertex-wise offsets between initial lane line points and their target points. The outputs of the network are the coordinates of lane line points. The system is fully end-to-end trainable with stochastic gradient descent.

    2.2 Backbone

The function of the backbone network is to extract semantically meaningful features for the successive modules. The stacked hourglass network is chosen as the backbone for its efficiency and effectiveness. The input RGB images of size 512 × 256 are reduced to a smaller size (e.g., 64 × 32) by a resizing layer, which contains a sequence of convolution layers. The resized image is fed to multiple hourglass modules, each including one encoder, one decoder, and two output branches. The intermediate predictions and features output from the previous hourglass stage are integrated to implement intermediate supervision. The total loss of the network is the sum of the losses on those hourglass modules.

    2.3 Initial lane proposal module

Evidently, lane lines start from the boundary (bottom, left, or right) of an image and converge due to the perspective effect. Although a lane line is not always straight, the straight line between its starting point and ending point can represent its direction and range. Here, the straight line between the starting point and ending point of each lane is used as its initial position. The lane line initialization task is thus transformed into starting point and ending point detection problems. As the starting points of different lane lines in an image are usually far apart from each other, the heatmap-based keypoint estimation method can be used to predict them.

Because the ending points of traffic lanes in an image are often close to each other due to perspective projection, it is difficult to accurately localize them with the same method used for starting point detection. Inspired by VPGNet[6] and CHEVP[8], which use the vanishing point as a global geometric context to infer the location of lanes, a vanishing point prediction task is designed to estimate the locations of the ending points. As shown in Fig.1, these two subtasks are implemented by two branches that share the input features. The pseudocode of the initial lane line proposal module is given in Algorithm 1.


To predict starting points, a network head consisting of two 1 × 1 convolution layers is designed to transform the feature maps into a 1-channel heatmap. Peaks in this heatmap correspond to lane starting points. Training of the starting point prediction branch follows Law and Deng[32]. For each ground-truth starting point p, a low-resolution equivalent $\hat{p} = \lfloor p / S \rfloor$ is computed, where S is the output stride. Then all ground-truth starting points are splatted onto a heatmap using a Gaussian kernel $Y_{xy}$, which is defined in Eq.(2).

    $Y_{xy} = \exp\left(-\frac{(x - \hat{p}_x)^2 + (y - \hat{p}_y)^2}{2\sigma_p^2}\right) \tag{2}$
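The target construction for this branch can be sketched as follows. The Gaussian bandwidth `sigma` is a stand-in value here (in Law and Deng's work it is chosen adaptively), and overlapping Gaussians are merged with an element-wise maximum as in that work.

```python
import numpy as np

def splat_gaussian(heatmap, center, sigma):
    """Splat one ground-truth starting point onto the heatmap with an
    unnormalized Gaussian; overlapping peaks keep the element-wise max."""
    h, w = heatmap.shape
    cx, cy = center
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
    np.maximum(heatmap, g, out=heatmap)
    return heatmap

S = 4  # output stride: maps image coordinates to the low-resolution heatmap
heatmap = np.zeros((64, 128))  # (H, W) at 1/S of the input resolution
for p in [(40, 200), (100, 220)]:  # (x, y) starting points in image coords
    p_hat = (p[0] // S, p[1] // S)
    splat_gaussian(heatmap, p_hat, sigma=2.0)
```

At inference time, local maxima of the predicted heatmap are read back as the detected starting points.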

For ending point location prediction, a network head consisting of two 1 × 1 convolution layers is designed to transform the feature maps into a 1-channel probability map and estimate the location and width of the vanishing point in the input image. Firstly, the lane point map is extracted by picking points with confidence above an adaptive threshold T, which is selected based on the confidence values in the probability map outputs. Then, the lane point map is scanned from the first row. Here, the width between the first and the last lane line points in row i is defined as the range R_i of the lane points in that row. After the first row with lane points is found, this row is assumed to be the vanishing row where lanes disappear. For robustness and to exclude low-confidence noise, its range R_v is compared with the range R_s of its subsequent row. If R_v < R_s/2, the subsequent row is set as the vanishing row, and its range is compared with that of its own subsequent row. This process is repeated until R_v ≥ R_s/2, and the estimated ending points are generated, as defined in Eq.(3).
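The row-scanning procedure above can be sketched in a few lines; `lane_point_map` stands for the binary map obtained after thresholding the probability map at T, and all names are illustrative.

```python
def row_range(lane_point_map, i):
    """Range R_i: width between the first and last lane points in row i (0 if empty)."""
    cols = [x for x, v in enumerate(lane_point_map[i]) if v]
    return (cols[-1] - cols[0]) if cols else 0

def find_vanishing_row(lane_point_map):
    """Scan top-down; while the candidate row's range R_v is below half the
    range R_s of the next populated row, advance the candidate (noise filter)."""
    rows = [i for i in range(len(lane_point_map))
            if row_range(lane_point_map, i) > 0]
    if not rows:
        return None
    v = 0
    while v + 1 < len(rows):
        r_v = row_range(lane_point_map, rows[v])
        r_s = row_range(lane_point_map, rows[v + 1])
        if r_v >= r_s / 2:
            break
        v += 1
    return rows[v]

# Row 1 holds a narrow noise blob; row 2 is where the lanes actually converge.
grid = [[0] * 16 for _ in range(5)]
grid[1][5] = grid[1][6] = 1          # range 1 (noise)
grid[2][2] = grid[2][12] = 1         # range 10
grid[3][1] = grid[3][13] = 1         # range 12
vanishing_row = find_vanishing_row(grid)
```

Once the vanishing row is fixed, the lane points it contains can serve as the estimated ending points referenced by Eq.(3).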

    2.4 Learning-based snake algorithm module

The snake algorithm module takes a list of candidate lane lines along with the feature maps of the image from the backbone network as input and predicts the per-point offsets pointing to the lane boundary. For each candidate lane line, the feature vector for each lane line point (x, y)_{l,n} is first constructed. The input feature f_{l,n} for a point (x, y)_{l,n} is the concatenation of learning-based features and the point coordinate: [F((x, y)_{l,n}); (x, y)_{l,n}], where F denotes the feature maps. The feature maps F are obtained by applying a CNN backbone to the input image, as shown in Fig.1. The feature for one point is computed using bilinear interpolation at the vertex coordinate (x, y)_{l,n}. The appended point coordinate is used to encode the spatial relationship among lane points.
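A sketch of this per-point feature construction, with a small arange tensor standing in for the backbone feature maps F:

```python
import numpy as np

def bilinear_sample(F, x, y):
    """Bilinearly interpolate feature map F of shape (C, H, W) at continuous (x, y)."""
    _, h, w = F.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * F[:, y0, x0] + dx * (1 - dy) * F[:, y0, x1]
            + (1 - dx) * dy * F[:, y1, x0] + dx * dy * F[:, y1, x1])

def point_features(F, lane_points):
    """f_{l,n} = [F((x, y)_{l,n}); (x, y)_{l,n}]: sampled feature plus coordinate."""
    return np.stack([np.concatenate([bilinear_sample(F, x, y), [x, y]])
                     for x, y in lane_points])

F = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # stand-in (C=2, H=4, W=4)
feats = point_features(F, [(1.5, 1.5), (0.0, 0.0)])      # shape (N, C + 2)
```

Each row of `feats` is the C sampled channels followed by the (x, y) coordinate, matching the concatenation described above.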

    The concatenated feature vectors are passed into LDNet, which implements the learning-based snake algorithm. LDNet first predicts offsets based on the initial lane line points and then deforms the initial lane lines by point-wise adding the offsets to their point coordinates. The deformed lane lines can be used for the next iteration. The impact of inference iterations will be studied in subsection 3.2.
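The iterative inference loop is then only a few lines. In the sketch below, `feature_fn` and `offset_net` are stand-ins for the backbone feature sampling and for LDNet itself; a toy "network" that moves each point halfway toward a target makes the loop's behavior visible.

```python
import numpy as np

def deform(lane, feature_fn, offset_net, iterations=2):
    """Iteratively deform an initial lane line: at each step, extract point
    features, predict per-point offsets, and add them to the coordinates."""
    for _ in range(iterations):
        feats = feature_fn(lane)      # (N, C) features along the current line
        offsets = offset_net(feats)   # (N, 2) predicted (dx, dy) per point
        lane = lane + offsets         # point-wise coordinate update
    return lane

# Toy stand-in: offsets always move points halfway toward x = 10
# (in LDNet the offsets come from the learned network instead).
target_x = 10.0
lane = np.zeros((4, 2))
lane[:, 1] = np.arange(4)  # four points stacked vertically at x = 0
feature_fn = lambda l: l
offset_net = lambda f: np.stack([(target_x - f[:, 0]) / 2,
                                 np.zeros(len(f))], axis=1)
out = deform(lane, feature_fn, offset_net, iterations=2)
```

After two iterations the x coordinates have moved from 0 through 5 to 7.5, while the y coordinates are untouched, mirroring how each iteration refines the previous deformation.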

Following the idea of deep snake[10], LDNet consists of three parts: a feature learning block, a fusion block, and a prediction head, as shown in Fig.2. To adapt to the long and discontinuous structure of lane lines, the circular convolution of deep snake is replaced by a 1D convolution. The feature learning block is composed of 8 'Conv1d-Bn-ReLU' layers and uses residual skip connections for all layers. In all experiments, the kernel size of the 1D convolution is fixed to nine. The fusion block aims to fuse the information across all lane points at multiple scales. It concatenates features from all layers in the feature learning block and forwards them through a 1 × 1 convolution layer followed by max pooling. The fused feature is then concatenated with the feature of each point. The prediction head applies three 1 × 1 convolution layers to the point features and outputs point-wise offsets between initial lane points and the target points, which are used to deform the initial lane.

    Fig.2 Architecture of LDNet
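Unlike deep snake's circular convolution, a plain 1D convolution does not wrap features from one end of the open lane polyline to the other. A minimal numpy version of such a convolution over point features (the real feature learning block stacks 8 Conv1d-BN-ReLU layers with kernel size 9 and residual connections):

```python
import numpy as np

def conv1d(x, weight, bias):
    """1D convolution along the point dimension with zero padding.
    x: (N, C_in) point features; weight: (C_out, C_in, K); bias: (C_out,)."""
    n, _ = x.shape
    c_out, _, k = weight.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))   # zero-pad the ends, no wrap-around
    out = np.empty((n, c_out))
    for i in range(n):
        window = xp[i:i + k]               # (K, C_in) neighborhood of point i
        out[i] = np.einsum('oik,ki->o', weight, window) + bias
    return out

# Box kernel of size 3 on constant features: each point aggregates itself and
# its two neighbors; boundary points see zero padding instead of the far end.
x = np.ones((5, 1))
w = np.ones((1, 1, 3))
y = conv1d(x, w, np.zeros(1))
```

The zero-padded boundary is exactly the property wanted for lanes: the first and last points are endpoints, not neighbors of each other.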

    3 Experiments

In this section, the accuracy and efficiency of the method are demonstrated with extensive experiments. The following sections mainly focus on three aspects: experimental settings, ablation studies of the method, and results on the TuSimple dataset.

    3.1 Implementation setup

In the experiments, the input images are resized to 512 × 256 during the data augmentation process. Then the resized image is compressed into smaller-size data by a resizing layer, which contains a sequence of convolution layers and max pooling layers. The output channel count of the resizing layer is 128.

Training strategy. For starting point detection, the training objective is a penalty-reduced pixel-wise logistic regression with focal loss[32]:

    $L_k = \frac{-1}{N_s} \sum_{xy} \begin{cases} (1 - \hat{Y}_{xy})^{\alpha} \log(\hat{Y}_{xy}) & \text{if } Y_{xy} = 1 \\ (1 - Y_{xy})^{\beta} (\hat{Y}_{xy})^{\alpha} \log(1 - \hat{Y}_{xy}) & \text{otherwise} \end{cases}$

    where $\hat{Y}_{xy}$ is the predicted heatmap, α and β are hyper-parameters of the focal loss, and $N_s$ is the number of starting points in the image.

The total loss combines the starting point loss $L_k$, the vanishing point prediction loss $L_p$, and the iterative lane deformation loss $L_{iter}$, as defined in Eq.(6).

    $L = \lambda_k L_k + \lambda_p L_p + \lambda_{iter} L_{iter} \tag{6}$

where λ_k, λ_p, and λ_iter are loss coefficients, set to λ_k = 0.5, λ_p = 1, and λ_iter = 10 in the experiments. Adam[35] with weight decay 1e-5 is used as the optimizer, and the learning rate is set to 1e-3. The total number of training epochs is 500 for the TuSimple dataset. All models are trained with PyTorch[36].

Dataset. To evaluate the proposed approach, the TuSimple lane dataset[38] is used. The TuSimple dataset was collected in good and medium weather conditions on highways. It consists of 3626 training and 2782 testing images, recorded on highway roads with 2/3/4 or more lanes at different times of day. For training, simple data augmentation methods such as flipping, translation, and shadow addition are randomly applied, which contributes to a more comprehensive dataset.

Evaluation metrics. The main evaluation metric of TuSimple is accuracy, defined as the average ratio of correctly predicted lane points per image:

    $\text{accuracy} = \frac{\sum_{clip} C_{clip}}{\sum_{clip} S_{clip}}$

    where C_clip is the number of correctly predicted lane points and S_clip is the number of ground-truth points in a clip. The false positive rate FP = F_pred / N_pred and the false negative rate FN = M_pred / N_gt are also reported, where F_pred is the number of wrongly predicted lanes, N_pred is the number of predicted lanes, M_pred is the number of missed ground-truth lanes, and N_gt is the number of all ground-truth lanes.
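Under these definitions, the benchmark numbers reduce to a few ratios; a small sketch with variable names mirroring the text:

```python
def tusimple_metrics(correct_pts, total_pts, f_pred, n_pred, m_pred, n_gt):
    """Accuracy, false positive rate, and false negative rate in the
    TuSimple style: per-clip correct/total point counts, plus lane counts."""
    acc = sum(correct_pts) / sum(total_pts)  # avg ratio of correct points
    fp = f_pred / n_pred                     # wrongly predicted / predicted
    fn = m_pred / n_gt                       # missed / ground truth
    return acc, fp, fn

# Two clips with 95/100 and 90/100 points correct; 1 of 8 predicted lanes
# is wrong and 1 of 8 ground-truth lanes is missed.
acc, fp, fn = tusimple_metrics([95, 90], [100, 100], 1, 8, 1, 8)
```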

    3.2 Ablation study

Effectiveness of LDNet. To prove the effectiveness of LDNet, the annotated starting points and ending points extracted from the annotations of the TuSimple dataset are used, in order to avoid the influence of starting point and ending point prediction.

The results of the LDNet model with annotated starting points and ending points are shown in Table 1. The accuracy of the proposed method is above 99.2% across all inference iterations. Fig.3 illustrates qualitative results of LDNet with two iterations on the TuSimple dataset. From Fig.3, it can be seen that LDNet with annotated starting points and ending points performs well for occluded lanes (a-d), curved lanes (e-h), and lanes on a non-flat plane (i-l). Both the quantitative and qualitative results indicate that LDNet has a strong ability to deform a proper initial lane line to match the lane markings. Therefore, LDNet can be applied in both online and offline scenarios, such as accurate lane detection, fast interactive lane annotation, and HD map modeling.

In LDNet, the iteration number is an important hyper-parameter, which influences the model size and speed. Table 1 also shows the evaluation results of LDNet models with various iteration counts. The accuracy is up to 99.45% when LDNet has two iterations. However, adding more iterations does not further improve the performance, which suggests that the network might be harder to train with more iterations. In the following experiments, the number of iterations of LDNet is fixed to two.

    Table 1 Results of LDNet with different iterations. Here the initial lane lines are generated with the annotated starting points and ending points directly

Fig.3 Examples of results from the TuSimple lane detection benchmark (the initial lane lines are generated with the annotated starting points and ending points; the first row shows cases where lane markings are occluded by vehicles, the second row shows curved lanes, and the third row shows lanes on a non-flat ground plane)

Effects of output stride S. The output stride S is another main hyper-parameter of LDNet, which denotes how much the feature maps are scaled down relative to the input image size. In principle, a bigger output stride yields semantically richer but spatially coarser feature maps.

Table 2 shows the results of LDNet with different output strides S, from which the following observations can be made. First, with the increase of output stride, the accuracy drops from 96.87% to 95.87%. Second, for a large output stride (e.g., 8), the accuracy of LDNet with initial lanes generated from the annotated starting points and ending points is less than 99%. For smaller output strides (e.g., 2 and 4), the accuracy is above 99.4%. This indicates that spatial information is much more important than semantic information for LDNet. Third, the accuracy of LDNet with initial lanes generated from SP~ and EP^ is much higher than with initial lanes generated from SP^ and EP~ across all output strides. Thus, ending point location prediction is the bottleneck of the implementation. More accurate ending point estimation can further improve the accuracy of the LDNet model.

    Table 2 Ablation study of output stride

    3.3 Results

The initial lane lines generated from detected starting points (SP~) and ending points (EP~) are illustrated in Fig.4. The results indicate that the straight line between the starting point and ending point can represent the direction and range of a lane, especially when the lane lines are straight (Fig.4(a-d)). The figure also indicates that the heatmap-based keypoint estimation method can effectively detect the starting points, even when they are occluded by vehicles. Though the vanishing point based method provides proper ending points in most cases, it cannot deal with situations where the ending points do not converge, as shown in Fig.4(k).

    Fig.4 Initial lane lines generated from the predicted starting points and ending points

Table 3 reports the performance comparison of the LDNet model against previous representative methods. To show the generalization of LDNet, models with Hourglass, ResNet, and DLA backbones are used. For a fair comparison, the lane marking detection results reported in the respective papers or websites are used directly. The proposed models are denoted "LDNet-S-B", where S is the output stride and B denotes the backbone used. It can be seen that the proposed models with various backbones achieve comparable performance with state-of-the-art methods.

The final detected lane lines corresponding to the initial lane lines proposed in Fig.4 are illustrated in Fig.5. As the initial lane lines nearly match the straight lane lines (Fig.4(a-d)), the proposed method can detect straight lane lines precisely, even when the lanes are occluded by vehicles (Fig.5(a-d)). Although the initial lane lines do not match the target lane lines well (Fig.4(e-l)), the LDNet model still accurately predicts the offsets between the initial lane line points and the target boundary points for curved lanes (Fig.5(e-h)) and lanes on a non-flat ground plane (Fig.5(i-l)). Most notably, the proposed model precisely predicts the curves even when they are occluded by vehicles (Fig.5(e, h)). Compared with Fig.3(g), whose initial lane lines are generated from the annotated starting points and ending points, the predicted lane lines in Fig.5(g) are longer. These results indicate that the LDNet model has a strong ability to capture the structures of lane markings.

    Table 3 Performance of different methods on TuSimple

    Fig.5 Results of LDNet with two hourglass modules and S=4

Table 4 compares the proposed approach with other methods in terms of running time, recorded as the average over 100 runs. For 256 × 512 input images, the proposed method runs at 32 FPS on a laptop with an Intel i7 2.60 GHz CPU and a GTX 1650 GPU. The performance of a GTX 1080 Ti is 2.7 times higher than that of a GTX 1650[38], and the performance of a Titan X is 0.81 times higher than that of a GTX 1650[39].

    Table 4 Run-time performance of different methods

Though the proposed model achieves comparable performance with state-of-the-art methods, it cannot predict lane lines well in complex driving road scenarios, as shown in Fig.6. In these situations, the lanes do not converge, so the vanishing point based ending point prediction method cannot propose proper ending points and initial lane lines. Improving the ending point prediction method and the initial lane line proposal method will be future work.

    4 Conclusion

This paper proposes to use a series of lines to represent traffic lanes and proposes a novel line deformation network (LDNet) to iteratively deform an initial lane line to match the lane boundary. A heatmap-based keypoint estimation method and a vanishing point prediction task are used to propose the initial lane lines.

    Fig.6 Results of LDNet for lanes not converged together. The left figures show the ground truth and the right figures show the detected lanes.

The experimental results on the TuSimple lane dataset show that the proposed method achieves comparable performance with state-of-the-art methods. The accuracy of LDNet with ideal starting points and ending points is up to 99.45%. Although the proposed initial lane lines do not match the target lane lines well, the LDNet model still accurately predicts the offsets between the initial lane line points and the target boundary points. This shows that LDNet has a strong ability to capture the structures of lane markings.

欧美日本视频| 国产蜜桃级精品一区二区三区| 美女 人体艺术 gogo| 国内少妇人妻偷人精品xxx网站| 好男人在线观看高清免费视频| 成人特级黄色片久久久久久久| 最后的刺客免费高清国语| 亚洲色图av天堂| 啪啪无遮挡十八禁网站| 成年版毛片免费区| 国产精品久久视频播放| 国产黄片美女视频| 毛片一级片免费看久久久久 | 久久香蕉精品热| 一区二区三区免费毛片| av福利片在线观看| 国产精品久久视频播放| av天堂中文字幕网| 中国美女看黄片| 熟女电影av网| 99久久精品国产国产毛片| 免费观看在线日韩| 久久午夜亚洲精品久久| 99久久久亚洲精品蜜臀av| 别揉我奶头~嗯~啊~动态视频| 成人av一区二区三区在线看| 成人av在线播放网站| 嫩草影视91久久| 国产中年淑女户外野战色| 亚洲 国产 在线| 51国产日韩欧美| 免费在线观看影片大全网站| 亚洲欧美日韩高清专用| 韩国av一区二区三区四区| 久久久精品欧美日韩精品| 日日摸夜夜添夜夜添av毛片 | 中文字幕av在线有码专区| 国产精品爽爽va在线观看网站| 欧美日本视频| 欧美一区二区亚洲| 国产免费男女视频| 91久久精品国产一区二区成人| 亚洲经典国产精华液单| 欧美成人一区二区免费高清观看| 免费观看人在逋| 一进一出抽搐动态| 国产午夜精品论理片| 国产成人福利小说| 午夜视频国产福利| 亚洲性夜色夜夜综合| 成人永久免费在线观看视频| 免费大片18禁| 黄色一级大片看看| 最近最新免费中文字幕在线| 国产精品1区2区在线观看.| 中国美女看黄片| 亚洲电影在线观看av| 国产淫片久久久久久久久| 日本免费一区二区三区高清不卡| 国产成人av教育| 国产三级在线视频| 欧美成人免费av一区二区三区| 精品国产三级普通话版| 中文字幕人妻熟人妻熟丝袜美| 免费不卡的大黄色大毛片视频在线观看 | 亚洲精品一区av在线观看| 国产单亲对白刺激| 久久精品久久久久久噜噜老黄 | 欧美不卡视频在线免费观看| 中国美女看黄片| 欧美三级亚洲精品| 韩国av在线不卡| 国产私拍福利视频在线观看| 欧美+日韩+精品| 日韩亚洲欧美综合| 日韩精品有码人妻一区| 色综合婷婷激情| 亚洲va在线va天堂va国产| 亚洲成人精品中文字幕电影| 人妻丰满熟妇av一区二区三区| 亚洲欧美日韩卡通动漫| 又粗又爽又猛毛片免费看| 精品久久久久久久久久久久久| 国产精品自产拍在线观看55亚洲| 中文亚洲av片在线观看爽| 一级av片app| 国产亚洲91精品色在线| 白带黄色成豆腐渣| 韩国av在线不卡| 国产精品国产三级国产av玫瑰| 国产探花极品一区二区| 国产人妻一区二区三区在| 男女做爰动态图高潮gif福利片| 中文字幕av在线有码专区| 亚洲va在线va天堂va国产| 成人三级黄色视频| 国产精品野战在线观看| 久久久久国内视频| 少妇人妻一区二区三区视频| 美女被艹到高潮喷水动态| 欧美xxxx黑人xx丫x性爽| 亚洲真实伦在线观看| 淫妇啪啪啪对白视频| 免费人成视频x8x8入口观看| 97碰自拍视频| 最近最新中文字幕大全电影3| 69人妻影院| 91午夜精品亚洲一区二区三区 | 精品久久国产蜜桃| 小说图片视频综合网站| 精品无人区乱码1区二区| 亚洲国产精品sss在线观看| 特大巨黑吊av在线直播| 我的女老师完整版在线观看| 免费观看精品视频网站| 国产一区二区在线av高清观看| 18+在线观看网站| 欧美激情在线99| 三级国产精品欧美在线观看| 一级黄片播放器| 婷婷色综合大香蕉| 天堂影院成人在线观看| 国产高清视频在线观看网站| 中文字幕av在线有码专区| 琪琪午夜伦伦电影理论片6080| 午夜爱爱视频在线播放| 自拍偷自拍亚洲精品老妇| 最新在线观看一区二区三区| 国产伦精品一区二区三区视频9| 啦啦啦观看免费观看视频高清| 欧美日韩中文字幕国产精品一区二区三区| 亚洲欧美日韩东京热| 给我免费播放毛片高清在线观看| 人人妻,人人澡人人爽秒播| 一级av片app| 国产私拍福利视频在线观看| 亚洲三级黄色毛片| 最近视频中文字幕2019在线8| 黄色配什么色好看|