
    SpinNet: Spinning convolutional network for lane boundary detection

Computational Visual Media, 2019, Issue 4 (published 2019-02-27)

Ruochen Fan, Xuanrun Wang, Qibin Hou, Hanchao Liu, and Tai-Jiang Mu

Abstract In this paper, we propose a simple but effective framework for lane boundary detection, called SpinNet. Since cars or pedestrians often occlude lane boundaries, and since the local features of lane boundaries are not distinctive, analyzing and collecting global context information is crucial for lane boundary detection. To this end, we design a novel spinning convolution layer and a brand-new lane parameterization branch in our network to detect lane boundaries from a global perspective. To extract features in narrow strip-shaped fields, we adopt strip-shaped convolutions with kernels of shape 1×n or n×1 in the spinning convolution layer. To tackle the problem that straight strip-shaped convolutions can only extract features in vertical or horizontal directions, we introduce the concept of feature map rotation, which allows the convolutions to be applied in multiple directions so that more information can be collected concerning a whole lane boundary. Moreover, unlike most existing lane boundary detectors, which extract lane boundaries from segmentation masks, our lane boundary parameterization branch predicts a curve expression for the lane boundary at each pixel of the output feature map, and the network uses predicted weights for these curves to form the final lane boundaries. Our framework is easy to implement and end-to-end trainable. Experiments show that our proposed SpinNet outperforms state-of-the-art methods.

Keywords: object detection; lane boundary detection; autonomous driving; deep learning

    1 Introduction

Object detection and segmentation are two of the most widely investigated areas of computer vision in recent decades. Generally speaking, advances in these domains are mostly driven by step changes made by classic baseline systems, such as Fast/Faster R-CNN [1, 2] and its follow-up architecture Mask R-CNN [3]. These methods provide a general and conceptually clear platform which is both flexible and robust.

Autonomous driving is a concept with the potential to define future transportation. Fully autonomous cars are currently a major topic for computer vision and robotics research, both academically and industrially. The goal of autonomous driving requires full control of the car, which in turn necessitates understanding the surroundings of the car by gathering information from sensors attached to it. One of the most challenging subtasks for such perception is lane boundary detection for car positioning. Lane boundaries differ from typical objects in having stronger structural features but fewer appearance clues [4]. For instance, lane boundaries often have long, continuous shapes, and are located at the bottom of the scene. Because of this lack of appearance clues, lane boundaries may be incorrectly detected if detection relies mainly on local features. Additionally, occlusion is a significant difficulty for lane boundary detection: in many scenarios, parts of lane boundaries may be occluded by other objects, especially vehicles.

Given the above issues, it is essential to utilize global information for lane boundary detection so that lane boundaries can be predicted in a holistic way. One natural idea is to attempt to enlarge the receptive field of the network. Spatial CNN (SCNN) [4], inspired by RNN [5], proposed a slice-by-slice convolution method that is able to transfer information between neurons in the same layer. This architecture is also able to segment objects with long shapes. However, due to signal decay in the SCNN layer, distant information in the feature map is still hard to collect.

In order to gather and analyse features over large fields, in this paper we present the idea of a spinning convolution and apply it within neural networks. The idea is to extract features along a narrow strip using a 1×n or n×1 convolution layer; "spinning" means that the long-kernel convolution can run at an arbitrary angle. Because of the long kernel used in a strip convolution, features covered by the kernel can be collected equally, without the signal fading caused by long distances. Generally speaking, a straightforward strip convolution operation is performed only along a vertical or horizontal direction; however, the directions of lane boundaries or other slender detection targets can vary. As rotating strip convolution kernels to lie along arbitrary directions is difficult, we propose to instead rotate the feature maps to increase the number of ways of collecting features. In this way, a vertical strip convolution with a large kernel can process features along any direction.

Furthermore, most existing methods treat lane boundary detection as a segmentation problem, that is, predicting which pixels are covered by lane boundaries; the final parameterized lane boundary curves are then produced by a hand-crafted algorithm which takes the segmentation maps predicted by the neural network as inputs. This sort of method may be confused when the lane boundary mask is irregular in shape; for instance, it has problems processing masks with "holes". Instead, our approach carries out lane boundary detection by introducing a parameterization branch. To explicitly force the network to predict lane boundaries from a global perspective, our parameterization branch predicts the coefficients of lane boundary curves rather than segmentation masks. The final predicted lines are weighted combinations of the curves produced by the parameterization branch. To demonstrate the effectiveness of our proposed framework, we evaluate our results on the CULane dataset, showing that SpinNet outperforms all prior lane boundary detection methods. To further verify the importance of our proposed spinning convolution and parameterization branch, a thorough ablation analysis is conducted; it reveals that both of the new operations contribute clearly to the overall performance. In summary, our proposed approach makes the following contributions:

·A spinning convolution operation which can extract long-range features along arbitrary directions, without information decay.

    ·A novel method of determining parametric lane boundaries using information from all the pixels in a feature map.

·An end-to-end lane boundary detection framework, called SpinNet, which achieves state-of-the-art performance.

    2 Related work

    In this section, we briefly introduce some previous approaches closely related to our work.

    2.1 Traditional methods

Traditional lane boundary detection algorithms are based on highly-specialised, hand-crafted low-level features [6–10]. Methods exploiting structure tensors [11], color-based detectors [12], and bar filters [13] have achieved reasonable results; they are typically combined with Hough transforms [14] or Kalman filters [13]. When using such traditional techniques, post-processing is needed to weed out false detections and to cluster lane boundary segments into final lines [7]. The weakness of traditional methods is that the models cannot handle complex circumstances. For example, occlusion, which is ever-present in real data [4, 15], presents difficulties for pattern-recognition-based methods: filters extracting image features fail. Traditional methods are thus insufficient for the complex situations encountered in real driving conditions.

    2.2 Neural-network-based methods

Deep learning has recently become mainstream in this domain due to its better results [16, 17]. The perception module of traditional approaches is replaced by a neural network, to overcome the problems mentioned above. Huval et al. [18] proposed an early deep-learning-based method, but lacked suitable large-scale datasets for training. Spatial CNN [4] is now the state-of-the-art solution. It uses a specially designed architecture of convolution layers, enabling message passing between pixels across rows and columns within a layer. That paper also provided a brand-new large-scale dataset. However, the limited number of message-passing directions results in predicted lane boundaries with unsatisfactory directions.

Methods predicting lane boundaries with the assistance of other information are now a new trend. For instance, some research [19, 20] has indicated that continuous driving scenes could provide information to constrain the predicted lane boundaries. However, these works place significant demands on dataset preparation. 3D vision has also been introduced to the lane boundary detection domain [21, 22] with reasonable results, but a major problem remains the costliness and rarity of both devices and datasets. Lee et al. [23] proposed a new way of predicting vanishing points to guide lane boundary prediction, giving a new approach to object-guided lane boundary segmentation.

    2.3 Lane boundary parameterization

LaneNet [24] proposed a lane boundary detection framework that tackles lane boundary detection as an instance segmentation problem and also parametrizes the lane boundary by training an H-Net for view transformation. However, this work does not make good use of the predicted parameters, which are only used to generate the output and cannot be trained end-to-end. Other works, like Refs. [25, 26], utilized similar methods, fitting a parabola or spline to each lane boundary, but some lane boundaries are so irregular that such curves cannot provide a good fit.

    2.4 Feature map deformation

A common method [27] is to rotate the convolution kernel, making the output of the neural network rotationally invariant. However, our approach needs features of long strip shape to detect lane boundaries, rather than rotational invariance. Rotational invariance is typically of benefit in limited specific situations, like fingerprint recognition [28], galaxy morphology prediction [29], etc. Since the message-passing technique of Spatial CNN [4] suffers from the signal decay discussed earlier, we have to find another solution.

    3 SpinNet

In this section, we propose a framework for lane boundary detection and positioning, called SpinNet. SpinNet introduces two new ideas that are easy to implement.

    3.1 Overall structure

SpinNet is an end-to-end trainable neural network which employs VGGNet [30] as its backbone model. The structure is shown in Fig. 1. To ensure high resolution in the output feature map, we discard all fully-connected layers and the last two pooling layers (pool4 and pool5), and change the dilation rates of all convolution layers in stage 5 from 1 to 2 to enlarge the receptive field. The proposed spinning convolution layer, whose purpose is to extract features along a long narrow field in a series of directions, is deployed after conv5 in VGGNet. After the spinning convolution layer, we also use a Spatial CNN layer, following Ref. [4], to further enlarge the receptive field. A fully-connected layer, followed by a softmax layer, is then used to predict a lane boundary existence vector $e_i$, representing the probability that the $i$-th lane boundary is present in the image. After upsampling the feature map to the same size as the input image, a convolution layer predicts a map giving the probability of pixels being covered by lane boundaries, a coefficient matrix determining the parameters of the curve belonging to each pixel, and a coefficient confidence map.
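As a minimal illustration of the backbone modification, the dilated stage 5 could look like the following Keras-style sketch (the helper function and layer names are illustrative, not the authors' code):

```python
import tensorflow as tf

def dilated_stage5(x, filters=512):
    """Stage-5 convolutions with dilation rate 2 instead of 1; pool4/pool5
    and the fully-connected layers of VGGNet are simply not added, so the
    feature map keeps a high resolution while the receptive field grows."""
    for i in range(3):
        x = tf.keras.layers.Conv2D(filters, 3, padding='same',
                                   dilation_rate=2, activation='relu',
                                   name=f'dilated_conv5_{i + 1}')(x)
    return x
```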

    3.2 Spinning convolution layer

Classical convolution kernels are usually square, e.g., 3×3, 5×5, etc. Using larger square kernels can enlarge the receptive field of a network. However, in our specific task, covering a whole lane boundary with huge square convolution kernels would lead to unacceptable computational costs. Furthermore, such square convolution layers would have so many parameters that they would be difficult to train. In practice, a lane boundary only occupies a small proportion of a square area, as lane boundaries are narrow strips. A better approach to detecting these objects is to use strip-shaped narrow convolution kernels, e.g., 1×n or n×1.

To collect information about objects, Spatial CNN [4] borrowed ideas from RNN, passing messages inside its layers. However, its critical weakness is the message decay occurring in its message-passing procedure.

Fig. 1 The network structure of SpinNet and an illustration of the proposed spinning convolution. (A) shows the whole network structure of SpinNet, in which Conv_1 to Dilated_Conv_5 form the VGGNet backbone, and the elements in the dotted box together with the "Existence Vector" are the outputs of SpinNet.

Our specially-designed convolution kernels possess three clear advantages over square-shaped convolution kernels and the SCNN technique. First, strip-shaped kernels provide a set of long narrow receptive fields in a series of directions. Second, these kernels require fewer network parameters: while convolution layers with 3×3 and 9×1 kernels have the same number of parameters, the latter has a much bigger receptive field than the former for the specific task of lane boundary detection. Third, our kernels are decay-free, so information is not easily lost.

However, lane boundaries do not lie only in these two specific directions, horizontal and vertical: their directions are arbitrary. To overcome this problem, we have designed a method of rotating feature maps: a tilted lane boundary becomes vertical or horizontal in one of the rotated feature maps, allowing it to be detected by vertical or horizontal convolution kernels. Although lane boundaries may not always be straight, long strip kernels at different angles in different positions of the feature map can fit a curve properly. Combining the above two concepts in a single layer gives our spinning convolution layer. Sample outputs of our spinning convolution are shown in Fig. 2.

The architecture of this spinning convolution layer is based on an ordinary convolution layer in which we replace the square convolution kernels by our specially designed strip kernels. The layer takes the feature map generated by the backbone network as input. We first pad the feature map; the rotation operation is then applied to the padded feature map using a series of angles, each different angle giving a new padded and rotated feature map. The strip convolution layer is then applied to each of these feature maps, generating a series of new feature maps. We then perform an inverse rotation operation on each, and crop out the padding. Finally, we concatenate these outputs to form the final output of our spinning convolution layer. The process described above is shown in Fig. 1(B), in which $\alpha$ is a hyperparameter determining the spinning angle. We use five spinning convolutions with different $\alpha$ to form a spinning convolution stack.
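The following NumPy/SciPy sketch illustrates this pad, rotate, strip-convolve, unrotate, crop, and concatenate pipeline. It is an illustration only: a real implementation would use a deep learning framework's differentiable ops and learned kernels, whereas here a fixed 1×12 averaging kernel stands in for a learned strip kernel:

```python
import numpy as np
from scipy.ndimage import correlate, rotate

def spinning_conv(fmap, kernel, angles, pad=32):
    """fmap: (H, W, C) feature map; kernel: (1, n) strip kernel, shared
    across channels for simplicity; angles: rotation angles in degrees.
    Returns the concatenation, over angles, of rotate -> strip conv -> unrotate."""
    h, w, c = fmap.shape
    padded = np.pad(fmap, ((pad, pad), (pad, pad), (0, 0)))
    outs = []
    for a in angles:
        # rotate the padded feature map so boundaries at angle a align with the kernel
        rot = rotate(padded, a, axes=(0, 1), reshape=False, order=1)
        # strip-shaped convolution: a 1 x n kernel applied to every channel
        conv = np.stack([correlate(rot[..., i], kernel, mode='constant')
                         for i in range(c)], axis=-1)
        # inverse rotation, then crop away the padding
        back = rotate(conv, -a, axes=(0, 1), reshape=False, order=1)
        outs.append(back[pad:h + pad, pad:w + pad, :])
    return np.concatenate(outs, axis=-1)  # one slice per spinning angle

# e.g., five angles as in the spinning convolution stack:
# out = spinning_conv(fmap, np.ones((1, 12)) / 12, [-60, -30, 0, 30, 60])
```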

Many approaches before ours aim to collect straight lane boundary information. However, there are several lane boundaries in one input image, and the boundaries carry spatial information and even correlations with each other. Cropping the input to detect each lane boundary separately is therefore inappropriate, as it discards this spatial information and these correlations. Additionally, if we rotated the feature map or input image only once at the beginning of the network, we would be unable to apply the strip-shaped convolutions along multiple directions, and thus unable to collect information along various directions explicitly. As mentioned above, the directions of our strip-shaped kernels are fixed to two, horizontal and vertical; rotating the feature maps is thus precisely what allows strip-shaped convolution along non-horizontal/vertical directions, and this is the original purpose of the rotation.

Fig. 2 The resulting feature maps of spinning convolution. Given strip-shaped kernels, convolution performed in the traditional way can only use horizontal and vertical kernels, but in our proposed method, kernels can effectively spin to arbitrary angles, as shown in the upper-right corner of each feature map. (a) is the original input image. (b–f) are the output feature maps of the spinning convolution stack; each feature map has 64 channels and their mean values are shown. The kernels are rotated by −60, −30, 0, 30, and 60 degrees respectively. Clearly, a kernel with a certain rotation angle collects more edge information from lane boundaries in that direction.

Our method of rotating the feature maps can thus be expected to give better results than these traditional techniques, since spatial information and the correlations between lane boundaries are properly preserved, while the computational cost increases only slightly.

    3.3 Parameterization branch

Various works [24–26] treat lane boundary parameterization as a problem of fitting the lane boundary with a spline or parabola. However, all of the above methods share two common defects. Firstly, lane boundary parameterization is treated as a process separate from lane boundary semantic segmentation. In fact, the existing parameterization process is a hand-crafted algorithm using only the segmentation mask as input; it is unable to exploit the rich information in network feature maps. Instead, the spatial information for these two tasks can be exploited simultaneously inside the network, so that lane boundary prediction and coefficient prediction can work together to provide mutual assistance, and the network can be trained end-to-end. In our method, we combine these two tasks in one end-to-end network. Secondly, all previous works predict only a single set of coefficients, i.e., a single curve, for each lane boundary in an image. Commonly, lane boundaries are long smooth shapes, but in complex situations, their trajectories may be highly curved or irregular. Fitting a single curve with a few simple global coefficients, typically a low-order curve, will not give satisfactory results. In such situations, it is necessary to predict lane boundary curves locally, and then concatenate these local curves into the final complex lane boundary. We demonstrate later how we solve this problem.

We mitigate the first defect by combining parameterization and segmentation into one organic system: we generate two results using two branches attached to the same backbone. During training, gradient back-propagation influences the backbone through two paths, a semantic segmentation branch and a parameterization branch, allowing it to utilize the rich information in the feature map for both tasks.

To alleviate the second defect, we force the network to predict not only local probability information, but also global curve coefficients, at each pixel of the feature map. By doing so, we explicitly make every pixel predict the curve from a global perspective.

After computing curve coefficients pixelwise, the next step is to acquire global lane boundary information from them. A curve can be represented by a set of coefficients; let these coefficients be $P_0, \cdots, P_n$. The curve
$$x = \sum_{i=0}^{n} P_i\, y^i$$
then represents that lane boundary.

Let us first define the local curve. Unlike the global curve, whose origin is at the upper left of the image, the origin of the local curve (relative to a certain pixel) is at the corresponding pixel in the image itself.

Fig. 3 The curve aggregation pipeline. (a) shows part of an input picture. There are 4×4 grids; each grid indicates one pixel in the output feature maps and a square area in the original picture. The green horizontal line is the virtual baseline on which we are trying to solve for the intersection point. A grid is shown in orange only if it has the maximum M_i on a virtual horizontal baseline. (b) shows the separate curves Q_i generated from the four orange grids. These curves have intersections x_{i,2} with the green horizontal baseline, shown in (c). These intersections are summed after being weighted by their confidences and the distances between their baselines and the green one, generating the final answer x_2 on the green baseline. Repeating this process, we obtain all the x_j, and then perform simple post-processing to draw the predicted lane boundary, shown in orange in (d).

Overall, we first treat the whole lane boundary as a combination of several local curves, which are simple to generate from the dataset by polynomial fitting. As the dataset provides some key points on the lane boundary, we choose one key point and its neighbors to determine the local curve coefficients near that key point by polynomial fitting; other key points are handled similarly. This is a simple pre-processing task by which we generate the ground truth for the lane boundary coefficients. We then add a branch in parallel with the existing segmentation branch, whose aim is to generate the local curve coefficients at each pixel.
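As an illustration of this pre-processing step, the following NumPy sketch fits a local parabola $x = ay^2 + by + c$ around each key point (the neighborhood size k is an assumed choice; dataset loading is omitted):

```python
import numpy as np

def local_curve_gt(points, k=2):
    """points: (N, 2) array of (x, y) key points ordered along one lane
    boundary (N >= 3). For each key point, fit x = a*y^2 + b*y + c to the
    point and its neighbors, in a frame whose origin is the key point."""
    coeffs = []
    for i, (xk, yk) in enumerate(points):
        lo, hi = max(0, i - k), min(len(points), i + k + 1)
        nbr = points[lo:hi]
        ys = nbr[:, 1] - yk               # shift origin to the key point
        xs = nbr[:, 0] - xk
        a, b, c = np.polyfit(ys, xs, 2)   # highest-order coefficient first
        coeffs.append((a, b, c))
    return np.array(coeffs)
```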

A straightforward way to generate the set of coefficients of a lane boundary would be to simply add several dense layers after the output. But this method loses the spatial information in the feature map. Instead, for each pixel, the coefficients of the local lane boundary are easy to predict, as this prediction only requires adjacent information. Furthermore, the accuracy of local prediction is much better, since each pixel only needs to predict the lane boundary nearby. We can thus use a feature-map-based pixelwise coefficient computation branch.

As the network only predicts local coefficients while the global coefficients are easy to acquire from the dataset, we need a simple global-to-local transformation. This can be performed by
$$f_k(y) = f(y + y_k) - x_k$$
where $f$ is the global curve function and $f_k$ is the local curve function whose origin is at pixel $k$: $(x_k, y_k)$. Solving the equation above, we can easily get the new lane boundary curve coefficients $P_{k,i}$, with origin at pixel $k$. Preliminary experiments showed that using $n = 1$, i.e., fitting straight lines to lane boundaries, is not sufficient, while using $n > 2$ gives imperceptible further improvement. Thus we set $n = 2$, i.e., we fit parabolas. In our approach, a parameterization branch is connected after the output layer of our backbone to produce a pixelwise coefficient matrix.
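Under the transformation as reconstructed above, the $n = 2$ case can be worked out explicitly. If the global parabola is $x = f(y) = Ay^2 + By + C$, then expanding $f_k(y) = f(y + y_k) - x_k$ gives the local coefficients in closed form:
$$a = A, \qquad b = 2Ay_k + B, \qquad c = Ay_k^2 + By_k + C - x_k$$
so the ground-truth local coefficients at any pixel follow from the global fit by a simple shift of origin.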

One might suspect a paradox here: since our features have passed through the spinning pipeline, it may seem that only straight-edge information is preserved. This is a misunderstanding: the shape of the convolution kernels only determines the shape of the sampling points, and the ability to predict objects of other shapes is not lost. In Section 4, we show by experiments that strip-shaped convolution kernels have a better capability for detecting lane boundaries.

    3.4 Curve aggregation

As described above, the neural network generates local coefficient matrices: for each pixel of the feature map, there is a set of coefficients indicating the local lane boundary curve. To merge this set of local curves into final lane boundaries, we design a curve aggregation algorithm.

Curve aggregation is a weighted averaging process. For a point $(x, y_i)$ which lies on the virtual horizontal baseline $y = y_i$ in the final lane boundary, the target coordinate $x$ is the weighted average of the horizontal coordinates of the intersections of the local curves with the virtual baseline. The weights are positively correlated with the confidences of the local curves predicted by the network, and negatively correlated with the distances between the pixels and the virtual baseline. Therefore, the more accurate a curve is, the more it contributes to the final lane boundary; moreover, every pixel is mainly responsible for the section of the final lane boundary close to it. We therefore propose the following method to generate the key points of a lane boundary.

So far, the parameterization branch has provided: a probability map $\bar{M} \in \mathbb{R}^{N_C \times 4}$, where $\bar{M}_i$ is the probability that pixel $i$ is covered by a final lane boundary and $N_C$ is the number of pixels; a coefficient map $P \in \mathbb{R}^{N_C \times 3}$, where $P_i$ is the vector of coefficients of the curve at $i$; and a confidence map $F \in \mathbb{R}^{N_C}$, where $F_i$ is the confidence in the curve coefficients at $i$ as predicted by the network. The detailed definition and implementation of the confidence map $F$ can be found in Section 3.5. We process the four lane boundaries separately, so from now on we split $\bar{M}$ into four slices and process each slice $M \in \mathbb{R}^{N_C}$ separately. Since $\bar{M}$ already distinguishes the four lane boundaries, $F$ and $P$ can be reused across lane boundaries, so there is no need for $F$ or $P$ to possess four channels.

We first set $N_b$ virtual horizontal baselines evenly located in the image, at $y = j \times C$, $j = 0, \ldots, N_b - 1$, where $C$ is the baseline interval. The result of our algorithm is a set of points lying on these virtual horizontal baselines. For each baseline, we find the maximum $M_i$; the corresponding values of $i$ indicate the pixels most likely to lie on the lane boundary, so we call them key-points from now on. However, we ignore any such points for which $M_i$ is below a threshold, as they are unlikely to be covered by a lane boundary. Let $K_i$ be the $i$-th key-point, with $y$ value $y_{K_i}$. For each key-point $K_i$, we can form the curve $Q_{K_i}$ from the coefficients $P_{K_i}$. These curves in general intersect at least some of the virtual horizontal baselines. Let the $x$-coordinate of the intersection of $Q_{K_i}$ with $y = j \times C$, $j = 0, 1, \ldots, N_b - 1$, be $x_{K_i,j}$. The coordinate $x_j$ of the result for $y = j \times C$ is then the confidence- and distance-weighted average
$$x_j = \frac{\sum_i F_{K_i}\, \alpha^{|y_{K_i} - jC|}\, x_{K_i,j}}{\sum_i F_{K_i}\, \alpha^{|y_{K_i} - jC|}}$$
where $\alpha$ is a constant. Thus, $(x_j, jC)$ is one of our result points. After finding such points for all virtual horizontal baselines, the set of points $(x_j, jC)$ is the final output.
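A minimal sketch of this aggregation, assuming the weighted-average form reconstructed above (the threshold, baseline interval, and α values below are illustrative, not the paper's tuned settings):

```python
import numpy as np

def aggregate_curves(M, P, F, C=16, alpha=0.9, thresh=0.3):
    """M: (H, W) probability map for one lane boundary.
    P: (H, W, 3) per-pixel local parabola coefficients (a, b, c).
    F: (H, W) per-pixel coefficient confidence.
    Returns a list of (x_j, y_j) points, one per virtual baseline y = j*C."""
    H, W = M.shape
    # one key-point per baseline: the most probable lane pixel on that row
    keys = []
    for j in range(H // C):
        row = j * C
        i = np.argmax(M[row])
        if M[row, i] > thresh:
            keys.append((row, i))
    points = []
    for j in range(H // C):
        y_b = j * C
        num, den = 0.0, 0.0
        for (yk, xk) in keys:
            a, b, c = P[yk, xk]
            # intersection of the local curve with baseline y = y_b,
            # evaluated in the key-point's local frame, then shifted back
            dy = y_b - yk
            x_int = a * dy**2 + b * dy + c + xk
            w = F[yk, xk] * alpha ** abs(y_b - yk)  # confidence * distance decay
            num += w * x_int
            den += w
        if den > 0:
            points.append((num / den, y_b))
    return points
```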

    3.5 Loss function

The loss function $L$ should be able to refine the probability maps and the coefficient maps simultaneously. To do so, it must be a weighted sum of multiple parts: a binary segmentation loss $Loss_{\mathrm{bin}}$, an existence loss $Loss_{\mathrm{ex}}$, a coefficients loss $Loss_{\mathrm{par}}$, and a confidence loss $Loss_F$, where the weighting coefficients are hyperparameters which must be tuned.

The first part of the loss is the lane boundary binary segmentation loss $Loss_{\mathrm{bin}}$: a softmax cross-entropy between the prediction and the ground truth of the semantic segmentation task. The second part is the lane boundary existence loss $Loss_{\mathrm{ex}}$; this is essentially a binary classification task, so again a cross-entropy suffices. The third part is the lane boundary coefficients loss $Loss_{\mathrm{par}}$. In one step of forward propagation, after finding $\bar{M}$, we are able to categorize each pixel. Thus, if a pixel $k$ belongs to lane boundary $j$, its coefficients $P_k$ should be as close as possible to the ground truth $P_{\mathrm{gt},j,k}$, the ground truth coefficients for lane $j$ transformed to have origin at $k$. We may then calculate the loss as
$$Loss_{\mathrm{par}} = \sum_{k} \left| P_k - P_{\mathrm{gt},j,k} \right|$$
where $|\cdot|$ denotes the $L_1$ loss and the sum runs over pixels belonging to lane boundaries. As we treat the curves as parabolas, in the following we simply replace $P_2$ by $a$, $P_1$ by $b$, and $P_0$ by $c$.

There are two tasks to perform at each pixel: to predict the semantic segmentation of the lane boundary, and to predict the confidence in the coefficients of the lane boundary passing through it. While it seems obvious that these two tasks are connected, they do not in fact constitute a sound causal relationship, as a pixel with precise coefficient predictions may lie outside any lane boundary. So, we need a measure of the confidence in coefficient accuracy to serve as the contribution factor mentioned in the previous section. At pixel $k$, the ground truth value $F_{\mathrm{gt},j,k}$ for lane boundary $j$ and origin at pixel $k$ is defined from the coefficient errors, as
$$F_{\mathrm{gt},j,k} = \exp\!\left(-\left(|b_k - b_{\mathrm{gt},j,k}| + |c_k - c_{\mathrm{gt},j,k}|\right)\right)$$

    Our experiments indicate thatbandcare essential to lane boundary prediction, and the loss forbandcis key to determining the confidence.

The loss function for $F$ is then easy to compute:
$$Loss_F = \sum_{k} \left| F_k - F_{\mathrm{gt},j,k} \right|$$

We can then obtain the total loss:
$$L = C_{\mathrm{bin}}\, Loss_{\mathrm{bin}} + C_{\mathrm{ex}}\, Loss_{\mathrm{ex}} + C_{\mathrm{par}}\, Loss_{\mathrm{par}} + C_{F}\, Loss_{F}$$
where the $C$s are hyperparameters to fine-tune.
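Putting the reconstructed pieces together, a NumPy sketch of how this total loss could be assembled (the dictionary keys and C_* weights are illustrative placeholders, not the paper's tuned values):

```python
import numpy as np

def total_loss(pred, gt, C_bin=1.0, C_ex=0.1, C_par=1.0, C_F=0.5):
    """pred/gt hold per-image arrays: 'mask' (H, W) lane probability,
    'exist' (4,) existence vector, 'coeff' (H, W, 3), 'conf' (H, W)."""
    eps = 1e-7
    # binary segmentation: pixelwise cross-entropy
    loss_bin = -np.mean(gt['mask'] * np.log(pred['mask'] + eps)
                        + (1 - gt['mask']) * np.log(1 - pred['mask'] + eps))
    # lane existence: cross-entropy on the existence vector
    loss_ex = -np.mean(gt['exist'] * np.log(pred['exist'] + eps)
                       + (1 - gt['exist']) * np.log(1 - pred['exist'] + eps))
    # coefficients: L1, only on pixels that belong to a lane boundary
    on_lane = gt['mask'] > 0
    loss_par = np.mean(np.abs(pred['coeff'][on_lane] - gt['coeff'][on_lane]))
    # confidence: L1 against the confidence ground truth
    loss_F = np.mean(np.abs(pred['conf'] - gt['conf']))
    return C_bin * loss_bin + C_ex * loss_ex + C_par * loss_par + C_F * loss_F
```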

    4 Experiments

In this section, we show the efficacy of SpinNet on the large-scale CULane dataset. Comparisons with state-of-the-art methods show that our proposed framework outperforms all prior lane detection methods. To further analyze the effectiveness of each component of SpinNet, we have performed a series of ablation experiments; the effects of some key hyperparameters are also shown.

    4.1 Implementation details

    4.1.1 Training and testing

The dataset only provides points located evenly along each lane boundary. To generate ground truths for the lane boundary coefficients, we performed parabola fitting to these point sets.

During training, the original image, binary masks of the corresponding lane boundaries, lane boundary existence vectors, and curve coefficient sets are fed into our end-to-end network. During testing, in order to judge whether a lane boundary is correctly detected, we regard the lane boundaries as polyline segments with a width of 30 pixels, and then calculate the intersection-over-union (IoU) between the ground truth and the prediction. Predictions whose IoUs are greater than a threshold (0.5) are regarded as true positives (TP). This enables us to assess our method using the F1 measure.
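A sketch of this evaluation protocol, using OpenCV to rasterize the 30-pixel-wide polylines (the helper is ours, and CULane's official evaluation code differs in detail):

```python
import cv2
import numpy as np

def lane_iou(pred_pts, gt_pts, shape, width=30):
    """Rasterize two boundaries as 30-pixel-wide polylines and compute IoU.
    pred_pts / gt_pts: (N, 2) arrays of (x, y) points along each boundary;
    shape: (H, W) of the evaluation canvas."""
    pred = np.zeros(shape, np.uint8)
    gt = np.zeros(shape, np.uint8)
    cv2.polylines(pred, [pred_pts.astype(np.int32).reshape(-1, 1, 2)],
                  False, 1, thickness=width)
    cv2.polylines(gt, [gt_pts.astype(np.int32).reshape(-1, 1, 2)],
                  False, 1, thickness=width)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # IoU > 0.5 against a ground-truth boundary counts as a true positive
    return inter / union if union else 0.0
```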

    4.1.2 Hyper-parameters

Our proposed network was implemented in TensorFlow [31]. The input images were not augmented. The hyper-parameters were set as follows: momentum 0.9, learning rate 0.05, weight decay 0.0001. We trained our network on a single Nvidia GTX 1080 Ti GPU for 180k iterations.

    4.2 Ablation study and setup

    4.2.1 The effect of spinning convolution and parameterization branch

To evaluate the effectiveness of spinning convolution, we replaced this operation by a traditional convolution stack. The results in line "w/o Spin-Conv" of Table 1 indicate that spinning convolution contributes 1.1 percentage points to the overall performance.

To evaluate the effectiveness of our lane boundary parameterization branch, we disabled local curve prediction in our model and extracted curves from lane boundary segmentation masks, following existing methods such as SCNN [4]. The results in line "w/o Parameterization" of Table 1 indicate that the parameterization branch clearly benefits lane boundary detection performance.

    We also disabled both operations; the inferior experimental results further demonstrate the effectiveness of our proposed operations.

    4.2.2 Spinning angles

To better understand our spinning convolution layer and suggest the best way to use it, we determined how the choice of spinning angles affects the results. We used 3, 5, or 7 simultaneous spinning convolutions in each experiment, with angles chosen as in Table 2. The experiments indicate that rotation angles affect lane boundary detection performance: using only small spinning angles does not give the best result, and too many or too few rotation directions also harm performance. The experiments show that (±60°, ±30°, 0°) is the optimal angle setting we have found, implying that evenly distributing the spinning angles is a good strategy.

Table 1 Ablation study of our proposed SpinNet. "w/o Spin-Conv" gives the performance when spinning convolution is replaced with a traditional convolution stack. "w/o Parameterization" means disabling the parameterization branch and generating lane boundary predictions from the segmentation mask. The experiments show that SpinNet achieves its best performance when both proposed components are used

Table 2 Performance of SpinNet when using different rotation angles in spinning convolution. (±20°, ±10°, 0°) means that there are five sub-branches in the spinning convolution stack, with rotation angles from −20° to 20°

    4.2.3 Spinning convolution kernel size

A kernel of size 1×n is used in the spinning convolution, and its size n controls the range of the receptive field of this convolution layer. In theory, a large receptive field usually benefits performance, but requires more parameters, which may increase training difficulty. We performed an experiment to find the optimal size of spinning convolution kernel. The results, shown in Table 3, reveal that a kernel of size 1×12 achieves the best result; a shorter or longer kernel harms the overall performance.

Table 3 Performance of SpinNet when using strip-shaped large kernels of different sizes in spinning convolution. The kernels used have size 1×n. The experiment reveals that kernels which are too long or too short harm performance, and a 1×12 kernel is most appropriate for our task

    4.3 Comparison with state-of-the-art

We compared our proposed method with existing lane boundary detection methods on the CULane dataset. Table 4 lists the results; performance is measured by F1-score. The table also compares efficacy in particular scenarios. It is clear that our new approach, SpinNet, achieves the best overall results, improving on the baseline results of Zhang et al. [34] and LineNet [35] by 1.1 percentage points.

In addition to the quantitative evaluation, some visual results are shown in Fig. 4. We label ground-truth lines in green, true positive predictions in blue, and incorrect predictions in red. Our SpinNet is able to handle crowded roads with evident occlusion. As for failure cases, we find that it is difficult to detect lane boundaries fully occluded by cars.

    5 Conclusions

This paper has presented a lane boundary detection framework including a novel spinning convolution layer and a new lane boundary parameterization branch. The spinning convolution layer with strip kernels is able to extract features in long narrow fields along a series of directions. The parameterization branch predicts a whole curve for each pixel in the output feature map. Experiments show that both operations improve the performance of a lane detection network, and that the proposed framework outperforms prior methods on the CULane dataset.

Table 4 Lane boundary detection results on the CULane dataset (F1-measure). The columns from "Normal" to "Curve" compare effectiveness in particular scenarios. The column "Total" shows overall performance on the whole CULane test set, indicating that our SpinNet achieves a new state-of-the-art result

Fig. 4 Selected examples produced by our SpinNet. Green lines are the ground truths. Blue lines are predicted true positive lane boundaries, while red lines are incorrect predictions. The figure shows that our framework works well even on crowded roads with obvious occlusion and in scenes with low contrast.

    Acknowledgements

This work was supported by the National Natural Science Foundation of China (Project No. 61572264), a Research Grant of the Beijing Higher Institution Engineering Research Center, and the Tsinghua–Tencent Joint Laboratory for Internet Innovation Technology.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

    The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use,you will need to obtain permission directly from the copyright holder.

    To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
