Abstract
Lane detection algorithms play a key role in Advanced Driver Assistance Systems (ADAS), yet existing algorithms struggle to recognize lanes accurately in low-light environments. This paper presents a novel deep network structure, LLSS-Net (Low-Light Semantic Segmentation Network), for accurate lane detection in low-light environments. The method integrates a convolutional neural network for low-light image enhancement with a semantic segmentation network for lane detection. Image quality is first improved by the low-light enhancement network, and lane features are then extracted by semantic segmentation. Finally, fast lane clustering is performed using k-d tree models. The Cityscapes and TuSimple datasets are used to demonstrate the robustness of the proposed method. Experimental results show that the proposed method performs excellently for lane detection on low-light roads.
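The final clustering stage described above groups segmented lane pixels into individual lane instances using a k-d tree. As a minimal sketch of that idea (the function name, radius parameter, and region-growing strategy are assumptions for illustration, not the authors' exact implementation), lane points can be grouped by radius-based growth over a SciPy `cKDTree`:

```python
# Illustrative sketch: grouping lane pixels into lane instances with a
# k-d tree. The helper name, embedding input, and radius threshold are
# assumptions, not the paper's exact implementation.
from collections import deque

import numpy as np
from scipy.spatial import cKDTree


def cluster_lane_points(points, radius=1.5):
    """Assign a lane label to each point via radius-based region growing.

    points: (N, D) array of pixel coordinates or embedding vectors
    returns: (N,) array of integer cluster labels
    """
    tree = cKDTree(points)  # O(N log N) build enables fast radius queries
    labels = np.full(len(points), -1, dtype=int)
    current = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue  # already assigned to a lane
        # Breadth-first growth: absorb all neighbours within `radius`.
        queue = deque([seed])
        labels[seed] = current
        while queue:
            idx = queue.popleft()
            for nb in tree.query_ball_point(points[idx], r=radius):
                if labels[nb] == -1:
                    labels[nb] = current
                    queue.append(nb)
        current += 1
    return labels
```

Because each neighbourhood query is logarithmic in the number of points, this kind of k-d tree lookup is what makes the clustering step fast compared with pairwise distance comparisons.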
