Abstract
A reliable, up-to-date, and spatially accurate sidewalk dataset is vital for identifying where the urban environment can be improved to enhance multi-modal accessibility, social cohesion, and residents' physical activity. This paper develops a new spatial procedure that extracts sidewalks by integrating detection results from aerial and street view imagery. We first train neural networks to extract sidewalks from aerial images, and then use pre-trained models to restore occluded and missing sidewalks from street view images. Combining the results from both data sources produces a complete sidewalk network. In a case study of four U.S. counties, both precision and recall reach about 0.9. The street view imagery helps restore occluded sidewalks and substantially improves the network's connectivity, linking 20% of dangles.
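The full method is not reproduced in this abstract, but the connectivity claim can be illustrated with a toy sketch: if sidewalk segments are modeled as graph edges, a "dangle" is an endpoint of degree one, and adding street-view-derived segments that span occlusion gaps reduces the dangle count. All data and names below are hypothetical, not from the paper.

```python
from collections import Counter

def dangles(edges):
    """Return endpoints of degree 1 (dangling segment ends)."""
    deg = Counter()
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return [node for node, d in deg.items() if d == 1]

# Hypothetical aerial detections with a gap (e.g., tree canopy occlusion)
aerial = [((0, 0), (1, 0)), ((2, 0), (3, 0))]
# Hypothetical segment restored from street view imagery, bridging the gap
street_view = [((1, 0), (2, 0))]

merged = aerial + street_view
```

Here merging drops the dangle count from 4 to 2, since the restored segment converts two gap endpoints into through-nodes; this is the sense in which linking dangles improves connectivity.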
