Abstract
This paper describes an approach for generating a steerable environment directly from a video, for integration with a video-based driving simulator. Because the range of steering motion in a driving simulator is relatively limited, a pseudo-three-dimensional approach can be taken. The method requires only a single image sequence or video, acquired by any type of imaging system along a road; no three-dimensional, stereo, or visual odometry data is acquired or calculated. An experiment involving multiple lane-change requests is then presented, in which participants are asked to change from the left-hand lane to the right-hand lane and back again.