A humanoid robot that can go up and down stairs, crawl underneath obstacles, or simply walk around requires reliable perceptual capabilities for obtaining accurate and useful information about its surroundings. In this work we present a system for generating three-dimensional (3D) environment maps from stereo vision data. At its core is a method for precise segmentation of range data into planar segments, based on the scan-line grouping algorithm extended to cope with the noise dynamics of stereo vision. In off-line experiments we demonstrate that our extensions achieve a more precise segmentation. Compared to a previously developed patch-let method, we obtain a richer and more accurate segmentation while requiring far less computation. From the resulting segmentation we then build a 3D environment map using an occupancy grid and floor height maps. The resulting representation classifies areas into one of six different types while also providing object height information. We apply our perception method to the navigation of the humanoid robot QRIO and present experiments of the robot stepping through narrow spaces, walking up and down stairs, and crawling underneath a table.
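To illustrate the kind of representation described above, here is a minimal sketch (not the authors' implementation) of a grid map that combines occupancy information with a floor height map: each cell stores an estimated floor height and a coarse type label. The specific cell types, resolution, and update rule below are illustrative assumptions, not taken from the paper.

```python
from enum import Enum

class CellType(Enum):
    # Hypothetical cell classes; the paper's six types may differ.
    UNKNOWN = 0
    FLOOR = 1
    STAIRS = 2
    OBSTACLE = 3
    TUNNEL = 4   # free space under an overhanging object (e.g. a table)
    BORDER = 5

class HeightGridMap:
    """2D grid storing a floor height and a type label per cell."""

    def __init__(self, width_cells, depth_cells, resolution=0.05):
        self.resolution = resolution  # cell edge length in meters
        self.height = [[0.0] * width_cells for _ in range(depth_cells)]
        self.type = [[CellType.UNKNOWN] * width_cells
                     for _ in range(depth_cells)]

    def _index(self, x, y):
        # Convert metric coordinates to grid indices.
        return int(y / self.resolution), int(x / self.resolution)

    def update(self, x, y, z, cell_type):
        """Record a classified measurement at (x, y) with floor height z."""
        i, j = self._index(x, y)
        self.height[i][j] = z
        self.type[i][j] = cell_type

    def query(self, x, y):
        """Return (floor height, cell type) for the cell containing (x, y)."""
        i, j = self._index(x, y)
        return self.height[i][j], self.type[i][j]
```

A navigation module could then, for example, treat `TUNNEL` cells as traversable only in a crawling posture and read the height map to decide whether stepping up is feasible.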