Abstract
Perception is a key capability on the path to autonomous robots. Among the perceptual senses, vision is undoubtedly the most important for the richness of information it can provide. However, identifying what is seen from the available visual input is far from trivial. In this regard, inspired by human perception, we study motion as a primary cue. In particular, we present a computational solution for motion detection, object localization, and tracking from images captured by perspective and fisheye cameras. The proposed approach has been validated through an extensive set of experiments and applications on different testbeds in real environments with real and/or virtual targets.