Abstract
Robot manipulators typically rely on complete knowledge of object geometry to plan motions and compute grasps. However, when an object is not fully in view, it can be difficult to form an accurate estimate of the object's shape and pose, particularly when the object deforms. In this paper we describe a generative model of object geometry based on Mardia and Dryden's “Probabilistic Procrustean Shape”, which captures both non-rigid deformations and variability within an object class. We extend their shape model to the setting where point correspondences are unknown using Scott and Nowak's COPAP framework. We use this model to recognize objects in a cluttered image and to infer their complete two-dimensional boundaries with a novel algorithm called OSIRIS. We show examples of learned models from image data and demonstrate how the models can be used by a manipulation planner to grasp objects in cluttered visual scenes.
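As background, the Procrustean shape representation the abstract refers to compares point-set boundaries after factoring out translation, scale, and rotation. Below is a minimal sketch (not the paper's implementation, and assuming known point correspondences, which the paper specifically relaxes via COPAP) of the ordinary full Procrustes distance between two 2-D contours:

```python
import numpy as np

def procrustes_distance(X, Y):
    """Full Procrustes distance between two 2-D point sets with known
    correspondences (row i of X matches row i of Y). Illustrative only."""
    # Remove translation: center both configurations at the origin.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Remove scale: normalize each configuration to unit Frobenius norm.
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    # Optimal rotation aligning Yc to Xc, from the SVD of the
    # cross-covariance matrix (orthogonal Procrustes problem).
    U, _, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt
    # Residual distance after optimal alignment.
    return np.linalg.norm(Xc - Yc @ R)

# A square and a rotated, translated, scaled copy have distance ~0;
# a deformed copy has a strictly larger distance.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
moved = 2.5 * square @ rot.T + np.array([3.0, 1.0])
deformed = square + np.array([[0.0, 0.0], [0.0, 0.0], [0.3, 0.0], [0.0, 0.0]])

print(procrustes_distance(square, moved))     # near zero
print(procrustes_distance(square, deformed))  # noticeably larger
```

The probabilistic version in the paper places a distribution over such aligned shapes, so that deformation and within-class variability are captured statistically rather than by a single template.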