Automated industrial assembly today requires that the 3D position and orientation (hereafter "pose") of the objects to be assembled be known precisely. This precision is mostly established by a dedicated mechanical object alignment system. However, such systems are typically tailored to a particular object, and the growing demand for flexibility makes it increasingly desirable to avoid them. Rather, it would be preferable to automatically locate and grasp randomly placed objects from tables, conveyor belts, or even bins with an accuracy high enough to enable direct assembly. Conventional vision systems and laser triangulation systems can locate randomly placed known objects (with 3D CAD models available) with some accuracy, but not necessarily an accuracy sufficient for assembly. In this paper, we present a novel method for refining the pose estimate of an object that has been located based on its appearance as detected by a monocular camera. We demonstrate the quality of our refinement method experimentally.
Proceedings of the International Symposium on Robotics, 2010