Affiliations:
1. The Maersk Mc-Kinney Moller Institute, Faculty of Engineering, SDU
2. SDU Robotics, The Maersk Mc-Kinney Moller Institute, Faculty of Engineering, SDU
3. Georg-August University of Göttingen
4. (unknown)
5. The Maersk Mc-Kinney Moller Institute, Faculty of Engineering, SDU
Humans can perform a multitude of different actions with their hands (manipulations). In spite of this, there have so far been only a few attempts to represent manipulation types in a way that reveals their underlying principles. Here we first discuss how manipulation actions are structured in space and time. As temporal anchor points we use those moments where two objects (or hand and object) touch or un-touch each other during a manipulation. We show that in this way one can define a relatively small, tree-like manipulation ontology, comprising fewer than 30 fundamental manipulations. The temporal anchors also indicate when to attend to additional important information, for example trajectory shapes and relative poses between objects. As a consequence, a highly condensed representation emerges by which different manipulations can be recognized and encoded. Examples of manipulation recognition and execution by a robot based on this representation are given at the end of this study.
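The core idea of the anchor points can be sketched as follows: whenever the touching relation between any two entities (hand, object, support surface) changes between consecutive frames, a "touch" or "un-touch" event is emitted, and these events segment the manipulation. This is only an illustrative sketch; the entity names, the toy contact sequence, and the function `anchor_events` are invented here for illustration (the paper obtains contact relations from perception), not taken from the authors' implementation.

```python
# Sketch: segment a manipulation by touch/un-touch anchor events.
# A "frame" is a dict mapping an entity pair to a boolean contact state.

def anchor_events(contact_frames):
    """Return (frame_index, pair, 'touch'|'untouch') for every change
    in a contact relation between consecutive frames."""
    events = []
    for t in range(1, len(contact_frames)):
        prev, curr = contact_frames[t - 1], contact_frames[t]
        for pair in sorted(prev.keys() | curr.keys()):
            before = prev.get(pair, False)
            after = curr.get(pair, False)
            if before != after:
                events.append((t, pair, "touch" if after else "untouch"))
    return events

# Toy sequence: a hand grasps a cup, lifts it off the table,
# places it back, and releases it.
frames = [
    {("hand", "cup"): False, ("cup", "table"): True},   # approach
    {("hand", "cup"): True,  ("cup", "table"): True},   # grasp   -> touch
    {("hand", "cup"): True,  ("cup", "table"): False},  # lift    -> un-touch
    {("hand", "cup"): True,  ("cup", "table"): True},   # place   -> touch
    {("hand", "cup"): False, ("cup", "table"): True},   # release -> un-touch
]

for event in anchor_events(frames):
    print(event)
```

The resulting event sequence (grasp, lift, place, release) is the kind of condensed temporal signature by which, in the paper's framework, different manipulation types can be distinguished.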
IEEE Transactions on Autonomous Mental Development, 2013, Vol. 5, Issue 2, pp. 117-134