Haptic interaction, involved in almost any physical interaction humans perform with their environment, is a highly sophisticated and, to a large extent, computationally unmodelled process. Unlike humans, who seamlessly handle a complex mixture of haptic features and profit from their integration over space and time, even the most advanced robots are strongly constrained in performing contact-rich interaction tasks. In this work, we address this problem by demonstrating the success of our online haptic interaction learning approach on an example task: the haptic identification of four unknown objects. Building upon our previous work with a floating haptic sensor array, here we demonstrate the functionality of our approach within a fully-fledged robot simulation. To this end, we utilize the haptic attention model (HAM), a meta-controller neural network architecture trained with reinforcement learning. HAM learns to optimally parameterize a sequence of so-called haptic glances, primitive haptic control actions derived from elementary human haptic interaction. By coupling a simulated KUKA robot arm with the haptic attention model, we aim to mimic the functionality of a finger.
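To make the architecture concrete, the following is a minimal PyTorch sketch of a HAM-style meta-controller, not the implementation used in this work: the sensor dimensionality, the three-parameter glance (2-D contact position plus orientation), the number of glances, the `simulated_glance` stand-in for the simulator, and the plain REINFORCE update with a mean baseline are all illustrative assumptions.

```python
# Minimal sketch of a HAM-style meta-controller (illustrative, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

PRESSURE_DIM = 16   # flattened taxel readings per glance (assumed sensor size)
GLANCE_DIM = 3      # (x, y, orientation) of the next glance (assumed parameterization)
NUM_CLASSES = 4     # four unknown objects, as in the task above
NUM_GLANCES = 5     # length of the glance sequence (assumed)
HIDDEN = 64


class HapticAttentionModel(nn.Module):
    """LSTM meta-controller: integrates glance outcomes over time and emits
    (a) parameters of the next haptic glance and (b) a class belief."""

    def __init__(self):
        super().__init__()
        self.encode = nn.Linear(PRESSURE_DIM + GLANCE_DIM, HIDDEN)
        self.core = nn.LSTMCell(HIDDEN, HIDDEN)
        self.glance_mu = nn.Linear(HIDDEN, GLANCE_DIM)   # mean of the Gaussian policy
        self.classifier = nn.Linear(HIDDEN, NUM_CLASSES)
        self.log_std = nn.Parameter(torch.zeros(GLANCE_DIM))

    def forward(self, pressure, last_glance, state):
        x = torch.tanh(self.encode(torch.cat([pressure, last_glance], dim=-1)))
        h, c = self.core(x, state)
        dist = torch.distributions.Normal(torch.tanh(self.glance_mu(h)),
                                          self.log_std.exp())
        return dist, self.classifier(h), (h, c)


def simulated_glance(params, obj_id, batch):
    """Hypothetical stand-in for the simulator: returns a pressure profile that
    depends on the (hidden) object identity and the chosen glance parameters."""
    g = torch.Generator().manual_seed(obj_id)            # object-specific response
    w = torch.randn(GLANCE_DIM, PRESSURE_DIM, generator=g)
    return torch.tanh(params @ w) + 0.05 * torch.randn(batch, PRESSURE_DIM)


def train_episode(model, optimizer, batch=8):
    obj_id = int(torch.randint(NUM_CLASSES, (1,)))
    labels = torch.full((batch,), obj_id, dtype=torch.long)
    glance = torch.zeros(batch, GLANCE_DIM)
    pressure = torch.zeros(batch, PRESSURE_DIM)
    state = (torch.zeros(batch, HIDDEN), torch.zeros(batch, HIDDEN))
    log_probs = []
    for _ in range(NUM_GLANCES):
        dist, logits, state = model(pressure, glance, state)
        glance = dist.sample()                           # stochastic glance choice
        log_probs.append(dist.log_prob(glance).sum(-1))
        pressure = simulated_glance(glance, obj_id, batch)
    # REINFORCE: reward 1 for a correct final classification, mean baseline.
    reward = (logits.argmax(-1) == labels).float()
    advantage = reward - reward.mean()
    policy_loss = -(torch.stack(log_probs).sum(0) * advantage).mean()
    class_loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    (policy_loss + class_loss).backward()
    optimizer.step()
    return reward.mean().item()


model = HapticAttentionModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    accuracy = train_episode(model, optimizer)
```

In the full system described above, `simulated_glance` would be replaced by the simulated KUKA arm executing the glance and returning the reading of the haptic sensor array.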
Our modeling strategy allowed us to arrive at a tactile reinforcement learning architecture and to characterize some of its advantages. Owing to the rudimentary experimental setting and the easy acquisition of simulated data, we believe our approach to be particularly useful both for time-efficient robot training and for flexible algorithm prototyping.