We present a novel hierarchical control framework that unifies our previous work on tactile servoing with visual-servoing approaches to allow for robust manipulation and exploration of unknown objects, including, but not limited to, robust grasping, online grasp optimization, in-hand manipulation, and exploration of object surfaces. The control framework is divided into three layers: a joint-level position-control layer, a tactile-servoing control layer, and a high-level visual-servoing control layer. While the middle layer provides “blind” surface exploration skills, maintaining desired contact patterns, the visual layer monitors and controls the actual object pose, providing high-level fingertip motion commands that are merged with the tactile-servoing control commands.
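As a rough sketch of how the layers interact (the notation is ours, chosen for illustration, and is not the framework's actual control law), the commanded fingertip twist can be viewed as the superposition of the two servoing contributions, which the lowest layer then maps to joint motion:
\[
V_{\mathrm{cmd}} = V_{\mathrm{tactile}} + V_{\mathrm{visual}},
\qquad
\dot{q} = J^{+}(q)\, V_{\mathrm{cmd}},
\]
where $V_{\mathrm{tactile}}$ maintains the desired contact pattern, $V_{\mathrm{visual}}$ drives the observed object pose toward its target, and the joint-level layer tracks $\dot{q}$ obtained through the arm Jacobian pseudoinverse $J^{+}$.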
Thanks to the high-spatial-resolution tactile array and the tactile-servoing method, the robot end-effector can actively perform sliding, rolling, and twisting motions to improve the contact quality with the unknown object, relying on tactile feedback alone. Our control method can be considered an alternative to vision-force shared control and vision-force-tactile control methods, which heavily depend on a 3D force/torque sensor to perform fine end-effector manipulation after contact is established.
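To make the claim of purely tactile feedback concrete, a minimal tactile-servoing error (again in our illustrative notation, assuming the contact centroid and normal force are extracted from the tactile image) could take the form
\[
e =
\begin{bmatrix}
x_d - x_c \\ y_d - y_c \\ f_d - f_c
\end{bmatrix},
\qquad
V_{\mathrm{tactile}} = K\, e,
\]
where $(x_c, y_c)$ is the contact centroid on the array, $f_c$ the measured normal force, the subscript $d$ marks desired values, and $K$ is a gain matrix mapping the feature error to sliding and pressing corrections; rolling and twisting corrections can be derived analogously from the orientation of the sensed contact pattern.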
We illustrate the effectiveness of the proposed framework using a series of manipulation actions performed with two KUKA LWR arms equipped with a tactile sensor array as a “sensitive fingertip”. The two considered objects are unknown to the robot, i.e. neither their shape nor their friction properties are available.