TY  - CHAP
AB  - Humanoid robot companions that are intended to engage in natural and fluent human-robot interaction are supposed to combine speech with non-verbal modalities for comprehensible and believable behavior. We present an approach to enable the humanoid robot ASIMO to flexibly produce and synchronize speech and co-verbal gestures at run-time, while not being limited to a predefined repertoire of motor action. Since this research challenge has already been tackled in various ways within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for our virtual human Max. Being one of the most sophisticated multi-modal schedulers, the Articulated Communicator Engine (ACE) has replaced the use of lexicons of canned behaviors with an on-the-spot production of flexibly planned behavior representations. As an underlying action generation architecture, we explain how ACE draws upon a tight, bi-directional coupling of ASIMO’s perceptuo-motor system with multi-modal scheduling via both efferent control signals and afferent feedback.
DO  - 10.1007/978-3-642-10403-9_18
LA  - eng
PY  - 2009
SN  - 978-3-642-10402-2
SP  - 173-182
T3  - Human Centered Robot Systems: Cognition, Interaction, Technology
TI  - Towards meaningful robot gesture
UR  - https://nbn-resolving.org/urn:nbn:de:0070-pub-19931338
Y2  - 2024-11-24T14:58:28
ER  -