TY - EDBOOK
AB - Speakers in dialogue tend to adapt to each other by starting to use similar lexical items, syntactic structures, or gestures. This behaviour, called alignment, may serve important cognitive, communicative and social functions (such as speech facilitation, grounding and rapport). Our aim is to enable and study the effects of these subtle aspects of communication in virtual conversational agents. Building upon a model for autonomous speech and gesture generation, we describe an approach to make the agent's multimodal behaviour adaptive in an interactive manner. This includes (1) an activation-based microplanner that makes linguistic choices based on lexical and syntactic priming, and (2) an empirically grounded gesture generation such that linguistic priming parallels concordant gestural adaptation. First results show that the agent aligns to its interaction partners by picking up their syntactic structures and lexical items in its subsequent utterances. These changes in the agent's verbal behaviour also have a direct influence on gestural expressions.
DA - 2010
KW - multimodal interaction
KW - modelling natural language
KW - verbal and non-verbal expressiveness
KW - interactive alignment
KW - user-adapted interaction
LA - eng
PY - 2010
TI - Adaptive Expressiveness – Virtual Conversational Agents That Can Align to Their Interaction Partner
UR - https://nbn-resolving.org/urn:nbn:de:0070-bipr-46120
Y2 - 2024-12-26T04:55:49
ER -