Speakers in dialogue tend to adapt to each other by starting to use similar lexical items, syntactic structures, or gestures. This behaviour, called alignment, may serve important cognitive, communicative and social functions (such as speech facilitation, grounding and rapport). Our aim is to enable and study the effects of these subtle aspects of communication in virtual conversational agents. Building upon a model for autonomous speech and gesture generation, we describe an approach to make the agent's multimodal behaviour adaptive in an interactive manner. This includes (1) an activation-based microplanner that makes linguistic choices based on lexical and syntactic priming, and (2) an empirically grounded gesture generation model in which linguistic priming is paralleled by concordant gestural adaptation. First results show that the agent aligns to its interaction partners by picking up their syntactic structures and lexical items in its subsequent utterances. These changes in the agent's verbal behaviour also have a direct influence on its gestural expressions.
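The activation-based priming mechanism described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the class, constants, and boost/decay scheme below are illustrative assumptions: hearing the partner use a lexical item raises its activation, activations decay over turns, and the microplanner's lexical choice prefers the most active item in a synonym set.

```python
# Hypothetical sketch of activation-based lexical priming (not the
# paper's actual microplanner). Assumed parameters:
PRIME_BOOST = 1.0   # activation added when the partner uses an item
DECAY = 0.5         # per-turn multiplicative decay of all activations

class PrimingLexicon:
    def __init__(self, synonym_sets):
        # synonym_sets: lists of interchangeable items, e.g. [["sofa", "couch"]]
        self.synonyms = synonym_sets
        self.activation = {w: 0.0 for s in synonym_sets for w in s}

    def observe(self, partner_utterance):
        # Prime every known lexical item the partner just used.
        for word in partner_utterance.lower().split():
            if word in self.activation:
                self.activation[word] += PRIME_BOOST

    def decay(self):
        # Activations fade between turns, so alignment is transient.
        for w in self.activation:
            self.activation[w] *= DECAY

    def choose(self, concept_index):
        # Lexical choice: pick the most highly activated synonym.
        return max(self.synonyms[concept_index],
                   key=lambda w: self.activation[w])

lex = PrimingLexicon([["sofa", "couch"]])
lex.observe("put it on the couch")
print(lex.choose(0))  # the primed item is preferred: couch
```

The same activation idea extends to syntactic rules (boosting recently heard constructions in the grammar), which is what lets lexical and syntactic priming share one mechanism.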
Catalog Record
- Title: Adaptive Expressiveness – Virtual Conversational Agents That Can Align to Their Interaction Partner
- Language: English
- Document type: Conference proceedings
Access restriction
- The document is freely available