TY - EDBOOK
AB - Embodied conversational agents are required to be able to express themselves convincingly and autonomously. Based on an empirical study on spatial descriptions of landmarks in direction-giving, we present a model that allows virtual agents to automatically generate, i.e., select the content and derive the form of, coordinated language and iconic gestures. Our model simulates the interplay between these two modes of expressiveness on two levels. First, two kinds of knowledge representation (propositional and imagistic) are utilized to capture the modality-specific contents and processes of content planning. Second, specific planners are integrated to carry out the formulation of concrete verbal and gestural behavior. A probabilistic approach to gesture formulation is presented that incorporates multiple contextual factors as well as idiosyncratic patterns in the mapping of visuo-spatial referent properties onto gesture morphology. Results from a prototype implementation are described.
DA - 2009
LA - eng
PY - 2009
SN - 978-0-9817381-6-1
TI - Increasing the expressiveness for virtual agents. Autonomous generation of speech and gesture for spatial description tasks
UR - https://nbn-resolving.org/urn:nbn:de:0070-pub-18578353
Y2 - 2024-11-22T04:48:54
ER -