TY  - EDBOOK
AB  - This paper addresses the semantic coordination of speech and gesture, a major prerequisite when endowing virtual agents with convincing multimodal behavior. Previous research has focused on building rule- or data-based models specific to a particular language, culture, or individual speaker, but without considering the underlying cognitive processes. We present a flexible cognitive model in which both linguistic and cognitive constraints are considered in order to simulate natural semantic coordination across speech and gesture. An implementation of this model is presented, and first simulation results, compatible with empirical data from the literature, are reported.
DA  - 2013
LA  - eng
PY  - 2013
SN  - 978-3-642-40414-6
TI  - Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints
UR  - https://nbn-resolving.org/urn:nbn:de:0070-pub-26107199
Y2  - 2024-11-22T05:34:29
ER  - 