When verbs of motion are accompanied by gestures, the two modalities enter into a relatively complex relation.
In this paper, we investigate the semantic coordination of speech and event-related gestures in an interdisciplinary way.
First, we explain from a theoretical viewpoint how to efficiently construct a speech-gesture interface for a gesture that accompanies a verb phrase. Building on this analysis, we then provide a computational simulation model that further explicates the relation between the two modalities, based on activation spreading within dynamically shaped multi-modal memories.