TY - BOOK
AB - For robots to solve complex tasks cooperatively in close interaction with humans, they need to understand natural human communication. To achieve this, robots could benefit from a deeper understanding of the processes that humans use for successful communication. Such skills can be studied by investigating human face-to-face interactions in complex tasks. In our work, the focus lies on shared-space interactions in a path planning task, and thus 3D gaze directions and hand movements are of particular interest. However, the analysis of gaze and gestures is a time-consuming task: usually, manual frame-by-frame annotation of the eye tracker's scene camera video is necessary. To tackle this issue, an automatic approach for annotating interactions, based on the EyeSee3D method, is presented: a combination of geometric modeling and 3D marker tracking serves to align real-world stimuli with virtual proxies. This is done based on the scene camera images of the mobile eye tracker alone. In addition to the EyeSee3D approach, face detection is used to automatically detect fixations on the interlocutor. For the acquisition of the gestures, an optical marker tracking system is integrated and its data fused into the multimodal representation of the communicative situation.
DA - 2014
DO - 10.1145/2578153.2582176
KW - Eyetracking
KW - geometric modelling
KW - motion tracking
KW - Gaze-based Interaction
KW - 3D gaze analysis
KW - Augmented Reality
KW - eye tracking
KW - marker tracking
LA - eng
PY - 2014
TI - Model-based acquisition and analysis of multimodal interactions for improving human-robot interaction
UR - https://nbn-resolving.org/urn:nbn:de:0070-pub-26660493
Y2 - 2024-11-21T21:44:29
ER -