In order to understand and model the non-verbal communicative conduct of humans, it seems fruitful to combine qualitative methods (Conversation Analysis) with quantitative techniques (motion capture). A tool for data visualization and annotation is important, as it constitutes a central interface between the different research approaches and methodologies. We have developed the pre-annotation tool “PAMOCAT”, which detects motion segments of individual joints. A sophisticated user interface enables the annotator to easily find correlations between different joints and to export combined qualitative and quantitative annotations to standard annotation tools. Using this technique, we are able to examine complex setups with three persons in close conversation. A function to search for specific postures of interest and display the matching frames in an overview makes it easy to analyze different phenomena in Conversation Analysis.
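The posture search described above can be illustrated with a minimal sketch. This is not PAMOCAT's actual API; the joint names, angle representation, and tolerance-based matching below are assumptions chosen purely to show the idea of retrieving frames that match a queried posture from motion-capture data.

```python
# Hypothetical sketch (not PAMOCAT's real interface): retrieve all frames
# whose joint angles lie within a tolerance of a query posture.

def matches_posture(frame, query, tolerance_deg=10.0):
    """True if every queried joint angle is within tolerance in this frame."""
    return all(
        abs(frame[joint] - angle) <= tolerance_deg
        for joint, angle in query.items()
    )

def find_posture_frames(frames, query, tolerance_deg=10.0):
    """Return the indices of all frames matching the query posture."""
    return [
        i for i, frame in enumerate(frames)
        if matches_posture(frame, query, tolerance_deg)
    ]

# Toy data: per-frame joint angles in degrees (joint names are made up).
frames = [
    {"elbow_r": 90.0, "shoulder_r": 45.0},
    {"elbow_r": 120.0, "shoulder_r": 50.0},
    {"elbow_r": 88.0, "shoulder_r": 47.0},
]
query = {"elbow_r": 90.0, "shoulder_r": 45.0}
print(find_posture_frames(frames, query))  # frames 0 and 2 match
```

The matching indices could then be displayed as an overview of candidate frames for the annotator to inspect.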
Title record
- Title: PAMOCAT: Automatic retrieval of specified postures
- Language: English
- Document type: Conference proceedings
- ISBN: 978-2-9517408-7-7
Access restriction
- The document is freely available