Richter, Viktor: Addressing in Smart Environments. An Investigation of Human Conversational Behaviours Towards Devices and Autonomous Agents in a Smart Environment. 2020
Contents
Title
Acknowledgments
Abstract
Contents
List of figures
List of tables
Research Topic
1 Introduction
1.1 Interaction in Smart Environments
1.2 Research Questions
1.3 Research Environment
1.4 Document Overview
2 Principles of Human Interaction
2.1 Interaction between Humans
2.1.1 Proxemics
2.1.2 Unfocused Interaction
2.1.2.1 Coordination and Social Communication
2.1.2.2 Civil Inattention
2.1.2.3 Initiation of Focused Interaction
2.1.3 Focused Interaction
2.1.3.1 Face Engagements
2.1.3.2 Conversational Groups
2.1.3.3 Conversational Roles
2.1.3.4 Turn-Taking System
2.1.3.5 Role of Gaze in Conversation
2.2 Interaction with Artificial Agents
2.2.1 Agents in Unfocused Interaction
2.2.2 Agents in Focused Interactions
2.2.2.1 Impact on the Perception of Interaction and Human Behaviour
2.2.2.2 Automated Addressee Recognition
2.2.2.3 Turn-Taking Behaviour Generation
2.2.2.4 Conversational Group Detection
2.2.2.5 Utilizing Conversational Groups
2.2.3 Summary
2.3 Interaction with Devices and Smart Environments
2.3.1 Unfocused Interaction
2.3.2 Focused Interaction
2.3.2.1 Touch & GUI
2.3.2.2 Gestures
2.3.2.3 Speech
2.4 Cross-Cultural Applicability
2.5 Summary
Addressee in Communicative Acts
3 Addressing Behaviour in Smart Environments
3.1 Introduction
3.2 Interaction Corpus
3.2.1 Experimental Set-up
3.2.1.1 Study Procedure
3.2.1.2 Briefing
3.2.1.3 Participant's Tasks
3.2.1.3 Task Solution
3.2.2 Recording & Annotation
3.3 Analysis of Addressing Behaviour
3.3.1 Observations of Addressing Behaviour
3.3.1.1 Content of Observations
3.3.2 Predictability of Addressee
3.3.2.1 Correlations between Variables
3.3.2.2 Addressee and Attention
3.3.2.3 Summary
3.4 Addressee Modelling & Recognition
3.4.1 Modelling Addressing Behaviour
3.4.2 Evaluation Procedure
3.4.3 Results & Discussion
3.5 Summary
4 Addressing in Human-Robot Conversational Groups
4.1 Introduction
4.2 Human-Robot Addressing Corpus
4.2.1 Multi-Party Interaction Scenario
4.2.2 System Set-Up
4.2.3 Addressee Recognition
4.2.4 Study Procedure
4.2.5 Contents of the Corpus
4.3 Visual Speaker Detection
4.3.1 Discussion
4.4 Turn-Release Detection
4.4.1 Mutual Gaze Detection
4.4.2 Addressee Deduction from Mutual Gaze
4.4.3 Discussion
4.5 Bayesian Addressee Recognition
4.5.1 Bayesian Models
4.5.2 ROC Performance
4.5.3 Precision-Recall Performance
4.5.4 Discussion
4.6 Summary
Groups & Roles in Copresence
5 Human-Agent Interaction Corpus
5.1 Introduction
5.2 Scenario
5.3 Recording
5.4 Annotation
5.5 Automatic Data Extraction
5.6 Summary
6 Conversational Group Detection
6.1 F-Formation Detection
6.1.1 Assignment Costs & Detectors
6.1.2 Evaluation
6.1.3 Results
6.1.4 Discussion
6.2 In/Out of Group Distinction
6.2.1 Detectors
6.2.2 Evaluation
6.2.3 Results
6.2.4 Discussion
6.3 Summary
7 Conversational Role Recognition
7.1 High-Level Role Features
7.1.1 Feature Selection
7.1.2 Rule-Based Model
7.1.3 Bayesian Network
7.1.4 Evaluation
7.1.5 Results
7.1.6 Discussion
7.2 Low-Level & Time-Based Features
7.2.1 Feature Selection
7.2.2 Neural Network Models
7.2.3 Evaluation
7.2.4 Results
7.2.5 Discussion
7.3 Summary
Perspectives
8 Recapitulation of Contributions
8.1 Research Topic
8.2 Addressee in Communicative Acts
8.3 Groups & Roles in Copresence
9 Outlook
9.1 Possibilities for Improvement
9.2 Applications & Possibilities for Further Research
Appendix
A Addressing Behaviour in Smart Environments
B Conversational Role Recognition
Acronyms
Glossary
Bibliography
Own Publications
General
Software Packages
Declaration
Colophon