TY  - THES
AB  - In most real-world situations, a robot interacts with multiple people, so understanding their dialogs is essential. However, dialog scene analysis is missing from most existing human-robot interaction systems: either only one speaker can talk with the robot, or each speaker wears an attached microphone or a headset. The goal of Computational AudioVisual Scene Analysis (CAVSA) is therefore to make dialogs between humans and robots more natural and flexible. The CAVSA system is able to learn how many speakers are in the scenario, where the speakers are, and who is currently speaking. CAVSA is a challenging task due to the complexity of dialog scenarios. First, speakers are unknown in advance, so no database is available for training high-level features beforehand to recognize faces or voices. Second, people can dynamically enter and leave the scene, may move all the time, and may even change their locations outside the camera field of view. Third, the robot cannot see all the people at the same time because of its limited camera field of view and head movements. Moreover, a sound can be related to a person who stands outside the camera field of view and has never been seen. I will show that the CAVSA system is able to assign words to the corresponding speakers. A speaker is recognized again when he leaves and re-enters the scene, or changes his position, even in the presence of a newly appearing person.
DA  - 2014
KW  - Multimodal Interface
KW  - Audiovisual Integration
KW  - Scene Analysis
KW  - Human-Robot Interaction
LA  - eng
PY  - 2014
TI  - Computational Audiovisual Scene Analysis
UR  - https://nbn-resolving.org/urn:nbn:de:hbz:361-26953860
Y2  - 2024-11-25T01:13:16
ER  - 