TY  - JOUR
AB  - Auditory data display is an interdisciplinary field linking auditory perception research, sound engineering, data mining, and human-computer interaction in order to make semantic contents of data perceptually accessible in the form of (nonverbal) audible sound. For this goal it is important to understand the different ways in which sound can encode meaning. We discuss this issue from the perspectives of language, music, functionality, listening modes, and physics, and point out some limitations of current techniques for auditory data display, in particular when targeting high-dimensional data sets. As a promising, potentially very widely applicable approach, we discuss the method of model-based sonification (MBS), introduced recently by the authors, and point out how its natural semantic grounding in the physics of a sound generation process supports the design of sonifications that are accessible even to untrained, everyday listening. We then proceed to show that MBS also facilitates the design of an intuitive, active navigation through "acoustic aspects", somewhat analogous to the use of successive two-dimensional views in three-dimensional visualization. Finally, we illustrate the concept with a first prototype of a "tangible" sonification interface which allows us to "perceptually map" sonification responses into active exploratory hand motions of a user, and give an outlook on some planned extensions.
AU  - Hermann, Thomas
AU  - Ritter, Helge
DA  - 2004
DO  - 10.1109/JPROC.2004.825904
EP  - 741
IS  - 4
KW  - thermann
LA  - eng
PY  - 2004
SN  - 0018-9219
SP  - 730
T2  - Proceedings of the IEEE (Special Issue on Engineering and Music - Supervisory Control and Auditory Communication)
TI  - Sound and Meaning in Auditory Data Display
UR  - https://nbn-resolving.org/urn:nbn:de:0070-pub-20174054
VL  - 92
Y2  - 2024-11-22T03:45:07
ER  - 