Auditory data display is an interdisciplinary field linking auditory perception research, sound engineering, data mining, and human-computer interaction in order to make the semantic content of data perceptually accessible in the form of (nonverbal) audible sound. To achieve this goal, it is important to understand the different ways in which sound can encode meaning. We discuss this issue from the perspectives of language, music, functionality, listening modes, and physics, and point out some limitations of current techniques for auditory data display, in particular when targeting high-dimensional data sets. As a promising and potentially widely applicable approach, we discuss the method of model-based sonification (MBS), recently introduced by the authors, and point out how its natural semantic grounding in the physics of a sound-generation process supports the design of sonifications that are accessible even to untrained, everyday listening. We then proceed to show that MBS also facilitates the design of intuitive, active navigation through "acoustic aspects", somewhat analogous to the use of successive two-dimensional views in three-dimensional visualization. Finally, we illustrate the concept with a first prototype of a "tangible" sonification interface that allows us to "perceptually map" sonification responses onto a user's active, exploratory hand motions, and give an outlook on planned extensions.
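As a concrete illustration of the MBS principle, the following minimal Python sketch (our own simplified example, not the authors' implementation; the function name, feature-to-parameter mapping, and all parameter values are assumptions) treats each data point as a damped mass-spring oscillator: one feature sets the oscillator's stiffness and thus its pitch, a second feature sets its excitation amplitude, and "striking" the model returns the data set's acoustic response as a superposition of decaying sinusoids.

```python
import numpy as np

def sonify_dataset(data, duration=1.5, sr=44100, base_freq=200.0, damping=3.0):
    """Render a toy model-based sonification of a 2-D data set.

    Each data point is modeled as an independent damped mass-spring
    oscillator: its first feature determines the spring stiffness (and
    hence the pitch), its second feature the excitation amplitude.
    A single impulse excitation yields the data set's acoustic response
    as a superposition of exponentially decaying sinusoids.
    """
    t = np.linspace(0.0, duration, int(sr * duration), endpoint=False)
    signal = np.zeros_like(t)
    for x, y in data:
        freq = base_freq * (1.0 + x)        # stiffness feature -> frequency
        amp = 0.1 + 0.9 * y                 # second feature -> loudness
        signal += amp * np.exp(-damping * t) * np.sin(2 * np.pi * freq * t)
    return signal / np.max(np.abs(signal))  # normalize to [-1, 1]

# Example: two clusters produce two audibly distinct "chords".
rng = np.random.default_rng(0)
cluster_a = rng.normal([0.3, 0.5], 0.02, size=(20, 2))
cluster_b = rng.normal([1.2, 0.5], 0.02, size=(20, 2))
audio = sonify_dataset(np.vstack([cluster_a, cluster_b]))
```

In such a toy model, clusters of similar data points would sound as near-unison tones while outliers contribute audibly separate partials, hinting at how the physics of the model, rather than an arbitrary parameter mapping, grounds the meaning of the resulting sound.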