This paper focuses on the connection between human listening and data mining. The goal in the research field of data mining is to find patterns, to detect hidden regularities in data. Often, high-dimensional datasets are given which are not easily understood from pure inspection of the table of numbers representing the data. There are two ways to solve the data mining problem: one is to implement perceptual capabilities in artificial systems - this is the approach of machine learning. The other way is to make use of the human brain, which actually is the most brilliant data mining system we know. In connection with our sensory system, we are able to recognize and distinguish patterns, and this capability is usually exploited when data is presented in the form of a visualization. However, we also have highly developed pattern recognition capabilities in the auditory domain, and the field of sonification addresses this modality by rendering auditory representations of data for the joint purposes of deepening insight into given data and facilitating the monitoring of complex processes.
An unanswered question is how high-dimensional data could or should sound. This paper looks at the relation between sound and meaning in our real world and transfers some findings to the sonification domain. The result is the technique of Model-Based Sonification, which allows the development of sonifications that can easily be interpreted by the listener.