Intelligent virtual objects are becoming increasingly significant in the development of virtual worlds. Although this concept has great potential for generating all kinds of multimodal output, it has so far been used mostly to enrich graphical properties. This paper proposes a framework in which objects enriched with information about their sound properties are processed to generate virtual sound sources. To create a convincing surround-sound experience, not only individual sounds but also environmental properties must be considered. We introduce a concept that transfers features of the Phong lighting model to sound rendering.
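For background, the classic Phong reflection model computes the intensity at a surface point as a sum of ambient, diffuse, and specular contributions; how these terms are carried over to sound components is the subject of the proposed concept and is not restated here. The standard graphics formulation, given only as reference, is

\[
I \;=\; k_a\, i_a \;+\; \sum_{m \in \text{lights}} \Big( k_d\, (\hat{L}_m \cdot \hat{N})\, i_{m,d} \;+\; k_s\, (\hat{R}_m \cdot \hat{V})^{\alpha}\, i_{m,s} \Big),
\]

where \(k_a\), \(k_d\), and \(k_s\) are the ambient, diffuse, and specular reflection constants, \(\hat{L}_m\) is the direction to light source \(m\), \(\hat{N}\) the surface normal, \(\hat{R}_m\) the ideal reflection direction, \(\hat{V}\) the direction to the viewer, and \(\alpha\) the shininess exponent.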