We present an initial investigation of a semi-experimental setting in which an Augmented Reality (AR) system based on Head-Mounted Displays (HMDs) was used for real-time collaboration in a task-oriented scenario (the design of a museum exhibition). While the setting allows for a range of technical augmentations, it also restricts the participants’ ‘natural’ communicational resources, because the participants wear HMDs. Our analysis reveals that, under these particular conditions, some everyday strategies for establishing co-orientation with a co-participant turn out not to be functional. At the same time, we find that some participants change their referencing strategies to overcome system-based limitations and develop a method of orienting the co-participant to specific objects, or to the interaction situation itself, that is more efficient under these conditions: participants transform their individual deictic gestures at objects into other forms of gestural activity, such as lifting or tilting an object. These changes in object trajectory are performed to orient the co-participants and establish joint attention. Furthermore, gestural referencing appears to be highly variable and context-dependent when important interactional resources are artificially reduced.