In this paper we present our humanoid robot “Meka”, participating in a multi-party human-robot dialogue scenario. Active arbitration of the robot's attention based on multi-modal stimuli is utilised to attend to persons outside of the robot's field of view. We investigate the impact of this attention management and of addressee recognition on the robot's capability to distinguish utterances directed at it from communication between humans. Based on the results of a user study, we show that mutual gaze at the end of an utterance, as a means of yielding a turn, is a substantial cue for addressee recognition. Verifying the speaker through the detection of lip movements can further increase precision. Furthermore, we show that even a rather simplistic fusion of gaze and lip-movement cues considerably enhances addressee estimation and can be altered to adapt to the requirements of a particular scenario.
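
To make the idea of a simplistic cue fusion concrete, the following is a minimal sketch, not the authors' implementation: it assumes two binary cues per utterance (mutual gaze at the utterance end, and lip movement confirming the speaker) and combines them with illustrative weights and a threshold. All names, weights, and threshold values are assumptions for illustration only.

```python
# A minimal sketch (not the authors' implementation) of a simple
# late fusion of gaze and lip-movement cues for addressee estimation.
# All names, weights, and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class UtteranceCues:
    """Cues observed for one utterance of a tracked person."""
    mutual_gaze_at_end: bool     # person looked at the robot when the utterance ended
    lip_movement_detected: bool  # lip movement confirmed the person as the speaker


def addressee_score(cues: UtteranceCues,
                    w_gaze: float = 0.7,
                    w_lips: float = 0.3) -> float:
    """Weighted combination of the two binary cues, in [0, 1]."""
    return w_gaze * cues.mutual_gaze_at_end + w_lips * cues.lip_movement_detected


def is_robot_addressed(cues: UtteranceCues, threshold: float = 0.6) -> bool:
    """Classify an utterance as robot-directed.

    Raising the threshold (or the lip-movement weight) trades recall
    for precision, which is one way such a fusion can be adapted to
    the requirements of a particular scenario.
    """
    return addressee_score(cues) >= threshold


if __name__ == "__main__":
    # With the default weights, gaze alone passes the threshold,
    # while lip movement alone does not.
    print(is_robot_addressed(UtteranceCues(True, False)))   # True
    print(is_robot_addressed(UtteranceCues(False, True)))   # False
```

Under these assumed weights, mutual gaze dominates the decision while lip-movement verification acts as a precision-increasing secondary cue; adjusting the weights or threshold shifts the precision/recall trade-off per scenario.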