In Virtual Reality environments, real humans can meet
virtual humans to collaborate on tasks. The agent Max is
such a virtual human. In construction tasks, he stands
face-to-face with a human partner as a co-situated companion.
To maintain a social conversation, Max must perceive
the user's engagement, and to solicit that engagement,
he needs to demonstrate engagement himself. This paper
presents ongoing work on the development of a model that
describes interacting levels of engagement. We present how
the agent's perceptive, cognitive, and expressive capabilities interact on these levels, ranging from observing the human partner to taking her goals, mental state, or emotions into account when deciding how to interact and intervene.