It is well known that the effort invested in prosodic expression can be adjusted to the information structure of a message, but also to the characteristics of the transmission channel. To investigate whether visibly accessible cues to information structure or facial prosodic expression have a differentiated impact on acoustic prosody, we modified the visibility conditions in a spontaneous dyadic interaction task, a verbalized version of TicTacToe. The main hypothesis was that visibly accessible cues should lead to a decrease in prosodic effort. While we found that, as expected, information structure is expressed through a number of acoustic-prosodic cues, visible accessibility to context information makes accents shorter, whereas accessibility to an interlocutor's facial expression slightly increases the mean F0 of an accent.