This thesis presents a new paradigm for modeling cooperative human–computer interaction in order to evaluate the antecedents, formation, and regulation of human–computer trust. Human–computer trust is the degree to which human users trust computers to help them achieve their goals; it functions as a powerful psychological variable that governs user behavior. The modeling framework presented in this thesis aims to extend predominant methods for the study of trust and cooperation by building on competent problem-solving and equal goal contributions by users and computers. Specifically, the framework permits users to participate in interactive and interdependent decision-making games with autonomous computer agents. The main task is to solve a two-dimensional puzzle similar to the popular game Tetris. The games derived from this framework incorporate cooperative interaction factors known from interpersonal cooperation: the duality of competence and selfishness, anthropomorphism, task advice, and social blame.
One validation study (68 participants) and four experiments (318 participants) investigate how these cooperative interaction factors influence human–computer trust. In particular, the results show how trust in computers is mediated by warmth as a universal dimension of social cognition, how the anthropomorphism of computers influences trust formation over time, and how expressive anthropomorphic cues can be used to regulate trust. We explain how these findings can be applied to the design of trustworthy computer agents for successful cooperation.