Computational skill acquisition in robots and simulated agents has attracted growing interest in recent years. Despite impressive progress, machines do not yet replicate autonomous behavior at the level of animals and humans; current artificial systems show shortcomings especially when a complex environment demands versatile, goal-oriented behavior. Consider, for instance, modern 3D computer games: despite the key role of artificial characters in creating an immersive game experience, surprisingly little effort has been devoted to new techniques for generating life-like behavior in such characters.
Modern interactive computer games make it possible to objectively record complex human behavior, offering a variety of interesting challenges to the pattern recognition community. Such recordings often represent a multiplexing of long-term strategy, mid-term tactics, and short-term reactions, in addition to the more low-level details of the player's movements. In this work we approach the goal of behavior acquisition in artificial game agents using methods from machine learning and pattern recognition. Recordings of human players implicitly encode the behaviors we are looking for and consequently serve as a training sample base. We assume the behavior of game characters to be a function that maps the current game state onto a reaction. Since a single global approach to imitating human behavior is not feasible, various methods are applied to extract and mimic behaviors operating on different timescales (long-term, mid-term, and short-term). For example, Bayesian approaches map the world state onto a discrete set of movement prototypes to imitate situated action sequences, and a Mixture of Experts architecture is used to learn mid-term behavioral functions. Although the focus is on the acquisition of single behaviors, first approaches to integrating behaviors are elaborated as well. For experimental validation we use the popular computer game Quake II. The experimental results show that human behavior in simulated environments can indeed be learned from scratch using supervised learning. In final experiments we compare learned behaviors to a human player and to the behavior of a conventional artificial game agent. A Turing-test-like survey indicates that the learned behaviors are in most cases perceived as humanlike.
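To make the state-to-reaction formulation concrete, the following minimal Python sketch trains a small Mixture of Experts by supervised learning on recorded (state, reaction) pairs. It is an illustration of the general technique, not the architecture used in this work: all names, dimensions, and the synthetic "recordings" are assumptions. A softmax gate weights linear experts, and gradient descent on the squared error pulls each expert toward the human reaction in proportion to its gating weight.

    # Minimal Mixture of Experts sketch: game state -> reaction.
    # Illustrative only; dimensions and data are assumed, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    STATE_DIM = 8    # assumed size of the game-state feature vector
    ACTION_DIM = 2   # assumed size of the reaction (e.g., turn, move)
    N_EXPERTS = 4    # number of local experts
    LR = 0.05        # learning rate

    # Linear experts and a linear softmax gate, randomly initialized.
    W_experts = rng.normal(0, 0.1, (N_EXPERTS, ACTION_DIM, STATE_DIM))
    W_gate = rng.normal(0, 0.1, (N_EXPERTS, STATE_DIM))

    def forward(x):
        """Gate-weighted combination of expert predictions for state x."""
        logits = W_gate @ x                      # (N_EXPERTS,)
        g = np.exp(logits - logits.max())
        g /= g.sum()                             # softmax gating weights
        preds = W_experts @ x                    # (N_EXPERTS, ACTION_DIM)
        return g, preds, (g[:, None] * preds).sum(axis=0)

    def train_step(x, y):
        """One gradient step on the squared error between the predicted
        and the recorded human reaction y."""
        global W_experts, W_gate
        g, preds, y_hat = forward(x)
        err = y_hat - y                          # (ACTION_DIM,)
        # Each expert is pulled toward y in proportion to its gating weight.
        for k in range(N_EXPERTS):
            W_experts[k] -= LR * g[k] * np.outer(err, x)
        # Gate update: experts whose predictions fit better gain weight.
        per_expert_err = preds @ err             # (N_EXPERTS,)
        grad_logits = g * (per_expert_err - g @ per_expert_err)
        W_gate -= LR * np.outer(grad_logits, x)
        return float(err @ err)

    # Toy training loop on synthetic recordings: the target reaction
    # depends piecewise on the state, which a gated mixture can capture.
    for step in range(2000):
        x = rng.normal(size=STATE_DIM)
        y = np.array([np.tanh(x[0]), 1.0 if x[1] > 0 else -1.0])
        loss = train_step(x, y)

The gating network effectively partitions the state space so that each expert specializes on a region, which matches the idea of decomposing observed behavior into locally valid state-to-reaction mappings rather than fitting one global function.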