Cognition, understood as the ability to plan ahead, complements adaptive behavior. It allows knowledge about performing a specific behavior to be transferred to a novel context while minimizing harm to the behaving system itself, because an internal simulation is used to predict possible outcomes. In this thesis, I propose a minimal cognitive system that integrates these two kinds of processes in a single control system for a six-legged robot.
On the one hand, adaptive behavior emerges from the interaction of simple local control modules, which allows the system to react quickly when facing disturbances. Detailed experimental findings in insects suggest that this evolved flexibility results from a hierarchical and decentralized architecture: while a lower control level coordinates muscle activation patterns and joint movements on a short timescale, a higher level handles action selection on longer timescales.
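To make this division of labor concrete, the following is a minimal sketch of such a hierarchical, decentralized controller. All names and interfaces are hypothetical illustrations and not the actual Hector implementation: each leg runs its own local controller that reacts to local sensory signals on a fast timescale, while a higher level only selects the current behavior on a slower timescale.

```python
from dataclasses import dataclass


@dataclass
class LegState:
    """Simplified local sensory state of a single leg."""
    on_ground: bool
    load: float  # normalized load signal in [0, 1]


class LocalLegController:
    """Lower level: coordinates the movements of one leg on a short timescale."""

    def __init__(self, name: str):
        self.name = name

    def step(self, state: LegState, behavior: str) -> dict:
        # React immediately to local signals, e.g. switch to swing when the
        # leg is unloaded; no global plan or central coordinate frame needed.
        if behavior == "walk":
            phase = "stance" if state.on_ground and state.load > 0.1 else "swing"
        else:  # "stand"
            phase = "stance"
        return {"leg": self.name, "phase": phase}


class HigherLevel:
    """Higher level: selects the current behavior on a longer timescale."""

    def select_behavior(self, goal_reached: bool) -> str:
        return "stand" if goal_reached else "walk"


# Usage: six local controllers, one per leg, driven by a single behavior signal.
legs = [LocalLegController(f"leg_{i}") for i in range(6)]
behavior = HigherLevel().select_behavior(goal_reached=False)
sensed = LegState(on_ground=True, load=0.6)
commands = [leg.step(sensed, behavior) for leg in legs]
print(commands[0])  # {'leg': 'leg_0', 'phase': 'stance'}
```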
On the other hand, following a bottom-up approach, this is extended towards a cognitive system that is able to invent new behaviors and to plan ahead. Using a grounded internal body model, planning is realized as a form of internal simulation of possible actions that are applied outside of their original context. Exploiting the decentralized architecture, this cognitive expansion makes it possible to test and predict properties of newly invented behaviors while the body is decoupled from the control system.
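The following sketch illustrates this plan-then-act loop under simplifying assumptions (a scalar body state and toy functions, all hypothetical and not the thesis implementation): a candidate action taken out of its original context is first run on the internal body model while the real body stays decoupled, and is only executed if the predicted outcome is acceptable.

```python
def internal_body_model(state: float, action: float) -> float:
    """Toy grounded body model: predicts the next (scalar) body state."""
    return state + action  # stand-in for a learned forward model


def is_harmful(predicted_state: float) -> bool:
    """Toy check, e.g. for predicted instability or collision."""
    return abs(predicted_state) > 1.0


def plan_by_internal_simulation(state: float, candidates: list[float]) -> float | None:
    """Test candidate actions in internal simulation; return the first safe one."""
    for action in candidates:
        predicted = internal_body_model(state, action)  # body remains decoupled here
        if not is_harmful(predicted):
            return action  # safe to apply to the real body afterwards
    return None  # no tested behavior passed the internal check


# Usage: only the action whose simulated outcome is predicted as safe is chosen.
chosen = plan_by_internal_simulation(state=0.4, candidates=[0.9, 0.3])
print(chosen)  # 0.3
```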
The thesis introduces the minimal cognitive system as it is applied to the robot Hector in a climbing task. It then introduces the underlying control characteristics step by step and relates them to findings from biology and neuroscience. First, hierarchical organization can be found in many animals, and it structures control into parsimonious modules. Second, this is complemented by research on stick insects in particular, which offers an even more detailed neuronal and behavioral level of analysis. This emphasizes the decentralization of control structures and the importance of an embodied perspective that integrates bodily properties into the ongoing control process, exploiting, for example, the elasticity of muscles to simplify the control problem. Third, internal representations are introduced in a bottom-up manner as grounded internal models, realized as recurrent neural networks, that are at first considered in the context of serving a specific behavior. Fourth, as a consequence, cognitive processing is realized as the recruitment of these already existing, flexible internal models in an internal simulation. The underlying architecture is applied to the hexapod robot Hector and analyzed in detail in simulation. Furthermore, learning is considered for this approach.
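As a final illustration of the third and fourth points, the sketch below shows how a grounded internal model realized as a small recurrent network could be recruited for internal simulation. The network, its untrained random weights, and its dimensions are purely illustrative assumptions, not the models developed in the thesis: the same mapping that serves a behavior by predicting the next body state from the current state and motor command is rolled forward on its own predictions while the body is decoupled.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 state variables, 3 motor commands, 8 hidden units.
n_state, n_motor, n_hidden = 3, 3, 8
W_in = rng.normal(scale=0.3, size=(n_hidden, n_state + n_motor))   # input weights
W_rec = rng.normal(scale=0.3, size=(n_hidden, n_hidden))           # recurrent weights
W_out = rng.normal(scale=0.3, size=(n_state, n_hidden))            # readout weights


def rnn_step(state, motor, hidden):
    """One step of the recurrent body model: predict the next body state."""
    x = np.concatenate([state, motor])
    hidden = np.tanh(W_in @ x + W_rec @ hidden)
    return W_out @ hidden, hidden


def internal_simulation(state, motor_sequence):
    """Recruit the model: roll it forward on its own predictions (body decoupled)."""
    hidden = np.zeros(n_hidden)
    for motor in motor_sequence:
        state, hidden = rnn_step(state, motor, hidden)
    return state  # predicted body state after the simulated behavior


predicted = internal_simulation(np.zeros(n_state), [np.full(n_motor, 0.1)] * 5)
print(predicted.shape)  # (3,)
```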