This thesis addresses the problem of generating goal-directed movements for robots, a
problem that is particularly challenging when the involved state spaces are high-dimensional
and complex constraints, such as obstacles and joint limits, are present. Local methods can
be severely hindered in such situations, and planning on a global level becomes necessary.
Although an exhaustive search of the state space can generally solve these problems, complete
planning methods do not scale well with the number of dimensions. To avoid the inherent
complexity of complete state space planning, this work proposes the use of a condensed task-
centered representation, focusing on the task to be achieved and neglecting all unnecessary
details. The motion planning problem can then be effectively decomposed into two layers:
A global search component that is restricted to the coarse-scale task representation and a
local control method that generates fine-scale motions.
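The two-layer decomposition can be illustrated with a minimal sketch. All names here are hypothetical and the grid-world task representation is a stand-in, not the thesis implementation: a coarse global search (breadth-first search over task cells) produces waypoints, and a simple local controller (linear interpolation) fills in fine-scale motion between them.

```python
from collections import deque

def global_search(start, goal, blocked, size=5):
    """Coarse layer: BFS on a task-level grid; returns waypoints or None."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:        # walk back through parents
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in blocked and nxt not in parent):
                parent[nxt] = cell
                frontier.append(nxt)
    return None

def local_control(waypoints, steps=4):
    """Fine layer: interpolate between waypoints. A real controller would
    instead run e.g. inverse kinematics in the full configuration space."""
    motion = []
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        for i in range(steps):
            t = i / steps
            motion.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    motion.append(waypoints[-1])
    return motion

blocked = {(2, 0), (2, 1), (2, 2)}   # a wall-like obstacle in task space
waypoints = global_search((0, 0), (4, 0), blocked)
trajectory = local_control(waypoints)
```

The key point of the sketch is that the expensive search runs only on the small task grid, while the fine-scale trajectory is produced cheaply and locally afterwards.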
The ability of the approach to solve difficult planning problems is demonstrated with an
application for redundant robotic manipulators. Because global planning is restricted to
the task representation, completeness can no longer be guaranteed, but many relevant
situations can nevertheless be solved. In particular, the ability to solve planning tasks
for a humanoid robot operating in human-centered environments is demonstrated in both
simulation and laboratory experiments, providing evidence of a significant computational
advantage compared to complete state space sampling methods.