Modern robotic applications place high demands on the adaptation of actions to variations within a given task. Reinforcement learning can optimize for these changing conditions, but relearning from scratch is rarely feasible because of the large number of rollouts required. We
propose a parameterized skill that generalizes to new actions as task parameters change, encoded as a meta-learner that supplies the parameters of task-specific dynamic motion primitives. We show that using parameterized skills to initialize the optimization process leads to more effective incremental task learning. In addition, we introduce a hybrid optimization method that combines fast, coarse optimization on a manifold of policy parameters with a fine-grained parameter search in the unrestricted space of actions. The proposed algorithm
reduces the number of rollouts required to adapt to new task conditions. Applications to illustrative toy scenarios, a 10-DOF planar arm, and a humanoid-robot point-reaching task validate the approach.
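To make the warm-start idea concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes a simple ridge regressor as the meta-learner mapping a task parameter to policy (DMP-like) weights, a hypothetical `rollout_cost` as the task objective, and plain random search as a stand-in for the fine-grained policy search; all function names and the toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical task: reach a 2-D goal indexed by task parameter tau.
# The "policy" is a weight vector w that a DMP-like mapping turns into
# an end point; the cost is the distance to the goal (illustrative only).
def rollout_cost(w, tau):
    goal = np.array([np.cos(tau), np.sin(tau)])
    end_point = np.tanh(w[:2]) + 0.1 * w[2:4]
    return np.linalg.norm(end_point - goal)

# Meta-learner (the "parameterized skill"): ridge regression from task
# parameter features to previously optimized policy weights.
def fit_skill(taus, weights, lam=1e-3):
    X = np.stack([np.ones_like(taus), np.cos(taus), np.sin(taus)], axis=1)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ weights)  # (n_features, weight_dim)

def predict_weights(skill, tau):
    x = np.array([1.0, np.cos(tau), np.sin(tau)])
    return x @ skill

# Fine stage: random search around the warm start in the unrestricted
# weight space (a stand-in for the fine-grained policy search).
def local_search(w0, tau, iters=50, sigma=0.1):
    best_w, best_c = w0, rollout_cost(w0, tau)
    for _ in range(iters):
        cand = best_w + sigma * rng.standard_normal(best_w.shape)
        c = rollout_cost(cand, tau)
        if c < best_c:
            best_w, best_c = cand, c
    return best_w, best_c

# Build the skill from a handful of previously solved task instances,
# then warm-start optimization for an unseen task parameter: starting
# from the skill's prediction, far fewer rollouts are needed than when
# searching from scratch.
train_taus = np.linspace(0.0, 1.5, 5)
train_ws = np.stack([local_search(np.zeros(4), t, iters=300, sigma=0.3)[0]
                     for t in train_taus])
skill = fit_skill(train_taus, train_ws)

new_tau = 1.8
w_init = predict_weights(skill, new_tau)      # coarse estimate from the skill
w_star, cost = local_search(w_init, new_tau)  # fine adaptation, few rollouts
```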