In human-machine interaction scenarios, low-latency recognition and reproduction are crucial for successful communication. For the reproduction of general gesture classes, it is important to use a representation that is insensitive to performer-specific variations in speed along gesture trajectories. Here, we present an approach to learning speed-invariant gesture models that provide fast recognition and convenient reproduction of gesture trajectories. We evaluate our gesture model on a data set comprising 520 examples of 48 gesture classes. The results indicate that the model is able to learn gestures with high accuracy from only a few observations.
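The abstract does not state how speed invariance is obtained. One common way to remove a performer-specific speed profile is to reparameterize each trajectory by arc length, keeping the spatial path while discarding timing. The sketch below illustrates this idea only; the function name and parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

def resample_by_arc_length(traj, n_points=32):
    """Resample a gesture trajectory (T x D array) to n_points samples
    spaced uniformly along its arc length.

    Two performances of the same spatial path then map to (approximately)
    the same representation, regardless of execution speed.
    """
    traj = np.asarray(traj, dtype=float)
    # Cumulative arc length along the trajectory, starting at 0.
    seg = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    s = np.concatenate(([0.0], np.cumsum(seg)))
    if s[-1] == 0.0:  # degenerate case: no movement at all
        return np.repeat(traj[:1], n_points, axis=0)
    # Target positions spaced evenly along the total arc length.
    s_new = np.linspace(0.0, s[-1], n_points)
    # Linear interpolation, applied per spatial dimension.
    return np.column_stack([np.interp(s_new, s, traj[:, d])
                            for d in range(traj.shape[1])])

# A slow and a fast performance of the same path yield the same resampled
# trajectory, since only the spatial path enters the representation.
t_slow = np.linspace(0, 1, 200)[:, None] * np.array([[1.0, 2.0]])
t_fast = np.linspace(0, 1, 20)[:, None] * np.array([[1.0, 2.0]])
assert np.allclose(resample_by_arc_length(t_slow),
                   resample_by_arc_length(t_fast))
```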