Neuronal representation and extraction of spatial information are essential for behavioral control. For flying insects, a plausible way to gain spatial information is to exploit the distance-dependent optic flow that is generated during translational self-motion. Optic flow is computed by retinotopically arranged arrays of local motion detectors in the second neuropile layer of the insect visual system.
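As a concrete reference point, below is a minimal sketch of such a local motion detector of the correlation type (a Hassenstein-Reichardt elementary motion detector, the standard model in this literature); the filter time constant, the stimulus, and all function names are illustrative assumptions, not the parameters of the study's model.

```python
import numpy as np

def lowpass(signal, tau, dt):
    """First-order low-pass filter; serves as the delay stage of the
    correlation-type detector."""
    alpha = dt / (tau + dt)
    out = np.zeros_like(signal)
    for i in range(1, len(signal)):
        out[i] = out[i - 1] + alpha * (signal[i] - out[i - 1])
    return out

def emd_response(left, right, tau=0.035, dt=0.001):
    """Hassenstein-Reichardt EMD: correlate each photoreceptor signal
    with the delayed signal of its neighbor and subtract the two
    mirror-symmetric half-detectors."""
    return lowpass(left, tau, dt) * right - left * lowpass(right, tau, dt)

# Drifting sine grating sampled by two neighboring photoreceptors;
# the phase lag of the right input encodes rightward pattern motion.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
left = 1.0 + np.sin(2.0 * np.pi * 4.0 * t)         # 4 Hz temporal frequency
right = 1.0 + np.sin(2.0 * np.pi * 4.0 * t - 0.5)  # 0.5 rad receptor phase lag
print("mean EMD output:", emd_response(left, right).mean())  # > 0: preferred direction
```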
These motion detectors have adaptive response characteristics: their responses to motion at a constant or only slowly changing velocity decrease over time, while their sensitivity to rapid velocity changes is maintained or even increases.
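How might such adaptation be implemented? A minimal sketch, assuming a divisive gain control driven by a slow estimate of the detector's own recent output; this is one of several plausible mechanisms, and the paper's actual adaptive stage may differ.

```python
import numpy as np

def adapt(response, tau_a=2.0, dt=0.001, eps=0.1):
    """Divisive gain control: each output sample is divided by a slowly
    updated estimate of the response's own recent magnitude. Sustained
    stimulation pulls the gain down; abrupt changes pass through before
    the adaptation state can follow. tau_a, dt, and eps are illustrative.
    """
    alpha = dt / (tau_a + dt)   # leaky-integrator update weight
    state = 0.0                 # slow estimate of recent |response|
    out = np.zeros_like(response)
    for i in range(len(response)):
        out[i] = response[i] / (eps + state)
        state += alpha * (abs(response[i]) - state)
    return out
```

Under this scheme, a constant-velocity stimulus drives the gain down and the response decays, whereas a sudden velocity change arrives while the gain is still set by the recent past and is therefore transmitted at full, or transiently even enhanced, amplitude.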
Using a modeling approach, we analyzed how motion adaptation affects signal representation at the output of arrays of motion detectors during simulated flight in artificial and natural 3D environments. We focused on translational flight, because spatial information is contained only in the optic flow induced by translational locomotion: rotational optic flow is independent of object distance.
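The geometric reason can be stated in one line (the planar form of a standard optic-flow relation; the symbols are ours, not the paper's). For an eye translating at speed v and rotating at angular velocity ω_r, a point at distance d and at angle θ from the direction of translation slides across the retina at

```latex
\dot{\theta} \;=\; \underbrace{\frac{v}{d}\,\sin\theta}_{\text{translation}} \;+\; \underbrace{\omega_r}_{\text{rotation}}
```

Only the translational term depends on the distance d; pure rotation shifts the entire image uniformly regardless of depth, which is why intersaccadic translation is the phase of flight from which spatial information can be extracted.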
Indeed, flies, bees, and other insects segregate their flight into relatively long intersaccadic sections of translational flight interspersed with brief and rapid saccadic turns, presumably to maximize the periods of translation, which amount to about 80% of the flight.
With a novel adaptive model of the insect visual motion pathway, we showed that the responses of motion detectors to the background structures of cluttered environments are largely attenuated as a consequence of motion adaptation, while the responses to foreground objects stay constant or even increase. This conclusion holds even under the dynamic flight conditions of insects.
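A toy demonstration of how such an effect can arise (explicitly not the study's model; it reuses the illustrative divisive-adaptation sketch from above with assumed parameters): the distant background produces slow, sustained image motion, a nearby object a brief fast transient, and adaptation suppresses the former while passing the latter at high relative gain.

```python
import numpy as np

def adapt(response, tau_a=2.0, dt=0.001, eps=0.1):
    """Divisive adaptation, as sketched above (illustrative parameters)."""
    alpha = dt / (tau_a + dt)
    state, out = 0.0, np.zeros_like(response)
    for i in range(len(response)):
        out[i] = response[i] / (eps + state)
        state += alpha * (abs(response[i]) - state)
    return out

dt = 0.001
t = np.arange(0.0, 4.0, dt)

# Stand-in for a detector's input during translation: weak sustained
# motion from the distant background, plus a strong brief transient
# when a nearby object crosses the receptive field (near = fast).
background = 0.2 * np.ones_like(t)
object_transient = np.where((t > 2.0) & (t < 2.3), 1.5, 0.0)
raw = background + object_transient

adapted = adapt(raw)
print("adapted background response (t = 1.9 s):", adapted[int(1.9 / dt)])
print("adapted object response    (t = 2.1 s):", adapted[int(2.1 / dt)])
# The sustained background response has decayed, while the object
# transient is transmitted at a much higher relative amplitude.
```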