Flying insects, such as flies or bees, rely on consistent information about the
depth structure of their surroundings when performing flight maneuvers in cluttered natural
environments. These behaviors include avoiding collisions, approaching targets, and navigating
through space. Insects are thought to obtain depth information visually from the retinal image
navigation. Insects are thought to obtain depth information visually from the retinal image
displacements (“optic flow”) during translational ego-motion. Optic flow in the insect visual
system is processed by a mechanism that can be modeled by correlation-type elementary motion
detectors (EMDs). However, it is still an open question how spatial information can be extracted
reliably from the highly contrast- and pattern-dependent responses of EMDs, especially
if the vast range of light intensities encountered in natural environments is taken into account.
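To make the notion of a correlation-type EMD concrete, the following Python sketch implements a basic Hassenstein-Reichardt-type correlator: each input line is delayed by a first-order low-pass filter and multiplied with the undelayed signal of the neighboring line, and the two mirror-symmetric subunits are subtracted. The filter order, the time constant `tau`, and the function names are illustrative assumptions rather than the exact model used in this study; because the output is a product of luminance-derived signals, it depends roughly quadratically on pattern contrast, which is the contrast dependence referred to above.

```python
import numpy as np

def first_order_lowpass(signal, tau=0.035, dt=0.001):
    """Discrete first-order low-pass filter acting as the delay stage (illustrative)."""
    out = np.zeros_like(signal, dtype=float)
    alpha = dt / (tau + dt)
    out[0] = signal[0]
    for t in range(1, len(signal)):
        out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
    return out

def emd_response(left, right, tau=0.035, dt=0.001):
    """Correlation-type EMD sketch: the delayed signal of each input is
    multiplied with the undelayed signal of its neighbor; subtracting the
    two mirror-symmetric subunits yields a direction-selective output."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    return (first_order_lowpass(left, tau, dt) * right
            - first_order_lowpass(right, tau, dt) * left)
```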
This question will be addressed here by systematically modeling the peripheral visual system of
flies, including various adaptive mechanisms. Different model variants of the peripheral visual
system were stimulated with image sequences that mimic the panoramic visual input during
translational ego-motion in various natural environments, and the resulting peripheral signals were
fed into an array of EMDs. We characterized the influence of each peripheral computational unit on
the representation of spatial information in the EMD responses. Our model simulations reveal that
information about the overall light level needs to be eliminated from the EMD input, as is
accomplished under light-adapted conditions in the insect peripheral visual system. The response
characteristics of large monopolar cells (LMCs) resemble those of a band-pass filter, which
strongly reduces the contrast dependency of EMDs and thereby enhances the representation of the nearness
of objects and, especially, of their contours. We furthermore show that local brightness adaptation
of photoreceptors allows for spatial vision under a wide range of dynamic light
conditions.
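As a rough sketch of how the two peripheral mechanisms mentioned above could act before the EMD stage, the Python snippet below combines a divisive local brightness adaptation (photoreceptor stage) with a first-order temporal band-pass (LMC stage). The filter structure, time constants, and function names are illustrative assumptions and are not taken from the study's peripheral model; the output of `lmc_bandpass` would then feed an EMD array such as the one sketched above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adapt_photoreceptor(image, sigma=5.0, eps=1e-6):
    """Illustrative local brightness adaptation: divide each pixel by a
    smoothed estimate of its local mean luminance, discarding the overall
    light level while preserving local contrast."""
    local_mean = gaussian_filter(image.astype(float), sigma)
    return image / (local_mean + eps)

def lmc_bandpass(frames, tau_hp=0.1, tau_lp=0.01, dt=0.001):
    """Illustrative LMC-like temporal band-pass: a slow running average is
    subtracted (high-pass, removes the sustained brightness component) and
    the result is smoothed (low-pass). `frames` has shape (time, ...)."""
    out = np.zeros_like(frames, dtype=float)
    mean_est = frames[0].astype(float)   # slow running estimate of the mean
    smooth = np.zeros(frames.shape[1:])  # state of the smoothing low-pass
    a_hp = dt / (tau_hp + dt)
    a_lp = dt / (tau_lp + dt)
    for t in range(frames.shape[0]):
        mean_est += a_hp * (frames[t] - mean_est)
        smooth += a_lp * ((frames[t] - mean_est) - smooth)
        out[t] = smooth
    return out
```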