We investigate the suitability of unsupervised dimensionality reduction (DR) for transfer learning in settings where the source and target domains use different representations. Essentially, unsupervised DR links the source and target domains by representing their data in a common latent space. We consider two settings: a linear DR of source and target data, which establishes correspondences between the data and enables the corresponding transfer, and its combination with a non-linear DR, which allows adaptation to more complex data characterised by a global non-linear structure.
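To illustrate the first setting, the following minimal sketch shows the general idea of transfer through a common latent space obtained by unsupervised linear DR. It is not the method of the paper: it uses PCA as the linear DR, synthetic toy data, and the simplifying assumption that source and target share the same feature space, whereas the paper addresses differing representations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Toy source domain: labelled two-class data in a 20-dimensional space.
X_src = rng.normal(size=(200, 20))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)

# Toy target domain: same feature space, but noisy and shifted (a simple domain gap).
X_tgt = X_src + rng.normal(scale=0.3, size=X_src.shape) + 0.5
y_tgt = y_src  # held out, used only to evaluate the transfer

# Unsupervised linear DR: fit a single PCA on the union of both domains,
# so that source and target are represented in one common latent space.
pca = PCA(n_components=2)
pca.fit(np.vstack([X_src, X_tgt]))
Z_src = pca.transform(X_src)
Z_tgt = pca.transform(X_tgt)

# Transfer: train a classifier on the projected source data and
# apply it directly to the projected target data.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(Z_src, y_src)
print("target accuracy:", clf.score(Z_tgt, y_tgt))
```

In the second setting described above, the linear projection would be replaced or complemented by a non-linear DR so that a globally non-linear structure of the data can be captured in the shared latent space.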