In this paper we prove a deterministic approximation theorem for a sequence of Markov decision processes with finitely many actions and general state spaces, as they frequently appear in economics, game theory, and operations research. Using viscosity solution methods, no a priori differentiability assumptions are imposed on the value function. Applications of this result can be found in large deviation theory and in some simple economic problems.