We develop a theory of optimal stopping problems under ambiguity in continuous time. Using results from (backward) stochastic calculus, we characterize the value function as the smallest (nonlinear) supermartingale dominating the payoff process. For Markovian models, we derive an adjusted Hamilton-Jacobi-Bellman equation that involves a nonlinear drift term stemming from the agent's ambiguity aversion. We show how to apply these general results to search problems and American options.
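As an illustration of the kind of adjustment the abstract refers to, the following is a hedged sketch of an adjusted Hamilton-Jacobi-Bellman variational inequality under drift ambiguity of κ-ignorance type (the specific payoff $g$, drift $\mu$, volatility $\sigma$, and ambiguity bound $\kappa$ are illustrative assumptions, not taken from the text):

```latex
% Sketch: optimal stopping of a one-dimensional diffusion
%   dX_t = \mu(X_t)\,dt + \sigma(X_t)\,dW_t
% with payoff g and drift ambiguity |\theta| \le \kappa (\kappa-ignorance).
% The value function v(t,x) would then solve a variational inequality
% of the form
\max\Big\{\, g(x) - v(t,x),\;
    \partial_t v + \mu(x)\,\partial_x v
    + \tfrac{1}{2}\sigma^2(x)\,\partial_{xx} v
    - \kappa\,\big|\sigma(x)\,\partial_x v\big| \,\Big\} = 0,
% where the nonlinear term -\kappa |\sigma \partial_x v| reflects the
% worst-case drift chosen by the ambiguity-averse agent: the adversarial
% drift pushes against the direction in which v increases.
```

In the continuation region the second term vanishes, so the equation reduces to a linear PDE with the drift tilted by $-\kappa\,\mathrm{sign}(\partial_x v)\,\sigma(x)$; at stopping, $v = g$.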