We solve an infinite time-horizon bounded-variation stochastic control problem with regime
switching between *N* states. This is motivated by the problem of a government that wants to control
the country's debt-to-GDP (gross domestic product) ratio. In our formulation, the debt-to-GDP ratio
evolves stochastically in continuous time, and its drift, given by the interest rate on government debt net of the growth rate of GDP, is affected by an exogenous macroeconomic risk process modelled by a
continuous-time Markov chain with *N* states. The government can act on the public debt by increasing
or decreasing its level, and it aims at minimising a net expected cost functional. Without relying on a
guess-and-verify approach, but rather through a direct probabilistic study, we show that it is optimal to keep the debt-to-GDP ratio within an interval whose boundaries depend on the current state of the risk process. These boundaries are determined through a zero-sum optimal stopping game with regime switching between *N* states, and
we completely characterise them as solutions to a system of nonlinear algebraic equations with constraints.
To the best of our knowledge, such a result appears here for the first time. Finally, we put our methodology into practice in a case study of a Markov chain with *N* = 2 states; we provide a thorough analysis and complement our theoretical results with a detailed numerical study of the sensitivity of the optimal debt
ratio management policy with respect to the problem's parameters.
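
As a purely illustrative sketch, and not in the paper's own notation, the controlled dynamics described above can be written in a form such as
\[
dX_t = \big(r(Z_t) - g(Z_t)\big)\,X_t\,dt + \sigma X_t\,dW_t + d\xi_t - d\eta_t, \qquad X_0 = x,
\]
where $X$ is the debt-to-GDP ratio, $Z$ is the *N*-state continuous-time Markov chain modelling macroeconomic risk, $r(i)$ and $g(i)$ are the interest rate on government debt and the growth rate of GDP in regime $i$, $W$ is a Brownian motion, and $\xi$, $\eta$ are the nondecreasing components of the government's bounded-variation control (cumulative increases and reductions of the debt level). The symbols $\sigma$, $W$, $\xi$ and $\eta$ are placeholders rather than the paper's notation; in this sketch, the result described above corresponds to keeping $X_t$ within a band whose lower and upper boundaries depend on the current state of $Z_t$.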