paper
arXiv stat.ML
November 18th, 2025 at 5:00 AM

Learning Optimal Distributionally Robust Stochastic Control in Continuous State Spaces

Abstract: We study data-driven learning of robust stochastic control for infinite-horizon systems with potentially continuous state and action spaces. In many managerial settings (supply chains, finance, manufacturing, services, and dynamic games), the state-transition mechanism is determined by system design, while available data capture the distributional properties of the stochastic inputs from the environment. For modeling and computational tractability, a decision maker often adopts a Markov control model with i.i.d. environment inputs, which can render learned policies fragile to internal dependence or external perturbations. We introduce a distributionally robust stochastic control paradigm that promotes policy reliability through adaptive adversarial perturbations of the environment input, while preserving the modeling, statistical, and computational tractability of the Markovian formulation. From a modeling perspective, we examine two adversary models, current-action-aware and current-action-unaware, which lead to distinct dynamic behaviors and robust optimal policies. From a statistical learning perspective, we characterize optimal finite-sample minimax rates for uniform learning of the robust value function across a continuum of states under ambiguity sets defined by the $f_k$-divergence and the Wasserstein distance. To compute the optimal robust policies efficiently, we further propose algorithms inspired by deep reinforcement learning methodologies. Finally, we demonstrate the applicability of the framework to real managerial problems.
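To make the inner worst-case step concrete, below is a minimal sketch of a sample-based robust Bellman evaluation under a KL-divergence ball (one member of the f-divergence family the abstract mentions), using the standard dual reformulation of the worst-case expectation. This is an illustration, not the paper's algorithm: the inventory-control instance, the function names (`f`, `reward`, `robust_q_value`), and the constants `GAMMA` and `DELTA` are assumptions made for this example.

```python
import numpy as np
from scipy.optimize import minimize_scalar

GAMMA = 0.95   # discount factor (illustrative)
DELTA = 0.1    # radius of the KL ball around the empirical input distribution

def f(s, a, xi):
    """Known transition mechanism: next inventory after ordering a and seeing demand xi."""
    return np.maximum(s + a - xi, 0.0)

def reward(s, a, xi):
    """Revenue minus ordering and holding costs (illustrative)."""
    return 4.0 * np.minimum(s + a, xi) - 1.0 * a - 0.5 * np.maximum(s + a - xi, 0.0)

def worst_case_expectation(values, delta):
    """KL dual:  inf_{KL(Q||P) <= delta} E_Q[V]
              =  sup_{lam > 0} { -lam * log E_P[exp(-V/lam)] - lam * delta },
    evaluated on the empirical sample `values`."""
    def neg_dual(lam):
        # log-sum-exp for numerical stability
        z = -values / lam
        m = z.max()
        log_mean_exp = m + np.log(np.mean(np.exp(z - m)))
        return lam * log_mean_exp + lam * delta  # negative of the dual objective
    res = minimize_scalar(neg_dual, bounds=(1e-6, 1e3), method="bounded")
    return -res.fun

def robust_q_value(s, a, xi_samples, V):
    """Current-action-aware adversary: the worst-case input distribution may
    depend on (s, a), so the inner infimum is taken per state-action pair."""
    outcomes = reward(s, a, xi_samples) + GAMMA * V(f(s, a, xi_samples))
    return worst_case_expectation(outcomes, DELTA)
```

A usage sketch, with i.i.d. demand data standing in for the environment input:

```python
xi_samples = np.random.poisson(5.0, size=1000)   # empirical input samples
V = lambda s: 10.0 - 0.2 * s                     # placeholder value estimate
best_a = max(range(11), key=lambda a: robust_q_value(3.0, a, xi_samples, V))
```

Under the current-action-unaware model the adversary commits to one perturbed distribution per state before observing the action, so the inner infimum could not be taken separately for each (s, a) pair as done here; this is the source of the distinct dynamic behaviors the abstract describes.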

#ai
#research

Score: 2.80

Engagement proxy: 0

Canonical link: https://arxiv.org/abs/2406.11281