Machine learning and stochastic control

H. Pham

This course will present some recent developments on the interplay between control and machine learning. More precisely, we shall address the following topics:

Part I: Neural networks-based algorithms for PDEs and stochastic control. Deep learning, building on the approximation capability of neural networks and the efficiency of gradient-descent optimizers, has shown remarkable success in recent years for solving high-dimensional partial differential equations (PDEs) arising notably in stochastic optimal control and finance. We present the different methods that have been developed in the literature, relying on either deterministic or probabilistic approaches:

  • Deep Galerkin and physics-informed neural networks (PINNs),

  • Deep BSDEs and deep backward dynamic programming,

  • Control learning and value function iteration.

This will be illustrated with some numerical tests from financial applications.
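As a toy illustration of the collocation idea behind the Deep Galerkin and PINN approaches, the sketch below minimises a PDE residual at randomly sampled points by gradient descent. The equation, the small sine ansatz (standing in for a neural network), and the step size are illustrative choices, not part of the course material.

```python
import numpy as np

# Toy Deep Galerkin-style solver for u''(x) = -pi^2 sin(pi x) on (0,1)
# with u(0) = u(1) = 0; the exact solution is u(x) = sin(pi x).
# A sine ansatz u(x) = sum_k a_k sin(k pi x) plays the role of the network:
# we minimise the mean squared PDE residual at random collocation points.
rng = np.random.default_rng(0)
K = 4                                # number of basis functions
a = np.zeros(K)                      # trainable coefficients
lr = 5e-5                            # gradient-descent step size

k = np.arange(1, K + 1)
for step in range(4000):
    x = rng.uniform(0.0, 1.0, size=64)       # random collocation points
    phi = np.sin(np.pi * np.outer(x, k))     # basis values, shape (64, K)
    d2phi = -(np.pi * k) ** 2 * phi          # second derivatives of the basis
    residual = d2phi @ a + np.pi**2 * np.sin(np.pi * x)
    grad = 2.0 * d2phi.T @ residual / x.size # gradient of mean squared residual
    a -= lr * grad

# the learnt coefficients should approach (1, 0, 0, 0)
```

In the methods covered in the course, the linear ansatz is replaced by a deep network and the derivatives are obtained by automatic differentiation, but the loss has the same collocation structure.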

Part II: Deep reinforcement learning. The second part of the lecture is concerned with the resolution of stochastic control problems in a model-free setting, i.e. when the environment and model coefficients are unknown, and optimal strategies are learnt by trial and error from sample observations of states and rewards. This is the principle of reinforcement learning (RL), a classical topic in machine learning that has attracted increasing interest in the stochastic analysis/control community. We shall review the basics of RL theory and present the latest developments on policy gradient, actor/critic and q-learning methods in continuous time.
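As a hypothetical discrete-time illustration of this trial-and-error principle, the sketch below runs tabular Q-learning on a 5-state chain: the learner never uses the transition or reward model, only sampled transitions. The chain, rewards and hyperparameters are toy choices for illustration.

```python
import random

# Tabular Q-learning on a 5-state chain: the agent starts in state 0,
# moves left or right, and receives reward 1 on reaching terminal state 4.
random.seed(0)
N, ACTIONS = 5, (-1, +1)
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def greedy(s):
    """Greedy action with random tie-breaking."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0
    while s != 4:
        # epsilon-greedy behaviour: explore with probability eps
        a = random.choice(ACTIONS) if random.random() < eps else greedy(s)
        s2 = min(max(s + a, 0), N - 1)       # walls at both ends
        r = 1.0 if s2 == 4 else 0.0
        v2 = 0.0 if s2 == 4 else max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * v2 - Q[(s, a)])  # TD update
        s = s2

# the greedy policy should now move right in every non-terminal state
```

The continuous-time q-learning theory of [4] can be seen as the diffusion-setting counterpart of this tabular update.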

Part III: Generative modeling for time series via an optimal transport approach. Simulation of time series is useful in finance for backtesting the robustness of systematic strategies, for generating stress-test scenarios in market risk measurement, for prediction, and for learning optimal strategies. We present generative models based on diffusion processes and an optimal transport approach for synthesizing new samples from a time series data distribution.
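As a minimal, hypothetical illustration of the optimal transport viewpoint, the snippet below compares a "generated" sample of an SDE marginal with a reference sample through the empirical squared Wasserstein-2 distance, which in one dimension reduces to a mean of squared differences of sorted values (the distance used for SDE reconstruction in [7]). The Ornstein-Uhlenbeck dynamics and the sample sizes are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def w2_squared(x, y):
    """Empirical squared Wasserstein-2 distance between two equal-size
    one-dimensional samples: in 1D the optimal coupling sorts both samples."""
    return float(np.mean((np.sort(x) - np.sort(y)) ** 2))

def euler_marginal(n_paths, n_steps=100, T=1.0):
    """Terminal values X_T of an Euler scheme for the Ornstein-Uhlenbeck
    SDE dX_t = -X_t dt + dW_t, started at X_0 = 0 (toy time-series model)."""
    dt = T / n_steps
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(dt) * rng.normal(size=n_paths)
    return x

ref = euler_marginal(20000)      # reference ("data") marginal
gen = euler_marginal(20000)      # independently "generated" marginal

# samples from the same law are close in W2; a shifted law is not
d_same = w2_squared(ref, gen)
d_shifted = w2_squared(ref, gen + 1.0)
```

In the generative models presented in the course, such transport distances enter as training or evaluation criteria for the synthesized paths rather than as a post-hoc check.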

Bibliography

  • [1] M. Germain, H. Pham, X. Warin: Neural networks-based algorithms for stochastic control and PDEs in finance, in Machine Learning and Data Sciences for Financial Markets: A Guide to Contemporary Practices (A. Capponi and C.-A. Lehalle, eds.), Cambridge University Press, 2023.

  • [2] M. Hamdouche, P. Henry-Labordère, H. Pham: Generative modeling for time series via Schrödinger bridge, 2023.

  • [3] Y. Jia and X.Y. Zhou: Policy gradient and actor-critic learning in continuous time and space: theory and algorithms, Journal of Machine Learning Research, 2022.

  • [4] Y. Jia and X.Y. Zhou: q-Learning in continuous time, Journal of Machine Learning Research, 2023.

  • [5] C. Remlinger, J. Mikael, R. Elie: Conditional loss and deep Euler scheme for time series generation, AAAI Conference on Artificial Intelligence, 2021.

  • [6] R. Sutton and A. Barto: Reinforcement Learning: An Introduction, second edition, MIT Press, 2018.

  • [7] M. Xia, X. Li, Q. Shen, T. Chou: Squared Wasserstein-2 distance for efficient reconstruction of stochastic differential equations, arXiv:2401.11354, 2024.