Neural Closure Models for Chaotic Dynamical Systems

Jalan, A., 2023. Neural Closure Models for Chaotic Dynamical Systems. SM Thesis, Massachusetts Institute of Technology, Mechanical Engineering, February 2023.

An important challenge in producing accurate forecasts of multiscale dynamics, including but not limited to weather prediction and ocean modeling, is that these dynamical systems are chaotic in nature. A hallmark of chaotic dynamical systems is that they are highly sensitive to small perturbations in the initial conditions and parameter values. As a result, even the best physics-based computational models, often derived from first principles but limited by varied sources of errors, have limited predictive capabilities, both for shorter-term state forecasts and for important longer-term global characteristics of the true system. Observational data, however, provide an avenue to increase predictive capabilities by learning the physics missing from lower-fidelity computational models and reducing their various errors. Recent advances in machine learning, and specifically in data-driven, knowledge-based prediction, have made this a possibility, but even state-of-the-art techniques in this area have not been able to produce short-term forecasts beyond a small multiple of the Lyapunov time of the system, even for simple chaotic systems such as the Lorenz 63 model. In this work, we develop a training framework that applies neural ordinary differential equation (nODE) closure models to correct errors in the equations of such dynamical systems. We first identify the key training parameters that have an outsize effect on the learning ability of the neural closure models. We then develop a novel learning algorithm, broadly consisting of adaptive tuning of these parameters, dynamic multi-loss objective functions, and an error-targeting batching process. We evaluate and showcase our methodology on the chaotic Balance Equations (BE) in an array of increasingly difficult learning settings: first, only the coefficient of one missing term in one perturbed equation; second, one entire missing term in one perturbed equation; third, two missing terms in two perturbed equations; and finally, the previous case but with a perturbation two orders of magnitude larger than the state, thereby resulting in a completely different attractor. In each of these cases, our new multi-faceted training approach drastically increases both state predictability beyond the prior state of the art (up to 15 Lyapunov times) and attractor reproducibility. Finally, we validate our results by comparing them with the predictability limit of the chaotic BE system under different magnitudes of perturbations.
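To illustrate the general nODE-closure idea described in the abstract, the sketch below augments a deliberately perturbed Lorenz 63 model with a small neural network correction term and trains it against trajectory snapshots from the true system. This is a minimal, assumption-laden example, not the thesis code: the use of torchdiffeq, the perturbed sigma coefficient, the network size, and all training settings are illustrative choices, and the thesis's adaptive parameter tuning, multi-loss objectives, and error-targeting batching are not shown.

```python
# Minimal sketch of a neural ODE closure model (illustrative only).
# Assumes PyTorch and torchdiffeq are installed; torchdiffeq also
# provides odeint_adjoint for memory-efficient adjoint backprop.
import torch
import torch.nn as nn
from torchdiffeq import odeint  # differentiable ODE solver

SIGMA_TRUE, SIGMA_PERT = 10.0, 8.0   # hypothetical model-error: perturbed sigma
RHO, BETA = 28.0, 8.0 / 3.0

def lorenz(x, sigma):
    """Lorenz 63 right-hand side."""
    dx = sigma * (x[..., 1] - x[..., 0])
    dy = x[..., 0] * (RHO - x[..., 2]) - x[..., 1]
    dz = x[..., 0] * x[..., 1] - BETA * x[..., 2]
    return torch.stack([dx, dy, dz], dim=-1)

class ClosedModel(nn.Module):
    """Perturbed physics model plus a learned neural closure term."""
    def __init__(self):
        super().__init__()
        self.closure = nn.Sequential(
            nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 3))

    def forward(self, t, x):
        # Low-fidelity dynamics corrected by the closure network.
        return lorenz(x, SIGMA_PERT) + self.closure(x)

# "Observations": a short trajectory of the true system.
t = torch.linspace(0.0, 1.0, 51)
x0 = torch.tensor([1.0, 1.0, 1.0])
with torch.no_grad():
    x_true = odeint(lambda t, x: lorenz(x, SIGMA_TRUE), x0, t)

model = ClosedModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):
    opt.zero_grad()
    x_pred = odeint(model, x0, t)        # forward solve through the solver
    loss = ((x_pred - x_true) ** 2).mean()
    loss.backward()                      # backprop through the ODE solve
    opt.step()
```

In this toy setting the closure network only has to learn the discrepancy sigma_true - sigma_pert times (y - x) in the first equation; the harder BE cases studied in the thesis involve entire missing terms and much larger perturbations, which is what motivates the adaptive training strategy summarized above.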