A Framework for Machine Learning of Model Error in Dynamical Systems
Speaker: Matthew Levine
Speaker Affiliation: Ph.D. Candidate
Computing and Mathematical Sciences
California Institute of Technology
Date: Friday, October 28, 2022 at 3 p.m. in 5-314 and Zoom
Abstract: The development of data-informed predictive models for dynamical systems is of widespread interest in many disciplines. Here, we present a unifying framework for blending mechanistic and machine-learning approaches to identifying dynamical systems from data. This framework is agnostic to the chosen machine-learning model parameterization, and casts the problem in both continuous and discrete time. We will also show recent developments that allow these methods to learn from noisy, partial observations. We first study model error from the learning-theory perspective, defining the excess risk and generalization error. For a linear model of the error used to learn about ergodic dynamical systems, both excess risk and generalization error are bounded by terms that diminish with the square root of T (the length of the training trajectory data). In our numerical examples, we first study an idealized, fully observed Lorenz system with model error, and demonstrate that hybrid methods substantially outperform solely data-driven and solely mechanistic approaches. Then, we present recent results for modeling partially observed Lorenz dynamics that leverage both data assimilation and neural differential equations.
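As a minimal, self-contained sketch of the hybrid idea described in the abstract (a mechanistic core plus a learned model of its error), the toy below simulates a fully observed Lorenz-63 system, pairs it with an imperfect mechanistic model, fits a linear-in-state error model by least squares, and compares vector-field errors. The specific imperfection (a wrong rho parameter) and the least-squares fit are illustrative assumptions, not the speaker's actual method, which the abstract notes also covers general ML parameterizations, partial observations, and neural differential equations.

```python
import numpy as np

# True Lorenz-63 vector field (standard parameters).
def f_true(u, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = u
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Imperfect mechanistic model: same form but a wrong rho.
# (This particular perturbation is an illustrative assumption.)
def f_model(u):
    return f_true(u, rho=24.0)

# Classic RK4 integrator for generating trajectory data.
def rk4_step(f, u, dt):
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Simulate the "true" system and record the vector-field residual
# f_true(u) - f_model(u) along the trajectory (fully observed, noise-free).
dt, n_steps = 0.01, 2000
u = np.array([1.0, 1.0, 1.0])
states, residuals = [], []
for _ in range(n_steps):
    states.append(u)
    residuals.append(f_true(u) - f_model(u))
    u = rk4_step(f_true, u, dt)
X = np.array(states)
R = np.array(residuals)

# Linear-in-state model of the error, m(u) = W^T [u; 1], fit by least squares
# (echoing the linear error model analyzed in the abstract).
A = np.hstack([X, np.ones((len(X), 1))])
W, *_ = np.linalg.lstsq(A, R, rcond=None)

# Hybrid vector field: mechanistic core plus learned correction.
def f_hybrid(u):
    return f_model(u) + np.append(u, 1.0) @ W

# Compare vector-field errors on fresh states from a continued trajectory.
test_states = []
for _ in range(500):
    u = rk4_step(f_true, u, dt)
    test_states.append(u)
err_model = np.mean([np.linalg.norm(f_true(s) - f_model(s)) for s in test_states])
err_hybrid = np.mean([np.linalg.norm(f_true(s) - f_hybrid(s)) for s in test_states])
print(f"mean error, mechanistic only: {err_model:.3f}")
print(f"mean error, hybrid:           {err_hybrid:.3e}")
```

In this toy the residual happens to be exactly linear in the state, so the learned correction nearly eliminates the model error; the interesting regimes in the talk are those where the error model is only approximate and the data are noisy and partially observed.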
Biography: Matthew Levine is a graduate student in computing and mathematical sciences at Caltech. His work focuses on improving the prediction and inference of physical systems by blending machine learning, mechanistic modeling, and data assimilation techniques. He aims to build robust, unifying theory for these approaches, as well as to develop concrete applications. He has worked substantially in the biomedical sciences, and enjoys collaborating on impactful applied projects.