
Sparse and Deep Gaussian Process Closure Modelling for 2-D Fluid and Ocean Flows

Truncated fluid and ocean models omit subgrid physics and introduce numerical biases that degrade forecasts. We present a Bayesian, data-driven closure for 2-D finite-volume solvers that learns the dynamical discrepancy between low-resolution (LR) and high-resolution (HR) simulations. Using sparse variational Gaussian processes (GPs) and deep GPs, we map resolved features (local velocities and gradients) to a closure source term that corrects LR tendencies toward HR dynamics while quantifying predictive uncertainty. GPs are well suited to closure modeling in fluids because they encode smoothness and invariances via kernels, learn nonparametric mappings from data, and return uncertainty estimates alongside the mean correction. The trained GP is embedded intrusively into a numerical finite-volume framework and evaluated online at each coarse time step, keeping the closure consistent with the numerics.
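As a minimal illustration of the mapping such a closure learns, the sketch below fits an exact GP regressor (not the sparse variational or deep GPs of the work) from toy resolved features to a scalar discrepancy, returning both a mean correction and a predictive variance. All data and parameter values here are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the feature rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_closure_predict(X_train, y_train, X_test, noise=1e-3):
    """Exact GP regression: posterior mean and variance of the closure term."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train)
    Kss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                      # mean closure correction
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v**2).sum(0)     # predictive uncertainty
    return mean, var

# Toy data: features = (u, v, du/dx); target = HR-vs-LR tendency discrepancy
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)
mean, var = gp_closure_predict(X, y, X[:5])
```

In an online deployment as described above, `gp_closure_predict` would be called once per coarse time step, with the mean added to the LR tendency and the variance retained as an uncertainty estimate; a sparse variational GP replaces the O(n³) Cholesky with inducing points.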

We assess the approach on three test beds: (i) flow past a cylinder across multiple Reynolds numbers; (ii) tidally modulated flow past a cylinder with time-varying Reynolds number; and (iii) bottom gravity currents. Models are trained on pairs of downsampled-HR and LR fields and then tested across different regimes. We evaluate performance using field-wise errors and wake metrics: mean velocity profiles in the near and far wake, lift (C_L) and drag (C_D) coefficients, and the Strouhal number (St). Relative to LR baselines without closure, the GP closures reduce the L2/L∞ errors of the resolved fields and bring the mean velocity, C_D/C_L, and St closer to the HR references across the trained Reynolds numbers. The online GP closure adds negligible wall-clock cost relative to the fluid step, preserves the conservative finite-volume structure, and provides uncertainty estimates. Overall, these results demonstrate a practical, uncertainty-aware GP closure that improves coarse-grid fidelity for 2-D fluid and ocean flows and could potentially be extended to 3-D ocean frameworks.
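The field errors and the Strouhal number are standard diagnostics and can be sketched as follows; the synthetic lift trace and the D = U = 1 nondimensionalization are illustrative assumptions, not the data of the study.

```python
import numpy as np

def field_errors(u_lr, u_hr):
    """Relative L2 and L-infinity errors of a resolved field vs the HR reference."""
    diff = u_lr - u_hr
    l2 = np.linalg.norm(diff) / np.linalg.norm(u_hr)
    linf = np.abs(diff).max() / np.abs(u_hr).max()
    return l2, linf

def strouhal_number(cl, dt, D=1.0, U=1.0):
    """Dominant shedding frequency of the lift signal, nondimensionalized as St = f D / U."""
    cl = cl - cl.mean()
    freqs = np.fft.rfftfreq(len(cl), dt)
    spectrum = np.abs(np.fft.rfft(cl))
    f_shed = freqs[spectrum.argmax()]
    return f_shed * D / U

# Synthetic lift coefficient shedding at 0.2 Hz (so St = 0.2 for D = U = 1)
dt = 0.01
t = np.arange(0, 100, dt)
cl = 0.3 * np.sin(2 * np.pi * 0.2 * t)
St = strouhal_number(cl, dt)  # -> 0.2
```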

Probabilistic Forecasting, Optimal Path Planning, and Adaptive Sampling for Multi-Platform Operations in the Gulf of Mexico

The Loop Current (LC), along with its associated meanders, eddies (LCEs), and cyclonic frontal eddies (LCFEs), plays a major role in the Gulf of Mexico and has been extensively studied over the past decade, including several field campaigns. During the recent 6-month collaborative GRand Adaptive Sampling Experiment (GRASE; April to September 2025), we employed our MIT Multidisciplinary Simulation, Estimation, and Assimilation Systems (MSEAS), including Error Subspace Statistical Estimation (ESSE) large-ensemble forecasting, to provide real-time probabilistic forecasts. We describe the evolution of ocean features and evaluate the predictive skill of our forecasts against independent data. We present our probabilistic glider reachability and optimal path planning forecasts, including the use of reachability and heading forecasts for optimal deployment, feature sampling and tracking, and recovery of multiple gliders. We show that the actual glider tracks remained within our forecast reachability fronts and that the headings could be followed in real time. We demonstrate the use of our information-theoretic methodology for optimal adaptive sampling with gliders and floats, where we maximize information about specific future properties of the LC, LCEs, and LCFEs. We issued reachability forecasts for floats and modified 3D Lagrangian flow maps to account for float motions. We forecast float transports during several periods, highlighting how float deployment regions remain coherent or become distorted, and in particular how the transport of floats on the edges of LCFEs can be affected by shear and turbulence. Lastly, we illustrate our real-time clustering of the large-ensemble probabilistic LCE forecasts and how it showed that an LCE detachment in June/early July 2025 was very unlikely. This work is in collaboration with the whole GRASE team.
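The information-theoretic sampling objective can be illustrated with a minimal ensemble-based sketch (a Gaussian approximation, not the MSEAS/ESSE implementation): rank candidate sampling locations by the mutual information between the observable there and a future quantity of interest, both estimated across ensemble members.

```python
import numpy as np

def best_sampling_location(ensemble_obs, qoi):
    """Pick the candidate location whose Gaussian-approximate mutual
    information with the quantity of interest is largest.
    For jointly Gaussian scalars, MI = -0.5 * ln(1 - rho^2).
    ensemble_obs: (n_members, n_locations); qoi: (n_members,)."""
    mi = np.empty(ensemble_obs.shape[1])
    for j in range(ensemble_obs.shape[1]):
        rho = np.corrcoef(ensemble_obs[:, j], qoi)[0, 1]
        mi[j] = -0.5 * np.log(1.0 - min(rho**2, 1 - 1e-12))
    return int(mi.argmax()), mi

# Toy ensemble: a future LC property across 500 members, 4 candidate locations,
# with location 2 constructed to be strongly informative about the property
rng = np.random.default_rng(1)
q = rng.normal(size=500)
obs = rng.normal(size=(500, 4))
obs[:, 2] = 0.9 * q + 0.1 * rng.normal(size=500)
best, mi = best_sampling_location(obs, q)  # best == 2
```

In practice the candidates would be feasible glider or float waypoints inside the forecast reachability front, and the quantity of interest a future LC, LCE, or LCFE property.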

Physics-Inspired Multiscale Neural Architectures for Forecasting Fluid and Oceanic Flows

Recent advances in deep learning have led to neural architectures effective for modeling fluid dynamics, with an emphasis on weather prediction and atmospheric modeling. In this work, we develop physics-inspired deep learning models for fluid and oceanic processes, integrating principles from physics and numerical modeling directly within the deep neural architecture to learn multiscale features and train effectively from limited data, both essential characteristics of ocean dynamics and data. Inspired by attention-based architectures, we adapt attention mechanisms using physics and computational-stencil concepts from numerical PDE solvers. Because fluid dynamics depends on both spatial locality and temporal history, we modify attention mechanisms to capture the rich spatiotemporal dynamics of fluid flows efficiently. Our new physics-inspired attention mechanisms handle complex bathymetry and coastal land, support learning of multiscale features and multi-dynamics, and model the effects of external ocean forcing. We also investigate different choices of numerical integration schemes, error norms, and loss functions to ensure stable predictions over long temporal roll-outs.
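A minimal sketch of stencil-restricted attention, in a toy NumPy setting rather than the actual architecture: attention weights are computed only over a (2r+1)×(2r+1) neighborhood of each grid point, mirroring the locality of finite-difference/finite-volume stencils. All shapes and weight matrices here are hypothetical.

```python
import numpy as np

def stencil_attention(field, Wq, Wk, Wv, r=1):
    """Attention restricted to a (2r+1)^2 stencil around each grid point.
    field: (H, W, C) feature map; Wq, Wk, Wv: (C, C) projections."""
    H, W, C = field.shape
    q, k, v = field @ Wq, field @ Wk, field @ Wv
    out = np.zeros_like(q)
    for i in range(H):
        for j in range(W):
            # Clip the stencil at domain boundaries (e.g., coastlines, walls)
            i0, i1 = max(i - r, 0), min(i + r + 1, H)
            j0, j1 = max(j - r, 0), min(j + r + 1, W)
            keys = k[i0:i1, j0:j1].reshape(-1, C)
            vals = v[i0:i1, j0:j1].reshape(-1, C)
            scores = keys @ q[i, j] / np.sqrt(C)
            w = np.exp(scores - scores.max())
            w /= w.sum()                       # softmax over the stencil only
            out[i, j] = w @ vals
    return out

rng = np.random.default_rng(2)
C = 4
field = rng.normal(size=(8, 8, C))
Wq, Wk, Wv = (rng.normal(size=(C, C)) for _ in range(3))
out = stencil_attention(field, Wq, Wk, Wv)
```

Masking land cells out of the neighborhood, or stacking such layers with growing r, are natural ways to obtain the bathymetry handling and multiscale receptive fields described above.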

To evaluate and validate the utility of these models, we first showcase applications to the prediction of idealized fluid flows such as eddy shedding past obstacles and quasi-geostrophic turbulence. We then train our deep learning architectures on realistic high-resolution data-assimilative ocean simulations and real-time sea experiments, e.g., surface velocity fields from the Loop Current System (LCS) in the Gulf of Mexico. We illustrate both ensemble and deterministic deep learning forecasts under various scenarios and in recursive and non-recursive applications. We quantify the performance of the deep learning training and forecasts using comprehensive skill metrics.
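Two common roll-out skill metrics, RMSE and centered pattern correlation, can be sketched as follows on toy data (not the forecasts of the work), illustrating how skill degrades with lead time in a recursive application.

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square error between forecast and truth fields."""
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def pattern_correlation(pred, truth):
    """Centered (anomaly) pattern correlation between forecast and truth."""
    p = pred - pred.mean()
    t = truth - truth.mean()
    return float((p * t).sum() / np.sqrt((p**2).sum() * (t**2).sum()))

# Toy recursive roll-out: forecast error grows linearly with lead time
rng = np.random.default_rng(3)
truth = rng.normal(size=(10, 32, 32))          # 10 lead times of a 32x32 field
noise = 0.05 * np.arange(1, 11)[:, None, None] * rng.normal(size=(10, 32, 32))
preds = truth + noise
scores = [(rmse(p, t), pattern_correlation(p, t)) for p, t in zip(preds, truth)]
```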

Generative Models for Super-Resolution and Inference of Quasi-Geostrophic Turbulence and Oceanic Flows

Typically, numerical simulations of the ocean, weather, and climate are coarse, and observations are sparse and gappy. Recently, generative diffusion models have emerged as state-of-the-art tools for image generation and have shown promise in various high-dimensional inverse problems. In this work, we apply and benchmark generative diffusion modeling approaches to super-resolution and inference from coarse, sparse, and gappy observations. We apply both guided approaches that minimally adapt a pre-trained unconditional diffusion model and conditional approaches that require training with paired high-resolution and coarse-resolution or observational data. We first show applications to idealized 2D quasi-geostrophic turbulence on the beta-plane in two dynamical regimes: the eddy regime and the jet regime. Next, we show extensions to inference of surface oceanic flows in the Gulf of Mexico from gappy, noisy observations. Our comprehensive skill metrics include norms of the reconstructed fields, turbulence statistical quantities, quantification of the super-resolved probabilistic ensembles and their errors, and validation of the generated posterior distributions. We also study the sensitivity to tuning parameters such as guidance strength. Our results highlight the trade-offs between ease of implementation, fidelity (sharpness), and cycle-consistency of the diffusion models, and offer practical guidance for deployment in oceanographic and geophysical inverse problems.
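The guided approach can be caricatured in a toy linear-Gaussian setting: replace the learned unconditional score with the analytic score of an N(0, I) prior, and add the gradient of a data-fidelity term at each Langevin step. This is a sketch of the guidance structure only, under stated assumptions, not the diffusion models used in the work; here the exact posterior is known, so the sampler can be checked.

```python
import numpy as np

def langevin_posterior_sample(y, A, sigma, n_steps=20000, eps=1e-3, seed=0):
    """Langevin dynamics on (prior score + data-fidelity guidance), the same
    structure as guided diffusion sampling, with the learned score replaced
    by the analytic score of an N(0, I) prior."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=A.shape[1])
    samples = []
    for step in range(n_steps):
        prior_score = -x                            # grad log N(x; 0, I)
        guidance = A.T @ (y - A @ x) / sigma**2     # grad log N(y; Ax, sigma^2 I)
        x = (x + 0.5 * eps * (prior_score + guidance)
             + np.sqrt(eps) * rng.normal(size=x.size))
        if step > n_steps // 2:                     # discard burn-in
            samples.append(x.copy())
    return np.array(samples)

A = np.array([[1.0, 0.0]])     # observe only the first component (gappy data)
sigma = 0.1
y = np.array([0.8])
S = langevin_posterior_sample(y, A, sigma)
# Analytic posterior mean of the observed component: y / (1 + sigma^2) ~ 0.792
```

Scaling the guidance term by a factor other than 1 plays the role of the guidance strength whose sensitivity is studied above: too weak under-fits the observations, too strong overrides the prior.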

Bayesian Learning of Reactive Fluid Dynamical Models