
Francesco Benfenati

 

I am currently a PhD student in the Physics and Astronomy Department at the University of Bologna (Italy). I will be joining the MSEAS group from January to April 2025 for a visiting period. My research interests are in ocean submesoscale dynamics and marine litter transport, with a focus on oil spill modelling. I really enjoy developing software for research purposes. Outside of research, I enjoy cooking, trekking, and playing sports. I love music and playing the double bass. I am married with one daughter.

Alternating-Implicit Dynamically Orthogonal Runge-Kutta Schemes and Efficient Nonlinearity Evaluation

We introduce a family of implicit integration methods for the dynamical low-rank approximation: the alternating-implicit dynamically orthogonal Runge-Kutta (ai-DORK) schemes. Explicit integration often requires restrictively small time steps and suffers from stability issues; our implicit schemes eliminate these concerns in the low-rank setting. We incorporate our alternating iterative low-rank linear solver into high-order Runge-Kutta methods, creating accurate and stable schemes for a variety of previously intractable problems, including stiff systems. Fully implicit and implicit-explicit (IMEX) ai-DORK schemes are derived, and we perform a stability analysis on both. The schemes may be made rank-adaptive and can handle ill-conditioned systems. To evaluate nonlinearities efficiently, we propose a local/piecewise polynomial approximation with adaptive clustering, where on-the-fly reclustering can be performed cheaply in the coefficient space. We demonstrate the ai-DORK schemes and our local nonlinear approximation technique on an ill-conditioned matrix differential equation, a stiff two-dimensional viscous Burgers' equation, the nonlinear stochastic ray equations, the nonlinear stochastic Hamilton-Jacobi-Bellman PDE for time-optimal path planning, and the parabolic wave equation with low-rank domain decomposition in Massachusetts Bay.
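To illustrate the alternating-implicit idea on its simplest instance, the sketch below takes one implicit Euler step for the linear matrix ODE dX/dt = AX + XB while keeping X = U V^T at a fixed rank: freezing one low-rank factor turns the implicit equation into a small Sylvester solve for the other factor, and the two solves alternate. This is an illustrative toy under those assumptions, not the ai-DORK schemes themselves; all function names are ours.

```python
import numpy as np
from scipy.linalg import solve_sylvester, qr

def alternating_implicit_euler_step(A, B, X0, h, rank, sweeps=5):
    """One implicit Euler step for dX/dt = A X + X B, with X kept as U @ Vt
    at fixed rank. Each sweep freezes one factor and solves a small Sylvester
    equation for the other (illustrative sketch, not the ai-DORK scheme)."""
    n, m = X0.shape
    # initial right basis from a rank-r SVD of the previous state
    _, _, Vt = np.linalg.svd(X0, full_matrices=False)
    Vt = Vt[:rank]
    I_n, I_r = np.eye(n), np.eye(rank)
    for _ in range(sweeps):
        V, _ = qr(Vt.T, mode='economic')              # orthonormal right basis
        # (I - hA) U + U (-h V^T B V) = X0 V : Sylvester solve for U (n x r)
        U = solve_sylvester(I_n - h * A, -h * (V.T @ B @ V), X0 @ V)
        U, _ = qr(U, mode='economic')                 # orthonormal left basis
        # (I - h U^T A U) Vt + Vt (-h B) = U^T X0 : Sylvester solve for Vt (r x m)
        Vt = solve_sylvester(I_r - h * (U.T @ A @ U), -h * B, U.T @ X0)
    return U, Vt
```

At convergence the factors satisfy the Galerkin conditions of the implicit system in both bases; for small steps a handful of sweeps suffices.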

An Adaptive High-Order Locally-Nonhydrostatic Ocean Solver

To simulate and study ocean phenomena involving complex dynamics over a wide range of scales, from regional to small scales (e.g., thousands of kilometers to meters), resolving submesoscale features, nonlinear internal waves, subduction, and overturning where they occur, non-hydrostatic (NHS) ocean models are needed, at least locally. The main computational burden for NHS models arises from solving a globally coupled 3D elliptic PDE for the NHS pressure. To address this challenge, we start with a high-order hybridizable discontinuous Galerkin (HDG) (Nguyen et al. 2009) finite element NHS ocean solver (Ueckermann and Lermusiaux 2016) that is well suited for multidynamics systems. We present a new adaptive algorithm that decomposes a domain into NHS and HS dynamics subdomains and solves their corresponding equations, thereby reducing the cost associated with the NHS pressure solution step. The NHS/HS subdomains are adapted based on new numerical NHS estimators, such that NHS dynamics are used only where needed. We compare and explore choices of boundary conditions imposed on the internal boundaries between subdomains of different dynamics. We evaluate and analyze the computational costs and accuracy of the adaptive NHS-HS solver using three idealized NHS dynamics test cases: (i) idealized internal waves (Vitousek and Fringer 2011), (ii) tidally forced oscillatory flow over seamounts, and (iii) bottom gravity currents. We then complete more realistic NHS-HS simulations of Rayleigh-Taylor-instability-driven subduction events by nesting within our MSEAS realistic and operational data-assimilative HS ocean modeling system. Finally, we discuss DG-FEM-based numerical techniques that stabilize and accelerate the high-order ocean solvers by leveraging the high aspect ratio characteristic of ocean domains.
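As a toy illustration of why restricting the elliptic solve pays off, the sketch below solves a pressure-Poisson problem only on cells flagged as nonhydrostatic. The assumptions are ours: a uniform 2D grid, plain Jacobi iteration, and the simple interface condition q = 0 on the internal NHS/HS boundary (one of the boundary-condition choices one could impose there); the HDG solver and numerical NHS estimators in the work above are far more sophisticated.

```python
import numpy as np

def nhs_pressure_on_subdomain(rhs, nhs_mask, dx, n_iter=3000):
    """Solve lap(q) = rhs with a 5-point Jacobi iteration, but only on cells
    flagged nonhydrostatic; hydrostatic cells keep q = 0, which also imposes
    a homogeneous Dirichlet condition on the internal NHS/HS boundary."""
    q = np.zeros_like(rhs)
    for _ in range(n_iter):
        q_new = np.zeros_like(q)
        q_new[1:-1, 1:-1] = 0.25 * (q[2:, 1:-1] + q[:-2, 1:-1]
                                    + q[1:-1, 2:] + q[1:-1, :-2]
                                    - dx * dx * rhs[1:-1, 1:-1])
        q = np.where(nhs_mask, q_new, 0.0)   # HS cells stay hydrostatic (q = 0)
    return q
```

In a production solver the update (and the linear system) would be assembled only for the flagged unknowns, so the elliptic-solve cost scales with the NHS subdomain rather than the full domain.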

Multiscale Delay Neural Operators for Fluid and Ocean Flows

We propose a new delay neural operator applicable to both large-scale advective flow field prediction and the corresponding subgrid-scale closure. Our operator optimizes computation by focusing solely on input fields and correcting for any unseen-field influences due to model truncation, coarsening, or aggregation of the full-order model. Compressing input fields to a latent space enables arbitrary output resolution efficiently, without storing a complete, discretized system state in memory. Additionally, unseen fields are never computed, unlike in classic numerical models and many deep-learning Markov process models.

We construct the delay neural operator by extending neural delay differential equations to 2D and higher dimensions. Inspired by the Mori-Zwanzig formulation, neural delay differential equations and neural closure models perform temporal convolution or kernel integration to accumulate hidden processes (a distributed delay), approximating unseen-field effects without storing additional variables. Linear, discretized versions of this distributed delay (discrete delays) have been used to develop effective reduced-order models. We extend these distributed and discrete delays to neural operators. In particular, we present discrete-delayed RNNs as a superset of Picard-iteration-performing neural operators. We explore multiscale and scale-invariant architectures, enabling arbitrary input and output resolution for flow fields. We also investigate the origin and extension of physical representations (concepts of waves, eddies, and vortices) through network layers. Toward efficient backpropagation with constant memory (independent of the number of layers), we simplify adjoint computation and explore integral-free alternatives, including Suzuki-Trotter operator splitting and simple discretization. Tests are performed against the simulated 2D viscous Burgers' equation with Smagorinsky closure, 2D homogeneous isotropic quasi-geostrophic beta-plane turbulence (2D-HIT QG), and data-assimilated ocean surface velocity simulations.
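The distributed-delay idea can be made concrete in a scalar toy model. The sketch below (our hypothetical example, with a hand-picked kernel standing in for a learned one) advances dz/dt = F(z(t)) + sum_j k_j G(z(t - j*dt)) * dt with explicit Euler; the kernel-weighted sum over past states is the discretization of a Mori-Zwanzig-style memory integral that accounts for unseen effects.

```python
import numpy as np

def integrate_delay(z_hist, F, G, kernel, dt, steps):
    """Explicit Euler roll-out of a scalar distributed-delay model
        dz/dt = F(z(t)) + sum_j kernel[j] * G(z(t - j*dt)) * dt.
    The kernel-weighted sum over past states discretizes a memory integral
    (toy stand-in for the delay operator). `z_hist` must hold at least
    len(kernel) past values, newest last."""
    hist = list(z_hist)
    for _ in range(steps):
        memory = sum(k * G(hist[-1 - j]) for j, k in enumerate(kernel)) * dt
        hist.append(hist[-1] + dt * (F(hist[-1]) + memory))
    return np.array(hist)
```

With a zero kernel the roll-out reduces to plain Euler integration of the Markovian part, which makes the memory term's contribution easy to isolate in experiments.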

Physics-Inspired Neural Architectures for Forecasting Fluid and Oceanic Flows

Recent advances in deep learning have led to neural architectures effective for modeling fluid dynamics, with an emphasis on weather prediction and atmospheric modeling. In this work, we develop physics-inspired deep learning models for fluid and oceanic processes, integrating principles from physics and numerical modeling directly within the deep neural architecture to learn multiscale features and train effectively from limited data, both essential characteristics of ocean dynamics and data. Building on attention-based architectures, we adapt attention mechanisms using physics and computational-stencil concepts from numerical PDE solvers. Given that fluid dynamics depends on both spatial locality and temporal history, we modify attention mechanisms to capture the rich spatiotemporal dynamics of fluid flows efficiently. Our new physics-inspired attention mechanisms can handle complex bathymetry and coastal land, support learning multiscale features and multi-dynamics, and model the effects of external ocean forcing. We also investigate different choices of numerical integration schemes, error norms, and loss functions to ensure stable predictions over long temporal roll-outs.
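As a minimal illustration of combining attention with stencil locality (an assumption-laden toy of ours, not the architectures developed here), the sketch below computes single-head attention on a 1D grid where each point may attend only to neighbors within a fixed stencil radius, mirroring the footprint of a finite-difference stencil:

```python
import numpy as np

def stencil_attention(x, Wq, Wk, Wv, radius=1):
    """Single-head dot-product attention over a 1D grid of features x (n, d),
    masked so each node attends only to neighbors within `radius` grid points
    (hypothetical simplification of a physics/stencil-inspired mechanism)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)
    idx = np.arange(n)
    mask = np.abs(idx[:, None] - idx[None, :]) <= radius   # stencil locality
    scores = np.where(mask, scores, -np.inf)               # block far nodes
    w = np.exp(scores - scores.max(axis=1, keepdims=True)) # stable softmax
    w /= w.sum(axis=1, keepdims=True)
    return w @ v
```

With radius 0 the mask keeps only self-attention, so the layer degenerates to the pointwise value projection, which is a convenient sanity check on the masking.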

To evaluate and validate the utility of these models, we first showcase applications to predict idealized fluid flows such as eddy shedding past obstacles, vorticity dynamics, and bottom gravity currents for varied Reynolds and Grashof numbers. We then train our deep learning architectures for realistic high-resolution data-assimilative ocean simulations and real-time sea experiments, e.g., surface velocity fields from the Loop Current System (LCS) in the Gulf of Mexico. We illustrate both ensemble and deterministic deep learning forecasts under various scenarios and in recursive and non-recursive applications. We quantify the performance of the deep learning training and forecasts using comprehensive skill metrics.