
Range-Dynamical Low-Rank Split-Step Fourier Method for the Parabolic Wave Equation

Charous, A. and P.F.J. Lermusiaux, 2024. Range-Dynamical Low-Rank Split-Step Fourier Method for the Parabolic Wave Equation. Journal of the Acoustical Society of America, sub judice.

Numerical solutions to the parabolic wave equation are plagued by the curse of dimensionality coupled with the Nyquist criterion. As a remedy, a new range-dynamical low-rank split-step Fourier methodology is developed. Our integration scheme scales sub-linearly with the number of classical degrees of freedom in the transverse directions. It is orders of magnitude faster than the classic full-rank split-step Fourier algorithm and also saves copious amounts of storage space. This enables numerical solutions of the parabolic wave equation at higher frequencies and on larger domains, and simulations may be performed on laptops rather than high-performance computing clusters. By using a rank-adaptive scheme to further optimize the low-rank equations, we ensure our approximate solution is highly accurate and efficient. The methodology and algorithms are demonstrated on realistic high-resolution data-assimilative ocean fields in Massachusetts Bay for three-dimensional acoustic configurations with different source locations and frequencies. The acoustic pressure, transmission loss, and phase solutions are analyzed in geometries with seamounts and canyons across and along Stellwagen Bank. The convergence with the rank of the subspace and the properties of the rank-adaptive scheme are demonstrated, and all results are successfully compared with those of the full-rank method when feasible.
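The sub-linear scaling comes from evolving low-rank factors of the transverse field instead of the full grid. The storage side of that idea can be illustrated with a hypothetical smooth field and a truncated SVD standing in for the paper's dynamical low-rank factorization:

```python
import numpy as np

# Hypothetical smooth transverse field on an N x N grid, compressed to rank r:
# storage drops from N*N values to 2*N*r, and matrix-vector products drop from
# O(N^2) to O(N*r) operations.
N, r = 256, 8
y = np.linspace(-1.0, 1.0, N)
z = np.linspace(-1.0, 1.0, N)
Y, Z = np.meshgrid(y, z, indexing="ij")
field = np.exp(-2.0 * (Y - 0.3 * Z) ** 2) * np.cos(3.0 * Y)  # smooth => fast singular-value decay

U, s, Vt = np.linalg.svd(field, full_matrices=False)
left = U[:, :r] * s[:r]     # N x r factor (modes scaled by singular values)
right = Vt[:r, :]           # r x N factor
approx = left @ right

rel_err = np.linalg.norm(field - approx) / np.linalg.norm(field)
compression = (2 * N * r) / (N * N)   # fraction of full-grid storage
```

Here the rank-8 factors keep well under a tenth of the full-grid storage while reproducing the field to high relative accuracy; the paper evolves such factors directly in range rather than recompressing a full-grid solution.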

Stable Rank-adaptive Dynamically Orthogonal Runge-Kutta Schemes

Charous, A. and P.F.J. Lermusiaux, 2024. Stable Rank-adaptive Dynamically Orthogonal Runge-Kutta Schemes. SIAM Journal on Scientific Computing 46(1), A529-A560. doi:10.1137/22M1534948

We develop two new sets of stable, rank-adaptive Dynamically Orthogonal Runge-Kutta (DORK) schemes that capture the high-order curvature of the nonlinear low-rank manifold. The DORK schemes asymptotically approximate the truncated singular value decomposition at a greatly reduced cost while preserving mode continuity using newly derived retractions. We show that arbitrarily high-order optimal perturbative retractions can be obtained, and we prove that these new retractions are stable. In addition, we demonstrate that repeatedly applying retractions yields a gradient-descent algorithm on the low-rank manifold that converges geometrically when approximating a low-rank matrix. When approximating a higher-rank matrix, iterations converge linearly to the best low-rank approximation. We then develop a rank-adaptive retraction that is robust to overapproximation. Building off of these retractions, we derive two novel, rank-adaptive integration schemes that dynamically update the subspace upon which the system dynamics is projected within each time-step: the stable, optimal Dynamically Orthogonal Runge-Kutta (so-DORK) and gradient-descent Dynamically Orthogonal Runge-Kutta (gd-DORK) schemes. These integration schemes are numerically evaluated and compared on an ill-conditioned matrix differential equation, an advection-diffusion partial differential equation, and a nonlinear, stochastic reaction-diffusion partial differential equation. Results show a reduced error accumulation rate with the new stable, optimal and gradient-descent integrators. In addition, we find that rank adaptation allows for highly accurate solutions while preserving computational efficiency.
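A minimal numpy sketch of the gd-DORK idea, under simplifying assumptions: the exact SVD truncation plays the role of the retraction (the paper's retractions avoid this full SVD), and repeated retracted gradient steps on a higher-rank target with a clear spectral gap converge to its best low-rank approximation:

```python
import numpy as np

rng = np.random.default_rng(0)

def svd_truncate(X, rank):
    # Idealized retraction: exact (Eckart-Young) projection onto the rank-r manifold.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Higher-rank target with a clear gap after the first five singular values.
U0, _ = np.linalg.qr(rng.standard_normal((50, 50)))
V0, _ = np.linalg.qr(rng.standard_normal((40, 40)))
sigma = np.concatenate([[10.0, 9.0, 8.0, 7.0, 6.0], 0.1 * np.ones(35)])
A = U0[:, :40] @ np.diag(sigma) @ V0.T

# gd-DORK-style iteration (sketch): retracted gradient descent on 0.5*||X - A||^2.
r, eta = 5, 0.5
X = svd_truncate(rng.standard_normal((50, 40)), r)
for _ in range(200):
    X = svd_truncate(X - eta * (X - A), r)

best = svd_truncate(A, r)   # best rank-r approximation of A
gap = np.linalg.norm(X - best) / np.linalg.norm(best)
```

The fixed point of this iteration is the truncated SVD of A, and the iterates approach it linearly, consistent with the convergence behavior described above.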

Generalized Neural Closure Models with Interpretability

Gupta, A., and P.F.J. Lermusiaux, 2023. Generalized Neural Closure Models with Interpretability. Scientific Reports 13, 10364. doi:10.1038/s41598-023-35319-w

Improving the predictive capability and computational cost of dynamical models is often at the heart of augmenting computational physics with machine learning (ML). However, most learning results are limited in interpretability and generalization over different computational grid resolutions, initial and boundary conditions, domain geometries, and physical or problem-specific parameters. In the present study, we simultaneously address all these challenges by developing the novel and versatile methodology of unified neural partial delay differential equations. We augment existing/low-fidelity dynamical models directly in their partial differential equation (PDE) forms with both Markovian and non-Markovian neural network (NN) closure parameterizations. The melding of the existing models with NNs in the continuous spatiotemporal space followed by numerical discretization automatically allows for the desired generalizability. The Markovian term is designed to enable extraction of its analytical form and thus provides interpretability. The non-Markovian terms allow accounting for inherently missing time delays needed to represent the real world. Our flexible modeling framework provides full autonomy for the design of the unknown closure terms such as using any linear-, shallow-, or deep-NN architectures, selecting the span of the input function libraries, and using either or both Markovian and non-Markovian closure terms, all in accord with prior knowledge. We obtain adjoint PDEs in the continuous form, thus enabling direct implementation across differentiable and non-differentiable computational physics codes, different ML frameworks, and treatment of nonuniformly-spaced spatiotemporal training data. We demonstrate the new generalized neural closure models (gnCMs) framework using four sets of experiments based on advecting nonlinear waves, shocks, and ocean acidification models. Our learned gnCMs discover missing physics, find leading numerical error terms, discriminate among candidate functional forms in an interpretable fashion, achieve generalization, and compensate for the lack of complexity in simpler models. Finally, we analyze the computational advantages of our new framework.
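The interpretability of a Markovian closure can be illustrated with a toy stand-in (not the gnCM adjoint-based training): fitting the missing term of a low-fidelity ODE as a linear combination of a candidate function library recovers its analytical form.

```python
import numpy as np

# Toy interpretable Markovian closure (a stand-in, not the gnCM adjoint-based
# training): the "truth" is du/dt = -0.5 u - u^2, the low-fidelity model only
# has du/dt = -0.5 u, and the missing term is fit from a candidate library.
rng = np.random.default_rng(1)
u = rng.uniform(0.1, 2.0, size=200)        # sampled states
truth_rhs = -0.5 * u - u**2
model_rhs = -0.5 * u
residual = truth_rhs - model_rhs           # closure the learner must represent

library = np.column_stack([u, u**2, u**3]) # candidate terms: u, u^2, u^3
coeffs, *_ = np.linalg.lstsq(library, residual, rcond=None)
# coeffs recovers the analytical form of the missing physics: ~[0, -1, 0]
```

The fitted coefficients isolate the -u^2 term, mirroring how the gnCM Markovian closure is designed so that its analytical form can be extracted after training.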

Dynamically Orthogonal Runge–Kutta Schemes with Perturbative Retractions for the Dynamical Low-Rank Approximation

Charous, A. and P.F.J. Lermusiaux, 2023. Dynamically Orthogonal Runge–Kutta Schemes with Perturbative Retractions for the Dynamical Low-Rank Approximation. SIAM Journal on Scientific Computing 45(2): A872-A897. doi:10.1137/21M1431229

Whether due to the sheer size of a computational domain, the fine resolution required, or the multiple scales and stochasticity of the dynamics, the dimensionality of a system must often be reduced so that problems of interest become computationally tractable. In this paper, we develop retractions for time-integration schemes that efficiently and accurately evolve the dynamics of a system’s low-rank approximation. Through differential geometry, we analyze the error incurred at each time-step due to the high-order curvature of the manifold of fixed-rank matrices. We first obtain a novel, explicit, computationally inexpensive set of algorithms that we refer to as perturbative retractions and show that the set converges to an ideal retraction that projects optimally and exactly to the manifold of fixed-rank matrices by reducing what we define as the projection-retraction error. Furthermore, each perturbative retraction itself exhibits high-order convergence to the best low-rank approximation of the full-rank solution. Using perturbative retractions, we then develop a new class of integration techniques that we refer to as dynamically orthogonal Runge–Kutta (DORK) schemes. DORK schemes integrate along the nonlinear manifold, updating the subspace upon which we project the system’s dynamics as it is integrated. Through numerical test cases, we demonstrate our schemes for matrix addition, real-time data compression, and deterministic and stochastic partial differential equations. We find that DORK schemes are highly accurate by incorporating knowledge of the dynamic, nonlinear manifold’s high-order curvature, and they are computationally efficient by limiting the growing rank needed to represent the evolving dynamics.
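The role of the manifold's curvature can be seen in a small numpy experiment (a sketch of the geometry, not the paper's perturbative-retraction formulas): at a rank-r point, the ideal retraction (SVD truncation) agrees with the tangent-space linearization to second order, so halving the perturbation cuts the mismatch by roughly a factor of four.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, r = 30, 20, 4

def svd_truncate(A, rank):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

# Rank-r base point X = U S V^T and a generic full-rank perturbation D.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
V, _ = np.linalg.qr(rng.standard_normal((m, r)))
X = U @ np.diag([4.0, 3.0, 2.0, 1.0]) @ V.T
D = rng.standard_normal((n, m))

def tangent_project(D, U, V):
    # Orthogonal projection onto the tangent space of the rank-r manifold at X.
    PU, PV = U @ U.T, V @ V.T
    return PU @ D + D @ PV - PU @ D @ PV

def mismatch(eps):
    ideal = svd_truncate(X + eps * D, r)           # ideal retraction
    linear = X + eps * tangent_project(D, U, V)    # first-order model
    return np.linalg.norm(ideal - linear)

e1, e2 = mismatch(1e-2), mismatch(5e-3)  # halving eps should quarter the gap
```

The quadratic shrinkage of the mismatch is exactly the curvature-induced error that perturbative retractions are built to cancel to higher and higher order.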

Minimum-Correction Second-Moment Matching: Theory, Algorithms and Applications

Lin, J. and P.F.J. Lermusiaux, 2021. Minimum-Correction Second-Moment Matching: Theory, Algorithms and Applications. Numerische Mathematik 147(3): 611–650. doi:10.1007/s00211-021-01178-8

We address the problem of finding the closest matrix Ũ to a given U under the constraint that a prescribed second-moment matrix P̃ must be matched, i.e. ŨᵀŨ = P̃. We obtain a closed-form formula for the unique global optimizer for the full-rank case, that is related to U by an SPD (symmetric positive definite) linear transform. This result is generalized to rank-deficient cases as well as to infinite dimensions. We highlight the geometric intuition behind the theory and study the problem’s rich connections to minimum congruence transform, generalized polar decomposition, optimal transport, and rank-deficient data assimilation. In the special case of P̃ = I, minimum-correction second-moment matching reduces to the well-studied optimal orthonormalization problem. We investigate the general strategies for numerically computing the optimizer and analyze existing polar decomposition and matrix square root algorithms. We modify and stabilize two Newton iterations previously deemed unstable for computing the matrix square root, such that they can now be used to efficiently compute both the orthogonal polar factor and the SPD square root. We then verify the higher performance of the various new algorithms using benchmark cases with randomly generated matrices. Lastly, we complete two applications for the stochastic Lorenz-96 dynamical system in a chaotic regime. In reduced subspace tracking using dynamically orthogonal equations, we maintain the numerical orthonormality and continuity of time-varying base vectors. In ensemble square root filtering for data assimilation, the prior samples are transformed into posterior ones by matching the covariance given by the Kalman update while also minimizing the corrections to the prior samples.
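For the special case P̃ = I, the optimizer is the orthogonal polar factor U(UᵀU)^(-1/2). A short numpy check (computing the factor via the SVD, an implementation choice rather than the stabilized Newton iterations developed in the paper) verifies the second-moment constraint and compares against a QR-based orthonormalization:

```python
import numpy as np

rng = np.random.default_rng(3)
U = rng.standard_normal((10, 4))

# Special case P~ = I: the closest matrix with orthonormal columns is the
# orthogonal polar factor U (U^T U)^{-1/2}, computed here via the SVD
# U = W S V^T, which gives U_tilde = W V^T.
W, s, Vt = np.linalg.svd(U, full_matrices=False)
U_tilde = W @ Vt

gram = U_tilde.T @ U_tilde              # matches the prescribed second moment I
Q, _ = np.linalg.qr(U)                  # a different orthonormalization
d_polar = np.linalg.norm(U_tilde - U)   # minimal by optimality ...
d_qr = np.linalg.norm(Q - U)            # ... so never larger than this
```

The polar factor satisfies ŨᵀŨ = I exactly and, by the optimality result above, is never farther from U in Frobenius norm than any other orthonormalization, such as the QR factor.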

The Extrinsic Geometry of Dynamical Systems Tracking Nonlinear Matrix Projections

Feppon, F. and P.F.J. Lermusiaux, 2019. The Extrinsic Geometry of Dynamical Systems Tracking Nonlinear Matrix Projections. SIAM Journal on Matrix Analysis and Applications, 40(2), 814–844. doi: 10.1137/18M1192780

A generalization of the concepts of extrinsic curvature and Weingarten endomorphism is introduced to study a class of nonlinear maps over embedded matrix manifolds. These (nonlinear) oblique projections generalize (nonlinear) orthogonal projections, i.e., applications mapping a point to its closest neighbor on a matrix manifold. Examples of such maps include the truncated SVD, the polar decomposition, and functions mapping symmetric and non-symmetric matrices to their linear eigenprojectors. This paper specifically investigates how oblique projections provide their image manifolds with a canonical extrinsic differential structure, over which a generalization of the Weingarten identity is available. By diagonalization of the corresponding Weingarten endomorphism, the manifold principal curvatures are explicitly characterized, which then enables us to (i) derive explicit formulas for the differential of oblique projections and (ii) study the global stability of a governing generic Ordinary Differential Equation (ODE) computing their values. This methodology, exploited for the truncated SVD in (Feppon 2018), is generalized to non-Euclidean settings, and applied to the four other maps mentioned above and their image manifolds: respectively, the Stiefel, the isospectral, the Grassmann manifolds, and the manifold of fixed rank (non-orthogonal) linear projectors. In all cases studied, the oblique projection of a target matrix is surprisingly the unique stable equilibrium point of the above gradient flow. Three numerical applications concerned with ODEs tracking dominant eigenspaces involving possibly multiple eigenvalues finally showcase the results.
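A concrete instance of one of these image manifolds is the set of linear eigenprojectors. The sketch below builds the eigenprojector of a hypothetical diagonalizable, non-symmetric matrix and checks that it is idempotent and commutes with the matrix while being oblique (non-symmetric) rather than orthogonal:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
S = rng.standard_normal((n, n)) + 3.0 * np.eye(n)   # generic invertible basis (assumed)
D = np.diag([3.0, 1.0, 0.5, -0.2, -1.0])            # distinct real eigenvalues
A = S @ D @ np.linalg.inv(S)                        # non-symmetric, diagonalizable

# Linear eigenprojector onto the dominant eigenspace of A: in the eigenbasis it
# is e1 e1^T, so P = S e1 e1^T S^{-1}. It is idempotent and commutes with A,
# but it is not symmetric: an oblique, not orthogonal, projector.
E1 = np.zeros((n, n))
E1[0, 0] = 1.0
P = S @ E1 @ np.linalg.inv(S)
```

The map from A to such a projector is exactly the kind of non-orthogonal projection onto the manifold of fixed-rank linear projectors studied in the paper.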

Dynamically Orthogonal Numerical Schemes for Efficient Stochastic Advection and Lagrangian Transport

Feppon, F. and P.F.J. Lermusiaux, 2018. Dynamically Orthogonal Numerical Schemes for Efficient Stochastic Advection and Lagrangian Transport. SIAM Review, 60(3), 595–625. doi:10.1137/16m1109394

Quantifying the uncertainty of Lagrangian motion can be performed by solving a large number of ordinary differential equations with random velocities, or equivalently a stochastic transport partial differential equation (PDE) for the ensemble of flow-maps. The Dynamically Orthogonal (DO) decomposition is applied as an efficient dynamical model order reduction to solve for such stochastic advection and Lagrangian transport. Its interpretation as the method that applies instantaneously the truncated SVD on the matrix discretization of the original stochastic PDE is used to obtain new numerical schemes. Fully linear, explicit central advection schemes stabilized with numerical filters are selected to ensure efficiency, accuracy, stability, and direct consistency between the original deterministic and stochastic DO advections and flow-maps. Various strategies are presented for selecting a time-stepping that accounts for the curvature of the fixed rank manifold and the error related to closely singular coefficient matrices. Efficient schemes are developed to dynamically evolve the rank of the reduced solution and to ensure the orthogonality of the basis matrix while preserving its smooth evolution over time. Finally, the new schemes are applied to quantify the uncertain Lagrangian motions of a 2D double gyre flow with random frequency and of a stochastic flow past a cylinder.
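The interpretation of DO as an instantaneous truncated SVD can be sketched on a synthetic ensemble, with hypothetical sine modes standing in for the stochastic flow-map realizations:

```python
import numpy as np

rng = np.random.default_rng(5)
n_grid, n_samples, r = 100, 500, 3

# Hypothetical ensemble: each column is one realization of a stochastic field
# built from three sine modes with random amplitudes (a stand-in for flow maps).
x = np.linspace(0.0, 1.0, n_grid)
xi = rng.standard_normal((3, n_samples))
ensemble = (np.sin(np.pi * x)[:, None] * xi[0]
            + 0.3 * np.sin(2.0 * np.pi * x)[:, None] * xi[1]
            + 0.1 * np.sin(3.0 * np.pi * x)[:, None] * xi[2])

# DO-like decomposition at a fixed time: mean plus r orthonormal spatial modes
# with stochastic coefficients, obtained by an instantaneous truncated SVD.
mean = ensemble.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(ensemble - mean, full_matrices=False)
modes = U[:, :r]                     # orthonormal spatial basis
coeffs = np.diag(s[:r]) @ Vt[:r, :]  # stochastic coefficients per realization
recon = mean + modes @ coeffs
```

The DO schemes of the paper evolve the mean, modes, and coefficients dynamically rather than recomputing this SVD, but at any instant the reduced solution has exactly this mean-plus-modes structure.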

A Geometric Approach to Dynamical Model–Order Reduction

Feppon, F. and P.F.J. Lermusiaux, 2018. A Geometric Approach to Dynamical Model-Order Reduction. SIAM Journal on Matrix Analysis and Applications, 39(1), 510–538. doi:10.1137/16m1095202

Any model order reduced dynamical system that evolves a modal decomposition to approximate the discretized solution of a stochastic PDE can be related to a vector field tangent to the manifold of fixed rank matrices. The Dynamically Orthogonal (DO) approximation is the canonical reduced order model for which the corresponding vector field is the orthogonal projection of the original system dynamics onto the tangent spaces of this manifold. The embedded geometry of the fixed rank matrix manifold is thoroughly analyzed. The curvature of the manifold is characterized and related to the smallest singular value through the study of the Weingarten map. Differentiability results for the orthogonal projection onto embedded manifolds are reviewed and used to derive an explicit dynamical system for tracking the truncated Singular Value Decomposition (SVD) of a time-dependent matrix. It is demonstrated that the error made by the DO approximation remains controlled under the minimal condition that the original solution stays close to the low rank manifold, which translates into an explicit dependence of this error on the gap between singular values. The DO approximation is also justified as the dynamical system that applies instantaneously the SVD truncation to optimally constrain the rank of the reduced solution. Riemannian matrix optimization is investigated in this extrinsic framework to provide algorithms that adaptively update the best low rank approximation of a smoothly varying matrix. The related gradient flow provides a dynamical system that converges to the truncated SVD of an input matrix for almost every initial data.
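The dependence of the truncation error on the discarded singular values is the classical Eckart–Young result, which a few lines of numpy confirm:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((30, 20))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
r = 5

# Best rank-r approximation by SVD truncation; Eckart-Young: the spectral-norm
# error equals sigma_{r+1}, and the Frobenius error is the l2 norm of the
# discarded singular values -- the quantities controlling low-rank accuracy.
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
spec_err = np.linalg.norm(A - A_r, 2)
frob_err = np.linalg.norm(A - A_r, "fro")
```

The DO error bound discussed above refines this picture: as long as the true solution stays near the low rank manifold, the reduced dynamics inherit an error controlled by the gap between the retained and discarded singular values.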
