Randomized Algorithms in Scientific Computing
The talk will describe how techniques based on randomized methods for dimension reduction have enabled dramatic increases in computational capabilities for fundamental tasks within scientific computing and data science. A prime example concerns randomized methods for approximating matrices and for solving large linear systems. The talk will also describe how these ideas generalize to entire operators, such as solution operators for elliptic PDEs, time stepping operators for parabolic problems, and boundary-to-boundary maps such as the Dirichlet-to-Neumann map, which form essential building blocks of domain decomposition methods and multi-physics simulations.

Randomized compression techniques can also be used to build accurate but lightweight representations of physical systems for on-the-fly computing in autonomous systems, or in a real-time digital twin of a drone. These are ideal environments where one can afford a one-time expensive computation on a workstation that fully resolves the physics. Randomized compression is then used to build accurate surrogate models that faithfully reproduce the input-to-output map. Such models have small memory footprints and allow near-instantaneous evaluation of physical systems whose full models originally involved millions or billions of degrees of freedom.

The ideas presented also provide paths for advances in Scientific Machine Learning. Some paths are straightforward, such as the rapid generation of large training sets, or the fast evaluation and differentiation of objective functions involving global operators. More challenging questions concern how multiresolution representations and randomized compression techniques can be incorporated into compressed models of fully nonlinear operators.
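To make the matrix-approximation theme concrete: one standard randomized method for approximating matrices is the randomized SVD, which sketches the range of a matrix with a Gaussian test matrix before factoring in the resulting low-dimensional subspace. The following is a minimal NumPy sketch under stated assumptions; the function name, oversampling parameter, and example sizes are illustrative, not taken from the talk.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Minimal sketch of a randomized SVD (Halko-Martinsson-Tropp style)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = rank + oversample
    # Draw a Gaussian test matrix and sample the range of A.
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega
    # Orthonormal basis for the sampled range.
    Q, _ = np.linalg.qr(Y)
    # Project A onto the subspace and compute a small dense SVD.
    B = Q.T @ A
    Uhat, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Uhat
    return U[:, :rank], s[:rank], Vt[:rank, :]

# Example: a matrix of exact rank 5 is recovered to near machine precision.
gen = np.random.default_rng(0)
A = gen.standard_normal((500, 5)) @ gen.standard_normal((5, 400))
U, s, Vt = randomized_svd(A, rank=5, rng=0)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The key cost savings come from the fact that the expensive operations touch `A` only through matrix-matrix products, which is also what allows these ideas to extend to operators that are available only through their action on vectors.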
