Session: 12-05-02: Data-Enabled Predictive Modeling, Scientific Machine Learning, and Uncertainty Quantification in Computational Mechanics
Paper Number: 99998
99998 - Recurrent Localization Networks Applied to the Lippmann-Schwinger Equation
A vast number of problems in computational mechanics require the solution of a parametric PDE in a local sense. This includes scenarios such as predicting internal strain or strain-rate fields, damage evolution, plastic yielding, fatigue properties, and time-to-failure. In these situations, classical methods based on solving PDEs numerically are trusted to provide robust and interpretable results. However, these problems usually need to be solved a great number of times as part of a larger workflow such as topology optimization or inverse design. Standard approaches such as finite element analysis can be prohibitively costly: for some problems, solving a single instance can require hours, days, or more. More specialized models such as spectral methods, while somewhat faster, still pose a major performance bottleneck.
As an alternative, data-driven models are frequently applied to quickly construct approximate solutions to a wide range of physics problems. Rather than solve the governing equations of a problem directly, these models learn approximate mappings using curated datasets of input/output pairs. These methods are generally quite fast and flexible; their use has revolutionized fields across the natural and social sciences. However, they provide poor control over approximation error and no theoretical guarantees on convergence or error bounds. This becomes a major problem when extreme values (e.g., damage localization, plastic yielding) are the quantities of greatest interest. To address these issues, models such as Physics-Informed Neural Networks have been proposed, which use knowledge of the governing equation to restrict or stabilize the learning process. However, these models are generally trained on one instance of a governing equation, and thus incur computational costs similar to those of classical solution methods when sweeping over the parameters of a given PDE.
Inspired by classical spectral iterative solvers, also known as Fast Fourier Transform (FFT) methods, we present a hybrid learning model for the elastic localization problem. Rather than solve the problem in a single pass, our approach performs a series of update (or "proximal") steps that gradually refine a candidate solution field. These steps approximate successive applications of the Green's function in FFT methods. While still far from perfect, this work presents a step toward a unified physics-based, data-driven model for computational mechanics. We examine this model as a learning-based loop-unrolling of the classical iterative method, providing analysis from both physical and computational perspectives. We discuss the advantages and drawbacks of an iterative learning model, and consider means of embedding parametric terms of the governing equation directly into the network's structure, which could further stabilize the model's performance in future work.
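To make the loop-unrolling idea concrete, the sketch below shows one plausible way such a recurrent localization network could be organized: a small convolutional update block, shared across a fixed number of unrolled steps, repeatedly refines a candidate strain field starting from the applied mean strain, echoing the successive Green's-function applications of an FFT solver. The PyTorch framing, the names (RecurrentLocalizationNet, UpdateBlock), and all hyperparameters are illustrative assumptions for this abstract, not the authors' actual architecture.

# Hypothetical sketch of a loop-unrolled ("recurrent") localization model.
# Each unrolled step plays the role of one Green's-function application in a
# classical Lippmann-Schwinger / FFT fixed-point iteration: it takes the
# current candidate strain field plus the microstructure and proposes a
# correction. Names and hyperparameters are illustrative only.
import torch
import torch.nn as nn


class UpdateBlock(nn.Module):
    """One proximal/update step: maps (microstructure, current strain) to a correction."""

    def __init__(self, n_channels: int = 2, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_channels, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(hidden, 1, kernel_size=3, padding=1),
        )

    def forward(self, micro, strain):
        # Stack microstructure and candidate strain along the channel axis.
        x = torch.cat([micro, strain], dim=1)
        return self.net(x)


class RecurrentLocalizationNet(nn.Module):
    """Unrolled iterative refinement of a candidate strain field."""

    def __init__(self, n_steps: int = 5):
        super().__init__()
        self.n_steps = n_steps
        self.update = UpdateBlock()  # weights shared across steps, like a recurrence

    def forward(self, micro, mean_strain: float = 1.0):
        # Initialize with the applied (average) strain, as FFT solvers typically do.
        strain = torch.full_like(micro, mean_strain)
        for _ in range(self.n_steps):
            strain = strain + self.update(micro, strain)  # proximal-style correction
        return strain


if __name__ == "__main__":
    micro = torch.rand(4, 1, 31, 31, 31)  # batch of voxelized two-phase microstructures
    model = RecurrentLocalizationNet(n_steps=5)
    print(model(micro).shape)  # torch.Size([4, 1, 31, 31, 31])

In this reading, training the shared update block over many microstructure/strain pairs stands in for learning the Green's-operator action, and embedding parametric terms of the governing equation (e.g., phase contrast) would amount to conditioning the update block on those parameters.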
Presenting Author: Conlain Kelly, Georgia Institute of Technology
Presenting Author Biography: Conlain is a PhD student at Georgia Institute of Technology studying Computational Science and Engineering. His interests lie in the intersections between statistical mechanics, numerical methods, and machine learning.
Authors:
Conlain Kelly, Georgia Institute of Technology
Surya Kalidindi, Georgia Institute of Technology
Recurrent Localization Networks Applied to the Lippmann-Schwinger Equation
Paper Type
Technical Presentation