Accelerated Training of Physics Informed Neural Networks (PINNs) using Meshless Discretizations

Ramansh Sharma, Varun Shankar
May 2022
We present a new technique for the accelerated training of physics-informed neural networks (PINNs): discretely-trained PINNs (DT-PINNs). The repeated computation of partial derivative terms in the PINN loss functions via automatic differentiation during training is known to be computationally expensive, especially for higher-order derivatives. DT-PINNs are trained by replacing these exact spatial derivatives with high-order accurate numerical discretizations computed using meshless radial basis function-finite differences (RBF-FD) and applied via sparse matrix-vector multiplication. The use of RBF-FD allows DT-PINNs to be trained even on point cloud samples placed on irregular domain geometries. Additionally, though traditional PINNs (vanilla-PINNs) are typically stored and trained in 32-bit floating point (fp32) on the GPU, we show that for DT-PINNs, using fp64 on the GPU leads to significantly faster training times than fp32 vanilla-PINNs with comparable accuracy.

We demonstrate the efficiency and accuracy of DT-PINNs via a series of experiments. First, we explore the effect of network depth on both numerical and automatic differentiation of a neural network with random weights and show that RBF-FD approximations of third-order accuracy and above are more efficient while being sufficiently accurate. We then compare DT-PINNs to vanilla-PINNs on both linear and nonlinear Poisson equations and show that DT-PINNs achieve similar losses with 2-4x faster training times on a consumer GPU. Finally, we demonstrate that similar results can be obtained for the PINN solution to the heat equation (a space-time problem) by discretizing the spatial derivatives using RBF-FD and using automatic differentiation for the temporal derivative. Our results show that fp64 DT-PINNs offer a superior cost-accuracy profile to fp32 vanilla-PINNs.
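The core mechanism described above, replacing autodiff derivatives in the loss with a precomputed sparse differentiation matrix applied by a single sparse matrix-vector product, can be illustrated with a minimal sketch. This is not the paper's implementation: for simplicity it uses standard second-order finite-difference weights on a 1D grid as a stand-in for the meshless RBF-FD weights, and a known function `u = sin(x)` as a stand-in for the network outputs at the collocation points.

```python
import numpy as np
import scipy.sparse as sp

# Collocation points (a uniform 1D grid here; RBF-FD in the paper
# handles scattered point clouds on irregular domains).
n = 200
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]

# Precomputed sparse differentiation matrix L approximating d^2/dx^2.
# Here: the classic [1, -2, 1]/h^2 stencil; in DT-PINNs these weights
# would come from RBF-FD and be assembled once before training.
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
L = sp.diags([off, main, off], [-1, 0, 1]) / h**2

u = np.sin(x)       # stand-in for network outputs u_theta(x_i)
lap_u = L @ u       # one sparse matvec replaces repeated autodiff calls
exact = -np.sin(x)  # exact second derivative of sin(x)

# Agreement away from the boundary rows, where this stencil is incomplete
# (boundary points would get one-sided weights in practice).
err = np.max(np.abs(lap_u[1:-1] - exact[1:-1]))
print(err)
```

Because `L` depends only on the point locations, it is built once and reused at every training iteration, which is the source of the speedup the abstract reports over re-running automatic differentiation each step.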