Dynamic & norm-based weights to normalize imbalance in back-propagated gradients of physics-informed neural networks
Physics-Informed Neural Networks (PINNs) have emerged as a promising machine learning model for various physical problems. Despite their success in solving many types of partial differential equations (PDEs), some problems remain difficult to learn, implying that the baseline PINN is biased towards learning the governing PDE while relatively neglecting the given initial or boundary conditions. In this work, we propose Dynamically Normalized Physics-Informed Neural Networks (DN-PINNs), a method to train PINNs while evenly distributing multiple back-propagated gradient components. DN-PINNs determine the relative weights assigned to initial or boundary condition losses based on gradient norms, and the weights are updated dynamically during training. Through several numerical experiments, we demonstrate that DN-PINNs effectively avoid the imbalance among the gradient components and improve inference accuracy while keeping the additional computational cost within a reasonable range. Furthermore, we compare DN-PINNs with other PINN variants and empirically show that DN-PINNs are competitive with or outperform them. In addition, since DN-PINNs use exponential decay to update the relative weights, the obtained weights are biased toward their initial values. We study this initialization bias and show that a simple bias correction technique can alleviate the problem.
Main Authors: | Shota Deguchi, Mitsuteru Asai |
---|---|
Format: | Article |
Language: | English |
Published: | IOP Publishing, 2023-01-01 |
Series: | Journal of Physics Communications |
Subjects: | physics-informed neural networks; partial differential equations; multi-objective optimization |
Online Access: | https://doi.org/10.1088/2399-6528/ace416 |
author | Shota Deguchi; Mitsuteru Asai |
collection | DOAJ |
description | Physics-Informed Neural Networks (PINNs) have emerged as a promising machine learning model for various physical problems. Despite their success in solving many types of partial differential equations (PDEs), some problems remain difficult to learn, implying that the baseline PINN is biased towards learning the governing PDE while relatively neglecting the given initial or boundary conditions. In this work, we propose Dynamically Normalized Physics-Informed Neural Networks (DN-PINNs), a method to train PINNs while evenly distributing multiple back-propagated gradient components. DN-PINNs determine the relative weights assigned to initial or boundary condition losses based on gradient norms, and the weights are updated dynamically during training. Through several numerical experiments, we demonstrate that DN-PINNs effectively avoid the imbalance among the gradient components and improve inference accuracy while keeping the additional computational cost within a reasonable range. Furthermore, we compare DN-PINNs with other PINN variants and empirically show that DN-PINNs are competitive with or outperform them. In addition, since DN-PINNs use exponential decay to update the relative weights, the obtained weights are biased toward their initial values. We study this initialization bias and show that a simple bias correction technique can alleviate the problem. |
format | Article |
id | doaj.art-469d599bc67842f3a287f470531b5985 |
institution | Directory Open Access Journal |
issn | 2399-6528 |
language | English |
publishDate | 2023-01-01 |
publisher | IOP Publishing |
record_format | Article |
series | Journal of Physics Communications |
spelling | Shota Deguchi (https://orcid.org/0000-0002-9538-8663) and Mitsuteru Asai (https://orcid.org/0000-0002-1124-2895), Department of Civil Engineering, Kyushu University, 744 Motooka, Nishi-ku, Fukuoka 819-0395, Japan. "Dynamic & norm-based weights to normalize imbalance in back-propagated gradients of physics-informed neural networks." Journal of Physics Communications (IOP Publishing), vol. 7, no. 7, 075005, 2023. ISSN 2399-6528. https://doi.org/10.1088/2399-6528/ace416 |
title | Dynamic & norm-based weights to normalize imbalance in back-propagated gradients of physics-informed neural networks |
topic | physics-informed neural networks; partial differential equations; multi-objective optimization |
url | https://doi.org/10.1088/2399-6528/ace416 |
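The abstract describes the DN-PINN training scheme only at a high level: relative loss weights are set from the norms of the back-propagated gradients, updated by exponential decay, and corrected for initialization bias. The sketch below is a minimal, hypothetical PyTorch illustration of such a scheme; it is not the authors' implementation, and the function name `dn_pinn_weight`, the variables `ema`, `alpha`, and `step`, and the exact update rule are assumptions based solely on the abstract.

```python
import torch

def dn_pinn_weight(model, loss_pde, loss_bc, ema, alpha=0.9, step=1):
    """Update an exponential moving average of the boundary-loss weight
    from gradient norms and return a bias-corrected weight.
    (Illustrative only; not the authors' released code.)"""
    params = [p for p in model.parameters() if p.requires_grad]

    # Back-propagated gradients of each loss term w.r.t. the network parameters.
    g_pde = torch.autograd.grad(loss_pde, params, retain_graph=True, allow_unused=True)
    g_bc = torch.autograd.grad(loss_bc, params, retain_graph=True, allow_unused=True)

    # Global L2 norms of the two gradient vectors.
    norm_pde = torch.sqrt(sum((g ** 2).sum() for g in g_pde if g is not None))
    norm_bc = torch.sqrt(sum((g ** 2).sum() for g in g_bc if g is not None))

    # Target weight that makes the two gradient norms comparable.
    target = (norm_pde / (norm_bc + 1e-12)).detach()

    # Exponential decay (moving average); ema is assumed to start at 0.
    ema = alpha * ema + (1.0 - alpha) * target

    # Simple Adam-style bias correction to remove the dependence
    # on the zero initialization during early iterations.
    corrected = ema / (1.0 - alpha ** step)
    return ema, corrected
```

In a training loop, the returned weight would be used as, for example, `loss = loss_pde + corrected * loss_bc` before calling `loss.backward()`, with `ema` carried over between iterations; how often the weight is refreshed and how the PDE and condition losses are grouped are further assumptions not specified by the abstract.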