Data efficiency and extrapolation trends in neural network interatomic potentials
Recently, key architectural advances have been proposed for neural network interatomic potentials (NNIPs), such as incorporating message-passing networks, equivariance, or many-body expansion terms. Although modern NNIP models exhibit small differences in test accuracy, this metric is still considered the main target when developing new NNIP architectures. In this work, we show how architectural and optimization choices influence the generalization of NNIPs, revealing trends in molecular dynamics (MD) stability, data efficiency, and loss landscapes. Using the 3BPA dataset, we uncover trends in NNIP errors and robustness to noise, showing these metrics are insufficient to predict MD stability in the high-accuracy regime. With a large-scale study on NequIP, MACE, and their optimizers, we show that our metric of loss entropy predicts out-of-distribution error and data efficiency despite being computed only on the training set. This work provides a deep learning justification for probing extrapolation and can inform the development of next-generation NNIPs.
Main Authors: | Joshua A Vita, Daniel Schwalbe-Koda |
---|---|
Format: | Article |
Language: | English |
Published: | IOP Publishing, 2023-01-01 |
Series: | Machine Learning: Science and Technology |
Subjects: | neural network potentials; extrapolation; loss landscapes; graph neural networks; machine learning potentials; atomistic simulations |
Online Access: | https://doi.org/10.1088/2632-2153/acf115 |
ISSN: | 2632-2153 |
Citation: | Machine Learning: Science and Technology 4(3), 035031 (2023), doi:10.1088/2632-2153/acf115 |
Author: | Joshua A Vita (ORCID: 0000-0001-9191-055X), Lawrence Livermore National Laboratory, Livermore, CA 94550, United States of America; Department of Materials Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL 61801, United States of America |
Author: | Daniel Schwalbe-Koda (ORCID: 0000-0001-9176-0854), Lawrence Livermore National Laboratory, Livermore, CA 94550, United States of America |
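The abstract refers to a "loss entropy" metric computed only on the training set. The paper has its own formulation; as a rough illustration of the general idea, the sketch below estimates an entropy-style summary of the local loss landscape by evaluating the training loss on randomly perturbed copies of a trained model. This is an assumption-laden proxy, not the authors' method: the model, data, and the `sigma`, `n_samples`, and `temperature` values are placeholders.

```python
# Minimal sketch (PyTorch): an entropy-style summary of the local loss
# landscape around a trained model, evaluated on the training set only.
# Illustrative proxy only, NOT the loss-entropy definition used in the
# paper; model, data, and hyperparameters below are placeholders.
import copy
import torch


def loss_entropy_proxy(model, loss_fn, train_loader, sigma=0.01,
                       n_samples=20, temperature=1.0):
    """Entropy of Boltzmann weights over training losses of randomly
    perturbed copies of `model`; a flatter neighborhood gives a more
    uniform loss distribution and hence a higher entropy."""
    losses = []
    for _ in range(n_samples):
        perturbed = copy.deepcopy(model)
        with torch.no_grad():
            # Add isotropic Gaussian noise to every parameter.
            for p in perturbed.parameters():
                p.add_(sigma * torch.randn_like(p))
            # Average the training loss over the full loader.
            total, count = 0.0, 0
            for x, y in train_loader:
                total += loss_fn(perturbed(x), y).item() * len(x)
                count += len(x)
        losses.append(total / count)
    losses = torch.tensor(losses)
    weights = torch.softmax(-losses / temperature, dim=0)
    return -(weights * torch.log(weights + 1e-12)).sum().item()


# Toy usage on random data (placeholders only).
model = torch.nn.Linear(4, 1)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 4), torch.randn(64, 1))
loader = torch.utils.data.DataLoader(dataset, batch_size=16)
print(loss_entropy_proxy(model, torch.nn.MSELoss(), loader))
```

Because the sampling uses only training data, a metric of this kind can be computed without holding out an out-of-distribution test set, which is the property the abstract highlights.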