Exploring model complexity in machine learned potentials for simulated properties
Main Authors: |  |
Other Authors: |  |
Format: | Article |
Language: | English |
Published: | Springer International Publishing, 2023 |
Online Access: | https://hdl.handle.net/1721.1/152387 |
Summary: | Abstract
Machine learning (ML) enables the development of interatomic potentials with the accuracy of first principles methods while retaining the speed and parallel efficiency of empirical potentials. While ML potentials traditionally use atom-centered descriptors as inputs, different models such as linear regression and neural networks map descriptors to atomic energies and forces. This begs the question: what is the improvement in accuracy due to model complexity irrespective of descriptors? We curate three datasets to investigate this question in terms of ab initio energy and force errors: (1) solid and liquid silicon, (2) gallium nitride, and (3) the superionic conductor Li$$_{10}$$Ge(PS$$_{6}$$)$$_{2}$$ (LGPS). We further investigate how these errors affect simulated properties and verify if the improvement in fitting errors corresponds to measurable improvement in property prediction. By assessing different models, we observe correlations between fitting quantity (e.g. atomic force) error and simulated property error with respect to ab initio values. |
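As a rough, self-contained illustration of the descriptor-to-energy mapping described in the abstract, the sketch below fits synthetic atom-centered descriptors to total energies with two models of different complexity: a linear regression and a small neural network, with the structure energy taken as the sum of per-atom energies. Everything in it (the NumPy implementation, the synthetic data, the hidden-layer size, and the learning rate) is an assumption chosen for illustration and is not the setup used in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder problem size: 200 structures of 8 atoms, 10 descriptor components per atom.
n_struct, n_atoms, n_desc = 200, 8, 10

# Synthetic atom-centered descriptors and synthetic "ab initio" total energies
# generated from a hidden nonlinear per-atom energy function (illustrative only).
D = rng.normal(size=(n_struct, n_atoms, n_desc))
v = rng.normal(size=n_desc)
E_ref = np.tanh(D @ v).sum(axis=1)

# --- Model 1: linear regression ---------------------------------------------
# With E = sum_i (w . d_i + b), the summed descriptors are a sufficient input.
X = np.hstack([D.sum(axis=1), np.full((n_struct, 1), n_atoms)])
coef, *_ = np.linalg.lstsq(X, E_ref, rcond=None)
rmse_lin = np.sqrt(np.mean((X @ coef - E_ref) ** 2))

# --- Model 2: one-hidden-layer neural network --------------------------------
# Per-atom energy e_i = W2 . tanh(W1^T d_i + b1) + b2, summed to the total energy,
# trained with plain gradient descent (hyperparameters illustrative, not tuned).
n_hidden, lr = 16, 1e-2
W1 = rng.normal(scale=0.1, size=(n_desc, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=n_hidden)
b2 = 0.0

for step in range(3000):
    h = np.tanh(D @ W1 + b1)              # (struct, atom, hidden)
    e_atom = h @ W2 + b2                  # per-atom energies
    E_pred = e_atom.sum(axis=1)           # total energy = sum of atomic energies
    # Gradient of the mean-squared energy error with respect to each atomic energy.
    g_e = (E_pred - E_ref)[:, None] * np.ones((1, n_atoms)) / n_struct
    grad_W2 = np.einsum("sa,sah->h", g_e, h)
    grad_b2 = g_e.sum()
    delta = g_e[:, :, None] * W2 * (1.0 - h ** 2)   # backprop through tanh
    grad_W1 = np.einsum("sad,sah->dh", D, delta)
    grad_b1 = delta.sum(axis=(0, 1))
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

E_pred = (np.tanh(D @ W1 + b1) @ W2 + b2).sum(axis=1)
rmse_nn = np.sqrt(np.mean((E_pred - E_ref) ** 2))
print(f"energy RMSE  linear: {rmse_lin:.3f}   neural network: {rmse_nn:.3f}")
```

Writing the total energy as a sum of per-atom contributions is the standard construction that gives ML potentials their locality and parallel efficiency; in a real potential the descriptors would come from the atomic environments and forces would follow from gradients of the predicted energy with respect to atomic positions, neither of which this toy sketch attempts.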
Graphical abstract