Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules


Bibliographic Details
Main Authors: Jong Youl Choi, Pei Zhang, Kshitij Mehta, Andrew Blanchard, Massimiliano Lupo Pasini
Format: Article
Language: English
Published: BMC, 2022-10-01
Series: Journal of Cheminformatics
Subjects: Graph neural networks; Distributed data parallelism; Surrogate models; Atomic modeling; Molecular dynamics; HOMO-LUMO gap
Online Access: https://doi.org/10.1186/s13321-022-00652-1
author Jong Youl Choi
Pei Zhang
Kshitij Mehta
Andrew Blanchard
Massimiliano Lupo Pasini
collection DOAJ
description Abstract Graph Convolutional Neural Networks (GCNNs) are a popular class of deep learning (DL) models in materials science for predicting material properties from the graph representation of molecular structures. Training an accurate and comprehensive GCNN surrogate for molecular design requires large-scale graph datasets and is usually a time-consuming process. Recent advances in GPUs and distributed computing open a path to reducing the computational cost of GCNN training effectively. However, efficient utilization of high-performance computing (HPC) resources for training requires simultaneously optimizing large-scale data management and scalable stochastic batched optimization techniques. In this work, we focus on building GCNN models on HPC systems to predict the material properties of millions of molecules. We use HydraGNN, our in-house library for large-scale GCNN training, which leverages distributed data parallelism in PyTorch. We use ADIOS, a high-performance data management framework, for efficient storage and reading of large molecular graph data. We perform parallel training on two open-source large-scale graph datasets to build a GCNN predictor for an important quantum property known as the HOMO-LUMO gap. We measure the scalability, accuracy, and convergence of our approach on two DOE supercomputers: the Summit supercomputer at the Oak Ridge Leadership Computing Facility (OLCF) and the Perlmutter system at the National Energy Research Scientific Computing Center (NERSC). We present experimental results with HydraGNN showing (i) a reduction of data-loading time by up to 4.2x compared with a conventional method and (ii) linear scaling of training performance up to 1024 GPUs on both Summit and Perlmutter.
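The description above centers on graph convolutional networks that map a molecular graph to a scalar property such as the HOMO-LUMO gap. As a rough, framework-free illustration of the underlying idea (this is not HydraGNN's implementation; the tiny graph, features, and weights below are hypothetical), a single mean-aggregation graph-convolution layer followed by a mean readout can be sketched as:

```python
# Toy graph convolution: each node averages its neighbors' (and its own)
# features, applies a linear map and a ReLU; a mean "readout" over nodes
# then turns node features into one graph-level embedding, which a final
# regression head would map to the predicted property (e.g., the gap).

def gcn_layer(features, adjacency, weights):
    """One mean-aggregation graph-convolution layer (self-loop included)."""
    n = len(features)
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adjacency[i][j]] + [i]  # neighbors + self
        agg = [sum(features[j][k] for j in neigh) / len(neigh)
               for k in range(len(features[i]))]
        # linear map (one weight row per output channel) + ReLU
        out.append([max(0.0, sum(a * w for a, w in zip(agg, row)))
                    for row in weights])
    return out

def readout(features):
    """Mean over nodes -> graph-level feature vector."""
    n = len(features)
    return [sum(f[k] for f in features) / n for k in range(len(features[0]))]

# Tiny 3-node path graph (think of a 3-atom fragment), 2-d node features.
adj = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]
x = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]
w = [[1.0, 0.0],
     [0.0, 1.0]]

h = gcn_layer(x, adj, w)   # updated node features after message passing
g = readout(h)             # graph-level embedding for the regression head
print(g)
```

Stacking several such layers lets information propagate across bonds; real GCNN libraries (e.g., PyTorch Geometric, which HydraGNN builds on) implement the same aggregate-transform-readout pattern with learned weights and batched sparse operations.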
first_indexed 2024-04-11T07:27:32Z
format Article
id doaj.art-629e18f0b5ff4724a2df015181dbb21a
institution Directory Open Access Journal
issn 1758-2946
language English
last_indexed 2024-04-11T07:27:32Z
publishDate 2022-10-01
publisher BMC
record_format Article
series Journal of Cheminformatics
spelling Journal of Cheminformatics, ISSN 1758-2946, BMC, 2022-10-01, Vol. 14, Iss. 1, pp. 1-10, doi:10.1186/s13321-022-00652-1
Author affiliations:
Jong Youl Choi: Computer Science and Mathematics Division, Oak Ridge National Laboratory
Pei Zhang: Computational Sciences and Engineering Division, Oak Ridge National Laboratory
Kshitij Mehta: Computer Science and Mathematics Division, Oak Ridge National Laboratory
Andrew Blanchard: Computational Sciences and Engineering Division, Oak Ridge National Laboratory
Massimiliano Lupo Pasini: Computational Sciences and Engineering Division, Oak Ridge National Laboratory
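The scaling results reported above rest on synchronous distributed data parallelism: each GPU trains an identical model replica on its own shard of the molecular graphs, and gradients are averaged across workers before every update so all replicas stay in lockstep. A dependency-free sketch of that averaging step (workers are simulated in-process; the linear model, loss, and data are hypothetical stand-ins, and real training would use PyTorch's DistributedDataParallel over NCCL/MPI collectives):

```python
# Synchronous data parallelism in miniature: every "worker" holds a replica
# of the model (here a single weight w) and a shard of the data. Each step,
# workers compute local gradients, the gradients are averaged (the role of
# all-reduce), and every replica applies the same averaged update.

def local_grad(w, shard):
    """Gradient of mean squared error 0.5*(w*x - y)^2 over one data shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def train_data_parallel(shards, w0=0.0, lr=0.01, steps=200):
    w = w0
    for _ in range(steps):
        grads = [local_grad(w, s) for s in shards]   # per-worker backward pass
        avg = sum(grads) / len(grads)                # all-reduce (average)
        w -= lr * avg                                # identical update everywhere
    return w

# Data drawn from y = 2*x, strided across 4 simulated workers.
data = [(x, 2.0 * x) for x in range(1, 9)]
shards = [data[i::4] for i in range(4)]
w = train_data_parallel(shards)
print(round(w, 4))   # converges toward the true slope 2.0
```

Because every replica sees the same averaged gradient, the result matches single-worker training on the full dataset while the per-step work is split across workers; keeping that averaging (and the data loading feeding it) efficient at 1024 GPUs is exactly the systems problem the paper addresses with HydraGNN and ADIOS.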
title Scalable training of graph convolutional neural networks for fast and accurate predictions of HOMO-LUMO gap in molecules
topic Graph neural networks
Distributed data parallelism
Surrogate models
Atomic modeling
Molecular dynamics
HOMO-LUMO gap
url https://doi.org/10.1186/s13321-022-00652-1