Fast Partitioning for Distributed Graph Learning using Multi-level Label Propagation
Graph Neural Networks (GNNs) are a popular class of machine learning models that allow scientists to apply machine learning techniques to unstructured data. When graphs become too large, however, partitioning becomes necessary to enable distributed computation. Standard graph partitioning methods for GNNs include random partitioning and the state-of-the-art METIS. Whereas METIS produces high-quality partitions, its preprocessing overhead makes it impractical for extremely large graphs. Conversely, random partitioning is cheap to compute but yields poor partition quality, causing GNN training to be bottlenecked by communication. In my thesis, I seek to demonstrate that it is possible to reduce the data preprocessing overhead on small machines for large graph datasets used in ML while maintaining partition quality. In support of this goal, I design and implement a hierarchical label-propagation-based graph partitioning system called PLaTE (Propagating Labels to Train Efficiently), partially based on the paper “How to Partition a Billion Node Graph” [18]. PLaTE runs 5.6x faster than METIS on the Open Graph Benchmark’s papers100M dataset while consuming 4.9x less memory. PLaTE produces partitions that are as balanced as those of METIS, with comparable communication volumes under certain conditions. In real GNN training experiments, PLaTE achieves average epoch times comparable to METIS.
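For intuition about the core technique the abstract names, the snippet below is a minimal, single-level sketch of label-propagation partitioning under a balance cap. It is an illustrative assumption, not PLaTE's actual multi-level implementation; the function name, the adjacency-dict input, and the roughly-10% slack cap are all hypothetical choices made for this sketch.

```python
# Minimal, single-level sketch of label-propagation graph partitioning.
# NOTE: illustrative assumption only, not the PLaTE implementation from
# the thesis; PLaTE is hierarchical (multi-level) and engineered for
# billion-node graphs.
from collections import Counter

def label_propagation_partition(adj, k, max_iters=10):
    """Partition a graph given as {node: [neighbors]} into k parts by
    repeatedly moving each node to the part most common among its
    neighbors, subject to a crude balance cap."""
    nodes = list(adj)
    # Cheap initial assignment, in the spirit of random partitioning.
    part = {v: i % k for i, v in enumerate(nodes)}
    size = Counter(part.values())
    cap = int(1.1 * len(nodes) / k) + 1  # allow roughly 10% imbalance

    for _ in range(max_iters):
        moved = 0
        for v in nodes:
            if not adj[v]:
                continue  # isolated node: leave it where it is
            # Most frequent part among v's neighbors.
            best = Counter(part[u] for u in adj[v]).most_common(1)[0][0]
            if best != part[v] and size[best] < cap:
                size[part[v]] -= 1
                size[best] += 1
                part[v] = best
                moved += 1
        if moved == 0:  # converged: no node wants to switch parts
            break
    return part

# Two triangles joined by a single edge: label propagation should place
# each triangle in its own part, cutting only the bridge edge (2, 3).
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
print(label_propagation_partition(adj, k=2))
```

In a multi-level scheme like the one the thesis title describes, a kernel of this sort would typically run on a coarsened graph first, with labels then projected back and refined at finer levels.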
Main Author: | Alkhafaji, Yaseen |
---|---|
Other Authors: | Leiserson, Charles E.; Kaler, Tim |
Department: | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |
Degree: | M.Eng. |
Format: | Thesis |
Published: | Massachusetts Institute of Technology, 2024 |
Online Access: | https://hdl.handle.net/1721.1/153893 |
Rights: | In Copyright - Educational Use Permitted; copyright retained by author(s). https://rightsstatements.org/page/InC-EDU/1.0/ |