Summary: Graph Neural Networks (GNNs) are a popular class of machine learning models that allow scientists to apply machine learning techniques to unstructured data. However, when graphs become too large, they must be partitioned to enable distributed computation. Standard graph partitioning methods for GNNs include random partitioning and the state-of-the-art METIS. While METIS produces high-quality partitions, its preprocessing overhead makes it impractical for extremely large graphs. Conversely, random partitioning is cheap to compute but yields poor partition quality, causing GNN training to be bottlenecked by communication. In my thesis, I seek to prove that it is possible to reduce data preprocessing overhead on small machines for large graph datasets used in ML while maintaining partition quality. In support of this goal, I design and implement a hierarchical label-propagation-based graph partitioning system called PLaTE (Propagating Labels to Train Efficiently), based in part on the paper "How to Partition a Billion Node Graph" [18]. PLaTE runs 5.6x faster than METIS on the Open Graph Benchmark's papers100M dataset while consuming 4.9x less memory. PLaTE produces partitions that are as well balanced as those of METIS, with comparable communication volumes under certain conditions. In real GNN training experiments, PLaTE achieves average epoch times comparable to METIS.
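To give a sense of the underlying technique, the sketch below illustrates plain label-propagation partitioning on a toy adjacency list. It is a generic illustration under simplifying assumptions, not PLaTE's implementation: PLaTE is hierarchical and, like the approach in [18], also needs to enforce partition balance, both of which are omitted here.

```python
# Minimal sketch of label-propagation partitioning (illustrative only; not PLaTE).
# Assumes an undirected graph given as an adjacency list: node -> list of neighbors.
import random
from collections import Counter


def label_propagation_partition(adj, num_parts, num_iters=10, seed=0):
    """Assign each node to one of num_parts partitions by repeatedly
    adopting the most common partition label among its neighbors."""
    rng = random.Random(seed)
    # Start from a random assignment (balanced only in expectation).
    labels = {v: rng.randrange(num_parts) for v in adj}
    for _ in range(num_iters):
        changed = False
        for v in adj:
            if not adj[v]:
                continue
            # Taking the majority label of the neighborhood tends to reduce
            # the number of edges cut between partitions.
            counts = Counter(labels[u] for u in adj[v])
            best = counts.most_common(1)[0][0]
            if best != labels[v]:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels


if __name__ == "__main__":
    # Tiny example: two triangles joined by a single edge (2-3).
    adj = {
        0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
        3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
    }
    print(label_propagation_partition(adj, num_parts=2))
```

Without a balance constraint, this basic scheme can collapse most nodes into a single label; practical systems add size caps or penalties when choosing labels, which is what keeps partitions usable for distributed GNN training.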