GraphPipe: Improving the Performance and Scalability of DNN Training with Graph Pipeline Parallelism
Deep neural networks (DNNs) continue to grow rapidly in size, making it infeasible to train them on a single device. To address this challenge, current DNN training systems apply pipeline-parallel techniques. They split a DNN into multiple stages, construct a pipeline of them, and assign to each st...
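The abstract's stage-splitting idea can be illustrated with a minimal sketch. This is not code from the thesis; the function names (`split_into_stages`, `pipeline_steps`) and the even contiguous partitioning are illustrative assumptions, and the step count reflects a simple linear (GPipe-style) schedule rather than GraphPipe's graph pipeline parallelism.

```python
# Illustrative sketch of pipeline-parallel stage assignment.
# Assumption: layers are partitioned into contiguous, roughly equal stages.

def split_into_stages(layers, num_stages):
    """Partition a list of layers into contiguous pipeline stages."""
    per_stage, rem = divmod(len(layers), num_stages)
    stages, start = [], 0
    for s in range(num_stages):
        size = per_stage + (1 if s < rem else 0)  # spread the remainder
        stages.append(layers[start:start + size])
        start += size
    return stages

def pipeline_steps(num_stages, num_microbatches):
    """A linear pipeline of S stages, each advancing one micro-batch per
    step, finishes M micro-batches in S + M - 1 steps."""
    return num_stages + num_microbatches - 1

layers = [f"layer{i}" for i in range(8)]
print(split_into_stages(layers, 3))  # 3 contiguous partitions of 8 layers
print(pipeline_steps(3, 4))          # → 6
```

A sequential (non-pipelined) execution of the same work would take S × M = 12 stage-steps of latency for the last micro-batch, which is why overlapping micro-batches across stages improves throughput.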
Main Author: Kim, Sunghyun
Other Authors: Alizadeh, Mohammad
Format: Thesis
Published: Massachusetts Institute of Technology, 2024
Online Access: https://hdl.handle.net/1721.1/156292
Similar Items

- Parallel and scalable neural image segmentation for connectome graph extraction
  By: Nguyen, Quan, M. Eng. (Quan T.), Massachusetts Institute of Technology
  Published: (2016)
- Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
  By: Dhulipala, Laxman, et al.
  Published: (2021)
- Theoretically Efficient Parallel Graph Algorithms Can Be Fast and Scalable
  By: Dhulipala, Laxman, et al.
  Published: (2022)
- ScaleGPS: Scalable Graph Parallel Sampling via Data-centric Performance Engineering
  By: Cai, Miranda J.
  Published: (2024)
- Benchmarking Graph Transformers Toward Scalability for Large Graphs
  By: Lim, Katherine S.
  Published: (2024)