Summary: | Large language models (LLMs) are an effective way to solve many text-based machine learning tasks, but they require enormous amounts of computation to train and evaluate. Mixture-of-experts (MoE) models have emerged as a way to reduce the computation required for LLMs without compromising accuracy. These large models must be distributed across several devices, which requires substantial communication between devices throughout training. Expert parallelism is a promising approach to distributing the model across devices and communicating the necessary information during training, especially for small batch sizes or models with large embedding sizes. Unfortunately, expert parallelism creates an imbalanced workload across devices, causes errors with existing memory-conservation strategies, and overlaps communication with computation poorly. Some existing works address the imbalance by dropping tokens routed to an expert beyond a fixed capacity, but doing so may reduce accuracy.
In this thesis I introduce ModuleFormer-PRM, an expert-parallel training system that addresses these issues without dropping tokens. I explain a subtle error that occurs when trying to save memory and a strategy to prevent it. I analyze the distribution of workload among experts and present two approaches to better balance the workload across devices, leading to more stable memory use and faster runtimes. I evaluate ModuleFormer-PRM using pretrained MoE models and show that my optimizations improve expert-parallel throughput by 2.1×.