Sparse Expansion and Neuronal Disentanglement

We show how to improve the inference efficiency of an LLM by expanding it into a mixture of sparse experts, where each expert is a copy of the same weights, one-shot pruned for a specific cluster of input values. We call this approach Sparse Expansion. We show that for models like Llama 2 7B, as...
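The abstract only sketches the idea at a high level, so the following is a minimal, hypothetical illustration of the concept rather than the thesis's implementation: calibration inputs to a linear layer are clustered, a copy of the shared weights is one-shot pruned per cluster, and each input is routed to its cluster's expert at inference. The class name `SparseExpandedLinear`, the use of k-means for clustering, and the Wanda-style input-aware pruning criterion are all assumptions standing in for whatever the thesis actually uses.

```python
# Illustrative sketch of the Sparse Expansion idea (not the thesis code):
# cluster a layer's inputs, keep one pruned copy ("expert") of the same
# weights per cluster, and route each input to its cluster's expert.
import numpy as np
from sklearn.cluster import KMeans


def one_shot_prune(weight: np.ndarray, inputs: np.ndarray, sparsity: float) -> np.ndarray:
    """One-shot, input-aware pruning: score each weight by |w| times the norm
    of the input feature it multiplies, then zero the lowest-scoring fraction.
    (A simple Wanda-style stand-in for the thesis's one-shot pruner.)"""
    scores = np.abs(weight) * np.linalg.norm(inputs, axis=0)  # (out_dim, in_dim)
    k = int(weight.size * sparsity)
    if k == 0:
        return weight.copy()
    threshold = np.partition(scores.ravel(), k - 1)[k - 1]
    return np.where(scores <= threshold, 0.0, weight)


class SparseExpandedLinear:
    """A dense linear layer expanded into per-cluster sparse experts."""

    def __init__(self, weight: np.ndarray, n_experts: int = 4, sparsity: float = 0.5):
        self.weight = weight          # shared dense weights, shape (out_dim, in_dim)
        self.n_experts = n_experts
        self.sparsity = sparsity
        self.router = None
        self.experts = []             # one pruned copy of `weight` per cluster

    def fit(self, calibration_inputs: np.ndarray) -> None:
        # Cluster calibration inputs; each cluster gets its own expert,
        # pruned one-shot against that cluster's inputs.
        self.router = KMeans(n_clusters=self.n_experts, n_init=10).fit(calibration_inputs)
        labels = self.router.labels_
        self.experts = [
            one_shot_prune(self.weight, calibration_inputs[labels == c], self.sparsity)
            for c in range(self.n_experts)
        ]

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Route each input row to the expert of its nearest cluster.
        ids = self.router.predict(x)
        out = np.empty((x.shape[0], self.weight.shape[0]))
        for i, cid in enumerate(ids):
            out[i] = self.experts[cid] @ x[i]
        return out
```

Because every expert shares the original weights and differs only in which entries are zeroed, the expansion adds no new learned parameters; the sparsity of each expert is what yields the inference-efficiency gain described in the abstract.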


Bibliographic Details
Main Author: Kong, Linghao
Other Authors: Shavit, Nir N.
Format: Thesis
Published: Massachusetts Institute of Technology, 2024
Online Access: https://hdl.handle.net/1721.1/156287