Sparse Expansion and Neuronal Disentanglement
We show how to improve the inference efficiency of an LLM by expanding it into a mixture of sparse experts, where each expert is a copy of the original weights, one-shot pruned for a specific cluster of input values. We call this approach Sparse Expansion. We show that for models like Llama 2 7B, as...
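For illustration, the following is a minimal NumPy sketch of the pipeline the abstract describes: cluster calibration inputs, one-shot prune a copy of a dense weight matrix per cluster, and route each input to its cluster's expert. The clustering method (plain k-means), the activation-aware magnitude score used as a stand-in for the thesis's one-shot pruner, and all names and sizes below are illustrative assumptions, not the thesis's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(x, k, iters=20):
    """Plain k-means over calibration inputs; the clusters define the experts."""
    centers = x[rng.choice(len(x), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return centers, labels

def prune_for_cluster(W, X, sparsity):
    """One-shot prune a copy of W for one cluster: score each weight by |w|
    times the norm of its input feature over the cluster's calibration inputs,
    then zero the lowest-scoring fraction. This activation-aware score is an
    assumed stand-in for the actual pruner used in the thesis."""
    feat_norm = np.linalg.norm(X, axis=0)   # (d_in,) per-feature input norms
    score = np.abs(W) * feat_norm[:, None]  # (d_in, d_out) importance scores
    thresh = np.quantile(score, sparsity)
    return np.where(score >= thresh, W, 0.0)

# Hypothetical sizes: one dense layer expanded into k sparse experts.
d_in, d_out, k, sparsity = 64, 32, 4, 0.9
W = rng.standard_normal((d_in, d_out))

# Cluster the calibration inputs, then prune one copy of W per cluster so
# each expert specializes to its cluster's activation statistics.
calib = rng.standard_normal((512, d_in))
centers, labels = kmeans(calib, k)
experts = [prune_for_cluster(W, calib[labels == j], sparsity) for j in range(k)]

def sparse_expansion_forward(x):
    """Route each input to the expert of its nearest cluster center."""
    which = np.argmin(((x[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    return np.stack([x[i] @ experts[which[i]] for i in range(len(x))])

y = sparse_expansion_forward(rng.standard_normal((8, d_in)))
print(y.shape)  # (8, 32); each row was computed with one 90%-sparse expert
```

Note the trade-off this sketch makes concrete: every expert shares the same dense parent weights, so the expansion costs no extra training, but each copy is pruned against different input statistics, so the experts diverge and each input sees a matrix specialized to its cluster.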
| Main Author: | Kong, Linghao |
|---|---|
| Other Authors: | Shavit, Nir N. |
| Format: | Thesis |
| Published: | Massachusetts Institute of Technology, 2024 |
| Online Access: | https://hdl.handle.net/1721.1/156287 |
Similar Items

- Disentangling the relationship between Lewy bodies and nigral neuronal loss in Parkinson's disease
  by: O'Sullivan, S, et al.
  Published: (2011)
- Disentangling Jet Modification
  by: Brewer, Jasmine, et al.
  Published: (2022)
- On the fairness of disentangled representations
  by: Locatello, F, et al.
  Published: (2019)
- Disentangled representations in neural models
  by: Whitney, William, M. Eng (William F.), Massachusetts Institute of Technology
  Published: (2017)
- Disentangling heavy flavor at colliders
  by: Rodd, Nicholas L., et al.
  Published: (2017)