HyperBlock floating point: generalised quantization scheme for gradient and inference computation
Prior quantization methods focus on producing networks for fast and lightweight inference. However, the cost of unquantised training is overlooked, despite requiring significantly more time and energy than inference. We present a method for quantizing convolutional neural networks for efficient training…
Main Authors: , , ,
Format: Conference item
Language: English
Published: IEEE, 2023
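The abstract describes a block floating-point style quantization scheme aimed at cheap training as well as inference. As a rough illustration only (not the paper's HyperBlock method, whose details are not given here), the sketch below quantizes a tensor in fixed-size blocks, with one shared scale per block derived from that block's largest magnitude; `block_size` and `mantissa_bits` are illustrative parameters, not values from the paper.

```python
import numpy as np

def block_quantize(x, block_size=16, mantissa_bits=4):
    """Quantize-dequantize a 1-D array using one shared scale per block.

    Each block of `block_size` values is scaled so its largest magnitude
    maps to the top of a signed `mantissa_bits`-bit integer range, rounded,
    then rescaled back. This mimics the error profile of block
    floating-point formats (hypothetical sketch, not the paper's scheme).
    """
    x = np.asarray(x, dtype=np.float64)
    pad = (-x.size) % block_size          # pad so the length divides evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)

    qmax = 2 ** (mantissa_bits - 1) - 1   # e.g. 7 for 4-bit signed values
    max_abs = np.max(np.abs(blocks), axis=1, keepdims=True)
    scale = np.where(max_abs > 0, max_abs / qmax, 1.0)  # guard all-zero blocks

    q = np.round(blocks / scale)          # integer codes per block
    return (q * scale).reshape(-1)[: x.size]
```

Sharing one scale per block keeps the per-value storage to a few mantissa bits while bounding the quantization error by half a scale step within each block.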