A Memory-Efficient Learning Framework for Symbol Level Precoding With Quantized NN Weights


Bibliographic Details
Main Authors: Abdullahi Mohammad, Christos Masouros, Yiannis Andreopoulos
Format: Article
Language: English
Published: IEEE 2023-01-01
Series: IEEE Open Journal of the Communications Society
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10153979/
Description
Summary: This paper proposes a memory-efficient deep neural network (DNN) framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain its corresponding quantized version, called SLP-SQDNet. The proposed scheme offers a scalable performance-versus-memory trade-off by quantizing a scalable percentage of the DNN weights, and we explore binary and ternary quantizations. Our results show that while SLP-DNet provides near-optimal performance, its quantized versions through SQ yield $\sim 3.46\times$ and $\sim 2.64\times$ model compression for binary-based and ternary-based SLP-SQDNets, respectively. We also find that our proposals offer $\sim 20\times$ and $\sim 10\times$ computational complexity reductions compared to the SLP optimization-based approach and SLP-DNet, respectively.
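The abstract's key idea, quantizing only a scalable percentage of the DNN weights to a binary codebook, can be sketched as below. This is an illustrative NumPy sketch, not the authors' implementation: the per-layer scaling factor, the error-weighted selection probabilities, and the `ratio` parameter are all assumptions chosen to mirror a common stochastic-quantization formulation.

```python
import numpy as np

def stochastic_binary_quantize(weights, ratio=0.5, rng=None):
    """Quantize a fraction `ratio` of the weights to {-alpha, +alpha}.

    Hedged sketch of stochastic quantization (SQ): weights whose binary
    approximation error is small are selected for quantization with
    higher probability. The exact probability rule used in the paper
    may differ; this is an assumed, illustrative formulation.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(weights, dtype=float)
    alpha = np.mean(np.abs(w))             # assumed per-layer scaling factor
    q = alpha * np.sign(w)                 # binary codebook {-alpha, +alpha}
    err = np.abs(w - q).ravel()            # quantization error per weight
    p = (err.max() - err) + 1e-12          # favor low-error weights
    p = p / p.sum()                        # normalize to a distribution
    n_quant = int(ratio * w.size)          # scalable percentage of weights
    idx = rng.choice(w.size, size=n_quant, replace=False, p=p)
    out = w.copy().ravel()
    out[idx] = q.ravel()[idx]              # remaining weights stay full precision
    return out.reshape(w.shape)

# Usage: ratio=1.0 quantizes every weight (fully binary network),
# smaller ratios trade memory savings against performance.
w = np.array([0.3, -0.7, 0.1, 0.9])
out = stochastic_binary_quantize(w, ratio=1.0, rng=np.random.default_rng(0))
```

A ternary variant would use the codebook {-alpha, 0, +alpha} instead, which the abstract reports as giving a smaller ($\sim 2.64\times$ vs $\sim 3.46\times$) compression factor.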
ISSN:2644-125X