A Memory-Efficient Learning Framework for Symbol Level Precoding With Quantized NN Weights
This paper proposes a memory-efficient deep neural network (DNN)-based symbol level precoding (SLP) framework. We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain...
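The stochastic quantization (SQ) step mentioned in the abstract can be illustrated generically. The sketch below is a minimal NumPy example, not the paper's exact scheme: the function name `stochastic_quantize`, the uniform symmetric grid, and the `num_bits` parameter are assumptions for illustration. The idea is to round each weight to one of its two nearest quantization levels with probability proportional to proximity, so the quantized weight is an unbiased estimate of the full-precision one.

```python
import numpy as np

def stochastic_quantize(w, num_bits=2, rng=None):
    """Stochastically round weights to a uniform grid of 2**num_bits levels.

    NOTE: generic illustration of stochastic quantization, assumed here;
    not necessarily the scheme used in the paper. Each weight is snapped
    to one of its two nearest levels with probability proportional to its
    distance from the other level, so E[quantized w] = w (unbiased).
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w, dtype=float)
    w_max = np.max(np.abs(w))
    if w_max == 0:                          # all-zero weights quantize to zero
        return np.zeros_like(w)
    intervals = 2 ** num_bits - 1
    scale = (2 * w_max) / intervals         # step size of the uniform grid
    x = (w + w_max) / scale                 # continuous position on the grid
    lower = np.floor(x)
    prob_up = x - lower                     # closer to upper level -> round up more often
    x_q = lower + (rng.random(w.shape) < prob_up)
    return x_q * scale - w_max              # map grid index back to weight range
```

For example, quantizing `stochastic_quantize([0.3, -0.7, 0.05], num_bits=2)` maps each weight onto a 4-level grid over [-0.7, 0.7]; averaging many draws recovers the original weights, which is the property that makes stochastic rounding attractive for training with low-precision weights.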
| Main Authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Open Journal of the Communications Society |
| Subjects: | |
| Online Access: | https://ieeexplore.ieee.org/document/10153979/ |