A Memory-Efficient Learning Framework for Symbol Level Precoding With Quantized NN Weights
This paper proposes a memory-efficient deep neural network (DNN)-based framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL)-based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtai...
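The stochastic quantization named in the abstract can be illustrated with a minimal sketch (an assumption for illustration, not the paper's exact scheme): each weight is rounded to one of its two neighboring grid points with probability proportional to its distance to each, which makes the quantizer unbiased in expectation. The function and parameter names below are hypothetical.

```python
import numpy as np

def stochastic_quantize(w, num_bits=4):
    """Stochastically round weights onto a uniform grid with 2**num_bits - 1 steps.

    A weight x between grid points g_lo and g_hi is mapped to g_hi with
    probability (x - g_lo) / (g_hi - g_lo), and to g_lo otherwise, so
    E[q(x)] = x within the clipping range (unbiased rounding).
    """
    w = np.asarray(w, dtype=np.float64)
    w_min, w_max = w.min(), w.max()
    levels = 2 ** num_bits - 1
    step = (w_max - w_min) / levels
    if step == 0.0:  # all weights identical: nothing to quantize
        return w.copy()
    # Fractional position of each weight on the quantization grid.
    x = (w - w_min) / step
    lower = np.floor(x)
    # Probability of rounding up equals the fractional part.
    prob_up = x - lower
    rounded = lower + (np.random.rand(*w.shape) < prob_up)
    return w_min + rounded * step
```

With `num_bits=1` the grid collapses to the two extremes, and a weight halfway between them is rounded up roughly half the time, so averaging many quantized draws recovers the original value.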
| Main Authors: | Abdullahi Mohammad, Christos Masouros, Yiannis Andreopoulos |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | IEEE, 2023-01-01 |
| Series: | IEEE Open Journal of the Communications Society |
| Online Access: | https://ieeexplore.ieee.org/document/10153979/ |
Similar Items
- An Unsupervised Deep Unfolding Framework for Robust Symbol-Level Precoding
  by: Abdullahi Mohammad, et al.
  Published: (2023-01-01)
- Effective user selection algorithm for quantized precoding in massive MIMO
  by: Nayan Fang, et al.
  Published: (2015-02-01)
- Symbol Error Rate Minimization Based Constructive Interference Precoding for Multi-User Systems
  by: Ling Zhang, et al.
  Published: (2021-01-01)
- Learning-Assisted Eavesdropping and Symbol-Level Precoding Countermeasures for Downlink MU-MISO Systems
  by: Abderrahmane Mayouche, et al.
  Published: (2020-01-01)
- Multi-Antenna Data-Driven Eavesdropping Attacks and Symbol-Level Precoding Countermeasures
  by: Abderrahmane Mayouche, et al.
  Published: (2021-01-01)