A Memory-Efficient Learning Framework for Symbol Level Precoding With Quantized NN Weights
This paper proposes a memory-efficient deep neural network (DNN)-based framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain its corresponding quantized version, called SLP-SQDNet. The proposed scheme offers a scalable performance vs. memory trade-off by quantizing a scalable percentage of the DNN weights, and we explore binary and ternary quantizations. Our results show that while SLP-DNet provides near-optimal performance, its quantized versions through SQ yield $\sim 3.46\times$ and $\sim 2.64\times$ model compression for binary-based and ternary-based SLP-SQDNets, respectively. We also find that our proposals offer $\sim 20\times$ and $\sim 10\times$ computational complexity reductions compared to the optimization-based SLP and SLP-DNet, respectively.

Main Authors: Abdullahi Mohammad, Christos Masouros, Yiannis Andreopoulos
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Open Journal of the Communications Society
Subjects: Symbol-level precoding; constructive interference; power minimization; deep neural networks (DNNs); stochastic quantization (SQ)
Online Access: https://ieeexplore.ieee.org/document/10153979/
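The abstract describes quantizing a scalable percentage of the DNN weights via stochastic quantization (SQ), with binary and ternary variants. Below is a minimal NumPy sketch of that idea, assuming a simple random selection of which weights to quantize and a mean-magnitude scaling factor; the function name, the selection rule, and the ternary threshold are illustrative assumptions, not the paper's exact SQ procedure.

```python
import numpy as np

def stochastic_quantize(w, ratio=0.5, mode="binary", rng=None):
    """Quantize a randomly chosen fraction `ratio` of the weights in `w`.

    Hypothetical illustration: each selected weight is mapped to a binary
    {-alpha, +alpha} or ternary {-alpha, 0, +alpha} level, where alpha is the
    mean absolute value of the selected weights; the remaining weights stay
    at full precision. The paper's exact selection/scaling rule may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    w = np.asarray(w, dtype=np.float64)
    flat = w.ravel().copy()

    # Randomly pick the subset of weights to quantize.
    n_q = int(round(ratio * flat.size))
    idx = rng.choice(flat.size, size=n_q, replace=False)
    sel = flat[idx]

    alpha = np.mean(np.abs(sel)) if n_q > 0 else 0.0   # per-layer scaling factor
    if mode == "binary":
        q = alpha * np.sign(sel)                        # levels {-alpha, +alpha}
        q[q == 0] = alpha                               # map exact zeros to +alpha
    elif mode == "ternary":
        thr = 0.5 * alpha                               # dead-zone threshold (assumed)
        q = np.where(np.abs(sel) > thr, alpha * np.sign(sel), 0.0)  # {-a, 0, +a}
    else:
        raise ValueError("mode must be 'binary' or 'ternary'")

    flat[idx] = q
    return flat.reshape(w.shape)

# Example: quantize 75% of a layer's weights to ternary levels.
weights = np.random.randn(4, 8)
q_weights = stochastic_quantize(weights, ratio=0.75, mode="ternary")
```

Varying `ratio` between 0 and 1 is what would give the scalable performance vs. memory trade-off the abstract refers to: `ratio=1` quantizes every weight, while `ratio=0` leaves the model at full precision.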
author | Abdullahi Mohammad; Christos Masouros; Yiannis Andreopoulos |
collection | DOAJ |
description | This paper proposes a memory-efficient deep neural network (DNN)-based framework for symbol-level precoding (SLP). We focus on a DNN with realistic finite-precision weights and adopt an unsupervised deep learning (DL) based SLP model (SLP-DNet). We apply a stochastic quantization (SQ) technique to obtain its corresponding quantized version, called SLP-SQDNet. The proposed scheme offers a scalable performance vs. memory trade-off by quantizing a scalable percentage of the DNN weights, and we explore binary and ternary quantizations. Our results show that while SLP-DNet provides near-optimal performance, its quantized versions through SQ yield $\sim 3.46\times$ and $\sim 2.64\times$ model compression for binary-based and ternary-based SLP-SQDNets, respectively. We also find that our proposals offer $\sim 20\times$ and $\sim 10\times$ computational complexity reductions compared to the optimization-based SLP and SLP-DNet, respectively. |
format | Article |
id | doaj.art-2691a689a4ae4c778c393ed28898cde6 |
institution | Directory Open Access Journal |
issn | 2644-125X |
language | English |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Open Journal of the Communications Society |
spelling | IEEE Open Journal of the Communications Society, vol. 4, pp. 1334-1349, 2023-01-01; DOI: 10.1109/OJCOMS.2023.3285790; IEEE article no. 10153979. Authors: Abdullahi Mohammad (https://orcid.org/0000-0001-9665-1649), Christos Masouros (https://orcid.org/0000-0002-8259-6615), Yiannis Andreopoulos (https://orcid.org/0000-0002-2714-4800), all with the Department of Electronic and Electrical Engineering, University College London, London, U.K. |
title | A Memory-Efficient Learning Framework for Symbol Level Precoding With Quantized NN Weights |
topic | Symbol-level precoding; constructive interference; power minimization; deep neural networks (DNNs); stochastic quantization (SQ) |
url | https://ieeexplore.ieee.org/document/10153979/ |