HyDNN: A Hybrid Deep Learning Framework Based Multiuser Uplink Channel Estimation and Signal Detection for NOMA-OFDM System


Bibliographic Details
Main Authors: Md Habibur Rahman, Mohammad Abrar Shakil Sejan, Md Abdul Aziz, Young-Hwan You, Hyoung-Kyu Song
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects:
Online Access:https://ieeexplore.ieee.org/document/10167605/
Description
Summary: Deep learning (DL) techniques can significantly improve successive interference cancellation (SIC) performance in non-orthogonal multiple access (NOMA) systems. This paper considers a NOMA-orthogonal frequency division multiplexing (OFDM) system and develops a hybrid deep neural network (HyDNN) model for multiuser uplink channel estimation (CE) and signal detection (SD). The proposed HyDNN combines a bi-directional long short-term memory (BiLSTM) network with a one-dimensional convolutional neural network (1D-CNN) to reduce errors in the system. The 1D-CNN extracts features from the received OFDM signal, which are fed into the time-series BiLSTM network to infer the transmitted signal at the receiver. The HyDNN model learns from simulated channel data during offline training, with the Adam optimizer used to minimize the training loss. After training, the model instantly recovers the transmitted symbols during online deployment with high prediction accuracy. Compared with the traditional CE and SD method for the NOMA scheme and other existing DL models, the proposed technique demonstrates clear performance gains. In addition, the simulation results show robustness across different training parameters such as minibatch sizes and learning rates.
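The data flow the summary describes (1D-CNN feature extraction over the received OFDM samples, followed by a bidirectional recurrent pass and a per-position symbol decision) can be sketched as follows. This is a minimal NumPy illustration with hypothetical sizes and randomly initialized weights, not the authors' trained HyDNN; the BiLSTM cells are simplified to vanilla tanh RNN cells, and the input/filter/hidden dimensions are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x (T, C_in), w (K, C_in, F), b (F)."""
    K = w.shape[0]
    T_out = x.shape[0] - K + 1
    out = np.stack(
        [np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
         for t in range(T_out)]
    )
    return np.maximum(out, 0.0)

def rnn_pass(x, Wx, Wh, b, reverse=False):
    """Simple tanh RNN over x (T, C); returns hidden states (T, H).
    A stand-in for one direction of the BiLSTM described in the abstract."""
    T, H = x.shape[0], Wh.shape[0]
    h = np.zeros(H)
    hs = np.zeros((T, H))
    order = range(T - 1, -1, -1) if reverse else range(T)
    for t in order:
        h = np.tanh(x[t] @ Wx + h @ Wh + b)
        hs[t] = h
    return hs

# Hypothetical sizes (not from the paper): T=64 received samples,
# 2 input channels (real/imag parts), 8 conv filters of width 3,
# hidden size 16, M=4 candidate symbols (e.g., QPSK).
T, C_in, K, F, H, M = 64, 2, 3, 8, 16, 4
x = rng.standard_normal((T, C_in))          # received OFDM signal (illustrative)

# 1D-CNN stage: extract local features from the input signal.
feat = conv1d(x, 0.1 * rng.standard_normal((K, C_in, F)), np.zeros(F))

# Bidirectional recurrent stage: forward and backward passes, concatenated.
fw = rnn_pass(feat, 0.1 * rng.standard_normal((F, H)),
              0.1 * rng.standard_normal((H, H)), np.zeros(H))
bw = rnn_pass(feat, 0.1 * rng.standard_normal((F, H)),
              0.1 * rng.standard_normal((H, H)), np.zeros(H), reverse=True)
bi = np.concatenate([fw, bw], axis=1)       # (T-K+1, 2H)

# Output stage: per-position symbol scores and hard decisions.
logits = bi @ (0.1 * rng.standard_normal((2 * H, M)))
symbols = logits.argmax(axis=1)

print(feat.shape, bi.shape, symbols.shape)
```

In the paper these stages are trained jointly (offline, on simulated channel data, with the Adam optimizer minimizing the detection loss); the sketch above only shows the untrained forward path and the tensor shapes at each stage.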
ISSN:2169-3536