Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning


Bibliographic Details
Main Authors: Prashanth Choppara, Bommareddy Lokesh
Format: Article
Language: English
Published: IEEE 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10876121/
_version_ 1826800335936552960
author Prashanth Choppara
Bommareddy Lokesh
author_facet Prashanth Choppara
Bommareddy Lokesh
author_sort Prashanth Choppara
collection DOAJ
description In healthcare, real-time decision making is crucial for ensuring timely and accurate patient care. However, traditional cloud computing infrastructures, for all their wide-ranging capabilities, suffer from inherent latency, which compromises the efficiency of time-sensitive medical applications. This paper explores the potential of fog computing to address this challenge, proposing a new framework that uses deep reinforcement learning (DRL) to advance task scheduling in crucial healthcare. The paper examines the limitations of cloud computing systems and proposes a fog computing architecture in their place to support low-latency healthcare applications. This architecture reduces transmission latency by placing processing nodes close to the source of data generation, namely IoT-enabled healthcare devices. The foundation of this approach is the DRL model, which dynamically optimizes the distribution of computational tasks across fog nodes to improve both data throughput and operational response times. The effectiveness of the proposed DRL-based fog computing model is validated through a series of simulations in the SimPy simulation environment. These simulations recreate diverse healthcare scenarios, ranging from continuous patient monitoring systems to crucial emergency response applications, providing a rich framework for testing the real-time processing capabilities of the model. The DRL algorithm is fine-tuned and extensively exercised in these scenarios to show how it schedules and prioritizes tasks according to their urgency and the prevailing resource demand. By dynamically learning from real-time system states and optimizing task allocation to minimize delays, the DRL model reduces the makespan by up to 30% compared to traditional scheduling approaches.
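The scheduling idea in the description above can be illustrated with a small, self-contained sketch. This is not the authors' code: it substitutes tabular Q-learning for their deep RL model, discretizes the state to a load ranking of fog nodes, and uses the growth in makespan as a negative reward. Every name and parameter here is hypothetical.

```python
import random

random.seed(42)

NUM_NODES = 3  # hypothetical fog-node count

def state_of(loads):
    # Discretized state: node indices ranked from least- to most-loaded
    return tuple(sorted(range(NUM_NODES), key=lambda i: loads[i]))

def train(tasks, episodes=200, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning stand-in for the paper's DRL scheduler."""
    Q = {}
    def q(s):
        return Q.setdefault(s, [0.0] * NUM_NODES)
    for _ in range(episodes):
        loads = [0.0] * NUM_NODES
        for size in tasks:
            s = state_of(loads)
            if random.random() < eps:                      # explore
                a = random.randrange(NUM_NODES)
            else:                                          # exploit
                a = max(range(NUM_NODES), key=lambda i: q(s)[i])
            before = max(loads)
            loads[a] += size
            reward = before - max(loads)                   # penalize makespan growth
            s2 = state_of(loads)
            q(s)[a] += alpha * (reward + gamma * max(q(s2)) - q(s)[a])
    return Q

def schedule(tasks, Q):
    # Greedy rollout with the learned Q-table; returns the makespan
    loads = [0.0] * NUM_NODES
    for size in tasks:
        s = state_of(loads)
        a = max(range(NUM_NODES), key=lambda i: Q.get(s, [0.0] * NUM_NODES)[i])
        loads[a] += size
    return max(loads)

tasks = [random.uniform(1.0, 5.0) for _ in range(30)]      # synthetic task sizes
Q = train(tasks)
learned_makespan = schedule(tasks, Q)
```

The learned policy tends toward least-loaded assignment because placing a task on the currently busiest node incurs a negative reward; the reported 30% makespan reduction in the paper comes from a far richer deep model and state space than this toy ranking.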
Comparative performance analysis indicated a 30% reduction in task completion times, a 40% reduction in operational latency, and a 25% improvement in fault tolerance relative to traditional scheduling approaches. The flexibility of the DRL model is further demonstrated through its application to diverse real-time data-processing contexts in industrial automation and smart traffic systems.
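The comparative analysis the description mentions pits a load-aware policy against a load-agnostic baseline on the same workload. A stdlib-only sketch of that kind of comparison (the actual experiments used SimPy; all arrival rates, service times, and policy names here are illustrative assumptions) might look like:

```python
import itertools
import random

random.seed(1)

def simulate(num_nodes, tasks, pick):
    """tasks: list of (arrival_time, service_time); pick(free_at) -> node index.
    Returns average completion latency (finish - arrival) over all tasks."""
    free_at = [0.0] * num_nodes           # when each fog node next becomes idle
    total_latency = 0.0
    for arrival, service in tasks:
        n = pick(free_at)
        start = max(arrival, free_at[n])  # wait if the chosen node is busy
        free_at[n] = start + service
        total_latency += free_at[n] - arrival
    return total_latency / len(tasks)

# Synthetic patient-monitoring workload: Poisson-like arrivals, variable service
tasks = []
t = 0.0
for _ in range(200):
    t += random.expovariate(1.0)
    tasks.append((t, random.uniform(0.5, 2.0)))

counter = itertools.count()
def round_robin(free_at):
    # Load-agnostic baseline: cycle through nodes regardless of backlog
    return next(counter) % len(free_at)

def least_loaded(free_at):
    # Load-aware policy: dispatch to the node that frees up earliest
    return min(range(len(free_at)), key=lambda i: free_at[i])

rr = simulate(3, tasks, round_robin)
ll = simulate(3, tasks, least_loaded)
```

On this workload the load-aware policy yields lower average latency than round-robin; the paper's DRL scheduler plays the role of `least_loaded` here, but learns its dispatch rule from system state rather than using a fixed heuristic.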
first_indexed 2025-03-17T00:49:21Z
format Article
id doaj.art-8dc8a940ae3f4e86b1e5dac37fdd8227
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2025-03-17T00:49:21Z
publishDate 2025-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-8dc8a940ae3f4e86b1e5dac37fdd8227
2025-02-20T00:01:11Z
eng
IEEE
IEEE Access
2169-3536
2025-01-01
Vol. 13, pp. 26542-26563
10.1109/ACCESS.2025.3539336
10876121
Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
Prashanth Choppara (https://orcid.org/0009-0001-7360-1224)
Bommareddy Lokesh (https://orcid.org/0000-0002-8753-6160)
School of Computer Science and Engineering, VIT-AP University, Amaravati, Andhra Pradesh, India
spellingShingle Prashanth Choppara
Bommareddy Lokesh
Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
IEEE Access
Fog nodes
task scheduling
DRL
health care
makespan
trust
title Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
title_full Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
title_fullStr Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
title_full_unstemmed Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
title_short Efficient Task Scheduling and Load Balancing in Fog Computing for Crucial Healthcare Through Deep Reinforcement Learning
title_sort efficient task scheduling and load balancing in fog computing for crucial healthcare through deep reinforcement learning
topic Fog nodes
task scheduling
DRL
health care
makespan
trust
url https://ieeexplore.ieee.org/document/10876121/
work_keys_str_mv AT prashanthchoppara efficienttaskschedulingandloadbalancinginfogcomputingforcrucialhealthcarethroughdeepreinforcementlearning
AT bommareddylokesh efficienttaskschedulingandloadbalancinginfogcomputingforcrucialhealthcarethroughdeepreinforcementlearning