Closed-Loop Control of Fluid Resuscitation Using Reinforcement Learning

Fluid resuscitation (fluid therapy) is used to maintain tissue perfusion and restore cardiac function in critical care. Automated fluid therapy can result in faster care, fewer dosing errors, and less cognitive burden on healthcare providers, ultimately improving patient outcomes. Despite a few attempts at automating this process, fluid management remains an open research area for which optimal, personalized strategies have yet to be developed. This work presents a novel, model-free, subject-specific dose adjustment tool for fluid resuscitation. The proposed approach is based on reinforcement learning (RL): a Q-learning algorithm automatically recommends subject-specific fluid infusion dosages in different hemorrhage scenarios without requiring knowledge of dose-response models. Comparison studies against two model-free fluid resuscitation controllers, i.e., fuzzy and proportional-integral-derivative (PID) controllers, within a verified simulated environment demonstrated the superior performance of the proposed approach in the closed-loop control of fluid resuscitation. Statistical analyses of performance measures indicated that the RL approach, with lower average resuscitation rates, achieves more desirable mean arterial pressure (MAP) responses than the fuzzy and PID controllers for all virtual subjects. Additionally, simulation results demonstrated greater robustness of the proposed approach to external disturbances in resuscitation scenarios than the other two methods. These results confirm the potential of RL in the closed-loop control of hemodynamic responses in fluid therapy.
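The abstract describes a tabular Q-learning agent that selects fluid infusion rates to regulate mean arterial pressure (MAP) without a dose-response model. The sketch below is an illustrative reconstruction only, not the paper's method: the toy first-order dose-response plant, its gain and hemorrhage constants, the discretized MAP-error state space, the five candidate infusion rates, and the reward (tracking a 70 mmHg target while penalizing fluid volume) are all assumptions made for demonstration. The paper trains against a verified simulated environment whose dynamics and reward design are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

TARGET = 70.0                                       # assumed target MAP (mmHg)
ACTIONS = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # candidate infusion rates (arbitrary units)
N_STATES = 21                                       # discretized MAP-error bins

def map_step(map_mmhg, rate, hemorrhage=0.3, gain=0.08, dt=1.0):
    """Toy first-order plant: infusion raises MAP, ongoing hemorrhage lowers it."""
    return map_mmhg + dt * (gain * rate - hemorrhage)

def state_of(map_mmhg):
    """Discretize the MAP tracking error (clipped to +/-20 mmHg) into 2 mmHg bins."""
    err = float(np.clip(map_mmhg - TARGET, -20.0, 20.0))
    return int((err + 20.0) // 2.0)                 # bin index 0..20

# Standard off-policy Q-learning with epsilon-greedy exploration.
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, eps = 0.1, 0.95, 0.1

for _ in range(2000):                               # training episodes
    m = 50.0 + rng.uniform(-5.0, 5.0)               # hypotensive initial MAP
    s = state_of(m)
    for _ in range(120):
        a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        m = map_step(m, ACTIONS[a])
        s2 = state_of(m)
        r = -abs(m - TARGET) - 0.01 * ACTIONS[a]    # track MAP; penalize fluid volume
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

# Closed-loop test: run the greedy learned policy from a hemorrhagic initial state.
m = 50.0
for _ in range(120):
    m = map_step(m, ACTIONS[int(np.argmax(Q[state_of(m)]))])
final_map = m
print(round(final_map, 1))
```

The volume-penalty term in the reward mirrors the abstract's finding that the RL controller reaches the MAP target with lower average resuscitation rates; its weight here is an arbitrary choice for the sketch.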

Bibliographic Details
Main Authors: Elham Estiri, Hossein Mirinejad (ORCID: 0000-0002-6505-2245)
Affiliation: College of Aeronautics and Engineering, Kent State University, Kent, OH, USA
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
ISSN: 2169-3536
Subjects: Automated fluid resuscitation; fluid management; mean arterial pressure (MAP); model-free reinforcement learning; Q-learning
Online Access: https://ieeexplore.ieee.org/document/10352163/
Citation: E. Estiri and H. Mirinejad, "Closed-Loop Control of Fluid Resuscitation Using Reinforcement Learning," IEEE Access, vol. 11, pp. 140569-140581, 2023, doi: 10.1109/ACCESS.2023.3341036.