Distributed reinforcement learning-based memory allocation for edge-PLCs in industrial IoT

Abstract: The exponential growth of devices in the industrial Internet of Things (IIoT) has a noticeable impact on the volume of data generated. Edge-cloud computing cooperation has been introduced into the IIoT to lessen the computational load on cloud servers and shorten data processing times. General programmable logic controllers (PLCs), which have long played important roles in industrial control systems, are beginning to gain the ability to process large amounts of industrial data and share the workload of cloud servers, transforming them into edge-PLCs. However, the continuous influx of multiple types of concurrent production data streams against the limited capacity of the built-in memory in PLCs poses a huge challenge. The ability to allocate memory resources in edge-PLCs sensibly, so as to ensure data utilization and real-time processing, has therefore become one of the core means of improving the efficiency of industrial processes. In this paper, to tackle the dynamic change in the data arrival rate over time at each edge-PLC, we propose to optimize memory allocation in a distributed manner using Q-learning. Simulation experiments verify that the method effectively reduces the data loss probability while improving system performance.
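The paper itself provides no code; the minimal sketch below only illustrates the general idea of Q-learning-driven memory allocation described in the abstract, with a single edge-PLC acting as an independent tabular Q-learning agent that re-partitions a fixed buffer budget among its incoming data streams. The state and action discretization, reward definition, parameter values, and all names below are assumptions made for this illustration, not the authors' formulation.

# Illustrative sketch only: a tabular Q-learning agent that re-partitions a fixed
# memory budget among concurrent data streams on one edge-PLC. State, action, and
# reward definitions are assumptions for this example, not taken from the paper.
import random
from collections import defaultdict

MEMORY_BLOCKS = 8          # total buffer blocks on the edge-PLC (assumed)
NUM_STREAMS = 2            # concurrent production data streams (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Actions: every way to split MEMORY_BLOCKS blocks between the two streams.
ACTIONS = [(a, MEMORY_BLOCKS - a) for a in range(MEMORY_BLOCKS + 1)]

Q = defaultdict(float)     # Q[(state, action_index)] -> estimated value

def observe_state(rates):
    # Discretize each stream's arrival rate into low/medium/high (0/1/2).
    return tuple(min(int(r // 4), 2) for r in rates)

def choose_action(state):
    # Epsilon-greedy selection over the allocation table.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])

def step(allocation, rates):
    # Toy environment: reward is the negative number of dropped packets.
    lost = sum(max(rate - blocks, 0) for rate, blocks in zip(rates, allocation))
    return -lost

def train(episodes=5000):
    for _ in range(episodes):
        rates = [random.randint(0, 11) for _ in range(NUM_STREAMS)]  # packets this cycle
        state = observe_state(rates)
        a = choose_action(state)
        reward = step(ACTIONS[a], rates)
        next_rates = [random.randint(0, 11) for _ in range(NUM_STREAMS)]
        next_state = observe_state(next_rates)
        best_next = max(Q[(next_state, i)] for i in range(len(ACTIONS)))
        # Standard Q-learning update rule.
        Q[(state, a)] += ALPHA * (reward + GAMMA * best_next - Q[(state, a)])

if __name__ == "__main__":
    train()
    print("Learned allocation for (high, low) load:",
          ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[((2, 0), i)])])

Because the reward penalizes dropped packets, the learned table tends to assign more buffer blocks to whichever stream currently has the higher arrival rate, which mirrors the goal of reducing data loss stated in the abstract.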

Bibliographic Details
Main Authors: Tingting Fu, Yanjun Peng, Peng Liu, Haksrun Lao, Shaohua Wan
Format: Article
Language: English
Published: SpringerOpen 2022-10-01
Series: Journal of Cloud Computing: Advances, Systems and Applications
Subjects: Industrial internet of things; Edge-PLC; Resource allocation; Q-learning
Online Access: https://doi.org/10.1186/s13677-022-00348-9
ISSN: 2192-113X

Author affiliations:
Tingting Fu: School of Computer Science and Technology, Hangzhou Dianzi University
Yanjun Peng: School of Computer Science and Technology, Hangzhou Dianzi University
Peng Liu: School of Computer Science and Technology, Hangzhou Dianzi University
Haksrun Lao: Center of Engineering and Design, Chhong Cheng Chinese School
Shaohua Wan: Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China