Deep reinforcement learning for dynamic scheduling of a flexible job shop

The ability to handle unpredictable dynamic events is becoming more important in the pursuit of agile and flexible production scheduling. At the same time, cyber-physical convergence in production systems creates massive amounts of industrial data that need to be mined and analysed in real time. To facilitate such real-time control, this research proposes a hierarchical and distributed architecture to solve the dynamic flexible job shop scheduling problem. The Double Deep Q-Network algorithm is used to train the scheduling agents to capture the relationship between production information and scheduling objectives, and to make real-time scheduling decisions for a flexible job shop with constant job arrivals. Specialised state and action representations are proposed to handle the variable specification of the problem in dynamic scheduling. Additionally, a surrogate reward-shaping technique is developed to improve learning efficiency and scheduling effectiveness. A simulation study is carried out to validate the performance of the proposed approach under different scenarios. Numerical results show that not only does the proposed approach deliver superior performance compared to existing scheduling strategies, its advantages also persist even if the manufacturing system configuration changes.
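
The abstract refers to the Double Deep Q-Network (Double DQN) algorithm. As a point of reference only, the sketch below illustrates the generic Double DQN target computation, in which the online network selects the greedy next action and the target network evaluates it. The network architecture, state dimension, and action set (candidate dispatching rules) are illustrative assumptions and do not reflect the paper's specialised state/action representations or its surrogate reward-shaping technique.

    # Hypothetical, minimal Double DQN target computation for a dispatching agent.
    # State encoding, network sizes, and action set are placeholder assumptions,
    # not the representations proposed in the paper.
    import torch
    import torch.nn as nn

    class QNet(nn.Module):
        """Maps a fixed-length shop-floor state vector to Q-values over dispatching rules."""
        def __init__(self, state_dim: int, n_actions: int, hidden: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, n_actions),
            )

        def forward(self, s: torch.Tensor) -> torch.Tensor:
            return self.net(s)

    def double_dqn_targets(online: QNet, target: QNet,
                           rewards, next_states, dones, gamma: float = 0.99):
        """Online net selects the next action; target net evaluates it (Double DQN)."""
        with torch.no_grad():
            next_actions = online(next_states).argmax(dim=1, keepdim=True)
            next_q = target(next_states).gather(1, next_actions).squeeze(1)
            return rewards + gamma * (1.0 - dones) * next_q

    # Toy usage: 8-dimensional state, 4 candidate dispatching rules, batch of 5 transitions.
    online, target = QNet(8, 4), QNet(8, 4)
    target.load_state_dict(online.state_dict())
    batch = {
        "rewards": torch.zeros(5),
        "next_states": torch.randn(5, 8),
        "dones": torch.zeros(5),
    }
    y = double_dqn_targets(online, target, **batch)  # TD targets, shape (5,)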

Bibliographic Details
Main Authors: Liu, Renke; Piplani, Rajesh; Toro, Carlos
Other Authors: School of Mechanical and Aerospace Engineering
Format: Journal Article
Language: English
Published: 2022
Subjects: Engineering::Industrial engineering; Dynamic Scheduling; Flexible Job Shop
Citation: Liu, R., Piplani, R. & Toro, C. (2022). Deep reinforcement learning for dynamic scheduling of a flexible job shop. International Journal of Production Research, 60(13), 4049-4069.
DOI: 10.1080/00207543.2022.2058432
ISSN: 0020-7543
Scopus ID: 2-s2.0-85129220142
Online Access: https://hdl.handle.net/10356/163903
Institution: Nanyang Technological University
Rights: © 2022 Informa UK Limited, trading as Taylor & Francis Group. All rights reserved.