Autonomous optimization of cutting conditions in end milling operation based on deep reinforcement learning (Offline training in simulation environment for feed rate optimization)

Bibliographic Details
Main Authors: Kazuki KANEKO, Toshihiro KOMATSU, Libo ZHOU, Teppei ONUKI, Hirotaka OJIMA, Jun SHIMIZU
Format: Article
Language: English
Published: The Japan Society of Mechanical Engineers, 2023-09-01
Series: Journal of Advanced Mechanical Design, Systems, and Manufacturing
Subjects: end milling; feed rate; optimization; deep q-network; simulation
Online Access: https://www.jstage.jst.go.jp/article/jamdsm/17/5/17_2023jamdsm0064/_pdf/-char/en
author Kazuki KANEKO
Toshihiro KOMATSU
Libo ZHOU
Teppei ONUKI
Hirotaka OJIMA
Jun SHIMIZU
collection DOAJ
description Full automation of manufacturing is strongly desired to improve productivity. Autonomous optimization of the cutting conditions in the end milling operation is one of the challenges in achieving this goal. This paper proposes a system for optimizing the cutting conditions based on the Deep Q-Network (DQN), a form of deep reinforcement learning. In the proposed system, the end mill acts as the agent and an end milling simulation provides the environment. Geometric information on the interference state between the tool and the workpiece in the simulation is treated as the state of the environment, and the acceleration of the feed rate is the action taken by the agent. The action is optimized by DQN to maximize the accumulated reward given by the environment, which evaluates how good the sequence of actions is. The cutting conditions can therefore be optimized according to the defined reward function. We performed three case studies to verify the proposed method, in which the cutting torque is controlled to a specified value. The objective was achieved regardless of differences in the end milling scenario. The obtained results strongly suggest that reinforcement learning is a promising solution for the autonomous optimization of cutting conditions.
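The description maps the end milling problem onto the standard DQN loop: the environment state is geometric tool-workpiece interference information from the simulation, the action is a feed-rate acceleration, and the reward scores how well the cutting torque tracks a specified value. A minimal sketch of that loop is given below; the MillingSimEnv class, its two-feature state, the three discrete accelerations, the toy torque model, and all numerical values are hypothetical placeholders rather than the authors' simulation, and refinements such as a separate target network are omitted.

```python
# Minimal DQN sketch for the setting described in the abstract: the agent
# observes engagement-state features, picks a feed-rate acceleration, and is
# rewarded for holding the cutting torque at a target value.
# MillingSimEnv, its state features, action set, torque model, and all numbers
# are hypothetical stand-ins, not the authors' simulation.
import math
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class MillingSimEnv:
    """Toy stand-in for an end milling simulation environment."""

    ACCELERATIONS = [-50.0, 0.0, 50.0]  # feed-rate changes per step [mm/min], hypothetical

    def __init__(self, target_torque=2.0, steps=200):
        self.target_torque, self.steps = target_torque, steps

    def reset(self):
        self.t, self.feed = 0, 100.0  # step counter, initial feed rate [mm/min]
        return self._state()

    def _state(self):
        # stand-in for geometric tool-workpiece interference information
        engagement = 0.5 + 0.4 * math.sin(self.t / 20.0)
        return torch.tensor([engagement, self.feed / 1000.0], dtype=torch.float32)

    def step(self, action):
        self.feed = max(10.0, self.feed + self.ACCELERATIONS[action])
        engagement = 0.5 + 0.4 * math.sin(self.t / 20.0)
        torque = 0.01 * self.feed * engagement        # crude torque model
        reward = -abs(torque - self.target_torque)    # keep torque at the set value
        self.t += 1
        return self._state(), reward, self.t >= self.steps


q_net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
replay, gamma, eps = deque(maxlen=10_000), 0.99, 0.1
env = MillingSimEnv()

for episode in range(50):
    state, done = env.reset(), False
    while not done:
        # epsilon-greedy choice among the three feed-rate accelerations
        if random.random() < eps:
            action = random.randrange(3)
        else:
            with torch.no_grad():
                action = int(q_net(state).argmax())
        next_state, reward, done = env.step(action)
        replay.append((state, action, reward, next_state, done))
        state = next_state

        if len(replay) >= 64:
            batch = random.sample(replay, 64)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
            s2 = torch.stack([b[3] for b in batch])
            d = torch.tensor([b[4] for b in batch], dtype=torch.float32)
            with torch.no_grad():
                # one-step DQN target: r + gamma * max_a' Q(s', a') on non-terminal steps
                target = r + gamma * (1 - d) * q_net(s2).max(dim=1).values
            q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```

In the paper's setting, the state would instead come from the geometric interference computed by the end milling simulation, and the reward from the defined torque-tracking function.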
format Article
id doaj.art-4f919a307c7b40f8b2156e98b4b53e0a
institution Directory Open Access Journal
issn 1881-3054
language English
publishDate 2023-09-01
publisher The Japan Society of Mechanical Engineers
record_format Article
series Journal of Advanced Mechanical Design, Systems, and Manufacturing
doi 10.1299/jamdsm.2023jamdsm0064
volume 17
issue 5
article_number JAMDSM0064
author_affiliation Graduate School of Science and Engineering, Ibaraki University (all six authors)
title Autonomous optimization of cutting conditions in end milling operation based on deep reinforcement learning (Offline training in simulation environment for feed rate optimization)
topic end milling
feed rate
optimization
deep q-network
simulation
url https://www.jstage.jst.go.jp/article/jamdsm/17/5/17_2023jamdsm0064/_pdf/-char/en