Multimodal deep learning for predicting the choice of cut parameters in the milling process


Bibliographic Details
Main Authors: Cheick Abdoul Kadir A Kounta, Bernard Kamsu-Foguem, Farid Noureddine, Fana Tangara
Format: Article
Language: English
Published: Elsevier 2022-11-01
Series: Intelligent Systems with Applications
Subjects:
Online Access: http://www.sciencedirect.com/science/article/pii/S2667305322000503
Description
Summary: In this paper, we use multimodal deep learning to predict the choice of optimal cutting parameters (cutting speed, depth of cut, and feed rate per tooth) and the appropriate cutting tool for reproducing an existing piece with the same surface state, considering the footprints left by the cutting tool. We use images of the aluminum plate's surface states showing the tool's footprints, the cutting parameters, and the average roughness (Ra) obtained with a roughness meter to train our model. We built a late multimodal fusion model with two networks: a convolutional neural network (CNN) and a recurrent neural network with long short-term memory layers (LSTM). The first branch is the convolutional network, which receives the input images; in the second branch, modeling is performed by the LSTM network, which receives the numeric input data. This provides a framework for integrating information from the two modalities to ensure surface quality in machining processes. The approach aims to assist in selecting the appropriate cutting tool and cutting parameters to automatically reproduce a machined piece from the image and roughness of an already existing piece. We observe that the multimodal model performs better than the unimodal model trained on image data alone. Accuracy continues to improve on both the training and validation sets, and the multimodal model ultimately reaches good accuracy, unlike the unimodal model, which fails to generalize from the training set to the validation dataset. The results estimated by the multimodal fusion model are encouraging when applied to the milling activity in industrial production processes.
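The late-fusion architecture described in the summary (a CNN branch for surface images, an LSTM branch for numeric data such as cutting parameters and Ra, concatenated before a final classifier) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, input dimensions, sequence length, and number of output classes are all assumptions chosen for the example, as the record does not specify them.

```python
import torch
import torch.nn as nn

class LateFusionModel(nn.Module):
    """Hypothetical late-fusion model: CNN over images + LSTM over numeric data.

    All dimensions below are illustrative assumptions, not taken from the paper.
    """

    def __init__(self, num_classes: int = 4):
        super().__init__()
        # Branch 1: small CNN producing a 32-dim embedding of the surface image.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # -> (batch, 32, 1, 1)
            nn.Flatten(),             # -> (batch, 32)
        )
        # Branch 2: LSTM over a sequence of numeric features
        # (e.g. cutting speed, depth of cut, feed rate per tooth, Ra).
        self.lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
        # Late fusion: concatenate both embeddings, then classify
        # (e.g. predict a cutting-tool / parameter-set class).
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image: torch.Tensor, numeric_seq: torch.Tensor) -> torch.Tensor:
        img_feat = self.cnn(image)                 # (batch, 32)
        _, (h_n, _) = self.lstm(numeric_seq)       # h_n: (1, batch, 32)
        fused = torch.cat([img_feat, h_n[-1]], dim=1)  # (batch, 64)
        return self.head(fused)                    # (batch, num_classes)

model = LateFusionModel(num_classes=4)
image = torch.randn(2, 3, 64, 64)      # batch of 2 RGB surface images, 64x64
numeric_seq = torch.randn(2, 5, 4)     # batch of 2 sequences, 5 steps, 4 features
logits = model(image, numeric_seq)     # shape (2, 4)
```

The key design point of late fusion is that each modality is encoded independently and the embeddings are only combined at the end, which matches the two-branch description in the summary.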
ISSN:2667-3053