Model order reduction based on Runge–Kutta neural networks

Bibliographic Details
Main Authors: Qinyu Zhuang, Juan Manuel Lorenzi, Hans-Joachim Bungartz, Dirk Hartmann
Format: Article
Language: English
Published: Cambridge University Press, 2021-01-01
Series: Data-Centric Engineering
Subjects: Dynamic parameter sampling; explicit Euler neural network; model order reduction; multilayer perceptron; Runge–Kutta neural network
Online Access: https://www.cambridge.org/core/product/identifier/S2632673621000150/type/journal_article
author Qinyu Zhuang
Juan Manuel Lorenzi
Hans-Joachim Bungartz
Dirk Hartmann
collection DOAJ
description Model order reduction (MOR) methods enable the generation of real-time-capable digital twins, which have the potential to unlock novel value streams in industry. While traditional projection-based methods are robust and accurate for linear problems, incorporating machine learning to handle nonlinearity offers a new option for reducing complex problems. Such methods are independent of the numerical solver for the full-order model and keep the whole workflow nonintrusive. They usually consist of two steps: dimension reduction by a projection-based method, followed by model reconstruction by a neural network (NN). In this work, we apply modifications to both steps and investigate their impact by testing on three different simulation models. In all cases, proper orthogonal decomposition (POD) is used for dimension reduction. For this step, generating the snapshot database with constant input parameters is compared against generating it with time-dependent input parameters. For the model reconstruction step, three types of NN architectures are compared: the multilayer perceptron (MLP), the explicit Euler NN (EENN), and the Runge–Kutta NN (RKNN). MLPs learn the system state directly, whereas EENNs and RKNNs learn the derivative of the system state and predict the new state in the manner of a numerical integrator. In our tests, RKNNs show an advantage as a network architecture informed by a higher-order numerical integration scheme.
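The description above outlines a two-step, nonintrusive workflow: POD compresses the full-order snapshots into a low-dimensional basis, and a network then advances the reduced state in time, either directly (MLP) or by learning the derivative and plugging it into an explicit Euler or Runge–Kutta update (EENN/RKNN). The following minimal Python sketch only illustrates that idea under stated assumptions; the toy snapshot matrix, the linear stand-in derivative_net, and the step size are placeholders, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: dimension reduction via proper orthogonal decomposition (POD) ---
# Snapshot matrix: each column is the full-order state at one time step
# (random toy data here, standing in for simulation results).
n_dof, n_snapshots, r = 200, 50, 5           # full dimension, number of snapshots, reduced dimension
snapshots = rng.standard_normal((n_dof, n_snapshots))
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                             # POD basis, shape (n_dof, r)
reduced = basis.T @ snapshots                # reduced coordinates, shape (r, n_snapshots)

# --- Step 2: model reconstruction with an integrator-like network ---
# Stand-in for a trained network mapping the reduced state to its time
# derivative; a random linear map keeps the sketch executable.
W = 0.01 * rng.standard_normal((r, r))

def derivative_net(q):
    """Hypothetical learned model of dq/dt in the reduced space."""
    return W @ q

def explicit_euler_step(q, dt):
    """EENN-style update: one explicit Euler step driven by the learned derivative."""
    return q + dt * derivative_net(q)

def rk4_step(q, dt):
    """RKNN-style update: classical fourth-order Runge-Kutta step driven by the learned derivative."""
    k1 = derivative_net(q)
    k2 = derivative_net(q + 0.5 * dt * k1)
    k3 = derivative_net(q + 0.5 * dt * k2)
    k4 = derivative_net(q + dt * k3)
    return q + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

q0 = reduced[:, 0]                           # initial reduced state
q1 = rk4_step(q0, dt=0.01)                   # advance one step in the reduced space
x1 = basis @ q1                              # lift back to the full-order space

An MLP-based reconstruction would instead map the current reduced state (and inputs) directly to the next state, which is the contrast the description draws between the MLP and the EENN/RKNN variants.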
first_indexed 2024-04-10T04:51:23Z
format Article
id doaj.art-526f641640294973930003367ec5a6ec
institution Directory Open Access Journal
issn 2632-6736
language English
last_indexed 2024-04-10T04:51:23Z
publishDate 2021-01-01
publisher Cambridge University Press
record_format Article
series Data-Centric Engineering
spelling doaj.art-526f641640294973930003367ec5a6ec 2023-03-09T12:31:48Z
eng | Cambridge University Press | Data-Centric Engineering | 2632-6736 | 2021-01-01 | 2 | 10.1017/dce.2021.15
Model order reduction based on Runge–Kutta neural networks
Qinyu Zhuang (https://orcid.org/0000-0002-5186-8438), Technology, Siemens AG, Bayern, Germany
Juan Manuel Lorenzi, Technology, Siemens AG, Bayern, Germany
Hans-Joachim Bungartz, Chair of Scientific Computing, Technical University of Munich, Bayern, Germany
Dirk Hartmann, Technology, Siemens AG, Bayern, Germany
(abstract as given in the description field above)
https://www.cambridge.org/core/product/identifier/S2632673621000150/type/journal_article
Dynamic parameter sampling; explicit Euler neural network; model order reduction; multilayer perceptron; Runge–Kutta neural network
title Model order reduction based on Runge–Kutta neural networks
topic Dynamic parameter sampling
explicit Euler neural network
model order reduction
multilayer perceptron
Runge–Kutta neural network
url https://www.cambridge.org/core/product/identifier/S2632673621000150/type/journal_article