Efficient Difficulty Level Balancing in Match-3 Puzzle Games: A Comparative Study of Proximal Policy Optimization and Soft Actor-Critic Algorithms


Bibliographic Details
Main Authors: Byounggwon Kim, Jungyoon Kim
Format: Article
Language: English
Published: MDPI AG, 2023-10-01
Series: Electronics
Online Access: https://www.mdpi.com/2079-9292/12/21/4456
Collection: DOAJ (Directory of Open Access Journals)
Description: Match-3 puzzle games have garnered significant popularity across all age groups due to their simplicity, non-violent nature, and concise gameplay. However, developing captivating and well-balanced stages for match-3 puzzle games remains a challenging task for game developers. This study aims to identify the optimal reinforcement learning algorithm for streamlining the level balancing verification process in match-3 games by comparing the Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO) algorithms. By training an agent with each of the two algorithms, the paper investigates which approach yields more efficient and effective difficulty level balancing test results. A comparative analysis of cumulative rewards and entropy shows that the SAC algorithm is the better choice for creating an efficient agent capable of handling difficulty level balancing for stages in a match-3 puzzle game, because the superior learning performance and higher stability demonstrated by SAC matter most for stage difficulty balancing in match-3 gameplay. This study is expected to contribute to the development of improved level balancing techniques in match-3 puzzle games and to enhance the overall gaming experience for players.
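The description above compares the two trained agents by cumulative reward and policy entropy. As a minimal, hypothetical sketch (not the paper's actual code; the reward values below are invented for illustration), these two metrics can be computed from logged episode data like so:

```python
import math

def cumulative_reward(rewards):
    """Sum of per-step rewards over one episode."""
    return sum(rewards)

def policy_entropy(action_probs):
    """Shannon entropy (in nats) of a discrete action distribution;
    lower values indicate a more deterministic, converged policy."""
    return -sum(p * math.log(p) for p in action_probs if p > 0)

# Toy per-step reward logs for two hypothetical agents.
ppo_rewards = [1.0, 0.0, 2.0, 1.0]
sac_rewards = [1.0, 1.0, 2.0, 2.0]

print(cumulative_reward(ppo_rewards))   # 4.0
print(cumulative_reward(sac_rewards))   # 6.0
# A uniform distribution over 4 moves has maximal entropy, ln(4) ≈ 1.386.
print(round(policy_entropy([0.25, 0.25, 0.25, 0.25]), 3))  # 1.386
```

In a real balancing test these quantities would be logged per training step by the RL framework; the sketch only shows how the raw metrics the study compares are defined.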
Record ID: doaj.art-e1378a855edd4bf3a24fde153776ce62
ISSN: 2079-9292
Affiliation (both authors): Department of Game Media, College of Future Industry, Gachon University, Seongnam-si 13120, Republic of Korea
DOI: 10.3390/electronics12214456
Topics: match-3 puzzle game; balancing test; reinforcement learning; PPO; SAC