Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning
This paper proposes a DRL-based training method for spellcaster units in StarCraft II, one of the most representative Real-Time Strategy (RTS) games. During combat in StarCraft II, micro-controlling the various combat units is crucial to winning the game. Among the many combat units,...
Main Authors: Wooseok Song, Woong Hyun Suh, Chang Wook Ahn
Format: Article
Language: English
Published: MDPI AG, 2020-06-01
Series: Electronics
Subjects: deep reinforcement learning; A3C; game AI; StarCraft II; spellcaster; minigame
Online Access: https://www.mdpi.com/2079-9292/9/6/996
_version_ | 1797565373077258240 |
author | Wooseok Song; Woong Hyun Suh; Chang Wook Ahn
author_facet | Wooseok Song; Woong Hyun Suh; Chang Wook Ahn
author_sort | Wooseok Song |
collection | DOAJ |
description | This paper proposes a DRL-based training method for spellcaster units in StarCraft II, one of the most representative Real-Time Strategy (RTS) games. During combat in StarCraft II, micro-controlling the various combat units is crucial to winning the game. Among the many combat units, the spellcaster is one of the most significant components and greatly influences combat results. Despite the importance of spellcaster units in combat, training methods for carefully controlling spellcasters have not been thoroughly considered in related studies because of their complexity. Therefore, we suggest a training method for spellcaster units in StarCraft II using the A3C algorithm. The main idea is to train two Protoss spellcaster units on three newly designed minigames, each representing a unique spell-usage scenario, to use ‘Force Field’ and ‘Psionic Storm’ effectively. As a result, the trained agents achieve win rates of more than 85% in each scenario. We present a new training method for spellcaster units that relaxes a limitation of StarCraft II AI research. We expect that our training method can be used to train other advanced and tactical units by applying transfer learning in more complex minigame scenarios or on full game maps.
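The A3C objective mentioned in the description can be sketched in a few lines. This is a minimal, illustrative sketch and not the paper's code: the function name `a3c_loss`, the hyperparameter values, and the NumPy-only formulation are assumptions; a real agent would compute these terms over PySC2 observations with a neural-network policy and backpropagate through it.

```python
import numpy as np

def a3c_loss(logits, values, actions, rewards, gamma=0.99, beta=0.01):
    """Actor-critic loss for one rollout: a policy-gradient term,
    a value-regression term, and an entropy bonus weighted by beta."""
    # Discounted returns, bootstrapping from 0 at the terminal step.
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    advantages = returns - values

    # Softmax over action logits (one row per timestep).
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    log_p_taken = np.log(probs[np.arange(len(actions)), actions])

    policy_loss = -(log_p_taken * advantages).mean()   # encourage high-advantage actions
    value_loss = 0.5 * (advantages ** 2).mean()        # fit the critic to the returns
    entropy = -(probs * np.log(probs)).sum(axis=1).mean()  # exploration bonus
    return policy_loss + value_loss - beta * entropy
```

In asynchronous training, each worker would evaluate this loss on its own rollout and push gradients to shared parameters; the entropy weight `beta` keeps the spell-selection policy from collapsing prematurely.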
first_indexed | 2024-03-10T19:11:13Z |
format | Article |
id | doaj.art-89118f0f76644884ac405cc79d3beba5 |
institution | Directory Open Access Journal |
issn | 2079-9292 |
language | English |
last_indexed | 2024-03-10T19:11:13Z |
publishDate | 2020-06-01 |
publisher | MDPI AG |
record_format | Article |
series | Electronics |
spelling | doaj.art-89118f0f76644884ac405cc79d3beba5 (2023-11-20T03:47:21Z) | eng | MDPI AG | Electronics | ISSN 2079-9292 | 2020-06-01 | vol. 9, no. 6, art. 996 | doi:10.3390/electronics9060996 | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning | Wooseok Song, Woong Hyun Suh, Chang Wook Ahn (Artificial Intelligence Graduate School, Gwangju Institute of Science and Technology (GIST), Gwangju 61005, Korea) | https://www.mdpi.com/2079-9292/9/6/996 | deep reinforcement learning; A3C; game AI; StarCraft II; spellcaster; minigame
spellingShingle | Wooseok Song; Woong Hyun Suh; Chang Wook Ahn | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning | Electronics | deep reinforcement learning; A3C; game AI; StarCraft II; spellcaster; minigame
title | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning |
title_full | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning |
title_fullStr | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning |
title_full_unstemmed | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning |
title_short | Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning |
title_sort | spellcaster control agent in starcraft ii using deep reinforcement learning |
topic | deep reinforcement learning; A3C; game AI; StarCraft II; spellcaster; minigame
url | https://www.mdpi.com/2079-9292/9/6/996 |
work_keys_str_mv | AT wooseoksong spellcastercontrolagentinstarcraftiiusingdeepreinforcementlearning AT woonghyunsuh spellcastercontrolagentinstarcraftiiusingdeepreinforcementlearning AT changwookahn spellcastercontrolagentinstarcraftiiusingdeepreinforcementlearning |