Spellcaster Control Agent in StarCraft II Using Deep Reinforcement Learning
This paper proposes a DRL-based training method for spellcaster units in StarCraft II, one of the most representative Real-Time Strategy (RTS) games. During combat in StarCraft II, micro-controlling the various combat units is crucial to winning the game. Among many other combat units,...
| Main Authors: | Wooseok Song, Woong Hyun Suh, Chang Wook Ahn |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2020-06-01 |
| Series: | Electronics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2079-9292/9/6/996 |
Similar Items

- Feature Extraction for StarCraft II League Prediction
  by: Chan Min Lee, et al.
  Published: (2021-04-01)
- Team Recommendation Using Order-Based Fuzzy Integral and NSGA-II in StarCraft
  by: Lin Wang, et al.
  Published: (2020-01-01)
- Learning Macromanagement in Starcraft by Deep Reinforcement Learning
  by: Wenzhen Huang, et al.
  Published: (2021-05-01)
- The StarCraft Multi-Agent Exploration Challenges: Learning Multi-Stage Tasks and Environmental Factors Without Precise Reward Functions
  by: Mingyu Kim, et al.
  Published: (2023-01-01)
- SC-MAIRL: Semi-Centralized Multi-Agent Imitation Reinforcement Learning
  by: Paul Brackett, et al.
  Published: (2023-01-01)