Question

Answer the following about the methods used by Google’s DeepMind to train AlphaStar, an agent developed to play StarCraft II that reached the highest rank of Grandmaster in 2019. For 10 points each:
[10h] Two answers required. The reinforcement learning procedure used by AlphaStar is based on a policy gradient algorithm in a framework named for these two entities. A popular RL algorithm is named for “Asynchronous Advantage” and these two entities, in which policy and value functions are simultaneously learned and updated.
ANSWER: actors and critics [accept (Asynchronous) Advantage Actor-Critic; prompt on A2C or A3C]
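As an illustration of the framework in this part, here is a minimal tabular actor-critic sketch (hypothetical names; not AlphaStar's actual A3C-based implementation): the critic learns state values and its TD error serves as the advantage estimate that drives the actor's policy-gradient update.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def actor_critic_step(logits, V, s, a, r, s_next, done,
                      alpha_actor=0.1, alpha_critic=0.1, gamma=0.99):
    """One actor-critic update for a tabular softmax policy (illustrative sketch)."""
    td_target = r + (0.0 if done else gamma * V[s_next])
    td_error = td_target - V[s]                 # TD error = advantage estimate
    V[s] += alpha_critic * td_error             # critic: move V(s) toward target
    pi = softmax(logits[s])
    grad_log = -pi
    grad_log[a] += 1.0                          # grad of log pi(a|s) for softmax
    logits[s] += alpha_actor * td_error * grad_log  # actor: policy-gradient step
    return td_error
```

Repeatedly rewarding one action in a single state raises both that state's value estimate and the policy's preference for the rewarded action.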
[10m] The supervised and reinforcement learning stages of AlphaStar both minimized their combined losses using this optimizer. Momentum and RMSProp are precursors to this often-default ML optimization method that has a four-letter acronym.
ANSWER: Adam algorithm [or Adaptive Moment Estimation]
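A minimal sketch of the Adam update (hypothetical function names; not AlphaStar's code), showing how it combines a momentum-style first moment with an RMSProp-style second moment, plus the bias correction that the "adaptive moment estimation" name refers to:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update step for parameters theta given gradient grad (step t >= 1)."""
    m = beta1 * m + (1 - beta1) * grad        # first moment (momentum-like)
    v = beta2 * v + (1 - beta2) * grad**2     # second moment (RMSProp-like)
    m_hat = m / (1 - beta1**t)                # bias correction for zero init
    v_hat = v / (1 - beta2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```

For example, iterating this step on the gradient of f(x) = x^2 drives x toward the minimum at 0.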
[10e] The multi-agent stage of AlphaStar avoids solely using naive self-play because of its tendency to chase these constructs, leading to an infinite loop. In graphs, these constructs are paths that have the same first and last vertex.
ANSWER: cycles [or circuits]
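The graph-theoretic definition in this part can be checked programmatically; a sketch (hypothetical names) using DFS three-coloring to detect a cycle in a directed graph, where finding a "back edge" to a vertex still on the recursion stack means a path returns to its starting vertex:

```python
def has_cycle(adj):
    """Detect a cycle in a directed graph given as {node: [neighbors]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on stack / finished
    color = {u: WHITE for u in adj}

    def dfs(u):
        color[u] = GRAY
        for w in adj.get(u, []):
            if color.get(w, WHITE) == GRAY:   # back edge: cycle found
                return True
            if color.get(w, WHITE) == WHITE and dfs(w):
                return True
        color[u] = BLACK
        return False

    return any(color[u] == WHITE and dfs(u) for u in adj)
```

A three-vertex loop a → b → c → a is reported as a cycle, while the same edges without c → a are not.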
<Science - Other Science - Math>


Summary

Tournament                    | Date       | ? | Heard | PPB   | Easy | Medium | Hard
2024 ARGOS @ Brandeis         | 03/22/2025 | Y | 3     | 16.67 | 67%  | 67%    | 33%
2024 ARGOS @ Chicago          | 11/23/2024 | Y | 6     | 15.00 | 83%  | 50%    | 17%
2024 ARGOS @ Christ's College | 12/14/2024 | Y | 3     | 6.67  | 33%  | 33%    | 0%
2024 ARGOS @ Columbia         | 11/23/2024 | Y | 3     | 10.00 | 67%  | 33%    | 0%
2024 ARGOS @ Stanford         | 02/22/2025 | Y | 3     | 10.00 | 67%  | 33%    | 0%
2024 ARGOS Online             | 03/22/2025 | Y | 3     | 13.33 | 100% | 33%    | 0%

Data

Team                                           | Opponent                                                           | P1 | P2 | P3 | Total
That Feeling When Knee Surgery Is in Five Days | WashU                                                              | 0  | 0  | 0  | 0
BHSU ReFantazio                                | hawk two of                                                        | 10 | 10 | 10 | 30
Clown Senpais                                  | The Love Song of J Alfred PrufRock and Roll All Nite (and Party Every Day) | 0 | 0 | 10 | 10
Music to Help You Stop Smoking                 | Clown Squad                                                        | 0  | 10 | 10 | 20
Who is the Colleen Hoover of the Zulus?        | Northeast by Northwestern                                          | 0  | 0  | 10 | 10
BHSU Rebirth                                   | Notre Dame                                                         | 0  | 10 | 10 | 20