Original post: https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii

 


SL = supervised learning, RL = reinforcement learning

[Figure 1 from the original post]

 

  • how AlphaStar is trained

input: list of units and their properties -> DNN -> output: a sequence of instructions (in-game actions)

DNN: transformer torso (as in relational deep RL), deep LSTM core, auto-regressive policy head with a pointer network, centralised value baseline
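A minimal sketch of how these four pieces might fit together, assuming PyTorch; the class name, layer sizes, and the parallel (rather than truly auto-regressive) argument selection are simplifications for illustration, not the published architecture.

```python
import torch
import torch.nn as nn

class AlphaStarLikeNet(nn.Module):
    """Sketch: transformer torso over per-unit features, LSTM core across
    frames, policy head with a pointer over units, centralised value baseline."""
    def __init__(self, unit_dim=64, d_model=128, n_actions=100):
        super().__init__()
        self.embed = nn.Linear(unit_dim, d_model)            # per-unit embedding
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.torso = nn.TransformerEncoder(layer, num_layers=3)
        self.core = nn.LSTM(d_model, d_model, batch_first=True)
        self.action_head = nn.Linear(d_model, n_actions)     # what to do
        self.pointer = nn.Linear(d_model, d_model)           # which unit does it
        self.value_head = nn.Linear(d_model, 1)              # value baseline

    def forward(self, units, hidden=None):
        # units: (batch, n_units, unit_dim) -- the list of units and properties
        x = self.torso(self.embed(units))                    # relational torso
        summary = x.mean(dim=1, keepdim=True)                # pool the scene
        core_out, hidden = self.core(summary, hidden)        # memory over time
        q = core_out.squeeze(1)
        action_logits = self.action_head(q)                  # action-type logits
        # pointer network: score every unit against the core state; the real
        # head selects arguments auto-regressively, this parallel scoring is
        # a simplification
        unit_logits = torch.einsum('bd,bnd->bn', self.pointer(q), x)
        value = self.value_head(q).squeeze(-1)
        return action_logits, unit_logits, value, hidden
```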

train: first SL on human replays -> learns basic micro/macro strategies

        then: league agents compete -> weights/hyperparameters updated by RL -> Nash distribution of the league -> final agent

multi-agent RL: agents play against each other: population-based multi-agent RL -> explores a huge strategic space -> new agents must defeat the strongest current strategies as well as earlier ones (so old strategies are not forgotten)
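A toy sketch of that league idea, under heavy assumptions: a single `skill` number stands in for real network parameters, `play_match` is a hypothetical Elo-style stub, and the skill increments are a crude stand-in for RL updates. Only the bookkeeping, training each new agent against the whole frozen league rather than just the newest opponent, mirrors the approach described in the post.

```python
import random

class Agent:
    """Stand-in for a trained network; one 'skill' number replaces parameters."""
    def __init__(self, name, skill=0.0):
        self.name, self.skill = name, skill

def play_match(a, b):
    """Hypothetical stub: higher skill wins more often (Elo-style)."""
    p_a = 1.0 / (1.0 + 10 ** ((b.skill - a.skill) / 400.0))
    return a if random.random() < p_a else b

league = [Agent("sl_init", skill=1000.0)]         # seeded from the SL agent
for gen in range(10):
    learner = Agent(f"rl_gen{gen}", skill=league[-1].skill)
    for _ in range(300):
        opponent = random.choice(league)          # sample from the whole league,
        result = play_match(learner, opponent)    # not only the newest agent
        # crude stand-in for an RL update: losses expose weaknesses to fix
        learner.skill += 2.0 if result is opponent else 0.5
    league.append(learner)                        # freeze a snapshot into the league
print([(a.name, round(a.skill)) for a in league])
```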

 

explore new build orders, unit compositions, micro-management plans

personal objective: beat a specific competitor / beat a distribution of competitors / build more of a specific unit
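Such personal objectives could be expressed as per-agent reward shaping on top of the win/loss signal; a toy sketch, where the objective kinds and the `info` fields are invented for illustration.

```python
def shaped_reward(base_reward, info, objective):
    """Toy per-agent objective added to the win/loss signal.
    The 'info' fields and objective kinds are invented for illustration."""
    bonus = 0.0
    if objective["kind"] == "beat_specific":
        if info["opponent"] == objective["target"]:
            bonus += 1.0 if base_reward > 0 else -1.0   # only this matchup counts
    elif objective["kind"] == "build_unit":
        # reward building more of one unit type, e.g. more Stalkers
        bonus += 0.01 * info["units_built"].get(objective["unit"], 0)
    return base_reward + bonus

# an agent whose personal objective is building more of a specific unit
obj = {"kind": "build_unit", "unit": "stalker"}
print(shaped_reward(1.0, {"opponent": "rl_gen3", "units_built": {"stalker": 12}}, obj))
```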

NN weights: updated by off-policy actor-critic RL with experience replay, self-imitation learning, and policy distillation
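A minimal sketch of one off-policy actor-critic update with a truncated importance-sampling correction, in the spirit of (but not identical to) the learning rule the post names; self-imitation and distillation terms are omitted, and all shapes and the one-step targets are illustrative.

```python
import torch
import torch.nn.functional as F

def actor_critic_update(policy_logits, behaviour_logits, values, actions,
                        rewards, bootstrap_value, gamma=0.99):
    """One off-policy actor-critic step on a replayed trajectory.
    policy_logits: current policy logits, (T, A)
    behaviour_logits: logits of the older policy that generated the data, (T, A)
    values: current value estimates, (T,); actions, rewards: (T,)"""
    log_pi = F.log_softmax(policy_logits, dim=-1)
    log_mu = F.log_softmax(behaviour_logits, dim=-1)
    idx = actions.unsqueeze(-1)
    # truncated importance weights correct for the data being off-policy
    rho = torch.exp(log_pi.gather(-1, idx) - log_mu.gather(-1, idx)).squeeze(-1)
    rho = torch.clamp(rho, max=1.0)
    # one-step TD targets with a bootstrap at the end of the segment
    next_values = torch.cat([values[1:], bootstrap_value.view(1)])
    targets = rewards + gamma * next_values
    advantages = (targets - values).detach()
    policy_loss = -(rho.detach() * log_pi.gather(-1, idx).squeeze(-1) * advantages).mean()
    value_loss = F.mse_loss(values, targets.detach())
    return policy_loss + 0.5 * value_loss
```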

 

training runs on TPUs; final agent: the Nash distribution of the league, i.e. the least-exploitable mixture of the strategies that were discovered
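One standard way to obtain such a Nash mixture, sketched here with fictitious play over an empirical win-rate matrix; the 3x3 rock-paper-scissors-like payoffs are invented, and the post does not specify which solver was used.

```python
import numpy as np

def nash_mixture(payoff, iters=20000):
    """Approximate Nash distribution of a zero-sum league via fictitious play.
    payoff[i, j] = expected score of agent i against agent j (antisymmetric)."""
    n = payoff.shape[0]
    counts = np.ones(n)                       # times each pure strategy was chosen
    for _ in range(iters):
        mix = counts / counts.sum()
        best_reply = np.argmax(payoff @ mix)  # best agent vs the current mixture
        counts[best_reply] += 1
    return counts / counts.sum()

# invented payoffs between three league agents whose styles counter each other
payoff = np.array([[ 0.0,  0.6, -0.6],
                   [-0.6,  0.0,  0.6],
                   [ 0.6, -0.6,  0.0]])
print(nash_mixture(payoff))   # ~[1/3, 1/3, 1/3]: play each style equally often
```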

[Figure 2 from the original post]

 

  • how AlphaStar plays and how it is evaluated

TLO/MaNa: ~100 APM

typical existing bots/agents: ~1,000-10,000 APM

AlphaStar vs. TLO/MaNa: ~280 APM on average (it observes the game through the raw interface rather than reading screen frames)

AlphaStar acting: observation -> action averages ~350 ms; it processes every frame
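To make those rate numbers concrete, a hypothetical throttle that caps an agent at a target APM and delays each action by a fixed observation-to-action latency; the loop structure and the `get_observation`/`send_action` callables are stand-ins, not anything from the post.

```python
import time

TARGET_APM = 280
MIN_ACTION_GAP = 60.0 / TARGET_APM      # seconds between actions at 280 APM
REACTION_DELAY = 0.350                  # observation -> action, ~350 ms

def run_throttled(agent, get_observation, send_action, duration_s=60.0):
    """Hypothetical loop: observe every frame, emit actions at human-like rates."""
    start = last_action = time.monotonic()
    pending = None                       # (ready_time, action) awaiting the delay
    while time.monotonic() - start < duration_s:
        obs = get_observation()          # every frame is still observed
        if pending is None:
            action = agent(obs)
            if action is not None:
                pending = (time.monotonic() + REACTION_DELAY, action)
        else:
            ready, action = pending
            now = time.monotonic()
            if now >= ready and now - last_action >= MIN_ACTION_GAP:
                send_action(action)      # both rate and latency constraints met
                last_action, pending = now, None
```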

results: AlphaStar won 5:0 against both TLO and MaNa

[Figure 3 from the original post]

 
