Talk Sessions:



Poster Sessions:



June 21, Booth 28

June 22, Booth 23

Learning Multi-agent Action Coordination via Electing First-move Agent

Jingqing Ruan, Linghui Meng, Xuantang Xiong, Dengpeng Xing and Bo Xu

Abstract: Learning to coordinate actions among agents is essential in complicated multi-agent systems. Prior works are constrained mainly by the assumption that all agents act simultaneously, and asynchronous action coordination between agents is rarely considered. This paper introduces a bi-level multi-agent decision hierarchy for coordinated behavior planning. We propose a novel election mechanism in which we adopt a graph convolutional network to model the interaction among agents and elect a first-move agent for asynchronous guidance. We also propose a dynamically weighted mixing network to effectively reduce the misestimation of the value function during training. This work is the first to explicitly model asynchronous multi-agent action coordination, and this explicitness makes it possible to choose the optimal first-move agent. Results on Cooperative Navigation and Google Football demonstrate that the proposed algorithm achieves superior performance in cooperative environments. Our code is available at https://github.com/Amanda-1997/EFA-DWM.
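
The sketch below is a minimal illustration of the election idea described in the abstract, not the authors' implementation (see the linked repository for the official EFA-DWM code): a single graph convolutional layer aggregates per-agent observation embeddings over the interaction graph and produces a scalar score per agent, and the highest-scoring agent is elected to move first. All class and variable names, network sizes, and the random inputs are illustrative assumptions.

    # Hypothetical sketch of GCN-based first-move agent election (PyTorch).
    import torch
    import torch.nn as nn


    class ElectionGCN(nn.Module):
        """One-layer GCN that scores agents for first-move election."""

        def __init__(self, obs_dim: int, hidden_dim: int = 64):
            super().__init__()
            self.encode = nn.Linear(obs_dim, hidden_dim)        # per-agent embedding
            self.propagate = nn.Linear(hidden_dim, hidden_dim)  # graph-convolution weight
            self.score = nn.Linear(hidden_dim, 1)               # scalar election score

        def forward(self, obs: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
            # obs: (n_agents, obs_dim); adj: (n_agents, n_agents) with self-loops.
            h = torch.relu(self.encode(obs))
            # Symmetrically normalise the adjacency matrix (standard GCN propagation).
            deg = adj.sum(dim=-1)
            d_inv_sqrt = deg.clamp(min=1e-6).pow(-0.5)
            norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
            h = torch.relu(self.propagate(norm_adj @ h))
            return self.score(h).squeeze(-1)                    # (n_agents,) election logits


    if __name__ == "__main__":
        n_agents, obs_dim = 4, 16
        obs = torch.randn(n_agents, obs_dim)        # placeholder observations
        adj = torch.ones(n_agents, n_agents)        # fully connected interaction graph
        logits = ElectionGCN(obs_dim)(obs, adj)
        first_mover = int(torch.argmax(logits))     # elected first-move agent
        print(f"election logits: {logits.tolist()}, first mover: agent {first_mover}")

In the paper's bi-level hierarchy, the elected agent acts first and its decision then guides the remaining agents; the dynamically weighted mixing network used during value-function training is not shown here.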

*This password-protected talk video will only become available after it has been presented at the conference.