Learning hierarchical teaching policies for cooperative agents


Bibliographic Details
Main Author: How, Jonathan P. (Author)
Format: Article
Language: English
Published: 2021-11-02T18:45:14Z.
Subjects:
Online Access: Get fulltext
LEADER 01670 am a22001453u 4500
001 137164
042 |a dc 
100 1 0 |a How, Jonathan P.  |e author 
245 0 0 |a Learning hierarchical teaching policies for cooperative agents 
260 |c 2021-11-02T18:45:14Z. 
856 |z Get fulltext  |u https://hdl.handle.net/1721.1/137164 
520 |a © 2020 International Foundation for Autonomous Agents and Multiagent Systems. Collective learning can be greatly enhanced when agents effectively exchange knowledge with their peers. In particular, recent work studying agents that learn to teach their teammates has demonstrated that action advising accelerates team-wide learning. However, prior work simplified the learning of advising policies by using simple function approximations and considered advising only with primitive (low-level) actions, limiting the scalability of learning and teaching to complex domains. This paper introduces a novel learning-to-teach framework, called hierarchical multiagent teaching (HMAT), that improves scalability to complex environments by using deep representations for student policies and by advising with more expressive extended action sequences over multiple levels of temporal abstraction. Our empirical evaluations demonstrate that HMAT improves team-wide learning progress in large, complex domains where previous approaches fail. HMAT also learns teaching policies that can effectively transfer knowledge to teammates with knowledge of different tasks, even when the teammates have heterogeneous action spaces. 
546 |a en 
655 7 |a Article 
773 |t Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS