Coordinated Learning by Model Difference Identification in Multiagent Systems with Sparse Interactions

Multiagent Reinforcement Learning (MARL) is a promising technique for agents to learn effective coordinated policies in Multiagent Systems (MASs). In many MASs, interactions between agents are sparse, and a number of MARL methods have been devised to exploit this. These methods divide the learning process into independent learning and joint learning in coordinated states, improving on traditional learning over the joint state-action space. However, most of these methods identify coordinated states based on assumptions about the domain structure (e.g., dependencies) or the agents (e.g., prior individual optimal policies and agent homogeneity), and situations remain that current methods cannot handle. In this paper, a modified approach is proposed to learn where and how to coordinate agents' behaviors in more general MASs with sparse interactions. The approach introduces sample grouping and a more accurate metric of model difference degree to identify which states of other agents should be included in coordinated states, without strong additional assumptions. Experimental results show that the proposed approach outperforms its competitors in average agent reward per step and works well in some broader scenarios.
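The core idea in the abstract, learning independently by default and expanding to a joint state only where other agents' presence measurably changes an agent's local model, can be illustrated with a minimal sketch. The sketch below is not the paper's algorithm: the class name, the reward-gap form of the difference degree, and the threshold are all invented here for illustration, assuming tabular Q-learning agents.

```python
# Illustrative sketch only: a hypothetical tabular Q-learner that expands its
# state to include another agent's state wherever an empirically estimated
# "model difference degree" exceeds a threshold. All names and the reward-gap
# metric are assumptions for illustration, not the paper's method.
import random
from collections import defaultdict

class ModelDifferenceLearner:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1, threshold=0.3):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.threshold = threshold
        self.q = defaultdict(float)           # Q over (state_key, action)
        self.solo_rewards = defaultdict(list) # samples grouped: no other agent nearby
        self.joint_rewards = defaultdict(list)# samples grouped: other agent nearby
        self.coordinated = set()              # local states flagged for joint learning

    def difference_degree(self, s):
        # Toy metric: gap between mean reward with and without other agents'
        # influence in local state s (the paper's metric is more elaborate).
        if not self.solo_rewards[s] or not self.joint_rewards[s]:
            return 0.0
        mean = lambda xs: sum(xs) / len(xs)
        return abs(mean(self.joint_rewards[s]) - mean(self.solo_rewards[s]))

    def state_key(self, s, other_s):
        # Use the joint state only where coordination was identified.
        return (s, other_s) if s in self.coordinated else (s,)

    def act(self, s, other_s):
        key = self.state_key(s, other_s)
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(key, a)])

    def update(self, s, other_s, a, r, s2, other_s2, other_nearby):
        # Sample grouping: file the reward by whether another agent could
        # have influenced it, then re-test whether s needs coordination.
        (self.joint_rewards if other_nearby else self.solo_rewards)[s].append(r)
        if self.difference_degree(s) > self.threshold:
            self.coordinated.add(s)
        key, key2 = self.state_key(s, other_s), self.state_key(s2, other_s2)
        best_next = max(self.q[(key2, a2)] for a2 in self.actions)
        self.q[(key, a)] += self.alpha * (r + self.gamma * best_next - self.q[(key, a)])
```

Note that this sketch simply abandons Q-values learned under the independent key once a state is flagged as coordinated; it is meant only to make concrete the independent-versus-joint split and the sample-grouping idea the abstract describes.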


Bibliographic Details
Main Authors: Qi Zhang, Peng Jiao, Quanjun Yin, Lin Sun
Format: Article
Language: English
Published: Hindawi Limited, 2016-01-01
Series: Discrete Dynamics in Nature and Society
ISSN: 1026-0226, 1607-887X
Online Access: http://dx.doi.org/10.1155/2016/3207460