Summary: | Coalition formation is a central problem in multiagent systems research, but most models assume common knowledge of agent types. In practice, however, agents are often uncertain about the types or capabilities of their potential partners, though they can gain information about these capabilities through repeated interaction. In this paper, we propose a novel Bayesian, model-based reinforcement learning framework for this problem, assuming that coalitions are formed (and tasks undertaken) repeatedly. Our model allows agents to refine their beliefs about the types of others as they interact within a coalition. It also allows agents to make explicit tradeoffs between exploration (forming new coalitions to learn more about the types of potential partners) and exploitation (relying on partners about which more is known), using value of information to define optimal exploration policies. Our framework effectively integrates decision making during repeated coalition formation under type uncertainty with Bayesian reinforcement learning techniques. Specifically, we present several learning algorithms that approximate the optimal Bayesian solution to the repeated coalition formation and type-learning problem, providing tractable means of ensuring good sequential performance. We evaluate our algorithms in a variety of settings, showing that one method in particular exhibits consistently good performance in practice. We also demonstrate the ability of our model to facilitate knowledge transfer across different dynamic tasks.