LEADER 02813 am a22003253u 4500
001      100515
042      |a dc
100 1 0  |a Amato, Christopher |e author
710 2    |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |e contributor
710 2    |a Massachusetts Institute of Technology. Department of Aeronautics and Astronautics |e contributor
710 2    |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |e contributor
710 2    |a Massachusetts Institute of Technology. Laboratory for Information and Decision Systems |e contributor
700 1 0  |a Amato, Christopher |e contributor
700 1 0  |a Konidaris, George D. |e contributor
700 1 0  |a Cruz, Gabriel |e contributor
700 1 0  |a Maynor, Christopher A. |e contributor
700 1 0  |a How, Jonathan P. |e contributor
700 1 0  |a Kaelbling, Leslie P. |e contributor
700 1 0  |a Cruz, Gabriel |e author
700 1 0  |a Maynor, Christopher A. |e author
700 1 0  |a How, Jonathan P. |e author
700 1 0  |a Kaelbling, Leslie P. |e author
700 1 0  |a Konidaris, George D. |e author
245 0 0  |a Planning for decentralized control of multiple robots under uncertainty
260      |b Institute of Electrical and Electronics Engineers (IEEE), |c 2015-12-28.
856      |z Get fulltext |u http://hdl.handle.net/1721.1/100515
520      |a This paper presents a probabilistic framework for synthesizing control policies for general multi-robot systems that is based on decentralized partially observable Markov decision processes (Dec-POMDPs). Dec-POMDPs are a general model of decision-making where a team of agents must cooperate to optimize a shared objective in the presence of uncertainty. Dec-POMDPs also consider communication limitations, so execution is decentralized. While Dec-POMDPs are typically intractable to solve for real-world problems, recent research on the use of macro-actions in Dec-POMDPs has significantly increased the size of problem that can be practically solved. We show that, in contrast to most existing methods that are specialized to a particular problem class, our approach can synthesize control policies that exploit any opportunities for coordination that are present in the problem, while balancing uncertainty, sensor information, and information about other agents. We use three variants of a warehouse task to show that a single planner of this type can generate cooperative behavior using task allocation, direct communication, and signaling, as appropriate. This demonstrates that our algorithmic framework can automatically optimize control and communication policies for complex multi-robot systems.
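As context for the 520 abstract above, a minimal formal sketch of the Dec-POMDP model it names (standard notation; the symbols are conventional and not drawn from this record):

\[
\langle I, S, \{A_i\}_{i \in I}, T, R, \{\Omega_i\}_{i \in I}, O, h \rangle,
\qquad
\max_{\pi_1,\dots,\pi_n} \; \mathbb{E}\!\left[ \sum_{t=0}^{h-1} R(s_t, \vec{a}_t) \right]
\]

Here $I$ is the set of $n$ agents, $S$ the states, $A_i$ and $\Omega_i$ agent $i$'s actions and observations, $T(s' \mid s, \vec{a})$ and $O(\vec{o} \mid s', \vec{a})$ the joint transition and observation models, $R(s, \vec{a})$ the shared reward, and $h$ the horizon; each decentralized policy $\pi_i$ maps agent $i$'s local action-observation history to its next action, so execution requires no shared state.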
546      |a en_US
655 7    |a Article
773      |t Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA)