LEADER 02406 am a22002893u 4500
001    85953
042    |a dc
100 10 |a Branavan, Satchuthanan R. |e author
100 10 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory |e contributor
100 10 |a Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science |e contributor
100 10 |a Branavan, Satchuthanan R. |e contributor
100 10 |a Kushman, Nate |e contributor
100 10 |a Lei, Tao |e contributor
100 10 |a Barzilay, Regina |e contributor
700 10 |a Kushman, Nate |e author
700 10 |a Lei, Tao |e author
700 10 |a Barzilay, Regina |e author
245 00 |a Learning High-Level Planning from Text
260    |b The Association for Computational Linguistics, |c 2014-03-28T16:08:14Z.
856    |z Get fulltext |u http://hdl.handle.net/1721.1/85953
520    |a Comprehending action preconditions and effects is an essential step in modeling the dynamics of the world. In this paper, we express the semantics of precondition relations extracted from text in terms of planning operations. The challenge of modeling this connection is to ground language at the level of relations. This type of grounding enables us to create high-level plans based on language abstractions. Our model jointly learns to predict precondition relations from text and to perform high-level planning guided by those relations. We implement this idea in the reinforcement learning framework using feedback automatically obtained from plan execution attempts. When applied to a complex virtual world and text describing that world, our relation extraction technique performs on par with a supervised baseline, yielding an F-measure of 66% compared to the baseline's 65%. Additionally, we show that a high-level planner utilizing these extracted relations significantly outperforms a strong, text-unaware baseline, successfully completing 80% of planning tasks as compared to 69% for the baseline.
520    |a National Science Foundation (U.S.) (CAREER Grant IIS-0448168)
520    |a United States. Defense Advanced Research Projects Agency. Machine Reading Program (FA8750-09-C-0172, PO#4910018860)
520    |a Battelle Memorial Institute (PO#300662)
546    |a en_US
655 7  |a Article
773    |t Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics