One-shot visual appearance learning for mobile manipulation
We describe a vision-based algorithm that enables a robot to robustly detect specific objects in a scene following an initial segmentation hint from a human user. The novelty lies in the ability to 'reacquire' objects over extended spatial and temporal excursions within challenging environ...
Main Authors: Walter, Matthew R. (Contributor), Friedman, Yuli (Author), Antone, Matthew (Author), Teller, Seth (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor), Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science (Contributor)
Format: Article
Language: English
Published: Sage Publications, 2012-10-02T14:57:19Z.
Similar Items
- Appearance-based object reacquisition for mobile manipulation
  by: Walter, Matthew R., et al.
  Published: (2011)
- Closed-loop pallet manipulation in unstructured environments
  by: Walter, Matthew R., et al.
  Published: (2011)
- Understanding natural language commands for robotic navigation and mobile manipulation
  by: Tellex, Stefanie A., et al.
  Published: (2012)
- Invariance properties of the human visual system in one-shot learning
  by: Han, Yena
  Published: (2018)
- Learning Articulated Constraints From a One-Shot Demonstration for Robot Manipulation Planning
  by: Yizhou Liu, et al.
  Published: (2019-01-01)