Robotic pick-and-place of partially visible and novel objects

If robots are to be capable of performing tasks in uncontrolled, natural environments, they must be able to handle objects they have never seen before, i.e., novel objects. We study the problem of grasping a partially visible, novel object and placing it in a desired way, e.g., placing a bottle upright onto a coaster. There are two main approaches to this problem: policy learning, where a direct mapping from observations to actions is learned, and modular systems, where a perceptual module predicts the objects' geometry and a planning module calculates a sequence of grasps and places valid for the perceived geometry. We have two contributions. The first relates to policy learning. We develop efficient mechanisms for sampling six degree-of-freedom gripper poses. Efficient sampling enables the use of established value-based reinforcement learning algorithms for pick-and-place of novel objects. Our second contribution relates to modular systems. We show that perceptual uncertainty is relevant to regrasping performance, and we compare different ways of incorporating perceptual uncertainty into the regrasp planning cost. Overall, we increase the range of objects robots can pick-and-place reliably without human intervention. This gets us a step closer to robots that work outside of factories and laboratories, i.e., in uncontrolled environments. --Author's abstract
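To illustrate the first contribution's setting, here is a minimal sketch of sampling six degree-of-freedom gripper poses and selecting one greedily with a value function, as a value-based RL agent might. This is not the thesis's actual sampling mechanism; the workspace bounds and the toy value function are illustrative assumptions.

```python
import numpy as np

def sample_gripper_poses(n, lo, hi, rng):
    """Sample n candidate 6-DoF poses: xyz position uniform in [lo, hi],
    orientation as a uniformly random unit quaternion (Shoemake's method)."""
    pos = rng.uniform(lo, hi, size=(n, 3))
    u1, u2, u3 = rng.uniform(size=(3, n))
    quat = np.stack([
        np.sqrt(1 - u1) * np.sin(2 * np.pi * u2),
        np.sqrt(1 - u1) * np.cos(2 * np.pi * u2),
        np.sqrt(u1) * np.sin(2 * np.pi * u3),
        np.sqrt(u1) * np.cos(2 * np.pi * u3),
    ], axis=1)
    return np.hstack([pos, quat])  # shape (n, 7): x, y, z, qx, qy, qz, qw

def toy_value(poses):
    # Stand-in for a learned Q-function over gripper poses; here it
    # simply prefers positions near the workspace origin.
    return -np.linalg.norm(poses[:, :3], axis=1)

rng = np.random.default_rng(0)
poses = sample_gripper_poses(256, lo=[-0.3, -0.3, 0.0], hi=[0.3, 0.3, 0.4], rng=rng)
best = poses[np.argmax(toy_value(poses))]  # greedy action selection over samples
```

The point of efficient sampling in this setup is that the continuous 6-DoF action space is reduced to a finite candidate set that a standard value-based method can evaluate and maximize over.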

Bibliographic Details
Online Access:http://hdl.handle.net/2047/D20412868
id ndltd-NEU--neu-bz611w26q
record_format oai_dc
spelling ndltd-NEU--neu-bz611w26q 2021-07-23T05:10:14Z Robotic pick-and-place of partially visible and novel objects. If robots are to be capable of performing tasks in uncontrolled, natural environments, they must be able to handle objects they have never seen before, i.e., novel objects. We study the problem of grasping a partially visible, novel object and placing it in a desired way, e.g., placing a bottle upright onto a coaster. There are two main approaches to this problem: policy learning, where a direct mapping from observations to actions is learned, and modular systems, where a perceptual module predicts the objects' geometry and a planning module calculates a sequence of grasps and places valid for the perceived geometry. We have two contributions. The first relates to policy learning. We develop efficient mechanisms for sampling six degree-of-freedom gripper poses. Efficient sampling enables the use of established value-based reinforcement learning algorithms for pick-and-place of novel objects. Our second contribution relates to modular systems. We show that perceptual uncertainty is relevant to regrasping performance, and we compare different ways of incorporating perceptual uncertainty into the regrasp planning cost. Overall, we increase the range of objects robots can pick-and-place reliably without human intervention. This gets us a step closer to robots that work outside of factories and laboratories, i.e., in uncontrolled environments. --Author's abstract http://hdl.handle.net/2047/D20412868
collection NDLTD
sources NDLTD
description If robots are to be capable of performing tasks in uncontrolled, natural environments, they must be able to handle objects they have never seen before, i.e., novel objects. We study the problem of grasping a partially visible, novel object and placing it in a desired way, e.g., placing a bottle upright onto a coaster. There are two main approaches to this problem: policy learning, where a direct mapping from observations to actions is learned, and modular systems, where a perceptual module predicts the objects' geometry and a planning module calculates a sequence of grasps and places valid for the perceived geometry. We have two contributions. The first relates to policy learning. We develop efficient mechanisms for sampling six degree-of-freedom gripper poses. Efficient sampling enables the use of established value-based reinforcement learning algorithms for pick-and-place of novel objects. Our second contribution relates to modular systems. We show that perceptual uncertainty is relevant to regrasping performance, and we compare different ways of incorporating perceptual uncertainty into the regrasp planning cost. Overall, we increase the range of objects robots can pick-and-place reliably without human intervention. This gets us a step closer to robots that work outside of factories and laboratories, i.e., in uncontrolled environments.--Author's abstract
title Robotic pick-and-place of partially visible and novel objects
spellingShingle Robotic pick-and-place of partially visible and novel objects
title_short Robotic pick-and-place of partially visible and novel objects
title_full Robotic pick-and-place of partially visible and novel objects
title_fullStr Robotic pick-and-place of partially visible and novel objects
title_full_unstemmed Robotic pick-and-place of partially visible and novel objects
title_sort robotic pick-and-place of partially visible and novel objects
publishDate
url http://hdl.handle.net/2047/D20412868
_version_ 1719417704225439744