Action Generation Adapted to Low-Level and High-Level Robot-Object Interaction States

Our daily environments are complex, composed of objects with different features. These features can be categorized into low-level features, e.g., an object's position or temperature, and high-level features resulting from a pre-processing of low-level features for decision purposes, e.g., a binary value saying whether an object is too hot to be grasped. Moreover, our environments are dynamic, i.e., object states can change at any moment. Therefore, robots performing tasks in these environments must have the capacity to (i) identify the next action to execute based on the available low-level and high-level object states, and (ii) dynamically adapt their actions to state changes. We introduce a method named Interaction State-based Skill Learning (IS2L), which builds skills to solve tasks in realistic environments. A skill is a Bayesian network that infers actions composed of a sequence of movements of the robot's end-effector, which locally adapt to spatio-temporal perturbations using a dynamical system. In the current paper, an external agent performs one or more kinesthetic demonstrations of an action, generating a dataset of high-level and low-level states of the robot and the environment objects. First, the method transforms each interaction to represent (i) the relationship between the robot and the object and (ii) the next robot end-effector movement to perform at consecutive instants of time. Then, the skill is built, i.e., the Bayesian network is learned. While generating an action, this skill relies on the robot and object states to infer the next movement to execute. This movement selection is inspired by a type of predictive model for action selection usually called affordances. The main contribution of this paper is combining the main features of dynamical systems and affordances in a single method to build skills that solve tasks in realistic scenarios. More precisely, it combines the low-level movement generation of dynamical systems, which adapts to local perturbations, with next-movement selection based simultaneously on high-level and low-level states. This contribution was assessed in three experiments in realistic environments using both high-level and low-level states. The built skills solved the respective tasks, relying on both types of states and adapting to external perturbations.
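The inference step described in the abstract, choosing the robot's next end-effector movement from the current high-level and low-level interaction states, can be illustrated with a toy sketch. The paper learns a full Bayesian network; here a simple conditional frequency table estimated from demonstration data stands in for it, and all state and movement names are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy conditional model P(next_movement | interaction_state), estimated by
# counting over demonstration tuples. Each state pairs a discretized low-level
# feature (relative distance) with a high-level feature (graspability).
# All labels below are illustrative, not from the paper.
demos = [
    (("far", "graspable"), "approach"),
    (("far", "graspable"), "approach"),
    (("near", "graspable"), "close_gripper"),
    (("near", "too_hot"), "retreat"),
]

counts = defaultdict(Counter)
for state, movement in demos:
    counts[state][movement] += 1

def infer_next_movement(state):
    """Return the most frequent next movement observed for this state."""
    if state not in counts:
        return None  # unseen state: no prediction
    return counts[state].most_common(1)[0][0]

print(infer_next_movement(("near", "too_hot")))  # -> retreat
```

A learned Bayesian network would additionally generalize across states and express uncertainty over movements, which a raw frequency table cannot.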


Bibliographic Details
Main Authors: Carlos Maestre, Ghanim Mukhtar, Christophe Gonzales, Stephane Doncieux
Format: Article
Language: English
Published: Frontiers Media S.A., 2019-07-01
Series: Frontiers in Neurorobotics
Subjects: skill building; action generation; learning from demonstration; affordances; motor control; state
Online Access: https://www.frontiersin.org/article/10.3389/fnbot.2019.00056/full
id doaj-b24b3849a46c462ab65cbc0c27df2620
record_format Article
doi 10.3389/fnbot.2019.00056
affiliation Carlos Maestre: UMR 7222, ISIR, Sorbonne Université and CNRS, Paris, France
affiliation Ghanim Mukhtar: UMR 7222, ISIR, Sorbonne Université and CNRS, Paris, France
affiliation Christophe Gonzales: UMR 7606, LIP6, Sorbonne Université and CNRS, Paris, France
affiliation Stephane Doncieux: UMR 7222, ISIR, Sorbonne Université and CNRS, Paris, France
collection DOAJ
sources DOAJ
author Carlos Maestre
Ghanim Mukhtar
Christophe Gonzales
Stephane Doncieux
title Action Generation Adapted to Low-Level and High-Level Robot-Object Interaction States
publisher Frontiers Media S.A.
series Frontiers in Neurorobotics
issn 1662-5218
publishDate 2019-07-01
description Our daily environments are complex, composed of objects with different features. These features can be categorized into low-level features, e.g., an object's position or temperature, and high-level features resulting from a pre-processing of low-level features for decision purposes, e.g., a binary value saying whether an object is too hot to be grasped. Moreover, our environments are dynamic, i.e., object states can change at any moment. Therefore, robots performing tasks in these environments must have the capacity to (i) identify the next action to execute based on the available low-level and high-level object states, and (ii) dynamically adapt their actions to state changes. We introduce a method named Interaction State-based Skill Learning (IS2L), which builds skills to solve tasks in realistic environments. A skill is a Bayesian network that infers actions composed of a sequence of movements of the robot's end-effector, which locally adapt to spatio-temporal perturbations using a dynamical system. In the current paper, an external agent performs one or more kinesthetic demonstrations of an action, generating a dataset of high-level and low-level states of the robot and the environment objects. First, the method transforms each interaction to represent (i) the relationship between the robot and the object and (ii) the next robot end-effector movement to perform at consecutive instants of time. Then, the skill is built, i.e., the Bayesian network is learned. While generating an action, this skill relies on the robot and object states to infer the next movement to execute. This movement selection is inspired by a type of predictive model for action selection usually called affordances. The main contribution of this paper is combining the main features of dynamical systems and affordances in a single method to build skills that solve tasks in realistic scenarios. More precisely, it combines the low-level movement generation of dynamical systems, which adapts to local perturbations, with next-movement selection based simultaneously on high-level and low-level states. This contribution was assessed in three experiments in realistic environments using both high-level and low-level states. The built skills solved the respective tasks, relying on both types of states and adapting to external perturbations.
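The description credits the dynamical system with local adaptation to spatio-temporal perturbations. A minimal sketch of that idea, not the paper's actual controller, is a point attractor pulling the end-effector toward a goal: if the goal is displaced mid-execution, subsequent movements adapt automatically. Gains, positions, and the perturbation timing below are invented for illustration.

```python
# Point-attractor dynamical system in 2D (pure Python, no dependencies).
# Each step moves the end-effector a fraction of the way toward the goal,
# so a displaced goal (spatial perturbation) is absorbed without replanning.
def step(x, goal, k=2.0, dt=0.05):
    """One Euler step of dx/dt = k * (goal - x)."""
    return [xi + k * (gi - xi) * dt for xi, gi in zip(x, goal)]

x = [0.0, 0.0]
goal = [1.0, 0.0]
for t in range(200):
    if t == 50:                      # perturbation: the object is displaced
        goal = [1.0, 0.5]
    x = step(x, goal)

err = max(abs(xi - gi) for xi, gi in zip(x, goal))
print(err < 1e-3)  # -> True: the trajectory converged to the moved goal
```

IS2L's contribution, per the abstract, is layering state-based next-movement selection on top of this kind of low-level adaptive movement generation.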
topic skill building
action generation
learning from demonstration
affordances
motor control
state
url https://www.frontiersin.org/article/10.3389/fnbot.2019.00056/full