Learning Actions From Natural Language Instructions Using an ON-World Embodied Cognitive Architecture

Endowing robots with the ability to view the world the way humans do, to understand natural language and to learn novel semantic meanings when they are deployed in the physical world, is a compelling problem. Another significant aspect is linking language to action, in particular utterances involving abstract words, in artificial agents. In this work, we propose a novel methodology, using a brain-inspired architecture, to model an appropriate mapping of language with the percept and internal motor representation in humanoid robots. This research presents the first robotic instantiation of a complex architecture based on Baddeley's Working Memory (WM) model. Our proposed method grants a scalable knowledge representation of verbal and non-verbal signals in the cognitive architecture, which supports incremental open-ended learning. Human spoken utterances about the workspace and the task are combined with the internal knowledge map of the robot to achieve task accomplishment goals. We train the robot to understand instructions involving higher-order (abstract) linguistic concepts of developmental complexity, which cannot be directly hooked in the physical world and are not pre-defined in the robot's static self-representation. Our proposed interactive learning method grants flexible run-time acquisition of novel linguistic forms and real-world information, without training the cognitive model anew. Hence, the robot can adapt to new workspaces that include novel objects and task outcomes. We assess the potential of the proposed methodology in verification experiments with a humanoid robot. The obtained results suggest robust capabilities of the model to link language bi-directionally with the physical environment and solve a variety of manipulation tasks, starting with limited knowledge and gradually learning from the run-time interaction with the tutor, past the pre-trained stage.

Bibliographic Details
Main Authors: Ioanna Giorgi, Angelo Cangelosi, Giovanni L. Masala
Author Affiliations: Department of Computer Science, The University of Manchester, Manchester, United Kingdom (Giorgi, Cangelosi); Department of Computing and Mathematics, Manchester Metropolitan University, Manchester, United Kingdom (Masala)
Format: Article
Language: English
Published: Frontiers Media S.A., 2021-05-01
Series: Frontiers in Neurorobotics
ISSN: 1662-5218
DOI: 10.3389/fnbot.2021.626380
Subjects: cognitive architecture; natural language learning; language to action; semantic mapping; abstract words; action grounding
Online Access: https://www.frontiersin.org/articles/10.3389/fnbot.2021.626380/full
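
Illustrative note: the abstract emphasizes run-time, open-ended acquisition of novel words grounded in perception and action, without retraining the cognitive model. The short Python sketch below is a purely illustrative toy, not the article's WM-based architecture: it only shows, under that reading, how an extensible word-to-action lexicon with a tutor fallback could behave. All identifiers here (Lexicon, ground_instruction, ask_tutor) are hypothetical.

```python
"""Toy sketch only: an extensible language-to-action lexicon.

This is NOT the architecture described in the article (which is based on
Baddeley's Working Memory model); it merely illustrates the general idea of
acquiring novel words at run time without retraining a model.
"""

from typing import Callable, Dict


class Lexicon:
    """Maps known words to action primitives; unknown words are acquired
    at run time from the tutor, mimicking open-ended learning."""

    def __init__(self) -> None:
        # Pre-trained (seed) knowledge: a few concrete, grounded verbs.
        self.word_to_action: Dict[str, Callable[[str], None]] = {
            "grasp": lambda obj: print(f"[robot] grasping {obj}"),
            "lift": lambda obj: print(f"[robot] lifting {obj}"),
        }

    def acquire(self, word: str, action: Callable[[str], None]) -> None:
        """Add a novel word-action mapping without retraining anything."""
        self.word_to_action[word] = action


def ask_tutor(word: str) -> Callable[[str], None]:
    """Stand-in for the human tutor: here we fabricate a placeholder action,
    whereas a real system would ground the word in a demonstration."""
    return lambda obj: print(f"[robot] performing learned action '{word}' on {obj}")


def ground_instruction(utterance: str, lexicon: Lexicon) -> None:
    """Very naive grounding: first token is the verb, the rest is the object."""
    verb, *rest = utterance.lower().split()
    obj = " ".join(rest) if rest else "unknown object"
    if verb not in lexicon.word_to_action:
        # Novel linguistic form: acquire it interactively at run time.
        lexicon.acquire(verb, ask_tutor(verb))
    lexicon.word_to_action[verb](obj)


if __name__ == "__main__":
    lex = Lexicon()
    ground_instruction("grasp the red ball", lex)    # known word
    ground_instruction("stack the blue cube", lex)   # novel word, learned at run time
    ground_instruction("stack the green cube", lex)  # now known
```

In the article itself, grounding and acquisition are handled by the working-memory-inspired cognitive architecture and interaction with a human tutor, not by a hand-written lookup table as in this sketch.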