Lifelong Learning of Spatiotemporal Representations With Dual-Memory Recurrent Self-Organization

Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting, in which novel sensory experience interferes with existing representations and leads to abrupt decreases in performance on previously acquired knowledge. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. Therefore, specialized neural network mechanisms are required that adapt to novel sequential experience while preventing disruptive interference with existing representations. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.

Bibliographic Details
Main Authors: German I. Parisi (Knowledge Technology, Department of Informatics, Universität Hamburg, Hamburg, Germany), Jun Tani (Cognitive Neurorobotics Research Unit, Okinawa Institute of Science and Technology, Okinawa, Japan), Cornelius Weber (Knowledge Technology, Department of Informatics, Universität Hamburg, Hamburg, Germany), Stefan Wermter (Knowledge Technology, Department of Informatics, Universität Hamburg, Hamburg, Germany)
Format: Article
Language: English
Published: Frontiers Media S.A., 2018-11-01
Series: Frontiers in Neurorobotics
ISSN: 1662-5218
DOI: 10.3389/fnbot.2018.00078
Subjects: lifelong learning; complementary learning systems; self-organizing networks; continuous object recognition; catastrophic forgetting
Online Access: https://www.frontiersin.org/article/10.3389/fnbot.2018.00078/full
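
Illustration (not part of the record): the abstract describes growing recurrent self-organizing networks that expand when novel spatiotemporal input arrives and that match inputs against a recurrent temporal context. The sketch below is a minimal, hedged illustration of that idea in Python; the class name, parameter values, and update rules are assumptions chosen for clarity and are not taken from the paper's exact formulation.

import numpy as np

class GrowingRecurrentSOM:
    """Minimal growing self-organizing network with a recurrent context.
    Illustrative sketch only; not the authors' exact model."""

    def __init__(self, input_dim, insertion_threshold=0.85,
                 lr_bmu=0.1, context_weight=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.a_t = insertion_threshold        # activation threshold below which the network grows
        self.lr = lr_bmu                      # learning rate for the best-matching unit (BMU)
        self.beta = context_weight            # weighting of the temporal-context term
        # start with two random neurons, each holding a weight vector and a context vector
        self.weights = [rng.random(input_dim) for _ in range(2)]
        self.contexts = [np.zeros(input_dim) for _ in range(2)]
        self.global_context = np.zeros(input_dim)   # carries temporal history across steps

    def _distances(self, x):
        # match both the current input and the recurrent context,
        # so the winner depends on the temporal history of the sequence
        return np.array([
            (1 - self.beta) * np.linalg.norm(x - w)
            + self.beta * np.linalg.norm(self.global_context - c)
            for w, c in zip(self.weights, self.contexts)
        ])

    def step(self, x):
        d = self._distances(x)
        bmu = int(np.argmin(d))
        activation = float(np.exp(-d[bmu]))   # close to 1 when x is already well represented
        if activation < self.a_t:
            # novel spatiotemporal input: insert a new neuron between the input and the winner
            self.weights.append((x + self.weights[bmu]) / 2.0)
            self.contexts.append(self.global_context.copy())
        else:
            # familiar input: adapt the winner toward the input and the current context
            self.weights[bmu] = self.weights[bmu] + self.lr * (x - self.weights[bmu])
            self.contexts[bmu] = self.contexts[bmu] + self.lr * (self.global_context - self.contexts[bmu])
        # update the recurrent context from the winning neuron's weights
        self.global_context = (1 - self.beta) * x + self.beta * self.weights[bmu]
        return bmu, activation

# Example: feed a short sequence of frame-wise feature vectors (e.g., CNN features)
net = GrowingRecurrentSOM(input_dim=128)
for frame in np.random.default_rng(1).random((50, 128)):
    net.step(frame)
print(f"network grew to {len(net.weights)} neurons")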