Unsupervised learning of reflexive and action-based affordances to model adaptive navigational behavior

Bibliographic Details
Main Authors: Daniel Weiller, Leonhard Läer, Andreas K Engel, Peter König
Format: Article
Language: English
Published: Frontiers Media S.A. 2010-05-01
Series: Frontiers in Neurorobotics
Subjects:
Online Access: http://journal.frontiersin.org/Journal/10.3389/fnbot.2010.00002/full
Description
Summary: Here we introduce a cognitive model capable of modeling a variety of behavioral domains, and apply it to a navigational task. We used place cells as the sensory representation, such that the cells' place fields divided the environment into discrete states. The robot learns knowledge of the environment by memorizing the sensory outcomes of its motor actions. This knowledge is composed of a central process, which learns the probability of state-to-state transitions caused by motor actions, and a distal processing routine, which learns the extent to which these state-to-state transitions are caused by sensory-driven reflex behavior (obstacle avoidance). Navigational decision making integrates the centrally and distally learned environmental knowledge to select an action that leads to a goal state. Differentiating distal and central processing increases the behavioral accuracy of the selected actions and the ability to adapt behavior to a changed environment. We propose that the system can canonically be expanded to model other behaviors, using alternative definitions of states and actions. The emphasis of this paper is to test this general cognitive model on a robot in a real-world environment.
ISSN:1662-5218
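
The summary describes the model only at a conceptual level: discrete states defined by place fields, motor actions, learned state-to-state transition probabilities, and goal-directed action selection over the learned model. The sketch below illustrates that general idea and is not the authors' implementation; the class and method names (TransitionModelAgent, observe, plan) and the use of value iteration for action selection are illustrative assumptions, and the distal (reflex-driven) learning component of the paper is omitted.

```python
# Minimal sketch of the general idea in the summary, not the authors'
# implementation. Names are illustrative; the distal (reflex-driven)
# component of the published model is omitted.
import numpy as np

class TransitionModelAgent:
    def __init__(self, n_states, n_actions):
        # Transition counts: counts[s, a, s_next], with a uniform prior so
        # unseen transitions keep a small nonzero probability.
        self.counts = np.ones((n_states, n_actions, n_states))

    def observe(self, s, a, s_next):
        # "Memorize the sensory outcome of a motor action": count the
        # observed state-to-state transition for action a.
        self.counts[s, a, s_next] += 1

    def transition_probs(self):
        # Estimated P(s_next | s, a) from the accumulated counts.
        return self.counts / self.counts.sum(axis=2, keepdims=True)

    def plan(self, goal, gamma=0.95, iters=100):
        # Select actions leading toward the goal state by value iteration
        # over the learned transition model (assumed planning scheme).
        P = self.transition_probs()
        n_states = P.shape[0]
        reward = np.zeros(n_states)
        reward[goal] = 1.0
        values = np.zeros(n_states)
        for _ in range(iters):
            q = reward[:, None] + gamma * np.einsum('san,n->sa', P, values)
            values = q.max(axis=1)
        return q.argmax(axis=1)  # greedy action for every state


# Toy usage: four states on a line, action 0 = "left", action 1 = "right".
# After random exploration the greedy policy should point right toward the
# goal state 3 from states 0-2.
agent = TransitionModelAgent(n_states=4, n_actions=2)
rng = np.random.default_rng(0)
for _ in range(500):
    s = rng.integers(4)
    a = rng.integers(2)
    s_next = max(0, s - 1) if a == 0 else min(3, s + 1)
    agent.observe(s, a, s_next)
print(agent.plan(goal=3))
```

In the paper, this kind of action-caused ("central") transition knowledge is learned separately from reflex-caused ("distal") transitions such as obstacle avoidance, and the two are integrated during decision making; the sketch collapses them into a single learned model for brevity.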