Summary: | As the field of autonomous robotics grows and its applications broaden, an enormous number of sensors and actuators, sometimes redundant, have been added to mobile robots. These fully equipped entities are now expected to perceive and act in their surrounding world in a human-like fashion, through perception, reasoning, planning and decision-making processes. The increased complexity of the resulting systems and the nature of the environments where autonomous robots are expected to operate - continuous, partially unknown and usually unpredictable - demand techniques to deal with this overload of data. Humans, who face the same problem when sounds, images and smells reach their senses in everyday scenes, apply a natural filter: attention. Although many computational models apply attentive systems to robotics, they are usually restricted to two classes: a) systems with complex, biologically based visual attentional mechanisms, and b) systems with simpler attentional mechanisms but a larger variety of sensors. In order to evaluate an attentional system that operates with robotic sensors other than visual ones, this work presents a biologically inspired computational attentional model that handles both top-down and bottom-up attention and is able to learn how to redistribute its limited resources over time and space. Experiments performed on a high-fidelity simulator demonstrate the feasibility of the proposed attentional model and its capability of performing decision making and learning over attention-modulated data. The proposed system achieves a significant reduction (96%) of the original state space built over multiple sensory systems.