A neuromechanistic model for rhythmic beat generation.

When listening to music, humans can easily identify and move to the beat. Numerous experimental studies have identified brain regions that may be involved with beat perception and representation. Several theoretical and algorithmic approaches have been proposed to account for this ability. Related to, but different from, the issue of how we perceive a beat is the question of how we learn to generate and hold a beat. In this paper, we introduce a neuronal framework for a beat generator that is capable of learning isochronous rhythms over a range of frequencies relevant to music and speech. Our approach combines ideas from error-correction and entrainment models to investigate the dynamics of how a biophysically based neuronal network model synchronizes its period and phase to match that of an external stimulus. The model makes novel use of ongoing faster gamma rhythms to form a set of discrete clocks that provide estimates, but not exact information, of how well the beat generator spike times match those of a stimulus sequence. The beat generator is endowed with plasticity, allowing it to quickly learn and thereby adjust its spike times to achieve synchronization. Our model makes generalizable predictions about the existence of asymmetries in the synchronization process, as well as specific predictions about resynchronization times after changes in stimulus tempo or phase. Analysis of the model demonstrates that accurate rhythmic time keeping can be achieved over a range of frequencies relevant to music, in a manner that is robust to changes in parameters and to the presence of noise.
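The error-correction side of such models can be illustrated with a generic linear sketch: a beat generator that nudges both its period and its phase by a fraction of the asynchrony between its own beats and the stimulus onsets. This is a minimal illustration of the error-correction idea, not the authors' biophysical gamma-clock network; the function name, the gains `alpha` and `beta`, and all parameter values are illustrative assumptions.

```python
# Generic two-parameter linear error correction for beat entrainment.
# NOT the paper's biophysical model: a sketch of the error-correction
# idea, with illustrative gains alpha (phase) and beta (period).

def entrain(stimulus_period, n_beats=40, alpha=0.5, beta=0.3,
            init_period=0.8, init_phase=0.1):
    """Return asynchronies (beat time minus nearest stimulus onset)."""
    period = init_period
    t = init_phase                 # time of the generator's next beat
    asynchronies = []
    for _ in range(n_beats):
        # pair each beat with the nearest stimulus onset (isochronous train)
        nearest_onset = round(t / stimulus_period) * stimulus_period
        error = t - nearest_onset  # positive = beat came late
        asynchronies.append(error)
        period -= beta * error     # tempo (period) correction
        t += period - alpha * error  # phase correction on the next beat
    return asynchronies

errs = entrain(stimulus_period=0.6)
print(f"first asynchrony: {errs[0]:+.3f} s, last: {errs[-1]:+.4f} s")
```

With these gains the asynchrony decays as a damped oscillation toward zero, so the generator locks to the stimulus within a couple dozen beats even when it starts 200 ms too slow; resynchronization after a tempo change can be explored by calling `entrain` again with a new `stimulus_period` and the learned period as `init_period`.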


Bibliographic Details
Main Authors: Amitabha Bose, Áine Byrne, John Rinzel
Format: Article
Language: English
Published: Public Library of Science (PLoS), 2019-05-01
Series: PLoS Computational Biology
ISSN: 1553-734X, 1553-7358
Online Access: https://doi.org/10.1371/journal.pcbi.1006450