A Computational Model of Immanent Accent Salience in Tonal Music

Accents are local musical events that attract the attention of the listener; they can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. Previous research investigated grouping, metrical, and melodic accents in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic, and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model of harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 239 and 638 sonorities, rated by 16 musicians and 5 experts in music theory, respectively. Average pair-wise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all raters were combined into a single consensus measure, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters may use strategies other than the individual metrical, melodic, or harmonic accent models when marking musical events.
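
As an illustration of the evaluation described above (average pair-wise correlations between raters, and correlations between a consensus rating and model predictions), the following minimal Python sketch shows how such statistics are typically computed. This is not the authors' implementation; the function names, array shapes, and toy data are assumptions introduced here for illustration only.

# Minimal sketch (not the authors' code): comparing accent-salience model
# predictions with listener ratings, as described in the abstract.
# All variable names and the toy data below are hypothetical.

import numpy as np


def mean_pairwise_correlation(ratings: np.ndarray) -> float:
    """Average Pearson correlation over all pairs of raters.

    ratings: array of shape (n_raters, n_sonorities).
    """
    n_raters = ratings.shape[0]
    corr = np.corrcoef(ratings)                    # (n_raters, n_raters) correlation matrix
    upper = corr[np.triu_indices(n_raters, k=1)]   # off-diagonal upper triangle
    return float(upper.mean())


def consensus_vs_model(ratings: np.ndarray, predictions: np.ndarray) -> float:
    """Correlate the raters' consensus (mean rating per sonority)
    with the model's predicted accent saliences."""
    consensus = ratings.mean(axis=0)
    return float(np.corrcoef(consensus, predictions)[0, 1])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_raters, n_sonorities = 16, 239     # sizes borrowed from the first experiment
    true_salience = rng.random(n_sonorities)

    # Noisy ratings around a shared underlying salience profile (toy data).
    ratings = true_salience + 0.5 * rng.standard_normal((n_raters, n_sonorities))
    predictions = true_salience + 0.3 * rng.standard_normal(n_sonorities)

    print("mean pairwise inter-rater r:", round(mean_pairwise_correlation(ratings), 2))
    print("consensus vs. model r:", round(consensus_vs_model(ratings, predictions), 2))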

Bibliographic Details
Main Authors: Erica Bisesi, Anders Friberg, Richard Parncutt
Author Affiliations:
Erica Bisesi: Centre for Systematic Musicology, University of Graz, Graz, Austria; Laboratory “Perception and Memory”, Department of Neuroscience, Institut Pasteur, Paris, France
Anders Friberg: Department of Speech, Music and Hearing, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
Richard Parncutt: Centre for Systematic Musicology, University of Graz, Graz, Austria
Format: Article
Language: English
Published: Frontiers Media S.A., 2019-03-01
Series: Frontiers in Psychology
ISSN: 1664-1078
Subjects: immanent accents; salience; music expression; music analysis; computational modeling
Online Access: https://www.frontiersin.org/article/10.3389/fpsyg.2019.00317/full