1 Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Health, Aarhus University
2 University of Jyväskylä
3 University of Jyväskylä
4 Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University
5 Center for Semiotics, School of Communication and Culture, Arts, Aarhus University
6 Center for Music in the Brain, Department of Clinical Medicine, Health, Aarhus University
7 Center for Semiotics, School of Communication and Culture, Arts, Aarhus University
Dynamic decoding of musical features from fMRI data
We investigated neural correlates of musical feature processing with a decoding approach. To this end, we used a method that combines computational extraction of musical features with regularized multiple regression (LASSO). Optimal model parameters were determined by maximizing decoding accuracy under a leave-one-out cross-validation scheme. The method was applied to functional magnetic resonance imaging (fMRI) data that were collected using a naturalistic paradigm, in which participants' brain responses were recorded while they continuously listened to pieces of real music. The dependent variables comprised musical feature time series that were computationally extracted from the stimulus. We expected timbral features to yield higher prediction accuracies than rhythmic and tonal ones. Moreover, we expected the areas significantly contributing to the decoding models to be consistent with areas of significant activation observed in previous research using a naturalistic fMRI paradigm. Of the six musical features considered, five could be significantly predicted for the majority of participants. The areas significantly contributing to the optimal decoding models largely agreed with results obtained in previous studies. In particular, areas in the superior temporal gyrus, Heschl's gyrus, Rolandic operculum, and cerebellum contributed to the decoding of timbral features. For the decoding of the rhythmic feature, we found the bilateral superior temporal gyrus, right Heschl's gyrus, and hippocampus to contribute most. The tonal feature, however, could not be significantly predicted, suggesting a higher inter-participant variability in its neural processing. A subsequent classification experiment revealed that segments of the stimulus could be classified from the fMRI data with significant accuracy.
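The decoding scheme described above (LASSO regression predicting a musical feature time series from voxel activity, with the regularization strength chosen to maximize cross-validated prediction accuracy) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the array sizes, fold structure, and the use of Pearson correlation as the accuracy measure are assumptions for the sake of the example.

```python
# Illustrative sketch: decode a musical feature time series from fMRI voxel
# data with LASSO, selecting the regularization strength by cross-validation.
# All data and dimensions below are synthetic and purely illustrative.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n_scans, n_voxels = 120, 200                    # toy sizes: time points x voxels
X = rng.standard_normal((n_scans, n_voxels))    # stand-in for voxel time courses
w = np.zeros(n_voxels)
w[:10] = 1.0                                    # sparse ground-truth weights
y = X @ w + 0.1 * rng.standard_normal(n_scans)  # stand-in "musical feature"

# Leave one contiguous segment of the time series out per fold.
n_folds = 6
segments = np.array_split(np.arange(n_scans), n_folds)

def cv_accuracy(alpha):
    """Mean correlation between predicted and true feature across folds."""
    rs = []
    for k in range(n_folds):
        test = segments[k]
        train = np.concatenate([segments[j] for j in range(n_folds) if j != k])
        model = Lasso(alpha=alpha).fit(X[train], y[train])
        r, _ = pearsonr(model.predict(X[test]), y[test])
        rs.append(r)
    return float(np.mean(rs))

# Pick the regularization strength that maximizes decoding accuracy.
alphas = [0.001, 0.01, 0.1, 1.0]
best_alpha = max(alphas, key=cv_accuracy)
print(best_alpha, round(cv_accuracy(best_alpha), 2))
```

With a sparse ground truth, the LASSO penalty drives most voxel weights to exactly zero, so the surviving nonzero weights play the role of the "areas significantly contributing to the decoding model" discussed in the abstract.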
The present findings provide compelling evidence for the involvement of the auditory cortex, the cerebellum, and the hippocampus in the processing of musical features during continuous listening to music.