1 Center for Semiotics, School of Communication and Culture, Arts, Aarhus University
2 Interacting Minds Centre (IMC), School of Culture and Society, Arts, Aarhus University
3 Linguistics, School of Communication and Culture, Arts, Aarhus University
4 Semiotics, School of Communication and Culture, Arts, Aarhus University
5 Child and Adolescent Psychiatry (Børne- og Ungdomspsykiatri)
6 Semiotics, School of Communication and Culture, Arts, Aarhus University
7 Linguistics, School of Communication and Culture, Arts, Aarhus University
Predicting Diagnostic Status and Symptom Severity
Background: Individuals with autism spectrum disorder (ASD) tend to show atypical modulation of speech, often described as awkward, monotone, or sing-songy [1-3]. These patterns may be among the most robust and rapid signals of social communication deficits in ASD [4, 5]. However, it has proven difficult to determine a consistent set of acoustic features that can account for these perceived differences. Using recurrence quantification analysis of acoustic features, Fusaroli et al. [6] demonstrated high efficacy in identifying voice patterns characteristic of adult Danish speakers with Asperger's syndrome.

Objectives: We systematically quantify and explore speech patterns in Danish children (8-12 years) with and without autism. We employ traditional and non-linear techniques measuring the structure (regularity and complexity) of speech behavior (i.e., fundamental frequency, use of pauses, speech rate). Our aims are (1) to achieve a more fine-grained understanding of the speech patterns of children with ASD, and (2) to employ the results in a supervised machine-learning process to determine whether acoustic features can be used to predict diagnostic status and symptom severity.

Methods: Our analysis was based on previously acquired repeated narratives (TOMAL-2 [7]). We tested 25 Danish children diagnosed with ASD and matched controls. Participants had been diagnosed using ADOS and ADI-R, and their symptoms assessed with SRS and SCQ. Transcripts were time-coded, and pitch (F0), speech-pause sequences, and speech rate were automatically extracted. For each prosodic feature we calculated traditional statistical measures. We then extracted non-linear measures of recurrence [8]: treating voice as a dynamical system, we reconstructed its phase space and measured the number, duration, and structure of repeated trajectories in that space.
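The phase-space reconstruction and recurrence measures described above can be sketched as follows. This is a minimal illustration on a synthetic periodic signal, not the authors' pipeline: the embedding delay, dimension, and recurrence radius are hypothetical placeholder values (in practice they are estimated from the data, e.g. via mutual information and false nearest neighbours), and a real analysis would run on an extracted F0 track.

```python
import numpy as np

def embed(signal, dim, delay):
    """Time-delay embedding: reconstruct the phase space of a 1-D signal."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])

def recurrence_matrix(points, radius):
    """True where two phase-space points lie closer than `radius`."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return dists < radius

def determinism(rec, lmin=2):
    """Share of recurrence points on diagonal lines of length >= lmin,
    i.e. on repeated trajectories through phase space."""
    in_lines, total = 0, 0
    for k in range(1, rec.shape[0]):  # upper off-diagonals (matrix is symmetric)
        line = np.concatenate(([0], np.diagonal(rec, offset=k).astype(int), [0]))
        total += line.sum()
        starts = np.where(np.diff(line) == 1)[0]
        ends = np.where(np.diff(line) == -1)[0]
        in_lines += sum(e - s for s, e in zip(starts, ends) if e - s >= lmin)
    return in_lines / total if total else 0.0

# Synthetic stand-in for a pitch (F0) track: a strongly periodic signal.
f0 = np.sin(np.linspace(0, 8 * np.pi, 200))
points = embed(f0, dim=2, delay=5)        # placeholder embedding parameters
rec = recurrence_matrix(points, radius=0.3)
rr = rec.mean()                           # recurrence rate
det = determinism(rec)                    # determinism
```

A highly regular signal, like the periodic one here, yields high determinism; measures of this kind quantify the "regularly repeated patterns" that the Results below report as characteristic of autistic voices.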
The results were employed to train (1) a linear discriminant function algorithm to classify the descriptions as belonging either to the ASD or the control group, and (2) a multiple linear regression to predict scores on the Social Responsiveness Scale (SRS) and the Social Communication Questionnaire (SCQ). Both models were developed and tested using 1000 iterations of 10-fold cross-validation (to test the generalizability of the accuracy) and variational Bayesian mixed-effects inference (to compensate for biases in sample sizes).

Results: While traditional measures did not allow for accurate classification, recurrence measures made it possible to classify voices as autistic or not with balanced accuracy > 77% (p < .00001, CI = 71.79%-81.01%; sensitivity: 79.19%; specificity: 82.37%). Recurrence measures also explained variance in symptom severity: 42.76% (p < .00001) for SCQ and 55.80% for SRS (p < .00001; 48.18% for Social Consciousness, 53.92% for Social Cognition, 54.46% for Social Communication, 47.18% for Social Motivation, and 61.04% for Autistic Mannerisms). Autistic voices can be characterized by more regular (i.e., regularly repeating) pitch and pause patterns than neurotypical voices.

Conclusions: Non-linear time-series analysis techniques suggest that there are quantifiable acoustic features in the speech production of children with ASD that both distinguish them from typically developing speakers and reflect the severity of their symptoms.

References:
[1] R.B. Grossman, L. Edelson, H. Tager-Flusberg, Production of emotional facial and vocal expressions during story retelling by children and adolescents with high-functioning autism, Journal of Speech, Language, and Hearing Research, 56 (2013) 1035-1044.
[2] J.J. Diehl, L. Bennetto, D. Watson, C. Gunlogson, J. McDonough, Resolving ambiguity: A psycholinguistic approach to understanding prosody processing in high-functioning autism, Brain and Language, 106 (2008) 144-152.
[3] L.D. Shriberg, R. Paul, J.L. McSweeny, A. Klin, D.J. Cohen, F.R. Volkmar, Speech and prosody characteristics of adolescents and adults with high-functioning autism and Asperger syndrome, Journal of Speech, Language, and Hearing Research, 44 (2001) 1097-1115.
[4] R. Paul, L.D. Shriberg, J. McSweeny, D. Cicchetti, A. Klin, F.R. Volkmar, Relations between prosodic performance and communication and socialization ratings in high functioning speakers with autism spectrum disorders, Journal of Autism and Developmental Disorders, 35 (2005) 861-869.
[5] R.B. Grossman, H. Tager-Flusberg, Quality matters! Differences between expressive and receptive non-verbal communication skills in children with ASD, Research in Autism Spectrum Disorders, 6 (2012) 1150-1155.
[6] R. Fusaroli, D. Bang, E. Weed, Non-Linear Analyses of Speech and Prosody in Asperger's Syndrome, in: IMFAR 2013, San Sebastián, 2013.
[7] C.R. Reynolds, J. Voress, Test of Memory and Learning (TOMAL-2), TX: PRO-ED, 2007.
[8] N. Marwan, M. Carmen Romano, M. Thiel, J. Kurths, Recurrence plots for the analysis of complex systems, Physics Reports, 438 (2007) 237-329.
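The validation scheme described in the analysis above (a linear discriminant classifier scored by balanced accuracy under repeated stratified 10-fold cross-validation) can be sketched as below. This is an illustration on synthetic data, not the study's analysis: the feature values, class separation, and number of repetitions are placeholders, and the variational Bayesian mixed-effects inference used in the original work is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def lda_fit(X, y):
    """Fisher linear discriminant with pooled within-class covariance."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    X0, X1 = X[y == 0] - m0, X[y == 1] - m1
    cov = (X0.T @ X0 + X1.T @ X1) / (len(y) - 2)
    w = np.linalg.solve(cov + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    b = -0.5 * w @ (m0 + m1)  # threshold at the midpoint of the class means
    return w, b

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity, robust to unequal group sizes."""
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return (sens + spec) / 2

def stratified_folds(y, n_folds, rng):
    """Split each class separately so every fold contains both groups."""
    folds = [[] for _ in range(n_folds)]
    for c in np.unique(y):
        parts = np.array_split(rng.permutation(np.where(y == c)[0]), n_folds)
        for i, part in enumerate(parts):
            folds[i].extend(part)
    return [np.array(f) for f in folds]

def cross_validated_score(X, y, n_folds=10, n_iter=20):
    """Repeated n-fold cross-validation; returns mean balanced accuracy."""
    scores = []
    for _ in range(n_iter):
        for test_idx in stratified_folds(y, n_folds, rng):
            train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
            w, b = lda_fit(X[train_idx], y[train_idx])
            y_pred = (X[test_idx] @ w + b > 0).astype(int)
            scores.append(balanced_accuracy(y[test_idx], y_pred))
    return float(np.mean(scores))

# Synthetic "acoustic features" for 25 ASD and 25 matched-control recordings.
X = np.vstack([rng.normal(0.0, 1.0, (25, 4)), rng.normal(1.5, 1.0, (25, 4))])
y = np.repeat([0, 1], 25)
score = cross_validated_score(X, y)  # mean balanced accuracy across all folds
```

Holding out each fold before fitting is what makes the reported accuracy an estimate of generalization to unseen speakers rather than a fit to the training sample.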
Main Research Area:
The International Meeting for Autism Research 2014