It is widely acknowledged that people on the autism spectrum (ASD) modulate aspects of speech and voice atypically, including pitch, fluency, and voice quality. ASD speech has at times been described as “odd”, “mechanical”, or “monotone”, yet this oddness has proven difficult to quantify and explain. In this project, we quantify how the speech patterns of people with Asperger’s Syndrome (AS) differ from those of matched controls. To do so, we employed both (1) traditional measures (pitch range and standard deviation, pause duration, and so on) and (2) non-linear techniques measuring the structure (regularity and complexity) of verbal, prosodic, and fluency behaviour. Our aims were (1) to achieve a more fine-grained understanding of speech patterns in AS than traditional, linear measures of prosody and fluency have previously afforded, and (2) to use the results in a supervised machine-learning process to classify a speech sample as belonging to either the control or the AS group, and to estimate the severity of the disorder (as measured by the Autism Spectrum Quotient), based solely on acoustic features.
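To make the traditional measures concrete, the following sketch computes pitch range, pitch standard deviation, and pause durations from an F0 contour. It is purely illustrative, not the project's actual pipeline: it assumes a hypothetical contour sampled at a 10 ms frame step, with 0.0 marking unvoiced frames (a common convention in pitch trackers).

```python
import statistics

def prosodic_features(f0, frame_s=0.01):
    """Compute simple linear prosody measures from an F0 contour.

    f0: list of F0 values in Hz, one per frame; 0.0 marks an unvoiced
        frame (assumed convention, as in many pitch trackers).
    frame_s: frame step in seconds (assumed 10 ms here).
    """
    voiced = [v for v in f0 if v > 0]
    # Pitch range and standard deviation over voiced frames only
    pitch_range = max(voiced) - min(voiced)
    pitch_sd = statistics.stdev(voiced)
    # Pause durations: runs of consecutive unvoiced frames
    pauses, run = [], 0
    for v in f0:
        if v == 0:
            run += 1
        elif run:
            pauses.append(run * frame_s)
            run = 0
    if run:  # trailing pause, if the contour ends unvoiced
        pauses.append(run * frame_s)
    mean_pause = sum(pauses) / len(pauses) if pauses else 0.0
    return {"pitch_range": pitch_range, "pitch_sd": pitch_sd,
            "mean_pause": mean_pause, "n_pauses": len(pauses)}

# Toy contour: voiced stretches separated by unvoiced gaps
contour = [120, 150, 180, 0, 0, 0, 130, 140, 0, 0, 160]
print(prosodic_features(contour))
```

Feature vectors of this kind, one per speech sample, would then serve as input to a supervised classifier in the second aim above; the non-linear regularity and complexity measures require recurrence-based methods not shown here.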