Recent neuroscience studies have shown that it is possible to predict how concrete objects are represented in the brain based on the semantic relations of the words defining the corresponding concepts. Whether we read the word ‘smile’ or recognize the same expression in a face, the mental processes captured as event-related potentials in EEG brain imaging appear indistinguishable. As both low-level semantics and our affective responses can be encoded in words, we propose a simplified cognitive approach to modeling how we emotionally perceive media. Representing song texts in a vector space of reduced dimensionality using latent semantic analysis (LSA), we define distances between lines of lyrics and frequently used emotional last.fm tags that constrain the latent semantics along the psychological dimensions of valence and arousal. We compare the LSA-derived emotions in the texts with the user-annotated tag clouds describing the corresponding songs on last.fm, and suggest that the retrieved patterns may provide a sparse representation of how we perceive the emotional content in media.
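The core pipeline described in the abstract can be sketched as follows: project texts into a low-dimensional latent semantic space via a truncated SVD (LSA) and score lyric lines against emotion-tag documents by cosine similarity. The lyric lines, tag documents, and dimensionality below are toy assumptions for illustration, not the paper's data or parameters.

```python
import numpy as np

# Hypothetical toy corpus: two lyric lines and two emotion "tag documents"
# standing in for last.fm tag clouds (illustrative, not the paper's data).
lyric_lines = [
    "tears sorrow night lonely",
    "joy laughter sun dancing",
]
tag_docs = {
    "sad": "sad tears sorrow lonely",
    "happy": "happy joy laughter dancing",
}

docs = lyric_lines + list(tag_docs.values())
vocab = sorted({w for d in docs for w in d.split()})

def bow(text):
    """Bag-of-words count vector over the shared vocabulary."""
    words = text.split()
    return np.array([words.count(w) for w in vocab], dtype=float)

A = np.stack([bow(d) for d in docs])   # documents x terms matrix

# LSA: truncated SVD keeping k latent dimensions.
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
Z = U[:, :k] * s[:k]                   # documents in the latent space

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

lines_z = Z[: len(lyric_lines)]
tags_z = Z[len(lyric_lines):]
tags = list(tag_docs)

# Assign each lyric line the emotion tag closest in the latent space.
for line, z in zip(lyric_lines, lines_z):
    sims = {t: cosine(z, tz) for t, tz in zip(tags, tags_z)}
    best = max(sims, key=sims.get)
    print(f"{best:5s} <- {line}")
```

On this toy corpus the first line lands nearest the "sad" tag and the second nearest "happy"; in the paper's setting the tag documents would instead be built from frequently used last.fm tags chosen to span the valence and arousal dimensions.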
Proceedings of the 9th IEEE Conference on Automatic Face and Gesture Recognition (FG 2011), 2011, pp. 821-826
Song lyrics; Emotions; Latent semantics
Main Research Area:
IEEE Conference on Automatic Face and Gesture Recognition, 2011