Welcome to the webpage for the Language and Music Cognition Lab (LMCL), part of the Department of Psychology and the Program in Neuroscience and Cognitive Science at the University of Maryland, College Park.
Language and music may be the most impressive examples of humans’ capacity to process complex sound and structure. Work in our lab aims to better understand these abilities – that is, we investigate the cognitive science of language and music. This includes work focusing specifically on language processing (especially word and sentence production), work focusing specifically on sound and music perception, and work directly comparing linguistic and musical processing. Most of our work relies on behavioral paradigms, but we also draw on methods from cognitive neuroscience (including EEG, MEG, and fMRI) and neuropsychology (specifically, investigations of linguistic and musical perception in individuals with brain damage).
Please explore our recent publications, our past and current collaborators, and ways you might get involved! Additional resources are available in the Resources tab at the top of the page, or feel free to send us an email via the contact link.
Pitch can convey information about emotion in both spoken language and music. Given this, do people use pitch to communicate emotion in similar ways across the two domains? To investigate this question, we examine the intervals between the fundamental frequencies (f0) of adjacent syllables in emotional speech produced by actors. We first ask whether descending minor third intervals are more prevalent in sad speech than in other types of emotional speech, as has been reported previously. In these data, we see no evidence that descending minor thirds are characteristic of sad speech. In fact, we find little evidence that any specific musical interval is associated with a specific emotion in these longer sentences. We suggest that speakers might borrow emotional cues from music only when other prosodic options are infeasible.
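The interval measure described above can be sketched in a few lines of code: the distance between two f0 values in semitones is 12 times the base-2 logarithm of their ratio, and a descending minor third corresponds to an interval of −3 semitones. This is only an illustrative sketch; the function names and the ±0.5-semitone tolerance are assumptions for the example, not the lab's actual analysis pipeline.

```python
import math

def intervals_in_semitones(f0_values):
    """Convert per-syllable f0 values (Hz) to intervals (semitones)
    between adjacent syllables: 12 * log2(f0_next / f0_prev)."""
    return [12 * math.log2(b / a) for a, b in zip(f0_values, f0_values[1:])]

def is_descending_minor_third(interval, tolerance=0.5):
    """Check whether an interval is within a tolerance of -3 semitones.
    The tolerance value here is an illustrative choice."""
    return abs(interval - (-3)) <= tolerance

# Example: three syllables, where the last step falls a minor third
f0 = [220.0, 220.0, 220.0 * 2 ** (-3 / 12)]  # third value is about 185 Hz
steps = intervals_in_semitones(f0)
print([round(s, 2) for s in steps])                 # [0.0, -3.0]
print([is_descending_minor_third(s) for s in steps])  # [False, True]
```

A corpus analysis along these lines would then tally how often such intervals occur in sad versus other emotional speech, which is the comparison the work above evaluates.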