Welcome to the webpage for the Language and Music Cognition Lab (LMCL), part of the Department of Psychology and the Program in Neuroscience and Cognitive Science at the University of Maryland, College Park.
Language and music may be the most impressive examples of humans’ capacity to process complex sound and structure. Work in our lab aims to better understand these abilities – that is, we investigate the cognitive science of language and music. This includes work focusing specifically on language processing (especially word and sentence production), work focusing specifically on sound and music perception, and work directly comparing linguistic and musical processing. Most of our work relies on behavioral paradigms, but we also draw on methods from cognitive neuroscience (including EEG, MEG, and fMRI) and neuropsychology (specifically, investigations of linguistic and musical perception in individuals with brain damage).
Please explore our recent publications, our past and current collaborators, and ways you might be able to get involved! Additional resources are available in the Resources tab at the top of the page, or feel free to send us an email via the contact link.
Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography (MEG), we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the MEG recordings. We show that sound tokens can be decoded from brain activity beginning 90 milliseconds after stimulus onset, with peak decoding performance occurring at 155 milliseconds post-stimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.
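The time-resolved decoding approach described above can be sketched in a few lines: train a classifier on the sensor pattern at each timepoint separately and track cross-validated accuracy over time. This is only a minimal illustration with simulated data; the dimensions, classifier, noise model, and the timepoint at which the category signal appears are all assumptions, not details from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dimensions: trials x sensors x timepoints.
n_trials, n_sensors, n_times = 80, 32, 50
labels = rng.integers(0, 2, size=n_trials)  # two stimulus categories

# Simulated recordings: sensor noise plus a category-dependent signal
# that emerges partway through the epoch, mimicking the post-onset rise
# in decodability reported for real MEG data.
data = rng.normal(size=(n_trials, n_sensors, n_times))
signal = np.zeros(n_times)
signal[20:] = 1.0  # signal present from timepoint 20 onward (assumed)
data += labels[:, None, None] * signal[None, None, :] * 0.8

# Decode the stimulus category independently at each timepoint.
scores = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    data[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])

peak_time = int(np.argmax(scores))
print(f"peak decoding at timepoint {peak_time}, "
      f"accuracy {scores[peak_time]:.2f}")
```

Mapping each timepoint's accuracy back to milliseconds after stimulus onset is what yields statements like "decodable from 90 ms, peaking at 155 ms"; the representational similarity analysis mentioned above would additionally compare the pattern dissimilarities at each timepoint against model dissimilarity matrices built from acoustic features.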