Research in our lab is focused on the neurobiological basis for speech and language. Common themes include the interplay between acoustic and cognitive processing during communication, how listeners deal with reduced sensory detail (such as speech in a noisy restaurant, or due to hearing loss), and the various linguistic and sensory cues that help listeners predict incoming signals.
Below are some of our ongoing research projects. For all of these we make use of behavioral measures and human brain imaging (including structural and functional MRI, optical imaging, and MEG/EEG).
Understanding speech with a cochlear implant
If the inner ear is damaged, a cochlear implant can restore hearing by electrically stimulating the auditory nerve. However, the signal delivered by a cochlear implant is far less clear than what typical acoustic hearing provides, and listeners with cochlear implants often find understanding speech effortful. We use eye tracking, pupillometry, and optical brain imaging to better understand how listeners compensate for acoustically challenging speech, and to explore why some listeners are more successful than others.
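As an illustration of the kind of acoustic degradation involved, researchers often simulate cochlear-implant processing for normal-hearing listeners using noise vocoding, which preserves the slow amplitude envelope within each frequency band while discarding fine spectral detail. The sketch below is a minimal, hypothetical example of that idea; the band count, filter settings, and processing choices are assumptions for illustration, not a description of our actual stimuli.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Crude cochlear-implant-style simulation: keep each band's slow
    amplitude envelope, but replace the fine structure with noise."""
    # Log-spaced band edges; f_hi must stay below the Nyquist frequency (fs / 2).
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(speech))
    out = np.zeros(len(speech))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)        # speech restricted to this band
        envelope = np.abs(hilbert(band))       # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)      # noise restricted to the same band
        out += envelope * carrier              # envelope-modulated noise band
    return out / np.max(np.abs(out))           # normalize to avoid clipping

# Usage (hypothetical): vocoded = noise_vocode(waveform, fs=16000, n_bands=8)
```

Reducing the number of bands makes the output progressively harder to understand, which gives a simple way to manipulate how acoustically challenging the speech is.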
The brain basis of natural communication
Our lab’s research focuses on understanding how cognitive and neural processes generalize to everyday human experiences. We investigate real-world contexts such as social interactions, decision-making, emotional responses, and the integration of sensory information in dynamic environments. For example, speech perception is a fundamental multisensory process that requires integrating auditory cues with visual details like lip movements and facial expressions. Our goal is to study these situations in real time, bridging the gap between controlled laboratory studies and the complexity of daily life.
Previous work in our lab has shown that audiovisual cues, such as seeing a speaker’s mouth, enhance speech perception, particularly in noisy environments. Building on these findings, we investigate the mechanisms supporting speech processing in natural settings. We use neuroimaging techniques suited to realistic conditions, including fMRI, fNIRS, and high-density diffuse optical tomography (HD-DOT), and we employ naturalistic stimuli, such as movies, that provide rich multisensory input and engage a broad network of neural systems. By combining these neuroimaging methods with ecologically valid paradigms, we aim to uncover how the brain supports complex behaviors in everyday contexts.
Exercise, cognition, and communication
Treadmill VO2max test with spotters
Have you ever been at a crowded restaurant or mall with your grandparents and found that, despite their hearing aids, they struggle to hear you over the surrounding noise? Understanding speech and conversation in noisy places becomes increasingly difficult with age. Despite decades of interest in why some listeners struggle more than others to understand speech in noise, answers remain elusive. Prior work suggests that these difficult listening tasks recruit cognitive and executive resources in the brain, not just auditory processes. This would mean that deciphering meaning from a degraded acoustic signal increases cognitive demand and draws on finite neural resources that would otherwise be available for processes such as memory and attention.
Previous work in the lab has shown that performance on executive function tasks correlates with how well participants perform on speech-in-noise tasks. There is also a rich history of research on the benefits of aerobic fitness for executive function, with higher fitness linked to better performance on a range of executive tasks. If speech-in-noise performance is related to executive function, and aerobic fitness improves executive function, can aerobic fitness also improve speech-in-noise performance?
Our lab aims to test this idea by examining the relationship between maximal oxygen consumption (VO2max) and performance on various cognitive and speech-in-noise tasks. Further work will explore whether improving cardiorespiratory fitness also improves speech-in-noise performance, as it does for executive function.
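At its simplest, the first step here is correlational: does VO2max predict speech-in-noise performance across participants? The snippet below sketches that analysis on simulated data; the variable names, sample size, and effect size are illustrative assumptions, not results from the lab.

```python
import numpy as np
from scipy import stats

# Simulated data for illustration only -- not measurements from our studies.
rng = np.random.default_rng(1)
n = 40
vo2max = rng.normal(35, 8, n)        # treadmill VO2max in ml/kg/min (hypothetical)
sin_score = np.clip(                 # proportion of keywords reported correctly in noise
    0.4 + 0.01 * vo2max + rng.normal(0, 0.08, n), 0, 1
)

r, p = stats.pearsonr(vo2max, sin_score)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

In practice, covariates such as age and hearing thresholds would also need to be accounted for (for example with partial correlations or regression), since both fitness and speech-in-noise performance change with age.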
