Research in our lab focuses on the neurobiological basis of speech comprehension. Common themes include the interplay between acoustic and cognitive processing during comprehension, how listeners cope with reduced sensory detail (for example, speech in a noisy restaurant, or speech heard through a hearing impairment), and the various linguistic and sensory cues that help listeners predict the incoming signal.

Below are some of our ongoing research projects. All of them combine behavioral measures with human brain imaging (including structural and functional MRI, optical imaging, and MEG/EEG).

Listening effort and the neural consequences of acoustic challenge

Figure: Variation in normal hearing in a group of adults over the age of 60 was significantly related to the amount of gray matter in primary auditory cortex: people with poorer hearing ability had less gray matter. (From Peelle et al., 2011, Journal of Neuroscience)

We frequently listen to speech that is acoustically degraded, whether by background noise, foreign accents, or hearing loss. In these situations, our brains must make sense of an acoustic signal that is less detailed, and thus less certain. How do our brains cope with this type of degraded sensory input? What are the long-term consequences of hearing impairment for neural organization?
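
To make the idea of acoustic degradation concrete, here is a minimal sketch of one common way such stimuli are created: mixing a clean speech recording with background noise at a controlled signal-to-noise ratio (SNR). The file names and the 0 dB target are illustrative assumptions, not our actual stimulus pipeline.

```python
# Illustrative only: degrade clean speech by adding noise at a target SNR.
import numpy as np
from scipy.io import wavfile

def mix_at_snr(speech, noise, snr_db):
    """Scale noise so that the speech-to-noise power ratio equals snr_db."""
    speech = speech.astype(float)
    noise = noise[:len(speech)].astype(float)
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Gain that brings the noise to the power implied by the target SNR.
    gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
    return speech + gain * noise

rate, speech = wavfile.read("sentence.wav")  # hypothetical mono recordings
_, noise = wavfile.read("babble.wav")
degraded = mix_at_snr(speech, noise, snr_db=0)  # at 0 dB, noise is as intense as speech
```

Lowering the SNR makes the speech progressively harder to understand, which is one way to study how the brain copes with a less certain signal.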

One way we have studied the effect of hearing ability on the brain is to look at the structure and function of auditory brain regions in listeners over the age of 60, who frequently have some degree of hearing loss. We find that individual differences in hearing ability are correlated both with the pattern of brain activity during speech comprehension and with the volume of gray matter in auditory cortex. These results suggest that hearing impairment is associated with both functional and structural brain changes, which may influence other aspects of language processing.
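
As a rough illustration of this kind of individual-differences analysis, the sketch below correlates hearing thresholds with gray matter volume across listeners. The numbers are simulated stand-ins for real measurements, and a simple Pearson correlation stands in for the full neuroimaging statistics used in published work.

```python
# Illustrative only: simulated data standing in for real hearing and MRI measures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_listeners = 40

# Hypothetical measures: pure-tone average threshold (dB HL; higher = poorer
# hearing) and gray matter volume in an auditory cortex region of interest.
thresholds = rng.normal(loc=30, scale=10, size=n_listeners)
gray_matter = 5.0 - 0.02 * thresholds + rng.normal(0, 0.3, n_listeners)

r, p = stats.pearsonr(thresholds, gray_matter)
print(f"r = {r:.2f}, p = {p:.4f}")  # a negative r: poorer hearing, less gray matter
```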

Additional studies are investigating the degree to which the acoustic clarity of speech affects our ability to remember what we have heard. Our prediction is that when speech is more difficult to hear, understanding it requires increased reliance on cognitive processes, leaving fewer resources available for encoding what was said into memory. However, this challenge may be reduced for speech that is highly predictable (as might occur in a short story) or in listeners with greater cognitive resources.

Age-related changes in speech comprehension

Figure: Top: brain regions that show an increased response to syntactically complex spoken sentences in both young and older adults. Bottom: regions in which this syntax-related activity differs as a function of age; note that older adults show increased activity in numerous regions of frontal and prefrontal cortex outside the core syntax network. (From Peelle et al., 2010, Cerebral Cortex)

Our brains undergo significant change over our lifetimes. How are we able to maintain high levels of functioning despite these changes? One of our research interests is the neural systems that support successful aging in the context of spoken language. Speech comprehension is a particularly interesting case because it involves changes to both sensory and cognitive systems.

We find that when listening to spoken sentences, older adults rely on many of the same brain regions as young adults. However, older adults also tend to recruit additional regions not used by young adults, particularly in frontal cortex. One goal of our ongoing work is to better specify the additional cognitive processes involved, and to determine whether they support acoustic processing, linguistic processing, or some combination of the two.

Rhythm and predictability in auditory processing

Figure: Various representations of a spoken sentence, highlighting the rhythmic information contained in the amplitude envelope (top) and the various levels of linguistic information conveyed (bottom). (From Peelle & Davis, 2012, Frontiers in Psychology)

When we listen to someone talk, one of the many cues we use to predict the upcoming speech signal is the amplitude modulation of the ongoing speech: rhythmic information created by the opening and closing of the mouth in combination with the vibration of the vocal cords. Interestingly, ongoing oscillations in the brain track this rhythmic information, locking on to the acoustic signal in a way that may aid comprehension. We study how this neural "tuning" to predictable temporal information, like that in speech rhythm, helps listeners process auditory input efficiently.
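
For readers curious about what the amplitude envelope actually is, here is a minimal sketch of a standard way to extract it from a recording: take the magnitude of the analytic signal (via the Hilbert transform) and low-pass filter it to keep the slow, syllable-rate modulations. The file name and the 10 Hz cutoff are illustrative choices, not a claim about our exact analysis pipeline.

```python
# Illustrative sketch: extract the slow amplitude envelope of a speech recording.
import numpy as np
from scipy.io import wavfile
from scipy.signal import hilbert, butter, filtfilt

rate, speech = wavfile.read("sentence.wav")  # hypothetical mono recording
speech = speech.astype(float)

# Instantaneous amplitude: magnitude of the analytic (Hilbert-transformed) signal.
envelope = np.abs(hilbert(speech))

# Low-pass filter at 10 Hz to keep the syllable-rate rhythm discussed above.
b, a = butter(4, 10 / (rate / 2), btype="low")
slow_envelope = filtfilt(b, a, envelope)
```

It is a slow signal of this kind that cortical oscillations appear to track during listening.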