New grant funding from NIH

I'm happy to announce that we have just been awarded a five-year research grant from the National Institute on Deafness and Other Communication Disorders (NIDCD) to study some of the neural processes involved in listening effort. My talented co-investigators on the project are Kristin Van Engen and Mitch Sommers from the Psychology Department.

The sobering side of this news is that it remains a very tough funding climate, and there are many talented scientists with great ideas who are not being funded. We count ourselves very fortunate to have the opportunity to pursue this research over the next few years.

The official abstract for the grant follows. We'll be starting the project as soon as we can and will post updates here. Stay tuned!

Approximately 36 million Americans report having some degree of hearing impairment. Hearing loss is associated with social isolation, depression, cognitive decline, and economic cost due to reduced work productivity. Understanding ways to optimize communication in listeners with hearing impairment is therefore a critical challenge for speech perception researchers. A hallmark of recent research has been the development of the concept of listening effort, which emphasizes the importance of cognitive processing during speech perception: Listeners with hearing impairment can often understand spoken language, but with increased cognitive effort, taking resources away from other processes such as attention and memory. Unfortunately, the specific cognitive processes that play a role in effortful listening remain poorly understood. The goal of the current research is to provide a more specific account of the neural and cognitive systems involved in effortful listening, and investigate how these factors affect speech comprehension. The studies are designed around a framework of lexical competition, which refers to how listeners select a correct target word from among the possible words they may have heard (Was that word “cap” or “cat”?). Lexical competition is influenced by properties of single words (words that sound similar to many others, like “cat”, are more difficult to process), the acoustic signal (poorer acoustic clarity makes correct identification more difficult), and individual differences in cognitive processing (lower inhibitory ability makes incorrect targets more likely to be perceived). Neuroanatomically, these processes are supported by dissociable regions of temporal and frontal cortex, consistent with a large-scale cortical network that supports speech comprehension. Importantly, individual differences in both hearing impairment and cognitive ability interact with the type of speech being processed to determine the level of success a listener will have in understanding speech. The current research will involve collecting measures of hearing and cognition in all participants to investigate how individual differences in these measures impact speech perception. Converging evidence from behavioral studies, eyetracking, and functional magnetic resonance imaging (fMRI) will be used to explore the cognitive and neural basis of speech perception. Aim 1 evaluates the relationship between lexical competition and listening effort during speech perception. Aim 2 characterizes multiple cognitive processes involved in processing degraded speech. Aim 3 assesses how individual differences in hearing and cognition predict speech perception, relying on a framework of lexical competition to inform theoretical interpretation. These studies will show a relationship between lexical competition and the cognitive processes engaged when processing degraded speech, providing a theoretically-motivated framework to better explain the challenges faced by both normal-hearing and hearing-impaired listeners.
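(For readers who like to see the idea made concrete: one simple way to think about lexical competition is phonological neighborhood density, the number of words that differ from a target by a single sound. The Python sketch below is a toy illustration and not part of the grant; the mini lexicon and phoneme spellings are made up for the example.)

```python
# Toy illustration of phonological neighborhood density: count the words in a
# lexicon that differ from a target by one added, deleted, or substituted
# phoneme. The mini lexicon and phoneme spellings are made up for illustration.

def is_neighbor(a, b):
    """Return True if phoneme sequences a and b differ by a single edit."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    shorter, longer = (a, b) if len(a) < len(b) else (b, a)
    return any(shorter == longer[:i] + longer[i + 1:] for i in range(len(longer)))

def neighborhood_density(target, lexicon):
    """Number of words in the lexicon that are phonological neighbors of target."""
    return sum(is_neighbor(target, word) for word in lexicon if word != target)

lexicon = {
    "cat": ("k", "ae", "t"),
    "cap": ("k", "ae", "p"),
    "cab": ("k", "ae", "b"),
    "bat": ("b", "ae", "t"),
    "at":  ("ae", "t"),
    "dog": ("d", "aa", "g"),
}

for word, phones in lexicon.items():
    density = neighborhood_density(phones, lexicon.values())
    print(f"{word}: {density} neighbor(s)")
# In this toy lexicon "cat" has several neighbors ("cap", "cab", "bat", "at")
# while "dog" has none, which is the intuition behind why words like "cat"
# invite more lexical competition.
```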


New paper: mapping speech comprehension with optical imaging (Hassanpour et al.)

Although fMRI is great for a lot of things, it also presents challenges, especially for auditory neuroscience. Echo-planar imaging is loud, and this acoustic noise can obscure stimuli or change the cognitive demands of a task (Peelle, 2014). In addition, patients with implanted medical devices can't be scanned.

My lab has been working with Joe Culver's optical radiology lab to develop a solution to these problems using high-density diffuse optical tomography (HD-DOT). Like fNIRS, HD-DOT uses light spectroscopy to measure changes in oxygenated and deoxygenated hemoglobin, which are related to the BOLD response in fMRI. HD-DOT also incorporates realistic light models to facilitate source reconstruction, which is hugely important for studies of cognitive function and makes it easier to combine results across subjects. A detailed description of our current large field-of-view HD-DOT system can be found in Eggebrecht et al. (2014).
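For the curious, the core spectroscopy step in fNIRS and DOT is the standard modified Beer-Lambert law, which relates changes in measured optical density at two wavelengths to changes in oxy- and deoxyhemoglobin. The sketch below is a generic illustration of that step, not code from our HD-DOT pipeline; the extinction coefficients, source-detector separation, and pathlength factors are placeholder values.

```python
import numpy as np

# Generic sketch of the modified Beer-Lambert law used in fNIRS/DOT:
#   delta_OD(wavelength) = (eps_HbO * dHbO + eps_HbR * dHbR) * distance * DPF
# Measuring delta_OD at two wavelengths gives a 2x2 system that can be solved
# for the changes in oxy- (HbO) and deoxyhemoglobin (HbR).
# All numbers below are placeholders, not calibrated values.

eps = np.array([[0.56, 1.55],   # wavelength 1: [eps_HbO, eps_HbR]
                [1.02, 0.78]])  # wavelength 2: [eps_HbO, eps_HbR]
distance = 3.0                  # source-detector separation (cm), placeholder
dpf = np.array([6.0, 6.0])      # differential pathlength factor per wavelength

def delta_hb(delta_od):
    """Solve for [dHbO, dHbR] given optical density changes at two wavelengths."""
    pathlength = distance * dpf            # effective pathlength per wavelength
    A = eps * pathlength[:, None]          # scale each wavelength's coefficients
    return np.linalg.solve(A, delta_od)

d_hbo, d_hbr = delta_hb(np.array([0.01, 0.02]))
print(f"dHbO = {d_hbo:.4f}, dHbR = {d_hbr:.4f}")
```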

Because HD-DOT is relatively new, an important first step in using it for speech studies was to verify that it can indeed capture responses to spoken sentences, both in terms of effect size and spatial location. Mahlega Hassanpour is a PhD student who enthusiastically took on this challenge. In our paper now out in NeuroImage (Hassanpour et al., 2015), Mahlega used a well-studied syntactic complexity manipulation, comparing sentences containing subject-relative or object-relative center-embedded clauses (taken from our previous fMRI study; Peelle et al., 2010). Object-relative constructions (along the lines of "The boy that the girl pushed was tall") are reliably harder to process than subject-relative constructions ("The boy that pushed the girl was tall").

Consistent with previous fMRI work, we found responses that increased sensibly from a low-level acoustic control condition (1-channel vocoded speech) to subject-relative sentences to object-relative sentences. The results were seen at both the single-subject level (with some expected noise) and the group level.
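For readers unfamiliar with vocoding: a 1-channel vocoded sentence keeps only the broadband amplitude envelope of the speech, typically used to modulate a noise carrier, so it is unintelligible but shares the original's overall energy fluctuations. Here is a rough Python sketch of that general technique; the parameters and carrier are illustrative and not necessarily those used in the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode_1ch(signal, fs, env_cutoff=30.0):
    """Rough sketch of 1-channel noise vocoding: extract the broadband amplitude
    envelope of the input and use it to modulate white noise. Parameters are
    illustrative, not those used in the published study."""
    # Amplitude envelope via the Hilbert transform, smoothed with a low-pass filter.
    envelope = np.abs(hilbert(signal))
    b, a = butter(4, env_cutoff / (fs / 2), btype="low")
    envelope = np.clip(filtfilt(b, a, envelope), 0, None)

    # Replace the fine structure with a noise carrier modulated by the envelope.
    vocoded = np.random.randn(len(signal)) * envelope

    # Match the overall RMS level of the original.
    vocoded *= np.sqrt(np.mean(signal ** 2) / np.mean(vocoded ** 2))
    return vocoded

# Example with a synthetic amplitude-modulated tone standing in for speech.
fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 200 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
unintelligible = noise_vocode_1ch(speech_like, fs)
```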

We are really glad to see nice responses to spoken sentences with HD-DOT and are already pursuing several other projects. More to come!


References:

Eggebrecht AT, Ferradal SL, Robichaux-Viehoever A, Hassanpour MS, Dehghani H, Snyder AZ, Hershey T, Culver JP (2014) Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics 8:448-454. doi:10.1038/nphoton.2014.107

Hassanpour MS, Eggebrecht AT, Culver JP, Peelle JE (2015) Mapping cortical responses to speech using high-density diffuse optical tomography. NeuroImage 117:319-326. doi:10.1016/j.neuroimage.2015.05.058 (PDF)

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253 (PDF)

Peelle JE, Troiani V, Wingfield A, Grossman M (2010) Neural processing during older adults' comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex 20:773-782. doi:10.1093/cercor/bhp142 (PDF)

NSF workshop on speech technology

I've just returned from a workshop at the National Science Foundation on "the role of speech science in developing robust processing applications". Participants included neuroscientists, speech scientists, psychologists, and engineers interested in speech production and perception. The goal was to foster interdisciplinary thinking about the future of speech technology and the role NSF might play in supporting these directions. It was a very interesting workshop, and I hope it leads to future discussions!