New paper: Acoustic richness modulates networks involved in speech comprehension (Lee et al.)

Many functional imaging studies have investigated the brain networks responding to intelligible speech. Far fewer have looked at how the brain responds to speech that is acoustically degraded, but remains intelligible. This type of speech is particularly interesting, because as listeners we are frequently in the position of hearing unclear speech that we nevertheless understand—a situation even more common for people with hearing aids or cochlear implants. Does the brain care about acoustic clarity when speech is fully intelligible?

We address this question in our new paper, now out in Hearing Research (Lee et al., 2016), in which we played listeners short sentences that varied in both syntactic complexity and acoustic clarity (normal speech vs. 24-channel vocoded speech). We used an ISSS fMRI sequence (Schwarzbauer et al., 2006) to collect the data, allowing us to present the sentences with reduced acoustic noise while still obtaining relatively good temporal resolution (Peelle, 2014).

In response to syntactically complex sentences, listeners showed increased activity in large regions of left-lateralized frontoparietal cortex. This finding was expected given previous results from our group and others. In contrast, most of the differences related to acoustic clarity reflected greater activity for the acoustically detailed, normal speech. This was somewhat unexpected, as many studies show increased responses for degraded speech relative to clear speech, but we have some ideas as to what might explain our result:

  1. Studies finding degradation-related increases frequently also involve a loss of intelligibility;
  2. We did see some areas of increased activity for the degraded speech; they were simply smaller in extent than the increases for the clear speech;
  3. We used noise vocoding to manipulate the acoustic clarity of the speech signal, which reduced cues to the speaker's sex, age, emotion, and other characteristics (a rough sketch of the general approach is below).
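To give a sense of what noise vocoding involves, here is a minimal sketch in Python (illustrative only; not the code or exact parameters used to create our stimuli). The speech signal is split into frequency bands, the amplitude envelope of each band is extracted, and each envelope is used to modulate band-limited noise before the bands are summed back together.

    # Minimal noise-vocoding sketch (hypothetical parameters, for illustration only)
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=24, f_lo=100.0, f_hi=8000.0):
        """Return a noise-vocoded version of a 1-D speech signal sampled at fs (Hz)."""
        # Log-spaced band edges are one common choice of channel boundaries.
        edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
        carrier = np.random.default_rng(0).standard_normal(len(signal))  # broadband noise
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)          # band-limited speech
            envelope = np.abs(hilbert(band))         # amplitude envelope of the band
            noise_band = sosfiltfilt(sos, carrier)   # noise restricted to the same band
            out += envelope * noise_band             # envelope-modulated noise
        # Roughly match the overall level of the original signal.
        return out * np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))

The more channels are used, the more of the original spectral detail survives; with 24 channels the speech remains fully intelligible, but fine spectral structure (and with it many voice-quality cues) is reduced.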

These results continue an interesting line of work (Obleser et al., 2011) looking at the role of acoustic detail apart from intelligibility. This ties in to prosody and other aspects of spoken communication that go beyond the identity of the words being spoken (McGettigan, 2015).

Overall, we think our finding that large portions of the brain show less activation when less information is available is not as surprising as it seems, and it is extraordinarily relevant for patients with hearing loss or those using an assistive device.

Finally, I'm very happy that we've made the unthresholded statistical maps available on neurovault.org, which is a fantastic resource. Hopefully we'll see more brain imaging data deposited there (from our lab, and others!).

References:

Lee Y-S, Min NE, Wingfield A, Grossman M, Peelle JE (2016) Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research 333:108-117. doi:10.1016/j.heares.2015.12.008

McGettigan C (2015) The social life of voices: Studying the neural bases for the expression and perception of the self and others during spoken communication. Front Hum Neurosci 9:129. doi:10.3389/fnhum.2015.00129

Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. NeuroImage 56:2310-2320. doi:10.1016/j.neuroimage.2011.03.035

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 8:253. doi:10.3389/fnins.2014.00253

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025

New paper: Automatic analysis (aa) for neuroimaging analyses

I'm extra excited about this one! Out now in Frontiers in Neuroinformatics is our paper describing the automatic analysis (aa) processing pipeline (Cusack et al., 2015). aa started at the MRC Cognition and Brain Sciences Unit in Cambridge, spearheaded by Rhodri Cusack and aided by several other contributors. Recent years have seen aa mature into an extremely flexible processing environment. My own commitment to using aa was sealed at the CBU when working on our VBM comparison of 400+ subjects: with aa it was possible to run a full analysis in about a week, with 16-32 compute nodes running full time (don't tell anyone, but I think technically we weren't supposed to use more than 8...). And, because we were comparing different segmentation routines (among other things), we ran several of these analyses. Without aa I can't imagine ever doing the study. aa also played a key role in our winning HBM Hackathon entry from 2013 (or, as we affectionately called it, the haackathon).

Based on my own experience I strongly recommend that all neuroimagers learn to use some form of imaging pipeline, and aa is a great choice. For most of us there is a significant upfront investment of time and frustration. However, the payoff is well worth it, both in terms of time (you will end up saving time in the long run) and scientific quality (reproducibility, openness, and fewer opportunities for point-and-click error).

The code for aa is freely available, hosted on GitHub. Links, help, and more can be found on the main aa website: automaticanalysis.org. Comments and suggestions are very welcome, especially for the "getting started" portions (many of which are new).

By the way, several of the aa team will be at HBM this year, and we are submitting an aa poster as well. Please stop by and say hi!

Reference:

Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild C, Auer T, Linke AC, Peelle JE (2015) Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics 8:90. http://journal.frontiersin.org/Journal/10.3389/fninf.2014.00090/abstract

New paper: Methodological challenges and solutions in auditory fMRI

Fresh off the Frontiers press: my review paper on auditory fMRI methods. There are a number of other papers on this topic, but most are more than a decade old. My goal in this paper was to give an overview of the current state of auditory fMRI and to emphasize a few points that sometimes fall by the wayside. Scanner noise is often seen as a methodological issue (and a nuisance), understandably so, but it's one that can drastically impact our interpretation of results, particularly for auditory fMRI studies.

One key point is that acoustic scanner noise can affect neural activity through multiple pathways. Typically, most of the focus is placed on audibility (can subjects hear the stimuli?), followed by acknowledging a possible reduction in sensitivity in auditory regions of the brain. However, acoustic noise can also change the cognitive processes required for tasks such as speech perception. Behaviorally, there is an extensive literature showing that speech perception in quiet differs from speech perception in noise; the same is true in the scanner environment. Although we may not be able to provide optimal acoustic conditions inside a scanner, at a minimum it is useful to consider the possible impact of the acoustic challenge on observed neural responses. To me this continues to be an important point when interpreting auditory fMRI studies. I'm not convinced by the argument that because acoustic noise is present equally in all conditions we don't have to worry about it; there are good reasons to think that acoustic challenge interacts with the cognitive systems engaged.

Another point that has long been around in the literature, but is frequently downplayed in practice, is that scanner noise appears to impact other cognitive tasks too, so it's probably not just auditory neuroscientists who should be paying attention to acoustic noise in the scanner.

On the solution side, sparse imaging (aka "clustered volume acquisition") is at this point fairly well known. I also emphasize the benefits of ISSS (Schwarzbauer et al., 2006), a more recent approach to auditory fMRI. ISSS allows improved temporal resolution while still presenting stimuli in relative quiet, although because it produces a discontinuous timeseries of images, some care needs to be taken during analysis.
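To make the ISSS idea a bit more concrete, here is a rough sketch of the kind of acquisition schedule involved (the timing values below are made up for illustration, not taken from Schwarzbauer et al. or from our studies). Each trial contains a silent gap during which the stimulus is presented, followed by a short burst of volumes, so the acquired frames do not form a continuous timeseries.

    # Illustrative ISSS-style acquisition schedule (hypothetical timing values)
    silent_s = 5.0       # silent period per trial, used for stimulus presentation
    tr_s = 1.0           # TR of each volume acquired during the noisy period
    vols_per_trial = 5   # volumes collected after each silent gap
    n_trials = 3

    frame_times = []
    for trial in range(n_trials):
        trial_start = trial * (silent_s + vols_per_trial * tr_s)
        for v in range(vols_per_trial):
            frame_times.append(trial_start + silent_s + v * tr_s)

    print(frame_times)  # [5.0, 6.0, 7.0, 8.0, 9.0, 15.0, 16.0, ...]: note the gaps

Any model of these data needs to respect those gaps, for example by constructing regressors at the actual frame times rather than assuming a constant TR across the run.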

It's clear that if we care about auditory processing, scanner noise will always be a challenge. However, I'm optimistic that with some increased attention to the issue and striving to understand the effects of scanner noise rather than ignore them, things will only get better. To quote the last line of the paper: "It is an exciting time for auditory neuroscience, and continuing technical and methodological advances suggest an even brighter (though hopefully quieter) future."

[As a side note, I'm also happy to publish in the "Brain Imaging Methods" section of Frontiers. I wish it had its own title, but it's subsumed under Frontiers in Neuroscience for citation purposes.]

 

References:

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00253/abstract

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. http://dx.doi.org/10.1016/j.neuroimage.2005.08.025