New paper: Mapping speech comprehension with optical imaging (Hassanpour et al.)

Although fMRI is great for a lot of things, it also presents challenges, especially for auditory neuroscience. Echo-planar imaging is loud, and this acoustic noise can obscure auditory stimuli or change the cognitive demands of a task (Peelle, 2014). In addition, patients with implanted medical devices (cochlear implants, for example) often can't be scanned.

My lab has been working with Joe Culver's optical radiology lab to develop a solution to these problems using high-density diffuse optical tomography (HD-DOT). Like fNIRS, HD-DOT uses light spectroscopy to image changes in oxygenated and deoxygenated hemoglobin, which are related to the BOLD response in fMRI. HD-DOT also incorporates anatomically realistic light models for source reconstruction, which is hugely important for studies of cognitive function and makes it possible to combine results across subjects. A detailed description of our current large field-of-view HD-DOT system can be found in Eggebrecht et al. (2014).
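For readers less familiar with optical imaging, the basic spectroscopy step (shared with fNIRS) is the modified Beer-Lambert law: changes in light attenuation measured at two or more near-infrared wavelengths are converted to changes in oxy- and deoxyhemoglobin concentration. Here is a minimal sketch of just that step, with placeholder extinction coefficients and pathlength values; it is not the HD-DOT reconstruction described in Eggebrecht et al. (2014), which additionally uses anatomical light models to localize signals in the brain.

```python
import numpy as np

# Minimal sketch of the modified Beer-Lambert law used in fNIRS/HD-DOT:
#   delta_OD(lambda) = [eps_HbO(lambda)*dHbO + eps_HbR(lambda)*dHbR] * L * DPF(lambda)
# Solving the two-wavelength system for dHbO and dHbR.
# All numbers below are placeholder values for illustration only.

wavelengths = [750, 850]                     # nm (typical near-infrared wavelengths)
delta_od = np.array([0.012, 0.018])          # measured change in optical density (made up)

# Placeholder extinction coefficients [HbO, HbR] at each wavelength (arbitrary units)
extinction = np.array([[0.6, 1.4],           # 750 nm: deoxyhemoglobin absorbs more
                       [1.1, 0.8]])          # 850 nm: oxyhemoglobin absorbs more

source_detector_distance = 3.0               # cm (placeholder)
dpf = np.array([6.0, 5.5])                   # differential pathlength factors (placeholders)

# Effective pathlength-scaled system: delta_od = A @ [dHbO, dHbR]
A = extinction * (source_detector_distance * dpf)[:, None]
d_hbo, d_hbr = np.linalg.solve(A, delta_od)

print(f"delta HbO: {d_hbo:.4f}, delta HbR: {d_hbr:.4f} (arbitrary units)")
```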

Because HD-DOT is relatively new, an important first step in using it for speech studies was to verify that it can indeed capture responses to spoken sentences, both in terms of effect size and spatial location. Mahlega Hassanpour is a PhD student who enthusiastically took on this challenge. In our paper now out in NeuroImage (Hassanpour et al., 2015), Mahlega used a well-studied syntactic complexity manipulation, comparing sentences containing subject-relative or object-relative center-embedded clauses (for example, "the boy that pushed the girl is tall" vs. "the boy that the girl pushed is tall"), taken from our previous fMRI study (Peelle et al., 2010).

Consistent with previous fMRI work, we found a sensible increase in activity from a low-level acoustic control condition (1-channel vocoded speech) to subject-relative sentences to object-relative sentences. The results were seen at both the single-subject level (with some expected noise) and the group level.
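For anyone who hasn't come across vocoded speech: a 1-channel noise vocoder throws away spectral detail and keeps only the broadband amplitude envelope of the speech, which is then used to modulate noise. The result preserves the overall temporal structure of the sentence while being essentially unintelligible, which is what makes it a useful low-level acoustic control. The sketch below illustrates the general idea; it is my own minimal illustration with hypothetical parameters, not the code used to create the stimuli in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def one_channel_vocode(speech, fs, env_cutoff_hz=30.0):
    """Minimal one-channel noise vocoder: extract the broadband amplitude
    envelope of the speech and use it to modulate white noise.
    Illustration only; stimulus details in the paper may differ."""
    # Amplitude envelope via the Hilbert transform, then low-pass filtered
    envelope = np.abs(hilbert(speech))
    b, a = butter(4, env_cutoff_hz / (fs / 2), btype="low")
    envelope = filtfilt(b, a, envelope)

    # Modulate broadband noise with the speech envelope
    noise = np.random.randn(len(speech))
    vocoded = envelope * noise

    # Match the RMS level of the original speech
    vocoded *= np.sqrt(np.mean(speech ** 2) / np.mean(vocoded ** 2))
    return vocoded

# Example with a made-up "speech-like" signal (in practice, load a recorded sentence)
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
speech = np.sin(2 * np.pi * 150 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = one_channel_vocode(speech, fs)
```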

We are really glad to see nice responses to spoken sentences with HD-DOT and are already pursuing several other projects. More to come!


References:

Eggebrecht AT, Ferradal SL, Robichaux-Viehoever A, Hassanpour MS, Dehghani H, Snyder AZ, Hershey T, Culver JP (2014) Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics 8:448-454. doi:10.1038/nphoton.2014.107

Hassanpour MS, Eggebrecht AT, Culver JP, Peelle JE (2015) Mapping cortical responses to speech using high-density diffuse optical tomography. NeuroImage 117:319–326. doi:10.1016/j.neuroimage.2015.05.058 (PDF)

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253 (PDF)

Peelle JE, Troiani V, Wingfield A, Grossman M (2010) Neural processing during older adults' comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex 20:773-782. doi:10.1093/cercor/bhp142 (PDF)

New paper: A role for the angular gyrus in combinatorial semantics (Price et al.)

We know what a "leaf" is, and we know what "wet" means. But combining these concepts together into a "wet leaf" yields a new and possibly more specific idea. Similarly, a "brown leaf" is qualitatively different from any old leaf. Our ability to flexibly and dynamically combine concepts enables us to represent and communicate an enormous set of ideas from a relatively small number of constituents. The question of what neural systems might support conceptual combination has been a focus of research for Amy Price at Penn. Combinatorial semantics is an especially timely topic as there are ongoing debates about the anatomical systems most strongly involved in semantic memory more generally (angular gyrus? anterior temporal lobes? ventral visual regions?), as well as the nature of the information being represented (to what degree do concepts rely on sensorimotor cortices?).

In a new paper out this week in the Journal of Neuroscience (Price et al., 2015), Amy presents data from both fMRI and patients with neurodegenerative disease suggesting that the angular gyrus plays an important role in conceptual combination. Amy designed a clever task in which participants read word pairs that varied in how easily they could be combined into a single concept. For example, you could imagine that "turnip rock" is difficult to combine, whereas a "wet rock" is easier. Amy used all adjective-noun pairs, but still found a considerable amount of variability (for example, a "plaid apple" combines less easily than a "plaid jacket"). This "ease of combination" was initially quantified using subject ratings, but Amy found that lexical co-occurrence statistics for these word pairs correlated strongly with the ratings, so co-occurrence measures were used in all analyses.
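The specific co-occurrence computation isn't described here, but one common way to quantify how strongly two words co-occur in a corpus is pointwise mutual information (PMI): how much more often the words appear together than you would expect if they were independent. The sketch below is a generic illustration of that kind of measure, with made-up counts; it is not the statistic or data used in Price et al. (2015).

```python
import math
from collections import Counter

# Toy corpus of adjective-noun pair counts (made-up numbers, not real data)
pairs = ([("wet", "rock")] * 40 + [("wet", "leaf")] * 60 +
         [("brown", "leaf")] * 50 + [("plaid", "jacket")] * 30 +
         [("plaid", "apple")] * 1 + [("red", "apple")] * 80 +
         [("green", "apple")] * 40)

pair_counts = Counter(pairs)
adj_counts = Counter(adj for adj, noun in pairs)
noun_counts = Counter(noun for adj, noun in pairs)
n = len(pairs)

def pmi(adj, noun):
    """Pointwise mutual information for an adjective-noun pair:
    log2(P(adj, noun) / (P(adj) * P(noun))). Generic illustration only."""
    p_pair = pair_counts[(adj, noun)] / n
    p_adj = adj_counts[adj] / n
    p_noun = noun_counts[noun] / n
    return math.log2(p_pair / (p_adj * p_noun))

for adj, noun in [("wet", "rock"), ("plaid", "apple")]:
    print(f"PMI({adj} {noun}) = {pmi(adj, noun):.2f}")
# In this toy corpus, "wet rock" co-occurs more than chance (positive PMI)
# while "plaid apple" co-occurs less than chance (negative PMI).
```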

These findings are in good agreement with previous work emphasizing an important role for the angular gyrus in semantic representation (Binder & Desai, 2011; Bonner et al., 2013).

References:

Binder JR, Desai RH (2011) The neurobiology of semantic memory. Trends in Cognitive Sciences 15:527-536. doi:10.1016/j.tics.2011.10.001

Bonner MF, Peelle JE, Cook PA, Grossman M (2013) Heteromodal conceptual processing in the angular gyrus. NeuroImage 71:175–186. doi:10.1016/j.neuroimage.2013.01.006 (PDF)

Price AR, Bonner MF, Peelle JE, Grossman M (2015) Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. Journal of Neuroscience 35:3276–3284. http://www.jneurosci.org/content/35/7/3276.short (PDF)

New paper: Automatic analysis (aa) for neuroimaging analyses

I'm extra excited about this one! Out now in Frontiers in Neuroinformatics is our paper describing the automatic analysis (aa) processing pipeline (Cusack et al., 2015). aa started at the MRC Cognition and Brain Sciences Unit in Cambridge, spearheaded by Rhodri Cusack and aided by several other contributors. Recent years have seen aa mature into an extremely flexible processing environment. My own commitment to using aa was sealed at the CBU while working on our VBM comparison of 400+ subjects: with aa it was possible to run a full analysis in about a week, with 16-32 compute nodes running full time (don't tell anyone, but I think technically we weren't supposed to use more than 8...). And because we were comparing different segmentation routines (among other things), we ran several of these analyses. Without aa I can't imagine ever doing the study. aa also played a key role in our winning HBM Hackathon entry from 2013 (or, as we affectionately called it, the haackathon).

Based on my own experience I strongly recommend that all neuroimagers learn to use some form of imaging pipeline, and aa is a great choice. For most of us there is a significant upfront investment of time and frustration. However, the payoff is well worth it, both in terms of time (you will end up saving time in the long run) and scientific quality (reproducibility, openness, and fewer opportunities for point-and-click error).

The code for aa is freely available and hosted on GitHub. Links, help, and more can be found on the main aa website: automaticanalysis.org. Comments and suggestions are very welcome, especially for the "getting started" portions (many of which are new).

By the way, several of the aa team will be at HBM this year, and we are submitting an aa poster as well. Please stop by and say hi!

Reference:

Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild C, Auer T, Linke AC, Peelle JE (2015) Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics 8:90. http://journal.frontiersin.org/Journal/10.3389/fninf.2014.00090/abstract

New paper: Methodological challenges and solutions in auditory fMRI

Fresh off the Frontiers press: my review paper on auditory fMRI methods. There are a number of other papers on this topic, but most are more than a decade old. My goal in this paper was to give an overview of the current state of auditory fMRI and to emphasize a few points that sometimes fall by the wayside. Scanner noise is often seen as a methodological issue (and a nuisance), and understandably so, but it's one that can drastically impact our interpretation of results, particularly for auditory fMRI studies.

One key point is that acoustic scanner noise can affect neural activity through multiple pathways. Typically, most of the focus is placed on audibility (can subjects hear the stimuli?), followed by acknowledging a possible reduction in sensitivity in auditory regions of the brain. However, acoustic noise can also change the cognitive processes required for tasks such as speech perception. Behaviorally, there is an extensive literature showing that speech perception in quiet differs from speech perception in noise; the same is true in the scanner environment. Although we may not be able to provide optimal acoustic conditions inside a scanner, at a minimum it is useful to consider the possible impact of the acoustic challenge on observed neural responses. To me this continues to be an important point when interpreting auditory fMRI studies. I'm not convinced by the argument that because acoustic noise is present equally in all conditions, we don't have to worry about it: there are good reasons to think that acoustic challenge interacts with the cognitive systems engaged.

Another point that has long been in the literature but is frequently downplayed in practice is that scanner noise appears to impact other cognitive tasks, too, so it's probably not just auditory neuroscientists who should be paying attention to the issue of acoustic noise in the scanner.

On the solution side, at this point sparse imaging (aka "clustered volume acquisition") is fairly well known. I also emphasize the benefits of ISSS (interleaved silent steady state imaging; Schwarzbauer et al., 2006), a more recent approach to auditory fMRI. ISSS allows improved temporal resolution while still presenting stimuli in relative quiet, although because it produces a discontinuous timeseries of images, some care needs to be taken during analysis.
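To make the contrast concrete, here is a toy sketch of what the two acquisition schemes look like in time: sparse imaging collects a single volume after each silent stimulus period, whereas ISSS collects a short burst of volumes after each silent period (while keeping longitudinal magnetization in a steady state), giving better temporal sampling but a discontinuous timeseries. All timing values below are invented for illustration and are not taken from Schwarzbauer et al. (2006) or from my review.

```python
# Hypothetical acquisition schedules for sparse imaging vs. ISSS.
# All timing values are illustrative, not from any specific study.

def sparse_schedule(n_trials, silent_s=7.0, acq_s=2.0):
    """Sparse imaging: one volume acquired after each silent period."""
    times, t = [], 0.0
    for _ in range(n_trials):
        t += silent_s          # stimulus presented during scanner silence
        times.append(t)        # single volume acquisition
        t += acq_s
    return times

def isss_schedule(n_trials, silent_s=7.0, acq_s=2.0, vols_per_burst=4):
    """ISSS: a burst of volumes after each silent period; the silent gaps
    leave a discontinuous timeseries that the analysis must account for."""
    times, t = [], 0.0
    for _ in range(n_trials):
        t += silent_s          # stimulus presented during scanner silence
        for _ in range(vols_per_burst):
            times.append(t)    # several volumes acquired in quick succession
            t += acq_s
    return times

print("sparse volume onsets (s):", sparse_schedule(n_trials=3))   # 3 volumes total
print("ISSS volume onsets (s):  ", isss_schedule(n_trials=3))     # 12 volumes, in bursts
```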

It's clear that if we care about auditory processing, scanner noise will always be a challenge. However, I'm optimistic that with increased attention to the issue, and by striving to understand the effects of scanner noise rather than ignoring them, things will only get better. To quote the last line of the paper: "It is an exciting time for auditory neuroscience, and continuing technical and methodological advances suggest an even brighter (though hopefully quieter) future."

[As a side note, I'm also happy to publish in the "Brain Imaging Methods" section of Frontiers. I wish it had its own title, but it's subsumed under Frontiers in Neuroscience for citation purposes.]


References:

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. http://journal.frontiersin.org/Journal/10.3389/fnins.2014.00253/abstract

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. http://dx.doi.org/10.1016/j.neuroimage.2005.08.025

New paper: Listening effort and accented speech

Out now in Frontiers: a short opinion piece on listening effort and accented speech, written in collaboration with Wash U colleague Kristin Van Engen. The crux of the article is that there is increasing agreement that listening to degraded speech requires listeners to engage additional cognitive processes, often grouped under the generic label of "listening effort". Listening effort is typically discussed in terms of hearing impairment or background noise, both of which obscure acoustic features in the speech signal and make it more difficult to understand. In this paper Kristin and I argue that accented speech is also more difficult to understand, and should be thought of within a similar framework.

We have tried to frame these issues in a general way that incorporates multiple kinds of acoustic challenge. That is, the degree to which the incoming speech signal does not match our stored representations determines the amount of cognitive support needed. This mismatch could come from background noise, or from systematic phonemic or suprasegmental deviations associated with accented speech. A related point is that comprehension accuracy depends both on the quality of the incoming acoustic signal and on the amount of additional cognitive support a listener allocates: degraded or accented speech may be perfectly intelligible if sufficient cognitive resources are available (and engaged).

Figure 1. (A) Speech signals that match listeners' perceptual expectations are processed relatively automatically, but when acoustic match is reduced (due to, for example, noise or unfamiliar accents), additional cognitive resources are needed to compensate. (B) Executive resources are recruited in proportion to the degree of acoustic mismatch between incoming speech and listeners' representations. When acoustic match is high, good comprehension is possible without executive support. However, as the acoustic match becomes poorer, successful comprehension cannot be accomplished unless executive resources are engaged. Not shown is the extreme situation in which the acoustic match is so poor that comprehension is impossible.
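As a purely qualitative illustration of the pattern in panel B (the functional form and all numbers below are invented for this sketch, not taken from the paper): comprehension stays high when the acoustic match is good regardless of effort, depends increasingly on engaged executive resources as the match degrades, and eventually fails no matter how much effort is applied.

```python
# Toy illustration of the qualitative relationship sketched in Figure 1B.
# The functional form and all numbers are invented for illustration only.

def toy_comprehension(acoustic_match, executive_engagement):
    """Both inputs range from 0 to 1. Executive resources can compensate
    for reduced acoustic match, but only up to a point."""
    if acoustic_match < 0.2:           # mismatch so severe that comprehension fails
        return 0.0
    shortfall = 1.0 - acoustic_match   # how much the signal falls short
    compensated = acoustic_match + 0.8 * shortfall * executive_engagement
    return min(1.0, compensated)

for match in (1.0, 0.6, 0.3, 0.1):
    low = toy_comprehension(match, executive_engagement=0.1)
    high = toy_comprehension(match, executive_engagement=0.9)
    print(f"acoustic match {match:.1f}: comprehension {low:.2f} (low effort) "
          f"vs {high:.2f} (high effort)")
```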

I like this article because it raises a number of interesting questions that can be experimentally tested. One of the big ones is the degree to which the type of acoustic mismatch matters: that is, are similar cognitive processes engaged when speech is degraded due to background noise as when an unfamiliar accent reduces intelligibility? My instinct says yes, but I wouldn't bet on it until more data are in.

Reference:

Van Engen KJ, Peelle JE (2014) Listening effort and accented speech. Frontiers in Human Neuroscience 8:577. http://journal.frontiersin.org/Journal/10.3389/fnhum.2014.00577/full