New paper: Acoustic richness modulates networks involved in speech comprehension (Lee et al.)

Many functional imaging studies have investigated the brain networks responding to intelligible speech. Far fewer have looked at how the brain responds to speech that is acoustically degraded, but remains intelligible. This type of speech is particularly interesting, because as listeners we are frequently in the position of hearing unclear speech that we nevertheless understand—a situation even more common for people with hearing aids or cochlear implants. Does the brain care about acoustic clarity when speech is fully intelligible?

We address this question in our new paper now out in Hearing Research (Lee et al., 2016), in which we played listeners short sentences that varied in both syntactic complexity and acoustic clarity (normal speech vs. 24-channel vocoded speech). We used an ISSS fMRI sequence (Schwarzbauer et al., 2006) to collect data, allowing us to present the sentences with reduced acoustic noise but still obtain relatively good temporal resolution (Peelle, 2014).
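To give a feel for the design, the basic idea of ISSS is that stimuli are presented in quiet periods during which the sequence keeps the magnetization in a steady state, and a burst of volumes is then collected afterwards. Here is a small Python sketch of that kind of schedule; the timing values are purely illustrative, not the parameters used in the paper:

```python
# Illustrative ISSS-style trial schedule (hypothetical timings, not the
# actual parameters from Lee et al., 2016): a silent period for sentence
# presentation, followed by a burst of acquired volumes.
silent_s = 4.0   # silent gap for stimulus presentation (assumed value)
n_volumes = 5    # volumes acquired after each silent period (assumed value)
tr_s = 2.0       # time per acquired volume (assumed value)
n_trials = 10

schedule = []
t = 0.0
for trial in range(n_trials):
    schedule.append((t, "silence + sentence"))
    t += silent_s
    for vol in range(n_volumes):
        schedule.append((t, f"acquire volume {vol + 1}"))
        t += tr_s

for onset, event in schedule[:8]:
    print(f"{onset:6.1f} s  {event}")
```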

In response to syntactically complex sentences, listeners showed increased activity in large regions of left-lateralized frontoparietal cortex. This finding was expected given previous results from our group and others. In contrast, most of the effects of acoustic clarity reflected greater activity for the acoustically detailed, normal speech. This was somewhat unexpected, as many studies show increased responses for degraded speech relative to clear speech, but we have some ideas about what might explain our result:

  1. Studies finding degradation-related increases frequently also involve a loss of intelligibility;
  2. We did see some areas of increased activity for the degraded speech; they were just smaller in extent than the increases for normal speech;
  3. We used noise vocoding to manipulate the acoustic clarity of the speech signal, which reduced cues to the sex, age, emotion, and other characteristics of the speaker.

These results continue an interesting line of work (Obleser et al., 2011) looking at the role of acoustic detail apart from intelligibility. This ties in to prosody and other aspects of spoken communication that go beyond the identity of the words being spoken (McGettigan, 2015).

Overall, we think our finding that large portions of the brain show less activation when less acoustic information is available is not as surprising as it might seem, and is extraordinarily relevant for patients with hearing loss or those using an assistive device.

Finally, I'm very happy that we've made the unthresholded statistical maps available on neurovault.org, which is a fantastic resource. Hopefully we'll see more brain imaging data deposited there (from our lab, and others!).

References:

Lee Y-S, Min NE, Wingfield A, Grossman M, Peelle JE (2016) Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research 333:108-117. doi:10.1016/j.heares.2015.12.008 (PDF)

McGettigan C (2015) The social life of voices: Studying the neural bases for the expression and perception of the self and others during spoken communication. Front Hum Neurosci 9:129. doi:10.3389/fnhum.2015.00129

Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. NeuroImage 56:2310-2320. doi:10.1016/j.neuroimage.2011.03.035

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 8:253. doi:10.3389/fnins.2014.00253

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025

New paper: Acoustic challenge affects memory for narrative speech (Ward et al.)

An enduring question for many of us is how relevant our laboratory experiments are for the "real world". In a paper now out in Experimental Aging Research we took a small step towards answering this, in work that Caitlin Ward did as part of her senior honors project a couple of years ago.  In this study, participants listened to short stories (Aesop's fables); after each story, they repeated it back as accurately as possible.

We scored each story recall for accuracy, splitting the scoring apart by different levels of narrative detail (as is frequently done in so-called propositional scoring approaches). The stories were presented as normal speech (acoustically clear) or as noise-vocoded speech, which lacks spectral detail. We predicted that the vocoded speech would require additional cognitive processes to understand, and that this increased cognitive challenge would affect participants' memory for what they heard—something we often care about in real life.
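For readers unfamiliar with noise vocoding, here is a minimal Python sketch of the general approach; the actual stimuli were made with the lab's own scripts (mentioned below), and the function name, filter choices, and channel count here are illustrative rather than what we used. The key idea is that the slow amplitude envelope in each frequency band is preserved while the fine spectral detail is replaced by noise:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=16, f_lo=70.0, f_hi=5000.0):
    """Minimal noise vocoder sketch: filter speech into bands, extract each
    band's amplitude envelope, and use it to modulate band-limited noise."""
    # Channel edges spaced logarithmically; real vocoders often use other
    # spacings (e.g., equivalent rectangular bandwidth), kept simple here.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    noise = np.random.randn(len(signal))
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))        # slow amplitude envelope
        carrier = sosfiltfilt(sos, noise)       # noise limited to the same band
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
```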

We found that recall was poorer for degraded speech, although only at some levels of detail. These findings are broadly consistent with the idea that acoustically degraded speech is cognitively challenging. However, it is important to note that the size of this effect was relatively small: recall was only 4% worse, on average, for the challenging speech. The small effect size suggests that listeners are largely able to compensate for the acoustic challenge.

Interestingly, we also found that a listener's verbal short-term memory ability (assessed by reading span) was correlated with their memory for short stories, especially when the stories were acoustically degraded. Both young and older adults show a fair amount of variability in their short-term memory, so it seems this correlation is more reflective of a cognitive ability than a simple age effect.

Hearing ability—measured by pure tone average—was not significantly related to recall performance, although there was a trend towards participants with poorer hearing showing worse recall.

One side note to this study: we have provided all of the sound files used in the experiment through our lab website, and I've referenced the github repository that includes my vocoding scripts. One step closer to fully open science!

This article appears as part of a special issue of Experimental Aging Research that I edited in honor of Art Wingfield, my PhD supervisor. There are a number of interesting articles written by folks who have a connection to Art. It was a lot of fun to put this issue together!

Reference:

Ward CM, Rogers CS, Van Engen KJ, Peelle JE (2016) Effects of age, acoustic challenge, and verbal working memory on recall of narrative speech. Experimental Aging Research 42:126–144. doi:10.1080/0361073X.2016.1108785 (PDF)

New paper: mapping speech comprehension with optical imaging (Hassanpour et al.)

Although fMRI is great for a lot of things, it also presents challenges, especially for auditory neuroscience. Echoplanar imaging is loud, and this acoustic noise can obscure stimuli or change the cognitive demand of a task (Peelle, 2014). In addition, patients with implanted medical devices can't be scanned.

My lab has been working with Joe Culver's optical radiology lab to develop a solution to these problems using high-density diffuse optical tomography (HD-DOT). Similar to fNIRS, HD-DOT uses light spectroscopy to image oxygenated and deoxygenated blood signals, which relate to the BOLD response in fMRI. HD-DOT also incorporates realistic light models to facilitate source reconstruction—this is of huge importance for studies of cognitive function and facilitates combining results across subjects. A detailed description of our current large field-of-view HD-DOT system can be found in Eggebrecht et al. (2014).
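Full HD-DOT image reconstruction involves the light modeling and sensitivity-matrix inversion described by Eggebrecht et al. (2014), which is well beyond a blog post. But the spectroscopy step it shares with fNIRS-type measurements can be sketched simply: changes in optical density at two wavelengths are converted to changes in oxy- and deoxyhemoglobin by solving a small linear system (the modified Beer-Lambert law). The numbers below are placeholders, not calibrated values from our system:

```python
import numpy as np

# Illustrative spectroscopy step (modified Beer-Lambert law): convert changes
# in optical density at two wavelengths into changes in oxy- and deoxy-
# hemoglobin concentration. Extinction coefficients and pathlength below are
# placeholders, not the calibrated values used in an actual HD-DOT pipeline.
ext = np.array([[1.4, 3.8],    # [eps_HbO, eps_HbR] at wavelength 1 (placeholder)
                [2.5, 1.8]])   # [eps_HbO, eps_HbR] at wavelength 2 (placeholder)
pathlength = 6.0               # effective optical pathlength (placeholder)

def hemoglobin_changes(delta_od):
    """delta_od: optical density changes measured at the two wavelengths."""
    return np.linalg.solve(ext * pathlength, np.asarray(delta_od))

d_hbo, d_hbr = hemoglobin_changes([0.012, 0.008])
print(f"delta HbO = {d_hbo:.4f}, delta HbR = {d_hbr:.4f}")
```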

Because HD-DOT is relatively new, an important first step in using it for speech studies was to verify that it is indeed able to capture responses to spoken sentences, both in terms of effect size and spatial location. Mahlega Hassanpour is a PhD student who enthusiastically took on this challenge. In our paper now out in NeuroImage (Hassanpour et al., 2015), Mahlega used a well-studied comparison of syntactic complexity, looking at sentences containing subject-relative or object-relative center-embedded clauses (taken from our previous fMRI study; Peelle et al., 2010). Object-relative constructions (along the lines of "the boy who the dog chased...") are typically harder to process than subject-relative ones ("the boy who chased the dog...").

Consistent with previous fMRI work, we found a sensible increase from a low-level acoustic control condition (1-channel vocoded speech) to subject-relative sentences to object-relative sentences. The results were seen at both the single-subject level (with some expected noise) and the group level.

We are really glad to see nice responses to spoken sentences with HD-DOT and are already pursuing several other projects. More to come!


References:

Eggebrecht AT, Ferradal SL, Robichaux-Viehoever A, Hassanpour MS, Dehghani H, Snyder AZ, Hershey T, Culver JP (2014) Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics 8:448-454. doi:10.1038/nphoton.2014.107

Hassanpour MS, Eggebrecht AT, Culver JP, Peelle JE (2015) Mapping cortical responses to speech using high-density diffuse optical tomography. NeuroImage 117:319–326. doi:10.1016/j.neuroimage.2015.05.058 (PDF)

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253 (PDF)

Peelle JE, Troiani V, Wingfield A, Grossman M (2010) Neural processing during older adults' comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex 20:773-782. doi:10.1093/cercor/bhp142 (PDF)

New paper: A role for the angular gyrus in combinatorial semantics (Price et al.)

We know what a "leaf" is, and we know what "wet" means. But combining these concepts together into a "wet leaf" yields a new and possibly more specific idea. Similarly, a "brown leaf" is qualitatively different than any old leaf. Our ability to flexibly and dynamically combine concepts enables us to represent and communicate an enormous set of ideas from a relatively small number of constituents. The question of what neural systems might support conceptual combination has been a focus of research for Amy Price at Penn. Combinatorial semantics is an especially timely topic as there are ongoing debates about the anatomical systems most strongly involved in semantic memory more generally (angular gyrus? anterior temporal lobes? ventral visual regions?), as well as the nature of the information being represented (to what degree do concepts rely on sensorimotor cortices?).

In a new paper out this week in the Journal of Neuroscience (Price et al., 2015), Amy presents data from both fMRI and patients with neurodegenerative disease suggesting that the angular gyrus plays an important role in conceptual combination. Amy designed a clever task in which participants read word pairs that varied in how easily they could be combined into a single concept. For example, you could imagine that "turnip rock" is difficult to combine, whereas a "wet rock" is easier. Amy used all adjective-noun pairs, but still found a considerable amount of variability (for example, a "plaid apple" combines less easily than a "plaid jacket"). This "ease of combination" was initially quantified using subject ratings, but Amy found that lexical co-occurrence statistics for these word pairs strongly correlate with their degree of combination, and thus co-occurrence measures were used in all analyses.
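The post doesn't go into the specific co-occurrence measure, so purely as an illustration of the general logic (made-up counts, made-up ratings, and a generic pointwise mutual information estimate rather than the paper's actual measure), the relationship between co-occurrence and rated ease of combination could be sketched like this:

```python
import math
from scipy.stats import spearmanr

# Toy corpus counts (entirely made up) illustrating a PMI-style co-occurrence
# measure; the measure used in Price et al. (2015) may differ.
pair_counts = {("wet", "rock"): 120, ("plaid", "jacket"): 80,
               ("plaid", "apple"): 2, ("turnip", "rock"): 1}
word_counts = {"wet": 5000, "rock": 8000, "plaid": 900,
               "jacket": 4000, "apple": 6000, "turnip": 700}
total_pairs = 1_000_000
total_words = 50_000_000

def pmi(adj, noun):
    """Pointwise mutual information of an adjective-noun pair."""
    p_pair = pair_counts[(adj, noun)] / total_pairs
    p_adj = word_counts[adj] / total_words
    p_noun = word_counts[noun] / total_words
    return math.log2(p_pair / (p_adj * p_noun))

pairs = list(pair_counts)
pmi_scores = [pmi(a, n) for a, n in pairs]
ratings = [6.5, 6.1, 2.0, 1.5]   # hypothetical ease-of-combination ratings
rho, p = spearmanr(pmi_scores, ratings)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```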

These findings are in good agreement with previous work emphasizing an important role for the angular gyrus in semantic representation (Binder & Desai, 2011; Bonner et al., 2013).

References:

Binder JR, Desai RH (2011) The neurobiology of semantic memory. Trends in Cognitive Sciences 15:527-536. doi:10.1016/j.tics.2011.10.001

Bonner MF, Peelle JE, Cook PA, Grossman M (2013) Heteromodal conceptual processing in the angular gyrus. NeuroImage 71:175–186. doi:10.1016/j.neuroimage.2013.01.006 (PDF)

Price AR, Bonner MF, Peelle JE, Grossman M (2015) Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. Journal of Neuroscience 35:3276–3284. http://www.jneurosci.org/content/35/7/3276.short (PDF)

New paper: Automatic analysis (aa) for neuroimaging analyses

I'm extra excited about this one! Out now in Frontiers in Neuroinformatics is our paper describing the automatic analysis (aa) processing pipeline (Cusack et al., 2015). aa started at the MRC Cognition and Brain Sciences Unit in Cambridge, spearheaded by Rhodri Cusack and aided by several other contributors. Recent years have seen aa mature into an extremely flexible processing environment. My own commitment to using aa was sealed at the CBU when working on our VBM comparison of 400+ subjects—with aa it was possible to run a full analysis in about a week (with 16-32 compute nodes running full time) (don't tell anyone—I think technically we weren't supposed to use more than 8...). And, because we were comparing different segmentation routines (among other things), we ran several of these analyses. Without aa I can't imagine ever doing the study. aa also played a key role in our winning HBM Hackathon entry from 2013 (or as we affectionately called it, the haackathon).

Based on my own experience I strongly recommend that all neuroimagers learn to use some form of imaging pipeline, and aa is a great choice. For most of us there is a significant upfront investment of time and frustration. However, the payoff is well worth it, both in terms of time (you will end up saving time in the long run) and scientific quality (reproducibility, openness, and fewer opportunities for point-and-click error).
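aa itself is written in MATLAB and configured with XML tasklists, so the sketch below is emphatically not aa's API. It is just a toy Python illustration of what any scripted pipeline buys you: an explicit stage list, re-runnable analyses that skip completed work, and no point-and-click steps to get wrong.

```python
# Toy, language-neutral sketch of the staged-pipeline idea (not aa's API).
# Each stage is a named function, the stage list is explicit, and stages
# already completed on a previous run are skipped.
import os

def realign(subject):    print(f"realigning {subject}")
def normalise(subject):  print(f"normalising {subject}")
def smooth(subject):     print(f"smoothing {subject}")

STAGES = [("realign", realign), ("normalise", normalise), ("smooth", smooth)]

def run_pipeline(subject, done_dir="done_flags"):
    os.makedirs(done_dir, exist_ok=True)
    for name, stage in STAGES:
        flag = os.path.join(done_dir, f"{subject}_{name}.done")
        if os.path.exists(flag):      # completed on a previous run; skip
            continue
        stage(subject)
        open(flag, "w").close()       # mark this stage as done

for subject in ["sub01", "sub02"]:
    run_pipeline(subject)
```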

The code for aa is freely available, hosted on github. Links, help, and more can be found on the main aa website: automaticanalysis.org. Comments and suggestions are very welcome, especially for the "getting started" portions (many of which are new).

By the way, several members of the aa team will be at HBM this year, and we are submitting an aa poster as well. Please stop by and say hi!

Reference:

Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild C, Auer T, Linke AC, Peelle JE (2015) Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics 8:90. doi:10.3389/fninf.2014.00090