New paper: Acoustic challenge affects memory for narrative speech (Ward et al.)

An enduring question for many of us is how relevant our laboratory experiments are for the "real world". In a paper now out in Experimental Aging Research we took a small step towards answering this, in work that Caitlin Ward did as part of her senior honors project a couple of years ago.  In this study, participants listened to short stories (Aesop's fables); after each story, they repeated it back as accurately as possible.

We scored each story recall for accuracy, with separate scores for different levels of narrative detail (as is frequently done in so-called propositional scoring approaches). The stories were presented as normal speech (acoustically clear) or as noise-vocoded speech, which lacks spectral detail. We predicted that the vocoded speech would require additional cognitive processes to understand, and that this increased cognitive challenge would affect participants' memory for what they heard, something we often care about in real life.
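(For readers who haven't come across noise vocoding, here is a rough sketch of the general idea in Python. This is just an illustration with placeholder parameters, not the scripts we actually used to make the stimuli; those are linked further down.)

```python
# A minimal noise-vocoding sketch (illustrative only): filter the speech into
# a few frequency bands, extract each band's amplitude envelope, use those
# envelopes to modulate band-limited noise, and sum the bands. The result
# preserves the temporal envelope of speech but removes most spectral detail.
# Channel count and band edges here are placeholder choices.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)                  # band-limited speech
        envelope = np.abs(hilbert(band))                 # amplitude envelope
        carrier = sosfiltfilt(sos, noise)                # band-limited noise
        out += envelope * carrier                        # envelope-modulated noise
    return out / np.max(np.abs(out))                     # normalize amplitude
```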

We found that recall was poorer for degraded speech, although only at some levels of detail. These findings are broadly consistent with the idea that acoustically degraded speech is cognitively challenging. However, it is important to note that the size of this effect was relatively small: recall was only 4% worse, on average, for the challenging speech. The small effect size suggests that listeners are largely able to compensate for the acoustic challenge.

Interestingly, we also found that a listener's verbal working memory ability (assessed by reading span) was correlated with their memory for the short stories, especially when the stories were acoustically degraded. Both young and older adults show a fair amount of variability in working memory, so this correlation seems to reflect a cognitive ability rather than a simple age effect.

Hearing ability—measured by pure tone average—was not significantly related to recall performance, although there was a trend towards participants with poorer hearing showing worse recall.

One side note to this study: we have made all of the sound files used in the experiment available through our lab website, and I've referenced the GitHub repository that includes my vocoding scripts. One step closer to fully open science!

This article appears as part of a special issue of Experimental Aging Research that I edited in honor of Art Wingfield, my PhD supervisor. There are a number of interesting articles written by folks who have a connection to Art. It was a lot of fun to put this issue together!

Reference:

Ward CM, Rogers CS, Van Engen KJ, Peelle JE (2016) Effects of age, acoustic challenge, and verbal working memory on recall of narrative speech. Experimental Aging Research 42:126–144. doi:10.1080/0361073X.2016.1108785 (PDF)

New paper: mapping speech comprehension with optical imaging (Hassanpour et al.)

Although fMRI is great for a lot of things, it also presents challenges, especially for auditory neuroscience. Echoplanar imaging is loud, and this acoustic noise can obscure stimuli or change the cognitive demand of a task (Peelle, 2014). In addition, patients with implanted medical devices can't be scanned.

My lab has been working with Joe Culver's optical radiology lab to develop a solution to these problems using high-density diffuse optical tomography (HD-DOT). Like fNIRS, HD-DOT uses light spectroscopy to image changes in oxygenated and deoxygenated hemoglobin, which are related to the BOLD response in fMRI. HD-DOT also incorporates realistic light models to aid source reconstruction, which is hugely important for studies of cognitive function and makes it possible to combine results across subjects. A detailed description of our current large field-of-view HD-DOT system can be found in Eggebrecht et al. (2014).
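To give a flavor of the spectroscopy step that fNIRS and HD-DOT share, here is a toy Python sketch that converts changes in light attenuation at two wavelengths into changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin using the modified Beer-Lambert law. The extinction coefficients and pathlength values below are placeholders for illustration, and a real HD-DOT analysis additionally involves anatomical light modeling and image reconstruction, which this ignores.

```python
# Toy modified Beer-Lambert law sketch (placeholder values, illustration only).
import numpy as np

# Rows: two wavelengths (e.g., one below and one above ~800 nm);
# columns: extinction coefficients for [HbO, HbR] (placeholder numbers).
E = np.array([[0.5, 1.5],
              [1.2, 0.8]])

def mbll(delta_od, pathlength=3.0, dpf=6.0):
    """Convert changes in optical density at two wavelengths into
    changes in [HbO, HbR] concentration (arbitrary units here)."""
    # delta_od = (E @ delta_conc) * pathlength * dpf, so solve the 2x2 system
    return np.linalg.solve(E * pathlength * dpf, delta_od)

print(mbll(np.array([0.02, 0.01])))  # -> [dHbO, dHbR]
```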

Because HD-DOT is relatively new, an important first step in using it for speech studies was to verify that it is indeed able to capture responses to spoken sentences, both in terms of effect size and spatial location. Mahlega Hassanpour is a PhD student who enthusiastically took on this challenge. In our paper now out in NeuroImage (Hassanpour et al., 2015), Mahlega used a well-studied syntactic complexity manipulation, comparing sentences containing subject-relative or object-relative center-embedded clauses (taken from our previous fMRI study; Peelle et al., 2010).

Consistent with previous fMRI work, we found a sensible increase in response from a low-level acoustic control condition (1-channel vocoded speech) to subject-relative sentences to object-relative sentences. The results were seen at both the single-subject level (with some expected noise) and the group level.

We are really glad to see nice responses to spoken sentences with HD-DOT and are already pursuing several other projects. More to come!


References:

Eggebrecht AT, Ferradal SL, Robichaux-Viehoever A, Hassanpour MS, Dehghani H, Snyder AZ, Hershey T, Culver JP (2014) Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics 8:448-454. doi:10.1038/nphoton.2014.107

Hassanpour MS, Eggebrecht AT, Culver JP, Peelle JE (2015) Mapping cortical responses to speech using high-density diffuse optical tomography. NeuroImage 117:319–326. doi:10.1016/j.neuroimage.2015.05.058 (PDF)

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253 (PDF)

Peelle JE, Troiani V, Wingfield A, Grossman M (2010) Neural processing during older adults' comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex 20:773-782. doi:10.1093/cercor/bhp142 (PDF)

New paper: A role for the angular gyrus in combinatorial semantics (Price et al.)

We know what a "leaf" is, and we know what "wet" means. But combining these concepts together into a "wet leaf" yields a new and possibly more specific idea. Similarly, a "brown leaf" is qualitatively different than any old leaf. Our ability to flexibly and dynamically combine concepts enables us to represent and communicate an enormous set of ideas from a relatively small number of constituents. The question of what neural systems might support conceptual combination has been a focus of research for Amy Price at Penn. Combinatorial semantics is an especially timely topic as there are ongoing debates about the anatomical systems most strongly involved in semantic memory more generally (angular gyrus? anterior temporal lobes? ventral visual regions?), as well as the nature of the information being represented (to what degree do concepts rely on sensorimotor cortices?).

In a new paper out this week in the Journal of Neuroscience (Price et al., 2015), Amy presents data from both fMRI and patients with neurodegenerative disease suggesting that the angular gyrus plays an important role in conceptual combination. Amy designed a clever task in which participants read word pairs that varied in how easily they could be combined into a single concept. For example, you could imagine that "turnip rock" is difficult to combine, whereas a "wet rock" is easier. Amy used all adjective-noun pairs, but still found a considerable amount of variability (for example a "plaid apple" combines less easily than a "plaid jacket"). This "ease of combination" was initially quantified using subject ratings, but Amy found that lexical co-occurrence statistics for these word pairs strongly correlate with their degree of combination, and thus co-occurrence measures were used in all analyses. 
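As a rough illustration of the sort of statistic involved, here is a toy Python sketch computing pointwise mutual information for adjective-noun pairs from a corpus of phrases. (The specific corpus and co-occurrence measure used in the paper may well differ; this is just to convey the general idea that frequently co-occurring pairs like "wet rock" score higher than rare ones like "turnip rock".)

```python
# Toy pointwise mutual information (PMI) for adjective-noun pairs (illustration only).
import math
from collections import Counter

def pmi_scores(phrases):
    """phrases: list of (adjective, noun) tuples observed in a corpus."""
    pair_counts = Counter(phrases)
    adj_counts = Counter(a for a, _ in phrases)
    noun_counts = Counter(n for _, n in phrases)
    total = len(phrases)
    scores = {}
    for (adj, noun), count in pair_counts.items():
        p_pair = count / total
        p_adj = adj_counts[adj] / total
        p_noun = noun_counts[noun] / total
        scores[(adj, noun)] = math.log2(p_pair / (p_adj * p_noun))
    return scores

# Pairs that co-occur more often than their words' base rates predict get
# higher PMI, a rough proxy for how easily the two concepts combine.
```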

These findings are in good agreement with previous work emphasizing an important role for the angular gyrus in semantic representation (Binder & Desai 2011; Bonner et al. 2013).

References:

Binder JR, Desai RH (2011) The neurobiology of semantic memory. Trends in Cognitive Sciences 15:527-536. doi:10.1016/j.tics.2011.10.001

Bonner MF, Peelle JE, Cook PA, Grossman M (2013) Heteromodal conceptual processing in the angular gyrus. NeuroImage 71:175–186. doi:10.1016/j.neuroimage.2013.01.006 (PDF)

Price AR, Bonner MF, Peelle JE, Grossman M (2015) Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. Journal of Neuroscience 35:3276–3284. http://www.jneurosci.org/content/35/7/3276.short (PDF)

New paper: Automatic analysis (aa) for neuroimaging analyses

I'm extra excited about this one! Out now in Frontiers in Neuroinformatics is our paper describing the automatic analysis (aa) processing pipeline (Cusack et al., 2015). aa started at the MRC Cognition and Brain Sciences Unit in Cambridge, spearheaded by Rhodri Cusack and aided by several other contributors. Recent years have seen aa mature into an extremely flexible processing environment. My own commitment to using aa was sealed at the CBU when working on our VBM comparison of 400+ subjects: with aa it was possible to run a full analysis in about a week, with 16-32 compute nodes running full time (don't tell anyone; I think technically we weren't supposed to use more than 8...). And, because we were comparing different segmentation routines (among other things), we ran several of these analyses. Without aa I can't imagine ever doing the study. aa also played a key role in our winning HBM Hackathon entry from 2013 (or, as we affectionately called it, the haackathon).

Based on my own experience I strongly recommend that all neuroimagers learn to use some form of imaging pipeline, and aa is a great choice. For most of us there is a significant upfront investment of time and frustration. However, the payoff is well worth it, both in terms of time (you will end up saving time in the long run) and scientific quality (reproducibility, openness, and fewer opportunities for point-and-click error).

The code for aa is freely available, hosted on GitHub. Links, help, and more can be found on the main aa website: automaticanalysis.org. Comments and suggestions are very welcome, especially for the "getting started" portions (many of which are new).

By the way, several of the aa team will be at HBM this year, and we are submitting an aa poster as well. Please stop by and say hi!

Reference:

Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild C, Auer T, Linke AC, Peelle JE (2015) Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics 8:90. doi:10.3389/fninf.2014.00090

New paper: Methodological challenges and solutions in auditory fMRI

Fresh off the Frontiers press, my review paper on auditory fMRI methods. There are a number of other papers on this topic, but most are more than a decade old. My goal in this paper was to give a contemporary overview of the current state of auditory fMRI, and emphasize a few points that sometimes fall by the wayside. Scanner noise is often seen as a methodological issue (and a nuisance)—and understandably so—but it's one that can drastically impact our interpretation of results, particularly for auditory fMRI studies.

One key point is that acoustic scanner noise can affect neural activity through multiple pathways. Typically most of the focus is placed on audibility (can subjects hear the stimuli?), followed by acknowledging a possible reduction in sensitivity in auditory regions of the brain. However, acoustic noise can also change the cognitive processes required for tasks such as speech perception. Behaviorally, there is an extensive literature showing that speech perception in quiet differs from speech perception in noise; the same is true in the scanner environment. Although we may not be able to provide optimal acoustic conditions inside a scanner, at a minimum it is useful to consider the possible impact of the acoustic challenge on observed neural responses. To me this continues to be an important point when interpreting auditory fMRI studies. I'm not convinced by the argument that because acoustic noise is present equally in all conditions we don't have to worry about it: there are good reasons to think that acoustic challenge interacts with the cognitive systems engaged.

Another point that has long been around in the literature but is frequently downplayed in practice is that scanner noise appears to impact other cognitive tasks, too, so it's probably not just auditory neuroscientists who should be paying attention to the issue of acoustic noise in the scanner.

On the solution side, at this point sparse imaging (aka "clustered volume acquisition") is fairly well known. I also emphasize the benefits of ISSS (interleaved silent steady state imaging; Schwarzbauer et al., 2006), which is a more recent approach to auditory fMRI. ISSS allows improved temporal resolution while still presenting stimuli in relative quiet, although because it produces a discontinuous timeseries of images, some care needs to be taken during analysis.
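To make the sampling difference concrete, here is a toy Python sketch of acquisition timing for sparse imaging versus ISSS within a trial (all timing values are hypothetical, purely for illustration): sparse imaging collects a single volume after each silent stimulus period, whereas ISSS collects a short run of volumes after the silent gap while the magnetization is held in steady state.

```python
# Toy acquisition-timing sketch for sparse imaging vs. ISSS (hypothetical values).
TR = 2.0           # seconds per volume (assumed)
silent_gap = 8.0   # silent period for stimulus presentation (assumed)

def sparse_times(n_trials):
    """Sparse imaging: one volume acquired after each silent stimulus period."""
    times, t = [], 0.0
    for _ in range(n_trials):
        t += silent_gap          # stimulus presented in relative quiet
        times.append(t)          # single acquisition
        t += TR
    return times

def isss_times(n_trials, vols_per_trial=5):
    """ISSS: several volumes acquired after each silent period; steady-state
    magnetization is maintained during the gap, yielding a discontinuous
    timeseries with better temporal resolution than sparse imaging."""
    times, t = [], 0.0
    for _ in range(n_trials):
        t += silent_gap
        times.extend(t + v * TR for v in range(vols_per_trial))
        t += vols_per_trial * TR
    return times

print(sparse_times(2))  # [8.0, 18.0]
print(isss_times(1))    # [8.0, 10.0, 12.0, 14.0, 16.0]
```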

It's clear that if we care about auditory processing, scanner noise will always be a challenge. However, I'm optimistic that with some increased attention to the issue and striving to understand the effects of scanner noise rather than ignore them, things will only get better. To quote the last line of the paper: "It is an exciting time for auditory neuroscience, and continuing technical and methodological advances suggest an even brighter (though hopefully quieter) future."

[As a side note, I'm also happy to publish in the "Brain Imaging Methods" section of Frontiers. I wish it had its own title, but it's subsumed under Frontiers in Neuroscience for citation purposes.]


References:

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025