New paper: concept representation and the dynamic multilevel reactivation framework (Reilly et al.)

I'm fortunate to have as a collaborator Jamie Reilly, who over the past decade has been spearheading an effort to deepen our understanding of how the brain represents concepts (i.e., semantic memory). Our review paper, now out in Psychonomic Bulletin & Review (Reilly et al., 2016), puts forth the current version of the dynamic multilevel reactivation framework. (It's part of a special issue on concept representation that contains a number of interesting articles.)

Recent years have seen increasing interest in the idea that concept representations depend in part on modality-specific representations in or near sensory and motor cortex. For example, our concept of a bell includes something about the sound of a bell ringing, which this view suggests is supported by regions coding auditory information. Information from different modalities would also need to be bound together, perhaps in heteromodal regions such as the angular gyrus (Bonner et al., 2013; Price et al., 2015). (Interestingly, Wernicke proposed much the same thing well over 100 years ago, as related in Gage & Hickok, 2005. Smart guy!)

A distributed semantics view has intuitive appeal for many aspects of concrete concepts, for which we can easily imagine sensory details associated with an object. However, it is much more difficult to apply this distributed sensorimotor approach to abstract concepts such as "premise" or "vapid". Similar challenges arise for encyclopedic (verbal) knowledge. These difficulties suggest that distributed sensorimotor representations cannot be the sole basis of semantic memory. An alternative view focuses more on amodal semantic "hub" regions that integrate information across modalities. The existence of hub regions is supported by conditions such as semantic dementia (i.e., the semantic variant of primary progressive aphasia), in which patients lose access to concepts regardless of how those concepts are tested. Reconciling the evidence for distributed versus hub-like representations has been one of the most interesting challenges in contemporary semantic memory research.

In our recent paper, we suggest that concepts are represented in a high-dimensional semantic space that encompasses both concrete and abstract concepts. Representations can be selectively activated depending on task demands. Our difficult-to-pronounce but accurate name for this is the "dynamic multilevel reactivation framework" (DMRF).

Although the nature of the link between sensorimotor representations and linguistic knowledge needs to be further clarified, we think a productive way forward will be models of semantic memory that parsimoniously account for both "concrete" and "abstract" concepts within a unified framework.

References:

Bonner MF, Peelle JE, Cook PA, Grossman M (2013) Heteromodal conceptual processing in the angular gyrus. NeuroImage 71:175-186. doi:10.1016/j.neuroimage.2013.01.006 (PDF)

Gage N, Hickok G (2005) Multiregional cell assemblies, temporal binding and the representation of conceptual knowledge in cortex: A modern theory by a "classical" neurologist, Carl Wernicke. Cortex 41:823-832. doi:10.1016/S0010-9452(08)70301-0

Price AR, Bonner MF, Peelle JE, Grossman M (2015) Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. Journal of Neuroscience 35:3276-3284. doi:10.1523/JNEUROSCI.3446-14.2015 (PDF)

Reilly J, Peelle JE, Garcia A, Crutch SJ (2016) Linking somatic and symbolic representation in semantic memory: The dynamic multilevel reactivation framework. Psychonomic Bulletin & Review. doi:10.3758/s13423-015-0824-5 (PDF)

New paper: The neural consequences of age-related hearing loss

I'm fortunate to have stayed close to my wonderful PhD supervisor, Art Wingfield. A couple of years ago Art and I hosted a Frontiers research topic on how hearing loss affects neural processing. One of our goals was to follow the effects from the periphery (i.e., effects in the cochlea) through to higher-level cognitive function.

We've now written a review article covering these topics (Peelle and Wingfield, 2016). Our theme is one Art has returned to over the years: given the numerous age-related declines in both hearing and cognition, we might expect speech comprehension to be relatively poor in older adults. The fact that comprehension is generally quite good speaks to the flexibility of the auditory system and to compensatory cognitive and neural mechanisms.

A few highlights:

  • Hearing impairment affects neural function at every level of the ascending auditory system, from the cochlea to primary auditory cortex. Although frequently demonstrated using noise-induced hearing loss, many of the same effects are seen for age-related hearing impairment.

  • Functional brain imaging in humans routinely shows that when speech is acoustically degraded, listeners engage more regions outside the core speech network, suggesting that this additional activity plays a compensatory role, making up for the reduced acoustic information. (An important caveat is that task effects have to be considered.)

  • Moving forward, an important effort will be understanding how individual differences in both hearing and cognitive abilities affect the brain networks listeners use to process spoken language.


We had fun writing this paper, and hope it's a useful resource!


References:

Peelle JE, Wingfield A (2016) The neural consequences of age-related hearing loss. Trends in Neurosciences 39:486–497. doi:10.1016/j.tins.2016.05.001 (PDF)

 

New paper: Acoustic richness modulates networks involved in speech comprehension (Lee et al.)

Many functional imaging studies have investigated the brain networks responding to intelligible speech. Far fewer have looked at how the brain responds to speech that is acoustically degraded, but remains intelligible. This type of speech is particularly interesting, because as listeners we are frequently in the position of hearing unclear speech that we nevertheless understand—a situation even more common for people with hearing aids or cochlear implants. Does the brain care about acoustic clarity when speech is fully intelligible?

We address this question in our new paper, now out in Hearing Research (Lee et al., 2016), in which we played listeners short sentences that varied in both syntactic complexity and acoustic clarity (normal speech vs. 24-channel vocoded speech). We used an ISSS fMRI sequence (Schwarzbauer et al., 2006) to collect data, allowing us to present the sentences with reduced acoustic noise while still obtaining relatively good temporal resolution (Peelle, 2014).
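For readers unfamiliar with noise vocoding, below is a minimal sketch of the general technique (not our actual stimulus-processing scripts): the speech signal is filtered into a number of frequency channels, the amplitude envelope of each channel is extracted, and each envelope is used to modulate band-limited noise spanning the same frequencies. The channel spacing, frequency range, and filter settings here are illustrative assumptions.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=24, f_lo=70.0, f_hi=5000.0):
        """Noise-vocode a 1-D speech signal sampled at fs Hz (fs must exceed 2 * f_hi)."""
        signal = np.asarray(signal, dtype=float)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced channel boundaries
        rng = np.random.default_rng(0)
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)                # band-limited speech
            envelope = np.abs(hilbert(band))               # amplitude envelope of this channel
            carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))  # band-limited noise
            out += envelope * carrier                      # envelope-modulated noise
        # scale the output so its overall RMS matches the input
        out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
        return out

With 24 channels, enough envelope information survives that the sentences remain fully intelligible, but fine spectral detail (and with it many cues to the speaker's voice) is removed; with fewer channels, intelligibility itself begins to suffer.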

In response to syntactically complex sentences, listeners showed increased activity in large regions of left-lateralized frontoparietal cortex, as expected given previous results from our group and others. In contrast, most of the differences in response related to acoustic clarity reflected greater activity for the acoustically detailed, normal speech. This was somewhat unexpected, as many studies show an increased response for degraded speech relative to clear speech, but we have some ideas about what might explain our result:

  1. Studies finding degradation-related increases frequently also involve a loss of intelligibility;
  2. We did see some areas of increased activity for the degraded speech; they were simply smaller in extent than the increases for normal speech;
  3. We used noise vocoding to manipulate the acoustic clarity of the speech signal, which reduced cues to the speaker's sex, age, emotional state, and other characteristics.

These results continue an interesting line of work (Obleser et al., 2011) looking at the role of acoustic detail apart from intelligibility. This ties in to prosody and other aspects of spoken communication that go beyond the identity of the words being spoken (McGettigan, 2015).

Overall, we think our finding that large portions of the brain show less activation when less information is available is not as surprising as it might seem, and is extraordinarily relevant for patients with hearing loss or those using an assistive device.

Finally, I'm very happy that we've made the unthresholded statistical maps available on neurovault.org, which is a fantastic resource. Hopefully we'll see more brain imaging data deposited there (from our lab, and others!).

References:

Lee Y-S, Min NE, Wingfield A, Grossman M, Peelle JE (2016) Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research 333:108-117. doi:10.1016/j.heares.2015.12.008 (PDF)

McGettigan C (2015) The social life of voices: Studying the neural bases for the expression and perception of the self and others during spoken communication. Frontiers in Human Neuroscience 9:129. doi:10.3389/fnhum.2015.00129

Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. NeuroImage 56:2310-2320. doi:10.1016/j.neuroimage.2011.03.035

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025

New paper: Acoustic challenge affects memory for narrative speech (Ward et al.)

An enduring question for many of us is how relevant our laboratory experiments are for the "real world". In a paper now out in Experimental Aging Research we took a small step towards answering this, in work that Caitlin Ward did as part of her senior honors project a couple of years ago.  In this study, participants listened to short stories (Aesop's fables); after each story, they repeated it back as accurately as possible.

We scored each story recall for accuracy, separating the scoring by level of narrative detail (as is frequently done in so-called propositional scoring approaches). The stories were presented as normal speech (acoustically clear) or as noise-vocoded speech, which lacks spectral detail. We predicted that the vocoded speech would require additional cognitive processes to understand, and that this increased cognitive challenge would affect participants' memory for what they heard—something that we often care about in real life.
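As a concrete (and entirely hypothetical) illustration of this kind of proposition-level scoring, here is a small sketch: each story is broken into propositions tagged with a level of narrative detail, and a recall is scored by the proportion of propositions recovered at each level. The propositions, detail labels, and substring matching below are made up for illustration and stand in for careful hand scoring.

    from collections import defaultdict

    # (proposition, detail level) pairs for one fable -- illustrative only
    propositions = [
        ("a fox saw some grapes", "main idea"),
        ("the grapes hung high on a vine", "detail"),
        ("the fox jumped for the grapes", "main idea"),
        ("the fox gave up", "main idea"),
        ("the fox said the grapes were sour", "detail"),
    ]

    def score_recall(transcript, propositions):
        """Proportion of propositions recalled at each level of detail.
        Simple substring matching is a crude stand-in for a human scorer."""
        recalled, total = defaultdict(int), defaultdict(int)
        text = transcript.lower()
        for prop, level in propositions:
            total[level] += 1
            if prop in text:
                recalled[level] += 1
        return {level: recalled[level] / total[level] for level in total}

    print(score_recall("A fox saw some grapes, jumped, and then gave up.", propositions))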

We found that recall was poorer for degraded speech, although only at some levels of detail. These findings are broadly consistent with the idea that acoustically degraded speech is cognitively challenging. However, it is important to note that the size of this effect was relatively small: recall was only 4% worse, on average, for the challenging speech. The small effect size suggests that listeners are largely able to compensate for the acoustic challenge.

Interestingly, we also found that a listener's verbal short-term memory ability (assessed by reading span) was correlated with their memory for short stories, especially when the stories were acoustically degraded. Both young and older adults show a fair amount of variability in short-term memory, so this correlation seems to reflect a cognitive ability rather than a simple age effect.

Hearing ability—measured by pure tone average—was not significantly related to recall performance, although there was a trend towards participants with poorer hearing showing worse recall.

One side note to this study: we have provided all of the sound files used in the experiment through our lab website, and I've referenced the GitHub repository that includes my vocoding scripts. One step closer to fully open science!

This article appears as part of a special issue of Experimental Aging Research that I edited in honor of Art Wingfield, my PhD supervisor. There are a number of interesting articles written by folks who have a connection to Art. It was a lot of fun to put this issue together!

Reference:

Ward CM, Rogers CS, Van Engen KJ, Peelle JE (2016) Effects of age, acoustic challenge, and verbal working memory on recall of narrative speech. Experimental Aging Research 42:126–144. doi:10.1080/0361073X.2016.1108785 (PDF)