New paper: Quantifying subjective effort during listening

When we have to understand challenging speech - for example, speech in background noise - we have to “work harder” while listening. A recurring challenge in this area of research is how exactly to quantify this additional cognitive effort. One approach is self-report: simply asking people to rate on a scale how difficult a listening situation was.

Self-report measures can be challenging because they rely on meta-linguistic decisions: that is, I am asking you, as a listener, to have some insight into how difficult something was for you. People may vary in how they map a given level of difficulty onto a number, so even though two people’s brains might have been doing the same thing, the extra step of assigning a number could produce different self-report ratings. Because of this and other challenges, other measures have also been used, including physiological measures like pupil dilation that do not rely on a listener’s decision-making ability.

At the same time, subjective effort is an important measure that likely provides information above and beyond what is captured by physiological measures. For example, personality traits might affect how much challenge a person is subjectively experiencing during listening, and a person’s subjective experience (however closely it ties to physiological measures) is probably what will determine their behavior. So, it would be useful to have a measure of subjective listening difficulty that did not rely on a listener’s overt judgments about listening.

Drew McLaughlin developed exactly such a task, building on elegant work in non-speech domains (McLaughlin et al., 2021). The approach uses a discounting paradigm borrowed from behavioral economics. Listeners are presented with speech at different levels of noise (some easy, some moderate, some difficult). Once they understand how difficult the various conditions are, on every trial they are offered a choice between an easier trial for less money and a more difficult trial for more money (for example: I’ll give you $1.50 to listen to this easy sentence or $2.00 to listen to this hard sentence). We can then use the difference in reward to quantify the additional “cost” of a difficult trial. If I am equally likely to do an easy trial for $1.75 or a hard trial for $2.00, then I am “discounting” the value of the hard trial by $0.25.
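
To make the arithmetic concrete, here is a small Python sketch of how a discount like this could be estimated from choice data, by finding the reward difference at which a participant is indifferent between the easy and hard options. This is not the analysis pipeline from the paper; the data, parameter values, and function names below are invented purely for illustration.

```python
# A toy sketch of estimating a "discount" from choice data.
# NOT the paper's analysis; all data and values here are made up.

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: on each trial the hard option paid this much more than
# the easy option (in dollars), and the participant chose hard (1) or easy (0).
reward_difference = np.tile([0.00, 0.10, 0.20, 0.30, 0.40, 0.50], 20)
rng = np.random.default_rng(0)
true_cost = 0.25  # pretend this participant discounts the hard trial by $0.25
p_hard = 1 / (1 + np.exp(-(reward_difference - true_cost) / 0.05))
chose_hard = rng.binomial(1, p_hard)

def choice_curve(diff, indifference_point, slope):
    """Logistic probability of choosing the hard option for a given reward difference."""
    return 1 / (1 + np.exp(-(diff - indifference_point) / slope))

params, _ = curve_fit(choice_curve, reward_difference, chose_hard,
                      p0=[0.2, 0.1], bounds=([0.0, 0.01], [1.0, 1.0]))
indifference_point, slope = params

# The reward difference at which the participant is equally likely to pick either
# option is the estimated subjective cost ("discount") of the hard condition.
print(f"Estimated discount: ${indifference_point:.2f}")
```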

[Figure: discounting.png]

We found that all listeners showed discounting at more difficult listening levels, with older adults showing more discounting than younger adults. This is consistent with what we know about age-related changes in hearing and in the cognitive abilities important for speech understanding, which would lead us to expect greater effort (and thus more discounting) in older adults.

To complement these group analyses, we also ran some exploratory correlations with working memory and hearing scores. For the older adults, listeners with better working memory showed less discounting (that is, they found the noisy speech easier), and listeners with poorer hearing showed more discounting (they found the task harder). We also looked at a hearing handicap index - a questionnaire assessing subjective hearing and communication function - which likewise correlated with discounting.

[Figure: individual_differences.png]

I’m really excited about this approach because it gives us a way to quantify subjective effort without directly asking participants to rate their own effort. There is certainly no single bulletproof measure of cognitive effort during listening, but we hope this will be a useful tool that provides some unique information.

Reference

McLaughlin DJ, Braver TS, Peelle JE (2021) Measuring the subjective cost of listening effort using a discounting task. Journal of Speech, Language, and Hearing Research 64:337–347. doi:10.1044/2020_JSLHR-20-00086 (PDF)

New paper: Sentence completion norms for 3085 English sentences

As speech researchers we are often interested in how the context in which a word appears influences its processing. For example, in a noisy environment, the word "cat" might be confused with other, similar-sounding words ("cap", "can", "hat", etc.). However, in that same noisy environment, if it were in a sentence such as "The girl did not like dogs but she loved her pet cat", it would be much easier to recognize: the preceding sentence context limits the number of sensible ways to finish the sentence.

One way to measure how predictable a word is at the end of a sentence is to ask people to guess what it is. So, for example, people might see the sentence "The girl did not like dogs but she loved her pet _______" and be asked to write down the first word that comes to mind in finishing the sentence. The proportion of people giving a particular word is then taken to indicate the probability of that word. If 99 out of 100 people (99%) think the sentence ends with "cat", we might assume a 0.99 probability for "cat" being the final word.

Unfortunately, using this approach means that as researchers we are generally limited by the lists of available sentences with this sort of data. A few years ago we discovered we needed a greater variety of sentences for an experiment, and thus was born our new set of norms. Over the course of a summer, undergraduate students in the lab created 3085 sentences. We broke these up into lists of 50 sentences and recruited participants online to fill in sentence-final words. We got at least 100 responses for each sentence: a total of 309 participants (many of whom did more than one list), and over 325,000 total responses.

We then wrote some Python code to tally the responses, including manually checking all of the responses to correct typos, etc. Our hope is that with this large number of sentences and target words, researchers will be able to select stimuli that meet their needs for a variety of experiments.
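
For a rough sense of what that tallying step boils down to, here is a minimal Python sketch (not our actual code; the input format and example responses are invented) that counts responses per sentence and converts them to cloze probabilities:

```python
# A minimal sketch of computing cloze probabilities from raw responses.
# Not the lab's actual tallying code; the input format here is assumed.
from collections import Counter, defaultdict

# Each row: (sentence ID, the word a participant wrote to complete it).
responses = [
    ("s0001", "cat"), ("s0001", "Cat"), ("s0001", "kitten"),
    ("s0002", "rain"), ("s0002", "snow"), ("s0002", "rain"),
]  # in practice these would be read from a file, with 100+ responses per sentence

tallies = defaultdict(Counter)
for sentence_id, word in responses:
    tallies[sentence_id][word.strip().lower()] += 1  # normalize case and whitespace

for sentence_id, counts in sorted(tallies.items()):
    total = sum(counts.values())
    for word, n in counts.most_common():
        # proportion of respondents producing this word = estimated cloze probability
        print(f"{sentence_id}\t{word}\t{n / total:.2f}")
```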

(Although the norms are available on OSF, we are working on making a more user-friendly search interface...hopefully, coming soon.)

Reference

Peelle JE, Miller R, Rogers CS, Spehar B, Sommers M, Van Engen KJ (2019) Completion norms for 3085 English sentence contexts. https://doi.org/10.31234/osf.io/r8gsy

New paper: Concept representation and the dynamic multilevel reactivation framework (Reilly et al.)

I'm fortunate to have as a collaborator Jamie Reilly, who over the past decade has been spearheading an effort to deepen our understanding about how the brain represents concepts (i.e., semantic memory). Our review paper out in Psychonomic Bulletin and Review (Reilly et al., 2016) puts forth the current version of the dynamic multilevel reactivation framework. (It's part of a special issue on concept representation that contains a number of interesting articles.) 

Recent years have seen increasing interest in the idea that concept representations depend in part on modality-specific representations in or near sensory and motor cortex. For example, our concept of a bell includes something about the sound of a bell ringing, which this view suggests is supported by regions coding auditory information. Information from different modalities would also need to be bound together, perhaps in heteromodal regions such as the angular gyrus (Bonner et al., 2013; Price et al., 2015). (Interestingly, Wernicke proposed much the same thing well over 100 years ago, as related in Gage & Hickok 2005. Smart guy!)

A distributed semantics view has intuitive appeal for many aspects of concrete concepts for which we can easily imagine sensory details associated with an object. However, it is much more difficult to apply this distributed sensorimotor approach to abstract concepts such as "premise" or "vapid". Similar challenges arise for encyclopedic (verbal) knowledge. These difficulties suggest that distributed sensorimotor representations are not the only thing supporting semantic memory. An alternative view focuses more on amodal semantic "hub" regions that integrate information across modalities. The existence of hub regions is supported by cases such as semantic dementia (i.e., the semantic variant of primary progressive aphasia), in which patients lose access to concepts regardless of how those concepts are tested. Reconciling the evidence in support of distributed vs. hub-like representations has been one of the most interesting challenges in contemporary semantic memory research.

In our recent paper, we suggest that concepts are represented in a high-dimensional semantic space that encompasses both concrete and abstract concepts. Representations can be selectively activated depending on task demands. Our difficult-to-pronounce but accurate name for this is the "dynamic multilevel reactivation framework" (DMRF).
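
To give a very rough sense of what a "high-dimensional semantic space" with task-dependent activation could look like, here is a toy Python illustration. It is emphatically not an implementation of the DMRF; the feature dimensions, values, and task weightings are invented purely for demonstration.

```python
# A toy illustration of concepts as points in a feature space whose dimensions
# can be weighted by task demands. NOT an implementation of the DMRF;
# all features and numbers are invented for demonstration.
import numpy as np

# Dimensions 0-2: "sensorimotor" features (sound, shape, manipulability).
# Dimensions 3-5: "verbal/associative" features.
concepts = {
    "bell":    np.array([0.90, 0.6, 0.7, 0.3, 0.2, 0.4]),
    "siren":   np.array([0.95, 0.2, 0.1, 0.3, 0.3, 0.5]),
    "premise": np.array([0.00, 0.0, 0.0, 0.9, 0.8, 0.7]),
}

def similarity(a, b, weights):
    """Cosine similarity after weighting dimensions by their task relevance."""
    a, b = a * weights, b * weights
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

sound_task  = np.array([1.0, 0.2, 0.2, 0.2, 0.2, 0.2])  # emphasize acoustic features
verbal_task = np.array([0.2, 0.2, 0.2, 1.0, 1.0, 1.0])  # emphasize verbal features

# Under an acoustic task, "bell" and "siren" look similar; an abstract word like
# "premise" is carried almost entirely by the verbal dimensions.
print(similarity(concepts["bell"], concepts["siren"], sound_task))
print(similarity(concepts["bell"], concepts["premise"], verbal_task))
```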

Although the nature of the link between sensorimotor representations and linguistic knowledge needs to be further clarified, we think a productive way forward will be models of semantic memory that parsimoniously account for both "concrete" and "abstract" concepts within a unified framework.

References:

Bonner MF, Peelle JE, Cook PA, Grossman M (2013) Heteromodal conceptual processing in the angular gyrus. NeuroImage 71:175-186. doi:10.1016/j.neuroimage.2013.01.006 (PDF)

Gage N, Hickok G (2005) Multiregional cell assemblies, temporal binding and the representation of conceptual knowledge in cortex: A modern theory by a "classical" neurologist, Carl Wernicke. Cortex 41:823-832. doi:10.1016/S0010-9452(08)70301-0

Price AR, Bonner MF, Peelle JE, Grossman M (2015) Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. Journal of Neuroscience 35:3276-3284. doi:10.1523/JNEUROSCI.3446-14.2015 (PDF)

Reilly J, Peelle JE, Garcia A, Crutch SJ (2016) Linking somatic and symbolic representation in semantic memory: The dynamic multilevel reactivation framework. Psychonomic Bulletin and Review. doi:10.3758/s13423-015-0824-5 (PDF)

New paper: The neural consequences of age-related hearing loss

I'm fortunate to have stayed close to my wonderful PhD supervisor, Art Wingfield. A couple of years ago Art and I hosted a Frontiers research topic on how hearing loss affects neural processing. One of our goals was to follow the effects from the periphery (i.e., effects in the cochlea) through to higher-level cognitive function.

We've now written a review article that covers these topics (Peelle and Wingfield, 2016). Our theme is one Art has come back to over the years: given the numerous age-related declines in both hearing and cognition, we might expect speech comprehension to be relatively poor in older adults. The fact that comprehension is generally quite good speaks to the flexibility of the auditory system and to compensatory cognitive and neural mechanisms.

A few highlights:

  • Hearing impairment affects neural function at every level of the ascending auditory system, from the cochlea to primary auditory cortex. Although frequently demonstrated using noise-induced hearing loss, many of the same effects are seen with age-related hearing impairment.

  • Functional brain imaging in humans routinely shows that when speech is acoustically degraded, listeners engage more regions outside the core speech network, suggesting that this additional activation helps compensate for the reduced acoustic information. (An important caveat is that task effects have to be considered.)

  • Moving forward, an important goal will be understanding how individual differences in both hearing and cognitive abilities affect the brain networks listeners use to process spoken language.


We had fun writing this paper, and hope it's a useful resource!


References:

Peelle JE, Wingfield A (2016) The neural consequences of age-related hearing loss. Trends in Neurosciences 39:486–497. doi:10.1016/j.tins.2016.05.001 (PDF)