New paper: Quantifying subjective effort during listening

Understanding challenging speech - for example, speech in background noise - requires us to “work harder” as listeners. A recurring challenge in this area of research is exactly how to quantify this additional cognitive effort. One approach that has been used is self-report: simply asking people to rate on a scale how difficult a listening situation was.

Self-report measures can be challenging because they rely on meta-linguistic decisions: that is, I am asking you, as a listener, to have some insight into how difficult something was for you. People may vary in how they map a given difficulty onto a number, so even though two people’s brains might have been doing the same thing, the extra step of assigning a number might produce different self-report ratings. Because of this and other challenges, other measures have also been used, including physiological measures like pupil dilation that do not rely on a listener’s decision making.

At the same time, subjective effort is an important measure that likely provides information above and beyond what is captured by physiological measures. For example, personality traits might affect how much challenge a person subjectively experiences during listening, and a person’s subjective experience (however closely it ties to physiological measures) is probably what will determine their behavior. So, it would be useful to have a measure of subjective listening difficulty that does not rely on a listener’s overt judgments about their own effort.

Drew McLaughlin developed exactly such a task, building on elegant work in non-speech domains (McLaughlin et al., 2021). The approach uses a discounting paradigm borrowed from behavioral economics. Listeners are presented with speech at different levels of noise (some easy, some moderate, some difficult). Once they understand how difficult the various conditions are, on every trial they are given a choice between performing an easier trial for less money or a more difficult trial for more money (for example: I’ll give you $1.50 to listen to this easy sentence or $2.00 to listen to this hard sentence). We can then use the difference in reward to quantify the additional “cost” of a difficult trial. If I am equally likely to do an easy trial for $1.75 or a hard trial for $2.00, then I am “discounting” the value of the hard trial by $0.25.
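
For the curious, here is what that arithmetic looks like in code - a minimal sketch using the hypothetical dollar amounts from the example above, not the actual analysis code from the study:

```python
# Minimal sketch of the discounting arithmetic, using the hypothetical
# dollar amounts from the example above (not data or code from the study).

def discounting(hard_offer, easy_offer_at_indifference):
    """Extra "cost" of the hard trial: the reward a listener gives up
    to do the easy trial instead."""
    return hard_offer - easy_offer_at_indifference

# Equally likely to take the easy trial at $1.75 or the hard trial at
# $2.00, so the hard trial is discounted by $0.25.
print(discounting(hard_offer=2.00, easy_offer_at_indifference=1.75))  # 0.25
```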

[Figure: discounting]

We found that all listeners showed discounting at more difficult listening levels, with older adults showing more discounting than young adults. This is consistent with what we know about age-related changes in hearing and cognitive abilities important for speech that would lead us to expect greater effort (and thus more discounting) in older adults.

To complement these group analyses, we also performed some exploratory correlations with working memory and hearing scores. For the older adults, we found that listeners with better working memory showed less discounting (that is, they found the noisy speech easier), whereas listeners with poorer hearing showed more discounting (found the task harder). We also looked at a hearing handicap index - a questionnaire assessing subjective hearing and communication function - which also correlated with discounting.
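
For readers who want a concrete picture, an exploratory correlation of this kind might be computed as in the sketch below - the variable names and values are made up for illustration, not data from the study:

```python
# Sketch of an exploratory correlation between working memory and
# discounting. All names and values here are hypothetical.
from scipy.stats import spearmanr

working_memory = [3.1, 4.2, 2.8, 5.0, 3.6, 4.8]      # hypothetical scores
discounting = [0.40, 0.25, 0.55, 0.10, 0.20, 0.15]   # dollars discounted

rho, p = spearmanr(working_memory, discounting)
print(f"rho = {rho:.2f}, p = {p:.3f}")
# A negative rho would mean: better working memory, less discounting.
```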

[Figure: individual differences]

I’m really excited about this approach because it gives us a way to quantify subjective effort without directly asking participants to rate their own effort. There is certainly no single bulletproof measure of cognitive effort during listening, but we hope this will be a useful tool that provides some unique information.

Reference

McLaughlin, D. J., Braver, T. S., & Peelle, J. E. (2021). Measuring the subjective cost of listening effort using a discounting task. Journal of Speech, Language, and Hearing Research, 64, 337–347. https://doi.org/10.1044/2020_JSLHR-20-00086

New grant: Cognitive effort in listening and beyond

Dr. Peelle and Dr. Braver (Department of Psychological and Brain Sciences) were recently awarded a $432,938 grant from the National Institutes of Health (NIH) to support a project titled “Healthy Aging and the Cost of Cognitive Effort.” This project will investigate the ways in which people take on cognitively demanding activities - including listening to speech in noise (for example, having dinner at a noisy restaurant). Better understanding why people engage in these activities (or choose not to) will help us figure out how to make them easier.

New paper: Sentence completion norms for 3085 English sentences

As speech researchers we are often interested in how the context in which a word appears influences its processing. For example, in a noisy environment, the word "cat" might be confused with other, similar-sounding words ("cap", "can", "hat", etc.). However, in that same noisy environment, if it appeared in a sentence such as "The girl did not like dogs but she loved her pet cat", it would be much easier to recognize: the preceding sentence context limits the number of sensible ways to finish the sentence.

One way to measure how predictable a word is at the end of a sentence is to ask people to guess what it is. So, for example, people might see the sentence "The girl did not like dogs but she loved her pet _______" and be asked to write down the first word that comes to mind in finishing the sentence. The proportion of people giving a particular word is then taken to indicate the probability of that word. If 99 out of 100 people (99%) think the sentence ends with "cat", we might assume a 0.99 probability for "cat" being the final word.
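
In code, this arithmetic is just counting - here is a minimal sketch with hypothetical responses (not our actual norming code):

```python
# Minimal sketch of the cloze-probability arithmetic, with hypothetical
# responses (not the lab's actual norming code).
from collections import Counter

responses = ["cat"] * 99 + ["hamster"]   # 100 completions for one context

counts = Counter(responses)
cloze = {word: n / len(responses) for word, n in counts.items()}
print(cloze["cat"])  # 0.99
```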

Unfortunately, using this approach means that as researchers we are generally limited to the lists of available sentences with this sort of data. A few years ago we discovered we needed a greater variety of sentences for an experiment, and thus was born our new set of norms. Over the course of a summer, undergraduate students in the lab created 3085 sentences. We broke these up into lists of 50 sentences and recruited participants online to fill in sentence-final words. We collected at least 100 responses for each sentence from a total of 309 participants (many of whom did more than one list), yielding over 325,000 total responses.

We then wrote some Python code to tally the responses, and manually checked all of them to correct typos and other irregularities. Our hope is that with this large number of sentences and target words, researchers will be able to select stimuli that meet their needs for a variety of experiments.
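
As a rough illustration of the tallying step (this is our own sketch, not the code actually used to build the norms), responses can be normalized before counting, with rare responses flagged for manual typo checking:

```python
# Sketch of the tallying step: normalize responses, count them, and
# flag rare ones for manual review. Illustrative only.
from collections import Counter

def normalize(response):
    """Lowercase and strip whitespace and trailing punctuation."""
    return response.strip().lower().rstrip(".!?,")

def tally(raw_responses, review_threshold=2):
    counts = Counter(normalize(r) for r in raw_responses)
    needs_review = [w for w, n in counts.items() if n < review_threshold]
    return counts, needs_review

counts, needs_review = tally(["Cat", "cat.", "cta", "hat"])
print(counts)        # Counter({'cat': 2, 'cta': 1, 'hat': 1})
print(needs_review)  # ['cta', 'hat'] - candidates for typo correction
```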

(Although the norms are available on OSF, we are working on making a more user-friendly search interface...hopefully coming soon.)

Peelle, J. E., Miller, R., Rogers, C. S., Spehar, B., Sommers, M., & Van Engen, K. J. (2019, September 4). Completion norms for 3085 English sentence contexts. https://doi.org/10.31234/osf.io/r8gsy

#snlmtg17photo contest winners!

Thanks to everyone who participated in the unofficial photo contest at the Society for the Neurobiology of Language conference! There were a lot of great entries, but I've managed to select some winners.

Grand prize

The top prize goes to Ethan Weed for this beautiful picture of a brain coral. This ticks all the boxes: a beautiful picture, from a memorable reception, and...BRAINS (even if they're coral)!

Runners-up