New paper: Acoustic richness modulates networks involved in speech comprehension (Lee et al.)

Many functional imaging studies have investigated the brain networks responding to intelligible speech. Far fewer have looked at how the brain responds to speech that is acoustically degraded, but remains intelligible. This type of speech is particularly interesting, because as listeners we are frequently in the position of hearing unclear speech that we nevertheless understand—a situation even more common for people with hearing aids or cochlear implants. Does the brain care about acoustic clarity when speech is fully intelligible?

We address this question in our new paper, now out in Hearing Research (Lee et al., 2016), in which we played listeners short sentences that varied in both syntactic complexity and acoustic clarity (normal speech vs. 24-channel vocoded speech). We used an ISSS fMRI sequence (Schwarzbauer et al., 2006) to collect the data, allowing us to present the sentences with reduced acoustic noise while still obtaining relatively good temporal resolution (Peelle, 2014).
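For readers unfamiliar with ISSS: unlike conventional sparse imaging, which collects a single volume after each quiet gap, ISSS maintains the scanner's steady-state magnetization during the silent period, so a rapid series of volumes can be collected immediately afterwards. Below is a minimal Python sketch of what a trial timeline looks like under this scheme; the timing values are invented for illustration and are not the parameters from our study.

    # Illustrative ISSS trial timeline. The timing values below are
    # hypothetical, chosen only to show the structure of a trial; they
    # are not the parameters from our protocol.

    SILENT_PERIOD = 4.0  # seconds of scanner silence for sentence playback
    N_VOLUMES = 5        # volumes acquired rapidly after each silent period
    TR = 1.0             # time per volume acquisition (seconds)

    def trial_timeline(trial_onset):
        """Return (event, time) pairs for one ISSS trial."""
        events = [("sentence_onset", trial_onset)]
        acq_start = trial_onset + SILENT_PERIOD
        for vol in range(N_VOLUMES):
            events.append((f"volume_{vol + 1}", acq_start + vol * TR))
        return events

    for event, t in trial_timeline(0.0):
        print(f"{t:5.1f} s  {event}")

The key point is that the sentence plays during the quiet gap, but the hemodynamic response is slow enough that the volumes acquired immediately afterwards still capture the activity it evoked.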

In response to syntactically complex sentences, listeners showed increased activity in large regions of left-lateralized frontoparietal cortex. This finding was expected given previous results from our group and others. In contrast, most of the differences in activity related to acoustic clarity reflected greater activity for the acoustically detailed, normal speech. This was somewhat unexpected, as many studies show an increased response for degraded speech relative to clear speech, but we have some ideas about what might explain our result:

  1. Studies finding degradation-related increases frequently also involve a loss of intelligibility;
  2. We did see some areas of increased activity for the degraded speech; they were just smaller in extent than the increases for the clear speech;
  3. We used noise vocoding to manipulate the acoustic clarity of the speech signal, which reduced cues to the speaker's sex, age, emotional state, and other characteristics.

These results continue an interesting line of work (Obleser et al., 2011) looking at the role of acoustic detail apart from intelligibility. This ties in to prosody and other aspects of spoken communication that go beyond the identity of the words being spoken (McGettigan, 2015).

Overall, we think our finding that large portions of the brain show less activation when less information is available is not as surprising as it first seems, and is extraordinarily relevant for patients with hearing loss or those using an assistive device.

Finally, I'm very happy that we've made the unthresholded statistical maps available on neurovault.org, which is a fantastic resource. Hopefully we'll see more brain imaging data deposited there (from our lab, and others!).

References:

Lee Y-S, Min NE, Wingfield A, Grossman M, Peelle JE (2016) Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research 333:108-117. doi:10.1016/j.heares.2015.12.008 (PDF)

McGettigan C (2015) The social life of voices: Studying the neural bases for the expression and perception of the self and others during spoken communication. Front Hum Neurosci 9:129. doi:10.3389/fnhum.2015.00129

Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. NeuroImage 56:2310-2320. doi:10.1016/j.neuroimage.2011.03.035

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 8:253. doi:10.3389/fnins.2014.00253

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025

New paper: Acoustic challenge affects memory for narrative speech (Ward et al.)

An enduring question for many of us is how relevant our laboratory experiments are for the "real world". In a paper now out in Experimental Aging Research we took a small step towards answering this, in work that Caitlin Ward did as part of her senior honors project a couple of years ago. In this study, participants listened to short stories (Aesop's fables); after each story, they repeated it back as accurately as possible.

We scored each story recall for accuracy, separating the scoring by level of narrative detail (as is frequently done in propositional scoring approaches). The stories were presented as normal speech (acoustically clear) or as noise-vocoded speech, which lacks spectral detail. We predicted that the vocoded speech would require additional cognitive processing to understand, and that this increased cognitive challenge would affect participants' memory for what they heard—something that we often care about in real life.
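If you haven't come across propositional scoring before, here is a toy Python sketch of the general idea; the propositions, levels, and recalled responses below are invented for illustration and are not our actual scoring materials.

    # Toy propositional scoring example (invented materials). Each
    # proposition from a story is tagged with a level of narrative
    # detail, and recall accuracy is computed separately per level.

    story_propositions = {
        "main ideas": ["fox sees grapes", "fox jumps for grapes",
                       "fox gives up"],
        "details": ["grapes hang high", "grapes look ripe",
                    "fox calls grapes sour"],
    }

    # Propositions a (hypothetical) participant reproduced when retelling
    recalled = {"fox sees grapes", "fox jumps for grapes", "grapes hang high"}

    for level, props in story_propositions.items():
        n_hit = sum(p in recalled for p in props)
        print(f"{level}: {n_hit}/{len(props)} recalled "
              f"({100 * n_hit / len(props):.0f}%)")

Scoring at separate levels of detail is what lets us ask whether acoustic challenge affects memory for the gist of a story, its finer details, or both.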

We found that recall was poorer for degraded speech, although only at some levels of detail. These findings are broadly consistent with the idea that acoustically degraded speech is cognitively challenging. However, it is important to note that the size of this effect was relatively small: recall was only 4% worse, on average, for the challenging speech. The small effect size suggests that listeners are largely able to compensate for the acoustic challenge.

Interestingly, we also found that a listener's verbal short-term memory ability (assessed by reading span) was correlated with their memory for the short stories, especially when the stories were acoustically degraded. Both young and older adults show a fair amount of variability in their short-term memory, so this correlation seems to reflect a cognitive ability rather than a simple age effect.

Hearing ability—measured by pure tone average—was not significantly related to recall performance, although there was a trend towards participants with poorer hearing showing worse recall.

One side note to this study: we have provided all of the sound files used in the experiment through our lab website, and I've referenced the github repository that includes my vocoding scripts. One step closer to fully open science!
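For the curious, here is a minimal Python sketch of what a generic noise (channel) vocoder does; this is an illustration of the technique under simplified assumptions, not the code from the repository. The speech is split into frequency bands, the amplitude envelope of each band is extracted, and each envelope is used to modulate band-limited noise.

    # Minimal noise vocoder sketch (illustrative only; not the code
    # from the github repository). Requires numpy and scipy.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=6, lo=100.0, hi=8000.0):
        """Replace spectral detail with envelope-modulated noise, per channel."""
        signal = np.asarray(signal, dtype=float)
        edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
        noise = np.random.randn(len(signal))
        out = np.zeros(len(signal))
        for low, high in zip(edges[:-1], edges[1:]):
            sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)    # band-pass the speech
            env = np.abs(hilbert(band))        # amplitude envelope of the band
            # (real implementations typically also low-pass filter the envelope)
            carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
            out += env * carrier
        # Scale the output to match the RMS level of the original signal
        return out * np.sqrt(np.mean(signal**2) / np.mean(out**2))

The fewer the channels, the less spectral detail survives the process: slower amplitude cues that support intelligibility are preserved, while cues carried by fine spectral structure are degraded.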

This article appears as part of a special issue of Experimental Aging Research that I edited in honor of Art Wingfield, my PhD supervisor. There are a number of interesting articles written by folks who have a connection to Art. It was a lot of fun to put this issue together!

Reference:

Ward CM, Rogers CS, Van Engen KJ, Peelle JE (2016) Effects of age, acoustic challenge, and verbal working memory on recall of narrative speech. Experimental Aging Research 42:126–144. doi:10.1080/0361073X.2016.1108785 (PDF)

Cognitive psychology research assistant job opening at Washington University

UPDATE: Position filled.

(Alternate title: Meet lots of cool people and learn great science at the same time!)

(This is an unofficial announcement for an upcoming opening—an official HR posting will follow at some point, but with less helpful details. We've taken the unusual step here of just writing down what we actually want in a research assistant—it might be a little on the long side but hopefully useful. Please pass this on to anyone you think might be interested!)

We have an exciting new research project and are looking to hire a full-time research assistant. This is a joint project between Jonathan Peelle, Kristin Van Engen, and Mitch Sommers at Washington University in Saint Louis. We are looking at the cognitive and neural systems involved in understanding speech, especially when it is acoustically degraded (due to background noise or hearing loss). If you got the job, you would be located in the Sommers lab in the psychology department on the main campus of Wash U, working closely with all three co-investigators.

Accurately measuring individual differences in cognitive abilities typically requires a lot of data; your primary responsibility would be to collect behavioral data from our research participants (on average 1-2 participants per day). This includes scheduling participants over the phone, running the study, and transferring the data and paperwork afterwards. Running this many participants is a tall order, and requires someone who is naturally very organized and good with people.

By "naturally organized" we don't just mean someone who understands what being organized means, or who can file and alphabetize paperwork; that's true of most of the applicants for this job. We are looking for the kind of person who intuitively designs systems to organize things in life outside of work, because that's how their mind works.

It is also critical that you are comfortable interacting with a range of people. First, because our research team is spread across the university, you'll need to be able to coordinate and communicate with all of us. Second, and more importantly, you'll need to be engaging and friendly with both the undergraduates and the older adults who come in for our study. It is imperative that they feel valued and enjoy their experience, but also that you are able to keep them on task. If you are highly introverted, you'll need to consider whether you can keep up a high level of interaction with participants for long periods of time.

On a related note, engaging our participants in scientific communication is also a big part of the job: Compensation for participating in our experiments is usually modest, but our participants are willing to go out of their way to take part in our project because they are genuinely interested in the work that we do. Therefore, you will need to communicate the purpose and eventual applications of our work to participants during their visit.

Although not required, we anticipate that some post-undergraduate experience will be really helpful in developing the skills necessary for the job. Research experience would be great, but it's more the overall level of maturity and life experience that we think would be useful.

We are asking for a minimum of a 2-year commitment—there will be a significant training period, and we want to make sure you're around to benefit from the environment, and to contribute to the project. If you are considering further education we are confident that the experience (and potential publications) you gain from this time will serve you well. We have a 5-year grant and if all goes well we would love to have you stay part of the team for a long time.

There are other skills and backgrounds that would be useful but not required: any sort of experience with computer programming, statistics, or research design is very relevant, although in practice we appreciate that not everyone has had the chance to gain this experience. A background in psychology or cognitive neuroscience will be extremely useful for understanding the project and contributing to the interpretation of the results. We'd love it if you had all of these qualities, but they aren't strictly required for the daily performance of the job.

If you're not familiar with Saint Louis, it's a great city. None of the main investigators on the grant are natives, but we all like the area: the culture, food, and beer scenes are all excellent, and the overall cost of living is relatively low. Wash U is a great academic institution with good benefits, and a good place to work.

In summary, we are really excited about this project and want to find the right person for the job. We think the most successful candidates will be naturally organized and enthusiastic about the project, and will have excellent interpersonal skills.

For informal inquiries, please send a CV to Jonathan Peelle (peellej at the domain ent.wustl.edu). In your email let us know why you think you'd be a good fit, and what might set you apart from other candidates.

We are looking for the best person for the job, not the person with the "right" background or CV. If you are interested and think you'd do well we really encourage you to apply. We won't be able to interview everyone and we may not interview you, but let us be the ones to make this decision.

An official job posting will be available shortly (we hope). We won't be able to respond personally to all inquiries so please keep an eye on the Wash U human resources page and apply officially if you are interested.

 

New grant funding from NIH

I'm happy to announce that we have just been awarded a five-year research grant from the National Institute on Deafness and Other Communication Disorders (NIDCD) to study some of the neural processes involved in listening effort. My talented co-investigators on the project are Kristin Van Engen and Mitch Sommers from the Psychology Department.

The sobering side of this news is that it remains a very tough funding climate, and there are many talented scientists with great ideas who are not being funded. We count ourselves very fortunate to have the opportunity to pursue this research over the next few years.

The official abstract for the grant follows. We'll be starting the project as soon as we can and will post updates here. Stay tuned!

Approximately 36 million Americans report having some degree of hearing impairment. Hearing loss is associated with social isolation, depression, cognitive decline, and economic cost due to reduced work productivity. Understanding ways to optimize communication in listeners with hearing impairment is therefore a critical challenge for speech perception researchers. A hallmark of recent research has been the development of the concept of listening effort, which emphasizes the importance of cognitive processing during speech perception: Listeners with hearing impairment can often understand spoken language, but with increased cognitive effort, taking resources away from other processes such as attention and memory. Unfortunately, the specific cognitive processes that play a role in effortful listening remain poorly understood. The goal of the current research is to provide a more specific account of the neural and cognitive systems involved in effortful listening, and investigate how these factors affect speech comprehension. The studies are designed around a framework of lexical competition, which refers to how listeners select a correct target word from among the possible words they may have heard (Was that word “cap” or “cat”?). Lexical competition is influenced by properties of single words (words that sound similar to many others, like “cat”, are more difficult to process), the acoustic signal (poorer acoustic clarity makes correct identification more difficult), and individual differences in cognitive processing (lower inhibitory ability makes incorrect targets more likely to be perceived). Neuroanatomically, these processes are supported by dissociable regions of temporal and frontal cortex, consistent with a large-scale cortical network that supports speech comprehension. Importantly, individual differences in both hearing impairment and cognitive ability interact with the type of speech being processed to determine the level of success a listener will have in understanding speech. The current research will involve collecting measures of hearing and cognition in all participants to investigate how individual differences in these measures impact speech perception. Converging evidence from behavioral studies, eyetracking, and functional magnetic resonance imaging (fMRI) will be used to explore the cognitive and neural basis of speech perception. Aim 1 evaluates the relationship between lexical competition and listening effort during speech perception. Aim 2 characterizes multiple cognitive processes involved in processing degraded speech. Aim 3 assesses how individual differences in hearing and cognition predict speech perception, relying on a framework of lexical competition to inform theoretical interpretation. These studies will show a relationship between lexical competition and the cognitive processes engaged when processing degraded speech, providing a theoretically-motivated framework to better explain the challenges faced by both normal-hearing and hearing-impaired listeners.
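To make the lexical competition framework concrete, here is a toy Python sketch of counting a word's phonological "neighbors" (words differing by a single phoneme). The mini-lexicon is invented and letters stand in for phonemes; real analyses use phonemic transcriptions and large lexical databases.

    # Toy neighborhood-density sketch (invented mini-lexicon; letters
    # stand in for phonemes). A "neighbor" differs from the target by
    # one substituted, inserted, or deleted phoneme; words with many
    # neighbors face more competition during recognition.

    def is_neighbor(a, b):
        """True if a and b differ by one substitution, insertion, or deletion."""
        if a == b:
            return False
        if len(a) == len(b):  # one substitution
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(len(a) - len(b)) == 1:  # one insertion/deletion
            shorter, longer = sorted((a, b), key=len)
            for i in range(len(longer)):
                if longer[:i] + longer[i + 1:] == shorter:
                    return True
        return False

    lexicon = ["cat", "cap", "cut", "bat", "at", "cast", "dog"]

    for word in ("cat", "dog"):
        neighbors = [w for w in lexicon if is_neighbor(word, w)]
        print(word, "->", neighbors)

In this toy lexicon, "cat" has several neighbors competing for recognition while "dog" has none, which is the sense in which words like "cat" are harder to process, particularly when the acoustic signal is degraded.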