On Friday I had the chance to give a talk at Mizzou. Given that it's less than 2 hours from Saint Louis, I can't believe it took me this long to get out there. It was wonderful to meet so many great folks in cognitive science, including Jeff Johnson, Nelson Cowan, and Jeff Rouder. Hopefully I'll be back before too long!
I'm off to New York for a Think Tank on auditory brainstem implants (ABIs), sponsored by Cochlear. They've brought together a group of surgeons, researchers, and clinicians interested in auditory processing and rehabilitation, with a goal of improving outcomes for patients who receive ABIs. I'm looking forward to learning a lot about ABIs, and hopefully helping to generate some useful ideas for the future!
I've just had a wonderful visit to the University of Maryland, where I gave a talk as part of the Neuroscience and Cognitive Science seminar series. (Luckily for the audience, I never was able to think of an appropriate April Fool's joke to incorporate into my talk.) Stefanie Kuchinsky hosted me, and I had a great time meeting so many interesting folks—my mind has expanded. Thanks for a great trip!
The Grossman Lab at the University of Pennsylvania is seeking a motivated and enthusiastic Postdoctoral Research Fellow to contribute to a range of research projects investigating the neurobiology of language. Applicants should have completed a PhD in neuroscience, psychology, or an equivalent field, and have proven technical ability in image analysis and a demonstrated publication record. This position is funded in part through a collaborative grant on aging and speech comprehension with Jonathan Peelle (Washington University in Saint Louis) and Art Wingfield (Brandeis University). We are interested in the neurobiological basis of the interaction of acoustic challenges (such as background noise or hearing loss) and linguistic factors (such as syntactic complexity or semantic predictability).
The University of Pennsylvania is a leading center in human brain imaging, with access to advanced MRI and PET imaging. The lab studies language and cognitive processing in healthy adults, normal aging, and neurodegenerative disease using converging evidence from multiple methods. There may also be opportunity for outstanding candidates to develop new projects and obtain competitive funding based on their own research interests, in alignment with the goals and interests of the lab. Philadelphia is an outstanding city with extraordinary cultural resources.
Primary responsibilities in this position include the analysis, interpretation, and writing up of functional and structural MRI data relating to the neural systems supporting speech processing in young and older adults. Previous experience in these areas is helpful, as is demonstrated independence in conducting analyses and interpreting results. Essential skills are motivation, critical thinking, and a strong record of scientific communication (papers, posters, and talks). Background knowledge in speech or aging, fMRI data analysis, experience with scripting languages (such as Matlab), and familiarity with behavioral statistical analyses (e.g., in R) are highly desirable. The anticipated start date is August 2016.
Informal inquiries can be directed to Murray Grossman (email@example.com).
Many functional imaging studies have investigated the brain networks responding to intelligible speech. Far fewer have looked at how the brain responds to speech that is acoustically degraded, but remains intelligible. This type of speech is particularly interesting, because as listeners we are frequently in the position of hearing unclear speech that we nevertheless understand—a situation even more common for people with hearing aids or cochlear implants. Does the brain care about acoustic clarity when speech is fully intelligible?
We address this question in our new paper now out in Hearing Research (Lee et al., 2016), in which we played short sentences for listeners that varied in both syntactic complexity and acoustic clarity (normal speech vs. 24-channel vocoded speech). We used an ISSS fMRI sequence (Schwarzbauer et al., 2006) to collect data, allowing us to present the sentences with reduced acoustic noise but still obtain relatively good temporal resolution (Peelle, 2014).
In response to syntactically complex sentences, listeners showed increased activity in large regions of left-lateralized frontoparietal cortex. This finding was expected given previous results from our group and others. In contrast, most of the activity differences related to acoustic clarity reflected greater activity for the acoustically detailed, normal speech. Although this was somewhat unexpected, as many studies show increased responses for degraded speech relative to clear speech, we have some ideas as to what might explain our result:
- Studies finding degradation-related increases frequently also involve a loss of intelligibility;
- We did see some areas of increased activity for the degraded speech; they were just smaller than the increases for the normal speech;
- We used noise vocoding to manipulate the acoustic clarity of the speech signal, which reduced cues to the sex, age, emotion, and other characteristics of the speaker.
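For readers unfamiliar with the technique, the logic of a noise vocoder can be sketched in a few lines: filter the signal into a set of frequency bands, extract each band's amplitude envelope, and use that envelope to modulate band-limited noise, discarding the spectral fine structure. The sketch below is a generic illustration rather than our actual scripts; the channel count, filter order, and frequency range are placeholder choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=24, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode a 1-D audio signal (illustrative sketch).

    Splits the signal into n_channels logarithmically spaced bands,
    extracts each band's amplitude envelope (via the Hilbert transform,
    a simplification; implementations often low-pass the envelope), and
    uses it to modulate band-limited noise.
    """
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # band boundaries
    noise = np.random.default_rng(0).standard_normal(len(signal))
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)       # band-pass the speech
        env = np.abs(hilbert(band))           # amplitude envelope
        carrier = sosfiltfilt(sos, noise)     # band-limited noise carrier
        out += env * carrier
    # Match the overall RMS level of the input
    out *= np.sqrt(np.mean(signal**2)) / np.sqrt(np.mean(out**2))
    return out
```

The key point for the finding above is visible in the code: only the slow envelope in each band survives, so voice characteristics carried by spectral detail are largely lost even when intelligibility is preserved.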
These results continue an interesting line of work (Obleser et al., 2011) looking at the role of acoustic detail apart from intelligibility. This ties in to prosody and other aspects of spoken communication that go beyond the identity of the words being spoken (McGettigan, 2015).
Overall, we think our finding that large portions of the brain show less activation when less information is available is not as surprising as it seems, and extraordinarily relevant for patients with hearing loss or using an assistive device.
Finally, I'm very happy that we've made the unthresholded statistical maps available on neurovault.org, which is a fantastic resource. Hopefully we'll see more brain imaging data deposited there (from our lab, and others!).
Lee Y-S, Min NE, Wingfield A, Grossman M, Peelle JE (2016) Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research 333:108-117. doi:10.1016/j.heares.2015.12.008 (PDF)
McGettigan C (2015) The social life of voices: Studying the neural bases for the expression and perception of the self and others during spoken communication. Front Hum Neurosci 9:129. doi:10.3389/fnhum.2015.00129
Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. NeuroImage 56:2310-2320. doi:10.1016/j.neuroimage.2011.03.035
Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 8:253. doi:10.3389/fnins.2014.00253
Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025
An enduring question for many of us is how relevant our laboratory experiments are for the "real world". In a paper now out in Experimental Aging Research we took a small step towards answering this, in work that Caitlin Ward did as part of her senior honors project a couple of years ago. In this study, participants listened to short stories (Aesop's fables); after each story, they repeated it back as accurately as possible.
We scored each story recall for accuracy, separating scoring for different levels of narrative detail (as frequently done in so-called propositional scoring approaches). The stories were presented as normal speech (acoustically clear) or as noise-vocoded speech, which lacks spectral detail. We predicted that the vocoded speech would require additional cognitive processes to understand, and that this increased cognitive challenge would affect participants' memory for what they heard—something that we often care about in real life.
We found that recall was poorer for degraded speech, although only at some levels of detail. These findings are broadly consistent with the idea that acoustically degraded speech is cognitively challenging. However, it is important to note that the size of this effect was relatively small: recall was only 4% worse, on average, for the challenging speech. The small effect size suggests that listeners are largely able to compensate for the acoustic challenge.
Interestingly, we also found that a listener's verbal short-term memory ability (assessed by reading span) was correlated with their memory for short stories, especially when the stories were acoustically degraded. Both young and older adults show a fair amount of variability in their short-term memory, so it seems this correlation is more reflective of a cognitive ability than a simple age effect.
Hearing ability—measured by pure tone average—was not significantly related to recall performance, although there was a trend towards participants with poorer hearing showing worse recall.
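For readers who haven't encountered the measure, a pure tone average is simply the mean of a listener's audiometric thresholds at a handful of frequencies. The four-frequency convention below (0.5, 1, 2, and 4 kHz) is one common choice; the exact frequencies vary across studies, and this sketch is illustrative rather than the paper's specific definition.

```python
def pure_tone_average(thresholds_db_hl, freqs=(500, 1000, 2000, 4000)):
    """Mean hearing threshold (dB HL) across the given frequencies (Hz).

    thresholds_db_hl: dict mapping frequency (Hz) -> threshold (dB HL).
    The four-frequency set here is one common convention; studies differ.
    """
    return sum(thresholds_db_hl[f] for f in freqs) / len(freqs)

# Example: thresholds rising with frequency, a typical age-related pattern
pta = pure_tone_average({500: 10, 1000: 15, 2000: 30, 4000: 45})  # 25.0 dB HL
```

Higher PTA values indicate poorer hearing, which is why the trend noted above runs in the direction of poorer hearing accompanying worse recall.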
One side note to this study: we have provided all of the sound files used in the experiment through our lab website, and I've referenced the GitHub repository that includes my vocoding scripts. One step closer to fully open science!
This article appears as part of a special issue of Experimental Aging Research that I edited in honor of Art Wingfield, my PhD supervisor. There are a number of interesting articles written by folks who have a connection to Art. It was a lot of fun to put this issue together!
Ward CM, Rogers CS, Van Engen KJ, Peelle JE (2016) Effects of age, acoustic challenge, and verbal working memory on recall of narrative speech. Experimental Aging Research 42:126–144. doi:10.1080/0361073X.2016.1108785 (PDF)