New paper: Acoustic challenge affects memory for narrative speech (Ward et al.)

An enduring question for many of us is how relevant our laboratory experiments are for the "real world". In a paper now out in Experimental Aging Research we took a small step towards answering this, in work that Caitlin Ward did as part of her senior honors project a couple of years ago.  In this study, participants listened to short stories (Aesop's fables); after each story, they repeated it back as accurately as possible.

We scored each story recall for accuracy, scoring separately for different levels of narrative detail (as is frequently done in so-called propositional scoring approaches). The stories were presented as normal speech (acoustically clear) or as noise-vocoded speech, which lacks spectral detail. We predicted that the vocoded speech would require additional cognitive processes to understand, and that this increased cognitive challenge would affect participants' memory for what they heard—something that we often care about in real life.

We found that recall was poorer for degraded speech, although only at some levels of detail. These findings are broadly consistent with the idea that acoustically degraded speech is cognitively challenging. However, it is important to note that the size of this effect was relatively small: recall was only 4% worse, on average, for the challenging speech. The small effect size suggests that listeners are largely able to compensate for the acoustic challenge.

Interestingly, we also found that a listener's verbal short-term memory ability (assessed by reading span) was correlated with their memory for the short stories, especially when the stories were acoustically degraded. Both young and older adults show a fair amount of variability in short-term memory, so this correlation seems to reflect a cognitive ability rather than a simple age effect.

Hearing ability—measured by pure tone average—was not significantly related to recall performance, although there was a trend towards participants with poorer hearing showing worse recall.

As a side note to this study, we have made all of the sound files used in the experiment available through our lab website, and I've referenced the GitHub repository that includes my vocoding scripts. One step closer to fully open science!
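For readers curious what noise vocoding involves, here is a minimal sketch of the general technique in Python (using NumPy and SciPy). This is a hypothetical illustration, not the lab's actual scripts: the signal is filtered into logarithmically spaced frequency bands, each band's amplitude envelope is extracted and used to modulate band-limited noise, and the modulated bands are summed. The function name `noise_vocode` and all parameter defaults are assumptions for this example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=8000.0):
    """Noise-vocode a speech signal (illustrative sketch).

    Splits the signal into n_channels log-spaced bands, extracts each
    band's amplitude envelope, and uses it to modulate band-limited
    noise. The output preserves temporal envelope cues but discards
    fine spectral detail.
    """
    # Logarithmically spaced band edges from f_lo to f_hi
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Band-pass filter both the speech and the noise carrier
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        env = np.abs(hilbert(band))       # amplitude envelope of this band
        carrier = filtfilt(b, a, noise)   # band-limited noise carrier
        out += env * carrier              # envelope-modulated noise
    # Match overall RMS level to the input
    out *= np.sqrt(np.mean(signal ** 2) / np.mean(out ** 2))
    return out
```

With a single channel (as in the control condition of the optical imaging study described below on this page), essentially all spectral detail is removed while the overall temporal envelope is preserved; with more channels, intelligibility increases.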

This article appears as part of a special issue of Experimental Aging Research that I edited in honor of Art Wingfield, my PhD supervisor. There are a number of interesting articles written by folks who have a connection to Art. It was a lot of fun to put this issue together!

Reference:

Ward CM, Rogers CS, Van Engen KJ, Peelle JE (2016) Effects of age, acoustic challenge, and verbal working memory on recall of narrative speech. Experimental Aging Research 42:126–144. doi:10.1080/0361073X.2016.1108785 (PDF)

Cognitive psychology research assistant job opening at Washington University

UPDATE: Position filled.

(Alternate title: Meet lots of cool people and learn great science at the same time!)

(This is an unofficial announcement for an upcoming opening—an official HR posting will follow at some point, but with less helpful details. We've taken the unusual step here of just writing down what we actually want in a research assistant—it might be a little on the long side but hopefully useful. Please pass this on to anyone you think might be interested!)

We have an exciting new research project and are looking to hire a full-time research assistant. This is a joint project between Jonathan Peelle, Kristin Van Engen, and Mitch Sommers at Washington University in Saint Louis. We are looking at the cognitive and neural systems involved in understanding speech, especially when it is acoustically degraded (due to background noise or hearing loss). If you got the job you would be located in the Sommers lab in the psychology department on the main campus of Wash U, working closely with all 3 co-investigators.

Accurately measuring individual differences in cognitive abilities typically requires a lot of data; your primary responsibility would be to collect behavioral data from our research participants (on average 1-2 participants per day). This includes scheduling participants over the phone, running the study, and transferring the data and paperwork afterwards. Running this many participants is a tall order, and requires someone who is naturally very organized and good with people.

By "naturally organized" we don't just mean someone who understands what being organized means, or who can file and alphabetize paperwork; that's true of most of the applicants for this job. We are looking for the kind of person who intuitively designs systems to organize things in life outside of work, because that's how their mind works.

It is also critical that you are comfortable interacting with a range of people. First, because our research team is spread out across the university, you'll need to be able to coordinate and communicate with all of us. Second, and more importantly, you'll need to be engaging and friendly with both the undergraduates and older adults who come in for our study. It is imperative that they feel valued and enjoy their experience, but also that you are able to keep them on task. If you are highly introverted, you'll need to consider whether you can keep up a high level of interaction with participants for long periods of time.

On a related note, engaging our participants in scientific communication is also a big part of the job: Compensation for participating in our experiments is usually modest, but our participants are willing to go out of their way to take part in our project because they are genuinely interested in the work that we do. Therefore, you will need to communicate the purpose and eventual applications of our work to participants during their visit.

Although not required, we anticipate that having some post-undergraduate experience will be really helpful in developing the skills necessary for the job. Although research experience would be great, it's more the overall level of maturity and life experience we think would be useful.

We are asking for a minimum of a 2-year commitment—there will be a significant training period, and we want to make sure you're around to benefit from the environment, and to contribute to the project. If you are considering further education we are confident that the experience (and potential publications) you gain from this time will serve you well. We have a 5-year grant and if all goes well we would love to have you stay part of the team for a long time.

There are other skills and backgrounds that would be useful but are not required:
Any sort of experience with computer programming, statistics, or research design is very relevant, although in practice we appreciate that not everyone has had the chance to gain this experience. A background in psychology or cognitive neuroscience will be extremely useful for understanding the project and contributing to the interpretation of the results. We'd love it if you had all of these qualities, but they aren't strictly required for the daily performance of the job.

If you're not familiar with Saint Louis, it's a great city. None of the main investigators on the grant are natives, but we all like the area: the culture, food, and beer scenes are all excellent, and the overall cost of living is relatively low. Wash U is a great academic institution with good benefits, and a good place to work.

In summary, we are really excited about this project and want to find the right person for the job. We think the most successful candidates will be naturally organized and enthusiastic about the project, and will have excellent interpersonal skills.

For informal inquiries, please send a CV to Jonathan Peelle (peellej at the domain ent.wustl.edu). In your email let us know why you think you'd be a good fit, and what might set you apart from other candidates.

We are looking for the best person for the job, not the person with the "right" background or CV. If you are interested and think you'd do well we really encourage you to apply. We won't be able to interview everyone and we may not interview you, but let us be the ones to make this decision.

An official job posting will be available shortly (we hope). We won't be able to respond personally to all inquiries so please keep an eye on the Wash U human resources page and apply officially if you are interested.

 

New grant funding from NIH

I'm happy to announce that we have just been awarded a five-year research grant from the National Institute on Deafness and Other Communication Disorders (NIDCD) to study some of the neural processes involved in listening effort. My talented co-investigators on the project are Kristin Van Engen and Mitch Sommers from the Psychology Department.

The sobering side of this news is that it remains a very tough funding climate, and there are many talented scientists with great ideas who are not being funded. We count ourselves very fortunate to have the opportunity to pursue this research over the next few years.

The official abstract for the grant follows. We'll be starting the project as soon as we can and will post updates here. Stay tuned!

Approximately 36 million Americans report having some degree of hearing impairment. Hearing loss is associated with social isolation, depression, cognitive decline, and economic cost due to reduced work productivity. Understanding ways to optimize communication in listeners with hearing impairment is therefore a critical challenge for speech perception researchers. A hallmark of recent research has been the development of the concept of listening effort, which emphasizes the importance of cognitive processing during speech perception: Listeners with hearing impairment can often understand spoken language, but with increased cognitive effort, taking resources away from other processes such as attention and memory. Unfortunately, the specific cognitive processes that play a role in effortful listening remain poorly understood. The goal of the current research is to provide a more specific account of the neural and cognitive systems involved in effortful listening, and investigate how these factors affect speech comprehension. The studies are designed around a framework of lexical competition, which refers to how listeners select a correct target word from among the possible words they may have heard (Was that word “cap” or “cat”?). Lexical competition is influenced by properties of single words (words that sound similar to many others, like “cat”, are more difficult to process), the acoustic signal (poorer acoustic clarity makes correct identification more difficult), and individual differences in cognitive processing (lower inhibitory ability makes incorrect targets more likely to be perceived). Neuroanatomically, these processes are supported by dissociable regions of temporal and frontal cortex, consistent with a large-scale cortical network that supports speech comprehension. 
Importantly, individual differences in both hearing impairment and cognitive ability interact with the type of speech being processed to determine the level of success a listener will have in understanding speech. The current research will involve collecting measures of hearing and cognition in all participants to investigate how individual differences in these measures impact speech perception. Converging evidence from behavioral studies, eyetracking, and functional magnetic resonance imaging (fMRI) will be used to explore the cognitive and neural basis of speech perception. Aim 1 evaluates the relationship between lexical competition and listening effort during speech perception. Aim 2 characterizes multiple cognitive processes involved in processing degraded speech. Aim 3 assesses how individual differences in hearing and cognition predict speech perception, relying on a framework of lexical competition to inform theoretical interpretation. These studies will show a relationship between lexical competition and the cognitive processes engaged when processing degraded speech, providing a theoretically-motivated framework to better explain the challenges faced by both normal-hearing and hearing-impaired listeners.

 

New paper: mapping speech comprehension with optical imaging (Hassanpour et al.)

Although fMRI is great for a lot of things, it also presents challenges, especially for auditory neuroscience. Echoplanar imaging is loud, and this acoustic noise can obscure stimuli or change the cognitive demand of a task (Peelle, 2014). In addition, patients with implanted medical devices can't be scanned.

My lab has been working with Joe Culver's optical radiology lab to develop a solution to these problems using high-density diffuse optical tomography (HD-DOT). Similar to fNIRS, HD-DOT uses light spectroscopy to image oxygenated and deoxygenated blood signals, related to the BOLD response in fMRI. HD-DOT also incorporates realistic light models to facilitate source reconstruction, which is hugely important for studies of cognitive function and makes it possible to combine results across subjects. A detailed description of our current large field-of-view HD-DOT system can be found in Eggebrecht et al. (2014).

Because HD-DOT is relatively new, an important first step in using it for speech studies was to verify that it is indeed able to capture responses to spoken sentences, both in terms of effect size and spatial location. Mahlega Hassanpour is a PhD student who enthusiastically took on this challenge. In our paper now out in NeuroImage (Hassanpour et al., 2015), Mahlega used a well-studied manipulation of syntactic complexity, comparing sentences containing subject-relative or object-relative center-embedded clauses (taken from our previous fMRI study; Peelle et al., 2010).

Consistent with previous fMRI work, we found a sensible increase in response from a low-level acoustic control condition (1-channel vocoded speech) to subject-relative sentences to object-relative sentences. The results were seen at both the single-subject level (with some expected noise) and the group level.

We are really glad to see nice responses to spoken sentences with HD-DOT and are already pursuing several other projects. More to come!


References:

Eggebrecht AT, Ferradal SL, Robichaux-Viehoever A, Hassanpour MS, Dehghani H, Snyder AZ, Hershey T, Culver JP (2014) Mapping distributed brain function and networks with diffuse optical tomography. Nature Photonics 8:448-454. doi:10.1038/nphoton.2014.107

Hassanpour MS, Eggebrecht AT, Culver JP, Peelle JE (2015) Mapping cortical responses to speech using high-density diffuse optical tomography. NeuroImage 117:319–326. doi:10.1016/j.neuroimage.2015.05.058 (PDF)

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience 8:253. doi:10.3389/fnins.2014.00253 (PDF)

Peelle JE, Troiani V, Wingfield A, Grossman M (2010) Neural processing during older adults' comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex 20:773-782. doi:10.1093/cercor/bhp142 (PDF)