New paper: Sentence completion norms for 3085 English sentences

As speech researchers we are often interested in how the context in which a word appears influences its processing. For example, in a noisy environment, the word "cat" might be confused with other, similar-sounding words ("cap", "can", "hat", etc.). However, in that same noisy environment, if it were in a sentence such as "The girl did not like dogs but she loved her pet cat", it would be much easier to recognize: the preceding sentence context limits the number of sensible ways to finish the sentence.

One way to measure how predictable a word is at the end of a sentence is to ask people to guess what it is. So, for example, people might see the sentence "The girl did not like dogs but she loved her pet _______" and be asked to write down the first word that comes to mind in finishing the sentence. The proportion of people giving a particular word is then taken to indicate the probability of that word. If 99 out of 100 people (99%) think the sentence ends with "cat", we might assume a 0.99 probability for "cat" being the final word.

Unfortunately, using this approach means that as researchers we are generally limited to the existing sets of sentences for which this sort of data has been collected. A few years ago we discovered we needed a greater variety of sentences for an experiment, and thus was born our new set of norms. Over the course of a summer, undergraduate students in the lab created 3085 sentences. We broke these up into lists of 50 sentences and recruited participants online to fill in sentence-final words. We collected at least 100 responses for each sentence from a total of 309 participants (many of whom completed more than one list), yielding over 325,000 responses in all.

We then wrote some Python code to tally the responses, including manually checking all of the responses to correct typos and other inconsistencies. Our hope is that with this large number of sentences and target words, researchers will be able to select stimuli that meet their needs for a variety of experiments.
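
For readers curious what that tallying step looks like, here is a minimal sketch of computing cloze probabilities from raw responses. The file layout, column names, and the tiny typo-correction table are illustrative assumptions, not our actual pipeline.

    # Minimal sketch: tally sentence-completion responses into cloze
    # probabilities. Assumes a CSV with (hypothetical) columns
    # "sentence_id" and "response".
    import csv
    from collections import Counter, defaultdict

    # Illustrative hand-built corrections of the kind applied after
    # manually checking responses
    CORRECTIONS = {"ct": "cat", "caat": "cat"}

    def normalize(response):
        word = response.strip().lower()
        return CORRECTIONS.get(word, word)

    def cloze_probabilities(csv_path):
        counts = defaultdict(Counter)
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row["sentence_id"]][normalize(row["response"])] += 1
        probs = {}
        for sentence_id, counter in counts.items():
            total = sum(counter.values())
            # Proportion of participants producing each completion
            probs[sentence_id] = {w: n / total for w, n in counter.items()}
        return probs

    if __name__ == "__main__":
        for sid, dist in cloze_probabilities("responses.csv").items():
            word, prob = max(dist.items(), key=lambda kv: kv[1])
            print(f"{sid}\t{word}\t{prob:.2f}")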

(Although the norms are available on OSF, we are working on making a more user-friendly search interface...hopefully, coming soon.)

Peelle, J. E., Miller, R., Rogers, C. S., Spehar, B., Sommers, M., & Van Engen, K. J. (2019, September 4). Completion norms for 3085 English sentence contexts. https://doi.org/10.31234/osf.io/r8gsy

New paper: concept representation and the dynamic multilevel reactivation framework (Reilly et al.)

I'm fortunate to have as a collaborator Jamie Reilly, who over the past decade has been spearheading an effort to deepen our understanding about how the brain represents concepts (i.e., semantic memory). Our review paper out in Psychonomic Bulletin and Review (Reilly et al., 2016) puts forth the current version of the dynamic multilevel reactivation framework. (It's part of a special issue on concept representation that contains a number of interesting articles.) 

Recent years have seen increasing interest in the idea that concept representations depend in part on modality-specific representations in or near sensory and motor cortex. For example, our concept of a bell includes something about the sound of a bell ringing, which this view suggests is supported by regions coding auditory information. Information from different modalities would also need to be bound together, perhaps in heteromodal regions such as the angular gyrus (Bonner et al., 2013; Price et al., 2015). (Interestingly, Wernicke proposed much the same thing well over 100 years ago, as related in Gage & Hickok, 2005. Smart guy!)

A distributed semantics view has intuitive appeal for many aspects of concrete concepts for which we can easily imagine sensory details associated with an object. However, it is much more difficult to apply this distributed sensorimotor approach to abstract concepts such as "premise" or "vapid". Similar challenges arise for encyclopedic (verbal) knowledge. These difficulties suggest that distributed sensorimotor representations are not the only thing supporting semantic memory. An alternative view focuses more on amodal semantic "hub" regions that integrate information across modalities. The existence of hub regions is supported by cases such as semantic dementia (i.e., the semantic variant of primary progressive aphasia), in which patients lose access to concepts regardless of how those concepts are tested. Reconciling the evidence in support of distributed vs. hub-like representations has been one of the most interesting challenges in contemporary semantic memory research.

In our recent paper, we suggest that concepts are represented in a high-dimensional semantic space that encompasses both concrete and abstract concepts. Representations can be selectively activated depending on task demands. Our difficult-to-pronounce but accurate name for this is the "dynamic multilevel reactivation framework" (DMRF).

Although the nature of the link between sensorimotor representations and linguistic knowledge needs to be further clarified, we think a productive way forward will be models of semantic memory that parsimoniously account for both "concrete" and "abstract" concepts within a unified framework.

References:

Bonner MF, Peelle JE, Cook PA, Grossman M (2013) Heteromodal conceptual processing in the angular gyrus. NeuroImage 71:175-186. doi:10.1016/j.neuroimage.2013.01.006 (PDF)

Gage N, Hickok G (2005) Multiregional cell assemblies, temporal binding and the representation of conceptual knowledge in cortex: A modern theory by a "classical" neurologist, Carl Wernicke. Cortex 41:823-832. doi:10.1016/S0010-9452(08)70301-0

Price AR, Bonner MF, Peelle JE, Grossman M (2015) Converging evidence for the neuroanatomic basis of combinatorial semantics in the angular gyrus. Journal of Neuroscience 35:3276-3284. doi:10.1523/JNEUROSCI.3446-14.2015 (PDF)

Reilly J, Peelle JE, Garcia A, Crutch SJ (2016) Linking somatic and symbolic representation in semantic memory: The dynamic multilevel reactivation framework. Psychonomic Bulletin and Review. doi:10.3758/s13423-015-0824-5 (PDF)

New paper: The neural consequences of age-related hearing loss

I'm fortunate to have stayed close to my wonderful PhD supervisor, Art Wingfield. A couple of years ago Art and I hosted a Frontiers research topic on how hearing loss affects neural processing. One of our goals was to follow the effects from the periphery (i.e., the cochlea) through to higher-level cognitive function.

We've now written a review article that covers these topics (Peelle and Wingfield, 2016). Our theme is one Art has come back to over the years: given the numerous age-related declines in both hearing and cognition, we might expect speech comprehension to be relatively poor in older adults. That it is, in fact, generally quite good speaks to the flexibility of the auditory system and to compensatory cognitive and neural mechanisms.

A few highlights:

  • Hearing impairment affects neural function at every level of the ascending auditory system, from the cochlea to primary auditory cortex. Although frequently demonstrated using noise-induced hearing loss, many of the same effects are seen with age-related hearing impairment.

  • Functional brain imaging in humans routinely shows that when speech is acoustically degraded, listeners engage more regions outside the core speech network, suggesting this additional activation may play a compensatory role in making up for the reduced acoustic information. (An important caveat is that task effects need to be considered.)

  • Moving forward, an important effort will be understanding how individual differences in both hearing and cognitive abilities affect the brain networks listeners use to process spoken language.


We had fun writing this paper, and hope it's a useful resource!


References:

Peelle JE, Wingfield A (2016) The neural consequences of age-related hearing loss. Trends in Neurosciences 39:486–497. doi:10.1016/j.tins.2016.05.001 (PDF)

 

New paper: Acoustic richness modulates networks involved in speech comprehension (Lee et al.)

Many functional imaging studies have investigated the brain networks responding to intelligible speech. Far fewer have looked at how the brain responds to speech that is acoustically degraded, but remains intelligible. This type of speech is particularly interesting, because as listeners we are frequently in the position of hearing unclear speech that we nevertheless understand—a situation even more common for people with hearing aids or cochlear implants. Does the brain care about acoustic clarity when speech is fully intelligible?

We address this question in our new paper now out in Hearing Research (Lee et al., 2016), in which we played listeners short sentences that varied in both syntactic complexity and acoustic clarity (normal speech vs. 24-channel vocoded speech). We used an ISSS fMRI sequence (Schwarzbauer et al., 2006) to collect data, allowing us to present the sentences with reduced acoustic noise while still obtaining relatively good temporal resolution (Peelle, 2014).
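
For readers unfamiliar with noise vocoding, here is a rough sketch of the technique in Python: the signal is split into logarithmically spaced frequency bands, each band's amplitude envelope is extracted and used to modulate band-limited noise, and the channels are summed. The band edges, filter order, and envelope-extraction method here are illustrative assumptions, not the parameters used in the study.

    # Illustrative sketch of channel (noise) vocoding; parameters are
    # assumptions, not those used in the paper.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def vocode(signal, fs, n_channels=24, f_lo=100.0, f_hi=7000.0):
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # assumed band edges
        noise = np.random.default_rng(0).standard_normal(len(signal))
        out = np.zeros(len(signal))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))   # amplitude envelope of this band
            carrier = sosfiltfilt(sos, noise)  # band-limited noise carrier
            out += envelope * carrier
        # Roughly match the overall level of the original signal
        return out * np.sqrt(np.mean(signal**2) / np.mean(out**2))

    # Example with a synthetic amplitude-modulated tone standing in for speech
    fs = 22050
    t = np.arange(fs) / fs
    speech = np.sin(2 * np.pi * 150 * t) * (1 + 0.8 * np.sin(2 * np.pi * 4 * t))
    vocoded = vocode(speech, fs)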

In response to syntactically complex sentences, listeners showed increased activity in large regions of left-lateralized frontoparietal cortex. This finding was expected given previous results from our group and others. In contrast, most of the differences related to acoustic clarity reflected greater activity for the acoustically detailed, normal speech. This was somewhat unexpected, as many studies show increased responses for degraded speech relative to clear speech, but we have some ideas about what might explain our result:

  1. Studies finding degradation-related increases frequently also involve a loss of intelligibility;
  2. We did see some areas of increased activity for the degraded speech; they were simply smaller in extent than the increases for the acoustically rich speech;
  3. We used noise vocoding to manipulate the acoustic clarity of the speech signal, which reduced cues to the speaker's sex, age, emotional state, and other characteristics.

These results continue an interesting line of work (Obleser et al., 2011) looking at the role of acoustic detail apart from intelligibility. This ties in to prosody and other aspects of spoken communication that go beyond the identity of the words being spoken (McGettigan, 2015).

Overall, we think our finding that large portions of the brain show less activation when less information is available is not as surprising as it might seem, and it is extraordinarily relevant for patients with hearing loss or those using an assistive device.

Finally, I'm very happy that we've made the unthresholded statistical maps available on neurovault.org, which is a fantastic resource. Hopefully we'll see more brain imaging data deposited there (from our lab, and others!).

References:

Lee Y-S, Min NE, Wingfield A, Grossman M, Peelle JE (2016) Acoustic richness modulates the neural networks supporting intelligible speech processing. Hearing Research 333:108-117. doi:10.1016/j.heares.2015.12.008 (PDF)

McGettigan C (2015) The social life of voices: Studying the neural bases for the expression and perception of the self and others during spoken communication. Front Hum Neurosci 9:129. doi:10.3389/fnhum.2015.00129

Obleser J, Meyer L, Friederici AD (2011) Dynamic assignment of neural resources in auditory comprehension of complex sentences. NeuroImage 56:2310-2320. doi:10.1016/j.neuroimage.2011.03.035

Peelle JE (2014) Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 8:253. doi:10.3389/fnins.2014.00253

Schwarzbauer C, Davis MH, Rodd JM, Johnsrude I (2006) Interleaved silent steady state (ISSS) imaging: A new sparse imaging method applied to auditory fMRI. NeuroImage 29:774-782. doi:10.1016/j.neuroimage.2005.08.025