I gave a talk today at the Centre for Neural Circuits and Behaviour at Oxford University. Though I spent over two years in Cambridge, I had only been to Oxford once before, and that was literally just to stop at the Eagle and Child. So this is my first proper visit to Oxford, and I'm enjoying it very much.
The audience was diverse, as many people at the Centre are doing theoretical work or work in nonhuman systems (including both ferrets and *Drosophila*, i.e., fruit flies). If I'm not mistaken there were also some folks from Experimental Psychology and FMRIB. I tried to give an overview of recent work on the role of ongoing oscillations in speech perception, and to connect this with a somewhat separate line of research showing that degraded or noisy speech requires additional cognitive resources. The links between these two bodies of research are tentative but tantalizing. In any case, it's been a great visit and I'm already looking forward to returning!
Dallas Aging and Cognition Conference
I just got back from a short but productive trip to Dallas for the Dallas Aging and Cognition Conference, hosted by the Center for Vital Longevity at UT Dallas. The program was excellent, with a number of very interesting speakers. If you are interested in cognitive aging, put the 2017 conference on your calendar. Hope to see you there!
New paper: Automatic analysis (aa) for neuroimaging analyses
I'm extra excited about this one! Out now in Frontiers in Neuroinformatics is our paper describing the automatic analysis (aa) processing pipeline (Cusack et al., 2015). aa started at the MRC Cognition and Brain Sciences Unit in Cambridge, spearheaded by Rhodri Cusack and aided by several other contributors. Recent years have seen aa mature into an extremely flexible processing environment. My own commitment to using aa was sealed at the CBU when working on our VBM comparison of 400+ subjects: with aa it was possible to run a full analysis in about a week, with 16-32 compute nodes running full time (don't tell anyone, but I think technically we weren't supposed to use more than 8...). And because we were comparing different segmentation routines (among other things), we ran several of these analyses. Without aa I can't imagine ever doing the study. aa also played a key role in our winning HBM Hackathon entry from 2013 (or, as we affectionately called it, the haackathon).
Based on my own experience I strongly recommend that all neuroimagers learn to use some form of imaging pipeline, and aa is a great choice. For most of us there is a significant upfront investment of time and frustration. However, the payoff is well worth it, both in terms of time (you will end up saving time in the long run) and scientific quality (reproducibility, openness, and fewer opportunities for point-and-click error).
The code for aa is freely available, hosted on github. Links, help, and more can be found on the main aa website: automaticanalysis.org. Comments and suggestions are very welcome, especially for the "getting started" portions (many of which are new).
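For anyone curious what running aa actually looks like, here is a rough Matlab sketch of a minimal user script. The parameter file, tasklist, subject ID, and series numbers are placeholders I've made up for illustration, and the exact arguments vary a bit across aa versions, so treat this as a sketch rather than a working analysis; the examples on the website are the authoritative starting point.

```matlab
% Illustrative aa user script (file names, subject IDs, and series
% numbers are placeholders, not a working analysis).

% Build the aap structure from a parameter file and an XML tasklist
% describing the processing stages to run.
aap = aarecipe('aap_parameters_defaults.xml', 'aap_tasklist_fmri.xml');

% Add a subject and the functional series to analyze (hypothetical values;
% the exact arguments depend on the aa version and your data layout).
aap = aas_addsubject(aap, 'S01', [5 7]);

% Run the pipeline: aa works out stage dependencies and can farm
% stages out to a compute cluster in parallel.
aa_doprocessing(aap);
```

The general pattern is the appealing part: you describe the analysis once in the aap structure, and aa takes care of ordering the stages and running them in parallel.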
By the way, several of the aa team will be at HBM this year, and we are submitting an aa poster as well. Please stop by and say hi!
Reference:
Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild C, Auer T, Linke AC, Peelle JE (2015) Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics 8:90. http://journal.frontiersin.org/Journal/10.3389/fninf.2014.00090/abstract
Wash U postdoctoral fellowship in aging
The Department of Psychology has a postdoctoral fellowship in aging available:
POSTDOCTORAL FELLOWSHIP IN AGING at Washington University in St. Louis, Psychology Department, will be available the Summer of 2015. Fellowships, sponsored by the National Institute on Aging, are for 1 to 3 years and are designed to train psychologists for academic and research careers in the psychology of aging. Fellows carry out their own research under the supervision of a faculty preceptor. Current faculty interests related to aging include memory, attention, emotion, visual perception, hearing, social/personality, clinical psychology, neuropsychology, neuroimaging, and Alzheimer’s disease. Prior training in aging is not required. Fellows must be citizens, noncitizen nationals, or permanent residents of the United States. Send curriculum vitae and three letters of reference to David A. Balota, Ph.D., Department of Psychology (Box 1125), Washington University, One Brookings Drive, St. Louis, MO 63130 or to dbalota@artsci.wustl.edu. Initial review will begin immediately. Washington University is an equal opportunity/affirmative action employer. Employment eligibility verification required on hire.
There are a lot of great resources and investigators at Wash U for someone interested in aging, including the opportunity to work with me as part of a joint project (with a home lab in the psychology department). Please get in touch if this is something you are interested in!
Peelle Lab 2014 Year in Review
The dream of starting the new year with a clean desk seems to be over. Instead of dealing with the many piles of "very important and urgent things" dating back to last January, I'm using that time to reflect on the last 12 months. A few highlights of the year have included:
- Our first postdoc, Chad Rogers, joining the lab in July;
- Two Frontiers review papers that I'm pretty happy with: one on listening effort and accented speech (with Kristin Van Engen), and a second one reviewing methods for auditory fMRI. Hopefully these will be useful to other folks too!
- 5 NIH grant submissions/resubmissions. I'm happy to say that, as stressful as the funding climate can be, I've learned something from every grant I've written, and the process has definitely helped sharpen my own thinking about the theoretical issues involved.
- Several symposium talks, including the Gerontological Society of America, the Society for Neuroscience, and the Max Planck Institute for Psycholinguistics in Nijmegen.
Looking ahead to 2015, there is a lot to be excited about as well. I have two grants getting reviewed in February (an R01 and an R21); both are projects I'm really excited about, and they complement each other nicely. An ongoing collaboration with the optical radiology lab to look at speech comprehension using high-density diffuse optical tomography (HD-DOT) is beginning to yield some nice results, and the first papers from this project should be submitted this spring. (My optimism is probably peaking now, with a trough around the time of February grant submissions...)
Ok, enough reflecting—back to work to help get 2015 off to a good start. May your study sections be kind, your experiments informative, and your reviewers constructive!
Talk at SfN
I'm in Washington DC for the annual meeting of the Society for Neuroscience (SfN). SfN was my first scientific conference (San Diego, 2001) and it always has a special place in my heart: I'm always inspired by the diversity of methods and approaches to studying the brain.
This year I had the pleasure of speaking in a symposium on hearing impairment that Art Wingfield and I organized. Andy King started the symposium off by talking about animal models of hearing loss, and how hearing impairment affects spatial localization abilities. Barb Shinn-Cunningham talked about the impact of hearing loss on auditory attention in human listeners. Paul Miller talked about some elegant cognitive modeling work he has done to tease apart cognitive processes involved in processing degraded speech. And I finished up the session by attempting to summarize the rather large literature on the cognitive effects of acoustic challenge.
One analogy for hearing impairment is blurred vision. Those of us who wear glasses can attest that it's possible to see things without them, but it is more difficult (and we may make more errors!). The analogy to vision is imperfect, however, because we frequently have the opportunity to look at objects around us for as long as we'd like. When listening to degraded speech, we have only a brief moment to understand what we hear. If we are unable to, we need to maintain a representation in memory to have any hope of recovering the meaning.
The simple take-home message is that hearing loss impacts neural functioning at multiple levels, from "low-level" responses in auditory cortex to verbal short-term memory and attention-based performance monitoring in humans. Although the details are still being actively investigated, it was inspiring to see, once again, the multidisciplinary approaches being taken.