Talk at UCL

I gave a talk today at University College London, to the Speech Science Forum. UCL has a strong complement of speech, language, and cognitive scientists and it was a real pleasure to be here. There were a lot of interesting questions afterward, and several people helpfully pointed me towards some additional constraints or predictions that would be useful to consider. 

As a side note, London will be the location of the 2016 annual meeting of the Society for the Neurobiology of Language. It will be a great opportunity to visit this amazing city.

Talk at Oxford University

I gave a talk today at the Centre for Neural Circuits and Behaviour at Oxford University. Though I spent over two years in Cambridge I was only in Oxford once before, and that was literally just to stop at the Eagle and Child. So, this is my first proper visit to Oxford, and I'm enjoying it very much.

The audience was diverse, as many people at the CNCB are doing theoretical work or work in nonhuman systems (including both ferrets and *Drosophila*—fruit flies). If I'm not mistaken there were also some folks from Experimental Psychology and FMRIB. I tried to give an overview of recent work on the role of ongoing oscillations in speech perception, and connect this with a somewhat separate line of research showing that degraded or noisy speech requires additional cognitive resources. The links between these two bodies of research are tentative but also tantalizing. In any case, it's been a great visit and I'm already looking forward to returning!

New paper: Automatic analysis (aa) for neuroimaging analyses

I'm extra excited about this one! Out now in Frontiers in Neuroinformatics is our paper describing the automatic analysis (aa) processing pipeline (Cusack et al., 2015). aa started at the MRC Cognition and Brain Sciences Unit in Cambridge, spearheaded by Rhodri Cusack and aided by several other contributors. Recent years have seen aa mature into an extremely flexible processing environment. My own commitment to using aa was sealed at the CBU when working on our VBM comparison of 400+ subjects: with aa it was possible to run a full analysis in about a week, with 16–32 compute nodes running full time. (Don't tell anyone, but I think technically we weren't supposed to use more than 8...) And, because we were comparing different segmentation routines (among other things), we ran several of these analyses. Without aa I can't imagine ever doing the study. aa also played a key role in our winning HBM Hackathon entry from 2013 (or, as we affectionately called it, the haackathon).

Based on my own experience I strongly recommend that all neuroimagers learn to use some form of imaging pipeline, and aa is a great choice. For most of us there is a significant upfront investment of time and frustration. However, the payoff is well worth it, both in terms of time (you will end up saving time in the long run) and scientific quality (reproducibility, openness, and fewer opportunities for point-and-click error).

The code for aa is freely available, hosted on GitHub. Links, help, and more can be found on the main aa website: automaticanalysis.org. Comments and suggestions are very welcome, especially for the "getting started" portions (many of which are new).

By the way, several members of the aa team will be at HBM this year, and we are submitting an aa poster as well. Please stop by and say hi!

Reference:

Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild C, Auer T, Linke AC, Peelle JE (2015) Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics 8:90. http://journal.frontiersin.org/Journal/10.3389/fninf.2014.00090/abstract