A new series on “Improving Neuroscience”

I (Dr. Bob) am excited to be organizing a new series of papers in eNeuro on Improving Neuroscience.

eNeuro has been leading the way in rigorous and reproducible research for some time. This new series will provide accessible, authoritative, hands-on tutorials on steps you can take to improve your research. We’ll cover sample-size planning, how to demonstrate that an effect is negligible, pre-registration, Bayesian methods, and much more.

If you have a topic you’d love to see covered or a tutorial you’d like to contribute, let me know!

Here is the editorial I wrote announcing this project ​(Calin-Jageman, 2024)​: https://www.eneuro.org/content/11/3/ENEURO.0048-24.2024. This all grew out of the editorial I wrote in 2019 about improving statistical practices in neuroscience ​(Calin-Jageman & Cumming, 2019)​.

Can’t say much yet, but already have a couple of great authors lined up to contribute tutorials. More soon!

  1. Calin-Jageman, R. J. (2024). New eNeuro Series: Improving Your Neuroscience. eNeuro. doi: 10.1523/eneuro.0048-24.2024
  2. Calin-Jageman, R. J., & Cumming, G. (2019). Estimation for Better Inference in Neuroscience. eNeuro. doi: 10.1523/eneuro.0205-19.2019

The ignorome and the joys of discovery

Here’s a very clever paper that combines bibliometrics with second-gen sequencing to define the brain’s “ignorome”: the set of genes that are highly and selectively expressed in the CNS yet poorly studied in the neuroscience literature [cite source=’pubmed’]24523945[/cite]. Out of 650 genes with CNS-specific expression, about 38 have one or fewer neuroscience papers. In contrast to this ignorome, the top 5% of genes account for almost 70% of publications. Even more interesting, biological properties don’t predict which genes are hot vs. ignored (it’s not the number of interacting partners, the number of motifs, etc.). Instead, popularity is best predicted by date of discovery.

There are a couple of interesting things to think about here. First, it seems neuroscientists are guilty of lampposting: looking at what we know how to look at rather than truly exploring. Second, and relatedly, we need to be more open to discovery (something I wish our NIH reviewers would recognize); we shouldn’t be racing to mechanism when we haven’t even fully characterized all the players yet.

Animal models of psychiatric disorders: the science of craziness, or crazy science?

Here’s an interesting trio of articles on animal models of psychiatric disorders. The article “Crazy like a fox” is a bit over the top in places [cite source=’pubmed’]24534739[/cite], but it makes some interesting points about the logical minefield of modelling specifically psychological disorders in animals. As a counterpoint, I’ve also included Koob’s recent review of animal models [cite source=’pubmed’]22608620[/cite] (an update, I believe, of a classic I first read in grad school) and another by Nestler from Nature Neuroscience [cite source=’pubmed’]20877280[/cite]. It’s a topic well worth considering in depth: are animal models truly useful in this domain, or is it somewhat crazy to expect them to be?

NIH chimes in on reproducibility crisis

Now we’re getting somewhere? The NIH has chimed in on the reproducibility crisis in Nature [cite source=’doi’]10.1038/505612a[/cite]. It frankly acknowledges the problem and lists some of the potential causes (pressure to publish, funding, etc.). However, in some important ways, the commentary falls flat. The NIH’s actions will be: better training in statistics, piloting having a reviewer critique the scientific premise of grant applications, and a big data repository. Ho hum. Scientists already know enough about p values to hack them. A grant proposal can be wonderfully sound; it’s the quality of the product produced after the award that’s at issue. And big data repositories for positive and negative findings… not that bold or effective. The overall lack of oomph from this commentary is telegraphed by the claim that the problems are mainly with animal pre-clinical work, since clinical trials have really cleaned up their act (ha!). Well, I guess we’ll see if they have anything more visionary to say at SFN this year. Fingers crossed.

Diagnosing mental illness: more than half a million reasons to worry

Here’s a fascinating article from Perspectives on Psychological Science about the very troubling state of psychiatric diagnosis, using PTSD as the prime example [cite source=’doi’]10.1177/1745691613504115[/cite]. I know DSM bashing is all the rage (looking at you, Tom Insel), but this paper takes the DSM to the woodshed in some new and exhilarating ways. I had my behavioral neuroscience students read it last semester, and it got them appropriately agitated.

Prediction errors and dopamine – new evidence

An important idea in learning is the notion that dopamine signalling from the midbrain represents a prediction error that can guide associative learning. This idea has been around for a while, but evidence so far has been correlational rather than causal. Moreover, some data have suggested that dopamine represents reward and/or reward salience rather than prediction error. A new report from Steinberg et al. provides some causal evidence supporting the idea that dopamine release from the midbrain really does signal prediction error [cite source=’pubmed’]23708143[/cite]. Specifically, the lab used optogenetics to enable light-activated midbrain dopamine release. They then used activation to overcome the classic effect of blocking. The primary paper is cited above, along with a good summary [cite source=’pubmed’]23799468[/cite].
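The logic of the blocking manipulation is easy to see in the classic Rescorla-Wagner model, where learning is driven entirely by prediction error. Here’s a minimal sketch (illustrative parameters, not the actual model from the paper): once cue A fully predicts the reward, a compound AX trial generates almost no prediction error, so cue X learns almost nothing.

```python
# Sketch of Rescorla-Wagner learning illustrating blocking.
# Learning is driven by prediction error: delta = reward - summed prediction.
# Parameters (alpha, trial counts) are arbitrary choices for illustration.

def train(trials, V, alpha=0.3, reward=1.0):
    """Update associative strengths V over trials; each trial is a tuple of cues."""
    for cues in trials:
        prediction = sum(V[c] for c in cues)
        delta = reward - prediction          # prediction error
        for c in cues:
            V[c] += alpha * delta            # delta-rule update
    return V

V = {"A": 0.0, "X": 0.0}
train([("A",)] * 50, V)        # Phase 1: A alone -> reward; V[A] approaches 1
train([("A", "X")] * 50, V)    # Phase 2: AX -> reward; error is near zero
print(V)                       # X stays near zero: it has been "blocked"
```

Steinberg et al.’s trick, in these terms, was to artificially reinstate the dopamine signal (the delta term) during compound trials, which rescued learning about the blocked cue.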

Enjoy!

June Neuroscience Roundup

Hampson et al. (2012) – a paper from last year that had slipped by me, in which the Deadwyler lab shows that their MIMO neural prosthesis can improve performance in primates, not just rats [cite source=’pubmed’]22976769[/cite]. Brave new world.

Kilner (2013) – a short, readable explanation of how MEG and EEG analyses can also be biased by selecting regions for analysis based on group differences (similar to the voodoo correlations paper for fMRI and PET) [cite source=’pubmed’]23639379[/cite].

Carlson (2013) – heartbreaking data and commentary about the rise of antipsychotic medication for treating ADHD [cite source=’pubmed’]23607407[/cite]. Compelling argument that these drugs are being used to compensate for a decrease in the duration and quality of inpatient care, but that this is a poor substitute. Also points out that rage outbursts, though the most prominent clinical symptom for which children are admitted to psychiatric inpatient care, are not well categorized in the DSM nor well researched as a symptom on their own.

Brain magnesium…. magic bullet for Alzheimer’s?

This is an incredibly detailed study showing that magnesium supplements (in a form that crosses the blood-brain barrier) can reverse both the cognitive and molecular problems that occur in a mouse model of Alzheimer’s [cite source=’pubmed’]23658180[/cite]. Couple this with the lab’s previous findings that the same Mg supplement a) enhances the memory of young rats, b) prevents age-related declines in normal adult rats, and c) drastically increases synaptic density in hippocampal cultures, and you’ve got some pretty good buzz for Mg. Unfortunately, eating a banana is unlikely to work due to tight blood-level regulation by the liver and even tighter regulation by the blood-brain barrier. This is an exceptionally detailed and expansive work. To be honest, it’s the type of thing you expect in Neuron or Cell; what’s it doing in J Neurosci?