Catching up on comments for recent papers. Back on 4/26 a *bombshell* paper from Ioannidis and colleagues came out on power and research quality [cite source=’pubmed’]23571845[/cite]. First, the paper summarizes some statistical points which *should* be well known by now: small sample sizes lead to poor power and extreme levels of sampling error in estimating the real effects of a treatment. With small sample sizes, the occasional over-estimate of effect size will lead to publication, but this will be misleading about the true nature of the effect. Moreover, if the overall proportion of true effects being tested is low, then the published literature can end up dominated by false positives: false positives keep appearing at the 0.05 rate, whereas true positives appear only at their base rate times the study's power.
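That last point is just arithmetic, and it's worth making concrete. A minimal sketch of the positive predictive value calculation (the function name and the example numbers are my own illustration, not figures from the paper):

```python
def ppv(power, prior, alpha=0.05):
    """Positive predictive value: the fraction of statistically
    significant results that reflect true effects, given the
    study's power, the prior probability that a tested hypothesis
    is true, and the significance threshold."""
    true_positives = power * prior        # true effects found at base rate * power
    false_positives = alpha * (1 - prior)  # null effects "found" at the alpha rate
    return true_positives / (true_positives + false_positives)

# With 20% power and only 1 in 5 tested hypotheses actually true,
# a "significant" finding is no better than a coin flip:
print(ppv(0.20, 0.20))   # 0.5
```

Under those (illustrative) assumptions, half of the significant results in the literature would be false positives, and lowering power only makes the ratio worse.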
What’s even more electrifying about Button et al. is a set of meta-analyses used to determine the degree to which low power is a problem in neuroscience. In imaging studies, power was typically about 8%; in a large set of animal studies it was between 18% and 31%; and across studies included in neuroscience meta-analyses it was about 21%. That’s astonishing! These data suggest that most neuroscience studies are not at all adequate for accurate study of the effects of interest. Moreover, there were more positive findings reported in these studies than is statistically plausible given their low power.
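To see how small the typical samples must be to land at ~20% power, here's a quick back-of-the-envelope calculation using the normal approximation to a two-sided, two-sample test (my own sketch; the effect size and sample size below are illustrative, not values taken from the paper):

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample test for a
    standardized effect size d with n subjects per group, using
    the normal approximation (z_crit = 1.96 for alpha = 0.05)."""
    se = sqrt(2.0 / n_per_group)        # SE of the difference in standardized units
    z_effect = d / se                   # expected z-score of the observed effect
    return 1.0 - norm_cdf(z_crit - z_effect)

# A "medium" effect (d = 0.5) with 10 subjects per group yields
# roughly 20% power, right around the median Button et al. report:
print(round(approx_power(0.5, 10), 2))
```

In other words, group sizes common in the animal and imaging literature are simply too small to detect effects of plausible magnitude most of the time.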
Overall, Button et al. is a must-read for serious neuroscientists and should become required reading for graduate programs.