Reader Comments

Does the evidence really support a training benefit?

Posted by dsimons on 02 Apr 2014 at 19:22 GMT

I have just written a HI-BAR commentary (Had I Been A Reviewer) on my blog (accessible at http://goo.gl/hTHbt3). The commentary raises a number of issues. A few of them are fundamental concerns about what inferences actually follow from this study. Given these concerns, I do not think the paper permits the conclusion that the game training produced any reliable benefits on the reported outcome measure.

Below I list these concerns; the blog post explains each in more detail and why it matters. I hope the authors will respond by providing more information that will allow readers to better evaluate the findings.

1) The sample sizes of 15 in the training group and 12 in the control group are problematically small, especially for the correlational analyses, but also for the primary analyses.

2) The "limited-contact" control group does not permit an inference that anything specific to the training led to the transfer effects. See http://pps.sagepub.com/co...

3) The paper includes no corrections for multiple tests, and the core findings likely would not be significant with correction.

4) The paper does not report the means and variability for the accuracy data, leaving open the possibility of a speed-accuracy tradeoff.

5) The choices of response-time cutoffs and exclusions were somewhat arbitrary, so it is not clear how robust the effects would be to other cutoffs.

6) The contrasts used to measure alertness and distraction were not defined. Which conditions were compared?

7) The analyses of alertness and distraction do not include a test of the difference between the training and control groups. The fact that the improvement was significant in the training group (but see below) and not significant in the control group does not mean that the difference between the groups was significant; that group difference must be tested directly (see the first sketch after this list).

8) The training improvements for the alertness and distraction outcome measures were reported as p=.05 and p=.04, but they were truncated from p=.0565 and p=.0451. The first was not significant, and truncating p-values is inappropriate. (Note that neither would be significant after correcting for multiple tests; see the second sketch after this list.)

9) The paper reports 20 correlations (each outcome measure with each of the 10 games in the training condition), but does not correct for multiple tests. And correlations based on N=15 are of questionable reliability anyway (see the third sketch after this list). Moreover, correlations between training improvements and improvements on an outcome measure do not provide evidence for the efficacy of training.

10) The conclusion claims support for the idea that training improved "attention filtering," but the study does not test that mechanism (and the evidence that anything improved at all is uncertain).

11) The clinicaltrials.gov registration linked from the paper was posted after the paper was first submitted for publication. It is not a pre-registration.

12) The clinicaltrials.gov registration mentions a number of outcome measures that were not reported in the paper and were not mentioned in the PLOS protocol and CONSORT checklist (in the supplementary materials). If these measures were collected, they should be reported in the paper and in the supplementary materials. It is unclear whether these outcome measures simply were not significant or were withheld for other reasons. In either case, the presence of unreported outcome measures makes it impossible to interpret the p-values for the one outcome measure reported in the paper.

13) The clinicaltrials.gov registration also lists a 24-week testing session that wasn't mentioned in the paper. Was the reported testing session an interim one?
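To illustrate concern 7: a within-group effect that is significant in one group but not in the other does not establish that the groups differ; the difference of the improvements has to be tested directly. A minimal sketch in Python, using made-up improvement scores purely for illustration (not the study's data):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    # Hypothetical pre-to-post improvement scores (arbitrary units), NOT the paper's data
    training = rng.normal(0.6, 1.0, 15)  # n = 15, as in the paper
    control = rng.normal(0.3, 1.0, 12)   # n = 12, as in the paper

    print(stats.ttest_1samp(training, 0))      # the within-group test may reach p < .05
    print(stats.ttest_1samp(control, 0))       # ...while this one may not
    print(stats.ttest_ind(training, control))  # but this between-group test is what the claim requires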
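To illustrate concern 8: even treating only the two reported p-values as the family of tests (the paper ran more), a standard Holm correction leaves neither significant. A minimal sketch, assuming the statsmodels package:

    from statsmodels.stats.multitest import multipletests

    pvals = [0.0565, 0.0451]  # the two reported (untruncated) p-values
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='holm')
    print(p_adj)   # both Holm-adjusted values are about .09
    print(reject)  # neither test survives the correction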
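And to illustrate concern 9: with N=15, even a nominally large correlation is estimated very imprecisely. A quick sketch of a 95% confidence interval via the Fisher z transformation (the r = .5 value is an arbitrary example, not a number from the paper):

    import numpy as np

    r, n = 0.5, 15               # example correlation; n matches the training group
    z = np.arctanh(r)            # Fisher z transform
    se = 1 / np.sqrt(n - 3)
    lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
    print(lo, hi)                # roughly -0.02 to 0.81, i.e., nearly zero to very large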

No competing interests declared.