Reader Comments

Post a new comment on this article

Academic editor reply to criticism of this paper

Posted by deevybee on 27 Feb 2011 at 16:14 GMT

As academic editor of this paper, I had expected it to be controversial, given the striking result and potential implications for application. I’m a congenital sceptic, but it had been through a round of review by two sober individuals who did not find major flaws, and I felt it was methodologically adequate and merited publication. I haven’t used transcranial direct current stimulation (tDCS) in my own research, but I’m interested in possibly doing so in future and know a little about it.
This paper has achieved a startling level of prominence, with over 10,000 article views in the 24 days since publication. Yet not all reaction has been favourable. I’m aware of some blogposts that have been highly critical. This has led me to look again at the paper to judge whether I was wrong to accept it for publication.
But before I summarise my re-evaluation, I’d like to make one other point. There has been only ONE comment on this paper. I have tried to persuade critics on the blogosphere to use the Comments facility to start up a debate, but nobody has. Why not? It does seem an opportunity wasted: it would be a way of drawing the criticisms to the attention of the authors, who could then respond, and it would also mean that the 10,000 people who’d viewed the paper would be aware there was controversy.
A widely-disseminated critique of this paper was in a blogpost on the Guardian Science blog website, by Chris Chambers, Sven Bestmann and Elena Rusconi, entitled, “'Thinking caps' are pseudoscience masquerading as neuroscience”, http://tinyurl.com/6l9xcu... .
The first part of the post criticised the sensationalised media accounts of the research rather than the PLOS One paper; as readers of my own blog (http://deevybee.blogspot.... ) will know, that’s a topic that I’m very interested in, but my concern here is with comments that specifically question the methodology and conclusions of the paper itself. According to Chambers et al, the study “suffers from a catalogue of confounding factors and logical flaws”. In essence, they argue there is a lack of experimental control: “by failing to control for alternative explanations, their results … are open to a multitude of possible interpretations” and that “without appropriate experimental controls, the results are virtually meaningless.”
Nobody seems to disagree that the brain stimulation improved task performance. In fact, in so far as I’m sceptical, it’s that part of the paper that concerned me - I found myself wondering whether such a dramatic effect was just a fluke. I wondered if it would replicate. But that’s why we have statistics, and I didn’t think it was my place as an editor to reject a paper just because I find it implausible if the stats are telling me it’s a big effect that is highly unlikely to have arisen by chance. This is particularly the case for PLOS One, whose policy very much discourages editors from allowing personal prejudices to affect decisions.
The criticisms raised by Chambers and colleagues are different. They contest the claim by Chi and Snyder that stimulation affected ‘insight’. They suggest alternative interpretations that have not been eliminated: (a) participants became less cautious in reaching a decision; (b) they were helped to recall a similar problem seen a few minutes earlier; (c) they were temporarily less distractible; (d) they had dulled hearing; or (e) they were more generally alert.
While I agree that the mechanism of the effect remains unclear, most of these alternatives don’t strike me as plausible. Anyone who has tried the stick problems (which are specified in the paper) will realise that these are problems where it’s not hard to know whether you’re right or wrong. You essentially get stuck, especially if you’ve been primed with a different class of problem, until the moment when you have the ‘aha’ experience. It’s not clear to me how caution in decision-making would be an issue for such a task. As regards (b), the participants haven’t seen similar problems: that’s the whole point. This is a new type of problem and that’s why it’s hard. Hearing is neither here nor there; this is a visual task. Distractibility or alertness could, I agree, be mechanisms underlying the effect. In that case, we might have expected the tDCS to have an equivalent effect regardless of the laterality of stimulation, but I accept we can’t rule out a rather general mechanism on this task - it's possible that the side of stimulation has a differential effect, e.g. on attention.
I’m prepared to accept that this one experiment has not given a watertight account of how tDCS works - there are a number of explanations that remain open. But I don’t regard the results as ‘virtually meaningless’. On the contrary, I would argue this study lays the ground for more work on tDCS to both replicate this study and refine the methods. As the authors themselves pointed out “Further studies using a variety of control tasks are needed to disentangle the specific mechanisms of action and to determine whether the improvement in insight problem solving is task specific or can be widely generalized.”
The other major objections raised by Chambers et al concerned the ethical implications of the work, especially the possibility that attempts to enhance cognitive abilities might have unwanted side effects. I agree. We need to be very cautious. But as someone who works with individuals who have problems in learning, I think we need careful studies of just how safe tDCS is, rather than assuming it will be harmful.
So do I regret accepting this manuscript? I hope that I’m honest enough to be able to accept when I’m wrong, but in this case I think the decision to publish was correct. I do thoroughly agree with critics, however, in my dislike of the subsequent sensationalism surrounding media reports of the findings.

No competing interests declared.

RE: Academic editor reply to criticism of this paper

basimpson replied to deevybee on 07 Jun 2011 at 00:13 GMT

Greetings. I have been extremely interested in this study since it was published some 6 months ago. As a foreign language teacher, I see the effects of past mental templates on students all the time. I can tell them how to pronounce a word time after time, but their mental template of the alphabet (in their own language) dictates the way they repeat what I tell them.
I don't believe they fail to listen; rather, what I tell them is filtered through that particular mental template, so the pronunciation of any given word changes when it is repeated back to me.
Just an hour ago I walked into an English business class to give an exam and a coworker of the students told me that my student was [praɪjiŋ] instead of [preɪjiŋ]. That simple change from one diphthong to another kept me from understanding the man until he put his hands up as if he were [preɪjiŋ]. So, both our mental templates put us at a disadvantage.
Anyway, I hope we can find ways to apply this to learning.
Also, to the authors of this study, could there be any relation between your findings and the concepts of so-called “neuro-linguistic programming”?
It has received what I consider to be similar criticism, i.e. a lack of experimental control in testing its methods.
I am not aware of anyone attempting to prove the concepts but I see a relationship between your ideas and some of the ideas related to neuro-linguistic programming.
I know some of their methods involve incorporating motor tasks that supposedly stimulate the right side of students' brains by having them do exercises with their left hands.
Maybe this could be an area of opportunity. Could we possibly examine the effects of these simple, physical exercises on a more empirical level with the aid of new computer imaging technology?

Aside from that, I find it quite funny that deevybee cites a PLoS One policy which “very much discourages editors from allowing personal prejudices to affect decisions.”

I suggest that PLoS One use the “thinking cap” in order to avoid such policy violations.

No competing interests declared.