Reader Comments

Post a new comment on this article

'Wei Chen', identity theft, and sock puppetry

Posted by PhilipMoriarty on 17 Feb 2015 at 21:40 GMT

Readers of the comments on this paper should be aware that a great deal of underhand abuse of PLOS ONE's commenting system has occurred in recent weeks. This blog post, by Julian Stirling (the lead author of the paper), describes the frustrating issues at length: http://physicsfocus.org/d...

PLOS ONE have once again removed the comments of the person who calls himself 'Wei Chen' because their identity has not been verified and because, as described in Julian's post linked to above, they initially stole the identity of another scientist. It is very likely that this person will once again post comments under a false identity. And then PLOS ONE will remove them. And then...

We have already addressed, at length, the questions/comments of 'Wei Chen' (or whatever identity they choose to adopt). In anticipation of those comments appearing yet again, I have cut-and-pasted our previous responses below. Note that in the comments below I refer to a commenter, GDhror, who has also posted comments under our paper. GDhror is likewise in violation of the PLOS ONE commenting protocols, as their identity cannot be verified.

------------------------------------------------------------------------------------------------------------------------------------------------------------------------

-- Dear Wei Chen,

Thanks for your comments. We have, however, dealt with each of the points you raise multiple times over the past year. Given that you may not be familiar with those detailed responses, I post the relevant links below. Note that many of your comments were also indirectly addressed by the referees of our paper (see https://raphazlab.wordpre...).

--------------------------------------
---> "However, when analyzing images for scanning artifacts they only analyze 1 image (Figure-1) out of many tens that have been published. It is granted that raw data may not be available for all images but surely it is for many"
---------------------------------------

This is simply incorrect. We acquired an archive of the raw data for many of Stellacci et al.'s images (after a long struggle to do so). Those images all suffer from the issues we describe in the paper. Note that there are a wide variety of different artefacts and rudimentary errors in the experimental methodology.

See the PubPeer thread for the preprint version of our paper.

See also the comments in response to GDhror in the comments thread for our paper here at PLOS ONE.

See also this collation of the STM data of Stellacci et al: http://www.nottingham.ac....

-----------------
---> "Then Stirling et al attempt to show with Figure-4 that feature spacing of the stripes "falls within the broad background noise measured for the whole image". This has been heavily criticized already here ...
If one assumes that there is feedback ringing in the whole image, their conclusion regarding broad background noise is not surprising but also irrelevant to the discussed images."
------------------
Addressed in detail in our response to GDhror here at PLOS ONE.

-------------------------------
---> "In Figure-5 Stirling et al show that arithmetic addition of a sets of published trace and retrace images of same area showing stripes leads to particles without stripe....with better resolution shows clear stripes (Moglianetti et al, doi:10.1039/c3sc52595c)."
---------------------------------

This has indeed been discussed at length in the PubPeer thread, as you suggest. As can be seen from that thread, the vast majority of the peers commenting were deeply surprised (to put it mildly) by the assertion that one can just cherry-pick the images that should be added together. If one adopts this entirely biased methodology then one can, of course, interpret pure noise in whatever way one likes! This is the reason why no-one else in the STM literature has previously adopted the approach you (and, interestingly, an unregistered commenter in the PubPeer thread) suggest.

See the section titled "Reducing Noise Pollution" in this post: https://raphazlab.wordpre...

I'll state it again: One can't cherry-pick the data one selects to average. If averaged measurements do not result in a signal then either there is no signal there or the experiment is incapable of detecting it. This is not esoteric measurement science -- it's basic (undergraduate) laboratory science.
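The point about cherry-picking is easy to demonstrate numerically. Here is a minimal NumPy sketch (illustrative only, not taken from our paper): averaging *all* repeated measurements of pure noise drives the result towards zero, whereas selecting only the frames that happen to show a "feature" at a chosen pixel manufactures a spurious signal out of nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, size=(200, 64))  # 200 noise-only line scans

# Unbiased: average every frame -- the noise shrinks as 1/sqrt(N).
honest = frames.mean(axis=0)

# Biased: keep only the frames that happen to be positive at one pixel,
# then average. Pure noise now shows a spurious "feature" there.
picked = frames[frames[:, 32] > 0]
biased = picked.mean(axis=0)

print(abs(honest[32]))  # close to zero
print(biased[32])       # well above zero: a manufactured signal
```

The biased average converges not to zero but to the mean of a half-normal distribution, purely because of the selection step.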

-----
In the next Figure, Figure-6, Stirling et al. show that trace and retrace images sometimes do not match, which is also no news to STM researchers. But Stirling et al do not say that Stellacci and colleagues have published examples of trace and retrace images that show the same features on same place independent of scanning angle (also in Moglianetti et al).
--------

No. Your statement here is clearly and demonstrably wrong. Please read our paper more carefully. We explicitly state that Moglianetti et al. show trace and retrace images with the same features.

Here's the precise quote:

"Although Figure 1 of Moglianetti et al. [29] shows arguably the
most convincing images of nanoparticle sub-structure we have
seen to date in the work of Stellacci and co-authors (the persistence
of features in the trace and retrace images is particularly
compelling)" [p. 11].

It would be helpful for all concerned if you could first read our paper (and the PubPeer thread) in its entirety before posting comments here which we have dealt with at length previously. That will save us all a lot of time.

------
With Figure-7 and Figure-10, Stirling et al argue that PSD analyses show that features with random order can lead to a PSD with a broad shoulder. This is also well known, e.g. PSD of a gel or glass structure has a broad shoulder. But Stirling et al do not say that broad PSD shoulders for homoligand (usually non striped) and mixed ligand striped nanoparticles correspond to different spacings, and that these can be distinguished. This is explained well in Ong et al. (doi 10.1021/nn402414b).
------

Again, your explanation here is simply incorrect. We have dealt with this point time and time again in the PubPeer thread. But you don't even have to visit the PubPeer thread: see the responses to GDhror.
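For readers unfamiliar with the power spectral density (PSD) argument, here is a minimal, purely illustrative NumPy sketch (the widths and spacings are arbitrary, not values from any of the papers under discussion). A genuinely periodic arrangement of features concentrates spectral power in a sharp peak; randomly placed features of the same shape produce only a broad shoulder.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = np.arange(n)

def profile(centres, width=4.0):
    """Height profile: a sum of Gaussian bumps at the given centres."""
    return sum(np.exp(-0.5 * ((x - c) / width) ** 2) for c in centres)

periodic = profile(np.arange(50, n - 50, 32))          # regular 32-px spacing
random_ = profile(rng.uniform(50, n - 50, size=100))   # random placement

def psd(y):
    """Power spectral density via the real FFT (mean removed)."""
    y = y - y.mean()
    return np.abs(np.fft.rfft(y)) ** 2

p_per, p_rnd = psd(periodic), psd(random_)

# Fraction of total power in the single strongest bin: high for a true
# periodic spacing, low for the broad shoulder of random placement.
peakiness_periodic = p_per.max() / p_per.sum()
peakiness_random = p_rnd.max() / p_rnd.sum()
```

The broad shoulder alone therefore tells you nothing about ordered stripes; it is exactly what disordered features produce.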

-----
In Figure 8 it is shown that tip artifacts can generate "Janus " nanoparticles. Agreed, but this does not mean that stripe features with the spacing shown by Stellacci and colleagues can be generated by tip artifacts, or that all the images with striped nanoparticles arise from tip artifacts. If one finds a broken glass on floor, it is possible to conclude that someone might have hit the glass with a hammer, but there are of course more plausible conclusions such as the glass falling from table.
-----------

We didn't suggest otherwise. We simply showed that tip artefacts can generate a wide range of different features and that probe microscopists must take pains to eliminate those artefacts. For the majority of Stellacci et al.'s work, the most rudimentary checks on image consistency and/or the role of the tip have not been carried out. See https://raphazlab.wordpre...
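The standard hard-contact model of how a tip generates artefacts is a grey-scale dilation: the recorded image at each point is the maximum of (surface + tip shape) over the tip footprint. A toy 1D sketch (all numbers invented for illustration) shows a damaged, double-apex tip duplicating a single surface spike into a "feature pair" that is entirely an artefact of the tip:

```python
import numpy as np

n = 256
surface = np.zeros(n)
surface[100:102] = 5.0  # a single narrow spike on a flat surface

# A double-ended (damaged) tip: two apexes separated by 12 pixels.
# -inf marks positions where the tip cannot touch the surface.
tip = np.full(25, -np.inf)
tip[6] = 0.0
tip[18] = 0.0

def dilate(surf, tip):
    """Grey-scale dilation: the hard-contact tip-convolution model."""
    m = len(tip) // 2
    padded = np.pad(surf, m, constant_values=surf.min())
    return np.array([np.max(padded[i:i + len(tip)] + tip)
                     for i in range(len(surf))])

image = dilate(surface, tip)
# The one real spike now appears twice in the image -- a feature pair
# created entirely by the tip geometry, not by the sample.
```

This is why consistency checks (changing the tip, changing the scan angle, comparing trace and retrace) are the rudimentary controls any probe microscopist must run before interpreting small features.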

-------

Stirling et al criticize the work in Moglianetti et al in Figure-9 by writing that "there is strong observer bias in the identification of stripes". However, stripe spacings in Moglianetti are consistent with the spacings in all other works of Stellacci where stripes can be seen by identified by eye without doubt and where observer bias is not possible (example: Ong et al doi:10.1039/c4cc04114c).

----

As I also asked GDhror, please tell me which one of these images (from the work of Ong et al.) shows stripes: https://twitter.com/moria...

One of those images is meant to be from a control (i.e. non-striped) sample. The other ostensibly shows stripes. They are indistinguishable both in real space and reciprocal space.

See also my extensive comments to GDhror on this point.

-------
There are other flaws in this paper that have been discussed elsewhere. The arguments of Stirling et al pick a few images of striped nanoparticles published in the literature and apply one of their battery of criticisms in a selective and biased way; if it is not observer bias, it's feedback ringing or other tip artifacts, or lack of features in a summation or similar PSD broad peaks from random features. The papers of Stellacci et al have analyzed all this points, but Stirling et al have ignored this and applied arguments inconsistenly. Stirling et al also ignored that spacings from PSDs agree for tens of images from different labs in different years. They also "forgot" that observer bias is negligible in the most recent images (notably 10.1039/c4cc04114c) and that feedback ringing has proved to be negligible in many published papers (Moglianetti et al, Ong et al, Jackson et al…).
------------

This is because there are a *wide variety* of basic flaws in data acquisition and data analysis. We state this very clearly in our paper and elsewhere (including the PubPeer thread). We even structure our paper to deal with each of these types of flaw in different sections.

The problem is that when the experiment is carried out correctly (i.e. without the feedback ringing that was present in roughly the first half of Stellacci et al.'s work), no stripes are visible (see https://twitter.com/moria... ).
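For readers who have not encountered feedback ringing: an underdamped feedback loop behaves like a resonant filter, so even pure noise acquired along the fast-scan direction comes out quasi-periodic, with a well-defined "stripe" spacing set by the loop dynamics and scan speed rather than by the sample. A minimal sketch (the pole radius and ringing period below are assumed, illustrative values, not fitted to any published data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2048
noise = rng.normal(size=n)  # featureless input: white noise along a scan line

# Underdamped second-order loop modelled as a resonant IIR filter:
# pole radius r near 1 at angle theta rings with period ~20 pixels.
r, theta = 0.95, 2 * np.pi / 20
y = np.zeros(n)
for i in range(2, n):
    y[i] = 2 * r * np.cos(theta) * y[i - 1] - r ** 2 * y[i - 2] + noise[i]

# The output is quasi-periodic: its power spectrum peaks near the loop's
# resonance, i.e. "stripes" with a well-defined spacing from noise alone.
spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
peak_bin = int(np.argmax(spectrum[1:])) + 1  # expected near n/20
```

A sharp spacing in the data is therefore not, by itself, evidence of a sharp spacing on the sample.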


-----


Best wishes,

Philip

Competing interests declared: I am a co-author of the paper.