Reader Comments

Post a new comment on this article

Accessing 'Technical Report'

Posted by Apis_mellifera on 07 Oct 2010 at 02:32 GMT

Can anyone tell me how to access Ref #20 of this article, the Army 'Technical Report' where all the details of the mass spec are supposedly reported?

Competing interests declared: Also work on honey bee proteomics

RE: Accessing 'Technical Report'

Apis_mellifera replied to Apis_mellifera on 07 Oct 2010 at 21:52 GMT

Sorry, just realized my real name doesn't show up. I am very interested in seeing this Technical Report so if someone can share it with the community that would be fantastic.

Thanks

Leonard Foster
UBC

No competing interests declared.

RE: RE: Accessing 'Technical Report'

Picapung replied to Apis_mellifera on 07 Oct 2010 at 22:28 GMT

I didn't look it up, but you might try the National Technical Information Service (NTIS.gov).

No competing interests declared.

RE: Accessing 'Technical Report'

jdevans replied to Apis_mellifera on 08 Oct 2010 at 03:35 GMT

Hi Leonard,
The proper avenue for requesting such reports is to email CBRN@conus.army.mil and request the public bulletin (in this case ECBC publication TR-814). However, as shown below, this critical bulletin is not actually available, and the liaison said to 'try again in about two months.'
Please, folks: we truly want to believe that IIVs are widespread, or at least to vet the actual data, and time is wasting. It is impossible to do so without any of the baseline data; the MS/MS peptides for the proposed matches to IIV, for starters, and the same for the suggested matches to 900(!) additional microbes. Incredible claims require at least credible evidence, and you should proudly display the data behind your extraordinary claims.
thanks,
Jay Evans (acting as an individual concerned scientist)

--------------------------------------------------------------------------------
From: Henderkott, Patricia A Ms CIV USA [mailto:patricia.henderkott@us.army.mil]
Sent: Mon 9/27/2010 8:14 AM
To: Evans, Jay
Cc: Rasmussen, Emily A CIV USA; Lingard, Twyla J Mrs CIV USA AMC
Subject: RE: CBRN-IRC Inquiry Ticket #10963 FW: ECBC publication? (UNCLASSIFIED)


Dr. Evans,

Per Mr. D'Eramo's below request, this publication is not available at this time. Please resubmit your request at a later time (possibly 2 months) and I will see if review and approval for release has been completed.


Thank you for using the CBRN-Information Resource Center

Patricia Henderkott
IRC Officer
Information Technology Solutions Team
DSN: 793-5680 Comm: 309-782-5680
email: patricia.henderkott@us.army.mil
All CBRN related questions contact the CBRN-IRC 24/7 at JACKS: https://jacks.jpeocbd.arm...
Comm: 309-782-7349 (DSN 793) U.S.A. (toll free) 1-800-831-4408 Germany (toll free) 0130810280 Korea (toll free) 0078-14-800-0335 FAX (DSN) 793-3226 (Commercial) 309-782-3226
mailto:cbrn@conus.army.mil

Competing interests declared: researcher in same field who shares a belief in pathogenic causes for bee declines

RE: Accessing 'Technical Report'

Apis_mellifera replied to Apis_mellifera on 09 Jan 2011 at 05:22 GMT

I have just written a point-by-point technical response to this work that can be found at Mol. Cell. Proteomics: http://www.mcponline.org/...

Leonard Foster

No competing interests declared.

RE: Accessing 'Technical Report'

psommer replied to Apis_mellifera on 02 May 2011 at 01:56 GMT

This paper does not provide sufficient information to judge the validity of the proteomics-based findings presented herein (MSP technology). However, the manuscript suggests that all necessary information is available in the US Army Technical Report (TR-814), which is referenced as a foundation for this work [ref. 20].

Recently I’ve read this report (http://www.dtic.mil/cgi-b...) and found serious errors in the proteomics work performed by the Army laboratory.

For example, the Authors report search results of mass spectra against a small database for tests #5 through #100 in Appendix B. However, they do not provide any information about which samples these tests represent. Nevertheless, these summaries shed light on the Authors' interpretation of SEQUEST results.

(a) Each table in Appendix B contains six columns of SEQUEST output parameters, designated (M + H), ^M, ^Cn, Xcorr, Sp, and RSp. It is troubling that the descriptions of these parameters, provided by the Authors in an introductory table, are incorrect for five of them! For example: ^M (in fact ΔM, or deltaM) is described as "(M + H) – M" instead of as the mass difference between measured and theoretical values; ^Cn (in fact ΔCn, or deltaCn) is described as "Error", while it is actually the difference in cross-correlation score between the top-ranked peptide sequences (normalized to the highest Xcorr value), so a higher value of this parameter is better (just the opposite of what the term "error" suggests!); "Sp" is described as the "Highest peak in a given spectrum" and "RSp" as "Repeat of Sp", while in fact "Sp" is the preliminary score of a candidate peptide used to rank matching sequences, and "RSp" is the rank of a given sequence based on its Sp value.
(b) Most troubling is the description of the Xcorr (cross-correlation) score, which the Authors call the "fitness match". Although the term "fitness match" reflects the meaning of this score, the Authors added the explanation that "numbers > 1.5 are significant", and they actually use this value to sort all matches and as the cut-off criterion for reporting assignments in all Appendix-B tables.

This is a serious flaw! Although SEQUEST assignments with Xcorr > 1.5 are frequently considered correct matches, this criterion is valid ONLY for +1 ions, which usually represent less than 1% of reported assignments. For example, analysis of the results for Tests #6 and #100 (the first and the last tests reported in Appendix B) indicates that 66.1% of the analyzed peptide ions were doubly charged (+2 ions) and 33.1% were triply charged (+3 ions), while +1 ions represented only 0.85% (there is only one +1 ion in Test #6 and none in Test #100).

NOTE: The Washburn-Yates criteria (from the inventors of SEQUEST) require Xcorr values ≥ 1.9, 2.2, and 3.75 for parent-ion charge states +1, +2, and +3, respectively [Washburn et al., Nat. Biotechnol. 2001, 19, 242–247]. Even so, nobody has recommended cut-off criteria lower than Xcorr values of 1.5, 2.2, and 3.3 for charge states +1, +2, and +3, respectively [Durr et al., Nat. Biotechnol. 2004, 22, 985–992]. (In addition, these Xcorr cut-off criteria should be paired with deltaCn values ≥ 0.1.)

What happens if you accept +2 ions with Xcorr > 1.5? In short, you select a set of matches in which false positives can reach 90% (see, for example, Kall et al., J. Proteome Research 2008, 7, 29–34).
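To make the charge dependence concrete, here is a minimal sketch (my own illustration, not the Army laboratory's actual pipeline) of filtering SEQUEST matches with the charge-dependent Washburn-Yates thresholds quoted above, contrasted with a flat Xcorr > 1.5 cut-off of the kind used in TR-814:

```python
# Charge-aware filtering of SEQUEST peptide-spectrum matches, using the
# Washburn-Yates criteria quoted above (Xcorr >= 1.9 / 2.2 / 3.75 for
# +1 / +2 / +3 ions, paired with deltaCn >= 0.1).

WASHBURN_YATES = {1: 1.9, 2: 2.2, 3: 3.75}

def accept(charge, xcorr, delta_cn, thresholds=WASHBURN_YATES):
    """True if a match passes the charge-dependent Xcorr and deltaCn cut-offs."""
    if charge not in thresholds:
        return False
    return xcorr >= thresholds[charge] and delta_cn >= 0.1

# Hypothetical matches: (charge, Xcorr, deltaCn)
matches = [(2, 1.6, 0.25), (2, 2.5, 0.15), (3, 1.7, 0.30), (1, 2.0, 0.12)]

flat = [m for m in matches if m[1] > 1.5]    # flat cut-off: all four pass
strict = [m for m in matches if accept(*m)]  # charge-aware: only two pass
```

Under the flat rule, the +2 ion at Xcorr 1.6 and the +3 ion at Xcorr 1.7 are "significant"; under the charge-dependent rule, both are rejected.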

No competing interests declared.

RE: Accessing 'Technical Report'

psommer replied to Apis_mellifera on 04 May 2011 at 18:48 GMT

Analysis of the US Army Technical Report (TR-814) strongly suggests that serious errors were made in the proteomics work performed by the Army laboratory. As I described a few days ago, the Authors misinterpreted SEQUEST parameters and accepted all spectrum-sequence matches with cross-correlation scores higher than 1.5, then sorted them by Xcorr value and presented them in Appendix B of the report.
SEQUEST-identified spectrum-sequence matches carry arbitrary scores that reflect the quality of the assignments but are not statistically meaningful significance measures. To validate their matches, the Authors used the PeptideProphet algorithm, which assigns probabilities to search results by computing their likelihood of being correct. The PeptideProphet results are reported in the columns marked PP and indicate that all reported matches are correct with probability better than 95%. However, it is puzzling that both +2 and +3 peptide ions with Xcorr values as low as 1.5 and deltaM > 1.0 are assigned 99% probabilities of being correct [even for matches to sequences with missing amino acids, indicated by "*"]. This is surprising because any experienced SEQUEST user would say that such assignments are simply incorrect (see, for example, Kall et al., J. Proteome Research 2008, 7, 29–34). How can these discrepancies be explained?
PeptideProphet employs the EM (expectation-maximization) algorithm to find the score distributions of correct and incorrect matches and to compute the probability that each match is correct. It does so by partitioning the observed distributions into inferred correct and incorrect assignments, which then determine the computed probability that any given result is correct.

Unfortunately, when data quality is questionable, i.e., there are few high-scoring peptides relative to low-scoring ones (e.g., because the database lacks correct matches), PeptideProphet models fit the negative/positive distributions improperly, assigning inflated probabilities to dubious matches. Therefore, according to the developers of this software: "It is important to view distributions of search score and peptide properties learned by the program among correct and incorrect results of each parent charge. This model information …. should be used to be sure the program did an adequate job and as a diagnostic." [Keller and Shteynberg, Methods Mol. Biol. Vol. 694, 169-189 (2011)]. Unfortunately, Bromenshenk et al. ignored this important step; therefore, the SEQUEST-generated peptide-sequence assignments used in this work are not validated!
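To illustrate how such a mixture model assigns posteriors, and why inspecting the fitted distributions matters, here is a simplified stand-in of my own: a two-component Gaussian mixture fit by EM to a hypothetical set of scores. (PeptideProphet's actual model is more elaborate, fitting per-charge-state discriminant scores with a gamma distribution for incorrect matches; this is only a sketch of the general technique.)

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def em_two_gaussians(scores, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM.

    Returns (pi, mu0, s0, mu1, s1, posteriors), where posteriors[i] is the
    probability that scores[i] came from the high-scoring ("correct") component.
    """
    mu0, mu1 = min(scores), max(scores)
    s0 = s1 = (mu1 - mu0) / 4 or 1.0
    pi = 0.5  # mixing weight of the "correct" component
    post = [0.5] * len(scores)
    for _ in range(iters):
        # E-step: posterior that each score is "correct"
        post = []
        for x in scores:
            p1 = pi * normal_pdf(x, mu1, s1)
            p0 = (1 - pi) * normal_pdf(x, mu0, s0)
            post.append(p1 / (p1 + p0) if p1 + p0 > 0 else 0.5)
        # M-step: re-estimate mixing weight, means, and spreads
        n1 = sum(post)
        n0 = len(scores) - n1
        pi = n1 / len(scores)
        mu1 = sum(p * x for p, x in zip(post, scores)) / n1
        mu0 = sum((1 - p) * x for p, x in zip(post, scores)) / n0
        s1 = math.sqrt(sum(p * (x - mu1) ** 2 for p, x in zip(post, scores)) / n1) or 1e-6
        s0 = math.sqrt(sum((1 - p) * (x - mu0) ** 2 for p, x in zip(post, scores)) / n0) or 1e-6
    return pi, mu0, s0, mu1, s1, post

# Hypothetical data: mostly low Xcorr-like scores plus two clear outliers
scores = [1.2, 1.3, 1.4, 1.4, 1.5, 1.5, 1.6, 1.7, 3.8, 4.0]
pi, mu0, s0, mu1, s1, post = em_two_gaussians(scores)
```

With well-separated components the posteriors behave sensibly; but when the two fitted distributions overlap heavily (few genuinely high scores), the same machinery can place low-quality matches on the "correct" side with high probability, which is exactly why the developers insist on viewing the learned distributions as a diagnostic.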

Paul Sommer, PhD
Brooklyn, NY

No competing interests declared.