The Neurocritic has been feeling reproved this past week, so it's time to post an oldie but goodie about the arbitrary nature of peer review, published in Brain (full-text available free).
Might as well take up gambling.
Rothwell PM, Martyn CN. Reproducibility of peer review in clinical neuroscience. Is agreement between reviewers any greater than would be expected by chance alone? Brain. 2000;123:1964-9.

Or one could take up lobbying for open peer review. The new journal, Biology Direct, has already done so.
We aimed to determine the reproducibility of assessments made by independent reviewers of papers submitted for publication to clinical neuroscience journals and abstracts submitted for presentation at clinical neuroscience conferences. We studied two journals in which manuscripts were routinely assessed by two reviewers, and two conferences in which abstracts were routinely scored by multiple reviewers. Agreement between the reviewers as to whether manuscripts should be accepted, revised or rejected was not significantly greater than that expected by chance [kappa = 0.08, 95% confidence interval (CI) -0.04 to 0.20] for 179 consecutive papers submitted to Journal A, and was poor (kappa = 0.28, 0.12 to 0.40) for 116 papers submitted to Journal B. However, editors were very much more likely to publish papers when both reviewers recommended acceptance than when they disagreed or recommended rejection (Journal A, odds ratio = 73, 95% CI = 27 to 200; Journal B, 51, 17 to 155). There was little or no agreement between the reviewers as to the priority (low, medium, or high) for publication (Journal A, kappa = -0.12, 95% CI -0.30 to -0.11; Journal B, kappa = 0.27, 0.01 to 0.53). Abstracts submitted for presentation at the conferences were given a score of 1 (poor) to 6 (excellent) by multiple independent reviewers. For each conference, analysis of variance of the scores given to abstracts revealed that differences between individual abstracts accounted for only 10-20% of the total variance of the scores. Thus, although recommendations made by reviewers have considerable influence on the fate of both papers submitted to journals and abstracts submitted to conferences, agreement between reviewers in clinical neuroscience was little greater than would be expected by chance alone.
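For readers unfamiliar with the statistic the study leans on: Cohen's kappa corrects raw agreement for the agreement two raters would reach by chance alone, which is why a kappa near zero means "no better than coin-flipping" even when the raters agree on many papers. Here is a minimal sketch of the calculation; the reviewer recommendations below are hypothetical toy data, not the study's actual ratings.

```python
# Minimal sketch of Cohen's kappa: chance-corrected agreement
# between two raters. Toy data only; not the study's ratings.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items rated identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap if the raters were independent,
    # computed from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical accept/revise/reject recommendations for ten manuscripts.
a = ["accept", "revise", "reject", "revise", "accept",
     "reject", "revise", "accept", "reject", "revise"]
b = ["revise", "revise", "reject", "accept", "accept",
     "revise", "reject", "accept", "reject", "accept"]
print(round(cohens_kappa(a, b), 2))  # → 0.25
```

Note that these two raters agree on 5 of 10 manuscripts (50%), yet kappa is only about 0.25, because roughly a third of that agreement is expected by chance from the marginal frequencies alone. That gap between raw and chance-corrected agreement is exactly why the study's kappa of 0.08 is so damning.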
Koonin EV, Landweber LF, Lipman DJ. A community experiment with fully open and published peer review. Biol Direct. 2006 Jan 31;1(1):1.

The Neurocritic is happy to provide a new form of anonymous peer review, free of charge.
. . .
The general view of the current system of peer review of scientific work boils down, more or less, to the tired Churchill quote on democracy: it is the worst system imaginable except for all the others. Yet all publishing scientists are painfully aware of the growing problems of the peer-review system. The crucial feature of the present peer-review approach is that it is, predominantly, anonymous, and a reviewer, safe behind the veil of anonymity, has virtually absolute power over the helpless author. And absolute power, we know only too well, can corrupt absolutely. Hence the biased reviews, the inexplicably delayed perfunctory reviews, the pedantic reviews making a huge deal of minor quibbles, and other kinds of unfair and upsetting reviews that we all dread reading but, unfortunately, may even find ourselves writing. Of course, it is not some sinister villains who produce these obnoxious reviews; it is we peers, we distinguished members of the active scientific community. Indeed, it would not be much of an exaggeration to state that all of us, on one occasion or another, have been on the receiving end of peer-review abuses, and (almost) any frequent referee has probably been an abuser as well, whether accidentally, intentionally, or at least in the eye of the author.