10 January 2017

Academic publishing death match: Double blind review vs. preprints

Double-blind peer review (hereafter, DBPR) has quite a few supporters.  I imagine that people who suspect that their manuscripts have been unfairly treated (say, by a reviewer who is a rival or just doesn't like them personally) are likely to be among this group.  But I've seen other credible arguments that DBPR will level the playing field in science.  Some research suggests that the identity of the author, or even just the prestige of their institution, can affect the likelihood of a manuscript being accepted.  There are also issues about the fair treatment of women and other groups who have been traditionally disadvantaged within science.  If it's all about the quality of the research and not the reputation of the big-name authors, then the science ought to be judged independently of its origin; and since we're only human, eliminating whatever relationship the reviewers might have with the author looks like it has to be a good thing.

The concept of preprints --- that is, putting a draft of your article somewhere online to get feedback from the community before you submit it for publication in a journal --- also has quite a few supporters.  In the last couple of years we have seen the launch of several new preprint servers for the biological and social sciences, and the open access journal PeerJ has its own preprint section.  I was recently a co-author on a preprint for which it wasn't quite clear where the best journal to submit it would be; this problem went away (give or take the article processing charges, but one of my co-authors had some funding) when an open access journal contacted us and offered to publish it.  (Exactly what conflicts of interest this might create for the peer-review process is left as an exercise for the reader; in view of my general skepticism about OA journals maybe I am being a little hypocritical here, but I will claim that I didn't want to let my co-authors, most of whom are enthusiastic proponents of OA, down here.)

The biggest advantage of preprints is that you can get your research out there quickly.  The GRIM article that I published with James Heathers is a good example of this.  Within a month of us posting the preprint, it had close to a thousand downloads and had been featured in The Economist.  Even with a quick review turnaround at Social Psychological and Personality Science (SPPS) --- which might have been expedited by the action editor or reviewers having been exposed to the preprint --- it took five months for this article to be published online.

However, there seems to be a problem when you mix these two good ideas.  The whole point of a preprint is to get people talking about your new ideas, and give you feedback --- presumably in a less formal way than the leaden tone of a decision letter, and the subsequent obsequiousness of your reply ("We thank Reviewer 2 immensely for his extremely helpful comments on section 2.3, although we suspect that they might have been even more extremely helpful if he had read section 2.4 where we anticipated and addressed, with individually-numbered bullet points, every one of these extremely helpful comments").  This is generally going to involve you generating some publicity for your preprint.  Now of course, you could create an egg account on Twitter, and a sock puppet on Facebook ("Danielle Kahnewoman", for example) and a Gmail address for correspondence, and spam the world with links to your anonymised preprint.  But in practice, everyone is going to know who wrote it.  And that means that when the manuscript gets to the reviewers at the journal that offers (or, in some cases, mandates) DBPR, those reviewers won't even have to resort to the standard techniques that they might use to identify the authors (e.g., seeing which author is the most cited in the References section); there is a high chance that they will already have read the preprint.  Even if they haven't, they will just need to put the first sentence of the manuscript inside quotes into Google and they will find the preprint in seconds.

I discovered today that Personality and Social Psychology Bulletin (PSPB) --- a stablemate of SPPS where we published the GRIM article --- is introducing a policy of mandatory DBPR from March 2017.  That's a decision for the Editorial Board, but it makes me wonder what their policy is on preprints.  (Wikipedia has a list of journals and publishers whose preprint policy is known --- generally, it seems, preprints are fairly well accepted --- but the word "psychology" doesn't appear anywhere on that page.)  It seems to me that by mandating DBPR, a journal is essentially committing itself to refusing to consider manuscripts that have previously been posted as preprints, because anonymity is essentially impossible --- or rather, it's untenable to pretend that anonymity is possible --- under such circumstances.

A related problem with mandatory DBPR, if the journal wants to actually attempt to enforce it (in my experience, many problems in any form of professional life start when someone creates a rule and then tries to enforce it consistently, despite the messiness of the world), is that in addition to assuming that the manuscript is not findable through Google, it also assumes, more fundamentally, that the manuscript has not previously been seen by the reviewers in an unblinded state.  That seems like a rather untenable assumption, especially in specialised fields.  PSPB is a well-respected journal by any measure, but like any journal ("Cell wouldn't take it? Let's try Nature!") it may not always be the first port of call for the authors who submit there.  Should the reviewer who has already seen the manuscript unblinded on behalf of another journal recuse herself because she knows who the author is, thus depriving the editor of an expert opinion (which, as a bonus, could presumably be provided very quickly)?

For what it's worth, I don't have a solution to this.  I like preprints, but I also like the idea of DBPR (although here are some short counterarguments, and here is some pro-and-con discussion).  I suspect that mandatory DBPR may be incompatible with the realities of the scientific world (even without preprints), because reviewers are human; as mentioned elsewhere in this post, they may have strong suspicions or even outright knowledge of the authors' identities, and it could place them in a morally ambiguous situation to impose a requirement that they declare such suspicions or knowledge.  But I'm loath to criticise this decision by PSPB --- which is by no means the only journal to impose DBPR --- because it was presumably taken for good reasons and after considerable thought.  Short of introducing peer review by AI robots (insert your own joke here about the last terrible review you received), it looks like we're going to be stuck with at least some of the problems associated with scientists being human for a while yet.

[ Update 2017-01-10 15:37 UTC: Thanks to Stepan Bahnik for pointing out that the new, mandatory DBPR policy at PSPB also applies to SPPS and their other stablemate, Personality and Social Psychology Review.  I would be very interested to hear from any members of the Editorial Board of any of those journals about how they see the relationship between that decision and their policy on preprints. ]


  1. Great post. I wonder though, do preprints present any more of a threat to DBPR than the existence of conference presentations and abstracts? I suppose the main difference is that a "hot" preprint might reach more people than the attendees at any given conference, and would contain more methodological detail.

  2. In my (erstwhile) field of electrophysiology, a referee would certainly have been influenced by the authors' names if they got a paper written by, say, Andrew Huxley or Bernard Katz. There's a perfectly good reason for that. They wrote consistently great papers. Furthermore, anyone who knew the field would recognise the papers even if the names were blanked out. Double-blind wouldn't work in a relatively small field like that.

    I think that peer review works well only for highly specialist journals. But it doesn't guarantee quality at all in the vast majority of journals (including 'glamour' journals). Anything, however bad, can be published in a "peer-reviewed" journal that's indexed in PubMed. The system is, for the most part, utterly broken. To a large extent that is because there aren't anywhere near enough competent referees to review the vast numbers of papers that are now being produced.

    The only solution that I can see is to put your paper on the web and open the comments. This would also save universities a huge amount of money because Elsevier and NPG would go out of business.

  3. nice post!
    FWIW: the hardcore open-science advocates will always call for open peer review as well, which means that the review process would be fully transparent (so instead of double-blind it would be zero-blind). in that perfect, transparent, open science world where all editorial decisions are only based on scientific soundness and something like (selective) journal policies is non-existent the bias that DBPR is supposed to work against will be much more subtle compared to the situation in our de-facto publishing world.


  4. I suggested how we can make preprints compatible with double blind review here: https://medium.com/@OmnesRes/walking-the-plank-preprinting-and-double-blind-review-3f72f4825b74#.d5wvgh114