27 January 2021

Why I blog about apparent problems in science

In this post I want to discuss why I blog directly about what I see as errors or other problems in scientific articles. I had the idea to write this some time ago, and indeed some of the sentences below have been sitting in my drafts folder for quite a while, but the discussions on Twitter about my most recent post have prodded me to finally write this up. (However, I don't go into that post or those Twitter discussions further here.)

I have seen criticism of the "blog first" approach because it "drops stuff on the authors out of the blue" or "doesn't give them a right to defend themselves". People have suggested that it would be better to approach the authors first and discuss the problems. That seems obvious, and it was how I used to approach things too, but over time I have changed my mind, for a couple of reasons.

First, I believe that, in principle, science should be conducted with radical transparency. Subject only to the need to protect participants, all review should take place in public, with all code and data fully open. Currently only a few journals (e.g., Meta-Psychology) offer this, but open review and commenting, at least, is part of the deal with most preprint servers. The whole reason for posting a preprint is to allow direct feedback on it, which anyone can take part in. In contrast, in order to comment on published articles in journals, with a few exceptions such as PLOS One (which allows informal comments to be posted on articles as well as formal comments that go through peer review), the choices are blogging/tweeting, PubPeer, or trying to fit a letter to the editor into some bizarre word count limit. (Some journals refuse to entertain any discussion of their articles unless it arrives via the manuscript submission system.)

So if I publish an unannounced blog post describing what I see as issues in an article or a body of work, what I'm doing, in effect, is bringing the rules of Preprint World™ to bear. That might seem unfair, given that the authors have already "run the gauntlet" of peer review. (In some cases they may even have actually published a preprint first, but (spoiler alert) unless your preprint is about a politically hot topic, it isn't going to get much feedback, because we're all too busy with our own stuff.) But peer review is utterly broken, especially in its present mostly-secret form, which allows bad stuff to get published (through various forms of cronyism, as well as the limitations of editors and reviewers) and keeps critical voices out. A few journals now publish reviews alongside accepted articles, which is a first step along the road, but this strikes me as hugely insufficient because we don't get to see what happened to the manuscripts that were rejected.

(I should also acknowledge here that I am in a very unusual and privileged position. I am retired and don't have to keep anyone happy; nor, unlike if I were an emeritus professor, do I have a large list of buddies going back to my time in grad school to whom I feel some kind of obligation of loyalty.) 

The second reason is much more personal. If I write to an author and say "Umm, I think I've found these problems in your article", it feels to me as if I'm entering into a process of negotiation with them. I worry that maybe it feels to them like there is something they can say or do that will persuade me not to share what I've found. Maybe they feel blackmailed. Maybe they will try and negotiate: say, to address three problems if I will "let them off" the fourth ("We didn't feel we had a choice to collect the data any other way, the postdoc had left and the grant money had run out").

I really hate that feeling.

Some people seem to have no problem with that kind of implicit, low-level conflict, but it really doesn't sit well with me. Perhaps this is irrational, but it's how my personal sense of embarrassment works, and I don't think that's about to change. (I'm not usually a fan of psychoanalytic approaches, but for what it's worth, I'm pretty sure that my relationship with my late mother is indeed involved here.)

Of course, this approach has disadvantages. Sometimes it can lead to pointing out "problems" that aren't actual problems, because I didn't understand something. You also risk sounding like a crank, which James Heathers and I wrote about here, and how to try not to be one. You can mitigate this, for example by checking your analyses with multiple other people, but in practice even your closest colleagues don't always have time to go through the boring detailed stuff (and some of it is really, really boring). On occasion it gets you a nastygram from the authors, who in one case complained to my dean that I was violating their human rights [sic] by citing an e-mail of theirs verbatim. When I replaced the verbatim text with a paraphrased version, they then complained that I had misrepresented what they had written. (Another small benefit of not having corresponded with the authors is that you avoid the question of how to cite them.)

I think this is a consequence of the unique nature of science as a human activity. In an ideal world, science would not be conducted by social beings at all. Mr Spock doesn't mind if you call out apparent errors in his regression tables (although presumably he doesn't make many errors). But we don't live in that world. We all have reputations to consider, and we all like to be evaluated positively. (Aside: I thoroughly recommend Judged: The Value of Being Misunderstood by Ziyad Marar, although I wish the author hadn't used quite so much recently-discredited social psychology to support his arguments.) So any kind of scientific criticism is likely to be orthogonal to our usual ways of rubbing along in polite society.

In fact, to me it feels even more rude/disloyal/distasteful to blog about an issue if I have been discussing it cordially (up to a point) with the authors. If you're having a "Dear Jane", "Hi Nick", "Best regards" kind of exchange of e-mails, and at some point you realise that something is badly wrong, what do you do? In some legal systems a lawyer can request the judge to allow them to treat a witness as "hostile" based on their responses during a trial, but lawyers are trained (at least, I hope it doesn't come naturally) to disconnect their embarrassment, they get to walk away at the end of the case, and all of this is taking place in front of others who can see why they are asking.

This problem is even more awkward when you realise that in some cases you may be helping the author to construct an alibi ("Gosh, do 80% of our reaction times really end in 7? Ah yes, I remember now, we made some fake data to test the code, ha ha, yes, we must have used that by mistake, but hey, I've looked and just found the real data here, I'll write up the correction this afternoon, thanks for your keen observational skills"). Indeed, just this week we have heard this story from Joe Hilgard of how, whenever he pointed out a specific problem in the implausibly prolific output of one particular researcher, the next (equally implausible) article from the same person didn't have that particular problem in it. If this happened in a court of law there would be someone in overall charge who could take into account that the story keeps changing, but in science it seems you can get a lot of do-overs. In one case with which I was involved, when it was pointed out to the authors—two of whom were PIs with multiple R01 grants between them—that a coding error in their dataset meant that all of their regressions were uninterpretable (this was in the supplement of our critique, as it was only about the fourth worst thing about the original article), they merely uploaded a corrected version of the data without issuing a correction or indeed telling anyone at all. This meant that anyone who tried to reproduce the problem that I had discovered would now be unable to, but it also meant that re-running their code substantially changed all of their results. I wrote to the journal and the federal office of research integrity, but nothing happened.

Now, I'm aware that a risk of this approach is that it could end up turning me into some kind of solipsistic critic of science, the kind of person who has "Independent researcher" in their bio(*) and has been writing about the same one or two issues, and little else, for the last 10 years. The haughty drive-by posts that dominate sites devoted to "skepticism" in those fields of science that attract a lot of, er, enthusiastic amateur investigators also often seem to be based on an attitude of "I don't care what the authors have to say, here's why they're wrong" (although the one time I came close to having my own work featured on such a site, the potential author first sent me what appeared to be a rather crude form of blackmail note; I ignored it and as far as I know he never wrote up whatever perceived hypocrisy on my part he was threatening to expose to the world). And indeed it is not hard to see parallels between a hardcore insistence on "science should be about objective truth" and the more juvenile kinds of libertarianism. Science should indeed be dispassionate, but it's still possible to be a dick about it. I don't want to be one of those perpetually disagreeable relatives that most of us have who are proud of proclaiming that "I say what I think, and I'm entitled to my opinion".

So there are limits to how one can go about this process in a reasonable way. I try to keep my language restrained; as James Heathers is fond of repeating, we can usually only talk in terms of error, because determining intent is hard and ultimately requires knowledge of someone's mental states, a method for which has so far—perhaps ironically—eluded psychologists. (The justice system sometimes has to do it, but even then it can result in strange effects; Ziyad Marar's book, mentioned above, has a nice section on this.) Before I go public with something I generally get several other people to take a look at it and see what they think, and if anyone has strong doubts I will often leave a post at the draft stage out of caution.

However, a complex study or body of work can produce a lot of what might look like smoke without the need for any kind of major fire, so this is always going to be an imperfect process. I'm happy to correct my posts in as transparent a way as possible when they are shown to have been based on faulty assumptions or other errors, although I acknowledge that such a correction is always inferior to not getting things wrong in the first place. But I think it's important for the issues themselves to be discussed in public; I hope that it keeps everyone honest, me first of all.



(*) Shout-out to a Twitter pal whose bio used to contain the words "Independent researcher (yes, I know...)"

4 comments:

  1. As a reasonably regular critic, I've also spent some time considering optimal (for me) strategy.

    One important issue to get out of the way: if you may wish to remain anonymous, it is difficult to contact the authors.

    Assuming one is prepared to go public, I do generally find it helpful to contact the authors first. This takes some time, but does allow refinement and possibly correction of arguments, which improves final presentation and impact. However, in my experience so far, such contact has never yet led to an admission of error, let alone actual corrective action, so hoping that the authors will do the right thing is unrealistic.

  2. Hi Dr. Nick
    (yes I know) ;-) Keep it up and continue your way of putting the finger in the wound. You and @jamesheathers and @schneiderleonid (see, no Oxford comma) are doing great work to expose fraudsters and snake oil salesmen.
    Thanks

  3. Hi Nick,
    Thanks for the thoughtful post!

    Regardless of whether you should contact authors in advance of writing a critical blog post, I think it would be good practice to send authors a short email at the time that you publish your post. Otherwise, it is quite possible that the authors will not be aware of the criticism, and it would be you who is removing the chance for a fruitful reaction, rather than the authors.

    I go into my reasoning more on Andrew Gelman's blog here, under the name "fogpine".

    https://statmodeling.stat.columbia.edu/2021/06/02/why-i-blog-about-apparent-problems-in-science/

    Best of luck with your criticisms! Oh, and apologies if you already email the authors at the time of posting -- I couldn't be sure from your writing and assumed not.

    1. I think this is a good idea. Generally, though, the authors find out via Twitter soon enough. :-)
