Another story of apparent scientific fraud has hit the headlines. I'm sure that most people who are reading this post will have seen that story and formed their own opinions on it. It certainly doesn't look good. And the airbrushing of history has already begun, as you can see by comparing the current state of this page on the website of the Midwest Political Science Association with how it looked back in March 2015 (search for "Fett" and look at the next couple of paragraphs). Meanwhile, Michael LaCour hastily replaced his CV (which was dated 2015-02-09) with an older version (dated 2014-09-01) that omitted his impressive-looking list of funding sources (see here for the main difference between the two versions); at this writing (2015-05-22 10:37 UTC), his CV seems to be missing entirely from his site.
This rapidly (aka hastily) written post is in response to some tweets calling for fraudsters to be banned from academia for life. I have a few problems with that.
First, I'm not quite sure what banning someone would mean. Are they to have "Do Not Hire In Any Academic Context" tattooed on their forehead? In six languages? Or should we have a central "Do Not Hire" repository, with DNA samples to prevent false identities (and fingerprints to prevent people impersonating their identical twin)?
Second, most fraudsters don't confess, nor are they subjected to any formal legal process (Diederik Stapel is a notable exception, having both confessed in a book [PDF] and been given a community service penalty, as well as what amounts to a 6-figure fine, by a court in the Netherlands). As far as I can tell, these people tend to deny any involvement, get fired, disappear for a while, and then maybe turn up a few years later teaching mathematics at a private high school or something, once the publicity has died down and they've massaged their CVs sufficiently. Should that be forbidden too? How far do we let our dislike of people who have let us down extend to depriving them of any chance of earning a living in future?
After all, we rehabilitate people who kill other people; indeed, in some cases, we rehabilitate them as academics. And as the case of Frank Abagnale shows, sometimes a fraudster can be very good at detecting fraud in others. Perhaps we should give the few fraudsters who confess a shot at redemption. Sure, we should treat their subsequent discoveries with skepticism, and we probably shouldn't allow them to collect data unsupervised, but by simply casting them out, we miss an opportunity to learn, both about what drove (and enabled) them to do what they did, and about how to prevent or mitigate future cases. We study all kinds of unpleasant things, so why impose this blind spot on ourselves?
Let's face it, nobody likes being the victim of wrongdoing. A couple of years ago, I came downstairs to find that my bicycle had been stolen from my yard overnight --- on the one occasion I hadn't locked it, because it was raining so hard when I arrived home that I didn't want to stay outside a second longer. In that moment, I was all in favour of the death penalty, or at the very least lifelong imprisonment with no possibility of parole, for bicycle thieves. My inner reactionary had come out; I had become the conservative who apparently emerges whenever a liberal gets mugged. Yet we know from research (which we have to presume wasn't faked --- ha ha, just kidding!) that more severe punishments don't deter crime, and that what really makes a difference [PDF] is the perceived chance of being caught (and/or sentenced). And here, academia does a really, really terrible job.
First, our publishing system is, to a first approximation, completely broken. It systematically rewards style over substance (and Open Access publishing, in and of itself, will not fix this). As outside observers of any given article, we are fundamentally unable to distinguish between reviewers who insist on more rigour because the work genuinely needs it, and those who have missed the point completely; anyone who has had an article rejected from a journal that has also recently published some piece of "obvious" garbage will know this feeling (especially if the rejected article was critical of that same garbage, and appears to have been held to a totally different set of standards [PDF]).
Second, we --- society, the media, the general public, but also scientists among ourselves (I include myself in the set of "scientists" here mostly for syntactic convenience) --- lionize "brilliant" scientists when they discover something, even though that something --- if it's a true scientific discovery --- was surely just sitting there waiting to be discovered. (Maybe this confusion between scientists and inventors will get sorted out one day; I think it's a fundamental problem. Perhaps we would be better off if Einstein hadn't been so photogenic.) And that's assuming that what the scientist has discovered is even, as the saying goes, "a thing", a truth; let's face it, in the social sciences there are very few truths, only some trends, and very little from which one can make valid predictions about people with any worthwhile degree of reliability. (An otherwise totally irrelevant aside to illustrate this gap: one of the most insanely cool things I know of from "hard" science is that GPS uses both special and general relativity to make corrections to its timing, and those corrections go in opposite directions; see the back-of-the-envelope numbers below.) We elevate the people who make these "amazing discoveries" to superstar status. They get to fly business class to conferences and charge substantial fees to deliver keynote speeches presenting their probably unreplicable findings. They go on national TV and tell us how their massive effect sizes mean that we can change the world for $29.99.
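For the terminally curious, here is a back-of-the-envelope version of that GPS aside, using round textbook values (orbital speed v ≈ 3.9 km/s, orbital radius r ≈ 26,600 km, Earth's radius R_E and gravitational parameter GM_E); the exact figures vary slightly from source to source:

\[
\frac{\Delta t_{\text{SR}}}{t} \approx -\frac{v^{2}}{2c^{2}} \approx -8.3\times10^{-11} \qquad \text{(satellite clock runs slow, about } 7\ \mu\text{s/day)},
\]
\[
\frac{\Delta t_{\text{GR}}}{t} \approx +\frac{GM_{E}}{c^{2}}\left(\frac{1}{R_{E}}-\frac{1}{r}\right) \approx +5.3\times10^{-10} \qquad \text{(satellite clock runs fast, about } 46\ \mu\text{s/day)}.
\]

Net effect: the satellite clocks gain roughly 38 microseconds per day relative to clocks on the ground, which, left uncorrected, would accumulate into positioning errors on the order of 10 km per day.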
Thus, we have a system that is almost perfectly set up to reward people who tell the world what it wants to hear. Given those circumstances, perhaps the surprising thing is that we don't find out about more fraud. We can't tell with any objectivity how much cheating goes on, but judging by what people are prepared to report about their own and (especially) their colleagues' behaviour, what gets discovered is probably only the tip of a very large and dense iceberg. It turns out that there are an awful lot of very hungry dogs eating a lot of homework.
I'm not going to claim that I have a solution, because I haven't done any research on this (another amusing point about reactions to the LaCour case is how little they have been based on data and how much they have depended on visceral reactions; much of this post also falls into that category, of course). But I have two ideas. First, we should work towards 100% publication of datasets, along with the article, first time, every time. No excuses, and no need to ask the original authors for permission, either to look at the data or to do anything else with them; if you originated the data, you'll get an acknowledgement in my subsequent article, and that's all. Second, reviewers and editors should exercise extreme caution when presented with large effect sizes for social or personal phenomena that have not already been predicted by Shakespeare or Plato. As far as most social science research is concerned, those guys already have the important things pretty well covered.
(Updated 2015-05-22 to incorporate the details of LaCour's CV updates.)