## 01 July 2018

### This researcher compared two identical numbers. The effect size he obtained will shock you!

Here's an extract from an article that makes some remarkable claims about the health benefits of drinking green tea. The article itself seems to merit scrutiny for a number of reasons, but here I just want to look at a point that illustrates why (as previously noted by James Heathers in his inimitable style) rounding to one decimal place (or significant figure) when reporting your statistics is not a good idea.

The above image is taken from Table 3 of the article, on page 596. This table shows the baseline and post-treatment values of a large number of variables that were measured in the study.  The highlighted row shows the participants' waist–hip ratio in each of two groups and at each of two time points. As you can see, all of the (rounded) means are equal, as are all of the (rounded) SDs.

Does this mean that there was absolutely no difference between the participants? Not quite. You can see that the p value is different for the two conditions. This p value corresponds to the paired t test that will have been performed for the 39 participants in the treatment group across the period of the study, or for the 38 participants in the control group. The p values (corresponding to the respective t statistics) could differ even if the means and SDs were identical to many decimal places, because the paired t test is based on the 39 (or 38) within-participant differences between baseline and the end of the study, and the mean and SD of those differences are not determined by the group-level means and SDs.

However, what I'm interested in here is the difference in mean waist–hip ratios between the groups at baseline (i.e., the first and fourth columns of numbers). The participants have been randomized to conditions, so presumably the authors decided not to worry about baseline differences [PDF], but it's interesting to see what those differences could have been (not least because these same numbers could also have been, say, the results obtained by the two groups on a psychological test after they had been assigned randomly to conditions without a baseline measurement).

We can calculate the possible range of differences(*) by noting that the rounded mean of 0.9 could have corresponded to an actual value anywhere between 0.85001 and 0.94999 (let's leave the question of how to round values of exactly 0.85 or 0.95 for now; it's complicated). Meanwhile, each of the rounded SDs of 0.1 could have been as low as 0.05001. (The lower the SD, the higher the effect.)  Let's put those numbers into this online effect size calculator (M1=0.94999, M2=0.85001, SD1=SD2=0.05001) and click "Compute" (**).
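If you'd rather not trust a web calculator, the same arithmetic takes a few lines of Python. This is just a sketch of the standard Cohen's d formula; since both SDs are taken to be the same value, the pooled SD is simply that common value:

```python
import math

# Extreme values that still round to the reported 0.9 and 0.1
m1, m2 = 0.94999, 0.85001
sd = 0.05001
n1, n2 = 39, 38

d = (m1 - m2) / sd                  # Cohen's d with a common SD
t = d / math.sqrt(1 / n1 + 1 / n2)  # corresponding t statistic, df = n1 + n2 - 2
print(round(d, 2), round(t, 2))     # d is a whisker under 2; t is about 8.77
```

It's that t statistic of about 8.8, on 75 degrees of freedom, that produces the absurdly small p value.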

Yes, you are reading that right: An effect size of d = (almost) 2 is possible for the baseline difference between the groups even though the reported means are identical. (For what it's worth, the p value here, with 75 degrees of freedom, is .0000000000004). Again, James has you covered if you want to know what an effect size of 2 means in the real world.

Now, you might think that this is a bit pathological, and you're probably right. So play around with the means and SDs until they look reasonable to you. For example, if you keep the extreme means but use the rounded SDs as if they were exactly correct, you get d = 0.9998. That's still a whopping effect size for the difference between numbers that are reported as being equal. And even if you bring the means in from the edge of the cliff, the effect size can still be pretty large. Means of 0.93 and 0.87 with SDs of 0.1 will give you d = 0.6 and p = .01, which is good enough for publication in most journals.

Conclusion: Always report, not just two decimal places, but also at least two significant figures (it's very frustrating to see standardized regression coefficients, in particular, reported as 0.02 with a standard error of 0.01). In fact, since most people read papers on their screens and black pixels use less energy to display than white ones, save the planet and your battery lifetime and report three or four decimals. After all, you aren't afraid of GRIM, are you?
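For readers who haven't met GRIM: the test simply asks whether a reported mean is arithmetically possible, given that it must be the sum of N integer responses divided by N. A minimal sketch of the idea (my own illustrative function, not the canonical implementation):

```python
import math

def grim_consistent(mean, n, dp=2):
    # The mean of n integer scores is (integer sum) / n, so check the two
    # integer sums closest to mean * n and see whether either rounds back
    # to the reported mean.
    for total in (math.floor(mean * n), math.ceil(mean * n)):
        if round(total / n, dp) == round(mean, dp):
            return True
    return False

print(grim_consistent(2.35, 20))  # True: a sum of 47 over 20 people works
print(grim_consistent(3.48, 10))  # False: no integer sum over 10 people gives 3.48
```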

(*) I did this calculation by hand. My f_range() function, described here, doesn't work in this case because the underlying code (from a module that I didn't write, and have no intention of fixing) chokes when trying to calculate the midpoint test statistic when the means and SDs are identical.

(**) This calculator seems to be making the simplifying assumption that the group sizes are identical, which is close enough as to make no difference in this case. You can also do the calculation of d by hand: just divide the difference between the means by the standard deviation, assuming you're using the same SD for both means, or see here.

## 31 May 2018

### How SPRITE works: a step-by-step introduction

Our preprint about SPRITE went live a few hours ago. I encourage you to read it, but not everyone will have the time, so here is a simple (I hope) explanation of what we're trying to do.

Before we start, I suggest that you open this Google spreadsheet and either make a copy or download an Excel version (both of these options are in the File menu) so you can follow along.

Imagine that you have read in an article that N=20 people responded to a 1–5 Likert-type item with a mean of 2.35 and an SD of 1.39. Here's how you could test whether that's possible:

1. Make a column of 20 random numbers in the range 1–5 and have your spreadsheet software display their mean and SD. Now we'll try and get the mean and SD to match the target values.

2. If the mean is less than the target mean (2.35), add 1 to one of the numbers that isn't a 5 (the maximum on the scale). If the mean is greater than the target mean, subtract 1 from one of the numbers that isn't a 1. Repeat this step until the mean matches the target mean.

3. If the SD doesn't match the target SD, select a pair of numbers from the list. Call the smaller number A and the larger one B (if they are identical, either can be A or B). If the SD is currently smaller than the target SD, subtract 1 from A and add 1 to B. If the SD is currently larger than the target SD, add 1 to A and subtract 1 from B. Repeat this step until the SD matches the target SD. (Not all pairs of numbers are good choices, as you will see if you play around a bit with the spreadsheet, but we can ignore that for the moment.)
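The three steps above can be sketched in a few dozen lines of code. This is not the real rSPRITE implementation, just an illustrative Python version of the procedure (the function name, iteration limits, and restart strategy are my own inventions):

```python
import random
import statistics

def sprite_search(t_mean, t_sd, n, lo, hi, dp=2, restarts=200, steps=2000):
    """Look for n integers in [lo, hi] whose rounded mean and sample SD
    match the reported values, following the three steps described above."""
    for _ in range(restarts):
        # Step 1: start from a random column of scale points.
        vals = [random.randint(lo, hi) for _ in range(n)]
        # Step 2: nudge single values by 1 until the rounded mean matches.
        for _ in range(10 * n * (hi - lo)):
            if round(statistics.mean(vals), dp) == round(t_mean, dp):
                break
            if statistics.mean(vals) < t_mean:
                vals[random.choice([i for i, v in enumerate(vals) if v < hi])] += 1
            else:
                vals[random.choice([i for i, v in enumerate(vals) if v > lo])] -= 1
        else:
            continue  # mean never matched; try a fresh start
        # Step 3: mean-preserving pair moves until the rounded SD matches.
        for _ in range(steps):
            if round(statistics.stdev(vals), dp) == round(t_sd, dp):
                return sorted(vals)
            i, j = random.sample(range(n), 2)
            a, b = (i, j) if vals[i] <= vals[j] else (j, i)
            if statistics.stdev(vals) < t_sd and vals[a] > lo and vals[b] < hi:
                vals[a] -= 1  # push the pair apart: SD goes up
                vals[b] += 1
            elif statistics.stdev(vals) > t_sd and vals[b] - vals[a] >= 2:
                vals[a] += 1  # pull the pair together: SD goes down
                vals[b] -= 1
    return None  # no solution found (or we were unlucky with the random walk)
```

Because the pair moves in step 3 leave the sum unchanged, the mean stays matched while the SD drifts towards the target; the random restarts cover starting lists that can never quite reach the target SD.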

Let's go through this in the spreadsheet; I hope you'll see that it's quite simple.

Here's the spreadsheet. Cells B2 and B3 contain the target mean and SD. Cells D2 and D3 contain the current mean and SD of the test data, which are the 20 numbers in cells D5 through D24. Cells C2 and C3 contain the difference between the current and target mean and SD, respectively. When that difference drops to 0.005 or less (which means that the numbers are equal, within the limits of rounding), these two cells will change colour. (For some reason, they turn green in Google Sheets but blue in my copy of Excel.)

In this spreadsheet, most of the work has already been done. The mean is 2.30 and the target is 2.35, so if you increase one value by 1 (say, D11, from 1 to 2), the mean will go to 2.35 and cell C2 will change colour. That's step 2 completed.

For the SD, observe that after you fixed the mean by changing D11, the SD became 1.31, which is smaller than the target. So you want to increase the SD, which means pushing two values further apart. For example, change D12 from 2 to 1 and D13 from 2 to 3. The mean will be unchanged, but now the SD is 1.35; changing two 2s to a 1 and a 3 increased the SD by 0.04, which is the amount by which the SD is still short of the target. So let's do the same operation again. Change D14 from 2 to 1 and D15 from 2 to 3. You should now have an SD of 1.39, equal to the target value, and cell C3 should have changed colour. Step 3 is now completed.

Congratulations, you just found a SPRITE solution! That is, the list of values (after sorting)
1,1,1,1,1,1,1,1,2,2,2,3,3,3,3,3,4,4,5,5
has a mean of 2.35 and an SD of 1.39, and could have been the combination that gave the result that you were trying to reproduce from the article.

Not every swap of two values gives the same result, however. Let's back up a little by changing D15 from 3 back to 2 and D14 from 1 back to 2 (so the SD should now be back to 1.35). Now change cell D20 from 3 to 2 and cell D21 from 4 to 5. The mean is still OK, but the SD has overshot the target value of 1.39 and is now 1.42. So this means that
1,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,4,5,5,5
is not a valid solution.
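You can check both of these candidate lists in a couple of lines of Python (statistics.stdev computes the sample SD, which is what spreadsheet STDEV functions also use):

```python
import statistics

good = [1,1,1,1,1,1,1,1,2,2,2,3,3,3,3,3,4,4,5,5]
bad  = [1,1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,4,5,5,5]

for vals in (good, bad):
    print(round(statistics.mean(vals), 2), round(statistics.stdev(vals), 2))
# good: 2.35 1.39 (a valid solution)
# bad:  2.35 1.42 (the SD overshoots)
```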

There are eight unique solutions (I checked with CORVIDS); rSPRITE will usually find all eight, although it doesn't always get 100% of possible solutions in more complex cases. If playing with numbers like this is your idea of fun, you could try and find more solutions by hand. Here's the spoiler picture, with the solution we found earlier right in the middle:

Basically, that's all there is to it. SPRITE is just software that does this adding and swapping, with a few extra subtleties, very fast. It's the computer version of some checks that James Heathers and I first started doing in late 2015 when we were looking at some dodgy-looking articles. But we certainly aren't the first people who have had this idea to see if means/SD combinations are possible; it really isn't rocket science.

## 02 May 2018

### A footnote on self-citation and duplicate publication

Since discussion of the topics of self-citation and duplicate publication seems to be "hot" at the moment, and I have probably had something to do with at least the second of those, I feel that I ought to mention my own record in this area, in the interest of full transparency.

I've never really thought about how bad "excessive" self-citation is as a misdemeanour on the academic "just not done" scale, nor indeed what "excessive" might mean, but I think there are a few rather severe problems with self-plagiarism (aka duplicate publication):

1. Copyright (whether we like it or not, most of the time, we sign over copyright to all of the text, give or take "fair use", to the publisher of an article or chapter);
2. Possible deception of the editors of books or journals to which the duplicates were submitted;
3. Possible deception of the readers.
James Heathers has more thoughts on this here.

First, self-citation: According to Google Scholar, my published work (most, but not all, of it peer-reviewed) has 376 citations as of today. I have gone through all of the manuscripts with which I have been involved and counted 14 self-citations, plus two citations of chapters by other people in a book that I co-edited (which count towards my Google Scholar h-index). For what it's worth, I am not suggesting that anyone should feel the need to calculate and disclose their self-citation rate, as it would be a very tedious exercise for people with a lot of publications.

Second, duplicate publication: I am the third author (of four; I estimate that I contributed about 10% of the words) of this article in the Journal of Social and Political Psychology (JSPP), which I also blogged about here. In order to bring the ideas in that article to the attention of a wider public, we reworked it into a piece in Skeptical Inquirer, which included the following disclosure:

The JSPP article was published under a CC-BY 3.0 license, which means that there were no issues with copyright when re-using some parts of its text verbatim:

Both articles are mentioned in my CV [PDF], with the Skeptical Inquirer piece being filed under "Other publications", including a note that it was derived from the earlier peer-reviewed article in JSPP.

That's all I have to disclose on the questions of self-citation and duplicate publication. If you find something else, please feel free to call me out on it.

## 25 April 2018

### Some instances of apparent duplicate publication by Dr. Robert J. Sternberg

Dr. Robert J. Sternberg is a past president of the American Psychological Association, currently at Cornell University, with a CV that is over 100 pages long [PDF] and, according to Google Scholar, almost 150,000 citations.

Recently, some people have been complaining that too many of those are self-citations, leading to a formal petition to the APS Publication Committee. But sometimes, it seems, Dr. Sternberg prefers to make productive use of his previous work in a more direct manner. I was recently contacted by Brendan O'Connor, a graduate student at the University of Leicester, who had noticed that some of the text in Dr. Sternberg's many articles and chapters appeared to be almost identical. It seems that he may be on to something.

#### Exhibit 1

Brendan—who clearly has a promising career ahead of him as a data thug, should he choose that line of work—noticed that this 2010 article by Dr. Sternberg was basically a mashup of this article of his from the same year and this book chapter of his from 2002. One of the very few meaningful differences in the chunks that were recycled between the two 2010 articles is that the term "school psychology" is used in the mashup article to replace "cognitive education" from the other; this may perhaps not be unrelated to the fact that the former was published in School Psychology International (SPI) and the latter in the Journal of Cognitive Education and Psychology (JCEP). If you want to see just how much of the SPI article was recycled from the other two sources, have a look at this. Yellow highlighted text is copied verbatim from the 2002 chapter, green from the JCEP article. You can see that about 95% of the text is in one or the other colour:
Curiously, despite Dr. Sternberg's considerable appetite for self-citation (there are 26 citations of his own chapters or articles, plus 1 of a chapter in a book that he edited, in the JCEP article; 25 plus 5 in the SPI article), neither of the 2010 articles cites the other, even as "in press" or "manuscript under review"; nor does either of them cite the 2002 book chapter. If previously published work is so good that you want to copy big chunks from it, why would you not also cite it?

#### Exhibit 2

Inspired by Brendan's discovery, I decided to see if I could find any more examples. I downloaded Dr. Sternberg's CV and selected a couple of articles at random, then spent a few minutes googling some sentences that looked like the kind of generic observations that an author in search of making "efficient" use of his time might want to re-use.  On about the third attempt, after less than ten minutes of looking, I found a pair of articles, from 2003 and 2004, by Dr. Sternberg and Dr. Elena Grigorenko, with considerable overlaps in their text. About 60% of the text in the later article (which is about the general school student population) has been recycled from the earlier one (which is about gifted children), as you can see here (2003 on the left, 2004 on the right). The little blue paragraph in the 2004 article has also come from another source; see exhibit 4.
Neither of these articles cites the other, even as "in press" or "manuscript in preparation".

#### Exhibit 3

I wondered whether some of the text that was shared between the above pair of articles might have been used in other publications as well. It didn't take long(*) to find Dr. Sternberg's contribution (chapter 6) to this 2012 book, in which the vast majority of the text (around 85%, I estimate) has been assembled almost entirely from previous publications: chapter 11 of this 1990 book by Dr. Sternberg (blue), this 1998 chapter by Dr. Janet Davidson and Dr. Sternberg (green), the above-mentioned 2003 article by Dr. Sternberg and Dr. Grigorenko (yellow), and chapter 10 of this 2010 book by Dr. Sternberg, Dr. Linda Jarvin, and Dr. Grigorenko (pink).

Once again, despite the fact that this chapter cites 59 of Dr. Sternberg's own publications and another 10 chapters by other people in books that he (co-)edited, none of those citations are to the four works that were the source of all the highlighted text in the above illustration.

Now, sometimes one finds book chapters that are based on previous work. In such cases, it is the usual practice to include a note to that effect. And indeed, two chapters (numbered 26 and 27) in that 2012 book, edited by Dr. Dawn Flanagan and Dr. Patti Harrison, contain an acknowledgement along the lines of "This chapter is adapted from <reference>. Copyright 20xx by <publisher>. Adapted by permission". But there is no such disclosure in chapter 6.

#### Exhibit 4

It appears that Dr. Sternberg has assembled a chapter almost entirely from previous work on more than one occasion. Here's a recent example of a chapter made principally from his earlier publications. About 80% of the words have been recycled from chapter 9 of this 2011 book by Dr. Sternberg, Dr. Jarvin, and Dr. Grigorenko (yellow), chapter 2 of this 2003 book by Dr. Sternberg (blue; this is also the source of the blue paragraph in Exhibit 2), chapter 1 of this 2002 book by Drs. Sternberg and Grigorenko (green), the 2012 chapter(**) mentioned in Exhibit 3 above (pink), and a wafer-thin slice from chapter 2 (contributed by Dr. Sternberg) of this 2008 book (purple).

This chapter cites 50 of Dr. Sternberg's own publications and another 7 chapters by others in books that he (co-)edited. This time, one of the citations was for one of the five books that were the basis of the highlighted text in the above illustration, namely the 2003 book Wisdom, Intelligence, and Creativity Synthesized that was the source of the blue text. However, none of the citations of that book indicate that any of the text taken from it is being correctly quoted, with quote marks (or appropriate indentation) and a page number. The four other books from which the highlighted text was taken were not cited. No disclosure that this chapter has been adapted from previously published material appears in the chapter, or anywhere else in the 2017 book (or, indeed, in the first edition of the book from 2005, where a similar chapter by Dr. Sternberg was published).

#### Why this might be a problem (other than for the obvious reasons)

There are a lot of reasons why this sort of thing is not great for science, and I suspect that there will be quite a lot of discussion about the meta-scientific, moral, and perhaps even legal aspects (I seem to recall that when I publish something, I generally have to sign my copyright over to someone, which means I can't go round distributing it as I like, and I certainly can't sign the copyright of the same text over to a second publisher). But I also want to make a point about how, even if the copying process itself does no apparent direct harm, this practice can damage the process of scientific inquiry.

During a number of the copy-and-paste operations that were apparently performed, a few words were sometimes changed. In some cases this was merely cosmetic (e.g., "participants" being changed to "students"), or a reflection of changing norms over time. But in other cases it seemed that the paragraphs being copied were merely being repurposed to describe a different construct that, while perhaps being in some ways analogous to the previous one, was not the same.  For example, the 2017 chapter that is the subject of Exhibit 4 above contains this sentence:

"In each case, important kinds of developing competencies for life were not adequately reflected by the kinds of competencies measured by the conventional ability tests" (p. 12).

But if we go to yet another chapter by Dr. Sternberg, this time from 2002, that contains mostly the same text (tracing all of the places in which a particular set of paragraphs have been recycled turns out to be computationally intensive for the human brain), we find:

"In each case, important kinds of developing expertise for life were not adequately reflected by the kinds of expertise measured by the conventional ability tests" (p. 21).

Are we sure that "competencies" are the same thing as "expertise"? How about "school psychology" and "cognitive education", as in the titles of the articles in Exhibit 1? Are these concepts really so similar that one can recycle, verbatim, hundreds of words at a time about one of them and be sure that all of those words, and the empirical observations that they sometimes describe, are equally applicable to both? And if so, why bother to have the two concepts at all?

Relatedly, the single biggest source of words for Exhibit 3—published in 2012—was a chapter published in 1990. Can it really be the case that so little has been discovered in 22 years in research into the nature of intelligence that this material doesn't even merit rewriting from a retrospective viewpoint?

#### What next?

I'm not sure, frankly. But James Heathers has some thoughts here.

(*) Brendan and I are looking for other similar examples to the ones described in this post. Given how easy it was to find these ones, we suspect that there may be more to be uncovered.

(**) While searching, I lost track of the number of times that the descriptions of the Rainbow and Kaleidoscope projects have been recycled across multiple publications. Citing the copy from the 2012 article seemed like an appropriate way to convey the continuity of the problem. For some reason, though, in this version from 2005, the number of students included in the sample was 777, instead of the 793 reported everywhere else.

## 13 March 2018

### Announcing a crowdsourced reanalysis project

(Update 2018-03-14 10:18 UTC: I have received lots of offers to help with this, and I now have enough people helping.  So please don't send me an e-mail about this.)

Back in the spring of 2016, for reasons that don’t matter here, I found myself needing to understand a little bit about the NHANES (National Health and Nutrition Examination Survey) family of datasets.  NHANES is an ongoing programme that has been running in the United States since the 1970s, looking at how nutrition and health interact.

Most of the datasets produced by the various waves of NHANES are available to anyone who wants to download them. Before I got started on my project (which, in the end, was abandoned, again for reasons that don’t matter here), I thought that it was a good idea to check that I understood the structure of the data by reproducing the results of an article based on them. This seemed especially important because the NHANES files—at least, the ones I was interested in—are supplied in a format that requires SAS to read, and I needed to convert them to CSV before analyzing them in R.  So I thought the best way to check this would be to take a well-cited article and reproduce its table of results, which would allow me to be reasonably confident that I had done the conversion right, understood the variable names, etc.

Since I was using the NHANES-III data (from the third wave of the NHANES programme, conducted in the mid-1990s), I chose an article at random by looking for references to NHANES-III in Google Scholar (I don’t remember the exact search string) and picking the first article that had several hundred citations. I won't mention its title here (read on for more details), but it addresses what is clearly an important topic and seemed like a very nice paper—exactly what I was looking for to test whether or not I was converting, importing, and interpreting the NHANES data correctly.

Having identified and downloaded the NHANES files that I needed, opening those files using SAS University Edition and exporting them to CSV format turned out to require just a couple of lines of code using PROC EXPORT, for which I was able to find the syntax on the web quite easily. Once I had those CSV files, I could write my code to read them in, extract the appropriate variables, and repeat most of the analyses in the article that I had chosen.

Regular readers of this blog may be able to guess what happened next: I didn’t get the same results as the authors.  I won’t disclose too many details here because I don’t want to bias the reanalysis exercise that I’m proposing to conduct, but I will say that the differences did not seem to me to be trivial.  If my numbers are correct then a fairly substantial correction to the tables of results will be required.  At least one (I don't want to give more away) of the statistically significant results is no longer statistically significant, and many of the significant odds ratios are considerably smaller.  (There are also a couple of reporting errors in plain sight in the article itself.)

When I discovered these apparent issues back in 2016, I wrote to the lead author, who told me that s/he was rather busy and invited me to get in touch again after the summer. I did so, but s/he then didn't reply further. Oh well. People are indeed often very busy, and I can see how, just because one person who maybe doesn't understand everything that you did in your study writes to you, that perhaps isn't a reason to drop everything and start going through some calculations you ran more than a decade ago.  I let the matter drop at the time because I had other stuff to do, but a few weeks ago it stuck its nose up through the pile of assorted back burner projects (we all have one) and came to my attention again.

So, here's the project.  I want to recruit a few (ideally around three) people to independently reanalyse this article using the NHANES-III datasets and see if they come up with the same results as the original authors, or the same as me, or some different set of results altogether.  My idea is that, if several people working completely independently (within reason) come up with numbers that are (a) the same as each other and (b) different from the ones in the article, we will be well placed to submit a commentary article for publication in the journal (which has an impact factor over 5), suggesting that a correction might be in order. On the other hand, if it turns out that my analyses were wrong, and the article is correct, then I can send the lead author a note to apologise for the (brief) waste of his time that my 2016 correspondence with him represented. Whatever the outcome, I hope that we will all learn something.

For the moment I'm not going to name the article here, because I don't want to have too many people running around reanalysing it outside of this "crowdsourced" project.  Of course, if you sign up to take part, I will tell you what the article is, and then I can't stop you shouting its DOI from the rooftops, but I'd prefer to keep this low-key for now.

If you would like to take part, please read the conditions below.

1. If the line below says "Still accepting offers", proceed. If it says "I have enough people who have offered to help", stop here, and thanks for reading this far.

========== I have enough people who have offered to help ==========

2. You need to be reasonably competent at performing logistic regressions in SAS, or in a software package that can read SAS or CSV files. I used R; the original authors used proprietary software (not SAS). It would be great if all of the people who volunteered used different packages, but I'm not going to turn down anyone just because someone else wants to use the same analysis software. However, I'm also not going to give you a tutorial on how to run a logistic regression (not least because I am not remotely an expert on this myself).

3. Volunteers will be anonymous until I have all the results (to avoid, as far as possible, people collaborating with each other). However, by participating, you accept that once the results are in, your name and your principal results may be published in a follow-up blog post. You also accept, in principle, to be a co-author on any letter to the editor that might result from this exercise. (This point isn't a commitment to be signed in blood at this stage, but I don't want anyone to be surprised or offended when I ask if I can publish their results or use them to support a letter.)

4. If you want to work in a team on this with some colleagues, please feel free to do so, but I will only put one person's name forward per reanalysis on the hypothetical letter to the editor; others who helped may get an acknowledgement, if the journal allows. Basically, ensure that you can say "Yes, I did most of the work on this reanalysis, I meet the criteria for co-authorship".

5. The basic idea is for you to work on your own and solve your own problems, including understanding what the original authors did. The article is reasonably transparent about this, but it's not perfect and there are some ambiguities. I would have liked to have the lead author explain some of this, but as mentioned above, s/he appears to be too busy. If you hit problems then I can give you a minimum amount of help based on my insights, but of course the more I do that, the more we risk not being independent of each other. (That said, I could do with some help in understanding what the authors did at one particular point...)

6. You need to be able to get your reanalysis done by June 30, 2018. This deadline may be moved (by me) if I have trouble recruiting people, but I don't want to repeat a recent experience where a couple of the people who had offered to help me on a project stopped responding to their e-mails for several months, leaving me to decide whether or not to drop them. I expect that the reanalysis will take between 10 and 30 hours of your time, depending on your level of comfort with computers and regression analyses.

Are you still here? Then I would be very happy if you would decide whether you think this reanalysis is within your capabilities, and then make a small personal commitment to follow through with it.  If you can do that, please send me an e-mail (nicholasjlbrown, gmail) and I will give you the information you need to get started.

## 26 February 2018

### The Cornell Food and Brand Lab story goes full circle, possibly scooping up much of social science research on the way, and keeps turning

Stephanie Lee of BuzzFeed has just published another excellent article about the tribulations of the Cornell Food and Brand Lab.  This time, her focus is on the p-hacking, HARKing, and other "questionable research practices" (QRPs) that seem to have been standard in this lab for many years, as revealed in a bunch of e-mails that she obtained via Freedom of Information (FoI) requests.  In a way, this brings the story back to the beginning.

It was a bit more than a year ago when Dr. Brian Wansink wrote a blog post (since deleted, hence the archived copy) that attracted some negative attention, partly because of what some people saw as poor treatment of graduate students, but more (in terms of the weight of comments, anyway) because it described what appeared to be some fairly terrible ways of doing research (sample: 'Every day she came back with puzzling new results, and every day we would scratch our heads, ask "Why," and come up with another way to reanalyze the data with yet another set of plausible hypotheses'). It seemed pretty clear that researcher degrees of freedom were a big part of the business model of this lab. Dr. Wansink claimed not to have heard of p-hacking before the comments started appearing on his blog post; I have no trouble believing this, because news travels slowly outside the bubble of Open Science Twitter.  (Some advocates of better scientific practices in psychology have recently claimed that major improvements are now underway. All I can say is, they can't be reviewing the same manuscripts that I'm reviewing.)

However, things rapidly became a lot stranger.  When Tim, Jordan, and I re-analyzed some of the articles that were mentioned in the blog post, we discovered that many of the reported numbers were simply impossible, which is not a result you'd expect from the kind of "ordinary" QRPs that are common in psychology.  If you decide to exclude some outliers, or create subgroups based on what you find in your data, your ANOVA still ought to give you a valid test statistic and your means ought to be compatible with the sample sizes.

Then we found recycled text and tables of results, and strangely consistent numbers of responses to multiple surveys, and results that correlated .97 across studies with different populations, and large numbers of female WW2 combat veterans, and references that went round in circles, and unlikely patterns of responses. It seemed that nobody in the lab could even remember how old their participants were.  Clearly, this lab's output, going back 20 or more years to a time before Dr. Wansink joined Cornell, was a huge mess.

Amidst all that weirdness, it was possible to lose sight of the fact that what got everything started was the attention drawn to the lab by that initial blog post from November 2016, at which point most of us thought that the worst we were dealing with was rampant p-hacking.  Since then, various people have offered opinions on what might be going on in the lab; one of the most popular explanations has been, if I can paraphrase, "total cluelessness".  On this account, the head of the lab is so busy (perhaps at least partly due to his schedule of media appearances, testimony before Congress, and corporate consulting*), the management of the place so overwhelmed on a day-to-day basis, that nobody knows what is being submitted to journals, which table to include in which manuscript, or which folder on the shared drive contains the datasets.  You could almost feel sorry for them.

Stephanie's latest article changes that, at least for me.  The e-mail exchanges that she cites and discusses seem to show deliberate and considered discussion about what to include and what to leave out, why it's important to "tweek" [sic] results to get a p value down to .05, which sets of variables to combine in search of moderators, and which types of message will appeal to the editors (and readers) of various journals.  Far from being chaotic, it all seems to be rather well planned to me; in fact, it gives exactly the impression that Dr. Wansink presumably wanted to give in the blog post that led us down this rabbit hole in the first place. When Brian Nosek, one of the most diplomatic people in science, is prepared to say that something looks like research misconduct, it's hard to maintain that you're just in an argument with over-critical data thugs.

Maybe this anger can be turned into something good.  Perhaps we will see a social media-based movement, inspired by some of the events of the past year, for people to reveal some of the bad methodological stuff their PIs expect them to do. I won't go into any details here, partly because the other causes I'm thinking about are arguably more important than social science research and I don't want to appear to be hitching a ride on their bandwagon by proposing hashtags (although I wonder how many people who thought that they would lose weight by decanting their breakfast cereal into small bags are about to receive a diagnosis of type II diabetes mellitus that could have been prevented if they had actually changed their dietary habits), and partly because as someone who doesn't work in a lab, it's a lot easier for me to talk about this stuff than it is for people with insecure employment that depends on keeping a p-hacking boss happy.

Back to Cornell: we've come full circle.  But maybe we're just starting on the second lap.  Because, as I noted earlier, all the p-hacking, HARKing, and other stuff that renders p values meaningless still can't explain the impossible numbers, duplicated tables, and other stuff that makes this story rather different from what, I suspect, might (apart, perhaps, from the scale at which these QRPs are being applied) be "business as usual" in a lot of places. Why go to all the trouble of combining variables until a significant moderator shows up in SPSS or Stata, and then report means and test statistics that can't possibly have been output by those programs?  That part still makes no sense to me.  Nor does Dr. Wansink's claim that he and all his colleagues "didn't remember", when he wrote the correction to the "Elmo" article in the summer of 2017, that the study was conducted on daycare kids, when in February of that year he referred to daycare explicitly (and there are several other clues, some of which I've documented over the past year in assorted posts). And people with better memories than mine have noted that the "complete" releases of data that we've been given appear not to be as complete as they might be.  We are still owed another round of explanations, and I hope that, among what will probably be a wave of demands for more improvements in research practices, we can still find time to get to the bottom of what exactly happened here, because I don't think that an explanation based entirely on "traditional" QRPs is going to cover it.

* That link is to a Google cache from 2018-02-19, because for some reason, the web page for McDonald's Global Advisory Council gives a 404 error as I'm writing this. I have no idea whether that has anything to do with current developments, or if it's just a coincidence.

## 06 February 2018

### The latest Cornell Food and Brand Lab correction: Some inconsistencies and strange data patterns

[Update 2018-05-12 20:40 UTC: The study discussed below has now been retracted. ]

The Cornell Food and Brand Lab has a new correction. Tim van der Zee already tweeted a bit about it.

"Extremely odd that it isn't a retraction"? Let's take a closer look.

Here is the article that was corrected:
Wansink, B., Just, D. R., Payne, C. R., & Klinger, M. Z. (2012). Attractive names sustain increased vegetable intake in schools. Preventive Medicine, 55, 330–332. http://dx.doi.org/10.1016/j.ypmed.2012.07.012

This is the second article from this lab in which data were reported as having been collected from elementary school children aged 8–11, but it turned out that they were in fact collected from children aged 3–5 in daycares.  You can read the lab's explanation for this error at the link to the correction above (there's no paywall at present), and decide how convincing you find it.

Just as a reminder, the first article, published in JAMA Pediatrics, was initially corrected (via JAMA's "Retract and replace" mechanism) in September 2017. Then, after it emerged that the children were in fact in daycare, and that there were a number of other problems in the dataset that I blogged about, the article was definitively retracted in October 2017.

I'm going to concentrate on Study 1 of the recently-corrected article here, because the corrected errors in this study are more egregious than those in Study 2, and also because there are still some very substantial problems remaining.  If you have access to SPSS, I also encourage you to download the dataset for Study 1, along with the replication syntax and annotated output file, from here.

By the way, in what follows, you will see a lot of discussion about the amount of "carrots" eaten.  There has been some discussion about this, because the original article just discussed "carrots" with no qualification. The corrected article tells us that the carrots were "matchstick carrots", which are about 1/4 the size of a baby carrot. Presumably there is a U.S. Standard Baby Carrot kept in a science museum somewhere for calibration purposes.

So, what are the differences between the original article and the correction? Well, there are quite a few. For one thing, the numbers in Table 1 now finally make sense, in that the number of carrots considered to have been "eaten" is now equal to the number of carrots "taken" (i.e., served to the children) minus the number of carrots "uneaten" (i.e., counted when their plates came back after lunch).  In the original article, these numbers did not add up; that is, "taken" minus "uneaten" did not equal "eaten".  This is important because, when asked by Alison McCook of Retraction Watch why this was the case, Dr. Brian Wansink (the head of the Cornell Food and Brand Lab) implied that it must have been due to some carrots being lost (e.g., dropped on the floor, or thrown in food fights). But this makes no sense for two reasons. First, in the original article, the number of carrots "eaten" was larger than the difference between "taken" and "uneaten", which would imply that, rather than being dropped on the floor or thrown, some extra carrots had appeared from somewhere.  Second, and more fundamentally, the definition of the number of carrots eaten is (the number taken) minus (the number left uneaten).  Whether the kids ate, threw, dropped, or made sculptures out of the carrots doesn't matter; any that didn't come back were classed as "eaten". There was no monitoring of each child's oesophagus to count the carrots slipping down.

When we look in the dataset, we can see that there are separate variables for "taken" (e.g., "@1CarTaken" for Monday, "@2CarTaken" for Tuesday, etc), "uneaten" (e.g., "@1CarEnd", where "End" presumably corresponds to "left at the end"), and "eaten" (e.g., "@1CarEaten").  In almost all cases, the formula ("eaten" equals "taken" minus "uneaten") holds, except for a few missing values and two participants (#42 and #152) whose numbers for Monday seem to have been entered in the wrong order; for both of these participants, "eaten" equals "taken" plus "uneaten". That's slightly concerning because it suggests that, instead of just entering "taken" and "uneaten" (the quantities that were capable of being measured) and letting their computer calculate "eaten", the researchers calculated "eaten" by hand and typed in all three numbers, doing so in the wrong order for these two participants in the process.
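As a side note, this kind of consistency check is easy to script. Here is a minimal sketch in R, using toy numbers (made up for illustration, not the real dataset) to show the idea:

```r
# Toy data illustrating the consistency check: "eaten" should equal
# "taken" minus "uneaten" on every row (made-up numbers, not the real dataset).
d <- data.frame(taken   = c(10, 20, 15, 8),
                uneaten = c( 2,  5,  0, 8),
                eaten   = c( 8, 15, 15, 16))  # last row: entered as taken PLUS uneaten

bad <- which(d$eaten != d$taken - d$uneaten)
d[bad, ]   # row 4 is flagged: 8 taken, 8 uneaten, but "eaten" recorded as 16
```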

Another major change is that whereas in the original article the study was run on three days, in the correction there are reports of data from four days.  In the original, Monday was a control day, the between-subject manipulation of the carrot labels was done on Tuesday, and Thursday was a second control day, to see if the effect persisted. In the correction, Thursday is now a second experimental day, with a different experiment that carried over to Friday: the carrots served on Thursday were given one of two labels ("X-ray Vision Carrots" or "Food of the Day"; there was no "no label" condition), but instead of the number of carrots eaten on Thursday, the dependent variable was the number of carrots eaten on the next day (Friday).

OK, so those are the differences between the two articles. But arguably the most interesting discoveries are in the dataset, so let's look at that next.

### Randomisation #fail

As Tim van der Zee noted in the Twitter thread that I linked to at the top of this post, the number of participants in Study 1 in the corrected article has mysteriously increased since the original publication. Specifically, the number of children in the "Food of the Day" condition has gone from 38 to 48, an increase of 10, and the number of children in the "no label" condition has gone from 45 to 64, an increase of 19.  You might already be thinking that a randomisation process that leads to only 22.2% (32 of 144) of participants being in the experimental condition might not be an especially felicitous one, but as we will see shortly, that is by no means the largest problem here.  (The original article does not actually discuss randomisation, and the corrected version only mentions it in the context of the choice of two labels in the part of the experiment that was conducted on the Thursday, but I think it's reasonable to assume that children were meant to be randomised to one of the carrot labelling conditions on the Tuesday.)

The participants were split across seven daycare centres and/or school facilities (I'll just go with the authors' term "schools" from now on).  Here is the split of children per condition and per school:

Oh dear. It looks like the randomisation didn't so much fail here, as not take place at all, in almost all of the schools.

Only two schools (#1 and #4) had a non-zero number of children in each of the three conditions. Three schools had zero children in the experimental condition. Schools #3, #5, #6, and #7 only had children in one of the three conditions. The justification for the authors' model in the corrected version of the article ("a Generalized Estimated Equation model using a negative binominal distribution and log link method with the location variable as a repeated factor"), versus the simple ANOVA that they performed in the original, was to be able to take into account the possible effect of the school. But I'm not sure that any amount of correction for the effect of the school is going to help you when the data are as unbalanced as this.  It seems quite likely that the teachers or researchers in most of the schools were not following the protocol very carefully.

### At school #1, thou shalt eat carrots

Something very strange must have been happening in school #1.  Here is the table of the numbers of children taking each number of carrots in schools #2-#7 combined:

I think that's pretty much what one might expect.  About a quarter of the kids took no carrots at all, most of the rest took a few, and there were a couple of major carrot fans.  Now let's look at the distribution from school #1:

Whoa, that's very different. No child in school #1 had a lunch plate with zero carrots. In fact, all of the children took a minimum of 10 carrots, which is more than 44 (41.1%) of the 107 children in the other schools took.  Even more curiously, almost all of the children in school #1 apparently took an exact multiple of 10 carrots - either 10 or 20. And if we break these numbers down by condition, it gets even stranger:

So 17 out of 21 children in the control condition ("no label", which in the case of daycare children who are not expected to be able to read labels anyway presumably means "no teacher describing the carrots") in school #1 chose exactly 10 carrots. Meanwhile, every single child (12 out of 12) in the "Food of the Day" condition selected exactly 20 carrots.

I don't think it's necessary to run any statistical tests here to see that there is no way that this happened by chance. Maybe the teachers were trying extra hard to help the researchers get the numbers they wanted by encouraging the children to take more carrots than they otherwise would (remember, from schools #2-#7, we could expect a quarter of the kids to take zero carrots). But then, did they count out these matchstick carrots individually, 1, 2, 3, up to 10 or 20? Or did they serve one or two spoonfuls and think, screw it, I can't be bothered to count them, let's call it 10 per spoon?  Participants #59 (10 carrots), #64 (10), #70 (22), and #71 (10) have the comment "pre-served" recorded in their data for this day; does this mean that for these children (and perhaps others with no comment recorded), the teachers chose how many carrots to give them, thus making a mockery of the idea that the experiment was trying to determine how the labelling would affect the kids' choices?  (I presume it's just a coincidence that the number of kids with 20 carrots in the "Food of the Day" condition, and the number with 10 carrots in the "no label" condition, are very similar to the number of extra kids in these respective conditions between the original and corrected versions of the article.)

### The tomatoes... and the USDA project report

Another interesting thing to emerge from an examination of the dataset is that not one but two foods, with and without "cool names", were tested during the study.  As well as "X-ray Vision Carrots", children were also offered tomatoes. On at least one day, these were described as "Tomato Blasts". The dataset contains variables for each day recording what appears to be the order in which each child was served with the tomatoes or carrots.  Yet, there are no variables recording how many tomatoes each child took, ate, or left uneaten on each day. This is interesting, because we know that these quantities were measured. How? Because it's described in this project report by the Cornell Food and Brand Lab on the USDA website:

"... once exposed to the x-ray vision carrots kids ate more of the carrots even when labeled food of the day. No such strong relationship was observed for tomatoes, which could mean that the label used (tomato blasts) might not be particularly meaningful for children in this age group."

This appears to mean that the authors tested two dependent variables, but only reported the one that gave a statistically significant result. Does that sound like readers of the Preventive Medicine article (either the original or the corrected version) are being provided with an accurate representation of the research record? What other variables might have been removed from the dataset?

It's also worth noting that the USDA project report that I linked to above states explicitly that both the carrots-and-tomatoes study and the "Elmo"/stickers-on-apples study (later retracted by JAMA Pediatrics) were conducted in daycare facilities, with children aged 3–5.  It appears that the Food and Brand Lab probably sent that report to the USDA in 2009. So how was it that by March 2012 (the date on this draft version of the original "carrots" article) everybody involved in writing "Attractive Names Sustain Increased Vegetable Intake in Schools" had apparently forgotten about it, and was happy to report that the participants were elementary school students?  And yet, when Dr. Wansink cited the JAMA Pediatrics article in 2013 and 2015, he referred to the participants as "daycare kids" and "daycare children", respectively; so his incorrect citation of his own work actually turns out to have been a correct statement of what had happened.  And in the original version of that same "Elmo" article, published in 2012, the authors referred to the children (who were meant to be aged 8–11) as "preliterate". So even if everyone had forgotten about the ages of the participants at a conscious level, this knowledge seems to have been floating around subliminally. This sounds like a very interesting case study for psychologists.

Another interesting thing about the March 2012 draft that I mentioned in the previous paragraph is that it describes data being collected on four days (i.e., the same number of days as the corrected article), rather than the three days mentioned in the original version of the article, which was published just four months after the date of the draft:

Extract from the March 2012 draft manuscript, showing the description of the data collection period, with the PDF header information (from File/Properties) superposed.

So apparently at some point between drafting the original article and submitting it, one of the days was dropped, with the second control day being moved up from Friday to Thursday. Again, some people might feel that at least one version of this article might not be an accurate representation of the research record.

### Miscellaneous stuff

Some other minor peculiarities in the dataset, for completeness:

- On Tuesday (the day of the experiment, after a "control" day), participants 194, 198, and 206 were recorded as commenting about "cool carrots"; it is unclear whether this was a reference to the name that was given to the carrots on Monday or Tuesday.  But on Monday, a "control" day, the carrots should presumably have had no name, and on Tuesday they should have been described as "X-ray Vision Carrots".

- On Monday and Friday, all of the carrots should have been served with no label. But the dataset records that five participants (#199, #200, #203, #205, and #208) were in the "X-ray Vision Carrots" condition on Monday, and one participant (#12) was in the "Food of the Day" condition on Friday. Similarly, on Thursday, according to the correction, all of the carrots were labelled as "Food of the Day" or "X-ray Vision Carrots". But two of the cases (participants #6 and #70) have the value that corresponds to "no label" here.

These are, again, minor issues, but they shouldn't be happening. In fact there shouldn't even be a variable in the dataset for the labelling condition on Monday and Friday, because those were control-only days.

### Conclusion

What can we take away from this story?  Well, the correction at least makes one thing clear: absolutely nothing about the report of Study 1 in the original published article makes any sense. If the correction is indeed correct, the original article got almost everything wrong: the ages and school status of the participants, the number of days on which the study was run, the number of participants, and the number of outcome measures. We have an explanation of sorts for the first of these problems, but not the others.  I find it very hard to imagine how the authors managed to get so much about Study 1 wrong the first time they wrote it up. The data for the four days and the different conditions are all clearly present in the dataset.  Getting the number of days wrong, and incorrectly describing the nature of the experiment that was run on Thursday, is not something that can be explained by a simple typo when copying the numbers from SPSS into a Word document (especially since, as I noted above, the draft version of the original article mentions four days of data collection).

In summary: I don't know what happened here, and I guess we may never know. What I am certain of is that the data in Study 1 of this article, corrected or not, cannot be the basis of any sort of scientific conclusion about whether changing the labels on vegetables makes children want to eat more of them.

I haven't addressed the corrections to Study 2 in the same article, although these would be fairly substantial on their own if they weren't overshadowed by the ongoing dumpster fire of Study 1.  It does seem, however, that the spin that is now being put on the story is that Study 1 was a nice but perhaps "slightly flawed" proof-of-concept, but that there is really nothing to see there and we should all look at Study 2 instead.  I'm afraid that I find this very unconvincing.  If the authors have real confidence in their results, I think they should retract the article and resubmit Study 2 for review on its own. It would be sad for Matthew Z. Klinger, the then high-school student who apparently did a lot of the grunt work for Study 2, to lose a publication like this, but if he is interested in pursuing an academic career, I think it would be a lot better for him not to have his name on the corrected article in its present form.

## 08 January 2018

### Some quick and dirty R code for checking between-subjects F (and t) statistics

A while back, someone asked me how we (Tim van der Zee, Jordan Anaya, and I) checked the validity of the F statistics when we analyzed the "pizza papers" from the Cornell Food and Brand Lab.  I had an idea to write this up here, which has now floated to the top of the pile because I need to cite it in a forthcoming presentation. :-)

A quick note before we start: this technique applies to one- or two-way between-subjects ANOVAs. A one-way, two-condition ANOVA is equivalent to an independent samples t test; the F statistic is the square of the t statistic. I will sometimes mention only F statistics, but everything here applies to independent samples t tests too.  On the other hand, you can't use this technique to check mixed (between/within) ANOVAs, or paired-sample t tests, as those require knowledge of every value in the dataset.
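As a quick sanity check of that equivalence (toy data, added purely for illustration), R gives the same answer both ways:

```r
# For two groups, a one-way ANOVA's F is the square of the pooled-variance t.
set.seed(1)
y <- c(rnorm(10, mean = 5), rnorm(10, mean = 6))
g <- factor(rep(c("A", "B"), each = 10))

F_val <- anova(lm(y ~ g))[["F value"]][1]
t_val <- unname(t.test(y ~ g, var.equal = TRUE)$statistic)

all.equal(F_val, t_val^2)   # TRUE
```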

It turns out that, subject to certain limitations (discussed later), you can derive the F statistic for a between-subjects ANOVA from (only) the per-cell means, SDs, and sample sizes.  You don't need the full dataset.  There are some online calculators that perform these tests; however, they typically assume that the input means and SDs are exact, which is unrealistic.  I can illustrate this point with the t test (!) calculator from GraphPad.  Open that up and put these numbers in:
(Note that we are not using Welch's t test here, although Daniël Lakens will tell you --- and he is very probably right --- that we should usually do so; indeed, we should use Welch's ANOVA too. Our main reason for not doing that here is that the statistics you are checking will usually not have been computed with the Welch versions of these tests; also, the code that I present below depends on an R package that assumes that the non-Welch tests are used.  You can usually detect if Welch's tests have been used, as the [denominator] degrees of freedom will not be integers.)
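Those non-integer degrees of freedom are easy to see in R, where t.test() uses Welch's correction by default (toy data, added for illustration):

```r
# Welch's t test (R's default) vs. the classic pooled-variance test.
set.seed(2)
a <- rnorm(20)
b <- rnorm(25, sd = 2)

df_welch   <- unname(t.test(a, b)$parameter)                    # Welch (default)
df_classic <- unname(t.test(a, b, var.equal = TRUE)$parameter)  # pooled variance

df_classic                   # exactly 43 (i.e., 20 + 25 - 2)
df_welch == round(df_welch)  # FALSE: the fractional df give Welch's test away
```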

Having entered those numbers, click on "Calculate now" (I'm not sure why the "Clear the form" button is so large or prominent!) and you should get these results: t = 4.3816, df = 98.  Now, suppose the article you are reading states that "People in condition B (M=5.06, SD=1.18, N=52) outperformed those in condition A (M=4.05, SD=1.12, N=48), t(98)=4.33". Should you reach for your green ballpoint pen and start writing a letter to the editor about the difference in the t value?  Probably not.  The thing is, the reported means (4.05, 5.06) and SDs (1.12, 1.18) will have been rounded after being used to calculate the test statistic.  The mean of 4.05 could have been anywhere from 4.045 to 4.055 (rounding rules are complex, but whether this is 4.0499999 or 4.0500000 doesn't matter too much), the SD of 1.12 could have been in the range 1.115 to 1.125, etc.  This can make quite a difference.  How much?  Well, we can generate the maximum t statistic by making the difference between the means as large as possible, and both of the SDs as small as possible:

That gives a t statistic of 4.4443.  To get the smallest possible t value, we make the difference between the means as small as possible, and both of the SDs as large as possible, in an analogous way.  I'll leave the filling in of the form as an exercise for you, but the result is 4.3195.

So we now know that when we see a statement such as "People in condition B (M=5.06, SD=1.18, N=52) outperformed those in condition A (M=4.05, SD=1.12, N=48), t(98)=<value>", any t value from 4.32 through 4.44 is plausible; values outside that range are, in principle, not possible. If you see multiple such values in an article, or even just one or two with a big discrepancy, it can be worth investigating further.
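The same calculation can be scripted directly. Here is a minimal sketch (independent of the f_range() code presented below) that reproduces the nominal, maximum, and minimum t values for the example above, using the standard pooled-variance formula:

```r
# Pooled-variance two-sample t statistic from summary statistics only.
pooled_t <- function(m1, s1, n1, m2, s2, n2) {
  sp2 <- ((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2)  # pooled variance
  (m2 - m1) / sqrt(sp2 * (1 / n1 + 1 / n2))
}

# Range of t values consistent with means/SDs reported to dp decimal places.
t_range <- function(m1, s1, n1, m2, s2, n2, dp = 2) {
  d <- (0.1 ^ dp) / 2   # maximum rounding error, e.g. 0.005 for 2 dp
  c(nominal = pooled_t(m1, s1, n1, m2, s2, n2),
    max = pooled_t(m1 - d, s1 - d, n1, m2 + d, s2 - d, n2),  # widest gap, smallest SDs
    min = pooled_t(m1 + d, s1 + d, n1, m2 - d, s2 + d, n2))  # narrowest gap, largest SDs
}

round(t_range(4.05, 1.12, 48, 5.06, 1.18, 52), 4)
# nominal 4.3816, max 4.4443, min 4.3195 (matching the GraphPad results above)
```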

The online calculators I have seen that claim to do these tests have a few other limitations as well as the problem of rounded input values.  First, the interface is a bit clunky (typically involving typing numbers into a web form, which you have to do again tomorrow if you want to re-run the analyses). Second, some of them use Java, and that may not work with your browser. What we needed, at least for the "Statistical Heartburn" analyses, was some code.  I wrote mine in R and Jordan independently wrote a version in Python; we compared our results at each step of the way, so we were fairly confident that we had the right answers (or, of course, we could have both made the same mistakes).

My solution uses an existing R library called rpsychi, which does the basic calculations of the test statistics. I wrote a wrapper function called f_range(), which does the work of calculating the upper and lower bounds of the means and SDs, and outputs the minimum and maximum F (or t, if you set the parameter show.t to TRUE) statistics.

Usage is intended to be relatively straightforward.  The main parameters of f_range() are vectors or matrices of the per-cell means (m), SDs (s), and sample sizes (n).  You can add show.t=TRUE to get t (rather than F) statistics, if appropriate; setting dp.p forces the number of decimal places used to that value (although the default almost always works); and title and labels are cosmetic. Here are a couple of examples from the "pizza papers":

1. Check Table 1 from "Lower Buffet Prices Lead to Less Taste Satisfaction"
```n.lbp.t1 <- c(62, 60)

m.lbp.t1.l1 <- c(44.16, 46.08)
sd.lbp.t1.l1 <- c(18.99, 14.46)
f_range(m=m.lbp.t1.l1, s=sd.lbp.t1.l1, n=n.lbp.t1, title="Age")

m.lbp.t1.l3 <- c(68.52, 67.91)
sd.lbp.t1.l3 <- c(3.95, 3.93)
f_range(m=m.lbp.t1.l3, s=sd.lbp.t1.l3, n=n.lbp.t1, title="Height")

m.lbp.t1.l4 <- c(180.84, 182.31)
sd.lbp.t1.l4 <- c(48.37, 48.41)
f_range(m=m.lbp.t1.l4, s=sd.lbp.t1.l4, n=n.lbp.t1, title="Weight")

m.lbp.t1.l5 <- c(3.00, 3.28)
sd.lbp.t1.l5 <- c(1.55, 1.29)
f_range(m=m.lbp.t1.l5, s=sd.lbp.t1.l5, n=n.lbp.t1, title="Group size")

m.lbp.t1.l6 <- c(6.62, 6.64)
sd.lbp.t1.l6 <- c(1.85, 2.06)
# Next line gives an F too small for rpsychi to calculate
# f_range(m=m.lbp.t1.l6, s=sd.lbp.t1.l6, n=n.lbp.t1, title="Hungry then")

m.lbp.t1.l7 <- c(1.88, 1.85)
sd.lbp.t1.l7 <- c(1.34, 1.75)
f_range(m=m.lbp.t1.l7, s=sd.lbp.t1.l7, n=n.lbp.t1, title="Hungry now")
```

2. Check Table 2 from "Eating Heavily: Men Eat More in the Company of Women"
```lab.eh.t2 <- c("gender", "group", "gender x group")
n.eh.t2 <- matrix(c(40, 20, 35, 10), ncol=2)

m.eh.t2.l1 <- matrix(c(5.00, 2.69, 4.83, 5.54), ncol=2)
sd.eh.t2.l1 <- matrix(c(2.99, 2.57, 2.71, 1.84), ncol=2)
f_range(m=m.eh.t2.l1, s=sd.eh.t2.l1, n=n.eh.t2, title="Line 1", labels=lab.eh.t2)

m.eh.t2.l2 <- matrix(c(2.99, 1.55, 1.33, 1.05), ncol=2)
sd.eh.t2.l2 <- matrix(c(1.75, 1.07, 0.83, 1.38), ncol=2)
f_range(m=m.eh.t2.l2, s=sd.eh.t2.l2, n=n.eh.t2, title="Line 2", labels=lab.eh.t2)

m.eh.t2.l3 <- matrix(c(2.67, 2.76, 2.73, 1.00), ncol=2)
sd.eh.t2.l3 <- matrix(c(2.04, 2.18, 2.16, 0.00), ncol=2)
f_range(m=m.eh.t2.l3, s=sd.eh.t2.l3, n=n.eh.t2, title="Line 3", labels=lab.eh.t2)

m.eh.t2.l4 <- matrix(c(1.46, 1.90, 2.29, 1.18), ncol=2)
sd.eh.t2.l4 <- matrix(c(1.07, 1.48, 2.28, 0.40), ncol=2)
f_range(m=m.eh.t2.l4, s=sd.eh.t2.l4, n=n.eh.t2, title="Line 4", labels=lab.eh.t2)

m.eh.t2.l5 <- matrix(c(478.75, 397.5, 463.61, 111.71), ncol=2)
sd.eh.t2.l5 <- matrix(c(290.67, 191.37, 264.25, 109.57), ncol=2)
f_range(m=m.eh.t2.l5, s=sd.eh.t2.l5, n=n.eh.t2, title="Line 5", labels=lab.eh.t2)

m.eh.t2.l6 <- matrix(c(2.11, 2.27, 2.20, 1.91), ncol=2)
sd.eh.t2.l6 <- matrix(c(1.54, 1.75, 1.71, 2.12), ncol=2)
f_range(m=m.eh.t2.l6, s=sd.eh.t2.l6, n=n.eh.t2, title="Line 6", labels=lab.eh.t2)
```

I mentioned earlier that there were some limitations on what this software can do. Basically, once you get beyond a 2x2 design (e.g., 3x2), there can be some (usually minor) discrepancies between the F statistics calculated by rpsychi and the numbers that might have been returned by the ANOVA software used by the authors of the article that you are reading, if the sample sizes are unbalanced across three or more conditions; the magnitude of such discrepancies will depend on the degree of imbalance.  This issue is discussed in a section starting at the bottom of page 3 of our follow-up preprint.

A further limitation is that rpsychi has trouble with very small F statistics (such as 0.02). If you have a script that makes multiple calls to f_range(), it may stop when this occurs. The only workaround I know of for this is to comment out that call (as shown in the first example above).
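One generic alternative, which is not something I used in the scripts above, is to wrap each call in R's try() so that a batch of checks continues past the error. A sketch, with a hypothetical stand-in for the failing function:

```r
# Hypothetical stand-in for a call that errors out on very small F statistics.
fragile <- function(f) {
  if (f < 0.05) stop("F statistic too small to process")
  f
}

res <- try(fragile(0.02), silent = TRUE)   # would normally halt the script
if (inherits(res, "try-error")) cat("Skipping this line of the table\n")
```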

Here is the R code for f_range(). It is released under a CC BY license, so you can do pretty much what you like with it.  I decided not to turn it into an R package because I want it to remain "quick and dirty", and packaging it would require an amount of polishing that I don't want to put in at this point.  This software comes with no technical support (but I will answer polite questions if you ask them via the comments on this post) and I accept no responsibility for anything you might do with it. Proceed with caution, and make sure you understand what you are doing (for example, by having a colleague check your reasoning) before you do... well, anything in life, really.

```library(rpsychi)

# Function to display the possible ranges of the F or t statistic from a one- or two-way ANOVA.
f_range <- function (m, s, n, title=FALSE, show.t=FALSE, dp.p=-1, labels=c()) {
m.ok <- m
if (is.matrix(m.ok)) {   # a matrix of means indicates a two-way design
func <- ind.twoway.second
useF <- c(3, 2, 4)
default_labels <- c("col F", "row F", "inter F")
}
else {
m.ok <- matrix(m)
func <- ind.oneway.second
useF <- 1
default_labels <- c("F")
if (show.t) {
default_labels <- c("t")
}
}

# Determine how many DPs to use from input numbers, if not specified
dp <- dp.p
if (dp.p == -1) {
dp <- 0
numbers <- c(m, s)
for (i in numbers) {
if (i != round(i, 0)) {
dp <- max(dp, 1)
j <- i * 10
if (j != round(j, 0)) {
dp <- max(dp, 2)
}
}
}
}

if (length(labels) == 0) {
labels <- default_labels
}

# Calculate the nominal test statistic(s) (i.e., assuming no rounding error)
f.nom <- func(m=m.ok, sd=s, n=n)\$anova.table\$F

# We correct for rounding in reported numbers by allowing for the maximum possible rounding error.
# For the maximum F estimate, we subtract .005 from all SDs; for minimum F estimate, we add .005.
# We then add or subtract .005 to every mean, in all possible permutations.
# (".005" is an example, based on 2 decimal places of precision.)
delta <- (0.1 ^ dp) / 2    #typically 0.005
s.hi <- s - delta
s.lo <- s + delta

# Initialise maximum and minimum F statistics to unlikely values.
f.hi <- rep(-1, length(useF))
f.lo <- rep(999999, length(useF))
f.hi <- f.nom
f.lo <- f.nom

# Generate every possible combination of +/- maximum rounding error to add to each mean.
l <- length(m.ok)
rawcomb <- combn(rep(c(-delta, delta), l), l)
comb <- rawcomb[,!duplicated(t(rawcomb))]

# Generate every possible set of test statistics within the bounds of rounding error,
#  and retain the largest and smallest.
for (i in 1:ncol(comb)) {
f.hi <- pmax(f.hi, func(m=m.adj, sd=s.hi, n=n)\$anova.table\$F)
f.lo <- pmin(f.lo, func(m=m.adj, sd=s.lo, n=n)\$anova.table\$F)
}

if (show.t) {
f.nom <- sqrt(f.nom)
f.hi <-  sqrt(f.hi)
f.lo <-  sqrt(f.lo)
}

if (title != FALSE) {
cat(title)
}

sp <- " "
fdp <- 2     # best to report Fs to 2 DP always, I think
dpf <- paste("%.", fdp, "f", sep="")
for (i in 1:length(useF)) {
j <- useF[i]
cat(sp, labels[i], ": ", sprintf(dpf, f.nom[j]),
" (min=", sprintf(dpf, f.lo[j]),
", max=", sprintf(dpf, f.hi[j]), ")",
sep="")
sp <- "  "
}

if ((dp.p == -1)  && (dp < 2)) {
cat(" <<< dp set to", dp, "automatically")
}

cat("\n", sep="")
}

```
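To see what the combn()/duplicated() step in the middle of the function is doing, here is a self-contained base-R snippet (using delta = 0.005, i.e., two decimal places of precision) that enumerates the distinct sign patterns for two reported means:

```r
delta <- 0.005   # maximum rounding error for numbers reported to 2 DP
l <- 2           # number of reported means

# All ways of choosing l values from l copies each of -delta and +delta...
rawcomb <- combn(rep(c(-delta, delta), l), l)

# ...then drop duplicate columns, leaving the 2^l distinct +/- patterns.
comb <- rawcomb[, !duplicated(t(rawcomb))]

ncol(comb)   # 4, i.e., 2^2 sign combinations for the two means
```

Each retained column is one way of perturbing the reported means to the edge of their rounding interval, which is why the function's main loop runs over ncol(comb) candidate configurations.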