The retraction of Clark et al (2020) from Psychological Science

Cory J. Clark, Bo M. Winegard, Jordan Beardslee, Roy F. Baumeister, and Azim F. Shariff had a paper published in Psychological Science, one of the top journals in psychology. Only months later, in June, they decided to retract it.

The retraction notice cites data based on small and skewed samples, and poor measures that open the door to questionable research practices. In other words, fairly standard bad science in psychology, particularly the small samples and poor measures. See, e.g., Malte Elson’s site Flexible Measures on how aggression is measured. I have myself discussed the poor measures used in investigative psychology in an episode of Everything Hertz (about 20 minutes into the episode). Low generalisability is also very common in psychology.

Although I agree we should expect a higher standard from psychology’s flagship journal, I was very surprised to see a retraction based on these reasons alone. A retroactive disclosure statement seems like it would have been the more obvious option. As we write in our manuscript currently under re-review at Psych Science:

“[I]t is not at all clear that widespread adoption of retractions would be an effective, fair, or appropriate approach. Willén (2018) argued that retraction of articles in which questionable practices were employed could deter researchers from being honest about their past actions. Furthermore, retracting papers because of questionable research practices (QRPs) known to be widespread (e.g., John et al., 2012) could have the unintended side effect that some researchers might naively conclude that a lack of a retraction implies a lack of QRPs. Hence, Willén (2018) suggested that all articles should be supplemented by transparent retroactive disclosure statements. In this manner, the historical research record remains intact, as information would be added rather than removed.”

But then I read the Editorial and realised that the retraction may not really be about the methods and data after all.

The journal’s Editor in Chief, Patricia J. Bauer, wrote an Editorial on the retraction, and this Editorial does not (only) discuss how the journal wants to handle the generally low standards in psychology, the replicability crisis, or how reviewers should be instructed to handle them. Instead, the Editorial focuses on a completely different reason for retraction, one not mentioned at all in the retraction notice: racism.

“As social scientists, we have a responsibility to be sensitive to the political, social, and cultural issues raised by our work. […] We must be especially sensitive when the topics with which we are dealing are associated with a history of injustice and when the message of our work could be inflammatory or incendiary.

In the case of the now-retracted article, some readers may debate whether the authors themselves were sufficiently sensitive to these issues. It is not my place to voice a perspective on that concern. It is my place to take a stand on whether in our handling of the manuscript, Psychological Science was sufficiently sensitive. I have concluded that we were not. We failed to recognize that the message of this article could be interpreted to have racial overtones and thus could be highly controversial. We therefore failed to act to mitigate the potential harm to which the message could contribute.”

“And because words matter, we also will be paying closer attention that in the articles we select for publication […] that conclusions and their possible implications are conveyed in a socially sensitive and scientifically responsible manner. These actions will make both our journal and our science more socially responsible. […]”

“I close with an apology to the field and the broader society for any harm to which we contributed by publishing research without sufficient sensitivity.”

The Editor emphasises:

“We should not and will not shy away from publishing articles on sensitive political, social, and cultural issues. But what we must and will do is exercise greater care in our handling of all submissions, including those on sensitive topics.”

She writes that the journal will add a submission type called “Further Reflections” to supplement papers published on controversial topics. A more important solution would be to require preregistration, open data and materials, and open-access publication. That is, no controversial topics or claims should be published without preregistration and complete public access to the data, materials, and paper. I’m surprised that nothing like this was mentioned in the Editorial, nor have I seen it raised in the few discussions I’ve come across on Twitter. Unlike “Further Reflections” comments, these solutions would also have made a difference with Bem’s 2011 publication in Psychological Science (see Wagenmakers et al., 2011, for an overview of the issue).

Correct me if I’m wrong, but a retraction due to methodological issues seems outright wrong in this case.


Link to Psychological Science where both the retraction notice and the editorial are found: https://www.psychologicalscience.org/publications/psychological_science/clark-2020-retraction-editorial


I agree that the real motivation for this retraction seems to have been the authors’ perceived lack of ‘sensitivity’ to the issues they were writing about, rather than methodological problems. Had they been writing about a less sensitive topic using the same datasets (admittedly, it’s hard to imagine how studying datasets on countries’ religiosity, violence, and IQ could lead to a zero-controversy outcome), I expect there would have been almost no public debate and a retraction would have been very unlikely.

This does raise the question of how a researcher can actually conduct research on sensitive topics. While all research should be technically sound, research on sensitive topics should probably exceed the standard for the field so that it can stand up to the methodological scrutiny it will undoubtedly receive.

Another question is whether our scientific peers, and society more broadly, wish for research on sensitive topics to be conducted in the first place. I agree with @rebecca that this is a particularly good situation for using preregistration: besides ensuring that data are not cherry-picked to fit a specific conclusion, it also allows editors and reviewers to discuss whether work should (from a political, societal, and ethical perspective) be conducted before it actually is. It would also be useful to have an open editorial and peer-review process to demonstrate that the research has received sufficient academic scrutiny prior to its final publication (a large part of the Editorial is used to defend the editorial and review process that Clark’s paper went through).

I’d also like to add a second suggestion that I think could be useful for sensitive, or otherwise controversial, research topics: adversarial collaborations. While scientists would ideally be disinterested in the actual results of their studies, in reality we are only human and suffer from personal, political, and sometimes commercial biases. An adversarial collaboration aims to combat such biases by having:

two or more scientists with opposing views work[ing] together. This can take the form of a scientific experiment conducted by two groups of experimenters with competing hypotheses, with the aim of constructing and implementing an experimental design in a way that satisfies both groups that there are no obvious biases or weaknesses in the experimental design.

Although adversarial collaboration was originally proposed for use in science by Daniel Kahneman, I’m not aware of any academic field where such collaborations are commonly undertaken. However, they are used quite regularly to debate controversial topics in the rationality community, with some good examples available on Slate Star Codex (link to an archived version, as the site is currently down). Combined with preregistration, the results of an adversarial collaboration studying a sensitive topic seem to stand the best chance of being accepted as the unbiased truth on the matter. Unfortunately:

what makes adversarial collaboration scientifically attractive—the prospect of breaking epistemic impasses—may also render it politically unattractive. Nothing will happen if either side decides that it is better off when there is less scientific clarity. For this reason, failures to broker adversarial collaborations are profoundly informative: they signal to the policy world that the American racism debate and the sub-debate on unconscious prejudice may be politicised beyond scientific redemption.


i agree with both @rebecca and @Gavin. just to note, i thought the term ‘adversarial collaboration’ was an oxymoron, but reading the link was fun, and reminded me of dominic cummings’ idea of red teams and blue teams for governments. see below.

also, perhaps force11 is practicing this ‘adversarial collaboration’ when multibillion corporations are heavily represented at their events. however, i think these corporations should not be allowed to sponsor force11 events, or any other open science events.

as the adage goes, those who pay the piper will call the tune. :slight_smile:

links to red team blue team:

If someone could produce some meaningful data about cross-national IQ (once we’ve sorted out how to measure IQ, which seems to me to be mostly about the ability to think about abstract concepts, in populations that may not have a lot of schooling in that regard), then I would be more sympathetic to this kind of discussion. But Clark et al. based their analyses pretty much on the data of Richard Lynn, whose book was published by a white supremacist publisher. According to Lynn’s numbers, the median citizen of many African countries has an IQ that places them in the “intellectual disability” category. Lynn puts the mean national IQ of Nepal at 43, meaning that about 90% of Nepalese people would be intellectually disabled, and some non-trivial percentage at the low end would be less intelligent than dogs. When you are presented with data equivalent to a claim that the mean BMI in a country is 12 or the mean height of adult men is 3.7 metres, you stop copying and pasting the numbers and you think a bit.

Now, did the authors fall, or were they pushed? I think it’s not inconceivable that they were offered the chance to retract the article themselves, to save face for all concerned (I know this has happened in other cases at the same journal). They are now, of course, being called “cucks” by the alt-Right as well as racists by almost everyone else, but them’s the breaks. Maybe in a year or two they will be able to reveal that they had their fingers crossed behind their backs when they wrote the retraction notice. Certainly Bo Winegard’s recent social media posts suggest that he may not be fully on board with the wording; I don’t think I’ve seen anything from Cory Clark after the official statement.


I find research on race differences in IQ to be extremely uninteresting and pointless for many reasons, but the primary one is that I severely dislike the idea of studying IQ differences between categories of healthy people, because it will inevitably lead to one or more categories of people drawing the shortest straw. Be it men, be it Australians, be it people with names starting with R or J. Or worse: people with a long history of oppression. Thus, I don’t like that type of research, and I would not engage in it myself: not as a researcher, not as a research participant, and not as a reader of the research.

However, once a paper actually has been published, I do argue that a retroactive disclosure statement (RDS) is a better solution than retraction in almost all cases. Even papers based on completely fabricated data can actually be useful (thank you @dbernt for once pointing that out to me). So I nowadays think that Stapel’s papers shouldn’t have been retracted either, but instead supplemented with RDSs. It’s obviously crucial that every reader understands that the papers are based on fake data, but the hypotheses, arguments, and methods can all still be of great value for generating new ideas and new research.

Retracted papers can of course still be read, but an RDS would be a crucial addition for other researchers to know exactly what went wrong, what part of the data or methods they shouldn’t trust or what they should do differently in their own work. You don’t get that information from a retracted paper even if you read the retraction notice. For example, the Clark et al retraction notice is not detailed enough to guide future research. Your post above, @sTeamTraen, is basically more informative than the retraction notice.

A very much related issue here is that it’s way too easy, as an individual researcher, not to know that a paper I read or downloaded has since been retracted or supplemented with an RDS. We need software to deal with that. I want a program on my computer that continuously scans my folder of scientific articles and notifies me of any relevant retractions and RDSs. I believe the developers behind http://conscience.network/ were aiming to include something like that.
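For the curious, here is a rough idea of what such a tool could build on. Crossref records retraction (and correction) notices as “updates” to the original DOI, so a checker could poll its public REST API. This is only a minimal sketch in Python, not a finished tool: it assumes the Crossref `updates` filter and `update-to` response field behave as documented, and it leaves out the hard part of actually extracting DOIs from the PDFs in your folder.

```python
import json
import urllib.parse
import urllib.request

CROSSREF_API = "https://api.crossref.org/works"


def notice_types(crossref_message):
    """Given the parsed 'message' part of a Crossref response listing
    editorial updates to a paper, return the set of update types found
    (e.g. 'retraction', 'correction', 'expression_of_concern')."""
    types = set()
    for item in crossref_message.get("items", []):
        for update in item.get("update-to", []):
            types.add(update.get("type", "unknown"))
    return types


def check_doi(doi):
    """Ask Crossref for works that are editorial updates to `doi`."""
    url = CROSSREF_API + "?filter=updates:" + urllib.parse.quote(doi)
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    return notice_types(data.get("message", {}))


def scan(dois):
    """Report retraction (or other update) notices for a list of DOIs,
    e.g. DOIs pulled from the metadata of a folder of saved articles."""
    for doi in dois:
        types = check_doi(doi)
        if "retraction" in types:
            print("RETRACTED:", doi)
        elif types:
            print("Updated (%s):" % ", ".join(sorted(types)), doi)
```

One would then call something like `scan(["10.1234/example-doi"])` (a made-up DOI for illustration) on whatever DOI list a reference manager can export. RDSs, of course, have no such registry yet, which is part of the problem.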


I agree with rds. Let everything be on record. Open means open as much as possible, and rds is a perfect fit for these cases. :slight_smile:


A very much related issue here is that it’s way too easy as an individual researcher to not know that a paper I read/downloaded now has been retracted or supplemented with an RDS. We need software to deal with that.

Zotero now flags retracted publications in your library :smiley:


Unfortunately, while we might not find these areas interesting, other researchers clearly do, and have a long history of studying them (and of eventually causing controversy by doing so). It seems academia does need to develop better norms for how such research should be conducted, so that if people pursue these research questions using poor methods and cherry-picked data to push an ideological agenda, the work can be clearly dismissed as such before it causes controversy and brings academic fields into disrepute. Of course, if people do conduct good studies and still find results that support an agenda, then society also has to be prepared to deal with those findings.

With regards to being alerted about retractions and RDS, this reminded me of a Retraction Watch post I saw a while ago:

I don’t think that the retractions were necessary, i.e. we were not obliged to do this. We could have just published our paper in Biology Open explaining the artefact and assumed that people who cared would notice. We decided to retract the two papers because we felt it was the right thing to do. Both of the retracted papers have been reasonably well cited, and we were worried that some people would not notice our paper describing the artefact and would continue to believe (and cite) these incorrect results. We therefore felt that the only way to ensure that this information came to everyone’s attention was by somehow linking the Biology Open paper to the original incorrect publications, and retractions were the only mechanism available to do this.

In that case, the authors felt that retraction was the only mechanism available to alert the field to the artefact affecting their original papers. However, the retracted article still appears on Google Scholar, and the journal site includes only a small note about the retraction. So even this isn’t a very clear indication of the retraction, and I’m not sure how well the authors’ goals were achieved (their article still seems to be frequently cited). Ideally, retractions would be prominently flagged in Scholar (and other databases), and one could even imagine an alert tool that emailed the corresponding authors of articles that had already cited the retracted paper (or who continued to cite it). Of course, it would be good to do the same for RDSs, although I fear that is an even more distant goal :confused: