A toast to error detectors

I find this article quite interesting: it portrays the tendency toward in-group blaming of researchers dedicated to exposing errors in scientific works (which represents, in my opinion, one of the finest ways to be a researcher and one of the best features of scientific knowledge production), and also the general call for “kindness” between researchers (i.e. not criticizing each other’s work). This last issue is very much present in my research field.


Certainly, kindness and respect should be more present in our scientific community, especially among those trying to expose questionable research practices and promote open science. But I would not mistake kindness for compliance!


When I’ve corrected errors in the engineering literature, I’ve consistently been ignored. To me, hostility would be preferable to being ignored: at least hostile critics are engaging with me.

One particularly disappointing example I summarized on PubPeer:


The issues with the part of the article that I criticize are so basic that I’m disappointed it passed review. You do not need to understand much about this field to see that something is off.

I published a conference paper on this error because the original paper was highly cited. (I’m redoing a part of my paper to improve it before submitting to a journal.) I contacted the first author twice before I published my conference paper and never received a reply. Later, I contacted someone else who recently published something based on the flawed work, and they later published a review article that favorably cited the flawed work with no mention of my criticism.

Perhaps the culture in engineering is different, and ignoring criticism is more common than attacking the critics.

Edit: Just randomly browsing the internet leads to this link which suggests ignoring critics is common in psychology too: https://twitter.com/hardsci/status/1072228408489168896


I think the “ignoring” aspect is very present in every field. I would add that even those articles and reports that do reach high visibility may result in “much ado about nothing”, as people carry on as if nothing happened. This is, in my opinion, due to the high-pressure environment, limited time for critical thinking, and general sense of resignation (or inability to act) that characterizes not only the scientific community but our society in general.

I also think, though, that the ignoring aspect does not stem from malevolence or “cowardice” alone. Sometimes an article or communication is ignored because it is quickly lost in the ever-growing mass of scientific articles, blogs, news, and other forms of information. This to me raises an important issue that is not really addressed in science in the digital era: How do we filter, group, and monitor all this information? How do we cluster it, reduce it, and synthesize it? How do we make sure that what should be seen is actually seen and seriously “digested”?
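The grouping part of that question is at least partly tractable with standard text-mining techniques. As a toy sketch (all titles invented for illustration; pure standard-library Python, not a proposal for a real system), one could group article titles by cosine similarity over TF-IDF vectors:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors (as sparse dicts) for token lists."""
    n = len(docs)
    df = Counter()                     # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical titles: the first two overlap in topic, the third does not.
titles = [
    "error detection in turbulence models",
    "turbulence models for jet flows",
    "positivity ratio in psychology",
]
docs = [t.split() for t in titles]
vecs = tfidf_vectors(docs)

# The two turbulence titles should score as more similar to each
# other than either is to the psychology title.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))
```

Real tools (e.g. topic modelling or citation-graph clustering) are far more sophisticated, but the principle of automatically grouping related items so that nothing relevant gets lost is the same.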


PubPeer works quite nicely in my view, though roughly 0% of engineering researchers seem aware of it. I entered email addresses for two of the authors of the paper I criticized, so they should have received notice as well.

I don’t know what other people’s personal policies/behaviors are. But if I get an email that I know I won’t have time to answer right away, I’ll send a reply saying that I’ve received their email and will reply when I get the chance to examine it more closely. And if someone found a major error in my work, unless I had deadlines soon, I’d probably drop most of what I was doing to investigate. No response at all after two emails separated by a long time doesn’t make much sense to me.

In terms of being aware of the literature as a whole, I think (at least in the fields I’m familiar with) this is extremely poorly done in general. While reviews and books aren’t everything, it’s important that they are actually up to date, as many people learn from them. Most reviews appear to mirror previous reviews. It’s not uncommon for review authors to add new things that they are familiar with, but there’s generally a lack of depth. To me, this seems to be a major problem holding back the progress of science. I don’t know how to solve this problem in general, though I try hard to be aware of all literature on certain problems. I’m just one person, so the scope of what I can do is limited. Fortunately, I don’t think a subfield needs too many people trying to be comprehensive to see large benefits. One problem is that people like me rarely seem to be in a position to be invited to write review articles.

If some billionaire wants to accelerate the progress of science, they might do well to fund researchers to specifically do in-depth reviews. I’d jump at such an opportunity.

I recall watching a video where Nick Brown (@sTeamTraen) said that his article criticizing the critical positivity ratio is cited less often than the paper it debunks. That’s amazing to me given the media coverage his article received. I’m not even a psychologist and I heard about it.

There are existing group methods as well. I emailed Nick Brown before, and he recommended that I get on Twitter, as he hears about problematic studies there. But I’m not aware of anyone in my field who posts about problematic studies on Twitter; in my field, Twitter is mostly used for self-promotion. I think psychology is much better organized than engineering in this regard, though it is unlikely to be optimal.

An online journal club platform might be better than Twitter for this.


Definitely what I would do as well (unless I find the concern to not be serious).

I think @antonio.schettino once proposed a journal club here on the forum. I don’t think it was aimed particularly at error detection, but it is perhaps worthwhile to bring the idea up again.


PubPeer is interesting, I hadn’t seen it before - thanks for linking @btrettel. I had a quick look at a few articles on the front page and saw that most of the biology papers listed were for people picking up on figure manipulations, but I assume the type of comments varies a lot by field. I have at times been very frustrated after being misled by numerical or mathematical errors in papers, and will consider posting them on PubPeer in future!

I think one difficulty with posting about errors, and also publishing replications, is that they usually aren’t connected back to the original article by anything more than a link or citation. The notable exception here is eLife, where public comments and annotations can be made directly to papers on their site:

I don’t think comments are included in PDF downloads, but I have seen authors respond to comments on several occasions, and I assume they are more likely to in this context than if feedback comes through a third-party site. People also seem to reply to comments on ResearchGate, although the comments are not particularly visible, so I don’t know how much attention they get.

The extreme case for correcting the scientific record seems to be a retraction. I recall hearing about the authors of a widely cited cell biology paper retracting it after discovering a methodological artefact. They felt that retracting the paper was the only way to alert other people using their method to the problem: https://www.nature.com/articles/nj7492-389a

While retractions should be a last resort for correcting an error, they do seem to work: people quickly stop citing the retracted paper, and even other work in the same field (though that may not always be beneficial):

I’m not sure if corrections, errata, or corrigenda are similarly effective.

It would be nice if there were a way for original articles or their metadata to link forward to comments in places like PubPeer and ResearchGate, something like the “Cited by” information shown on databases and most publishers’ sites. I expect that the original authors might take more responsibility for errors in their work if anybody who found their paper, say, on Google Scholar were also linked to the PubPeer comments!

I think Scite.ai is taking on the related problem of determining the context in which citations are made and linking this data back to the original article on a publisher’s page; maybe something similar could be done to aggregate and overlay comments. Any thoughts @Josh_Nicholson?


I was also unaware of the existence of PubPeer! It looks very interesting, and they are also beta-testing self-edited journals at Peeriodicals. @btrettel, maybe that would be useful for keeping an updated literature review of a specific subfield.

I personally would like to see more and more commitment to this open and broad peer review. It is, in my opinion, the ultimate way to assess the scientific validity and rigour of a work before it gets on the record (i.e. published) and also after its publication, to update the community on a work’s significance and falsifiability.

However, in my opinion, the problem remains the same. The digital era is providing us with tools to drastically improve scientific research at all levels, but only a few of these initiatives, breakthroughs, and tools have a real impact. Is this because there is an overload of information? Is it because, while the digital world progresses rapidly, scientists are still bound to the traditional academic environment and mentality? Is it because, no matter how well the internet connects us, we remain scattered and individualistic? I think addressing these questions is a very big deal and could lead to possible solutions.
