A new crisis of reproducibility in science: the more dubious the results, the more often they are cited



Findings from studies that cannot be confirmed in repeated experiments accumulate, on average, 153 more citations than reproducible ones, largely because the research is seen as more interesting.



Non-reproducible articles in leading psychology, economics, and general-science journals are often among the most cited in academic research, according to a new study from the Rady School of Management at the University of California San Diego. When other researchers fail to reproduce a result, it usually means the result is less likely to be true.



Published in Science Advances, the article explores the current “reproducibility crisis”: many findings in the social sciences and medicine are not confirmed when the experiments are repeated by other researchers.



The article shows that findings from studies that cannot be confirmed by repeated experiments have a greater long-term impact. As a rule, questionable studies keep being cited as if their results were true, long after it has become known that the publication's results cannot be reproduced.



“We also know that experts are good at predicting which articles will turn out to be reproducible,” write economics and strategy professor Marta Serra-Garcia and professor of behavioral economics Uri Gneezy of the Rady School. “Knowing about such predictions, we ask: 'Why are these non-reproducible articles accepted for publication at all?'”



A possible answer is that journal peer reviewers face a trade-off: when results are "more interesting", they are less strict about their reproducibility.



The link between how interesting a finding is and how likely the research is to fail replication may also explain why such papers are cited much more often: the authors found that non-reproducible articles are cited, on average, 153 more times than successfully reproduced ones.



“Interesting or engaging articles are also talked about more in the media and shared on platforms like Twitter, generating a lot of attention, but that doesn't make them true,” Gneezy says.



Serra-Garcia and Gneezy analyzed data from three well-known projects that systematically test the reproducibility of findings from top psychology and economics journals and from the general-science journals Nature and Science. In psychology, only 39% of the 100 experiments were successfully reproduced. In economics, 61% of 18 studies were replicated, as were 62% of 21 studies published in Nature/Science.



With the reproducibility data from these three projects in hand, the authors used Google Scholar to test whether non-reproducible studies are cited more often, comparing citation counts both before and after the replication projects published their results. The largest gap was found for articles published in Nature/Science: articles with non-reproducible results received about 300 more citations than those with reproducible results.



The authors then controlled for various characteristics of the studies, such as the number of authors, the proportion of male authors, experimental details (location, language, and whether the experiment was run online), and the field in which the articles were published - the relationship between reproducibility and citations remained the same.



They also showed how this citation gap grows over time. Yearly citation counts reveal a clear gap between reproducible and non-reproducible articles: on average, articles that cannot be reproduced are cited 16 more times per year. The gap persisted even after the replication projects published their results.



“Remarkably, only 12% of the citations made after the replication results were published acknowledge the failed replication attempt,” the authors write.



The influence of an inaccurate article published in a prestigious journal can last for decades. For example, a study published by Andrew Wakefield in The Lancet in 1998, which suggested a possible link between vaccination and autism, led tens of thousands of parents around the world to reject the measles, mumps, and rubella vaccine. The Lancet retracted the article 12 years later, but claims of a vaccine-autism link continue to circulate.



The authors add that journals and scientists may feel pressure to publish interesting findings. For example, most academic institutions use citation counts as an important metric when deciding on promotions.



This pressure may also be one source of the “reproducibility crisis” identified in the early 2010s.



“We hope our research motivates readers to be careful when they read something interesting and engaging,” says Serra-Garcia. “When researchers cite interesting or frequently cited work, we would like them to check whether replication data exist and to think about what conclusions those data support.”



Gneezy adds, "We care about quality research, and we want it to be true."





