The data from retractions could be used to clean up science

March 9, 2025

Chinese hospitals are retraction hotspots: what do the data on universities and colleges tell us about the past half-decade of research activity?

Many of the retraction hotspots are Chinese hospitals, but the data also flag universities and colleges in China, Saudi Arabia, India, Pakistan and Ethiopia. Most of the retractions in the data are linked to evidence of wrongdoing.

One clear change on which all the analyses agree is that, in the past half-decade, retractions from institutions in Saudi Arabia and India have come to the fore, largely because of paper-mill activity in Hindawi journals (see ‘Institutions with most retractions’).

The universities that top the lists are small. In the case of Ghazi University, about half of its total papers were written by just four authors.

Some investigators think that authors in African countries may have put their names on paper-mill products produced elsewhere, to take advantage of the fee waivers that give scholars from low- or lower-middle-income countries free open-access publishing. A spokesperson for Wiley (the owner of Hindawi) did not give details, but commented: “We are aware of authorship for sale and waiver manipulation schemes deployed by paper mills,” adding that the publisher has strengthened its internal checks.

Over the past two years, technology firms have launched research-integrity tools to help publishers stem a surge in fake and significantly flawed research. Among them are software products from Scitility in Nevada, Digital Science in the UK and Research Signals in London. (Digital Science is owned by Holtzbrinck, the majority shareholder in Nature’s publisher, Springer Nature; Nature’s news and features team is editorially independent of its publisher.) Nature obtained data from these three firms, which also examined journals and countries with histories of retractions.

Depending on the data set, the retraction rates in Ethiopia and Saudi Arabia are surpassed by those in Iraq and Pakistan. Russia has a lower rate of retractions than Iraq, but because many Russian papers don’t have DOIs (digital object identifiers), the firms left it out of their analyses. Meanwhile, countries such as the United States and the United Kingdom have rates of around 0.04%, much lower than the global average of 0.1%, and many countries have even lower rates (see ‘Retraction rates by country’).
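As a rough illustration of how such country-level rates can be computed (this is not the firms’ methodology; the helper function is invented for the example, and the OpenAlex filter names are assumptions to check against the current API documentation), retracted and total works per country can be counted through the public OpenAlex API:

import requests

# Rough sketch: country-level retraction rate from the public OpenAlex API.
# The filter names (is_retracted, institutions.country_code) follow OpenAlex's
# documented works filters; verify them before relying on the results.
BASE = "https://api.openalex.org/works"

def count_works(filter_string):
    """Return the number of works matching an OpenAlex filter string."""
    resp = requests.get(BASE, params={"filter": filter_string, "per-page": 1})
    resp.raise_for_status()
    return resp.json()["meta"]["count"]

country = "US"  # ISO 3166-1 two-letter country code
total = count_works(f"institutions.country_code:{country}")
retracted = count_works(f"institutions.country_code:{country},is_retracted:true")
print(f"{country}: {retracted} of {total} works retracted ({retracted / total:.3%})")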

Institutional analyses can be confusing, but firms are working to make them public: Nature’s look at the Retraction Watch, OpenAlex and Dimensions data

Analysing institutions is an even thornier task, because there are both data errors and differing approaches in the underlying databases, Nature’s analysis found. To map institutional affiliations, Dimensions uses the proprietary Global Research Identifier Database (GRID), whereas OpenAlex uses the public Research Organization Registry (ROR). Both have quirks: smaller institutions, in particular, can be assigned affiliations in different ways, and the database curators have made different choices about how to assign them. As a result, the firms’ analyses differ.
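To give a sense of what affiliation mapping involves, here is a minimal sketch, not how Dimensions or OpenAlex actually do it: a raw affiliation string is matched against ROR’s public affiliation-matching endpoint. The helper name and the example string are invented, and the parameter and response fields follow ROR’s documented API and should be confirmed.

import requests

# Minimal sketch: map a raw affiliation string to a ROR record using the
# public ROR matching endpoint. The 'affiliation' parameter and the
# 'chosen'/'organization' response fields follow ROR's documented API;
# treat them as assumptions to confirm against current documentation.
def match_affiliation(raw_affiliation):
    resp = requests.get(
        "https://api.ror.org/organizations",
        params={"affiliation": raw_affiliation},
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    # ROR flags at most one match as 'chosen' when it is confident.
    confident = [item for item in items if item.get("chosen")]
    return confident[0]["organization"] if confident else None

# Illustrative affiliation string only; any real analysis would run this over
# millions of author affiliations.
org = match_affiliation("Department of Zoology, Ghazi University")
if org:
    print(org["id"], org["name"])
else:
    print("No confident match; a human curator would have to decide.")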

Researchers at public universities in India face less pressure to publish than do those at private universities and colleges; private institutions pay bonuses to students and researchers for the papers they publish.

The three firms’ counts of retracted articles over the past decade are some 6–15% greater than those in the Retraction Watch data set they build on, in part because they draw on Crossref and other online sources. In some cases, those sources record articles incorrectly, so the data may not be completely accurate, warns a researcher at the University of Illinois.
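For readers who want to see what those Crossref records look like, the following is a minimal sketch that lists a few retraction notices via the public Crossref REST API. The ‘update-type’ filter and ‘update-to’ field follow Crossref’s documented works metadata, and the contact address is a placeholder; confirm both against the current API documentation.

import requests

# Minimal sketch: fetch a handful of retraction notices from the public
# Crossref REST API. The 'update-type:retraction' filter and the 'update-to'
# field are taken from Crossref's documented works metadata.
resp = requests.get(
    "https://api.crossref.org/works",
    params={"filter": "update-type:retraction", "rows": 5},
    headers={"User-Agent": "retraction-data-demo (mailto:you@example.org)"},
)
resp.raise_for_status()
for notice in resp.json()["message"]["items"]:
    # Each item is a retraction notice; 'update-to' lists the DOI(s) of the
    # article(s) it retracts.
    retracted_dois = [u.get("DOI") for u in notice.get("update-to", [])]
    print(notice.get("DOI"), "->", retracted_dois)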

Retraction Watch is to be commended for pushing hard to improve how retractions are recorded, and for creating a relatively clean database of retractions on which further tools and analyses, including those used in Nature’s analysis, can be built. But Nature did find some incorrect institutional assignments in the data provided by the firms marketing the analysis tools, as is almost inevitable in such large data sets. (The firms say that they are working to correct the errors.)

One firm, Scitility, says it will make its institution-level figures public later this year. “We think scientific publishing will be helped if we create transparency around this,” says the firm’s co-founder Jan-Erik de Boer, a former chief information officer at Springer Nature who is based in Roosendaal, the Netherlands.

It is tempting to ask whether differences in incentives are related to different outcomes for researchers, according to Ivan Oransky, co-founder of Retraction Watch.

Some hospital doctors had purchased fake manuscripts from paper mills, businesses that produce fraudulent reports. These doctors were under pressure to publish papers in order to get jobs or promotions, according to Elisabeth Bik, a research-integrity sleuth based in California. Sleuths such as Bik soon began spotting signs of the problem, identifying duplicated images in large numbers of papers. They publicized the issue, and a wave of retractions followed.

In general, there is little consistency in how publishers record and communicate retractions — although last year, publishers did agree, under the auspices of the US National Information Standards Organization, on a unified technical standard for doing this, which might help to improve matters.

Online records of changes to published work are not always tidy. A heavily cited paper in The Lancet is currently categorized as a retracted article by Crossref, an organization that records metadata about published articles; the paper had been printed in error by its publisher, Elsevier.

The new data are valuable and should not be overlooked; they are a reminder that quality, as well as quantity, matters in science.
