Scientific retractions are on the rise. In 2001 there were 40 incidents in which published results of scientific research were retracted, but in less than a decade that number had ballooned to 400. And yes, the publication rate had also increased during that time, but by only 44 percent, not nearly enough to explain away a tenfold jump in retractions. If the rate of retractions per published paper had held steady, that growth would account for roughly 58 retractions, not 400.
So why is this happening?
“Some people come out and say, ‘Well, clearly fraud is on the rise.’ And theoretically, that’s true…. A lot of these retractions are due to fraud,” says medical journalist Ivan Oransky, whose talk last night was co-sponsored by the Berkeley School of Public Health, the Graduate School of Journalism and Kaiser Permanente. Oransky runs Retraction Watch, a nonprofit that he cofounded with medical news editor Adam Marcus, which blows the whistle on scientific fraud and calls attention to retractions.
One study found that two-thirds of retractions in a sampling of biomedical and life-sciences research were due to misconduct—fraud, suspected fraud, plagiarism, or duplicate publication. Only about one in five retractions were attributable to mere error.
The data suggest that retraction rates have been climbing since the 1950s, but Oransky believes much of that rise reflects the fact that research today is under far more scrutiny. In other words, it’s possible that there was just as much need for retractions in the 1950s; we simply weren’t looking hard enough to spot the problems. And it’s important that we do look, because scientific “mistakes” can yield life-threatening results.
Among the most infamous examples is Andrew Wakefield, the anti-vaccine proponent who authored a fraudulent study suggesting that the measles, mumps and rubella (MMR) vaccine can cause autism, a claim that led to plummeting vaccination rates and a resurgence of measles in the United Kingdom and perhaps elsewhere. Another case involves Dong-Pyou Han, a former Iowa State University biochemist who falsified the results of HIV vaccine experiments by spiking rabbit blood samples with human antibodies so that the animals appeared to have developed HIV immunity. His deception could have had devastating public health consequences, and he was ultimately sentenced to nearly five years in prison for research misconduct.
Aside from fraud, other retraction triggers include duplication or “self-plagiarism,” plagiarism, image manipulation, faked data, fake peer reviews, publisher errors, authorship issues, legal reasons, and data that’s not reproducible. And often when publications are forced to say oopsies, the retractions themselves are vague.
For instance, a Retraction Watch post cites a retraction notice in Computational and Mathematical Methods in Medicine that reads: “This article has been retracted upon the authors request as it was found to include unreliable interpretation due to insufficient provision of studying materials.” This explanation is, as Oransky and Marcus note, “completely inscrutable.”
After all, what’s the point of a retraction if no one knows what the mistake was? Without that transparency, it’s hard for scientists to monitor one another’s work.
What’s more, other researchers may continue to cite articles even after they’ve been retracted, with no acknowledgment of the retraction. Oransky cited a 1999 study showing that retracted articles had received more than 2,000 post-retraction citations, fewer than 8 percent of which acknowledged the retraction. Preliminary analysis of more recent data, he says, shows this is still a problem.
Despite everything, Oransky describes himself as “somehow skeptical, optimistic, and realistic”—not pessimistic—about the current state of science. The fact that it’s under more scrutiny than ever, he says, can be a positive.
In addition to sites like Retraction Watch, PubPeer lets scientists keep one another in check by commenting on and discussing published papers. Oransky noted that universities have acknowledged PubPeer’s role in aiding scientific investigations, and he concluded that “science is cleaning up some of the stables” on its own.