Retractions and High-Profile Journals

Another headline-grabbing study in a major journal has fallen. At the end of last year, a paper in Science reported that people could change their minds about same-sex marriage after talking to a gay canvasser for only about 20 minutes. The New York Times was one of many news organizations to pick up the story. However, thanks to the fine work of David Broockman, Joshua Kalla and Peter Aronow, a number of “statistical irregularities” in the study have come to light, along with problems with the survey incentives and sponsorship, as explained in the retraction posted on the Science website. The prolific website Retraction Watch was the first to report the study’s problems publicly.

The founders of Retraction Watch, Adam Marcus and Ivan Oransky, wrote a perceptive Op-Ed in the New York Times reacting to the same-sex marriage study’s problems. Early in the article they make an excellent point:

“Retractions can be good things, since even scientists often fail to acknowledge their mistakes, preferring instead to allow erroneous findings simply to wither away in the back alleys of unreproducible literature. But they don’t surprise those of us who are familiar with how science works; we’re surprised only that retractions aren’t even more frequent.”

I think retractions would be more common if both scientists and journals were less embarrassed about finding and acknowledging errors, but that sort of reaction is understandably very difficult to overcome.

Marcus and Oransky go on to note that journals with high impact factors – a measure of how often the average article in a journal is cited in a particular year; a rough formula is sketched after the quote below – retract papers more often than journals with low impact factors. Commenting on this correlation, they say:

“It’s not clear why. It could be that these prominent periodicals have more, and more careful, readers, who notice mistakes. But there’s another explanation: Scientists view high-profile journals as the pinnacle of success — and they’ll cut corners, or worse, for a shot at glory.”
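
As an aside, the informal definition of the impact factor given above can be made concrete. Here is a sketch of the standard two-year formula for a journal in year y (the notation is mine, but the definition follows the one used by the Journal Citation Reports):

```latex
\mathrm{IF}_{y} = \frac{C_{y}}{N_{y-1} + N_{y-2}}
```

where C_y is the number of citations received during year y by items the journal published in the previous two years, and N_{y-1} and N_{y-2} are the counts of citable items published in those years. An impact factor of 30 in 2015, say, would mean the journal’s average 2013–2014 paper was cited about 30 times during 2015.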

Both of the explanations quoted above sound plausible, but it’s also important to note the severe screening process applied by journals like Nature and Science. According to a talk given by Leslie Sage, astronomy editor at Nature, only about 7% of submissions to Nature are published. Sage says a Nature paper should:

“report a fundamental new physical insight, or announce a startling, unexpected or difficult-to-understand discovery, or have striking conceptual novelty with specific predictions” and “be very important to your field”.

The general information for authors of Science papers states:

“Science seeks to publish those papers that are most influential in their fields or across fields and that will significantly advance scientific understanding. Selected papers should present novel and broadly important data, syntheses, or concepts. They should merit the recognition by the scientific community and general public provided by publication in Science, beyond that provided by specialty journals.”

Given the extraordinarily high standards that both Nature and Science set for their papers, it’s not surprising that their retraction rates are higher than average. Consider, for example, the “startling” or “unexpected” discovery that Nature seeks. Scientists can legitimately make such a discovery by, for example, developing unprecedented analysis tools or mining archival data in a novel way. But they may also appear to break new ground simply because of errors in their analysis or interpretation. As with any task that has a high degree of difficulty, a larger number of mistakes is inevitable. Unfortunately, because the prestige of these journals is so high, more cheating is also to be expected.

Where does peer review fit into this story? Marcus and Oransky go on to explain:

“And while those top journals like to say that their peer reviewers are the most authoritative experts around, they seem to keep missing critical flaws that readers pick up days or even hours after publication — perhaps because journals rush peer reviewers so that authors will want to publish their supposedly groundbreaking work with them.”

Rushed peer review may be one factor, but I think it’s also important to acknowledge why post-publication peer review is so powerful. Nature and Science papers usually have only 2 or 3 peer reviewers, whereas dozens or even hundreds of scientists with relevant expertise might scrutinize a paper after publication. So, from a purely statistical viewpoint, there’s a good chance that post-publication review will catch problems that traditional peer review missed, no matter how good the initial reviewers are. Nobody is perfect.
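
To make that statistical point concrete, here is a back-of-the-envelope sketch (my own toy model, not anything from the Op-Ed or the journals): if each reviewer independently has probability p of spotting a given subtle flaw, the chance that at least one of n reviewers catches it is 1 − (1 − p)^n, which grows rapidly with n:

```python
# Toy model: each reviewer independently spots a given flaw with
# probability p. Independence is a simplification, but it shows how
# the odds of catching the flaw scale with the number of reviewers.

def p_flaw_caught(n_reviewers: int, p: float) -> float:
    """Probability that at least one of n reviewers catches the flaw."""
    return 1.0 - (1.0 - p) ** n_reviewers

# Assume a modest 10% chance per reviewer (purely illustrative):
print(f"{p_flaw_caught(3, 0.10):.2f}")    # 0.27    -- 3 pre-publication reviewers
print(f"{p_flaw_caught(100, 0.10):.5f}")  # 0.99997 -- 100 post-publication readers
```

Even under this crude independence assumption, a flaw that each individual reader has only a 10% chance of noticing becomes almost certain to be caught once a hundred knowledgeable readers look closely.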

Several lessons can be taken from this discussion. First, all of the parties involved in research and its dissemination – scientists, peer reviewers, publishers, press officers and journalists – should be more careful and more skeptical. Second, although traditional peer review still has value, it’s important to stop deifying the peer review of journal papers, as Jonathan Eisen has argued. Third, it’s important to pay more attention to post-publication peer review.

Some people may claim that the rise in the number of scientific retractions represents a worrying trend for scientific research. I would argue instead that it represents a triumph of the scientific method. In the case of the same-sex marriage study, careful statistical analysis helped uncover its problems, as explained by Broockman, Kalla and Aronow and by Jesse Singal in his terrific article in New York Magazine. Debunking like this also serves as a warning to others who are tempted to commit fraud.

Science is an incredibly successful endeavor, but it can also ruthlessly expose our human shortcomings. Retractions can reveal both of these sides to us.  
