The Impact Factor debate goes critical….

Nobelist Randy Schekman will boycott Cell, Nature and Science; story here. Money quote from the article in The Guardian:

Schekman said pressure to publish in “luxury” journals encouraged researchers to cut corners and pursue trendy fields of science instead of doing more important work. The problem was exacerbated, he said, by editors who were not active scientists but professionals who favoured studies that were likely to make a splash.

Are we at a tipping point? Potentially so, particularly with regard to the use of Impact Factor as a metric for assessing the quality of scientific publications. Further quoting:

A journal’s impact factor is a measure of how often its papers are cited, and is used as a proxy for quality. But Schekman said it was a “toxic influence” on science that “introduced a distortion”. He writes: “A paper can become highly cited because it is good science – or because it is eye-catching, provocative, or wrong.”

Impact factor debate….

Michael White’s thoughtful piece on the use of Impact Factor in assessing scientists for promotion and tenure is here. The piece is in Pacific Standard, a magazine I’m increasingly impressed by.

For readers who are curious, I’m a supporter of DORA.

Publish in “high retraction rate” journals….

From Björn Brembs via the London School of Economics site, here. It turns out that if you’re encouraged to publish in high impact factor journals, you’re also being encouraged to publish in high retraction rate journals.

Money quote:

This already looks like a much stronger correlation than the one between IF and citations. How do the critical values measure up? The regression is highly significant at p<0.000003, with a coefficient of determination at a whopping 0.77. Thus, at least with the current data, IF indeed seems to be a more reliable predictor of retractions than of actual citations. How can this be, given that the IF is supposed to be a measure of citation rate for each journal? There are many reasons why this argument falls flat, but here are the three most egregious ones:

  • The IF is negotiable and doesn’t reflect actual citation counts (source)
  • The IF cannot be reproduced, even if it reflected actual citations (source)
  • The IF is not statistically sound, even if it were reproducible and reflected actual citations (source)

In other words, all the organizations that require scientists to publish in ‘high-impact journals’ at the same time require them to publish in ‘high-retraction journals’. I wonder if requiring publication in high-retraction journals can be good for science?
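For readers curious what the quoted statistics mean in practice, here is a minimal sketch of the kind of analysis Brembs describes: a linear regression of retraction rate against journal Impact Factor, reading off the coefficient of determination (R²) and the p-value. The data points below are fabricated for illustration only; they are not Brembs’s data, and the resulting numbers will not match his.

```python
# Illustrative only: fitting a line to made-up (impact factor, retraction
# rate) pairs and extracting R^2 and the significance of the fit.
from scipy.stats import linregress

# Hypothetical journals: impact factor vs. retractions per 10,000 papers.
impact_factor = [2.1, 3.5, 5.0, 9.8, 14.1, 28.0, 31.4, 38.6]
retraction_rate = [0.2, 0.3, 0.5, 1.1, 1.6, 3.0, 3.4, 4.1]  # fabricated

fit = linregress(impact_factor, retraction_rate)
r_squared = fit.rvalue ** 2  # coefficient of determination

print(f"slope = {fit.slope:.3f}")
print(f"R^2   = {r_squared:.2f}")
print(f"p     = {fit.pvalue:.2g}")
```

A high R² on such a fit means Impact Factor explains most of the variance in retraction rate across the sampled journals, which is the sense in which Brembs calls IF a better predictor of retractions than of citations.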