The other day, I ran into O.A., one of my former students who is now a research group leader. O.A. is not the type that lacks self-confidence, and although he has a bit of a lazy attitude, he has some good ideas and a good feel for where the money is. I asked him how his research was going. He responded with a tepid smile, as if to indicate that I had asked the right question: “Very good. Next week I have a paper coming out in Nature, although I am only second-to-last author on that one. I published a paper in EMBO Journal just a few weeks ago. And we have also made some very interesting observations which will likely lead to a paper in a high-impact journal!”
I have been around long enough to remember the time when there were no impact factors. (Don’t know what an impact factor is? Read HERE). We all knew that, say, Nature, was more prestigious (or sexy, hot, trendy, impactful, whatever you want…) than, say, JBC. And that JBC was a better journal than many (actually many!) other (i.e. lower) journals. We did not need any impact factors to realise that. And of course this “intuitive” information was used to evaluate job candidates and assess tenure. A paper in Nature was very important, we all knew that, and did not need any impact factors to tell us so. The problem now is that impact factors put a hard number on what earlier was an intuitive, soft process. So, now we know that not only is Nature “better” than JBC, it is actually 10.12 times “better”. And PNAS is 2.23 times “better”. That is what has generated so many problems and distortions. The temptation to use those numbers is just too high, irresistible. For the journals, for the papers in them, and for individual scientists. And the numbers change every year. When applied to individual papers this gets totally crazy. Imagine. The “value” of a given paper can be higher (or lower) this year than, say, 3 years ago when it was published. The same paper, the same data. And let’s not even get started on what the impact factor has done to innovation and creativity. (For a good view on this, read Sydney Brenner’s interview HERE).
Open access journals charge fees to their authors for publication of accepted articles. Some of those fees can be quite significant. Cell Reports, a new journal from Cell Press, charges $5,000 per article, the highest among open access research periodicals. There is currently a debate as to whether the journals that charge the most are the most influential. A recent survey appears to indicate that price doesn’t always buy prestige in open access. My friend and colleague M.F. has recently made a prescient comment in this context: “…but apart from the commercial desire to maximize profits, the pricing is probably designed as part of the brand signal, to make the point that this should be in the very top tier of journals. Similar to launching a new “premium” wine to the market, if price on release is low, the consumers will never perceive it as a premium wine… . Time will tell if this self-fulfilling prophecy is indeed true, or if journals like Open Biology or eLife can completely break that model.”
Thank you for sending us your paper “Downregulation of HlpxE-mediated transcription doubles median life-span expectancy in humans”, but I am afraid we cannot offer to publish it in The Current Biologist.
We appreciate the interest in the issue you are addressing, and your results sound potentially significant for the field, but our feeling is that at this stage your paper would be better suited to a somewhat more specialised journal.
I am sorry that we cannot give you a more positive response, but thank you for your interest in The Current Biologist.
Many of us —professional scientists writing research articles— have had to confront this type of letter from journal editors. We have grown accustomed to them: a standard cut-and-paste piece of text, used reflexively by editors without much thought or consideration. We file them promptly, and move on. After all, there are plenty of journals around, both general and specialized. No big deal, right?
After subscribing to the Nature, Science and Cell podcasts for over two years, my preference clearly falls with the first. The Nature podcast is snappy, lively, fun to listen to, and has great interviews. The journalists have human voices, sound like real people, and manage to convey the excitement of science with a touch of humor.
The Science podcast has several problems, the biggest being podcaster Robert Frederick. I cannot imagine a more unnatural, robotic voice on Earth. Does he speak like that to his friends? Even the text-to-speech voice on my Mac sounds more human than this guy. I also find the usual segment on science policy terribly uninteresting. As in the World Series, it is only concerned with US policy, of course. If I am listening in bed at night, I am surely asleep by that point. Nature and Science share another feature that I think takes up unnecessary space: the news segment at the end in which one journalist interviews another. This practice has become very popular in TV talk shows and news programs, and sometimes I can see the point of putting questions to a journalist deeply specialized in a particular topic. But those are not the guys at Nature or Science. I find it totally uninteresting; I would much rather have the actual scientists telling the story.