The gatekeepers of the top scientific journals are people who themselves failed to publish in those journals when they were in academia. Had they been able to do so, they would today be authors, not professional editors. Could it be that the best science produced in the world today is being judged by the worst scientists? If true, that would be very unsettling.
It is quite doubtful that anyone starts graduate school or postdoctoral training with the intention of becoming a journal editor. The vast majority of graduates who begin postdoctoral studies do so intending to become principal investigators. Something happens along the way. Disenchantment with an active career in science? Too few positions available? Harsh competition? Family choices? Perhaps all of the above.
The other day, I ran into O.A., one of my former students who is now a research group leader. O.A. is not the type to lack self-confidence, and although he has a bit of a lazy attitude, he has some good ideas and a good feel for where the money is. I asked him how his research was going. He responded with a tepid smile, as if to indicate that I had asked the right question: “Very good. Next week I have a paper coming out in Nature, although I am only second-to-last author on that one. I published a paper in EMBO Journal just a few weeks ago. And we have also made some very interesting observations which will likely lead to a paper in a high-impact journal!”
I have been around long enough to remember the time when there were no impact factors. (Don’t know what an impact factor is? Read HERE). We all knew that, say, Nature was more prestigious (or sexy, hot, trendy, impactful, whatever you want…) than, say, JBC. And that JBC was a better journal than many (many!) other (i.e. lower-ranked) journals. We did not need any impact factors to realise that. And of course this “intuitive” information was used to evaluate job candidates and assess tenure. A paper in Nature was very important; we all knew that, and did not need any impact factors. The problem now is that impact factors put a hard number on what earlier was an intuitive, soft process. So now we know that not only is Nature “better” than JBC, it is actually 10.12 times “better”. And PNAS is 2.23 times “better”. That is what has generated so many problems and distortions. The temptation to use those numbers is just too high, irresistible. For the journals, for the papers in them, and for individual scientists. And the numbers change every year. When applied to individual papers this gets totally crazy. Imagine: the “value” of a given paper can be higher (or lower) this year than, say, 3 years ago when it was published. The same paper, the same data. And let’s not get started with what the impact factor has done to innovation and creativity. (For a good view on this, read Sydney Brenner’s interview HERE).
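For readers wondering where a number like 10.12 comes from, the basic recipe is simple (the official definition has extra fine print about what counts as a “citable item”). Writing C_Y(X) for the citations received in year Y by items a journal published in year X, and N_X for the number of citable items it published in year X, the impact factor for year Y is

\[
\mathrm{IF}_Y \;=\; \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}},
\qquad
\frac{\mathrm{IF}_{\text{Nature}}}{\mathrm{IF}_{\text{JBC}}} \;\approx\; \frac{42.0}{4.15} \;\approx\; 10.12,
\]

where 42.0 and 4.15 are purely illustrative values chosen only to reproduce the ratio quoted above, not the actual factors for any particular year. Note that the quotient describes a journal over a two-year citation window; it says nothing about any individual paper published in it.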