The current system is broken, as argued in an earlier post. Given the arguments laid out there, is there a better way forward? I believe there is, and some ideas on how to go about it are presented below. Much of what is proposed here is the complete opposite of many of the fundamental principles of the current system: forums instead of journals, acceptance instead of rejection, commentary instead of review, disclosure instead of anonymity, community engagement instead of expert wisdom, live papers instead of dead papers.
Like it or not, we live in an on-line society; certainly most research communication and dissemination now takes place on-line. The word “journal”, in all its physicality, feels very much obsolete. For an on-line world, and for other reasons that will become clearer later, the word “forum” seems much more appropriate. “Neuroscience Forum” instead of “Journal of Neuroscience”. “Cancer Research Forum” instead of “Journal of Cancer Research”. How is a Forum different from a Journal? Read more...
I have been honored to serve as Chief Editor of the Cell and Molecular Neuroscience section of a prestigious journal, until I resigned this month. It was time for me to leave the space to someone with a stronger belief in the peer-review system. In my case, that belief has been slowly eroding over the past few years, as I have witnessed a steady decline in scholarship, transparency and basic respect across the publishing enterprise and its actors. Failed scientists become editors of powerful journals, but their shallow competence reduces them to over-qualified secretaries. Researchers with little time to spare agree to review papers, but end up providing senseless, superficial evaluations of dismal scholarship. Read more...
When I was about halfway through my undergraduate studies, I decided that I needed some laboratory practice, so I joined a lab that was studying Drosophila genetics. The professor there, Enzo Muñoz, was an old-school geneticist with rather conservative views. We spoke often, usually about science. I remember one thing he said once: “There are no boring topics in science, only bored scientists.” At the time I thought he was defending himself a little.
Later I understood that he was actually talking about something more profound. Nothing in science is boring. Scientists may get bored with a topic, but that does not make the topic boring; it only means that those scientists have been unable to find a way to crack the problem or make progress on it. Or they simply did not understand its depth. Read more...
Prof. Eve Marder, from Brandeis University, was one of the founding editors of eLife, a scientific journal launched in 2012 and one of the few journals still run by working scientists, as opposed to the so-called “professional” editors who run most commercial journals.
Dr. Marder recently wrote an opinion article for the journal in which she sharply criticises the kinds of words, often derogatory, that reviewers use when judging research papers, grants and appointments.
She writes: “Over the years I have grown to truly abhor some of the words that are overused and abused when we review manuscripts, job candidates, and grant applications. In particular, I now detest five words: incremental, novelty, mechanism, descriptive, and impact. These words are codes behind which we hide, and are frequently used in lieu of actual explanations of what people think about the subject at hand.” Read more...
It can be considerably frustrating to have to summarize many years of work in just 150 words, but that is what scientists often must do when writing the Abstract of a research paper. A well written Abstract is crucially important: it is the first thing (sometimes the only thing!) that readers will read, including the journal Editors who will decide whether to publish it. It can really make or break the success of the article. Yet many young and budding scientists struggle with this section, usually because of an inability to distill the single most important and essential part of the discovery in a clear and simple way. Read more...
Once upon a time, science journals were run by scientific societies and their editors were active scientists. Very few of these remain today. Instead, most journals are now owned by private, usually very large, publishing companies, and their editors are “professional”. That is, their only job is to be journal editors; they are not active scientists. Most of them were active scientists earlier in their careers but left academia to become “professional” editors, usually shortly after their postdoctoral studies. Because of this, most professional editors are much younger (no problem there) and considerably more inexperienced (hmmm… ) than the principal investigators from whom they receive manuscripts for consideration. Typically, these “youngish” editors can get advice and (one would hope) guidance from more senior editors within the same journal or publishing organization, but they are pretty much in charge of the main decisions on the manuscripts assigned to them. Read more...
The other day, I ran into O.A., one of my former students who is now a research group leader. O.A. is not the type to lack self-confidence, and although he has a bit of a lazy attitude, he has some good ideas and a good feel for where the money is. I asked him how his research was going. He responded with a tepid smile, as if to indicate that I had asked the right question: “Very good. Next week I have a paper coming out in Nature, although I am only second-to-last author on that one. I published a paper in EMBO Journal just a few weeks ago. And we have also made some very interesting observations which will likely lead to a paper in a high-impact journal!”
I do not live in a bubble, so, disappointingly, I was not surprised. But it was difficult to keep myself from venting my frustration: “O., you tell me where you are publishing your work, but you don’t tell me what the work was about, what you have discovered! Isn’t that the important thing?” Well, I did not actually ask that last rhetorical question, but I should have.
Here is O.A., one of my former students, one of the promising ones, explaining his research in terms of the journals in which it is getting published, as if that were the only thing that matters in his science. Yes, he may have been trying to make an impression on his former mentor. But… where did the science go? Isn’t that what really counts? The “Journal Syndrome” has advanced to such a point that the title of the journal in which the research is published becomes more important than the research itself, and hence the preferred short-hand description of scientific output.
How did we get to this situation, and can the trend be reversed? Without doubt, it is a direct product of the current addiction to Impact Factors, the mother of most curses in modern science. But while making Impact Factors disappear seems very difficult at this time, avoiding the Journal Syndrome should be relatively simple. When someone asks about your research, pretend he or she is a distant relative with no inside knowledge and simply tell them what you found, in as few and simple words as possible. The Journal Syndrome manifests most commonly when the other person is also a scientist; in that case you can allow yourself a bit more jargon and detail, but the key is always to keep the focus on your new findings. Try this next time. And if you are at the other end, as I was with my student O.A., don’t let them get away with the journal babble. Force them to tell you what they found. Hopefully, they’ll know…
A newly recruited staff member in a research group has her first meeting with the principal investigator, a full Professor, to discuss projects and tasks to carry out in the lab. During the conversation, it becomes apparent that the so-called principal investigator is nothing more than a former clinician turned science administrator who pretends to lead a research group. There are no new projects coming from the mind of this principal investigator.
“Go to PubMed and find something interesting to work on”, says the Professor.
Astonished, the newly recruited lab member falls silent and, after a few awkward minutes, leaves the room in shock.
“Go to PubMed and find something interesting to work on”. Now, we should point out that PubMed is the public repository of essentially all the scientific literature in the life sciences and biomedicine, for the entire planet, since the beginning of time. There are literally millions of papers in the repository. How does one find “something interesting to work on” there? Is this the best advice, the best guidance, that this so-called senior scientist has to offer his newly recruited lab member?
I could not believe it when I first heard this, but it is a true story. It happened at the National University of Singapore, though the characters shall remain anonymous. There are probably people like that in most universities around the world: group leaders who have no clue whatsoever what science is about, or what it is to be an inspiring mentor. How their reputations survive is a total mystery.
I have been around long enough to remember the time when there were no impact factors. (Don’t know what an impact factor is? Read HERE.) We all knew that, say, Nature was more prestigious (or sexy, hot, trendy, impactful, whatever you want…) than, say, JBC. And that JBC was a better journal than many (actually many!) other (i.e. lower) journals. We did not need any impact factors to realise that. And of course this “intuitive” information was used to evaluate job candidates and assess tenure. A paper in Nature was very important, we all knew that, and we did not need any impact factors. The problem now is that impact factors put a hard number on what earlier was an intuitive, soft process. So now we know that not only is Nature “better” than JBC, it is actually 10.12 times “better”. And PNAS is 2.23 times “better”. That is what has generated so many problems and distortions. The temptation to use those numbers is just too high, irresistible. For the journals, for the papers in them, and for individual scientists. And the numbers change every year. When applied to individual papers this gets totally crazy. Imagine: the “value” of a given paper can be higher (or lower) this year than, say, 3 years ago when it was published. The same paper, the same data. And let’s not get started on what the impact factor has done to innovation and creativity. (For a good view on this, read Sydney Brenner’s interview HERE.) Read more...
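(For readers wondering where those hard numbers come from, here is a rough sketch of the standard two-year calculation, as commonly reported in the Journal Citation Reports:

Impact Factor of a journal for year Y = [citations received in year Y by items the journal published in years Y-1 and Y-2] / [number of citable items the journal published in years Y-1 and Y-2]

Because the two-year citation window slides forward each year, the number is recalculated annually, which is exactly why the apparent “value” attached to the very same paper can drift up or down long after it was published.)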
Marc Kirschner is the John Franklin Enders University Professor and chair of the Department of Systems Biology at Harvard Medical School in Boston. He recently wrote an editorial for Science magazine, published on 14 June 2013.
Kirschner debunks the notion of research “impact”, defined as the likelihood that the proposed work will have a “sustained and powerful influence”. He writes: “Especially in fundamental research, which historically underlies the greatest innovation, the people doing the work often cannot themselves anticipate the ways in which it may bring human benefit. Thus, under the guise of an objective assessment of impact, such requirements invite exaggerated claims of the importance of the predictable outcomes—which are unlikely to be the most important ones. This is both misleading and dangerous.”
Importantly, he says that “One may be able to recognize good science as it happens, but significant science can only be viewed in the rearview mirror. To pretend otherwise distorts science.”
Everyone preoccupied with the “impact” of current and future research should read Marc Kirschner’s full piece HERE.