Tag Archives: Making science

Making science (part XIV): Journal Syndrome

The other day, I ran into O.A., one of my former students who is now a research group leader. O.A. is not the type that lacks self-confidence, and although he has a bit of a lazy attitude, he has some good ideas and a good feel for where the money is. I asked him how his research was going. He responded with a tepid smile, as if to indicate that I had asked the right question: “Very good. Next week I have a paper coming out in Nature, although I am only second-to-last author on that one. I published a paper in EMBO Journal just a few weeks ago. And we have also made some very interesting observations which will likely lead to a paper in a high-impact journal!” Read more...

Making science (part XIII): How not to make science

A newly recruited staff member in a research group has her first meeting with the principal investigator, a full Professor, to discuss projects and tasks to carry out in the lab. During the conversation, it becomes apparent that the so-called principal investigator is nothing more than a former clinician turned science administrator who pretends to lead a research group. There are no new projects coming from the mind of this principal investigator.

“Go to PubMed and find something interesting to work on”, says the Professor.

Astonished, the newly recruited lab member falls silent and, after a few awkward minutes, leaves the room in shock. Read more...

Making science (part XII): The problem with impact factors

I have been around long enough to remember the time when there were no impact factors. (Don’t know what an impact factor is? Read HERE). We all knew that, say, Nature was more prestigious (or sexy, hot, trendy, impactful, whatever you want…) than, say, JBC. And that JBC was a better journal than many (actually many!) other (i.e. lower) journals. We did not need any impact factors to realise that. And of course this “intuitive” information was used to evaluate job candidates and assess tenure. A paper in Nature was very important; we all knew that, and did not need any impact factors. The problem now is that impact factors put a hard number on what earlier was an intuitive, soft process. So, now we know that not only is Nature “better” than JBC, it is actually 10.12 times “better”. And PNAS is 2.23 times “better”. That is what has generated so many problems and distortions. The temptation to use those numbers is just too high, irresistible. For the journals, for the papers in them, and for individual scientists. And the numbers change every year. When applied to individual papers this gets totally crazy. Imagine. The “value” of a given paper can be higher (or lower) this year than, say, 3 years ago when it was published. The same paper, the same data. And let’s not get started with what the impact factor has done to innovation and creativity. (For a good view on this, read Sydney Brenner’s interview HERE). Read more...

Making science (part XI): A perverted view of “impact”

Marc Kirschner is the John Franklin Enders University Professor and chair of the Department of Systems Biology, Harvard Medical School, Boston, MA 02115. He recently wrote an editorial for Science magazine published on 14 June 2013.

Kirschner debunks the notion of research “impact”, defined as the likelihood that the proposed work will have a “sustained and powerful influence”. He writes that: “Especially in fundamental research, which historically underlies the greatest innovation, the people doing the work often cannot themselves anticipate the ways in which it may bring human benefit. Thus, under the guise of an objective assessment of impact, such requirements invite exaggerated claims of the importance of the predictable outcomes—which are unlikely to be the most important ones. This is both misleading and dangerous”. Read more...

Making science (part X): What open access journals have in common with premium wine

Open access journals charge fees to their authors for publication of accepted articles. Some of those fees can be quite significant. Cell Reports, a new journal from Cell Press, charges $5,000 per article, the highest among open access research periodicals. There is currently a debate as to whether the journals that charge the most are the most influential. A recent survey appears to indicate that price doesn’t always buy prestige in open access. My friend and colleague M.F. has recently made a prescient comment in this context: “…but apart from the commercial desire to maximize profits, the pricing is probably designed as part of the brand signal, to make the point that this should be in the very top tier of journals. Similar to launching a new ‘premium’ wine to the market, if price on release is low, the consumers will never perceive it as a premium wine…”. Time will tell if this self-fulfilling prophecy is indeed true, or if journals like Open Biology or eLife can completely break that model. Read more...

Making science (part IX): The process of acceptance of scientific theories

Novel scientific propositions are initially met with skepticism. Eventually, they become accepted – at least some of them. The transition between heresy and mainstream has been debated ad nauseam. British geneticist J.B.S. Haldane (1892-1964) was famous for many things, one among them his incisive sarcasm. Haldane was an assiduous contributor to the Journal of Genetics, not only of scientific articles but also of many book reviews. One of those reviews, published in 1963 (Journal of Genetics Vol. 58, page 464), is perhaps the best known among the lot, not because of the book being reviewed, but because of Haldane’s now famous description of the stages in the process of acceptance of scientific theories. In Haldane’s words, theories invariably pass through the “usual four stages”: Read more...

Making science (part VIII): A more specialized journal

“Dear Author, 

Thank you for sending us your paper “Downregulation of HlpxE-mediated transcription doubles median life-span expectancy in humans”, but I am afraid we cannot offer to publish it in The Current Biologist.

We appreciate the interest in the issue you are addressing, and your results sound potentially significant for the field, but our feeling is that at this stage your paper would be better suited to a somewhat more specialised journal.

I am sorry that we cannot give you a more positive response, but thank you for your interest in The Current Biologist.

Regards,
Geoffrey South
Editor”

Many of us —professional scientists writing research articles— have had to confront this type of letter from journal editors. We have grown accustomed to them: a standard cut-and-paste piece of text used reflexively by editors without much thought or consideration. We file them promptly and move on. After all, there are plenty of journals around, both general and specialized. No big deal, right? Read more...

Making science (part VII): On the utility of science

In a recent interview for the podcast series of the Proceedings of the National Academy of Sciences of the USA, Ira Mellman of Genentech expressed his views on the utility of science practiced at academic institutions. After an academic career at Rockefeller and Yale University, Mellman joined Genentech in 2007, where he is Vice President of Research Oncology. Mellman has been a member of the National Academy of Sciences since 2011. The interview is about current challenges in the field of cancer immunotherapy. But things get a bit more controversial at the end. Scroll the audio featured below to -0:35 and you’ll hear this:

PNAS: “I asked Mellman whether his move from academia to industry has brought him closer to his goal of practicing people-centered science.” Read more...

Making science (part VI): Ignorance

There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say, we know there are some things we do not know. But there are also unknown unknowns, the ones we don’t know we don’t know.
— D. Rumsfeld, 2002

The scientific literature increases exponentially, with thousands of papers added daily. As the day has only 24 hours, what this means is that every time we sit down to read a paper, we have —consciously or unconsciously— decided to neglect thousands of others which we will most likely never read, ever. Agonizing as this may sound to some, it is equally inevitable. The assurance that we feel when moving a paper to our “to-read” list can be self-deceptive, however, and so it is crucial that we choose which papers to read with great care. Or rather, that we carefully decide which papers not to read. In fact, better to do this consciously than as a default consequence of the limited number of hours in the day. The art of selectively ignoring sets of facts has been called “controlled neglect”, and it is a crucial tactic to cope with the vast mountain of facts that keeps growing by the day. Read more...

Making science (part V): Bad project

“I want good data, a paper in Cell
But I got a project straight from Hell”

“I wanna graduate in less than five years
But there ain’t no getting out of here”

“Oh oh oh… caught in a bad project”

Crazy mice. Smelly brain cells. Empty Western blots. It’s a bad project alright. Or… is it? There are indeed bad projects out there. Research projects begin with a question that is to be answered. If no question has been formulated, however general, and experiments are being done only because they are doable, then a bad project is on the horizon. With a question at hand, hypotheses have to be made as to the possible answers, ideally covering all logical possibilities. Lack of hypotheses in a project is not a good sign: the question posed may not be answerable. (We’ve all heard about hypothesis-free studies. That’s okay for a group leader with 50 postdocs and lots of other projects. Not recommended to anyone who wants to graduate and get a job in less than five years!) Hypotheses help design the experiments that are going to distinguish between them. Experiments are typically designed to systematically disprove the hypotheses one by one; a neat, key experiment that proves one of them upfront is more difficult to come by. Some experiments may just add support to a particular hypothesis, but not prove or disprove it outright. So far so good. But a good project should also allow for serendipitous discoveries. Paradoxically, serendipity is one of the most common ways of advancement in science. Alas, serendipity cannot be planned. But it can be encouraged. In addition to concrete goals and defined questions, research projects that allow some amount of open-ended possibilities have greater chances of extending into (positively) unexpected directions. It’s a fine balance, in which informed intuition plays a vital role. (For a discussion of intuitive thinking, see Making Science Part III.) Lady Science in the video above seems to be having more problems than just a bad project. But those are topics for other discussions. Read more...