A newly recruited staff member in a research group has her first meeting with the principal investigator, a full Professor, to discuss the projects and tasks she will carry out in the lab. During the conversation, it becomes apparent that the so-called principal investigator is nothing more than a former clinician turned science administrator who pretends to lead a research group. No new projects come from the mind of this principal investigator.
“Go to PubMed and find something interesting to work on”, says the Professor.
Astonished, the newly recruited lab member falls silent and, after a few awkward minutes, leaves the room in shock.
I have been around long enough to remember the time when there were no impact factors. (Don’t know what an impact factor is? Read HERE.) We all knew that, say, Nature was more prestigious (or sexy, hot, trendy, impactful, whatever you want…) than, say, JBC. And that JBC was a better journal than many (actually many!) other (i.e. lower) journals. We did not need any impact factors to realise that. And of course this “intuitive” information was used to evaluate job candidates and assess tenure. A paper in Nature was very important; we all knew that, and did not need any impact factors.

The problem now is that impact factors put a hard number on what was earlier an intuitive, soft process. So now we know that not only is Nature “better” than JBC, it is actually 10.12 times “better”. And PNAS is 2.23 times “better”. That is what has generated so many problems and distortions. The temptation to use those numbers is just too high, irresistible. For the journals, for the papers in them, and for individual scientists.

And the numbers change every year. When applied to individual papers this gets totally crazy. Imagine: the “value” of a given paper can be higher (or lower) this year than, say, 3 years ago when it was published. The same paper, the same data. And let’s not get started with what the impact factor has done to innovation and creativity. (For a good view on this, read Sydney Brenner’s interview HERE.)
Can a good scientist be a bad writer? The answer, in my opinion, is nope. Here is the story.
The registrar’s office at the Karolinska Institute has recently received a request to release the full texts of several of its successful grant applications to the European Research Council (ERC), as well as the texts of their respective evaluations and referee comments. ERC grants are generous and prestigious awards that have come to symbolize the success of the European scientific elite. Under Swedish freedom-of-information legislation, the Karolinska Institute (which is ultimately under state jurisdiction) is obliged to release these documents, as astonishing as that may sound. (A topic that surely deserves a post of its own.) Needless to say, such a request has proved nothing short of controversial among the scientists involved, since grant applications contain unpublished data and detailed confidential information about their future research programs. Who could have made such a preposterous request?
Intuition is as important in science as it is in the arts and any other creative activity. Intuition can allow the formulation of novel ideas or solutions to complex problems that would otherwise be difficult or improbable to reach via conventional, logical reasoning. Although the popular term “gut feeling” would appear to indicate that intuitive processes take place outside the brain, the metaphor is misplaced: intuition is very much a mental activity.
As in conventional reasoning, intuitive thinking computes the odds of competing ideas or solutions. Unlike the former, however, the intuitive process is largely unconscious. We are only aware of the result of the computation but not the process by which it was obtained. It is nevertheless a mental calculation like any other: it utilizes data stored in memory to deduce connections, predict missing bits of information, or generate new hypotheses.
Science, Jazz, Photography