English translation of the www.captaineconomics.fr article: Why is the current publication system absurd… and how can we free science?
Author: Michaël Bon
Science derives its power from its demanding verification process. Once their research is done, scientists write an article full of refutable statements and make it public, thus subjecting it to possible refutation by the rest of the scientific community through debate, counter-experiments and consensus seeking. This ensures that scientific knowledge remains sound and protects it from the prejudices held by any member of the community. That is how science always progressed, with a remarkable number of successes, until the 1960s, when institutions changed the rules in an indirect yet profound way. In their quest to manage their researchers in a ‘better’ way, they started to look for quantitative criteria with which to estimate productivity. That is when a unique, aberrant criterion was invented and, little by little, adopted by everybody: the impact factor.
The impact factor is essentially the average number of citations generated by articles published in a given journal in the two years following their publication.[1] It is supposed to reflect, more or less, the journal’s ‘prestige’, and institutions consider that their researchers did a good job if they manage to publish an article in the best-ranked journals of their discipline. This method of estimating scientific quality is not only absurd but also fraught with errors[2] and perverse effects related to the way one becomes a successful scientist. What is even worse is that this evaluation method is fundamentally destructive to the scientific process and annihilates the scientific community. The scientist’s objective, in order to make a career, be recognized for his work and secure funding for his research, is no longer to ‘beat’ his peers but to court the favor of an editor. All scientists suddenly became competitors for this favor, which obviously cannot be bestowed upon everybody. The horizontal and community-based mode of operation, which entails constructive criticism and consensus seeking, is no longer possible. The whole system is polarized around a set of journals newly promoted as the sole monopolistic kingmakers of their discipline. The scientific process is thus entirely privatized.
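The arithmetic behind this number is simple enough to sketch in a few lines of Python; the journal and all citation counts below are entirely hypothetical:

```python
# Sketch of the impact-factor arithmetic for a hypothetical journal.
# IF in year Y = (citations received in Y by articles published in Y-1 and Y-2)
#                / (number of citable articles published in Y-1 and Y-2)

def impact_factor(citations_to_recent_articles: int, citable_articles: int) -> float:
    """Average number of recent citations per recent article."""
    return citations_to_recent_articles / citable_articles

# Hypothetical journal: 240 articles over the two previous years,
# collectively cited 1920 times this year.
print(impact_factor(1920, 240))  # → 8.0
```

Note that this single number attaches to the journal as a whole and says nothing about any individual article in it, which is precisely the problem discussed below.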
The Current Publication Process
When a researcher wants to publish, meaning that he wants to make the results of his research known to his peers, he sends his manuscript to the editor of his target journal. He will usually start with the journal that has the highest impact factor. The editor decides whether the article is eligible for publication in his journal. If that is the case, he summons a tribunal of one, two (most often), or three anonymous referees who will, within a limited amount of time and on a volunteer basis, provide feedback recommending the rejection or acceptance of the article and, in the latter case, requesting a few modifications to the original manuscript. Feedback may sometimes be contradictory but still, the author makes whatever modifications are necessary for his paper to be accepted by the anonymous judges. Once he satisfies the referees’ requirements, the article is formatted according to the journal’s templates; the author signs a copyright transfer ceding the article’s exploitation rights to the editor, who then makes its PDF version available for sale on his web site. In a newer variation on this model, the author himself additionally pays the editor so that his article may be available to the largest possible online audience. If, however, the article is rejected, the author will retry the same procedure as many times as necessary with other journals, one by one, which may take a lot of time. If the journal which accepted the article has a high impact factor, the author has the right to celebrate, for he can now put a check in the right checkboxes at his institution (e.g., “Rank A journals” in the case of the French CNRS) and maybe even get European scholarships and a promotion. Until the article is put online, the whole process is unverifiable and takes place in an editor’s mailbox.[3]
This system has an essentially ritual and undoubtedly intellectual value, but by no means a scientific one. Let us quickly go through some of its major flaws:
- Scientific truth is determined by two anonymous persons with a short amount of time at hand. Obviously, this cannot work, and science as such is largely ‘false’, to the extent that the taxpayer may legitimately question the utility of continuing to finance research. For example, 90% of the most cited works in cancer research are not reproducible and thus have no scientific value. More generally, it is estimated that about 50% of biomedical research is not reproducible. Imagine the result of the absence of serious verification combined with the willingness of the pharmaceutical industry to defend its interests… No field is spared; here is an example from the field of economics. The scientist’s objective is to convince an editor in a short amount of time, and the storytelling aspect of the article is often more important than its technical correctness or scientific depth. Scientists with high ethical standards take the risk of being overtaken by less scrupulous peers.
- A system in which the main objective is to satisfy the expectations of a few editors bears an innate conservatism that works against the emergence of new ideas. Science, thus managed, will no longer advance at the rate of its own intrinsic dynamics but rather at the rate at which powerful editors, or the leaders they have consecrated, reach retirement or die.
- The system has a prohibitive cost. Editors secure for themselves the exploitation rights of scientific articles (largely financed by public research) and resell them to the scientific community. There is no pressure to lower prices simply because there is no competition between journals: every scientific article is unique (an “un-substitutable good”) and the researcher needs to buy all of them. Thousands of journals belong to multinational companies (the most representative of which is Elsevier), which often enjoy aberrant profit margins (a profit of 1.16 billion euros, representing a whopping 36% profit margin, in the case of Elsevier). In the end, the scientific community pays a total of 23 billion euros every year to access its own production. The taxpayer also has to pay if he wants to know what researchers are doing with his money (reading a scientific article costs around 35 euros for private individuals).
In the last 20 years, institutional efforts have focused solely on the last sentence of the previous paragraph, that is, the problem of “Open access”.[4] We would like all of humanity to have free access to the conservative and error-ridden science it finances, which is morally irreproachable but definitely less urgent than solving the other problems. Passions are unleashed: petitions are signed, forums are awash with heated discussions, multinational companies are targeted, and legislators are called upon. We even invest millions of euros in national projects headed by leaders with a crystal-clear vision and discourse.
In practice, a macroscopic analysis leaves no room for suspense. To the extent that journals control science, open access can only be achieved according to their conditions, already well known to financial analysts, via a generalization of the “deduction at the source” model (so-called “gold open access”) in which the researcher pays to give his article to the journal. Of course, this will cost even more than the current system, and the potential abuses of this new economic model are obvious.
There is currently no audible voice on the “elephant in the room”, in other words, the fact that, first and foremost, a journal has neither the capacity nor the legitimacy to evaluate science (only the scientific community as a whole can do this) and that its role in the diffusion of science was rendered obsolete in the age of the internet. The only alternative evaluation mode that has managed to attract attention recently is that of altmetrics, which suggests complementing the impact factor with the buzz generated by articles on Facebook, Twitter and the mass media.
The Proposed Solution
The fundamental solution is obviously to go back to a rigorous, horizontal, and community-based scientific evaluation, at which point the system will mechanically reorganize itself. The internet has solved the problem of space and time in such a way that this process can function even better than it did in the past. It is on this peer-to-peer foundation that the “Self-Journal of Science” platform (SJS, www.sjscience.org) was built. This is an article upload site in which all the processes related to publication (quality control, building a common vision, and classification) are transparently self-managed and self-regulated. New solutions were found that solve the two fundamental problems: (1) making sure that contradiction and disagreement are not anti-social, and (2) making sure that the freedom thus restituted to every individual scientist is always used for the purpose of achieving scientific excellence. A detailed explanation of this solution is presented here; it would be difficult to explain its principle in this article for reasons of space (unless requested by popular demand).
I would like to conclude with a message to the (young) researchers who are reading these lines. The current system is unbearable to anyone who loves science. Furthermore, with the currently prevailing budget cuts, it will become even harder and more unjust towards the “lower” scientific social classes composed of PhD students and postdocs, especially those who do not work in “star” laboratories. We absolutely have to go back to an evaluation system in which all scientists participate, such as the one I suggest. However, it is obvious that such a redistribution of power cannot be done top-down. Neither the politicians, strangely conciliatory towards the financial interests of multinational companies, nor the tiny minority of prominent scientists who have managed to reach the peak of this system and enjoy its privileges will be of any help. You have to use the tools offered by SJS and start to voice your opinion, which otherwise nobody will ask to hear, in order to launch the community dynamics that will, at some point, provide clear evidence that the science it produces is superior to that of the current system. At that moment, the decisive argument in favor of the adoption of this system (and, inevitably, the abandonment of the old one as regards the distribution of research funding and promotions) will be the fact that it is free of charge: the scientific solution that our researchers need will coincide with the economic solution that our institutions are actively seeking.
Let us read this article, debate, gather, and take action! I personally see no alternative scenario by which the current system can progress in favor of the interests of science and the majority of scientists.
[1] The definition of a “citation” was left to the discretion of a private company, Thomson Reuters, through its “Web of Science” service, which does the counting. Nothing is verifiable, but we can already see that only articles written in English are taken into account.
[2] A complete mess: the impact factor is attached to the journal, not to the article; citing an article does not mean it is good (on the contrary, one may cite an article in order to refute it); it is very easy to manipulate the number of citations of an article with the help of a few accomplices; the average number of citations has no statistical relevance, since citations do not follow a Normal distribution; it does not take into account the size of communities; and it favors the short term and buzz effects.
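The point about statistical relevance is easy to illustrate with a toy example using entirely made-up citation counts: citation distributions are heavily skewed, so a single highly cited paper can pull the journal’s average far above what a typical article in it receives.

```python
import statistics

# Made-up citation counts for ten articles in the same journal:
# one "hit" paper dominates, while the others are barely cited.
citations = [0, 0, 1, 1, 1, 2, 2, 3, 4, 186]

print(statistics.mean(citations))    # → 20.0 (what the journal average reflects)
print(statistics.median(citations))  # → 1.5  (what a typical article gets)
```

Here the mean suggests every article earns 20 citations, while half the articles earn at most one; this is why an average over a non-Normal distribution tells you almost nothing about any individual article.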
[3] In some disciplines, such as physics, we notice the development of the pre-print practice: web sites like arXiv.org allow scientists to at least deposit their initial manuscript so that everybody can have access to it and, to some extent, to protect themselves against potential abusive practices on the part of the review committee (e.g., the stealing of ideas by anonymous reviewers, who can slow down the process to a maximum degree and publish these ideas under their own names).
[4] The reader may verify for himself the abundance of information and initiatives related to this keyword.