
Science Tribune - Article - December 1996

http://www.tribunes.com/tribune/art96/bere.htm

Hampering the progress of science by peer review and by the 'selective' funding system



Alexander A. Berezin

Department of Engineering Physics, McMaster University, Hamilton, Ontario, L8S 4L7, Canada.
E-mail : berezin@mcmaster.ca


Introduction

The evolution of university-based research in recent decades has resulted in the growth of corporate structures and a corporate mentality in science, and in the rise of powerful funding councils, agencies, foundations and the like. These bodies control the distribution of research funds among scientists and, consequently, have a decisive influence on what research is done and who is allowed to do it.

A widespread, but largely false, notion is that the "over"-competitive funding policies of the major granting agencies in university-based research - such as the Natural Sciences and Engineering Research Council (NSERC) in Canada and the National Science Foundation (NSF) and National Institutes of Health (NIH) in the USA - encourage innovation and risk-taking. Although these are the professed goals, the actual managerial structure and functioning of these agencies tend to serve the opposite end. Despite claims to the contrary, genuine innovation is suppressed while mediocrity and the production of trivial data are implicitly endorsed.


Proposal-based science: 20th-century nonsense?

In my opinion, the core defect of the American/Canadian research funding system is the idea of competition between proposals. Proposals are scored by peer reviewers and then separated into two groups - to be funded or not to be funded - on the basis of these scores. Ostensibly, the policy is justified by the quality control afforded by peer review but, in practice, peer review almost invariably favours research along well-established lines and discourages real innovation and risk-taking. This state of affairs, commonly known as "grantsmanship", leads to the proliferation of conservatism and cronyism, and to the overconcentration of research funds in the hands of elitist and overfunded groups. The net result is that truly exploratory research, with its frequently uncertain outcome, is marginalized.

The widely cherished notion that unrestricted free-market competition is the best possible driver of technological progress is, however, running into ever greater trouble. "Competition" means different things to different people. In science and innovative technology, the very idea of competition strikes many as a rather strange import. The purpose of these activities is, by definition, either to explore the unknown (science) or to develop useful applications at the frontier of fundamental research (technology). Both rely strongly on human creativity and, in this context, the notion of "competition" has, at best, only peripheral importance. Yes, it is nice to be first, but it is much more important to do things well, to be convincing and constructive.

Many critics, referring to the recent evolution of the North American model of university research funding toward ever greater selectivity, have pointed out the damaging consequences of ferocious competition for funds, which is "justified" by the assumption that such a strategy fosters "excellence" in research. At first glance, the idea of "excellence through competition" seems reasonable. It is relatively easy to sell to politicians and to the general public. After all, if it works for business deals or the Olympic Games, why should it not work for science? However, as so often happens, the argument fails when extended, because the external mechanisms currently regulating competition in science ("grant selection") incorporate several underlying fallacies (myths). I shall discuss these briefly in the present article.

For example, the basis of the present funding policy of the Canadian NSERC is the principle of "selectivity" - not funding all applicants for research grants - applied in the name of the alleged "excellence" and "competitiveness" of the research. The attitude of the NSERC is reflected both in its terminology ("grant selection committee", "next competition") and in its explicit instructions to the funding panels to recommend a significant fraction of applicants for zero funding ("Nil" awards). At present, about one third of Canadian professors engaged in full-time research in science and engineering are not funded by the NSERC at any level and are obliged to support their research by their own means (universities, as a rule, have no separate budget for this).

Thus, despite its good intentions, the present NSERC funding system is highly detrimental because research, instead of being idea- and opportunity-driven, has become grant-driven and grant-seeking. In fact, nowadays, the only real concern of any applicant to a funding agency is how to optimize all his or her research along a single criterion: fundability.


Expert peer review: the emperor has no clothes!

Well, in this case, he has, at best, a fig leaf.

The most common argument for "expert peer review" of journal papers is that, without it, we would be flooded with megatons of garbage. The argument is only superficially sensible. In reality, peer review is the major driving force behind the "publish or perish" syndrome and the paper-production treadmill. People publish largely for prestige, and a "peer reviewed paper" is the main credit unit in the advancement of a scientific career. Once a manuscript has been published in a prestigious journal, its actual quality or importance matters little.

How did we arrive at the present state of affairs? The notion of expert peer review as an inevitable step before the publication of submitted manuscripts is deeply ingrained in the culture of the modern research enterprise. That science is much better with peer review than without it has virtually become an axiom. However, even though some editorial control (e.g., improving the clarity of presentation) is desirable, the idea of peer review, as it stands, is increasingly becoming a target of criticism.

Peer review as a police force
Using a somewhat blunt metaphor, peer review can be compared to the police. An effective police force can be expected to help curb crime (though it cannot abolish it). It cannot be expected to perform socially creative functions or to act as a true catalyst of society's progress. Likewise, peer review can help detect the most flagrant cases of scientific unprofessionalism, but it cannot foster scientific progress! On the contrary, heavy-duty peer review more often than not suppresses truly innovative research. Thus, de facto, the present NSERC policy encourages the prolific production of routine - but easily publishable - results that follow the well-established lines of mainstream research.

Peer reviewers and the establishment
Because peer reviewers are drawn, as a rule, from the ranks of the scientific establishment (allegedly the "best experts"), they tend to support established (i.e. their own) projects rather than truly innovative ones. Innovative projects are, by definition, not established. How supportive was the scientific establishment when Boltzmann presented statistical mechanics? Christopher Columbus might never have left harbour if his travel plans had been subject to the prior approval of an expert peer review panel!


Underfunding and superfunding

A common stance for almost any group - and that includes the research community - is to attribute all its problems to underfunding. Just give us more money and everything will be fine! It is always easier to shift the blame away from home. This is why the underfunding thesis is so universally attractive and popular.

The "grantsmanship" establishment has vested interests in stressing its most misleading myth, that of "superfunding for super-research". This is another seemingly sensible, but in essence perverted, extrapolation of a business model to science. This myth has two propositions :

1. The "most promising" research with the best future "impact factor" can be correctly identified by peer reviewers, expert panels, boards of directors, or whatever.

2. Putting "more money" into the "excellent" research thus identified is bound to make it even "more excellent".

The first proposition is wishful thinking based on the presumed "collective wisdom" of expert committees; the second is based on the traditional American aberration that "money can buy everything". This is not just plainly naive but also very costly socially, as it leads to the unwarranted OVERfunding of many "politically correct" research activities such as targeted mega-projects, "centers of excellence", etc. The myth bluntly ignores all the crucial non-monetary constraints on genuine research. Even Albert Einstein, if his grant were suddenly increased fourfold, would not produce "four times as many discoveries". On the contrary, his real productivity would more likely drop because of the additional paperwork, new commitments, etc. Yes, a modest bonus of, say, 30-50% above average for "really good" (by whichever criteria) research may be quite appropriate, but the systematic (over)funding of "selected" groups - at the expense of zero "awards" to scores of other equally decent researchers - is nothing short of arbitrary ideological apartheid. Its consequences are especially damaging for the morale of the younger generation of university researchers.

The typical university research program normally evolves through complicated ("nonlinear") interactions between the personal motivations of researchers and a web of social, micro-political and financial considerations relating to the specific research issue in question. The spectrum of personal motivations is wide and can range from the humility of pure curiosity and the selfless quest for truth to the pragmatic, but socially still quite acceptable, objective of career advancement leading to a sizable level of authority, influence and institutional weight. Unfortunately, under the present university reward system, it is not rare for these "pragmatic" aims to degenerate into an obsession with power and personal enrichment.

In short, it is naive to assume that the "best researchers must be funded with top dollars". In reality, they need - and can meaningfully absorb - only those funds that are compatible with the human limitations of time, attention, concentration, information overload, etc. The ability to overcome these limitations varies among individuals but, to my knowledge, the range is quite narrow and the distribution roughly Gaussian. With this in mind, the system of multiple grants, which inevitably invites a greedy, money-grabbing attitude, is socially counterproductive and fiscally wasteful. Funding agencies should therefore abandon the present practice of giving further grants to those who are already well funded.


The image of the 'boss'

The biochemist Erwin Chargaff, whose work on DNA base composition paved the way for the double helix (Chargaff, 1980), pointed out that the present university system is largely based on the exploitation of young graduate students, postdocs, assistant professors, and so on. If the major currency unit in science is a "solid" peer-reviewed paper in a well-acclaimed mainstream journal, then the more such units are accumulated, the better the bargaining position for obtaining more funding, hiring more postdocs, attracting even more Ph.D. students, and so on.

This vicious circle is self-serving and self-propelling. The role model in academic science today is "the boss", the head of a departmental mini-empire with a 10- to 15-strong force of cheap research labour and an annual net output of some 20 to 40 papers. The per capita, per paper and per dollar innovation yield of such super-departments is, as a rule, much lower than that of smaller groups or even of many researchers working entirely on their own. The practice has, moreover, the added detrimental effect of producing Ph.D. graduates in many key areas of science and technology well in excess of the true capacity of the job market.


Funds to assist or direct research?

Likewise, the fallacious notion that funding councils should be expert-laden think-tanks managing individual researchers through a carrot-and-stick system of grants ignores the fundamental fact that some minimal (basic) funding must be provided to researchers by default, the only evidence required being ongoing research activity. The social purpose of funding agencies is to assist university research. They should not have a de facto mandate to direct or control the paths of free inquiry and ongoing technological development. The present trend, however, is toward control, and it is a direct result of a bureaucratic takeover by an unjustifiably bloated managerial structure.

The inevitable result of any oligarchic structure is that it proliferates for its own sake. Typically, the core activity quickly shifts away from the original professed goal of supporting research toward administering internal procedures. The present trend within the NSERC toward even tighter quality control through peer review and even greater selectivity in funding (more "Nil" awards) is a step in precisely the wrong direction if what is really needed is a catalyst of true innovation. The performance of a complex decision-making system is not a linear function of its overall "expertise" but follows an inverted U-curve, with a maximum (optimum) beyond which the system loses efficiency. This is a known effect in over-controlled systems: too many strings threaten adaptability. Paradoxically, agencies like the NSERC require less, not more, expertise to improve their operations.


Fund researchers, not proposals!

In my opinion, and that of several others, a viable alternative to the present system is to "fund researchers, not proposals" on the basis of their overall record (Forsdyke, 1991, 1993; Berezin and Hunter, 1994), but this is fiercely resisted by the research bureaucracy. The most likely reason is that such a formula would fly in the face of the present American project-oriented funding model and would diminish the power of the paper-shuffling bureaucracy and the grantsmanship elite.

With very slight variations, this was the guiding principle of research funding for centuries. Researchers should be funded on the basis of a "blended" score of their research record, with the emphasis not on counting peer-reviewed papers but on the efficiency record of the researcher, i.e., prior achievements in relation to total funds spent. This would curtail the emphasis on overfunded research empires whilst opening more opportunities for researchers, especially junior ones, who work in greater depth but are often less well versed in grantsmanship games.

Basing research grants exclusively on the long-term track record of applicants avoids the problems of over-control mentioned above and renders the whole process less hostile and more time- and resource-efficient. Multiple grants should be gradually phased out, and those wishing to apply for funds from another source should be ready to give up their presently held funding. Special provision for small bona fide grants could be made for junior applicants. In the present "competition for excellence" rat race, a university professor with, say, one or two well thought-through papers per year has virtually no chance of obtaining funding at ANY level. Implementing the scheme "fund researchers, not proposals" would not only make the funding process more democratic and socially responsible, it would also greatly reduce the paperwork and raise the overall efficiency and morale of university researchers.


Conclusion

To conclude, whereas some ranking of applicants and of grant amounts is, of course, appropriate, the policy of mass "zeroing" of active university scientists is not only anti-intellectual in its essence, it is also socially and economically counterproductive. It is time to re-orient the university system away from the obsolete idea of "competition", which fails to deliver, toward cooperation and "win-win" science games. But so far, in its search for winners, the system still follows an old prescription: "The mass trials have been a great success, comrades. In the future there will be fewer but better Russians." (Greta Garbo in "Ninotchka", 1939).



References

Berezin AA. Roots of secretive peer refereeing. Am J Phys 57, 392, 1989.

Berezin AA. The superconducting supercollider and peer review. Physics World (Dec issue), 19, 1993.

Berezin AA. Nobel Prizes for more physicists with fewer discoveries: the fallacy of "competition" drains the pool. Physics in Canada 51(1) (Jan/Feb), 6, 1995.

Berezin AA, Hunter G. Myth of competition and NSERC policy of selectivity. Can Chem News 46(3), 4, 1994.

Chargaff E. In praise of smallness: how can we return to small science? Perspect Biol Med 23, 370, 1980.

Forsdyke DR. Bicameral grant review: an alternative to conventional peer review. FASEB J 5, 2312, 1991.

Forsdyke DR. On giraffes and peer review. FASEB J 7, 619, 1993.

Gordon R. Grant agencies versus the search for truth. Accountability in Res 2, 297, 1993.

Horrobin DF. The philosophical basis of peer review and the suppression of innovation. JAMA 263(10), 1438, 1990.

Hunter G. NSERC Funding Program - University professor suggests changes. Can Res (March issue), 27, 1985.

Osmond DH. Malice's Wonderland: research funding and peer review. J Neurobiol 14(2), 95, 1983.

Savan B. Science under siege: the myth of objectivity in scientific research. CBC Enterprises, Toronto, 1988, 1990.

Schnaars SP. Megamistakes: forecasting and the myth of rapid technological change. The Free Press, Macmillan, New York, 1989.

Szent-Györgyi A. Dionysians and Apollonians. Science 176, 966, 1972.

Vijh AK. Intellectual roots of innovation: some myths and some facts, with implications for the third millennium. Can Chem News 42(10) (Nov/Dec issue), 14, 1990.


