
Science Tribune - Commentary - June 1998


New age numerology: A gloss on Apostol

Blaise Cronin

Professor of Information Science, Indiana University, Bloomington, IN 47405, USA.
E-mail: bcronin@indiana.edu

Make no mistake about it, bibliometricians invite lampoonery. Their parasitic craft is emblematic of all that is awry in our postmodern world, with its Alice in Wonderland sense of logic and fetishistic attachment to accountability. We reward these wannabe meta-scientists, these counters of glass beads, while punishing real scientists for not being widely enough published or cited. The tail is now wagging the scientific dog. In recent years, the science policy community has constructed a virtual panopticon policed by thuggish bibliometricians and sundry others to monitor the performance of career scientists. It's all such a very long way from the halcyon days of the seventeenth century, when gentlemen scientists and civil processes of authentication were the order of the day (1), and when the notion of science for science's sake (l'art pour l'art) was uncontested. How Boyle and Cavendish must be turning in their graves: and Foucault smiling. For Apostol, this metric madness calls to mind the medieval schoolmen's tortuous wrangling about the number of angels on a pinhead. Now the pinheads, so to speak, are responsible for evaluating men of science. The obscenity of this inversion is too much for Apostol, and he slips from lampoonery into an unsavory caricature of all things bibliometric.

But science is not a world apart. In so many walks of late twentieth-century life, be it medicine, financial services or government, the demand for accountability from taxpayers and their elected representatives has become institutionalized. Scientists are only one of several elite professional groups who have recently found themselves under the microscope and seen their autonomy eroded. Why shouldn't federal agencies, research councils, national academies of science and similar organizations take reasonable steps to ensure that their funds are efficiently and effectively disbursed? The case for the systematic gathering of data on researchers' performance and productivity, to ensure that society derives maximal benefit from the national science budget, is a "no brainer", as Americans would say - though not, apparently, for Apostol, who believes that intrinsic value and individual creativity should drive scientific inquiry. The customer-contractor model for commissioning applied science, which became so popular in the early 1970s (2), is not for him. But the reality is that there are no more free lunches. Instead, lunch tokens are required - in the shape of results, outcomes and impacts and, of course, the associated array of bibliometric (and other) indicators by which these effects can be quantified. For good or ill, the discourse of science policy mirrors that of management and market research. Talk of "picking winners" is near and dear to ministerial hearts. We now speak of France's market share of the world's high energy physics research, as reflected in publications and citations (3), just as we would of its share of the fine wine market. Autres temps, autres moeurs (other times, other customs).

Of course, there is the not-so-trivial problem of fraud in science (4). Can the growing reports of scientific misconduct (fabrication, falsification, plagiarism) continue to be swept under the carpet, or is it time to expose and document the scale of the phenomenon? As Ben-Yehuda (5) noted (p. 17) more than a decade ago: "There exists a bitter argument whether deviance in science is better represented by the 'iceberg theory' or by 'the bad apples' theory". Apostol goes for the few-bad-apples theory, but I remember the Titanic. And that makes me much less cavalier than he when it comes to peer review, flawed though the system undeniably is (6). Peer review, to paraphrase Winston Churchill on democracy, is the least worst of the options available. But it also works remarkably well in some contexts. Think of high energy physics, where the internal peer review conducted by members of the different Collaborations (e.g., CLAS, PHOBOS) and institutions (e.g., CERN, FERMILAB) is so intense and carefully choreographed (7) that preprints have acquired the assured authority of articles in the traditional printed journal of record - as Ginsparg and colleagues at Los Alamos have demonstrated with their e-print exchange. Moreover, without peer review, Fleischmann and Pons might still be peddling the idea of cold fusion to the state of Utah and other equally gullible would-be backers (8). Here was a relatively rare occasion when the global scientific community needed no persuading to replicate an experiment.

And herein lies the pivotal issue. The stakes involved in Big Science are enormous. Dollars and reputations are on the line: the winner takes all. Given the intensity of the race for scientific breakthroughs, there is little incentive for career scientists to undertake replication studies. The reward system favors novelty, and, consequently, the market for reproducible results is almost non-existent (9). However, incentives for replication would reduce spurious knowledge claims and deter malfeasance. As things stand, the scientific reward system is massively skewed in favor of originality, not rigor. This is where change ought to be effected. In choosing soft targets, such as evaluative bibliometrics and peer review, Apostol has forgone an opportunity to stimulate meaningful discussion.


1. Shapin S. A social history of truth. University of Chicago Press, Chicago, IL, 1994.

2. Kogan M, Henkel M. Government and research: the Rothschild experiment in a government department. Heinemann, London, 1983.

3. Irvine J, Martin BR. Foresight in science: picking the winners. Pinter, London, 1984.

4. Altman E. Scientific research misconduct. In: Research misconduct: issues, implications and strategies (Altman E, Hernon P, eds). Ablex, Greenwich, CT, pp 1-31, 1997.

5. Ben-Yehuda N. Deviance in science. British Journal of Criminology, 26, January, 1-27, 1986.

6. Chubin DE, Hackett EJ. Peerless science: peer review and U.S. science policy. SUNY Press, Albany, NY, 1990.

7. See http://www.physics.odu.edu/~dodge/charter_jan98.txt

8. Close F. Too hot to handle: the race for cold fusion. Allen, London, 1990.

9. Feigenbaum S, Levy DM. The market for (ir)reproducible econometrics. Social Epistemology, 7(3), 215-232, 1993.