
- medical ethics,

- emotional responses in the face of disease and death,

- the need to take a speedy, but sensible, decision to alleviate suffering,

- the economic, social, and political impact of all medical decisions.

At present, there are two schools of thought on how to tackle this biological, psychological, and social complexity:

- the one gives maximum credit to the intelligence of 'the' expert or a group of experts (

- the other transfers the problem to 'automats', the products of the

Which school of thought should the clinician follow, or are there other options?

1) falling back on the art of diagnosis and prescription. This art - an alchemy of intelligence and experience - is a subtle, partly unconscious, integration of information,

2) delegating his decision-making powers to more or less advanced technical procedures based on artificial cognition models that mimic the nervous system (expert systems, connectionist models, decisions under constraint, multicriteria selection...).

However, transferring all know-how to an automat can quickly lead to the abdication of personal responsibility. However agreeable it may be to place the risks run by the patient in the reassuring hands of an approved system, all systems thus empowered are essentially irresponsible. This behaviour is a modern version of sheltering behind the authority of norms, regulations, and the like. Besides, most of these model systems are closed 'black boxes' concealing procedures that remain incomprehensible to the practitioner. Blind belief in subtle expert systems is no less suspect than believing the Delphi oracles; in both instances, trust in 'magical' procedures, without any form of critical scrutiny, can be treacherous.

The coming years may witness a situation in which medical decisions are called into question because necessity has become the rule of law. For those who govern us, 'modelling' may prove to be the ideal Trojan horse because it can incorporate any form of constraint (physiological, genetic, lifestyle, financial, or the so-called 'public good'...). I firmly believe that, if a diagnostic aid has to take into account all the parameters that intervene in the expression of a pathology or in the success of a treatment, these parameters must concern the patient alone and not society - even if society is an organism suffering from chronic deficits!

This, of course, means that clinging to rigid traditions and chasing mirages are both unpragmatic, doctrinaire (unidimensional), and disastrous attitudes. An example of the former is the rejection of a theoretical model (e.g. the automatic analysis of electrocardiograms) because it is judged to be unfair competition. The search for models as substitutes for responsible action is a mirage, cut off from any roots or reality.

Medicine belongs to the

- seek a binary 'yes/no' answer in deference to our dualistic culture, fascinated as it is with norms. They should not, for example, strictly oppose the ill and the healthy. Normality, defined by a greater or lesser distance from a mean (in a supposedly Gaussian distribution), does not guarantee the absence of pathology. Any approach that treats each parameter within an array as independent ignores the notion of a physiological system, in which small individual deviations can be the signatures of great disturbances.

- adopt an unduly simple hierarchical ranking system that does not account for the full data. Many ranking systems are based on the calculation of means, but is a student ranked in the middle of his class average in all subjects, or top in mathematics and last in literature?

- employ numerical coefficients that allegedly embrace all the data but that are often meaningless and sometimes even vectors of false information. It is as if one transposed the IQ (intelligence quotient) into the world of medicine! Unlike quotients, profiles of intellectual aptitudes proscribe undue elitism, bring to light differentials in ability, and reveal the polymorphism of a population group.
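The point about mean-based rankings can be made concrete with a toy sketch (the students and scores below are hypothetical, introduced only for illustration): two students can share the same mean yet have opposite aptitude profiles.

```python
# Hypothetical scores: two students with identical means but very
# different profiles, which a mean-based ranking cannot distinguish.
maths = {"student_A": 10, "student_B": 18}
literature = {"student_A": 10, "student_B": 2}

for s in ("student_A", "student_B"):
    mean = (maths[s] + literature[s]) / 2
    print(s, "mean =", mean, "profile =", (maths[s], literature[s]))
# Both means are 10.0, yet (10, 10) and (18, 2) describe two very
# different students - exactly the information a profile preserves.
```

The same objection applies to any single medical coefficient that collapses a patient's profile into one number.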

Medical information comprises both intra- and inter-variable information, the latter intervening in the definition of a structure or organization. To analyse information entropy (as in Shannon's theory of communication), a mathematical tool for building structural models is needed. This tool will first derive factors from a series of profiles describing the system and then proceed to recombine these factors into a model. Extracting factors is a banal procedure familiar to all those who have learnt the rudiments of algebra but one that has not been generalized.

Dismantling the complex, according to Descartes' precepts (3), breaks information down into disjoint subunits, but how a system functions depends more on its architecture than on its building blocks. The cogs and wheels of a watch tell us little about how it works, unless one believes in the great watchmaker or in the magic of self-assembly. Faith in a pre-established order smacks of finalism! Even if anatomy is the framework for physiology, anatomical information only partly explains physiological processes. As Pascal (4) observed, and the Gestalt theorists concurred, 'the whole is greater than the sum of its parts'. Each level of superstructure yields extra information above and beyond that contained in the lower strata.

Question: Which set of tools can both dismantle for the sake of analysis and rebuild with a view to understanding? Answer: The tools of multidimensional data reduction and, in particular, of factor analysis.

What types of relationships can multidimensional reduction derive and represent?

- relationships between individuals (typology), e.g., the relative distances (relationships) between patients within a population, calculated on the basis of their nosological diagnostic profiles.

- relationships between individuals and their characteristics (unsupervised correlations with no external constraints), e.g., the distribution of clinical symptoms among patients.

- supervised relationships that may be linear (discriminant analysis) or non-linear (non-linear mapping, neural networks...), e.g., the specific relationship between a pathology and various biological parameters in epidemiology studies.
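The first of these, typology, can be sketched in a few lines: given hypothetical binary diagnostic profiles (the patients and symptoms below are invented), the pairwise distances between patients define the typology.

```python
import numpy as np

# Typology sketch: rows = patients, columns = presence/absence of symptoms.
# The profiles are hypothetical, chosen only to illustrate the idea.
profiles = np.array([
    [1, 0, 1, 1],   # patient 1
    [1, 0, 1, 0],   # patient 2
    [0, 1, 0, 0],   # patient 3
])
# Euclidean distance matrix: d[i, j] = ||profiles[i] - profiles[j]||
d = np.linalg.norm(profiles[:, None, :] - profiles[None, :, :], axis=-1)
print(np.round(d, 2))
# Patients 1 and 2 lie close together (distance 1.0); patient 3 stands apart.
```

Projecting such a distance structure onto factorial axes is precisely what the mapping methods discussed below accomplish.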

Factorial mapping can be likened to the long-standing

a)

b)

c)

d)

To my mind, this point is sufficiently important to justify a small digression on the status of the scientist, clinician, or analyst. For investigations to yield the most creative - though not necessarily the most productive - results, the researcher should step into the shoes of the humble explorer and set aside his most profound convictions, all dogma, and any manifestations of paranoia. The field of exploration must stay wide open, unrestricted by a scholastic

Factorial analysis may prove to be, along with other imaginative methods, an extremely useful mathematical tool in the children's playground.

Let us take a multidimensional structure, or system, composed of patient characteristics determined at different time-points (a table of n characteristics at t time-points). A factor analysis first performs an analytical step by calculating factorial axes, of which the first is the principal component of the structure and the remainder are components of decreasing importance. All these components are mutually independent (orthogonal). Each factorial axis, from the most to the least important, is composed of a particular mix of characteristics and time-points. By combining the factorial axes, it is possible to create simple coherent substructures, which can be partly reassembled into the initial structure.
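The analytical step just described can be sketched with numpy on a randomly generated table (the data are synthetic stand-ins for a real patients-by-time-points table):

```python
import numpy as np

# Sketch of the analytic step on a synthetic table:
# rows = patients, columns = a characteristic at successive time-points.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 5))
Xc = X - X.mean(axis=0)               # centre each column

C = Xc.T @ Xc / (len(Xc) - 1)         # covariance (symmetric square) matrix
eigvals, eigvecs = np.linalg.eigh(C)  # factorial axes, ascending order
order = np.argsort(eigvals)[::-1]     # reorder: most to least important
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# The axes are mutually orthogonal, and recombining all of them
# reassembles the initial (centred) structure exactly.
scores = Xc @ eigvecs
assert np.allclose(scores @ eigvecs.T, Xc)
```

Keeping only the first few columns of `scores` yields the "simple coherent substructures" of the text: partial reassemblies that retain the dominant organisation of the table.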

If a single causal factor, such as time, governs the data, the outcome of the factor analysis will be simple (unidimensional). In other words, the information contained in the table - excluding background noise and the idiosyncrasies associated with certain patients - can be represented by a single dimension (the time factor). The multidimensional matrix will, in effect, have been reduced to a single projection (factorial) axis without any loss of information, regardless of the size of the table.
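The unidimensional case can be demonstrated with a contrived table in which every patient's values are a patient-specific multiple of one temporal trend (both the trend and the coefficients below are hypothetical):

```python
import numpy as np

# A table driven by a single factor (time): each patient's row is a
# multiple of the same trend, so the structure has rank 1.
trend = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # the time factor
severity = np.array([0.5, 1.0, 1.5, 2.0])     # one coefficient per patient
X = np.outer(severity, trend)                  # 4 patients x 5 time-points

eigvals = np.linalg.eigvalsh(np.cov(X, rowvar=False))
share = eigvals.max() / eigvals.sum()
print(round(share, 6))  # 1.0: the first axis carries all the variance.
```

A single projection axis here reproduces the whole table, which is exactly the "reduction without loss" described above.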

To understand the basis of the calculations requires some knowledge of vector and matrix algebra. The core of the procedure involves the diagonalisation of a symmetric square matrix and the calculation of the roots of the characteristic determinant (eigenvalues and eigenvectors). What this really means is that the double-entry table is converted into a symmetric square matrix, the rows and columns of which are cleverly permutated so that the most marked relationships between rows and columns lie along the diagonal. Bertin et al. illustrated the analysis by using a set of cubes: the greater the relationship between row and column, the darker the cube. The result was an empirical scalogram.
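A minimal sketch of the diagonalisation itself, on a small symmetric matrix chosen arbitrarily for the example: the eigenvectors V and eigenvalues Λ satisfy A = V Λ Vᵀ, so in the eigenvector basis the matrix is purely diagonal.

```python
import numpy as np

# Diagonalising a symmetric square matrix: find eigenvectors V and
# eigenvalues (roots of det(A - lambda*I) = 0) with A = V diag(w) V.T.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
w, V = np.linalg.eigh(A)
assert np.allclose(V @ np.diag(w) @ V.T, A)
# In the eigenvector basis the strongest row/column relationships lie on
# the diagonal - the result Bertin obtained by permuting cubes by hand.
```

The algorithms named in the next paragraph (Jacobi, Householder) are simply systematic ways of reaching this diagonal form.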

However, the great majority of systems under study do not possess a unidimensional structure. It therefore becomes necessary to extract a series of successive factorial axes, and Bertin's manual method has to be replaced by appropriate diagonalisation algorithms (e.g. those of Jacobi, Kaiser, Householder). How can one best describe these algorithms? Everyone knows the least-squares method for finding the regression line that best describes the relationship between the weight and height of a group of individuals. This line can be considered a factor resulting from the confrontation of individuals projected into a two-dimensional space (described by height and weight). An optimal virtual line can likewise be imagined in an n-dimensional space. But, to avoid information loss, it is necessary to draw not one projection axis but a whole series; the individuals can then be projected onto each of the axes that shape the hyperspace.
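The height-and-weight picture can be sketched directly (the cloud below is randomly generated, with invented coefficients): the "optimal virtual line" is the first principal axis of the cloud, and adding the second, orthogonal axis preserves the individuals exactly.

```python
import numpy as np

# Hypothetical height/weight cloud; the first principal axis plays the
# role of the optimal virtual line, the second completes the basis.
rng = np.random.default_rng(1)
height = rng.normal(170, 10, size=100)
weight = 0.9 * height - 80 + rng.normal(0, 5, size=100)
X = np.column_stack([height, weight])
Xc = X - X.mean(axis=0)

eigvals, axes = np.linalg.eigh(np.cov(Xc, rowvar=False))
# Projecting onto both axes loses nothing:
proj = Xc @ axes
assert np.allclose(proj @ axes.T, Xc)
# The dominant axis alone already captures most of the variance:
print(round(eigvals.max() / eigvals.sum(), 2))
```

In n dimensions the construction is identical; only the number of orthogonal axes grows.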

This method of data manipulation results in the factorisation not of an algebraic polynomial (numerical values) but of individuals and their characteristics (descriptive values).

2. Valéry P. (1871-1945): French poet, essayist and philosopher, well known for his personal published reflections.

3. Descartes R. (1596-1650): French philosopher, physicist, and mathematician who created analytic geometry, particularly well known for his discourse on method (

4. Pascal B. (1623-1662): French philosopher, writer, physicist and mathematician, whose principal work describes his personal thoughts (