Scientific Rationality and the "Even Stronger Program"

Tetsuji Iseda

Committee on the History and Philosophy of Science
Department of Philosophy
University of Maryland
College Park, MD 20742



This is a programmatic proposal about a better use of the notion of scientific rationality in the sociology of scientific knowledge (SSK). Strangely enough, some relativistic authors in the SSK literature leave room for scientific rationality in their analyses of scientific practice. I interpret their arguments as follows: since science is essentially a collective activity, any rationality developed and sustained in science should have some institutional basis analyzable in sociological terms. I advocate that sociologists should explain such scientific rationality, especially the asymmetry between science and non-science, in sociological terms. In some sense, my program is 'even stronger' than Bloor's 'strong programme'.

Keywords: scientific rationality; strong programme; SSK.

1. Introduction -- the "Strong Programme" and the symmetry thesis

A convenient place to start this paper is David Bloor's "Strong Programme". As is well known, Bloor put forward four basic tenets of the sociology of scientific knowledge (SSK) in his influential book, Knowledge and Social Imagery (Bloor 1976). First, the sociology of scientific knowledge should be "causal", that is, "concerned with the conditions which bring about belief or states of knowledge. Naturally there will be other types of causes apart from social ones which will cooperate in bringing about belief" (7). Second, sociologists should be impartial with respect to truth and falsity, rationality and irrationality, etc., in the sense that both sides require explanation. Third, the explanation should be symmetrical, namely "the same types of cause would explain, say, true and false beliefs" (ibid.). Finally, the explanation should be reflexive, namely it has to apply to sociology itself. The second and third tenets, namely the tenets of impartiality and symmetry, are widely accepted by sociologists of scientific knowledge (though people like Latour and Callon think that Bloor is not radical enough in these respects; Latour and Callon 1992). I think that these tenets of impartiality and symmetry have played crucial heuristic roles in the development of the sociology of scientific knowledge. However, certain types of research have been systematically ignored by sociologists because of these tenets. The main purposes of this paper are to reconsider these tenets and, through that reconsideration, to propose a better use of the traditional epistemological notion of scientific rationality in the sociology of scientific knowledge.

2. Two meanings of "social" in social studies of science

Let us go back to Bloor's list of tenets. The first tenet, that of causal analysis, does not enjoy the consensus that the other tenets enjoy. According to this tenet, scientific knowledge can be studied sociologically because social factors (among other factors) cause scientific knowledge. But there is another way of looking at science as a 'social' activity.

One of the common ways to classify recent studies in the sociology of scientific knowledge is to distinguish a macrolevel approach from a microlevel approach. To put it very schematically,

Macrolevel approach: Study of the influence of macro-social factors on scientific knowledge

(e.g. Edinburgh School, Marxists)

Microlevel approach: Study of the microprocessing of knowledge production in science

(e.g. Paris School, Ethnomethodology)

It is pretty obvious why the first approach can be called 'social'. It is a study of social factors, namely extra-epistemic factors in science. Bloor's first tenet suggests this approach. But in what sense is the second approach 'social'? A couple of quotes from Latour and Woolgar and from Knorr-Cetina will help us to understand their enterprise.

According to Latour and Woolgar, "it could be said that we are concerned with the social construction of scientific knowledge in so far as this draws attention to the process by which scientists make sense of their observations" (Latour and Woolgar 1979, 32; emphases in original). Accordingly, they reject the use of 'social' as opposed to 'technical' factors in science. Rather, they "regard the use of such concepts as a phenomenon to be explained" (27).

According to Knorr-Cetina, various approaches at a microlevel are "social in that they consider the objects of knowledge as the outcomes of processes which invariably involve more than one individual, and which normally involve individuals at variance with one another in relevant respects" (Knorr-Cetina 1983, 117).

In short, according to them, science is a social activity because the scientific community, as a small society with its own structure, norms and decision processes, creates scientific knowledge. Now, if one claims that science is 'social' in this latter sense, this position is compatible with the view that science is thoroughly a rational activity. Thus, this shift in the meaning of 'social' opens the possibility of cooperation between philosophers and sociologists of science, though sociologists themselves do not seem to appreciate the full implication of the shift in the meaning. This is the possibility I want to explore in this paper.

3. Return of rationality in seemingly relativistic discourse

3-1. Harry Collins and gravity waves

Harry Collins calls his research program "empirical relativist" (Collins 1983, 85), but 'sociological foundationalist' would probably be a better description of him, given his criticism of Latour, Callon, and Woolgar in the so-called "epistemological chicken" debate (Collins and Yearley 1992, 308-309). In any case, in Collins' program, Bloor's symmetry thesis is understood as the claim that the "natural world must be treated as though it did not affect our perception of it" (Collins 1983, 88). The sociologists' task is to explain the decisions of the scientific community using only social factors. This looks like a radically relativistic approach (at least about sciences other than sociology), but on close examination we find that Collins admits the rationality of scientific practice.

Collins' study of the gravitational radiation debate (Collins 1985, 79-111) is a good place to understand his position. Einstein's general theory of relativity predicts that accelerating masses will produce gravity waves. These waves are so weak that ordinary detectors cannot detect them. In the late 1960s, Joseph Weber constructed a detector which, according to him, could detect gravitational radiation, if there really was such a thing. In 1969, he reported that his detector had indeed detected some waves. Other scientists tried to replicate the result, but none of them succeeded. By 1975, scientists had reached agreement that there had been something wrong with Weber's experiment.

Collins uses this case to support his "experimenter's regress" argument (Collins 1985, 83-84). In this kind of situation, no one really knows what the correct outcome of the experiment is, because "what the correct outcome is depends upon whether there are gravity waves hitting the Earth in detectable fluxes" (84). To determine this, we need to construct a better gravitational wave detector, but to know which one is a better detector, we need to know what a correct outcome would be. As it is, this is a never-ending process, and this is what Collins calls 'the experimenter's regress'.

However, as a matter of fact, scientists reach agreements and determine which experiments are valid. How, then, do they reach such agreements? Collins tries to answer this question by interviewing scientists who participated in the gravity wave debate. He finds that 'non-scientific' reasons played an important role in the debate. His list of such reasons includes faith in experimental capability and honesty, the personality and intelligence of experimenters, the reputation of running a huge lab, whether the scientist worked in industry or academia, nationality, and so on (87). At the final stage of the debate, many scientists mentioned the failure of calibration of Weber's detector as a reason for disbelieving his result, but according to Collins this is merely an expression of conservatism. There is no assurance that a detector which can detect other kinds of waves (i.e. which can be calibrated) can also detect gravity waves (again, this is a part of the experimenter's regress). Therefore, if scientists had taken a different attitude, "the anomalous outcome of Weber's experiments could have led toward a variety of heterodox interpretations with widespread consequences for physics. They could have led to a schism in the scientific community or even a discontinuity in the progress of the science" (105).

Criticizing Collins' study, Franklin finds many scientific reasons for rejecting Weber's results. A major difference between Weber's detector and many others' was that Weber used a non-linear algorithm for his detector, and others used a linear algorithm. Weber attributed the failure of other detectors to this difference (Franklin 1994, 472-473). However, there were a few attempts to replicate Weber's results using a non-linear algorithm, and they failed too. This suggests that the problem was with his particular apparatus. Moreover, there was an error in the way Weber analyzed his data, and Weber himself admitted that (478). Given these suspicious features of Weber's study, it was reasonable for scientists to reject his surprising results. We do not have to bring extra-scientific factors into consideration in order to break the experimenter's regress. Franklin concludes: "Collins's view that there were no formal criteria, applied to deciding between Weber and his critics, may be correct. But, the fact that the procedure was not rule-governed, or algorithmic, does not imply that the decision was unreasonable" (Franklin 1994, 471).

Collins' reply to this criticism is interesting for our present purposes. He responded that "I have never suggested that scientists' actions were unreasonable" (Collins 1994, 501). Collins admits that these scientific reasons played an important role in the debate, but holds that they were not enough to reach a conclusion. Nevertheless, most of the extra-scientific reasons are also reasonable in some sense. Collins concludes: "it is just that 'reasonableness' is a social category; it is not drawn from physics" (Collins 1994, 503). This last quote suggests that Collins thinks that scientific rationality (a version of reasonableness) is a social category. Thus even when scientists settle arguments for scientific reasons, what they are doing is still social. This move is understandable only when we take into account the shift in the meaning of 'social' pointed out earlier in this paper. Despite the relativistic guise (and despite the banner 'empirical relativist'), Collins is not a relativist in the traditional sense.

3-2. Alan Gross's rhetorical analysis

Alan Gross's rhetorical analysis is another good case of a seemingly relativistic analysis of science which does not deny the rational nature of science. Gross is not a sociologist, but given the growing attention paid to discourse analysis in the present-day sociology of scientific knowledge, I think it is appropriate to discuss his work here.

In the preface to the second edition of his book, The Rhetoric of Science, Gross maintains "the view that rhetoric has a crucial epistemic role in science, that science is constituted through interactions that are essentially rhetorical" (Gross 1996, x). According to him, this is a relativist position (x-xii). Here, he means by relativism the position that "whatever 'objectivity' is achieved, is based on socially shared plausibility judgments rather than proof" (Campbell 1993, cited by Gross 1996, xi).

But we should not be deceived by the label Gross puts on his position. A close reading of his book reveals that his view is much more modest than the banner 'relativism' suggests. First, for him, by definition, rhetoric "concerns the necessary and sufficient conditions for the creation of persuasive discourse in any field" (Gross 1996, viii, emphasis in original). This is a shift in the meaning of 'rhetoric' in the same way as social constructivists shift the meaning of 'social'; that is, just as any communal processing of knowledge is regarded as 'social' in the literature, here any persuasion (rational or non-rational) is regarded as 'rhetorical'. Even deductive logic is not an exception: "Suppose... we define dialectic and logic in terms of rhetoric. From this perspective, dialectic and logic are rhetoric designed for special purposes: dialectic, to generate the first principles of the special sciences; logic, to derive from these principles true statements about the causal structure of the world" (Gross 1990, 206). Note that preservation of truth by deductive logic is not rejected here.

If we overlook this shift in the meaning of 'rhetoric', Gross's analysis of peer review is almost unintelligible. Gross analyzes the peer review practices in the scientific community as a pursuit of the ideal speech situation in Habermas's sense. The reason Gross prefers this approach to other approaches is that "the ideal speech situation permits a rhetorical definition of peer review in which rationality still has a central place" (Gross 1990, 143). Why does he care about rationality? He explains:

"Debates about rationality follow a common pattern. Rationality is described in terms of its alleged necessary conditions -- arguers must be consistent, must heed the law of contradiction, must be constrained by modus ponens. Relativists then attack these purported universals with supposed counterinstances.... The real need, I think, lies elsewhere: we require a definition of rationality that is not vacuous, one that at the same time takes into account the presence of "divergent rationalities": the intuition that the divergent fundamental beliefs and patterns of inference of other cultures do not count as irrationality but as rationality of another sort." (Gross 1990, 142).

I think that many rationalist philosophers can agree with him about divergent rationalities (see, for example, Putnam 1983 and ch. 5 of Cherniak 1986). Thus, even though it seems that there is a wide gap between rationalistic philosophers and Gross, the gap may in fact be very narrow.

4. One step further--the "even stronger program"

What moral should we draw from these analyses? There are several ways of interpreting these remarks by Collins and Gross, but the one I prefer is the following: since science is essentially a collective activity, any rationality developed and sustained in science should have some institutional basis analyzable in sociological terms. If sociologists take this stance toward rationality, it opens up a fruitful research program, one more or less suggested by both Collins and Gross themselves, but never fully developed.

To see why this might lead to an interesting research program, let us first look at Stephen Cole's recent criticisms of social constructivism (Cole 1992). There are a couple of major points. First, Cole argues that social constructivists fail to demonstrate any linkages between social processes and "knowledge outcomes" (61) in their macro-level research. By "knowledge outcomes" Cole means the specific content of a scientific idea, like E = mc². Social constructivists have succeeded in showing that scientists' everyday activity is influenced by social variables, but at the same time they are "black-boxing" knowledge outcomes. Second, Cole also points out a problem in accounting for consensus by micro-level research (33-60). He claims that "constructivist sociologists of science have no convincing way to explain why some ideas win out in the competition for communal acceptance" (59). In short, Cole's criticisms amount to this: when sociologists claim that science is social in the first sense, they try to explain scientific knowledge but fail; when sociologists claim that science is social in the second sense, they do not even try to explain, but merely describe.

What I am suggesting is one way of meeting the criticisms. In a nutshell, my reply to Cole goes as follows: suppose that if science is a rational activity, then scientific rationality can explain knowledge outcomes of science; then, by giving a sociological basis for scientific rationality, we can also provide a sociological explanation of knowledge outcomes.

To see the point, let us consider a question often raised against constructivists: 'If scientific facts are socially constructed, why do airplanes fly anyway?' This naive question needs elaboration. When we look at scientific activity, what many of us are most impressed by is its ability to create novel phenomena and to control (replicate, predict, etc.) them. This ability becomes even more impressive when we compare scientific activity with other systems of knowledge. For example, aerodynamics is an indispensable part of making airplanes, and if we replaced aerodynamics with another system of knowledge, like crystal ball gazing, the airplanes would not fly. Laboratory science is another case of creating new phenomena. Biochemical laboratories create new phenomena by creating certain new materials with certain distinctive features. Even non-experimental science creates new phenomena. For example, the Magellan mission to Venus created detailed radar maps of Venus, which certainly did not exist before the mission. Most of these new phenomena are well controlled in the sense that they are replicable if needed. This replicability and predictive force are the basis of the cumulative nature of scientific knowledge. On the other hand, an alchemy lab may create something new, but the process is much less controlled and not quite replicable. Astrology may create new horoscopes, but they lack the precision, robustness and accountability of Venus radar images. Then the question boils down to this: what is so peculiar about scientific activity that makes these differences?

I think that sociologists of scientific knowledge tend to overemphasize the fact that science is just another social activity. To admit that there is something really peculiar about science does not exclude sociological research on science. To the contrary, I think that there is a genuine sociological question here. Imagine that there is a country which has a significantly lower crime rate than other countries. What sociologists normally do is not to argue that there is nothing special about this country, but to attempt to find some sociological explanations of the peculiarity. Similarly, here is a community with a significantly higher rate of creating and controlling novel phenomena than other communities. I think that it is the sociologists' task to find a sociological explanation of this peculiarity. We should find some peculiar structure, norms, and decision processes in science.

This may sound like an old Mertonian theme, but what I have in mind is not quite Mertonian. As is well known, Robert Merton proposed four basic norms operational in a scientific community, namely universalism, 'communism' (communalism may be a better term), disinterestedness and organized skepticism (Merton 1942). The situation is now more difficult than in Merton's time, because we now know that scientists are neither disinterested, communalist, skeptical, nor universalist. Rather, they are deeply situated in their social contexts, and are influenced by the ideology of the age and by their own interests. Even if we regard Merton's four norms as regulative rules to counter these tendencies, scientists do not seem to be punished for their non-universalist, non-disinterested activities. If these social factors are causally relevant to knowledge outcomes (in Cole's terms), i.e. if Bloor is right, maybe what we need to look for to account for the peculiarity of science is some peculiar causal relationship among them. If they are not causally relevant to knowledge outcomes, i.e. if Cole is right, maybe we need to suspect that there is some systematic way of ruling out or of neutralizing such extra-epistemic factors.

For this kind of study, the traditional philosophical notion of scientific rationality provides us with a strong heuristic device. The whole point of traditional philosophical investigations of scientific rationality is to reveal some peculiarity of good scientific reasoning. Of course, philosophers have been looking for abstract reasoning rules, but if we can find some way of translating talk of reasoning rules into talk of the institutional basis of science, sociologists can utilize the rich resources of the philosophical tradition in their study of the peculiarity of the scientific community. If you are familiar with evolutionary biology, the following analogy may help you to understand the situation. The relationship between rationality and the institutional basis of science is comparable to the relationship between the fitness of an organism and its phenotype/genotype. We can describe the phenotype/genotype of each individual organism and its survival, but we cannot explain a long-term evolutionary trend by merely describing each organism in this way. Talk of fitness is useful in explaining a long-term evolutionary trend, but by itself fitness is a pretty abstract notion. Thus, evolutionary biologists try to explain such a trend by establishing a connection between the abstract notion of fitness and the more concrete phenotype/genotype. What I am proposing here is the establishment of such a connection between scientific rationality and its institutional basis.

I would like to call this research program 'the even stronger program'. In some sense, I agree with Bloor's tenet of symmetry that we need the same "type" of explanation for rationality and irrationality. But, for me, this symmetry in the type of explanation is merely a basis for further comparisons between rationality and irrationality: you cannot compare apples and oranges, but among apples, we have to be able to distinguish and compare good apples and bad apples. The ordinary interpretation of the symmetry thesis makes this problem invisible. To put it a different way, Bloor's 'strong programme' advocated the necessity of a sociological explanation of scientific knowledge without appealing to scientific rationality. In this sense, scientific rationality itself was out of the picture of Bloor's program. Now, I advocate the necessity of a sociological explanation of scientific rationality itself, especially an explanation of the asymmetry between science and non-science. In this sense, my program is 'even stronger' than Bloor's.

Finally, I would like to say a little about the implications of the program for philosophers. The first case I used above to argue for the peculiarity of the scientific community, the case of the airplane, is actually a case from technology. This is because the kind of rationality I have looked at is most conspicuous in technology. Thus, even though philosophers of science tend to ignore technology when they talk about rationality, they will have to take the idea of technological rationality seriously if they want a fruitful cooperation with sociologists in this field. Are scientific and technological rationalities different? If they are different, what are the differences and what are the similarities? A successful development of the 'even stronger program' will be partly dependent on the exploration of these questions.


Bloor, D. (1976) Knowledge and Social Imagery. Chicago: University of Chicago Press.

Campbell, D.T. (1993) "Plausible coselection of belief by referent: all the objectivity that is possible", Perspectives on Science 1, 88-108.

Cherniak, C. (1986) Minimal Rationality. Cambridge: The MIT Press.

Cole, S. (1992) Making Science: Between Nature and Society. Cambridge: Harvard University Press.

Collins, H.M. (1983) "An empirical relativist programme in the sociology of scientific knowledge", in K. D. Knorr-Cetina and M. Mulkay (eds.), Science Observed: Perspectives on the Social Study of Science. London: Sage Publications, 85-114.

--. (1985) Changing Order: Replication and Induction in Scientific Practice. London: Sage Publications.

--. (1994) "A strong confirmation of the experimenter's regress" in Studies in History and Philosophy of Science 25, 493-503.

Collins, H.M. and Yearley, S. (1992) "Epistemological chicken" in Pickering (ed.) 1992, 301-326.

Franklin, A. (1994) "How to avoid experimenter's regress" in Studies in History and Philosophy of Science 25, 463-491.

Gross, A. (1990) The Rhetoric of Science. Cambridge: Harvard University Press.

--. (1996) "Preface: the rhetoric of science 1996" in The Rhetoric of Science, second edition. Cambridge: Harvard University Press, vi-xxxiii.

Knorr-Cetina, K.D. (1983) "The ethnographic study of scientific work: towards a constructivist interpretation of science", in K. D. Knorr-Cetina and M. Mulkay (eds.), Science Observed: Perspectives on the Social Study of Science. London: Sage Publications, 115-40.

Latour, B. and Callon, M. (1992) "Don't throw the baby out with the bath school! A reply to Collins and Yearley" in Pickering (ed.) 1992.

Latour, B. and Woolgar, S. (1979) Laboratory Life: The Social Construction of Scientific Facts. Princeton: Princeton University Press.

Merton, R.K. (1942) "Science and Technology in a Democratic Order", Journal of Legal and Political Sociology 1, 115-126.

Pickering, A. (ed.) (1992) Science as Practice and Culture. Chicago: University of Chicago Press.

Putnam, H. (1983) "Philosophers and human understanding" in Realism and Reason. Cambridge: Cambridge University Press, 184-204.