These are my answers to the special field exam, one of the competency exams for Ph.D. candidacy in philosophy of science in the Committee on the History and Philosophy of Science (CHPS). The exam is a four-hour written exam, and I had to answer four questions out of 14. I have corrected grammatical mistakes in the original answers, but I have deliberately left my factual misstatements unchanged, because I think some of you are interested in knowing what kind of answers are considered passable (yes, I passed the exam!).

1. Question: What is skepticism? What, if any, is the relation between skepticism and science? Discuss two epistemological theories as responses to skepticism: Kant, logical positivists, naturalized epistemology, special reasons requirement (as in Austin or Shapere, etc.) Which, if any, succeeds in responding to skepticism's challenge? Argue for your answer.


There are various kinds of skepticism in philosophy (skepticism about other minds, skepticism about the past, etc.), but the most influential (and most relevant to science) version is skepticism about the external world. This kind of skepticism was originally proposed by Descartes. (Hereafter I use the word skepticism exclusively for skepticism about the external world.) The basic question of skepticism is this: is there any reason to believe that there is an external world? Isn't there an evil demon who is deceiving me? In Descartes' original formulation, he wanted absolute certainty in the answer, and naturally he failed to give a convincing argument that there is an external world (his answer depended on his peculiar theology and was not acceptable to many others). Actually, skeptical doubt arises even when we do not require absolute certainty of the answer. Do we have any reason to believe there are better than fifty-fifty odds that the external world exists?

Hume's version of skepticism concerns inductive knowledge. Do we have any reason to believe that rules of induction (reasoning from what we experience to what we do not experience) yield knowledge? His analysis is centered upon causation. When we believe A causes B, what we actually observe is (1) temporal and spatial closeness of A and B; and (2) constant conjunction between A and B. The attribute we ascribe to causality, namely the necessary connection between A and B, arises from our habit of expecting one from the other.

Hume's version of skepticism is especially relevant to science. Science is usually supposed to find out causal relationships in the world (of course there may be other aims of science, but many people assume that knowledge about causation is a central part of science). If Hume is right, we need to change this view of science. Cartesian skepticism is also a threat to science. If there is no external world, what is the object of science?

There are several attempts to reply to these questions. I would like to analyze two of the replies, namely one from the special reasons requirement and another from naturalized epistemology.

The special reasons requirement was developed by Austin. The baseline of the argument is this: when we ask a question about reality or existence, we are not asking the Cartesian skeptical question. Austin's favorite example is the question whether there is a goldfinch in my backyard. Suppose that I assert that I saw a goldfinch there. To challenge the assertion, what people do is raise a specific doubt about the claim: for example, "Isn't it a goldcrest, instead of a goldfinch?" An important characteristic of such challenges from specific doubt is that there should be an established procedure to settle the matter. The doubt about the goldcrest can be resolved by close investigation of the characteristics of the bird. The same holds even when we ask the question "Is the goldfinch real?" This question is not about the possibility of an evil demon's deceiving us, but about the possibility that the goldfinch is stuffed. This question can be settled by an established procedure. Austin's point from these examples is that, as a matter of fact, we do not worry about Cartesian skepticism.

It seems to me this reply misses the point of skepticism. It is true that we do not worry about the Cartesian demon in our ordinary and scientific discourse, but Descartes introduced a different level of discourse, namely a philosophical discourse about what we are actually talking about in our ordinary and scientific discourse. Austin's argument is not enough to ban the Cartesian worry. Peirce introduced an argument to support Austin's point. According to Peirce, it is just psychologically impossible for us to entertain Cartesian doubt, because we cannot do without believing something. This is a good argument, but again it misses the point. Instead of disbelieving everything, we can just reassess what we believe according to Cartesian criteria. The purpose of the reassessment would be to find out whether the beliefs are sure enough to be used as the foundation of our knowledge, rather than whether we should believe them. Then we can follow the Cartesian program without disbelieving anything, and it is likely that none of our current beliefs would pass the criteria. In this sense, the Peircean argument from psychological impossibility misfires. Therefore, the argument from the special reasons requirement fails to reject Cartesian skepticism.

Another attempt I would like to discuss is naturalized epistemology. This is a position developed by Quine in the paper "Epistemology Naturalized" (1969). Quine uses the image of Neurath's boat to illustrate our cognitive system. We cannot rebuild our cognitive system entirely; rather, we should take some part of it for granted and rebuild other parts. What Quine proposes to answer Cartesian skepticism is this: we need to take a large part of our science for granted to do any skeptical enterprise. And naturally, if we start from our scientific knowledge, the result is a kind of scientific world view, and it is not skeptical. Michael Devitt adds a realistic flavor to this argument. When we take science for granted, we also take the ontology of science for granted. Since science is realistic, we become realists when we take this for granted; we cannot be skeptics.

This line of argument has its own strength, but again it is not enough to rebut skepticism. The part we take for granted is not necessarily science. For example, we can take logic for granted and doubt everything else. Descartes seems to take his intuition for granted. So we can reproduce a large part of Cartesian skepticism on Neurath's boat. Devitt argues that since Cartesian skepticism is an unanswerable problem, it is just uninteresting. But such an evaluation depends on what we accept as an answer. Devitt seems to think that instantaneous solipsism is unacceptable, but if Cartesian skepticism really is an important question and the sole answer is instantaneous solipsism, maybe what we should do is find a way to live with instantaneous solipsism. Devitt does not provide any convincing argument to refute this line of thought. Therefore I think that naturalized epistemology also fails to reject the skeptical doubt.

4. Question: For Bayesianism to be relevant to scientific inference, what it needs to deliver are not mere subjective opinions but reasonable, rational, objective degrees of belief. How are prior probabilities to be assigned so as to make this delivery possible? Discuss some Bayesian responses to this challenge.


Bayesianism is an influential contemporary theory of confirmation. Bayesians believe that confirmation of theories follows Bayes's theorem:

P(T|E) = P(T) P(E|T) / P(E)

Usually Bayesians take the subjective interpretation of probabilities, namely that the probabilities in the theorem should be understood as the subjective degrees of belief of the scientist. Many anti-Bayesians (e.g. Mayo's 1996 book) attack Bayesians, saying that such subjective probabilities are irrelevant to scientific inquiry, which is supposed to be a communal, objective activity.
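To make the theorem concrete, here is a minimal numerical sketch of a single Bayesian update; the hypothesis, prior, and likelihoods are made-up illustrative numbers, not anyone's published example.

```python
# A single Bayesian update with illustrative (made-up) numbers.
prior = 0.5           # P(T): prior degree of belief in the theory
like_if_true = 0.9    # P(E|T): probability of the evidence given T
like_if_false = 0.3   # P(E|~T): probability of the evidence given not-T

# Total probability of the evidence:
# P(E) = P(E|T) P(T) + P(E|~T) P(~T)
evidence = like_if_true * prior + like_if_false * (1 - prior)

# Bayes's theorem: P(T|E) = P(T) P(E|T) / P(E)
posterior = like_if_true * prior / evidence

print(round(posterior, 3))  # prints 0.75: the evidence raises the degree of belief
```

With these numbers the evidence lifts the degree of belief from .5 to .75; the anti-Bayesian worry is precisely that the .5 (and the likelihoods) came from the scientist's head.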

There are several attempts to reply to this challenge. I will discuss Savage's swamping argument, Salmon's attempt to assign objective prior probabilities, and Howson and Urbach's reply that the question is totally irrelevant. I would like to add two more arguments which I do not find in the literature.

First, Savage argued that the assignment of the prior probability does not affect the conclusion very much. For example, suppose that there are two people who are testing whether a coin is `fair', namely whether the probability of heads of the coin is .5. One has the hypothesis that the probability of heads is .5, and the other believes the probability is .8 (strictly speaking, the tested hypothesis in this case is "the result of the next trial is heads" and the probabilities are the probabilities that the hypothesis is true). As Savage shows, whatever the real probability of heads is, after several trials their posterior probabilities converge to the real probability. The problem with this argument is that we need several unrealistic assumptions to get the convergence. For example, the difference between the two prior probabilities converges only when the two people totally agree on the expectedness of the evidence (P(E)) and the likelihood of the evidence under the hypothesis (P(E|T)). In a simple case like coin-tossing we can determine these things clearly, but in general it is unlikely that two people totally agree on these points.
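The washing out of priors that Savage appeals to can be sketched in a small simulation. The discrete grid of candidate biases, the bump-shaped priors, and the true bias of .5 are my illustrative assumptions, not Savage's own construction; note that the sketch builds in exactly the agreement on likelihoods that the worry above highlights.

```python
import random

random.seed(0)

# Two agents estimate a coin's heads-probability. Agent a's prior is
# concentrated near 0.5, agent b's near 0.8, over a grid of candidate
# biases. (Grid, prior shapes, and true bias are illustrative.)
grid = [i / 100 for i in range(1, 100)]

def make_prior(center, width=0.1):
    # Unnormalized "bump" around `center`, with a tiny floor elsewhere.
    weights = [max(1e-6, 1 - abs(p - center) / width) for p in grid]
    total = sum(weights)
    return [w / total for w in weights]

def update(dist, heads):
    # Multiply by the likelihood of the observed toss and renormalize.
    likes = [p if heads else 1 - p for p in grid]
    post = [pr * lk for pr, lk in zip(dist, likes)]
    total = sum(post)
    return [p / total for p in post]

def expected_bias(dist):
    return sum(p * w for p, w in zip(grid, dist))

a = make_prior(0.5)
b = make_prior(0.8)
true_bias = 0.5
for _ in range(500):
    heads = random.random() < true_bias
    a, b = update(a, heads), update(b, heads)

# Both posteriors concentrate near the true bias: the initial
# disagreement has been "swamped" by the data.
print(abs(expected_bias(a) - expected_bias(b)) < 0.05)
```

Crucially, both agents here share the same likelihood function; if each agent computed her own P(E|T), the convergence would no longer be guaranteed, which is the objection stated above.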

The second attempt is Salmon's attempt to assign the prior in some objective way. He thinks that we are not totally free in assigning prior probabilities. We need to obey ordinary criteria of rationality, like simplicity, fruitfulness, etc. For example, suppose that we assign prior probabilities to two theories: one is an ordinary physical theory, and the other is the theory that God is taking care of everything as if the things in the world obeyed physical laws. Since the first one is apparently simpler than the second, we need to assign a higher prior probability to the first one. If we follow Salmon's recommendation, we can ensure some objective structure in our assignment of prior probabilities. But of course we still have a wide range of choice in the assignment. This is not enough to convince anti-Bayesians.

Howson and Urbach dismiss the charge in their book, Scientific Reasoning. According to them, Bayesianism is a position about an appropriate reasoning rule, and it does not tell us the correct answers to scientific problems. To see the point, let us compare Bayes's theorem with the rules of deductive logic. In deductive logic, if we put in false premises, we (naturally) get false conclusions. Similarly, when we use Bayes's theorem, if we plug in inappropriate prior probabilities, we get unreliable posterior probabilities. Just as the rules of deductive logic do not tell us what is a good way to choose premises, Bayesianism does not tell us how to determine prior probabilities. I think that their characterization of the status of Bayes's theorem is correct, but it does not actually reply to the worry anti-Bayesians raise. What is at stake in the debate is whether we should accept Bayesianism as a proper reasoning rule in science. And the question about prior probabilities is raised in the course of assessing the fruitfulness or appropriateness of Bayes's theorem in science. If we have no procedure for using Bayes's theorem in science in a reliable manner, there is no point in accepting it. This depends pretty much on whether we have a reliable way to assign prior probabilities. In this sense, Howson and Urbach miss the point of the debate.

I would like to add other strategies to reply to anti-Bayesians (maybe someone has argued for these strategies somewhere, but I am not aware of it). The first is to understand Bayesianism as a qualitative rather than a quantitative method of confirmation. Actually, Bayesians usually do not need exact prior probabilities to assess the various methodologies used in science. For example, Popperians discuss what counts as novel evidence (which they think very important for the acceptance of a scientific theory), and Bayesians can offer some qualitative criteria for when we should accept evidence as novel. Thus, what I am suggesting here is to use Bayesianism at a meta level, to assess other methodologies in science. As long as we use Bayes's theorem in this way, we do not have to worry about the exact values of prior probabilities. What we need is some general relationships between probabilities (so, for example, Salmon's proposals are enough for such arguments). I think this reduces the problem of the assignment of subjective prior probabilities.
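The qualitative use of Bayes's theorem suggested here can be illustrated without committing to particular priors: holding P(T) and P(E|T) fixed, less expected evidence (smaller P(E)) confirms more strongly. The numbers below are arbitrary; only the ordering matters.

```python
# Qualitative comparison via Bayes's theorem: with prior and likelihood
# held fixed, surprising evidence (low P(E)) yields a higher posterior
# than unsurprising evidence (high P(E)). All figures are arbitrary.
def posterior(prior, likelihood, expectedness):
    return prior * likelihood / expectedness

prior, likelihood = 0.2, 0.9
surprising = posterior(prior, likelihood, 0.3)    # "novel" evidence
unsurprising = posterior(prior, likelihood, 0.8)  # expected evidence

print(surprising > unsurprising)  # prints True: the ordering holds
```

This kind of comparison is what a meta-level Bayesian assessment of, say, the Popperian emphasis on novel evidence would rest on; no exact prior is needed.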

7. Question: A major issue in philosophy of biology is the issue of finding appropriate levels of organization and appropriate units of organization at those levels to solve biological problems. (a) What is at issue in the units of selection debate? (b) State two possible units of selection and indicate who advocates each. (c) Argue for one or the other as the appropriate level. (You must state the criteria for being appropriate.) In your answer, indicate what (if anything) could resolve the debate.


The units of selection debate is about the appropriate way to look at the evolutionary process. Even though many biologists agree on the mechanism of natural selection, there are significant differences in the ways they understand the mechanism. The difference may influence the way they conceive their research projects. If a biologist thinks that a certain level is the unit of selection, she will concentrate upon that level in her study of the evolutionary process. If her understanding of the unit of selection is totally wrong-headed, she will spend time on inappropriate topics. So what is at stake in the units of selection debate is not only the correct way to understand the evolutionary process, but also the appropriate way to proceed in evolutionary biology.

There are several alternatives in the units of selection question. The traditional unit of selection is the individual organism (this view is held by, for example, Stephen Gould). According to this view, the fittest individual organisms are likely to leave more offspring than others, and the individuals' heritable characters will be passed on to the next generation. Evolution occurs as a result of the accumulation of this process.

On the other hand, there is the view that the gene is the appropriate unit of selection (proposed by Richard Dawkins). According to this view, individual organisms are vehicles constructed to protect genes so that the genes can replicate themselves. A gene which is good at producing an effective vehicle is likely to replicate itself into the next generation. The advantage of this view is its simplicity in explaining certain phenomena, for example (seeming) altruism found in nature. We often find that an organism sacrifices itself to help other organisms, usually close relatives. Hamilton explained this phenomenon by introducing the notion of "inclusive fitness". According to Hamilton, the fitness of an organism is not determined only by the expected number of direct offspring of the organism; we also need to count the offspring of close relatives, who share a part of the genes of the organism. So, for example, by helping three brothers (who collectively carry 3/2 of the genes of the organism), the organism will be better off even if it has to sacrifice its life. As long as we take the organism as the unit of selection, we need to use this rather complex notion of inclusive fitness. If we take the gene as the unit of selection, the situation becomes easier to understand, at least conceptually. From this point of view, a gene which constructs a vehicle that behaves so as to help another vehicle with the same gene has a better chance to replicate than other genes. There is no need for altruism in this explanation. This is why Dawkins named his book The Selfish Gene.
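The brothers example can be restated through Hamilton's rule, on which an altruistic trait is favored when rB > C (relatedness times benefit to relatives exceeds cost to the actor). The sketch below just replays the example's numbers under that rule; the function name and the unit of "one life" are my illustrative choices.

```python
# Hamilton's rule: an altruistic act is favored when r * B > C, where
# r is the coefficient of relatedness, B the reproductive benefit to
# relatives, and C the reproductive cost to the actor.
def hamilton_favors(r, benefit, cost):
    return r * benefit > cost

# Saving three full brothers (r = 1/2 each, so benefit counts 3 lives)
# at the cost of one's own life: 0.5 * 3 = 1.5 > 1, so it is favored.
print(hamilton_favors(r=0.5, benefit=3, cost=1))  # prints True

# Saving only one brother at the same cost is not: 0.5 * 1 = 0.5 < 1.
print(hamilton_favors(r=0.5, benefit=1, cost=1))  # prints False
```

On the gene's eye view this bookkeeping is unnecessary: the gene simply spreads whenever copies of it, in whatever bodies, out-replicate the alternatives.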

Gould and others attack Dawkins's position. The actual selection process does not occur at the gene level. Selection occurs through the differential behaviors of individual organisms. So if we really want to understand the selection process, we should look at the individual level rather than the gene level.

As is obvious from my description of the units of selection debate at the beginning of this answer, I think that the appropriateness of a view on the unit of selection is determined by its usefulness in biological research projects. In this sense, I support Dawkins's view that the unit of selection is the gene. The reason I support him is conceptual simplicity. As Dawkins points out in The Extended Phenotype, Hamilton once miscalculated the fitness of a gene. The simpler the central concepts are, the lower the probability of such a mistake. This is a good reason to support the `gene's eye view'. As for the objection I considered above, I do not think that the gene's eye view is an obstacle to studying the interactions of individual organisms. What we should do is distinguish replicators (genes) and interactors (individual organisms) clearly, so that we do not confuse the two levels.

Of course, not everyone agrees on the criteria of appropriateness on this issue. So I think what those in the debate should do first is settle the question of which kind of biological research will be fruitful. If they agree upon this general direction, and if they understand that fruitfulness in future research is what is at stake in the debate, the debate will be resolved.

12. Question: What is the Verstehen question? Discuss the evolution of the Verstehen question from Weber's initial formulation of the thesis to the present, giving consideration to at least five different positions including logical positivism. Identify any questionable assumptions driving the debate and explain why you think they are questionable.


The Verstehen question was originally raised by Max Weber. According to him, the social sciences require understanding of their subject in a way the natural sciences do not. When we describe people, describing their behavior is not enough for sociology. Rather, we need to understand what the motivations and beliefs of these people are and why they behave in that way. According to Weber, for this purpose we need to assume people are rational to some reasonable degree -- they have some reasonable beliefs; when they want A and B is a means to A, they do B; and so on. If we assume people are irrational, there is no clue for finding out people's motivations. In Weber's formulation, the interpretation is supposed to be a largely objective activity.

Logical positivism totally opposes Weber's position. According to the positivists, in science we are not allowed to speculate about things for which we have no empirical evidence. People's motivations and beliefs are a notorious example of such things. For, to figure out what people want and believe, the only clue is people's actions. But we can come up with many different pairs of fanciful beliefs and fanciful motivations which yield exactly the same action. Since we have no independent clues to beliefs and desires, they are totally underdetermined by the evidence. Therefore, according to positivism, this is not a relevant subject matter for science. This led to behaviorism in the social sciences.

After the 1960s, many sociologists thought that behaviorism would never give a satisfactory answer to the questions they wanted to answer. Whether we have enough empirical evidence or not, we need to speculate.

One major approach is the phenomenological approach developed by Schutz. Phenomenology studies our lifeworld, which we take for granted. According to Schutz, we use stereotypes (which he calls "typifications") to understand the world around us. To figure out what the typifications are, the best way is to ask people how they classify things. For this enterprise, interpretation is necessary. First of all, this is a study of people's interpretations of the world, and moreover, we need to interpret people's answers to reconstruct their lifeworld.

Ethnomethodology is another sociological approach which uses interpretation. Ethnomethodology was developed by Harold Garfinkel and is similar to phenomenology. The major difference between phenomenology and ethnomethodology is that ethnomethodology does not study people's consciousness or lifeworld. Rather, the subject of ethnomethodology is people's strategies for constructing reality. For example, ethnomethodologists use "breaching experiments": experiments in which someone breaches some implicit rule of society (like the appropriate responses on the phone) and the researchers observe how people reconstruct their world by interpreting such breaching behavior in some suitable way. Ethnomethodology also needs interpretation to make sense of people's behavior, just as phenomenology does.

Hermeneutics is an interesting position in this debate. Sociologists from Weber to the ethnomethodologists (except for the logical positivists) look for correct interpretations of people's behavior, but hermeneuticists admit that there is no objective interpretation. Rather, what we do when we interpret other people is a "fusion of horizons". The sociologist's world view before the interpretation changes through the process of conversation with another world view expressed by other people, usually in a text. The sociologist's view of that other world view also changes. This is the fusion of horizons, and objectivity arises as a result of it, as a consensus between people (including sociologists).

I think there is some confusion in the way sociology gathers data. Interpretation is taken as a somewhat subjective matter by many sociologists, but it is not necessarily so. What we should do is take into account information about the background of the sociologists who interpret. Then interpretation is a function of two variables: people's behavior and the sociologists' background. If we can control the sociologists' background effectively, the resulting interpretations can be standardized and become "hard facts". We may be able to statistically analyze the interpretations by using many different sociologists with controlled backgrounds. In this way, we do not need to reject the appropriateness of interpretation, while we can use a fairly scientific methodology to process the interpretations.