The Oxford Companion to Philosophy
the conservation of matter (Antoine Lavoisier) or the conservation of energy (Robert von Mayer, who started a decisive paper with this very principle). The uniformity of Being survived as the idea that basic laws must be independent of space, time, and circumstance. 'For us physicists', wrote Einstein, almost repeating Parmenides, 'the distinction between past, present and future has no other meaning than that of an illusion, though a tenacious one.'

A third group that affected Western science and its philosophy were the early scientists themselves. They differed from philosophers by favouring specifics and from artisans by their theoretical bias. With the exception of physicians like Alcmaeon of Croton, who wrote a medical textbook and who lived, probably, in the early fifth century BC, they became professionals only at the time of the Sophists. Towards the middle of the fifth century BC arithmetic, geometry, astronomy, and harmonics were already dreaded subjects of instruction (Plato, Protagoras 318d–f). They were also centres of intellectual activity and popular interest; even Aristophanes made fun of mathematicians. The arguments between scientists, philosophers, and those artisans who explained and defended their enterprise in writing, as well as the more specific arguments between scientific, philosophical, and practical schools, form an early, rather heterogeneous, and not always fully documented, philosophy of science.

Thus we may conjecture that the transition from a geometry and *number theory whose propositions could be confirmed, one by one, by intuitively evident arrangements (pebble figures, drawings) to systems of statements based on principles and proofs was accompanied by a vigorous debate—but it is difficult to identify stages and individuals. In many subjects 'scientific' assumptions were closely intertwined with magical and religious ideas. This bothers historians, who want to describe the past exactly as it was but without conferring honour on what they regard as superstitious nonsense. It did not bother the author of On the Sacred Disease, who ridiculed temple medicine and regarded health and illness as purely natural phenomena, or the author of On Ancient Medicine, who rejected philosophy as being too remote for medical practice. Galen's essays on the nature of science illustrate the debate between empiricists and theoreticians in the second century AD.

In contrast with these local quarrels, Plato tried to build a philosophy that combined technical excellence with religion and an orderly politics. Outstanding scientists assisted him. Starting from the divine properties of judgement, foresight, and wisdom (Laws 892b2ff.), Plato postulated that the basic *laws of the universe must be simple and timeless. Observed regularities, he said, do not reveal basic laws. They depend on matter, which is an agent of change. Even the best-established astronomical facts do not last for ever (Republic 530a8ff.). To find the principles, say, of planetary motion, it is therefore necessary to develop mathematical models 'and to leave the phenomena of the heavens aside' (Republic 530b7ff.). Strangely enough, this passage was reviled by scientists, who, being aware of the many disturbances that conceal the 'pure case' (perturbations, the effects of tidal friction, precession, atmospheric refraction, instrumental failure, subjective errors, etc., in the case of planetary motion), often started with theories and considered observations only later.
It is theory that teaches us what observations are and what they mean, said Einstein. Important discoveries (the stability of the planetary system, the details of Brownian motion, the particle character of light, the *uncertainty relations) were made by proceeding accordingly.

This was not the procedure favoured by Aristotle, however. Taking experience at face value, he tried to reconcile observations, common-sense, and abstract thought. He was the first systematic philosopher of science of the West. He raised many of the issues that constitute the subject today and suggested solutions that are still influential. He described how facts turn into concepts and, further, into principles (Analytica posteriora 99b35ff.) and how things give rise to perceptions (De anima 418a4ff., 424a17ff.). For Aristotle these were natural processes which obeyed his general laws of motion and guaranteed the consistency of his empiricism. The deductive structure he proposed for *explanations served the exposition, not the discovery, of knowledge: Aristotle had no explicit theory of research. However, he left us examples which show what he did.

He began with 'phenomena'. These could be observations, common opinions, traditional beliefs, conceptual relations, or the views of earlier thinkers. Aristotle used special teams to collect them; he established a natural history museum and a library of maps and manuscripts, and laid the foundation of all histories of Greek philosophy, mathematics, astronomy, medicine, and forms of government. Next he analysed the phenomena in a particular area; he extrapolated and removed contradictions, staying close to observation when the area was empirical, or to linguistic usage when it was abstract. His notion of place, for example, retains the idea that place is a container of sorts, but with the meaning of 'being in' freed from *paradoxes. Finally, he formulated definitions to summarize what he had obtained. A general theory of change and interaction, the conceptual possibilities discussed, for example, in his Metaphysics, and a theory of mathematics which explained how mathematical concepts functioned in his largely qualitative universe served as a framework. Aristotle also started and considerably advanced the study of social, biological, and psychological phenomena. 'No one prior to Darwin has made a greater contribution to our understanding of the living world than Aristotle', wrote E. Mayr, a leading modern biologist.

The rise of modern science undermined important parts of the Aristotelian enterprise. It was a complex process which is still not fully understood. Some earlier historians and philosophers have described it in a simple and tendentious way. This is not surprising. The participants themselves misled them. Thus Newton asserted that natural laws could be made manifest by collecting 'phenomena' (which for him were either particular experimental findings or observable regularities like Kepler's laws), inferring conclusions, generalizing them 'by induction', and checking the result by comparing it with further facts. He thought that gravitation, the laws of motion, and the basic properties of light had been discovered and established in precisely this fashion. He added that known laws might be explained by 'hypotheses' and proposed a variety of models to make sense of the properties of light and matter.
This account suggests a hierarchy reaching from observations, measurements, low-level generalizations, and theories to entire sciences and overarching theoretical schemes. Indeed, such a hierarchy for a long time formed the background of discussions about the support, the implications, the explanatory (reductive) power, and the meaning of scientific statements. Scientists like Herschel and Whewell, and philosophers like Mill, Carnap, Hempel, Ernest Nagel, Popper, inductivists and deductivists alike, used the scheme, packing whatever fissures they perceived into 'evidence', 'initial conditions', 'auxiliary hypotheses', 'approximations', 'correspondence rules', and 'ceteris paribus clauses'. In a purely formal way they preserved the coherence of the knowledge 'on top' and its continuity with what went on 'below'. Kant's codification of Newtonian science, the attempt of *logical empiricists to 'reconstruct' or 'explicate' science by translating it into a uniform language, and the idea of a uniform *scientific method centred in physics further increased the impression of compactness. Remaining cracks were concealed by distinguishing, after Herschel, between a context of discovery and a context of justification: discovering new laws, facts, theories, may be a wildly irrational process—but establishing and presenting what has been found is subjected to strict and rational rules. This wonderfully harmonious and rather overwhelming fiction was gradually dismantled by a series of developments in the philosophy, the history, and the sociology of science as well as in the natural and social sciences themselves.

Problems occur already in Newton. Discussing the derivation of his law of gravitation, he admits that Kepler's laws are not strictly true, but decides to neglect 'those small and inconsiderable errors' (Principia, tr. Andrew Motte (1729), 401), which means that his empirical premisses are idealizations. Starting from here, Duhem argued that all experimental reports and low-level laws are idealizations and that the corresponding theories do not describe anything, while Cartwright showed that such theories are almost always false.

Newton also gave different weight to different phenomena. Being confronted with facts that contradicted his views (on light), he declared that his own results had already decided the matter. Again he admitted in practice what he had denied in philosophy, namely that the selection of data involves personal judgements. More recent research (Pickering, Galison, Rudwick, and others) has added that scientific facts are constituted by debate and compromise, that they harden with the distance from their origin, that they are manufactured rather than read off nature, and that the activities that produce and/or identify them form complex and, with respect to theory, relatively self-contained cultures. Even laws and theories belonging to the same general field may split into separate domains with separate criteria. There are many breaks in the alleged hierarchy from fact to theory.
In the mean time historians and sociologists are taking a new look at power centres, institutions, and social groups; they point out that scientists often depend on patronage and choose their problems and their methods accordingly; they inquire how instruments like the telescope, the microscope, the air pump, or Millikan's oil-drop experiments could produce results and change views long before they were theoretically understood; they trace the changing relations between philosophers (who had defined reality), mathematicians (who had ordered events in it), and artisans (who were granted skills but denied understanding). On a more theoretical level they explore the role of terms not directly related to observation and describe how even relatively simple acts of perception (such as seeing a fly) were gradually broken up into processes (propagation of physical light; physiological reaction of eye and brain; 'mental' phenomena) whose mutual coherence is still a problem.

The role of experience turns out to be vastly more complex than empiricists up to and including the members of the *Vienna Circle had assumed. Common-sense and sciences such as biology, meteorology, geology, medicine, provide ample evidence for regularities and exceptions. Nature is what happens always, or almost always, said Aristotle (De partibus animalium 663b27ff.). Thus the belief in inexorable laws of nature which inspired Galileo, Descartes, and their followers, which gave rise to important theoretical developments and became a decisive ingredient of modern physics, not only was not based on experience, but clashed with it in many areas. This further widened the gap between common-sense, qualitative knowledge, and the gradually emerging edifice of modern science.

The edifice started falling apart in the twentieth century. *Mathematics, apparently the most secure and well-founded science, divided into schools with different philosophies and different conditions for acceptable results. Logicists argued that mathematics was part of logic and therefore as unambiguous and compelling as that discipline. Intuitionists interpreted mathematics as a human enterprise and inferred, for example, that some of Cantor's theorems and methods could not be accepted. Trying to save these and other parts of classical mathematics, Hilbert and his collaborators formalized the relevant proofs and examined the resulting structures in a way that satisfied the intuitionists' criteria. The programme collapsed when Gödel showed that the idea of mathematics as a comprehensive and provably consistent system was incoherent.
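Gödel's second incompleteness theorem, which lies behind this verdict, can be stated roughly as follows: for any consistent, effectively axiomatized theory T strong enough to include elementary arithmetic,

\[ T \nvdash \mathrm{Con}(T), \]

that is, T cannot prove the formal statement of its own consistency. So no single formal system can be both comprehensive in Hilbert's sense and demonstrably consistent by its own lights.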
Following Einstein, Reichenbach, Grünbaum, and Michael Friedman developed new philosophies of *space, *time, and *confirmation, while quantum mechanics opened a gulf between space-time and matter and closed the traditional gulf between observer and reality in ways that keep troubling scientists and have philosophers in a tizzy. Wittgenstein and Quine revealed the wishful thinking implicit in logical empiricism; biologists, chemists, historians, social scientists, aided by philosophers, reasserted an independence they had possessed in the nineteenth century, while the 'New History' reached levels of concreteness unimagined before. Surrounded by the ruins of once well-established patterns of knowledge, Kuhn, in 1962, proposed a new and, to most philosophers, rather upsetting account of scientific change. Like Aristotle, Kuhn emphasized the collaborative character of science and the role of shared facts, concepts, procedures. But he also asserted that change ('progress' in the older philosophical versions) might sever all logical connections with the past. Adopting his views, one could no longer assume that science accumulates facts, or that theories can be reduced, by approximation, to their more precise and more comprehensive successors.

Kuhn's book, The Structure of Scientific Revolutions, was the last major attempt so far to subject a complex practice, science, to abstract thought. It clashed with important ingredients of *rationalism. After 1962 philosophers tried either to reinforce these ingredients or to show that they were not in danger, or they introduced less binding rules, or else they concentrated on problems apparently untouched by Kuhn. Older approaches are still producing interesting results (example: Achinstein's Bayesian reconstruction of nineteenth-century debates about light and matter, which seemed to call for a less orderly account). The issue between *realism and *empiricism, which changed with the arrival of *quantum mechanics and was sharpened by the interventions of G. Maxwell, Richard Boyd, Ernan McMullin, Putnam, van Fraassen, Cartwright, and others, is as alive as ever. Already before Kuhn some writers had opted for cognitive models of scientific knowledge which are naturalistic—they do not distinguish between logical and empirical *'laws of thought'—and based on only partly rational patterns of adaptation. Others had emphasized details and objected to premature generalizations. These researchers appreciate what Kuhn did, but think that his approach is still far too abstract. They study particular events, conduct interviews, invade laboratories, challenge scientists, examine their technologies, images, conceptions, and explore the often glaring antagonisms between disciplines, schools, and individual research groups. Summarizing their results, we can say that the problem is no longer how to articulate the monolith science, but what to do with the scattered collection of efforts that has replaced it.

A topic that was often neglected or dealt with in a dogmatic way is the authority of science. Is science the best type of knowledge we possess, or is it just the most influential? This way of putting the question has by now become obsolete. Science is not one thing, it is many; it is not closed, but open to new approaches. Objections to novelty and to alternatives come from particular groups with vested interests, not from science as a whole. It is therefore possible to gain understanding and to solve problems by combining bits and pieces of 'science' with prima facie 'unscientific' opinions and procedures. Architecture, technology, work in *artificial intelligence, management science, ecology, public health, and community development are examples. Purely theoretical subjects have profited from foreign invasions. One can even succeed by altogether staying outside 'science'. Numerous non-scientific cultures supported their members materially and spiritually. True, they ran into difficulties—but so did our science-based Western civilization. The old antagonism between practice and theory and the related antagonism between 'scientific' and 'unscientific' approaches may still survive in practice, or in some archaic slogans; however, it has lost much of its philosophical bite. p.k.f.

*medicine, philosophy of; revolutions, scientific.
The many aspects and schools of ancient science are discussed, with ample literature, in the books and articles of G. E. R. Lloyd.
Ernan McMullin, The Inference that Makes Science (Milwaukee, 1992) is a concise account that includes the medieval period.
For the changing ways of dealing with scientific practice see A. Pickering (ed.), Science as Practice and Culture (Chicago, 1992).
I. Hacking, Representing and Intervening (Cambridge, 1983) describes the fruitful confusion of post-Kuhnian thought, to which Hacking himself has made important contributions.

science, problems of the philosophy of. The philosophy of science can be divided into two broad areas: the epistemology of science and the metaphysics of science. The epistemology of science discusses the justification and objectivity of scientific knowledge. The metaphysics of science discusses philosophically puzzling aspects of the reality uncovered by science.

Questions about the epistemology of science overlap with questions about knowledge in general. A central issue is the problem of induction. *Induction is the process which leads us from observations of particular cases to such universal conclusions as that 'All bodies fall with constant acceleration'. The problem is that such arguments are not logically valid. The truth of the particular premisses does not guarantee the truth of the universal conclusion. That all bodies observed so far have fallen with constant acceleration does not guarantee that all future ones will do so too.

One popular response to the problem of induction is due to Karl Popper. In Popper's view, science does not rely on induction in the first place. Rather it puts forward hypotheses in a conjectural spirit, and then strives to refute them. Popper argues that as long as such hypotheses are falsifiable, in the sense that there are possible observations that would disprove them, then the objectivity of science is assured.

Critics of Popper's 'falsificationism' complain that it offers no account of our entitlement to believe in the truth of scientific theories, rather than their falsity. Popper only accounts for negative scientific knowledge, as opposed to positive knowledge. He rightly points out that a single counter-example can show us that a scientific theory is wrong. But he says nothing about what can show us that a scientific theory is right. Yet it is positive knowledge of this latter kind that is supposed to follow from inductive inferences. What is more, it is this kind of positive knowledge that makes induction so important. We can cure diseases and send people to the moon because we know that certain causes do always have certain results, not because we know that they don't. If Popper cannot explain how we sometimes know that 'All A's are B's', rather than just 'It's false that all A's are B's', then he has surely failed to deal properly with the problem of induction.

An alternative response to the problem of induction is offered by *Bayesian confirmation theory. Bayesians argue that our beliefs come in degrees, and that such degrees of belief, when rational, conform to the probability calculus. In itself, this does not imply that any particular degree of belief in universal scientific theories is mandated—only that an individual's degrees of belief must respect the axioms of probability. However, Bayesians also argue that *Bayes's theorem implies a rational strategy for updating our degrees of belief in response to new evidence, and in particular that it implies we should increase our degree of belief in a scientific theory when its predictions are confirmed. On this basis they argue that everybody ought to come to attach a high credence to any universal generalization that is repeatedly borne out by experience. The idea is thus that, given enough evidence, everybody will eventually end up believing true universal generalizations, even if they attach different degrees of belief to them to start with. However, this doesn't work for all possible initial degrees of belief. Some eccentric initial degrees of belief are consistent with the axioms of probability, but will not lead to eventual convergence. So, for example, Bayesians don't explain what is wrong with people who never end up believing universal generalizations because they always think it is probable that the course of nature is going to change tomorrow. This shows that Bayesianism provides at best a partial solution to the problem of induction. Bayesianism may show us how our initial degrees of belief constrain the way we should respond to new evidence. But it needs to be supplemented by some further account of why some initial degrees of belief are objectively superior to others.
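The updating rule at issue here is conditionalization by Bayes's theorem. Where H is a hypothesis and E a piece of evidence, the new degree of belief in H upon learning E should be

\[ P_{\mathrm{new}}(H) \;=\; P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E)}. \]

If H entails the prediction E, then P(E | H) = 1, and the credence rises from P(H) to P(H)/P(E), a strict increase whenever P(E) < 1. With purely illustrative numbers: a prior P(H) = 0.2 together with P(E) = 0.5 yields a posterior of 0.4, and repeated successful predictions push the credence higher still.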
Another central problem in the epistemology of science is the possibility of knowledge of unobservables like viruses and electrons. Instrumentalists deny that scientific theories about unobservables can be accepted as true descriptions of an unobservable world. Rather they hold that such theories are at best useful instruments for generating observational predictions. They are opposed by those who take the realist view that science can and does discover truths about unobservables.

Some instrumentalists defend their view by appeal to the underdetermination of theory by data. According to this claim, any given body of observational data will always be compatible with a number of mutually incompatible theories about unobservables, and so cannot compel the choice of any particular such theory. This claim can be defended in turn by appeal to the 'Duhem–Quine thesis', which says that you can always retain any particular theoretical proposition in the face of apparently contrary evidence, by making adjustments to other auxiliary hypotheses in your overall theory. An alternative route to the underdetermination of theory by data is to observe that, given any successful theory that explains the observational data, we can always 'cook up' an alternative theory which explains the same observational facts. For example, Darwinism may successfully explain the fossil record, but so does Philip Henry Gosse's hypothesis that God put the fossils there in order to test our faith.

The doctrine of *instrumentalism rests on a distinction between what is observable and what is not. This distinction is not unproblematic. Some philosophers of science, most notably T. S. Kuhn and Paul Feyerabend, argue that observation is 'theory-laden', by which they mean that our prior theories influence what observations we make and what significance we attach to them. They infer from this that different scientific theories are often 'incommensurable', in the sense that there is no theory-neutral body of observational judgements to adjudicate between them.
Kuhn argues that the history of science displays a succession of 'paradigms', sets of assumptions and exemplars which condition the way scientists solve problems and understand data, and which are only overthrown in occasional 'scientific revolutions' when scientists switch from one theoretical faith to another. Kuhn and Feyerabend thus in effect generalize the instrumentalists' doubts about scientific truth. Whereas instrumentalists hold that claims about scientific unobservables cannot be accepted as true descriptions of reality, Kuhn and Feyerabend say the same about observational claims too.

Not all epistemologists of science accept Kuhn's and Feyerabend's *epistemological relativism. On the issue of observational claims, most would maintain that, even if the line between observables and unobservables is neither sharp nor immutable, basic observational judgements can still provide an impartial test of a theory's predictions. And, on the issue of theories about unobservables, many would maintain that, even if theories are always underdetermined in the sense that a number of different theories will always be compatible with any given body of data, it does not necessarily follow that we cannot rationally choose between such theories, for some of those theories may be better supported than others by that body of data.

There is, however, another powerful argument against the realist view that scientific theories are true descriptions of an independent reality. This is the poor past form of such theories. Many past scientific theories, from Ptolemaic astronomy to the phlogiston theory of combustion, have turned out to be false. So it seems we should infer, by a 'pessimistic meta-induction', that since past scientific theories have normally been false, present and future scientific theories are likely to be false too.

In response, it can be argued that even false past theories contained a large element of truth, and therefore that present and future theories can be expected to approximate to the truth. Moreover, some philosophers detect a pattern of convergence, and argue that the succeeding scientific theories are moving closer and closer to the truth. These views, however, presuppose a notion of 'likeness to truth', or *verisimilitude. It has proved surprisingly difficult to give a clear meaning to this notion. The earliest attempts to define this notion, due to Popper and others, have proved incoherent, and it is not clear whether a satisfactory clarification of this notion is possible.

During the 1980s a number of philosophers adopted a naturalized approach to the epistemology of science. Rather than seeking to identify *a priori rules of scientific method, they look to the history of science and other *a posteriori disciplines to show which methodological strategies are in fact effective means to the achievement of scientific goals. It is possible to combine this naturalized approach with the realist view that the goal of scientific theorizing is to uncover the truth. However, in the light of the arguments mentioned above, most naturalized epistemologists of science reject truth as a sensible goal for science, and instead investigate strategies for achieving such theoretical goals as simplicity, predictive success, and *heuristic fertility.

Turning now to the metaphysics of science, a central issue is the analysis of *causality.
According to *David Hume, causation, as an objective relation, is simply a matter of constant conjunction: one event causes another just in case events of the first type are constantly conjoined with events like the second. This analysis, however, generates a number of problems. First, there is the question of distinguishing genuine causal *laws of nature from accidentally true constant conjunctions: being a screw in my desk could well be constantly conjoined with being made of copper, without its being true that those screws are made of copper because they are in my desk. Second, there is a problem about the direction of causation: how do we tell causes from effects, given that a constant conjunction of A-type events with B-type events immediately implies a constant conjunction of B-type events with A-type events? And, third, there is the issue of probabilistic causation: do causes have to determine their effects, or is it enough that they are probably (rather than 'constantly') conjoined with them?

Many philosophers of science this century have preferred to talk about *explanation rather than causation. According to the popular *covering-law model of explanation, developed by Carl Hempel, a particular event is explained if its occurrence can be deduced from the occurrence of other particular events, with the help of one or more laws. But this is little different from Hume's account of causation, and not surprisingly faces essentially the same problems. How do we tell laws from accidents? Can't we sometimes deduce 'backwards', from effects to causes—as when we infer the height of the flagpole from the length of the shadow—even though we don't want to say that the length of the shadow explains the height of the flagpole? And aren't there cases where we can explain one event—Mr X contracting cancer, say—by another—his smoking sixty cigarettes a day—even though we can't deduce the former from the latter, since their connection is only probabilistic?
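Schematically, the covering-law (or deductive-nomological) model presents an explanation as a valid deduction of the explanandum E from laws and particular conditions:

\[ L_1, \ldots, L_m;\; C_1, \ldots, C_n \;\vdash\; E, \]

where the L's are laws and the C's statements of particular antecedent conditions. The flagpole case shows the worry: the schema is satisfied just as well when E describes the height of the flagpole and the C's include the length of the shadow.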
On the question of distinguishing laws from accidents, there are two possible strategies. The first remains faithful to the Humean view that law statements assert nothing more than constant conjunction, and then seeks to explain why some statements of constant conjunction—the laws—are more important than others—the accidents. The best-known version of this Humean strategy, originally proposed by F. P. Ramsey and later revived by David Lewis, argues that laws are those true generalizations that can be fitted into an ideal systematization of knowledge—or, as Ramsey put it, laws are a 'consequence of those propositions which we should take as axioms if we knew everything and organized it as simply as possible in a deductive system'. The alternative, non-Humean strategy, whose most prominent defender is D. M. Armstrong, rejects the presupposition that laws involve nothing more than constant conjunctions, and instead postulates a relationship of 'necessitation' which obtains between event-types which are related by law, but not between those which are only accidentally conjoined.

On the question of the direction of causation, Hume himself simply said that the earlier of two constantly conjoined events is the cause, and the later the effect. But there are a number of objections to using the earlier–later asymmetry of time to analyse the cause–effect asymmetry. For a start, it seems to be at least conceivable that there should be causes that are simultaneous with their effects, or even causes that come after their effects. In addition, there seem to be good reasons for wanting to run the analysis in the opposite direction, and use the direction of causation to analyse the direction of time. If we do this, then we will want some time-independent account of the direction of causation. A number of such accounts have been proposed. David Lewis argues that the asymmetry of causation derives from the 'asymmetry of overdetermination': while the *overdetermination of effects by causes is very rare, it is absolutely normal for causes to be 'overdetermined' by a large number of chains of independent effects, each of which suffices for the earlier cause. Other writers have appealed to a related probabilistic asymmetry to explain causal asymmetry, pointing out that the different causes of any given common effect are normally probabilistically independent of each other, but the different effects of a common cause are normally probabilistically correlated.

The rise of *quantum mechanics, and in particular the experimental disproof of Bell's inequality, has persuaded many philosophers of science of the falsity of *determinism. In line with this, they have sought to develop models of causation in which causes only probabilify, rather than determine, their effects. The earliest such models, influenced by Carl Hempel's account of 'inductive-statistical' explanation, required that causes should give their effects a high probability. However, while smoking unequivocally causes cancer, it never makes it highly probable. So more recent models simply require that causes increase the probability of their effects, even if this is merely from a low figure to a slightly less low figure. Models of probabilistic causation need to guard against the possibility that probabilistic associations between events may be spurious rather than genuinely causal, like the association between barometers falling and subsequent rain. It is an open question whether such spurious associations can be ruled out by purely probabilistic means, or whether further non-probabilistic criteria need to be introduced.
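In this probability-raising idiom, C is a prima-facie cause of E when

\[ P(E \mid C) > P(E \mid \neg C), \]

and a standard, broadly Reichenbachian, test for spuriousness is 'screening off': the barometer fall C raises the probability of rain E, but the drop in atmospheric pressure B screens C off from E, in that

\[ P(E \mid C \wedge B) = P(E \mid B). \]

Once the common cause B is held fixed, C makes no further difference to E, which is what marks the association as spurious.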
The notion of *probability is of philosophical interest independently of its connection with causation. There are a number of different ways of interpreting the mathematical calculus of probability. Subjective theories of probability, which developed out of J. M. Keynes's logical theory of probability, understand probabilities as subjective degrees of belief. This is the interpretation assumed by Bayesian confirmation theorists. However, most philosophers of probability argue that we need an objective interpretation of probability in addition to this subjective account. According to the frequency theory of Richard von Mises, the probability of any given type of result is the limit of the relative frequency with which it occurs in longer and longer sequences drawn from some infinite 'reference class'. One difficulty facing the frequency theory is that it will ascribe a different probability to a given single-case result when that result is considered as a member of different reference classes. To rule this out, Karl Popper proposed that probabilities should be regarded as propensities of specific experimental set-ups, in the sense that only frequencies in reference classes generated by repetitions of the same experimental set-up should count as genuine probabilities. Later versions of this propensity theory dispense with the reliance on infinite reference classes, and simply take probability to be a quantitative feature of particular set-ups, which is evidenced by frequencies in repetitions of those set-ups, but cannot be defined in terms of frequencies.
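Von Mises's definition can be put schematically: where n_A(n) is the number of occurrences of an outcome of type A among the first n members of the reference sequence,

\[ P(A) \;=\; \lim_{n \to \infty} \frac{n_A(n)}{n}. \]

The single-case difficulty is then immediate: one and the same event, this smoker's death, say, belongs to many reference classes (smokers, men, men over sixty), and the limiting frequency may differ from class to class.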
The philosophical interpretation of objective probability is tied up with our understanding of modern quantum mechanics. The interpretation of quantum mechanics, however, is still an open problem in the *philosophy of physics. Taken at face value, quantum mechanics says that when physical systems are measured, they suddenly acquire definite values of observable parameters which they did not have before. The theory specifies the probabilities of different such values, but cannot predict with certainty which will be observed. Albert Einstein's response was that quantum mechanics must therefore be incomplete, and that a future theory would identify the 'hidden variables' which do determine observed results. However, since Einstein's time new evidence against such hidden-variable theories has emerged. John Bell showed that any version of such a hidden-variable theory that respected the principles of special relativity would contradict the observational predictions of quantum mechanics; moreover, experiments have now confirmed that these quantum-mechanical predictions are correct. True, this does leave the possibility of hidden-variable theories that do not respect special relativity and allow faster-than-light causal influences, like David Bohm's version of quantum mechanics, but these now have only minority support. If hidden variables are rejected, quantum mechanics then faces the problem of making sense of quantum *measurement as a process in which a previously indeterminate reality somehow 'collapses' into some definite observable state. It is not obvious that it can answer this challenge. Even though measurements are themselves physical processes, orthodox quantum mechanics offers no theory of why measurements precipitate definite observable values, but instead simply assumes this. It thus seems likely that a satisfactory understanding of quantum measurement will depend on some radically new interpretation of the theory. According to one increasingly popular view, it is an illusion that prior indeterminacies 'collapse' into a definite state when measured; rather, reality 'branches' into a plurality of distinct non-interacting futures, one for each possible result of the measurement.

A further metaphysical aspect of the philosophy of science is the issue of *teleology. This is mainly a topic in the philosophy of *biology, since it is in the biological realm that we find the paradigm examples of teleological explanation, as when we say, for example, that chlorophyll is present in plants in order to facilitate photosynthesis. Explanations like these are of philosophical interest because they explain causes by effects, and so seem to run counter to the normal pattern of explaining effects by causes. Carl Hempel argued that such explanations are simply a species of covering-law explanation in which the fact used to explain—the photosynthesis—happens to come later in time than the fact which gets explained—the chlorophyll. However, there are counter-examples to this proposal, and attempts to tighten it up by requiring that the items involved be parts of some self-regulating system have proved problematic. The majority of philosophers of science would probably now favour a different approach, according to which teleological explanations in biology are a form of disguised causal explanation, in which implicit reference is made to a hypothesized history of natural selection during which the trait in question—chlorophyll—was favoured because it produced the relevant effect—photosynthesis. Some philosophers would question whether such 'backward-looking' explanations really deserve to be called 'teleological', since they do not in fact explain the present by the future, but by past histories of selection; this issue, however, is essentially a terminological matter.

'Special sciences' like biology, chemistry, geology, meteorology, and so on raise the question of *reductionism. One science is said to be 'reduced' by another if its categories can be defined in terms of the categories of the latter, and its laws explained by the laws of the latter. Reductionists argue that the sciences form a hierarchy in which the higher can be reduced by the lower: thus biology might be reduced by physiology, physiology by chemistry, and eventually chemistry by physics.

The issue of reductionism can be viewed either historically or metaphysically. The historical question is whether science characteristically progresses by later theories reducing earlier ones. The metaphysical question is whether the different areas of science describe different realities, or just the one physical reality described at different levels of detail. Though often run together, these are different questions. Taken as a general thesis, historical reductionism is false, for reasons relating to the 'pessimistic meta-induction' discussed above: while there are some historical episodes where new scientific theories have reduced old ones, there are as many where new theories have shown old theories to be false, and so eliminated rather than reduced them. This does not mean, however, that metaphysical reductionism is false. Even if science proceeds towards the overall truth by fits and starts, there may be general reasons for expecting that this overall truth, when eventually reached, will reduce to physical truth.

One possible such argument stems from the causal interaction between the phenomena discussed in the special sciences and physical phenomena. Thus biological, geological, and meteorological events all unquestionably have physical effects; this might seem to require that they be made of physical components. It is doubtful, however, whether this suffices to establish full-scale reductionism, as opposed to the thesis of supervenience, which holds that special properties are determined by physical properties, even though they are not identical with them as types. For example, whether or not some being is adding 2 and 2 is arguably determined by its physical make-up; yet the property of adding 2 plus 2 cannot be identified with any specific physical property, for many different beings, with different physical constitutions, are in principle capable of adding 2 plus 2.
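Schematically, with \mathcal{S} the set of special properties, the supervenience thesis says

\[ \forall F \in \mathcal{S}\;\; \forall x \forall y\, \big( x \approx_{\mathrm{phys}} y \;\rightarrow\; (Fx \leftrightarrow Fy) \big), \]

where x \approx_{\mathrm{phys}} y means that x and y are physically indiscernible: physical duplicates never differ in their special properties, even though no special property need be identical with any single physical property. This is one standard formulation among several.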
If we accept supervenience, and reject the stronger claim of 'type identity' between special and physical properties, then we will also reject the reductionist thesis that all special laws can be explained by physical laws. Instead we will hold that there are sui generis special laws, patterns which cover special types which vary in their physical make-up, and which therefore cannot be explained in terms of physical law alone. d.p.

*science, history of the philosophy of; scientific method; hypothetico-deductive method; nomic necessity; natural law.

I. Hacking, Representing and Intervening (Cambridge, 1983).
C. Hempel, Aspects of Scientific Explanation (New York, 1965).
T. S. Kuhn, The Structure of Scientific Revolutions (Chicago, 1962).
E. Nagel, The Structure of Science (New York, 1961).
K. Popper, The Logic of Scientific Discovery (London, 1959).
B. van Fraassen, The Scientific Image (Oxford, 1980).

science, pseudo: see pseudo-science.

science, social philosophy of. General term for the investigation of moral and political issues in the practice of science. These issues are very varied, ranging from the morality of animal experimentation to the accusation that science promotes a reductive, disenchanting world outlook. There are two overall questions: whether scientists should subscribe to an ethical code (like the Hippocratic Oath); and whether there is a clear distinction between cognitive ('purely scientific') and ethical values in science. a.bel.

David B. Resnik, The Ethics of Science: An Introduction (London, 1998).

science, the unity of: see unity of science.

science and knowledge: see knowledge and science.

science and life: see life and science.

science and philosophy: see philosophy and science.

science, art, and religion. The theories of science aim at accounts of the world which depend on no particular perspective on the world and no particular type of observer. Though in practice they never completely abstract from idiosyncratically human perceptions and forms of thought, their success, or otherwise, depends on how they fare against a nature which is impervious to our feelings and perceptions. *Art, by contrast, works with visions of the world expressed in concrete form, adapted precisely to human sensory faculties and emotional sensibilities. Works of art are judged by their success over time in evoking responses in human perceivers.

Religion shares the scientific aim of giving an account of the world as it is in itself, not as it is for us. But, unlike science, and closer to a work of art, it reveals the world as informed by purpose, will, and personality, as expressing the intentions of a transcendent being. In presupposing a transcendent being, religion avoids the possibility of direct refutation by empirical or scientific evidence. Even the facts of evil and suffering are normally, and from his point of view not unreasonably, taken by the religious believer to be consistent with a divine purpose which, being transcendent, we cannot fully fathom. But equally, although religion provides an answer to the questions of the meaning and ultimate genesis of the world's totality, which science raises but cannot answer, at least while staying within a strictly empirical framework, religion's appeal to transcendence deprives it of any direct empirical support. Religion, properly conceived, is based on the experiences of meaning and value which it is the province of art rather than science to express and explore.

In seeing value and personality written into the very fabric of the world, in a way which does not depend on our wishes or desires, is the religious believer indulging in mere projection or wish-fulfilment?
In considering this question, we should remember that science as such cannot pronounce on the questions relating to the world as a whole, and also reflect on how hard it is to live as if our values were just projections of human feeling, individual or collective. a.o'h.

T. Nagel, The View from Nowhere (New York, 1986).
A. O'Hear, 'Science and Religion', British Journal for the Philosophy of Science (1993).

scientific determinism: see determinism, scientific.

scientific method. Although the question of scientific method is generally thought to resolve itself into two parts—the problem of discovery and the problem of justification—it seems fair to say that philosophers have felt significantly more comfortable with the latter than the former. Indeed there are those (like Karl Popper) who have argued that philosophy can say nothing of value about discovery and that the whole topic is best left to the historian or psychologist.

Certainly it seems the case that the problem of justification lays itself more readily open to the forming of rules and criteria for identifying and producing the best kind of science. Traditionally, the discussion has been located against the ideal of an *axiom system, a powerful model set by the successes of Greek geometry. Transcending mere common-sense knowledge, science shares with mathematics some elements of its necessity and universality, although what distinguishes science from pure thought is its mandate to understand the world of empirical experience.

How does one establish the truths of science? Francis Bacon argued that scientific knowledge is gained and confirmed by a process of *induction. Precisely how this should be understood and performed has been a matter of ongoing debate. Many, although not Bacon himself, have argued that the process of induction is merely one of simple enumeration, where essentially what one does is count the cases favourable to a particular hypothesis. But this suggestion can be faulted on at least two grounds. First, it is simply false of the way in which real science proceeds. No one just goes out and counts instances, without a prior theory for inspiration. Second, no matter how much one counts, the result will always be in doubt, because of the ever-present possibility of counter-examples. Seizing on this second point, a number of philosophers have made a virtue out of necessity, arguing that the aim of science is never to achieve certain knowledge. Rather, in the light of intuitions and previous understanding, one proposes hypotheses which one then judges against experience. Inasmuch as they stand the test of time, understanding advances; but since all scientific claims are by their very nature falsifiable (to use Popper's term), there is always the possibility of disconfirmation and the need for replacement by a more powerful hypothesis.

An important body of scientific claims are causal, meaning that in some sense they tell us why things work. Newton set the modern agenda for discussion about *cause, arguing that the best causes are verae causae. This claim has been variously interpreted. The English empiricist philosopher John Herschel argued that one must aim always for analogical support, based on everyday easily understood and encountered examples.
The English rationalist philosopher William Whewell argued that one should aim to locate one's causes at the focus of a 'consilience of inductions', where many different pieces of evidence combine to point to one true explanation. To use a detective-story analogy, whereas Herschel argues that to pin the guilt on a suspect one must have eyewitness testimony, Whewell preferred circumstantial evidence, where the clues all pointed to the guilty party.

This century's advances in the understanding of the micro-world beyond the senses have led to refinements in discussions of method, as have the successes of the non-physical sciences. Much effort has been given to the question of whether there is one method uniquely for all science. Although Bacon argued strongly against teleology in science, many biologists would still claim that their material demands an understanding in terms of purpose or final cause. This is neither something theological nor something demanding causes acting out of the future and on to the present. It is something which recognizes the distinctively adapted nature of organic beings. One has a metaphor of design—a metaphor because the work of organic organization is being done by Darwin's mechanism of natural selection rather than a conscious being—which yet forces one to think of the ends that features serve, rather than merely their material causes.

Although discovery may be problematic, there are always those who try to approach it philosophically. Many have thought that Mill's so-called methods are a good start, for here one has a number of recipes for working out the existence and nature of causes. (For example, the *method of agreement claims that if one has a number of different cases of the production of an effect and only one antecedent phenomenon in common, then it is the cause.) However, as Whewell pointed out, although Mill's methods may be valuable in the discerning of causes, they are not very helpful in deciding initially what phenomena are worth explaining and what different circumstances lead to these phenomena. In other words, their virtue lies truly in the context of justification.
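Mill's method of agreement, mentioned above, admits of a stock schematic presentation (the lettering is illustrative, not Mill's own):

\[ \begin{array}{lcl} \text{Case 1:} & A,\ B,\ C & \text{followed by } e\\ \text{Case 2:} & A,\ D,\ E & \text{followed by } e\\ \text{Case 3:} & A,\ F,\ G & \text{followed by } e \end{array} \]

Since A is the only antecedent circumstance common to all the cases in which the effect e occurs, the method concludes that A is the cause, or an indispensable part of the cause, of e.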
In recent years, there have been renewed attempts to crack the discovery problem. Working from insights of the *pragmatists, Norwood Russell Hanson argued that one needs a kind of logic of *abduction, which throws up plausible hypotheses. Some have been attracted to the nature of analogy in their quest for insight. And yet others have turned to the newly developed power of computers for clues to a logic and method of discovery. But although certainly much has been learned thereby about the nature of human creative reasoning, it seems fair to say that, in the search for formal rules of method, we are little further ahead than we were when we began. Perhaps there are some things which simply do not yield to philosophical analysis, and Popper's conclusion is less one of despair than of realism. Or perhaps the very distinction itself is ill-taken, and (as various historians and sociologists would argue) the very act of scientific creativity can take place only within a certain culture and against a background of already held belief. Hence, there never can be a claim which is epistemically neutral, nor can there be a discovery unless some commitments have already been made. m.r.

*methodology; scientism.

J. Losee, The History of the Philosophy of Science (Oxford, 1972).
K. Popper, The Logic of Scientific Discovery, Eng. tr. (London, 1959).

scientism. 'Scientism' is a term of abuse. Therefore, perhaps inevitably, there is no one simple characterization of the views of those who are thought to be identified as prone to it. In philosophy, a commitment to one or more of the following lays one open to the charge of scientism. (a) The sciences are more important than the arts for an understanding of the world in which we live, or, even, all we need to understand it. (b) Only a scientific methodology is intellectually acceptable. Therefore, if the arts are to be a genuine part of human knowledge they must adopt it. (c) Philosophical problems are scientific problems and should only be dealt with as such. A successful accusation of scientism usually relies upon a restrictive conception of the sciences and an optimistic conception of the arts as hitherto practised. Nobody espouses scientism; it is just detected in the writings of others. Among the accused are P. M. and P. S. Churchland, W. V. Quine, and *Logical Positivism. p.j.p.n.

T. Sorell, Scientism (London, 1991).

scope. Many words have the syntactic role of building one or more expressions of some type into another expression of some type, as 'but' can build the sentences 'Bill is rich' and 'Jill is stinking rich' into the sentence 'Bill is rich but Jill is stinking rich'. The scope of such a word is the immediate outcome of this process. For example, the scope of 'fashion' in 'He kept his house Bristol fashion' is the adverbial phrase 'Bristol fashion', but in 'The Bristol fashion experts were wrong' it is the noun phrase 'fashion experts'. Some structural ambiguities are ambiguities of scope, for example 'superfluous' in 'Try our superfluous hair remover'. A virtue of the *artificial languages of logic is to represent many structural ambiguities of English, e.g. 'Some professors get drunk every night', as demanding a decision on the scope of some logical *constant. c.a.k.

R. M. Sainsbury, Logical Forms: An Introduction to Philosophical Logic (Oxford, 1991).

scope fallacies. Scope fallacies are endemic in philosophy and in ordinary language. 'If it snows the crops will inevitably fail' suggests misleadingly that the scope of 'inevitably' is the consequent rather than the whole conditional. A standard example is to be found in proofs of God's existence which move from 'For every contingent being there is a time when it does not exist' to 'There is a time when every contingent being does not exist'. Subtler is 'Statements of identity are, when true, necessarily true. Therefore since Elizabeth is the Queen of England, necessarily Elizabeth is the Queen of England'. It is true, of the person who is, contingently, the Queen of England that, necessarily, she is identical with Elizabeth, but it is not true of that person that, necessarily, she is Queen of England. j.j.m.

*de re and de dicto; scope.

S. Kripke, 'Speaker's Reference and Semantic Reference', in P. A. French, T. E. Uehling Jr., and H. K. Wettstein (eds.), Contemporary Perspectives in the Philosophy of Language (Minneapolis, 1979).
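The theological example in the entry above is a quantifier-shift, which standard notation makes plain. Writing E(x, t) for 'x exists at t', and letting x range over contingent beings, the premiss and the conclusion are respectively

\[ \forall x\, \exists t\, \neg E(x,t) \qquad \text{and} \qquad \exists t\, \forall x\, \neg E(x,t). \]

The first says that each contingent being fails to exist at some time or other; the second that there is some one time at which every contingent being fails to exist. The inference from the first to the second is invalid, as the parallel move from 'everyone has a mother' to 'someone is the mother of everyone' shows.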
Scottish philosophy. Scottish universities had, until recently, a traditional reverence for philosophy, which was compulsory for every degree. The tradition began in the University of St Andrews, founded in the fifteenth century. Initially the universities relied on their seniors in England and France to train their teachers. But by the end of the sixteenth century they were ready to provide their own. The subject was divided into logic, pneumatology (psychology), moral philosophy, and natural philosophy (physical science). There was no idea of exclusive specialization, let alone that the best results require it. The policy of non-specialization allowed Hume, Smith, and Reid, at the tradition's high point, to variously combine psychology, moral philosophy, optics, mechanics, economics, history, and jurisprudence. Philosophy and science were taken to be one. But, as physical science advanced and the mind–body distinction took a firmer grip, philosophers in Scotland, as elsewhere, increasingly doubted the philosophical competence of science and instead concentrated on developing metaphysical *idealism.

The earliest Scottish philosophers of note came from the south-east. They were Duns Scotus (c.1266–1308), the 'subtle doctor', and John Mair (or Major) (c.1467–1550). The former was a Franciscan academic who lectured in Cambridge, Oxford, and Paris. Primarily a theologian, he had a distinctive doctrine of *free will, emphasizing the possibility of genuine altruism, a view subsequently resumed by Hutcheson and his followers. John Mair, educated in Cambridge, also lectured in Paris, before returning to take up teaching and administrative posts in his native land. His approach seems to have been logico-linguistic.

Lacking a conception of intellectual progress, Scottish philosophy of this period assumed that anything worth knowing had already been divinely imparted to the ancient Jews or discovered by the ancient Greeks. The Reformation did not dispel this prejudice, but, if anything, reinforced it, favouring classical writers, like Plato and Cicero. The prevailing practice was to comment on whatever texts could be retrieved from ancient times, whether in Latin, Greek, or Hebrew. Thus the seventeenth century produced no original philosopher in the universities educating the professional classes. Scottish academics, by no means insular, were conscious of continental and English genius, like that of the Dutch lawyer Grotius, the French mathematician and natural philosopher Descartes, and the English natural philosophers Newton and Boyle. They also admired John Locke, who based his epistemology on physical science. The time was ripe to earn the benefit of an educational system which was non-specialist yet capable of imparting the latest discoveries in mental and natural philosophy.

The results were impressive. In mental philosophy emphasis on feeling and sensation as sources of belief produced the great works of Hutcheson, Hume, Smith, and Reid. Hutcheson showed that moral approval is a kind of pleasure, and therefore is not purely rational. Hume accepted this and went on to extend anti-rationalism in a unique claim that important beliefs, in causes, in the *external world, and in *personal identity, are instinctive and non-rational. He showed that a cause can never be completely proved and that belief in a cause is a conditioned reflex, not a logical deduction. Belief in the external world, and in personal identity, is also the result of conditioning and cannot be proved; so is belief in God. His views were widely misunderstood as being sceptical about any kind of unperceived existence, whereas he was only pointing out that beliefs about the world, causes, and personal identity go beyond any evidence.
Reid, using contemporary science to develop indirect realism, and seeing Hume as a sceptic, blamed him for saying that we mistake sensations for external objects and that beliefs are images of sensations. Though critical of Hume, he agreed that the mind generates beliefs from materials which are logically inadequate, and a good thing too, he thought. Neither Hume nor Reid was sceptical about the mind’s irrational usefulness, but both were sceptical about philosophy’s power to do any better. It was felt by some that they had downgraded reason and the nobility of their subject. Kant disparaged them, while thanking Hume for waking him from his dogmatic slumber, and argued that philosophy can justify belief in the external world, in causality, and in personal identity as necessary presuppositions of objective understanding. Hume and Reid had never doubted the need for such presuppositions, but thought it obvious, as indeed it is, that such a need is no evidence of truth.

Reid’s philosophy had critics, beginning with Dugald Stewart, who made minor refinements. William Hamilton went further, declaring that the cause of sensation cannot be known. James Frederick Ferrier, an idiosyncratic idealist, proposed that sensations have a dual nature, occurring in perceptible sense-organs while at the same time presenting themselves to consciousness as external objects, a view not unlike Hume’s. Later, Andrew Seth, the first non-professorial lecturer appointed in Scotland (thanks to A. J. Balfour), attacked Hamiltonian agnosticism. He proudly defended what he called ‘the Natural Realism of the Scottish philosophy’, arguing that substance and quality are not distinct: sensation of qualities must be of substance, revealing what it is. Henry Longueville Mansel, an English theologian and pupil of Hamilton’s, used his philosophical agnosticism to defend revealed religion, as Hume’s agnosticism had been used for the same purpose in Germany. Anti-Hamiltonianism flared up in Glasgow, led by Edward Caird, Professor of Moral Philosophy, who had imbibed German idealism at Balliol, to whose Mastership he gratefully returned.

In the twentieth century England escaped into empiricism, and the canny Scots concentrated on the history of philosophy. The early tradition of combining philosophy and science, weakened by idealism, had ended, though there was still a strong connection between philosophy and psychology, felt of all the sciences to be closest to spiritual reality. This may explain the subsequent appearance in Hume’s university of a chair of paranormal psychology. v.h.
*English philosophy; Irish philosophy.
A. Broadie, The Tradition of Scottish Philosophy (Edinburgh, 1990).
G. E. Davie, The Scotch Metaphysics (London, 2001).
James M‘Cosh, The Scottish Philosophy (London, 1875).
Rudolf Metz, A Hundred Years of British Philosophy (London, 1938).
Andrew Seth, Scottish Philosophy (Edinburgh, 1890).

Scotus, John Duns: see Duns Scotus, John.

Scotus Eriugena, John: see Eriugena, John Scotus.

sea battle: see Łukasiewicz; many-valued logic.

sea-battle argument. Argument criticized by *Aristotle in De Interpretatione, book ix. Either there will be a sea battle tomorrow or there will not; these possibilities are mutually exclusive and jointly exhaustive. It seems, therefore, to be a necessary truth that there will be a sea battle tomorrow or there will be no sea battle; and since one of these disjuncts is already true, the future seems inevitable now: we are merely ignorant of which future events will happen. The argument putatively establishes the fatalistic conclusions that everything that happens happens necessarily, and that it was always true that those events which do happen would happen. In criticism, Aristotle argues that from the necessity of the disjunction (p ∨ ¬p) it does not follow either that the disjunct p is necessary or that the disjunct ¬p is necessary. Although it is necessary that a sea battle either take place tomorrow or not, it is not necessary that it should take place tomorrow, and it is not necessary that it should not take place tomorrow. The argument therefore does not validly establish that future events are inevitable, and this is consistent with its being a necessary truth that either there will or will not be a sea battle tomorrow.
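In modern modal notation, which is not Aristotle’s own, the diagnosis may be put as follows (a conventional sketch, writing p for ‘there will be a sea battle tomorrow’ and □ for necessity):
  □(p ∨ ¬p)   (the disjunction is necessarily true)
  □p ∨ □¬p   (what fatalism requires: one of the disjuncts itself necessary)
The first does not entail the second: necessity does not distribute over disjunction, and the fatalist’s slide from one to the other is a fallacy of scope.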
s.p.
*future contingents.
Aristotle’s Categories and De Interpretatione, tr. J. L. Ackrill (Oxford, 1963).

Searle, John R. (1932– ). Philosopher of mind and language at the University of California at Berkeley. The mind, for Searle, is *intentional (à la Brentano) in that perceptions, memories, imaginings, desires, and many other mental states take objects (e.g. I see the car and I remember Aunt Fanny). Language, seen by Searle mainly from the *speech-act tradition of J. L. Austin, is also intentional, but derivatively so. His intentional theory, and the