BY: RICHARD J. KOSCIEJEW
Rumours have been circulating about the death of epistemology. Death notices appeared in such works as Richard Rorty’s ‘Philosophy and the Mirror of Nature’ (1979) and Williams’s ‘Groundless Belief’ (1977). Of late the rumours seem to have died down, but whether they will prove to have been exaggerated remains to be seen.
Arguments for the death of epistemology typically pass through three stages. At the first stage, the critic characterizes the task of epistemology by identifying the distinctive sorts of questions it deals with. At the second stage, he tries to isolate the theoretical ideas that make those questions possible. Finally, he tries to undermine those ideas: if the ideas in question are less than compelling, there is no pressing need to solve the problems they give rise to. Thus the death-of-epistemology theorist holds that epistemology could go the way of, say, demonology or judicial astrology. These disciplines centred on questions that were once taken very seriously indeed, but as their presuppositions came to seem dubious, debating their problems came to seem pointless. Furthermore, some theorists hold that philosophy, as a distinctive, professionalized activity, revolves essentially around epistemological questions, so speculation about the death of epistemology is apt to evolve into speculation about the death of philosophy generally.
Clearly, the death-of-epistemology theorist holds that there is nothing special about philosophical problems. This is where philosophers who see little sense in talk of the death of epistemology disagree. For them, philosophical problems, including epistemological problems, are distinctive in that they are ‘natural’ or ‘intuitive’: that is to say, they can be posed and understood taking for granted little or nothing in the way of contentious, theoretical ideas. Thus, unlike problems belonging to the particular sciences, they are ‘perennial’ problems that could occur to more or less anyone, anywhere. But are the standard problems of epistemology really as ‘intuitive’ as all that? Or if they have indeed come to seem commonsensical, is this only because common sense is a repository for ancient theory? These are the sorts of questions that underlie speculation about epistemology’s possible demise.
Because it revolves around questions like this, the death-of-epistemology movement is distinguished by its interest in what we may call ‘theoretical diagnosis’: bringing to light the theoretical background to philosophical problems so as to argue that they cannot survive detachment from it. This explains the movement’s interest in historical-explanatory accounts of the emergence of philosophical problems. If certain problems can be shown not to be perennial but rather to have emerged at definite points in time, this is strongly suggestive of their dependence on some particular theoretical outlook: and if an account of that outlook makes intelligible the subsequent development of the discipline centred on those problems, that is evidence for its correctness. Still, the goal of theoretical diagnosis is to establish logical dependence, not just historical correlation. So, although historical investigation into the roots and development of epistemology can provide valuable clues to the ideas that inform its problems, history cannot substitute for problem-analysis.
The death-of-epistemology movement has many sources: in the pragmatists, particularly James and Dewey, and in the writings of Wittgenstein, Quine, Sellars and Austin. But the project of theoretical diagnosis must be distinguished from the ‘therapeutic’ approach to philosophical problems that some names on this list might suggest. The practitioner of theoretical diagnosis does not claim that the problems he analyses are ‘pseudo-problems’ rooted in ‘conceptual confusion’: rather, he claims that, while genuine, they are wholly internal to a particular intellectual project whose generally unacknowledged theoretical commitments he aims to isolate and criticize.
Turning to details, the task of epistemology, as these radical critics conceive it, is to determine the nature, scope and limits, indeed the very possibility, of human knowledge. Since epistemology determines the extent to which knowledge is possible, it cannot itself take for granted the results of any particular form of empirical inquiry. Thus epistemology purports to be a non-empirical discipline, the function of which is to sit in judgement on all particular discursive practices with a view to determining their cognitive status. The epistemologist (or, in the era of epistemology-centred philosophy, we might as well say ‘the philosopher’) is someone professionally equipped to determine what forms of judgement are ‘scientific’, ‘rational’, ‘merely expressive’, and so forth. Epistemology is therefore fundamentally concerned with sceptical questions. Determining the scope and limits of human knowledge is a matter of showing where and when knowledge is possible; but there is a project called ‘showing that knowledge is possible’ only because there are powerful arguments for the view that knowledge is impossible. The scepticism in question is first and foremost radical scepticism: the thesis that, with respect to this or that area of putative knowledge, we are never so much as justified in believing one thing rather than another. The task of epistemology is thus to determine the extent to which it is possible to respond to the challenges posed by radical sceptical arguments, by determining where we can and cannot have justification for our beliefs. If it turns out that the prospects are more hopeful for some sorts of beliefs than for others, we will have uncovered a difference in epistemological status. The ‘scope and limits’ question and the problem of radical scepticism are two sides of one coin.
This emphasis on scepticism as the fundamental problem of epistemology may strike some philosophers as misguided. Much recent work on the concept of knowledge, particularly that inspired by Gettier’s demonstration of the insufficiency of the standard ‘justified true belief’ analysis, has been carried on independently of any immediate concern with scepticism. It must be admitted that philosophers who envisage the death of epistemology tend to assume a somewhat dismissive attitude to work of this kind. In part, this is because they tend to be suspicious of attempts to state precise necessary and sufficient conditions for the application of any concept. But the determining factor is their thought that only the centrality of the problem of radical scepticism can explain the importance for philosophy that, at least in the modern period, epistemology has taken on. Since radical scepticism concerns the very possibility of justification, for philosophers who put this problem first, questions about what special sorts of justification yield knowledge, or about whether knowledge might be explained in non-justificational terms, are of secondary importance. Whatever importance they have will have to derive, in the end, from their connections, if any, with sceptical problems.
In light of this, the fundamental question for death-of-epistemology theorists becomes: what are the essential theoretical presuppositions of arguments for radical scepticism? Different theorists suggest different answers. Rorty traces scepticism to the ‘representationalist’ conception of belief and its close ally, the correspondence theory of truth. According to Rorty, if we think of beliefs as ‘representations’ that aim to correspond with mind-independent ‘reality’ (the mind as the mirror of nature), we will always face insuperable problems when we try to assure ourselves that the proper alignment has been achieved. In Rorty’s view, by switching to a more ‘pragmatic’ or ‘behaviouristic’ conception of beliefs as devices for coping with particular, concrete problems, we can put scepticism, hence the philosophical discipline that revolves around it, behind us once and for all.
Other theorists stress epistemological foundationalism as the essential background to traditional sceptical problems. There are reasons for preferring this approach. Arguments for epistemological conclusions require at least one epistemological premiss. It is therefore not easy to see how metaphysical or semantic doctrines of the sort emphasized by Rorty could, by themselves, generate epistemological problems such as radical scepticism. On the other hand, the case for scepticism’s essential dependence on foundationalist preconceptions is by no means easy to make. It has even been argued that this approach ‘gets things almost entirely upside down’. The thought here is that foundationalism is an attempt to save knowledge from the sceptic, and is therefore a reaction to, rather than a presupposition of, the deepest and most intuitive arguments for scepticism. Challenges like this certainly need to be met by death-of-epistemology theorists, who have sometimes been too ready to take as obvious scepticism’s dependence on foundationalist or other theoretical ideas. This reflects, perhaps, the dangers of taking one’s cue from historical accounts of the development of sceptical problems. It may be that, in the heyday of foundationalism, sceptical arguments were typically presented within a foundationalist context. But the crucial question is not whether some sceptical arguments do take foundationalism for granted but whether there are any that do not. This issue - indeed, the general issue of whether scepticism is a truly intuitive problem - can only be resolved by detailed analysis of the possibilities and resources of sceptical argumentation.
Another question concerns why anti-foundationalism should lead to the death of epistemology rather than to a non-foundational, hence ‘coherentist’, approach to knowledge and justification. It is true that death-of-epistemology theorists often characterize justification in terms of ‘coherence’. But sometimes talk of coherence serves only to make a negative point. According to foundationalism, our beliefs fall naturally into categories that reflect objective, context-independent relations of epistemic priority. Thus, for example, experiential beliefs are thought to be naturally or intrinsically prior to beliefs about the natural world. This relation of epistemic priority is, so to say, just a fact. Foundationalism is therefore committed to a strong form of ‘realism’ about epistemological facts and relations: call it ‘epistemological realism’. For some anti-foundationalists, talk of coherence is just a way of rejecting this picture in favour of the view that justification is a matter of accommodating new beliefs to relevant background beliefs in contextually appropriate ways, there being no context-independent, purely epistemological restrictions on what sorts of beliefs can confer evidence on what others. If this is all that is meant, talk of coherence does not point to a theory of justification so much as to the deflationary view that justification is not the sort of thing we should expect to have theories about. There is, however, a stronger sense of ‘coherence’ that does yield a genuine theory. This is the radically holistic account of justification, according to which inference depends on assessing our entire belief-system, or ‘total view’, in the light of abstract criteria of ‘coherence’. But it is questionable whether this view, which seems to demand privileged knowledge of what we believe, is an alternative to foundationalism or just a variant form.
Accordingly, it is possible that a truly uncompromising anti-foundationalism will prove as hostile to traditional coherence theories as to standard foundationalist positions, reinforcing the connection between the rejection of foundationalism and the death of epistemology.
The death-of-epistemology movement has some affinities with the call for a ‘naturalized’ approach to knowledge. Quine argues that the time has come for us to abandon such traditional projects as refuting the sceptic by showing how empirical knowledge can be rationally reconstructed on a sensory basis, hence justifying empirical knowledge at large. We should concentrate instead on the more tractable problem of explaining how we ‘project our physics from our data’, i.e., how retinal stimulations cause us to respond with increasingly complex sentences about events in our environment. Epistemology should be transformed into a branch of natural science, specifically experimental psychology. But though Quine presents this as a suggestion about how to continue doing epistemology, to philosophers who think that the traditional questions still lack satisfactory answers, it looks more like abandoning epistemology in favour of another pursuit entirely. It is significant, therefore, that in subsequent writings Quine has been less dismissive of sceptical concerns. But if this is how ‘naturalized’ epistemology develops, then, for the death-of-epistemology theorist, its claims will open up a new field for theoretical diagnosis.
Even so, a sceptical hypothesis is designed to impugn our knowledge of empirical propositions by showing that our experience is not a reliable source of beliefs. Thus one form of traditional scepticism, developed by the Pyrrhonists, namely that reason is incapable of producing knowledge, is ignored by contemporary scepticism. The sceptical hypothesis can be employed in two distinct ways: it can be used to show that our beliefs fall short of being certain, and it can be used to show that they are not even justified. In fact, as we shall see, the first use depends upon the second.
Letting ‘p’ stand for any ordinary belief (e.g., there is a table before me), the first type of argument employing the sceptical hypothesis can be stated as follows:
1. If ‘S’ knows that ‘p’, then ‘p’ is certain.
2. The sceptical hypothesis shows that ‘p’ is not certain.
Therefore, ‘S’ does not know that ‘p’.
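Set out schematically, the argument is a simple modus tollens. Here ‘K’ abbreviates ‘S knows that’ and ‘C’ abbreviates ‘it is certain that’; the symbols are introduced only for compactness and are not part of the original presentation:

```latex
\begin{align*}
&\text{1. } Kp \rightarrow Cp  && \text{(knowledge requires certainty)}\\
&\text{2. } \neg Cp            && \text{(the sceptical hypothesis undermines certainty)}\\
&\therefore\ \neg Kp           && \text{(so $S$ does not know that $p$)}
\end{align*}
```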
No argument for the first premiss is needed, because this first form of the argument employing the sceptical hypothesis is only concerned with cases in which certainty is thought to be a necessary condition of knowledge. Yet issues surrounding certainty are inextricably connected with those concerning scepticism, for many sceptics have traditionally held that knowledge requires certainty, and, of course, they claim that certain knowledge is not possible. In part in order to avoid scepticism, anti-sceptics have generally held that knowledge does not require certainty. This is characteristic of pragmatism, according to which the meaning of a concept is to be sought in the experimental or practical consequences of its application; the epistemology of pragmatism is typically anti-Cartesian, fallibilistic and naturalistic, and in some versions it is also realistic, in others not. Wittgenstein claims, roughly, that propositions which are known are always subject to challenge, whereas, when we say that ‘p’ is certain, we are foreclosing challenges to ‘p’. As he puts it, ‘knowledge’ and ‘certainty’ belong to different categories (Wittgenstein, 1969). The second form of the argument, which targets justification, explicitly employs the premiss that ‘S’ is not justified in denying the sceptical hypothesis. Its first premiss employs a version of the so-called ‘transmissibility principle’. This connects with the standard analysis of propositional knowledge, suggested by Plato and Kant among others and made prominent by Gettier’s discussion: on that analysis, one knows that ‘p’ just in case one has a justified true belief that ‘p’, so that justification, truth and belief are three individually necessary and jointly sufficient conditions - the ‘tripartite definition of knowledge’.
The belief condition requires that anyone who knows that ‘p’ believe that ‘p’, the truth condition requires that any known proposition be true, and the justification condition requires that any known proposition be adequately justified, warranted or evidentially supported.
The second premiss of the argument relies on a Cartesian notion of doubt: roughly, a proposition ‘p’ is doubtful for ‘S’ if there is a proposition that (1) ‘S’ is not justified in denying and (2) that, if added to S’s beliefs, would lower the warrant of ‘p’. It seems clear that certainty is a property that can be ascribed either to a person or to a belief. On a Cartesian characterization of absolute certainty, a proposition ‘p’ is certain for ‘S’ just in case ‘S’ is warranted in believing that ‘p’ and there are absolutely no grounds whatsoever for doubting it. One could characterize those grounds in a variety of ways (Firth, 1976; Miller, 1978; Klein, 1981, 1990). For example, a ground ‘g’ for making ‘p’ doubtful for ‘S’ could be such that (a) ‘S’ is not warranted in denying ‘g’ and:
(b1) if ‘g’ is added to S’s beliefs, the negation of ‘p’ is warranted; or
(b2) if ‘g’ is added to S’s beliefs, ‘p’ is no longer warranted; or
(b3) if ‘g’ is added to S’s beliefs, ‘p’ becomes less warranted (even if only slightly so).
Warrant might also be increased rather than just ‘passed on’. The coherence of probable propositions with other probable propositions might (defeasibly) make them all the more evident (Firth, 1964).
Nonetheless, since belief is a necessary condition of knowledge, and since we can believe a proposition without believing all of the propositions entailed by it, it is clear that an unrestricted transmissibility principle is false. Similarly, the principle fails for other uninteresting reasons. For example, if the entailment is a very complex one, ‘S’ may not be justified in believing what is entailed because ‘S’ does not recognize the entailment. In addition, ‘S’ may recognize the entailment but believe the entailed proposition for silly reasons. The interesting question is this: if ‘S’ is justified in believing (or knows) that ‘p’, and ‘p’ obviously (to ‘S’) entails ‘q’, and ‘S’ believes ‘q’ on the basis of believing ‘p’, is ‘S’ justified in believing (or in a position to know) that ‘q’?
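The restricted principle just posed as a question can be put schematically. Here ‘J’ abbreviates ‘S is justified in believing that’; the notation is introduced only for brevity:

```latex
\bigl(\, Jp \;\wedge\; \text{$p$ obviously (to $S$) entails $q$} \;\wedge\; \text{$S$ believes $q$ on the basis of $p$} \,\bigr) \;\rightarrow\; Jq
```

The restrictions on recognition and basing are what screen off the uninteresting counterexamples mentioned above.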
Even so, Quine argued that the classical foundationalist project was a failure, both in its details and in its conception. On the classical view, an epistemological theory would tell us how we ought to arrive at our beliefs; only by developing such a theory and then applying it could we reasonably come to believe anything about the world around us. Thus, on this classical view, an epistemological theory must be developed independently of, and prior to, any scientific theorizing: proper scientific theorizing could only occur after such a theory was developed and deployed. This was Descartes’ view of how an epistemological theory ought to proceed; it was what he called ‘First Philosophy’. Moreover, it is this approach to epistemological issues that motivated not only foundationalism but virtually all epistemological theorizing for the next 300 years.
Quine urged a rejection of this approach to epistemological questions. Epistemology, on Quine’s view, is a branch of natural science. It studies the relationship between human beings and their environment; in particular, it asks how it is that human beings can arrive at beliefs about the world around them on the basis of sensory stimulation, the only source of belief there is. Thus Quine comments that the relation between the meagre input [sensory stimulation] and the torrential output [our total science] ‘is a relation we are prompted to study for somewhat the same reasons that always prompted epistemology: namely, in order to see how evidence relates to theory, and in what ways one’s theory of nature transcends any available evidence’ (Quine, 1969). Quine spoke of this project of study as ‘epistemology naturalized’.
One important difference between this approach and more traditional ones becomes plain when the two are applied to sceptical questions. On the classical view, if we are to explain how knowledge is possible, it is illegitimate to make use of the resources of science: this would simply beg the question against the sceptic by making use of the very knowledge which he calls into question. Thus, Descartes’ attempt to answer the sceptic begins by rejecting all those beliefs about which any doubt is possible; Descartes must respond to the sceptic from a starting place which includes no beliefs at all. Naturalistic epistemologists, however, understand the demand to explain the possibility of knowledge differently. As Quine argues, sceptical questions arise from within science. It is precisely our success in understanding the world, and thus in seeing that appearance and reality may differ, that raises the sceptical question in the first place. We may thus legitimately use the resources of science to answer the question which science itself has raised. The question of how knowledge is possible should thus be construed as an empirical question: it is a question about how creatures such as we are (given what our best current scientific theories tell us we are like) may come to have accurate beliefs about the world (given what our best current scientific theories tell us the world is like). Quine suggests that the Darwinian account of the origin of species gives a very general explanation of why it is that we should be well adapted to getting true beliefs about our environment (Stich, 1990); although Quine himself does not suggest it, investigations in the sociology of knowledge are obviously relevant as well.
This approach to sceptical questions clearly makes them quite tractable, and its proponents see this, understandably, as an important advantage of the naturalistic approach. It is in part for this reason that current work in psychology and sociology is under such close scrutiny by many epistemologists. By the same token, detractors of the naturalistic approach argue that this way of dealing with sceptical questions simply bypasses the very question which philosophers have long dealt with: far from answering the traditional sceptical question, it is argued, the naturalistic approach merely changes the topic (e.g., Stroud, 1981). Debates between naturalistic epistemologists and their critics thus frequently focus upon whether this new way of doing epistemology adequately answers, transforms or simply ignores the questions which others see as central to epistemological inquiry. Some see the naturalistic approach as an attempt to abandon the philosophical study of knowledge.
Precisely what the Quinean project amounts to is also a subject of some controversy. Both those who see themselves as opponents of naturalistic epistemology and those who are eager to sign onto the project frequently disagree about what the project is. The essay of Quine’s which prompted this controversy (Quine, 1969) leaves a great deal of room for interpretation.
At the centre of this controversy is the issue of the normative dimension of epistemological inquiry. Philosophers differ regarding the sense, if any, in which epistemology is normative (roughly, valuational). But what precisely is at stake in this controversy is no clearer than the problematic fact/value distinction itself. Must epistemologists as such make judgements of value or epistemic responsibility? If epistemology is naturalistic, then epistemic principles simply articulate under what conditions - say, appropriate perceptual stimulation - a belief is justified or constitutes knowledge; they would be like engineering standards of, e.g., resilience for bridges. It is not obvious, however, that the appropriate standards can be established without independent judgements that, say, a certain kind of evidence is good enough for justified belief (or knowledge). The most plausible view may be that justification is like intrinsic goodness: though it supervenes on natural properties, it cannot be analysed wholly in factual terms.
Perhaps the central role which epistemological theories have traditionally played is normative. Such theories were meant not merely to describe the various processes of belief acquisition and retention, but to tell us which of these processes we ought to be using. By describing his preferred epistemological approach as a ‘chapter of psychology and hence of natural science’ (Quine, 1969), Quine encouraged many to interpret his view as a rejection of the normative dimension of epistemological theorizing (Goldman, 1986; Kim, 1988). Quine has, however, since repudiated this reading: ‘Naturalization of epistemology does not jettison the normative and settle for the indiscriminate description of ongoing procedures’ (Quine, 1986).
Unfortunately, matters are not quite as simple as this quotation makes them seem. Quine goes on to say: ‘For me, normative epistemology is a branch of engineering. It is the technology of truth-seeking . . . There is no question here of ultimate value, as in morals: it is a matter of efficacy for an ulterior end, truth or prediction. The normative, as elsewhere in engineering, becomes descriptive when the terminal parameter is expressed’ (Quine, 1986). But this suggestion, brief as it is, is compatible with a number of different approaches.
On one approach, developed by Alvin Goldman (Goldman, 1986), knowledge is just true belief which is produced by a reliable process, that is, a process which tends to produce true beliefs. Reliabilism in general is the view that a belief acquires favourable epistemic status by having some kind of reliable linkage to the truth; variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D.M. Armstrong (1973) drew an analogy between a thermometer that reliably indicates the temperature and a belief that reliably indicates the truth. Armstrong said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth according to the laws of nature.
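Armstrong’s thermometer analogy can be illustrated with a small simulation. The process, its noise level and the reliability tolerance below are all invented for the sketch; nothing here is drawn from Goldman or Armstrong beyond the bare idea that a process counts as reliable when it overwhelmingly yields true (here, tolerably accurate) beliefs:

```python
import random

random.seed(0)

def thermometer_reading(true_temp, noise=0.5):
    """A toy belief-forming process: report the temperature with small random error."""
    return true_temp + random.uniform(-noise, noise)

def reliability(process, trials=10_000, tolerance=1.0):
    """Estimate how often the process yields a (tolerably) true belief."""
    hits = 0
    for _ in range(trials):
        truth = random.uniform(-10.0, 10.0)
        belief = process(truth)
        hits += abs(belief - truth) <= tolerance
    return hits / trials

# On a simple reliabilist criterion, a true belief produced by this process
# would count as knowledge, since the process overwhelmingly yields truths.
print(reliability(thermometer_reading))  # prints 1.0: the error never exceeds the tolerance
```

A sceptic would of course press the so-called generality problem here: how finely should the process be typed, and why is this tolerance the right one? The sketch shows only the shape of the reliabilist criterion, not a defence of it.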
Yet the ‘technological’ question arises of which processes tend to produce true beliefs. Questions of this sort are clearly part of natural science, but there is also the question of the account of knowledge itself. On Goldman’s view, the claim that knowledge is reliably produced true belief is arrived at independently of, and prior to, scientific investigation: it is a product of conceptual analysis. Given Quine’s rejection of appeals to meaning, the analytic-synthetic distinction, and thus the very enterprise of conceptual analysis, this position is not open to him. Nevertheless, it is for many an attractive way of allowing scientific theorizing to play a larger role in epistemology than it traditionally has, and thus one important approach which might reasonably be thought of as a naturalistic epistemology.
Those who eschew conceptual analysis will need another way of explaining how the normative dimension of epistemology arises within the context of empirical inquiry. Quine says that the normative is not mysterious once we recognize that it becomes descriptive when the terminal parameter is expressed. But why is it conduciveness to truth, rather than something else, such as survival, that is at issue here? Why is it that truth counts as the goal at which we should aim? Is this merely a sociological point, that people do seem to have this goal? Or is conduciveness to truth itself instrumental to other goals in some way that makes it of special pragmatic importance? It is not that Quine has no way to answer these questions within the confines of the naturalistic position he defends; rather, there seem to be many different options open, and these need further exploration and elaboration.
A number of attempts to fill in the naturalistic account draw a close connection between how people actually reason and how they ought to reason, thereby attempting to illuminate the relation between the normative and the descriptive. On one view, the two are identical (Kornblith, 1985; Sober, 1978). With respect to a given subject-matter, ‘psychologism’ is the theory that the subject-matter in question can be reduced to, or explained in terms of, psychological phenomena such as mental acts, events, states, dispositions and the like. But different criteria of legitimacy are normally considered appropriate for the different types of reasoning, or roles for the faculty of reason, commonly recognized in Western culture.
Modern science, nonetheless, gave new impetus to affirmative theorizing about rationality. It was probably, at least in part, because of the important part played by mathematics in the new mechanics of Kepler, Galileo and Newton that some philosophers thought it plausible to suppose that rationality was just as much the touchstone of scientific truth as of mathematical truth. At any rate, that supposition seems to underlie the epistemologies of Descartes and Spinoza, for example, in which observation and experiment are assigned relatively little importance compared with the role of reason. Correspondingly, it was widely held that knowledge of right and wrong is knowledge of necessary truths that are to be discovered by rational intuition, in much the same way as it was believed that the fundamental principles of arithmetic and geometry are discovered. For example, Richard Price argued that a rational agent ‘void of all moral judgement . . . is not possible to be imagined’ (1797).
But in modern philosophy the most influential sceptical challenge to everyday beliefs about rationality was originated by Hume. Hume argued the impossibility of reasoning from the past to the future, or from knowledge about some instances of a particular kind of situation to knowledge about all instances of that kind. There would be nothing contradictory, he claimed, in supposing both that the sun had always risen in the past and that it would not rise tomorrow. In effect, therefore, Hume assumed that the only valid standards of cognitive rationality were rationality as conformity with the laws of deductive logic, rationality as exhibited by correct mathematical calculation, and rationality in reasoning that depends for its correctness solely on the meanings of words belonging neither to logical nor to mathematical vocabulary: thus it would be rational to infer that, if two people are first cousins of one another, they share at least one grandparent. A further form of rationality is exhibited by inductive inferences that conform to appropriate criteria, as in an inference from experimental data to a general theory that explains them. For example, a hypothesis about the cause of a phenomenon needs to be tested in a relevant variety of controlled conditions in order to eliminate other possible explanations of the phenomenon, and it would be irrational to judge the hypothesis to be well supported unless it had survived a suitable set of such tests.
Induction, however, was not a rational procedure, on Hume’s view, because it could not be reduced to the exercise of reason in one or another of these roles.
Hume’s argument about induction is often criticized for begging the question, on the grounds that induction should be held to be a valid process in its own right, with its own criteria of good and bad reasoning. But this response to Hume seems just to beg the question in the opposite direction. What is needed instead, perhaps, is to demonstrate a continuity between inductive and deductive reasoning, with the latter exhibited as a limiting case of the former (Cohen, 1989). Even so, Hume’s is not the only challenge that defenders of inductive rationality need to rebuff. Popper has also denied the possibility of inductive reasoning, and much-discussed paradoxes about inductive reasoning have been proposed by Goodman and Hempel.
Hempel’s study of confirmation (1945) introduced a paradox that raises fundamental questions about what counts as confirming evidence for a universal hypothesis. To generate the paradox, three intuitive principles are invoked:
1. Nicod’s Principle (after Jean Nicod, 1930): instances of A’s that are B’s provide confirming evidence for the universal hypothesis that all A’s are B’s, while instances of A’s that are non-B’s provide disconfirming evidence. For example, instances of ravens that are black constitute confirming evidence for the hypothesis ‘All ravens are black’, while instances of non-black ravens are disconfirming.
2. Equivalence Principle: if ℯ is confirming evidence for hypothesis h1, and h1 is logically equivalent to hypothesis h2, then ℯ is confirming evidence for h2. For example, if instances of ravens that are black are confirming evidence that all ravens are black, they are also confirming evidence that all non-black things are non-ravens, since the latter hypothesis is logically equivalent to the former.
3. A principle of deductive logic: a sentence of the form ‘All A’s are B’s’ is logically equivalent to one of the form ‘All non-B’s are non-A’s’.
Using these principles, the paradox is generated by supposing that all the non-black things so far observed have been non-ravens. These might include white shoes, green leaves and red apples. By Nicod’s principle, this is confirming evidence for the hypothesis ‘All non-black things are non-ravens’. (In the schematic version of Nicod’s principle, let the A’s be non-black things and the B’s be non-ravens.) But by principle (3), the hypothesis ‘All non-black things are non-ravens’ is logically equivalent to ‘All ravens are black’. Therefore, by the equivalence principle (2), the fact that all the non-black things so far observed have been non-ravens is confirming evidence for the hypothesis that all ravens are black. That is, instances of white shoes, green leaves and red apples count as evidence for this hypothesis, which seems absurd. This is Hempel’s ravens paradox.
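The two formal ingredients of the paradox can be checked mechanically. The following Python sketch verifies principle (3) by truth-table enumeration and then counts Nicod-style confirming instances over a toy set of observed objects; the object encoding is illustrative, not part of Hempel's presentation.

```python
from itertools import product

def implies(p, q):
    """Material conditional p -> q."""
    return (not p) or q

# Principle (3): "All A's are B's" (A -> B) is logically equivalent to
# its contrapositive "All non-B's are non-A's" (not-B -> not-A).
for a, b in product([True, False], repeat=2):
    assert implies(a, b) == implies(not b, not a)

# A toy domain of observed objects, echoing Hempel's examples.
objects = [
    {"raven": True,  "black": True},   # a black raven
    {"raven": False, "black": False},  # a white shoe
    {"raven": False, "black": False},  # a green leaf
    {"raven": False, "black": False},  # a red apple
]

# Nicod-style confirming instances of "All ravens are black":
direct = [o for o in objects if o["raven"] and o["black"]]

# Nicod-style confirming instances of the logically equivalent
# "All non-black things are non-ravens":
indirect = [o for o in objects if not o["black"] and not o["raven"]]

print(len(direct), len(indirect))  # 1 direct instance, 3 "paradoxical" ones
```

By the equivalence principle, the three shoe-leaf-apple observations confirm the raven hypothesis just as the single black raven does, which is the result that strikes us as absurd.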
Hume also argued, against philosophers like Richard Price (1787), that it was impossible for any reasoning to demonstrate the moral rightness or wrongness of a particular action. There would be nothing self-contradictory in preferring the destruction of the whole world to the scratching of one’s little finger. The only role for reason in decision-making was to determine the means to desired ends. Nonetheless, Price’s kind of ethical rationalism has been revived in more recent times by W.D. Ross (1930) and others. Perhaps Hume’s argument was based on question-begging assumptions, and it may be more cogent to point out that ethical rationalism implies a unity of moral standards that is not found to exist in the real world.
Probabilistic reasoning is another area in which the possibility of attaining fully rational results has sometimes been queried, as in the lottery paradox. And serious doubts have also been raised (Sen, 1982) about the concept of a rational agent that is required by classical models of economic behaviour. No doubt a successful piece of embezzlement may in certain circumstances further the purposes of an accountant, and need not be an irrational action. But is it entitled to the accolade of rationality? And how should its immorality be weighed against its utility in the scales of practical reasoning? Or is honesty always the rationally preferable policy?
These philosophical challenges to rationality have been directed against the very possibility of there existing valid standards of reasoning for this or that area of enquiry. They have thus been concerned with the integrity of the concept of rationality, rather than with the extent to which that concept is in fact instantiated by the actual thoughts, procedures and actions of human beings. The latter issue seems at first sight to be a matter for psychological, rather than philosophical, research. Some of this research will no doubt be concerned with the circumstances under which people fail to perform in accordance with valid principles that they have nevertheless developed or adopted, as when they make occasional mistakes in their arithmetical calculations. But there also seems to be room for research into the extent to which valid principles have been developed or adopted by various categories of the population. Some of this would be research into the success with which the relevant principles have been taught, as when students are educated in formal logic or statistical theory. Some would be research into the extent to which those who have not had any relevant education are, or are not, prone to systematic patterns of error in their reasoning. And it is this last type of research that has claimed results with ‘bleak implications for human rationality’ (Nisbett and Borgida, 1975).
One robust result is this (Wason, 1966): logically untutored subjects are presented with four cards showing, respectively, ‘A’, ‘D’, ‘4' and ‘7', and they know that every card has a letter on one side and a number on the other. They are then given the rule ‘If a card has a vowel on one side, it has an even number on the other’, and told that their task is to say which of the cards they need to turn over in order to find out whether the rule is true or false. The most frequent answers are ‘A and 4' and ‘only A’, which are both wrong, while the right answer, ‘A and 7', is given spontaneously by very few subjects. Wason interpreted this result as demonstrating that most subjects have a systematic bias towards seeking verification rather than falsification in testing the rule, and he regarded this bias as a fallacy of the same kind as Popper claimed to have discerned in the belief that induction could be a valid form of reasoning.
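Why ‘A and 7' is the right answer can be made explicit by enumeration: a card refutes the rule only if it could hide a vowel paired with an odd number. This short Python sketch (the card encoding and helper name are illustrative, not part of Wason's materials) checks each visible face:

```python
cards = ["A", "D", "4", "7"]
VOWELS = set("AEIOU")

def must_turn(face):
    """True if turning this card over could reveal a counterexample
    to 'if vowel on one side, then even number on the other'."""
    if face.isalpha():
        # A letter card can falsify only if it shows a vowel
        # (the hidden number might then be odd).
        return face in VOWELS
    # A number card can falsify only if it shows an odd number
    # (the hidden letter might then be a vowel).
    return int(face) % 2 == 1

print([c for c in cards if must_turn(c)])  # ['A', '7']
```

Turning ‘4' is pointless because the rule says nothing about what must lie behind an even number; subjects who choose it are seeking confirming instances rather than potential falsifiers.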
Some of these results concern probabilistic reasoning. For example, in an experiment on statistically untutored students (Kahneman and Tversky, 1972), the subjects are told that in a certain town blue and green cabs operate in a ratio of 85 to 15, respectively. A witness identifies the cab involved in an accident as green, and the court is told that in the relevant circumstances he says that a cab is blue when it is blue, or that a cab is green when it is green, in 80 per cent of cases. When asked the probability that the cab involved in the accident was blue, subjects tend to say 20 per cent. The experimenters have claimed that this robust result shows the prevalence of a systematic fallacy in ordinary people’s probabilistic reasoning, namely a failure to pay attention to prior probabilities, and it has been argued (Saks and Kidd, 1980) that the existence of several such results demonstrates the inherent unsoundness of mandating lay juries to decide issues of fact in a court of law.
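The answer the experimenters treat as correct comes from Bayes' theorem, weighting the witness's reliability by the fleet proportions. A minimal sketch of the calculation, using the figures given in the text:

```python
# Priors: fleet proportions in the town.
p_blue, p_green = 0.85, 0.15
# Witness reliability: correct identification in 80 per cent of cases.
p_correct = 0.80

# The witness says "green". Total probability of that testimony:
# he is wrong about a blue cab, or right about a green one.
p_says_green = (1 - p_correct) * p_blue + p_correct * p_green

# Bayes' theorem: P(blue | says green)
p_blue_given_green = (1 - p_correct) * p_blue / p_says_green

print(round(p_blue_given_green, 2))  # 0.59
```

So on the Bayesian reckoning the cab was blue with probability of roughly 59 per cent, not the 20 per cent that subjects report; the subjects' answer simply echoes the witness's error rate and ignores the prior dominance of the blue fleet.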
However, it is by no means clear that these psychological experimenters have interpreted their data correctly, or that the implications for human rationality are as bleak as they suppose (Cohen, 1981, 1982). For example, it might be argued that Wason’s experiment merely shows the difficulty that people have in applying the familiar rule of contraposition to artificial conditional relationships that lack any basis in causality or in any explanatory system. And as for the cabs, it might well be disputed whether the size of the fleet to which a cab belongs should be accepted as determining a prior probability that can count against a posterior probability founded on the causal relation between a witness’s mental powers and his courtroom testimony. To count against such a posterior probability, one would need a prior that was also rooted in causality, such as the ratio in which cabs from the blue fleet and cabs from the green fleet (which may have different policies about vehicle maintenance and driver training) are involved in accidents of the kind in question. In other words, the subjects may interpret the question as concerning causal propensities, not probabilities conceived as relative frequencies that may be accidental. It is always necessary to consider whether the dominant responses given by subjects in such experiments should be taken, on the assumption that they are correct, as indicating how the task is generally understood, instead of as indicating, on the assumption that the task is understood exactly in the way intended, what errors are being made.
Finally, there is an obvious paradox in supposing that untutored human intuitions may be systematically erroneous over a wide range of issues in human reasoning. On what non-circular basis, other than such intuitions, can philosophers ultimately found their theories about the correct norms of deductive or probabilistic reasoning? No doubt an occasional intuition may have to be sacrificed in order to construct an adequately comprehensive system of norms. But empirical data seem in principle incapable of showing that the untutored human mind is deficient in rationality, since we need to assume the existence of this rationality, in most situations, in order to provide a basis for those normative theories in terms of which we feel confident in criticizing occasional errors of performance, in arithmetical calculations and so forth.
There has been a steady stream of two-way traffic between epistemology and psychology. Psychologists have relied on epistemological doctrines and arguments to support psychological views; more recently, epistemologists have been drawn to psychology in an attempt to solve their own problems.
Many epistemological disagreements within psychology pertain in some way or other to disputes about ‘behaviourism’. The epistemological argument most widely used by behaviourists turns on the alleged unobservability of mental events or states. If cognitions are unobservable in principle, the argument runs, we have no warrant for believing that they exist and, hence, no warrant for accepting cognitive explanations. The same argument applies to non-cognitive mental states, such as sensations or emotions. Opponents of behaviourism sometimes reply that mental states can be observed: each of us, through ‘introspection’, can observe at least some mental states, namely our own (at least those of which we are conscious). To this point behaviourists have made several replies. Some (e.g., Zuriff, 1985) argue that introspection is too unreliable for introspective reports to qualify as firm scientific evidence. Others have replied that introspection is private, and that this fact alone renders introspective data unsuitable as evidence in a science of behaviour. A more radical reply, advanced by certain philosophers, is that introspection is not a form of observation but rather a kind of theorizing: when we report, on the basis of introspection, that we have a painful sensation, a thought, a mental image, and so forth, we are theorizing about what is present. On this view, the fact that we introspect does not show that any mental states are observable.
In our ordinary perception of the world, it is only after long experience that one becomes able visually to identify things in one’s familiar surroundings without going through any conscious process of inference. The expert has developed identificatory skills that no longer require the sort of conscious inferential processes that characterize a beginner’s efforts; yet the perceptual knowledge of the expert is still derived, in the sense that learning was required to know in this way. On the representationalist account, the sensory facts are, so to speak, right up against the mind’s eye, and one cannot be mistaken about them, for these facts are, in reality, facts about the way things appear to be. Normal perception of external conditions then turns out always to be a type of indirect perception: one sees that there is a tomato in front of one by seeing the appearances (of the tomato) and inferring (a process typically said to be automatic and unconscious), on the basis of certain background assumptions (e.g., that there typically is a tomato in front of one when one has experiences of this sort), that there is a tomato in front of one. All knowledge of an objective reality, then, even what commonsense regards as the most direct perceptual knowledge, is based on a still more direct knowledge of the appearances.
For the representationalist, then, perceptual knowledge of our physical surroundings is always theory-loaded and indirect. Such perception is ‘loaded’ with the theory that there is some regular, uniform correlation between the way things appear (known in a perceptually direct way) and the way things actually are (known, if known at all, in a perceptually indirect way).
Another view, direct realism, refuses to restrict direct perceptual knowledge to an inner world of subjective experience. Though the direct realist is willing to concede that much of our knowledge of the physical world is indirect, however direct and immediate it may sometimes feel, he holds that some perceptual knowledge of physical reality is direct. What makes it direct is that such knowledge is not based on, nor in any way dependent on, other knowledge and belief. The justification needed for the knowledge is right there in the experience itself.
To understand how this is supposed to work, consider an ordinary example in which ‘S’ identifies a banana (learns that it is a banana) by noting its shape and colour, perhaps even tasting and smelling it (to make sure it is not wax). In this case the perceptual knowledge that it is a banana is (the direct realist admits) indirect, dependent on S’s perceptual knowledge of its shape, colour, smell and taste. ‘S’ learns that it is a banana by seeing that it is yellow, banana-shaped, and so on. Nonetheless, S’s perception of the banana’s colour and shape is not indirect. ‘S’ does not see that the object is yellow by seeing (knowing, believing) anything more basic, either about the banana or anything else, e.g., his own sensations of the banana. ‘S’ has simply learned to identify such features, and what ‘S’ learned to do is not to make an inference, even an unconscious inference, from other things he believes. What ‘S’ acquired was a cognitive skill, a disposition to believe of yellow objects he saw that they were yellow. The exercise of this skill does not require, and in no way depends on, the having of any other beliefs. S’s identificatory success will depend on his operating in certain special conditions, of course: ‘S’ will not, perhaps, be able to identify yellow objects visually in drastically reduced lighting, at funny viewing angles, or when afflicted with certain nervous disorders. But these facts about when ‘S’ can see that something is yellow do not show that his perceptual knowledge (that ‘a’ is yellow) in any way depends on a belief (let alone knowledge) that he is in such special conditions. They merely show that direct perceptual knowledge is the result of exercising a skill, an identificatory skill that, like any skill, requires certain conditions for its successful exercise. An expert basketball player cannot shoot accurately in a hurricane. He needs normal conditions to do what he has learned to do.
So also with individuals who have developed perceptual (cognitive) skills. They need normal conditions to do what they have learned to do: to see, for example, that something is yellow. But they don’t, any more than the basketball player, have to know they are in those conditions in order to do what being in those conditions enables them to do.
This means, of course, that for the direct realist direct perceptual knowledge is fallible and corrigible. Whether ‘S’ sees that ‘a’ is ‘F’ depends on his being caused to believe that ‘a’ is ‘F’ in conditions that are appropriate for an exercise of that cognitive skill. If conditions are right, then ‘S’ sees (hence, knows) that ‘a’ is ‘F’; if they aren’t, he doesn’t. Whether or not ‘S’ knows depends, then, not on what else (if anything) ‘S’ believes, but on the circumstances in which ‘S’ comes to believe. This being so, this type of direct realism is a form of externalism.
Epistemologists often use the distinction between internalist and externalist theories of epistemic justification without offering any very explicit explication. The distinction has been mainly applied to theories of epistemic justification, but it has also been applied in a closely related way to accounts of knowledge, and in a rather different way to accounts of belief and thought content. On this way of drawing the distinction, a hybrid view, according to which some of the factors required for justification must be cognitively accessible while others need not and in general will not be, would count as an externalist view. Obviously, too, a view that was externalist in relation to a strong version of internalism (by not requiring that the believer actually be aware of all justifying factors) could still be internalist in relation to a weak version (by requiring that he at least be capable of becoming aware of them). Most prominent recent externalist views have been versions of ‘reliabilism’, whose main requirement for justification is roughly that the belief be produced by a cognitive process that makes it objectively likely that the belief is true (Goldman, 1986).
What makes such a view externalist is the absence of any requirement that the person for whom the belief is justified have any sort of cognitive access to the relation of reliability in question. Lacking such access, such a person will in general have no reason for thinking that the belief is true or likely to be true, but will, on such an account, nonetheless be epistemically justified in accepting it. Thus, such a view arguably marks a major break from the modern epistemological tradition, stemming from Descartes, which identifies epistemic justification with having a reason, perhaps even a conclusive reason, for thinking that the belief is true. An epistemologist working within this tradition is likely to feel that the externalist, rather than offering a compelling account of the same concept of epistemic justification with which the traditional epistemologist is concerned, has simply changed the subject.
The general line of argument for externalism points out that internalist views have conspicuously failed to provide defensible non-sceptical solutions to the classical problems of epistemology. In striking contrast, such problems are in general easily solvable on an externalist view. For example, Goldman (1986) offers a one-page solution, in a footnote, of the problem of induction. Thus, if we assume both that the various relevant forms of scepticism are false and that the failure of internalist views so far is unlikely to be remedied in the future, we have good reason to think that some externalist view is true. Obviously the cogency of this argument depends on the plausibility of the two assumptions. An internalist can reply that it is not obvious that internalist epistemology is doomed to failure, and that the explanation for the present lack of success may simply be the extreme difficulty of the problems in question. It can also be argued that most or even all of the appeal of the assumption that the various forms of scepticism are false depends essentially on the intuitive conviction that we do have reasons in our grasp for thinking that the various beliefs questioned by the sceptic are true, a conviction that the proponent of this argument must of course reject.
The main objection to externalism rests on the intuition that the basic requirement for epistemic justification is that the acceptance of the belief in question be rational or responsible in relation to the cognitive goal of truth, which seems to require in turn that the believer actually be aware of a reason for thinking that the belief is true (or, at the very least, that such a reason be available to him). Since the satisfaction of an externalist condition is neither necessary nor sufficient for the existence of such a cognitively accessible reason, it is argued, externalism is mistaken as an account of epistemic justification. This general point has been elaborated by means of putative intuitive counterexamples to externalism. One sort of challenge denies that the externalist conditions are necessary for epistemic justification, by appealing to examples of beliefs that intuitively seem justified but for which those conditions fail to hold. The standard examples of this sort are cases where beliefs are produced in some very non-standard way, e.g., by a Cartesian demon, but nonetheless in such a way that the subjective experience of the believer is indistinguishable from that of someone whose beliefs are produced more normally. Cases of this general sort can be constructed in which any of the standard externalist conditions, e.g., that the belief be a result of a reliable process, fail to be satisfied. The intuitive claim is that the believer in such a case is nonetheless epistemically justified, as much so as one whose belief is produced in a more normal way, and hence that externalist accounts of justification must be mistaken.
A view in this same general vein, one that might be described as a hybrid of internalism and externalism (Swain, 1981; Alston, 1989), holds that epistemic justification requires that there be a justificatory factor that is cognitively accessible to the believer in question (though it need not be actually grasped), thus ruling out, e.g., pure reliabilism. At the same time, however, though it must be objectively true that beliefs for which such a factor is available are likely to be true, this further fact need not be in any way grasped by or cognitively accessible to the believer. In effect, of the two premisses needed to argue that a particular belief is likely to be true, one must be accessible in a way that would satisfy at least weak internalism, while the other can be (and normally will be) purely external. At this point the internalist will respond that this hybrid view is of no help at all in meeting the objection that the belief is not held in the rational, responsible way that justification intuitively seems to require, for the believer in question, lacking one crucial premiss, still has no reason at all for thinking that his belief is likely to be true.
An alternative to giving an externalist account of epistemic justification, one that may be more defensible while still accommodating many of the same motivating concerns, is to give an externalist account of knowledge directly, without relying on an intermediate account of justification. Such a view will obviously have to reject the justified-true-belief account of knowledge, holding instead that knowledge is true belief that satisfies the chosen externalist condition, e.g., is a result of a reliable process (and perhaps further conditions as well). This makes it possible for such a view to retain an internalist account of epistemic justification, though the centrality of that concept to epistemology would obviously be seriously diminished.
Such an externalist account of knowledge can accommodate the commonsense conviction that animals, young children, and unsophisticated adults possess knowledge, despite the much weaker conviction (if such a conviction even exists) that such individuals are epistemically justified in their beliefs. It is also at least less vulnerable to internalist counterexamples of the sort discussed above, since the intuitions involved there pertain more clearly to justification than to knowledge. What is uncertain is what ultimate philosophical significance the resulting conception of knowledge is supposed to have. In particular, it does not seem to have any serious bearing on traditional epistemological problems or on the deeper and most troubling versions of scepticism, which seem in fact to be primarily concerned with justification rather than knowledge.
As with justification and knowledge, the traditional view of content has been strongly internalist in character. The main argument for externalism derives from the philosophy of language, more specifically from the various phenomena pertaining to natural-kind terms, indexicals, and so forth, that motivate the views that have come to be known as ‘direct reference’ theories. Such phenomena seem at least to show that the belief or thought content that can be properly attributed to a person is dependent on facts about the environment (e.g., whether he is on Earth or Twin Earth, what he is in fact pointing at, the classificatory criteria employed by the experts in his social group, and so on), not just on what is going on internally in his mind or brain (Putnam, 1975; Burge, 1979).
An objection to externalist accounts of content is that they seem unable to do justice to our ability to know the contents of our beliefs or thoughts from the inside, simply by reflection. If content is dependent on external factors pertaining to the environment, then knowledge of content should depend on knowledge of those factors, which will not in general be available to the person whose belief or thought is in question.
The adoption of an externalist account of mental content would seem to support an externalist account of justification in the following way: if part or all of the content of a belief is inaccessible to the believer, then both the justifying status of other beliefs in relation to that content and the status of that content as justifying further beliefs will be similarly inaccessible, thus contravening the internalist requirement for justification. An internalist might insist that there are no justification relations of these sorts, that only internally accessible content can either be justified or justify anything else. But such a response appears lame unless it is coupled with an attempt to show that the externalist account of content is mistaken.
Direct perception of objective facts, pure perceptual knowledge of external events, is made possible because what is needed (by way of justification) for such knowledge has been reduced. Background knowledge, in particular the knowledge that the experience does indeed suffice for knowing, isn’t needed.
This means that the foundations of knowledge are fallible. Nonetheless, though fallible, they are in no way derived. That is what makes them foundations: even if they are brittle, as foundations sometimes are, everything else rests upon them.
As a form of realism, direct realism assumes that the objects of perception exist independently of any mind that might perceive them, and it thereby rules out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being a ‘direct’ realism rules out those views, defended under the rubric of ‘critical realism’ or ‘representative realism’, in which there is some non-physical intermediary, usually called a ‘sense-datum’ or a ‘sense-impression’, that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’ perceived rather than ‘mediately’ perceived. These terms are Berkeley’s, who claims (1713) that one might be said to hear a coach rattling down the street, but that this is mediate perception as opposed to what is ‘in truth and strictness’ the immediate perception of a sound. Since the senses ‘make no inferences’, the perceiver is then said to infer the existence of the coach, or to have it suggested to him by means of hearing the sound. Thus for Berkeley, the distinction between mediate and immediate perception is explained in terms of whether or not inference or suggestion is present in the perception itself.
Berkeley went on to claim that the objects of immediate perception (sounds, colours, tastes, smells, sizes and shapes) were all ‘ideas of the mind’. Yet he held that there was no further reality to be inferred from them, so that the objects of mediate perception, what we would call ‘physical objects’, are reduced to being simply collections of ideas. Thus Berkeley uses the immediate-mediate distinction to defend ‘idealism’. A direct realist, however, can also make use of Berkeley’s distinction to define his own position. D.M. Armstrong does this by claiming that the objects of immediate perception are all occurrences of sensible qualities, such as colours, shapes and sounds, and that these are all physical existents, not ideas or any sort of mental intermediary at all (Armstrong, 1961). Physical objects, all mediately perceived, are the bearers of these immediately perceived properties.
Berkeley’s and Armstrong’s way of drawing the distinction between mediate and immediate perception, by reference to inference or the lack of it, faces major difficulties. There are cases in which it is plausible to assert that someone perceived a physical object, say a tree, even when that person was unaware of perceiving it. (We can infer from his behaviour in carefully walking around it that he did see it, even though he does not remember seeing it.) Armstrong would have to say that in such cases inference was present, because seeing a tree would be a case of mediate perception; but it would have to be an unconscious inference, and this seems baseless: there is no empirical evidence that any sort of inference was made at all.
Moreover, whether a person infers the existence of something from what he perceives seems to be more a question of talent and training than a question of what the nature of the objects inferred really is. For instance, given three different colour samples, a trained artist might not have to infer their differences; he might see the differences immediately. Someone with less colour sense, however, might see patches ‘A’ and ‘B’ as being the same in colour, and see that ‘A’ is darker than ‘C’. On this basis he might then infer that ‘B’ is darker than ‘C’. So inference can be present in determining differences in colour, yet colour was supposed to be an object of immediate perception. On the other hand, a keeper at the Metro Toronto Zoo who sees a black panther within the grounds might not have to infer what animal it is: he sees straightaway that it is a black panther. Someone unfamiliar with the zoo and its animals, however, might have to infer this from the creature’s markings in identifying it. Hence inference need not be present in cases of perceiving physical objects, yet the perception of physical objects was supposed to be mediate perception.
A more straightforward way to distinguish between different objects of perception was advanced by Aristotle in ‘De Anima’, where he spoke of objects directly or essentially perceived as opposed to objects incidentally perceived. The former comprise perceptual properties, either those discerned by only one sense (the ‘proper sensibles’), such as colour, sound, taste, smell and tactile qualities, or those discerned by more than one sense (the ‘common sensibles’), such as size, shape and motion. The objects incidentally perceived are the concrete individuals which possess these perceptual properties, that is, particular physical objects.
According to Aristotle’s direct realism, we perceive physical objects incidentally: that is, only by means of the direct or essential perception of certain properties that belong to such objects. In other words, by perceiving the real properties of things, and only in this way, can we thereby be said to perceive the things themselves. These perceptual properties, though not existing independently of the objects that have them, are yet held to exist independently of the perceiving subject; and the perception of them is direct in that no mental intermediaries have to be perceived or sensed in order to perceive these real properties.
Aristotle’s way of defining his position seems superior to the psychological account offered by Armstrong, since it is unencumbered with the extra baggage of inference or suggestion. Yet a common interpretation of the Aristotelean view leads to grave difficulties. The interpretation identifies the property of the object perceived with a property taken on by the sense organ; it is based on Aristotle’s saying that in perception the soul takes in the form of the object perceived without its matter. On this interpretation, it is easy to think of direct realism as being committed to the view that ‘colour as seen’ or ‘sound as heard’ exist independently in physical objects. Such a view has been rightly disparaged by its critics and labelled ‘naive realism’: for this is a view holding that the way things look or seem is the way things are, even in the absence of perceivers to whom they appear that way.
Similarly, such reductions could be made with regard to the other sensible properties that seem to be perceiver-dependent: sound could be reduced to sound waves, tastes and smells to the particular shapes of the molecules that lie on the tongue or enter the nose, and tactual qualities such as roughness and smoothness to structural properties of the objects felt. All of these properties would be taken to be distinct from the perceptual experiences that they typically give rise to when they cause changes in the perceiver’s sense organs. When critics complain that such a reduction leaves out the greenness of green and the yellowness of yellow (Campbell, 1976), the direct realist can answer that it is by identifying different colours with distinct light waves that we can best explain how perceivers in the same environment, with similar physical constitutions, can have similar colour experiences of green or of yellow.
A direct realist could claim that one directly perceives what is real only when there is no difference between the property proximately impinging on the sense organ and the property of the object which gives rise to the sense organ’s being affected. For colour, this would mean that the light waves reflected from the surface of the object must match those entering the eye; for sound, it means that the sound waves emitted from the object must match those entering the ear. A difference between the property at the object and that at the sense organ would result in illusion, not veridical perception. Perhaps this is simply a modern version of Aristotle’s idea that in genuine perception the soul (now the sense organ) takes in the form of the perceived object.
If it is protested that illusion might also result from an abnormal condition of the perceiver, this can also be accepted: if one’s colour experience deviated too far from normal, even when the physical properties at the object and the sense organ were the same, then misperception or illusion would result. But such illusion could only be noted against a backdrop of veridical perception of real properties. Thus, the chance of illusion due to subjective factors need not lead to views of colours, sounds, tastes, and smells as existing merely ‘by convention’. The direct realist could reply that there must be a real basis in veridical perception for any such agreement to take place at all: and veridical perception is best explained in terms of the direct perception of the properties of physical objects. It occurs, in other words, when our perceptual experience is caused in the appropriate way.
This reply on the part of the direct realist does not, of course, serve to refute the global sceptic, who claims that, since our perceptual experience could be just as it is without there being any real properties at all, we have no knowledge of any such properties. But no view of perception alone is sufficient to refute such global scepticism (Pitcher, 1971). For such a refutation we must go beyond a theory that claims how best to explain our perception of physical objects, and defend a theory that best explains how we obtain knowledge of the world.
In its best-known form, the adverbial theory of experience proposes that the grammatical object of a statement attributing an experience to someone be analysed as an adverb. For example:
(1) Rod is experiencing a pink square
is rewritten as:
Rod is experiencing (pink square)-ly
This is presented as an alternative to the act/object analysis, according to which the truth of a statement like (1) requires the existence of an object of experience corresponding to its grammatical object. A commitment to the explicit adverbialization of statements of experience is not, however, essential to adverbialism. The core of the theory consists, rather, in the denial of objects of experience (as opposed to objects of perception), coupled with the view that the role of the grammatical object in a statement of experience is to characterize more fully the sort of experience being attributed to the subject. The claim, then, is that the grammatical object is functioning as a modifier and, in particular, as a modifier of a verb. If this is so, it is perhaps appropriate to regard it as a special kind of adverb at the semantic level.
Nevertheless, ‘experience’, in the sense of acquaintance, of meeting with something directly (as through participation or observation), is an intricate and intimate affair: knowledge of something based on personal exposure. However familiar, it is not possible to define experience in an illuminating way; we know what experiences are through acquaintance with some of our own, e.g., a visual experience of a given after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface, rough or smooth, or might be part of a dream, or the product of a vivid sensory imagination).
Another core feature of the sorts of experience with which we are concerned, occurring as they do at particular points in space and time, is that they have representational content. The most obvious cases of experience with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modality and their content, e.g., a gustatory experience (modality) of chocolate ice cream (content), but more commonly we do so by means of perceptual verbs combined with noun phrases specifying their contents, as in ‘Macbeth saw a dagger’. This is ambiguous between the perceptual claim that Macbeth perceived a dagger visually and ‘Macbeth had a visual experience of a dagger’ (the reading with which we are concerned).
As in the case of other mental states and events with content, it is important to distinguish between the properties which an experience represents and the properties which it possesses. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property which it possesses, and it may even do so in virtue of possessing that property, as in the case of a rapidly changing (complex) experience representing something as changing rapidly, but this is the exception rather than the rule.
Which properties can be (directly) represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and (apparent) shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes for pure data in experience to serve as logically certain foundations for knowledge. The successors to the empiricists’ ideas of sense are the sense-data, a term introduced by Moore and Russell to refer to the immediate objects of perceptual awareness, such as colour patches and shapes, usually supposed distinct from surfaces of physical objects. The qualities of sense-data were supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual fields when conditions of perception change while physical objects remain constant.
Others do not think that this wish can be satisfied, and, being more impressed with the role of experience in providing animals with ecologically significant information about the world around them, claim that sense experience represents properties, characteristics and kinds which are much richer and more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell us, but also earth, water, men, women and fire. There is no space here to examine the factors relevant to a choice between these alternatives.
Given the modality and content of a sense experience, most of us will be aware of its character even though we cannot describe that character directly. This suggests that character and content are not really distinct, and that there is a close connection between them. For one thing, the relative complexity of the character of a sense experience places limitations on its possible content; e.g., a tactile experience of something touching one’s left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Furthermore, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences; e.g., the sort of gustatory experience which we have when eating chocolate would not represent chocolate unless it were normally caused by chocolate. Granting a contingent connection between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.
Character and content are, nonetheless, irreducibly different, for the following reasons: (1) there are experiences which completely lack content, e.g., certain bodily pleasures; (2) not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance; (3) experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different; and (4) the content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to have the content ‘singing bird’ only after the subject has learned something about birds.
According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one phenomenological and the other semantic.
In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to us, be it an individual thing, an event, or a state of affairs.
The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following: (1) simple attributions of experience (e.g., ‘Rod is experiencing a pink square’) seem to be relational; (2) we appear to refer to objects of experience and to attribute properties to them (e.g., ‘The after-image which John experienced was green’); and (3) we appear to quantify over objects of experience (e.g., ‘Macbeth saw something which his wife did not see’).
The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data: private mental entities which actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property (e.g., redness) without representing it as having any subordinate determinate property (e.g., any specific shade of red), a sense-datum may actually have a determinable property without having any determinate property subordinate to it. Even more disturbing, sense-data may have contradictory properties, since experiences can apparently represent contradictions; the sense-data theorist must either deny that there are such experiences or admit contradictory objects.
Experiences seem to present us not with bare properties (however complex), but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience with objects of perception in the case of experiences which constitute perception. According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences nonetheless appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experience, as in the work of G.E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. For the representative realist, objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may simply treat objects of perception as existing objects of experience. But most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.
A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But in terms of the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-data theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)
In view of the foregoing problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present us with an object without accepting that it actually does. The semantic argument is more impressive, but nonetheless answerable. The seemingly relational structure of attributions of experience is a challenge dealt with by the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to, and quantification over, experiences themselves, tacitly typed according to content. (Thus, ‘The after-image which John experienced was green’ becomes ‘John’s after-image experience was an experience of green’, and ‘Macbeth saw something which his wife did not see’ becomes ‘Macbeth had a visual experience which his wife did not have’.)
Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions; e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand or, if she does not acquire this belief, with a disposition to acquire it which has somehow been blocked.
This position has attractions. It does full justice to the cognitive contents of experience and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is undermined by its failure to accommodate the fact that experiences have a felt character which cannot be reduced to their content. The adverbial theory, by contrast, is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience which does not require objects of experience. Unfortunately, the oddities of explicit adverbialization of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound basic intuition, and there is reason to believe that an effective development of the theory is possible.
The relevant intuitions are: (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing which the experience is especially apt to fit; (2) that doing this is a matter of saying something about the experience itself (and perhaps also about the normal causes of such experiences); and (3) that there is no good reason to suppose that it involves the description of an object which the experience is ‘of’. Thus, the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.
Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,
(1) Frank has an experience of a brown triangle
And:
(2) Frank has an experience of brown and an experience of a triangle.
which is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience which is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. The adverbialist may reply, however, that (1) is equivalent to:
(1*) Frank has an experience of something’s being both brown and triangular.
And (2) is equivalent to:
(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular.
And the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does exactly the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there is something both brown and triangular before Frank).
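The scope point can be made explicit in modern predicate-logic notation. This is a gloss, not part of the adverbialist’s own apparatus: writing only the embedded conditions of (1*) and (2*), with ‘Brown’ and ‘Triangular’ as predicates, the entailment runs one way only:

```latex
\begin{align*}
(1^{*})\colon\ & \exists x\,\bigl(\mathrm{Brown}(x) \wedge \mathrm{Triangular}(x)\bigr) \\
(2^{*})\colon\ & \exists x\,\mathrm{Brown}(x) \;\wedge\; \exists y\,\mathrm{Triangular}(y)
\end{align*}
% (1*) entails (2*), but not conversely: a brown thing and a distinct
% triangular thing satisfy (2*) without anything satisfying (1*).
```

The difference is purely one of the scope of the existential quantifier, which is exactly the adverbialist’s claim that no object of experience need be invoked.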
A final position which should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind which the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Perhaps it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuitions.
That is to say, most generally, one has intuitive knowledge that ‘p’ when:
1: One knows that ‘p’.
2: One’s knowledge that ‘p’ is immediate, and
3: One’s knowledge that ‘p’ is not an instance of the operation of any of the five senses (so knowledge of the nature of one’s own experience is not intuitive).
On this account, neither mediate nor sensory knowledge is intuitive. Some philosophers, however, want to allow sensory knowledge to count as intuitive; to do this, they omit clause (3) above.
The two principal families of examples of mediate (i.e., not immediate) knowledge that have interested philosophers are knowledge through representation and knowledge by inference. Knowledge by representation occurs when the thing known is not what one appeals to as a basis for claiming to know it, as when one appeals to sensory phenomena as a basis for knowledge of the world (and the world is not taken to be a sense-phenomenal construct), or as when one appeals to words as a source of knowledge of the world (as when one claims that a proposition is true of the world solely by virtue of the meaning of the words expressing it).
(There are other idioms used to mark out the difference between intuitive and non-intuitive ways of knowing, such as knowing directly and knowing indirectly, or knowing by virtue of the presence of the thing known and knowing in its absence. It is sometimes useful to speak of the object of knowledge being intuitively given, meaning that we can know things about it without mediation. The justification of a claim to knowledge by appeal to its object being intuitively given is surely as good as could be. What could be a better basis for a claim to knowledge than the object of knowledge itself, given just as it is?)
One of the fundamental problems of philosophy, overlapping epistemology and the philosophy of logic, is that of giving criteria for when a deductive inference is valid: criteria for when an inference does or can transmit knowledge or truth. There are in fact two very different proposals for solutions to this problem, one that slowly came into fashion during the early part of this century, and another that has been much out of fashion but is gaining in admirers. The former, which develops out of the tradition of Aristotelian syllogistic, holds that all valid deductive inferences can be analysed and paraphrased as follows:
The sentences occurring in the deduction are aptly paraphrased by sentences with an explicit, interpreted logical syntax, which in the main consists of expressions for logical operations, e.g., predication, negation, conjunction, disjunction, quantification, abstraction, . . . ; and
The validity of the inferences made from sentences in that syntax to sentences in that syntax is entirely a function of the meaning of the signs for logical operations expressed in the syntax.
In particular, it is principally the meaning of the signs for logical operations that justifies taking certain rules of inference as valid (Koslow, 1991). Here, for example, is such a justification as given by Gottlob Frege (1848-1925), one of the great developers of this view of the nature of the proper criteria for valid deductive inference, who in fact, in the late nineteenth century, gave us an interpreted logical syntax (and so a formal deductive logic) far greater and more powerful than had been available through the tradition of Aristotelian syllogistic:
A ➞ B is meant to be a proposition that is false when ‘A’ is true and ‘B’ is false; otherwise, it is true (Frege, 1964, paraphrased; variables restricted to the True, the False).
The following is a valid rule of inference: from ‘A’ and A ➞ B, infer ‘B’. For if ‘B’ were false, then, since ‘A’ is true, A ➞ B would be false; but it is supposed to be true (Frege, 1964, paraphrased).
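Frege’s justification can be displayed as a truth table (a modern reconstruction; Frege himself argued from the stipulated truth-conditions rather than from tabular notation):

```latex
\begin{array}{cc|c}
A & B & A \rightarrow B \\
\hline
\text{T} & \text{T} & \text{T} \\
\text{T} & \text{F} & \text{F} \\
\text{F} & \text{T} & \text{T} \\
\text{F} & \text{F} & \text{T}
\end{array}
% Only the first row has both A and A -> B true, and there B is true
% as well; so the rule "from A and A -> B, infer B" can never lead
% from truth to falsehood.
```

The point of the display is that the rule’s validity rests on nothing but the stipulated meaning of the sign ‘➞’.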
Frege believed that the principal virtue of such formal-syntactical reconstructions of inferences, with validity turning on the meaning of the signs for the logical operations alone, was that it eliminated dependence on intuition and let one see exactly what our inferences depend on, e.g.:
We divided all truths that require justification into two kinds, those for which the proof can be carried out purely by means of logic and those for which it must be supported by facts of experience.
. . . Now, when I came to consider the question to which of these two kinds the judgments of arithmetic belong, I first had to ascertain how far one could proceed in arithmetic by means of inference alone, with the sole support of those laws of thought that transcend all particulars. . . . To prevent anything intuitive (Anschauliches) from penetrating here unnoticed, I had to bend every effort to keep the chain of inference free of gaps (Frege, 1975).
In the literature most ready to hand, the alternative view was supported by Descartes and elaborated by John Locke, who maintained that inferences move best and most soundly when based on intuition (their word):
Syllogism serves our Reason, in that it shows the connexion of the Proofs, i.e., the connexion between premises and conclusion, in any one instance and no more; but in this, it is of no great use, since the Mind can perceive such connexion, where it really is, as easily, nay, perhaps better, without Syllogism.
If we observe the Actings of our own Minds, we shall find, that we reason best and clearest, when we only observe the connexion of the Ideas, without reducing our Thoughts to any Rule of Syllogism (Locke, 1975, p. 670).
What is it that one is intuiting? Ideas, or meanings, and relationships among them; ideas or meanings are directly given. The difference being marked by Locke is between (1) inferring ‘Socrates is mortal’ from the premises ‘All men are mortal’ and ‘Socrates is a man’ by appealing to the formal-logical rule (All ‘A’ are ‘B’ and ‘C’ is an ‘A’, therefore ‘C’ is ‘B’), which is supposed to be done without any appeal to the intuitive meanings of ‘All’ and ‘is’, and (2) seeing that ‘Socrates is mortal’ follows from ‘All men are mortal’ and ‘Socrates is a man’ by virtue of understanding (the meaning of) those informal sentences, without any appeal to the formal-logical rule. Locke is also making the point that inferences made on the basis of such an understanding of meanings are better, and more fundamental, than inferences made on the basis of an appeal to a formal-logical schema. Indeed, Locke would certainly maintain that such informal, intuitive inferences made on the basis of understanding the meaning of sentences serve better as a check on the correctness of formal-logical inferences than formal-logical inferences serve as a check on intuitive inferences.
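The formal rule Locke has in mind can be set beside his example in modern notation (an anachronistic but faithful rendering of the syllogistic schema applied to a singular term):

```latex
\frac{\forall x\,\bigl(\mathrm{Man}(x) \rightarrow \mathrm{Mortal}(x)\bigr) \qquad \mathrm{Man}(\mathrm{Socrates})}
     {\mathrm{Mortal}(\mathrm{Socrates})}
% Locke's contrast: one who understands the premise-sentences sees the
% conclusion follow directly, without citing the schema
% "All A are B; C is an A; therefore C is B".
```

On Locke’s view the displayed schema records, but does not ground, an inference the mind already makes by perceiving the connexion of the ideas.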
Such distrust of formal logical inference, and greater trust in intuitive inference, has been promoted in recent times by Henri Poincaré and L.E.J. Brouwer (Detlefsen, 1991).
We might say that for Frege, too, logical inferences move by virtue of intuition of meaning: the meaning of the signs for logical operations, for we have seen how Frege appealed to such meanings in order to justify formal-logical rules of inference. Of course, once the formal-logical rules are justified, Frege is quite content to appeal to them in the construction of deductions, without returning each time to the intuited meanings of the logical signs. What is new in Frege is the conviction that inferences proceeding wholly on the basis of the signs for logical operations are complete with respect to logical implication: that if ‘B’ logically follows from ‘A’, then ‘B’ can be derived from ‘A’ by rules which mention only logical operations and not, e.g., the concrete meanings of predicate-expressions in the relevant propositions. There is here a deep issue, destined to become the principal issue in the philosophy and epistemology of logical theory: to what extent, in what measure, does intuition of the non-logical content of propositions, i.e., content other than the meanings of the signs for logical operations, rightly sustain inference?
But one does not really need to reach for such an example: virtually all inferences set out in mathematical proofs most obviously proceed on the basis of intuitively given meaning-content rather than appeal to formal-logical rules, and it is easy to find examples of such proofs that clearly do not depend on the meanings of signs for logical operations, but rather on the non-logical content of the mathematical propositions. There is a good example in Hilbert (1971, p. 6, paraphrased).
Similar problems face the suggestion that necessary truths are the ones we know with certainty: we lack a criterion for certainty, there are necessary truths we don’t know, and (barring dubious arguments for scepticism) it is reasonable to suppose that we know some contingent truths with certainty. Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form ‘A is A’, ‘AB is B’, etc.) or is reducible to an identity by successively substituting equivalent terms. (Thus, ‘All bachelors are unmarried’ might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This has several advantages over the previous proposals. First, it explicates the notions of necessity and possibility, and seems to provide a criterion we can apply. Second, because explicit identities are self-evident a priori propositions, the theory implies that all necessary truths are knowable a priori; but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
Nevertheless, this view is also problematic. Leibniz’s examples of reduction are too sparse to prove a claim about all necessary truths. Some of his reductions, moreover, are deficient: Frege pointed out, for example, that his proof of ‘2 + 2 = 4’ presupposes the principle of association, and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions (e.g., ‘Nothing can be red and green all over’) which do not seem to be reducible to identities, and which Leibniz does not show how to reduce.
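Frege’s criticism can be seen by writing out the Leibnizian proof, which proceeds from the definitions 2 = 1 + 1, 3 = 2 + 1 and 4 = 3 + 1; the second step silently appeals to associativity:

```latex
\begin{align*}
2 + 2 &= 2 + (1 + 1) && \text{by the definition } 2 = 1 + 1 \\
      &= (2 + 1) + 1 && \text{tacit appeal to } a + (b + c) = (a + b) + c \\
      &= 3 + 1       && \text{by the definition } 3 = 2 + 1 \\
      &= 4           && \text{by the definition } 4 = 3 + 1
\end{align*}
```

Without the associativity step the chain of identities does not go through, which is why the proof does not rest on the principle of identity and substitution of definitions alone.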
Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things which may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth is convention. Thus A.J. Ayer, for example, argued that the only necessary truths are analytic statements, and that the latter rest entirely upon our commitment to use words in certain ways.
The general project of the positivistic theory of knowledge is to exhibit the structure, content and basis of human knowledge in accordance with empiricist principles. Since science is regarded as the repository of all genuine human knowledge, this becomes the task of exhibiting the structure, content and basis of science, or, as it was called, the logic of science. The theory of knowledge thus has three major tasks: (1) to analyse the meaning of the statements of science exclusively in terms of observations or experiences in principle available to human beings, (2) to show how certain observations or experiences serve to confirm a given statement in the sense of making it more warranted or reasonable, and (3) to show how non-empirical or a priori knowledge of the necessary truths of logic and mathematics is possible even though every matter of fact which can be intelligibly thought or known is empirically verifiable or falsifiable.
1. The slogan ‘the meaning of a statement is its method of verification’ expresses the empirical verification theory of meaning. It is more than the general criterion of meaningfulness according to which a sentence is cognitively meaningful only if it is empirically verifiable. It says, in addition, what the meaning of each sentence is: it is all those observations which would confirm or disconfirm the sentence. Sentences which would be verified or falsified by all the same observations are empirically equivalent, or have the same meaning.
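The equivalence claim can be put in a toy model - ours, not the positivists' own formalism: treat a sentence's meaning as the set of observations that would confirm or disconfirm it, and empirical equivalence as identity of those sets. The sentences and observation labels below are invented for illustration.

```python
# Toy sketch of the verificationist idea (our illustration): a
# sentence's "meaning" is the set of observations that would confirm
# or disconfirm it; two sentences are empirically equivalent iff
# those sets coincide.

meaning = {
    "the stove is hot": {"sees glow", "feels heat"},
    "the stove is heated": {"sees glow", "feels heat"},
    "the stove glows": {"sees glow"},
}

def empirically_equivalent(s1: str, s2: str) -> bool:
    """Same confirming/disconfirming observations => same meaning."""
    return meaning[s1] == meaning[s2]

assert empirically_equivalent("the stove is hot", "the stove is heated")
assert not empirically_equivalent("the stove is hot", "the stove glows")
```

On this picture the first two sentences mean the same thing simply because no observation could tell between them.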
A sentence recording the result of a single observation is an observation or ‘protocol’ sentence. It can be conclusively verified or falsified on a single occasion. Every other statement implies an indefinitely large number of observation sentences which together exhaust its meaning, but at no time will all of them have been verified or falsified. To give an ‘analysis’ of the statements of science is to show how the content of each scientific statement can be reduced in this way to nothing more than a complex combination of directly verifiable ‘protocol’ sentences.
Verificationism is any view according to which the conditions of a sentence’s or a thought’s being meaningful or intelligible are equated with the conditions of its being verifiable or falsifiable. An explicit defence of the position would be a defence of the verifiability principle of meaningfulness. The exclusiveness of a scientific world-view was to be secured by showing that everything beyond the reach of science is strictly or ‘cognitively’ meaningless, in the sense of being incapable of truth or falsity, and so not a possible object of knowledge. The criterion of meaningfulness was found in the idea of empirical verification: anything which does not fulfil it is declared literally meaningless; there is no question of its truth or falsity, and it is not an appropriate object of enquiry. Moral, aesthetic and other evaluative claims are neither confirmable nor disconfirmable on empirical grounds, and so are cognitively meaningless; they are at best expressions of feeling or preference which are neither true nor false. Whatever is cognitively meaningful, and therefore factual, is value-free. The positivists claimed that many of the sentences of traditional philosophy, especially those in what they called ‘metaphysics’, also lack cognitive meaning and say nothing that could be true or false. But they did not spend much time trying to show this in detail about the philosophy of the past; they were more concerned with developing a theory of meaning and of knowledge adequate to the understanding, and perhaps even the improvement, of science.
Implicit verificationism is often present in positions or arguments which do not defend the verifiability principle in general, but which reject a suggestion to the effect that a certain sort of thing is unknowable or unconfirmable by us, on the sole ground that the suggestion would therefore be meaningless or unintelligible. The rejection is sound only if meaningfulness or intelligibility is indeed a guarantee of knowability or confirmability; if it is, nothing we understand could be unknowable or unconfirmable by us.
2. The observations recorded in particular ‘protocol’ sentences are said to confirm those ‘hypotheses’ of which they are instances. The task of confirmation theory is therefore to define the notion of a confirming instance of a hypothesis and to show how the occurrence of more and more such instances adds credibility or warrant to the hypothesis in question. A complete answer would involve a solution to the problem of induction: to explain how any past or present experience makes it reasonable to believe in something that has not yet been experienced. All inferences from past or present experience to an unobserved matter of fact ‘proceed upon’ the principle that the future will resemble the past. But no assurance can be given of that principle by reason alone: it is not impossible, in the sense of implying a contradiction, for the future to be different from the past. Whether the future will resemble the past is a contingent matter of fact. Experience is therefore needed to assure us of the principle; but past experience alone cannot do so, since it can tell us only how things have been in the past. Something more than past experience is needed.
But reason, even when combined with past experience, cannot be what leads us to believe that the future will resemble the past. If it did, it would be by means of an inference from past experience to the principle that the future will resemble the past. And, as before, any such inference would have to ‘proceed upon the supposition’ that the future will resemble the past; but that would be ‘evidently going in a circle, and taking that for granted which is the very point in question’.
3. Logical and mathematical propositions, and other necessary truths, do not predict the course of future sense experience; they cannot be empirically confirmed or disconfirmed, but they are essential to science and so must be accounted for. They are one and all ‘analytic’ in something like the Kantian sense: true solely in virtue of the meanings of their constituent terms. They serve only to make explicit the contents of, and the logical relations among, the terms or concepts which make up the conceptual framework through which we interpret and predict experience. Our knowledge of such truths is simply knowledge of what is and what is not contained in the concepts we use.
Nonetheless, the Lockean/Kantian distinction is based on a narrow notion of concept, on which concepts are senses of expressions in the language. The broad Fregean/Carnapian distinction is based on a broad notion of concept, on which concepts are conceptions - often scientific ones - about the nature of the referents of expressions (Katz, 1971 and Putnam, 1981). The conflation of these two notions produces the illusion of a single concept with the content of philosophical, logical and mathematical conceptions, but with the status of linguistic concepts. All that is necessary is to keep the original, narrow distinction from being broadened. This ensures that propositions expressing the content of broad concepts cannot receive the easier justification appropriate to narrow ones. The narrow notion allows us to pose the problem of how necessary knowledge is possible, and logical and mathematical knowledge are part of the problem which Quine did not undercut at its rationalist foundations; hence a serious reappraisal of the new empiricism and naturalized epistemology is, to say the least, very much in order (Katz, 1990).
Experience can perhaps show that a given concept has no instances, or that it is not a useful concept for us to employ, but that would not show that what we understand to be included in that concept is not really included in it, or that it is not the concept we take it to be. Our knowledge of the constituents of, and the relations among, our concepts is therefore not dependent on experience: it is a priori. It is knowledge of what holds necessarily, and all necessary truths are ‘analytic’; there is no synthetic a priori knowledge. One who characterizes a priori knowledge in terms of justification which is independent of experience is faced with the task of articulating the relevant sense of experience. Proponents of the a priori often cite ‘intuition’ or ‘intuitive apprehension’ as the source of a priori justification. Recent attacks on the existence of a priori knowledge fall into three general camps. Some critics, such as Putnam (1979) and Kitcher (1983), begin by providing an analysis of the concept of a priori knowledge and then argue that alleged examples of a priori knowledge fail to satisfy the conditions specified in the analysis. Attacks of the second sort are independent of any particular analysis of the concept, and are directed instead at the alleged source of such knowledge. Benacerraf (1973), for example, argues that intuition is held by proponents of the a priori to be the source of mathematical knowledge, but cannot fulfil that role. A third form of attack is to consider prominent examples of propositions alleged to be knowable only a priori and to show that they can be justified by experiential evidence. The Kantian position that has received most attention is the claim that some a priori knowledge is of synthetic a priori propositions. Early discussion was concerned exclusively with some of Kant’s particular examples of alleged synthetic a priori knowledge, above all the claim that the truths of arithmetic are synthetic.
One Kantian strategy holds that mathematical knowledge is a necessary condition of empirical knowledge. Kant argued that the laws of mathematics are actually constraints on our perception of space and time. In knowing mathematics, then, we know only the laws of our own perception. Physical space in itself, for all we know, may not obey the laws of Euclidean geometry and arithmetic, but the world as perceived by us must. Mathematics is nonetheless objective - or ‘intersubjective’ - in the sense that it holds good for the whole human race, past, present and future. For this reason there is no problem about the applicability of mathematics in empirical science - or so the Kantian claims.
The distinction between truths of reason and truths of fact is associated with Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, and so forth, or they are reducible to this form by successively substituting equivalent terms. Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, which are true of all possible worlds. Some examples are ‘All unmarried men are unmarried’ and ‘All bachelors are unmarried’: the first is already of the form ‘AB is B’, and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believed, are ‘God exists’ and the truths of logic, arithmetic and geometry.
Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is a posteriori, by reference to the facts of the empirical world. Likewise, since their denial does not involve a contradiction, their truth is merely contingent: they could have been otherwise, and they hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason why it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.
Necessary truths are ones which must be true, or whose opposite is impossible. Contingent truths are those that are not necessary and whose opposite is therefore possible. 1-3 below are necessary, 4-6 contingent.
(1) It is not the case that it is raining and not raining.
(2) 2 + 2 = 4.
(3) All bachelors are unmarried.
(4) It seldom rains in the Sahara.
(5) There are more than four states in the USA.
(6) Some bachelors drive Maseratis.
Plantinga (1974) characterizes the sense of necessity illustrated in 1-3 as ‘broadly logical’, for it includes not only truths of logic, but those of mathematics, set theory, and other quasi-logical truths. Yet it is not so broad as to include matters of causal or natural necessity, such as
(7) Nothing travels faster than the speed of light.
One would like an account of this distinction and a criterion by which to apply it. Some suppose that necessary truths are those we know a priori. But we lack a criterion for a priori truths, and there are necessary truths we don’t know at all (e.g., undiscovered mathematical ones). It won’t help to say that necessary truths are ones which it is possible, in the broadly logical sense, to know a priori, for this is circular. Finally, Kripke (1972) and Plantinga (1974) argue that some contingent truths are knowable a priori. Similar problems face the suggestion that necessary truths are the ones we know with certainty: we lack a criterion for certainty, there are necessary truths we don’t know, and, barring dubious arguments for scepticism, it is reasonable to suppose that we know some contingent truths with certainty.
Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form ‘A is A’, ‘AB is B’) or is reducible to an identity by successively substituting equivalent terms. (Thus 3 above might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This account has several advantages over the ideas of the previous paragraph. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, because explicit identities are self-evident a priori propositions, the theory implies that all necessary truths are knowable a priori, but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
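Leibniz's reduction of 3 above can be sketched in modern notation (the symbolism is ours, not Leibniz's): substituting ‘unmarried man’ for ‘bachelor’ turns the proposition into one of the identity form ‘AB is B’.

```latex
% 'All bachelors are unmarried', before and after the substitution
\forall x\,\bigl(\mathit{Bachelor}(x) \rightarrow \mathit{Unmarried}(x)\bigr)
\;\leadsto\;
\forall x\,\bigl(\mathit{Unmarried}(x) \wedge \mathit{Man}(x) \rightarrow \mathit{Unmarried}(x)\bigr)
```

The right-hand formula is self-evident in just the way Leibniz requires of an explicit identity.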
Nevertheless, this view is also problematic. Leibniz’s examples of reduction are too sparse to prove a claim about all necessary truths, and some of his reductions are deficient: Frege pointed out, for example, that his proof of ‘2 + 2 = 4' presupposes the principle of association and so does not depend only on the principle of identity. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions (e.g., ‘Nothing can be red and green all over’) which do not seem to be reducible to identities and which Leibniz does not show how to reduce.
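Frege's point can be made vivid with the usual Leibniz-style derivation (our reconstruction): taking ‘2 = 1 + 1’, ‘3 = 2 + 1’ and ‘4 = 3 + 1’ as definitions,

```latex
2 + 2 \;=\; 2 + (1 + 1) \;=\; (2 + 1) + 1 \;=\; 3 + 1 \;=\; 4
```

the middle step silently appeals to associativity, $(a + b) + c = a + (b + c)$, so the proof does not rest on definitions and the principle of identity alone.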
Three theses frame Leibniz’s account of truth and of our knowledge of it:
1. Truth is fundamentally a matter of the containment of the concept of the predicate of a proposition in the concept of its subject.
2. The distinction between necessary truth and contingent truth is absolute, and in no way relative to a corresponding distinction between divine and human sources of knowledge.
3. A proposition is known a priori by a finite mind only if that proposition is a necessary truth (Parkinson and Morris, 1973).
Hence, although Leibniz commenced with an account of truth that one might expect to lead to the conclusion that all knowledge is ultimately a priori knowledge, he set out to avoid that conclusion.
Leibniz’s account of our knowledge of contingent truths is remarkably similar to what we would expect to find in an empiricist’s epistemology. Leibniz claimed that our knowledge of particular contingent truths has its basis in sense perception. He argued that our knowledge of universal contingent truths cannot be based entirely on simple enumerative inductions, but must be supplemented by what he called ‘the conjectural method a priori’, which he described as follows:
The conjectural method a priori proceeds by hypotheses, assuming certain causes, perhaps without proof, and showing that the things that happen would follow from those assumptions. A hypothesis of this kind is like the key to a cryptograph, and the simpler it is, and the greater the number of events that can be explained by it, the more probable it is (Loemker, 1969).
Leibniz’s conception of the conjectural method a priori is a precursor of the hypothetico-deductive method. He placed emphasis on the need for a formal theory of probability, in order to formulate an adequate theory of our knowledge of contingent truths.
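Leibniz's criterion - the simpler a hypothesis is, and the more events it explains, the more probable it is - can be caricatured in a few lines of code. This is a toy of ours, not anything Leibniz formalized; the scoring rule, event labels and complexity numbers are all invented for illustration.

```python
# Toy scoring of hypotheses in the spirit of Leibniz's remark: a
# hypothesis gains plausibility with the number of events it explains
# and loses it with its complexity.

def plausibility(explained_events: set, complexity: int) -> float:
    """More explained events raise the score; complexity lowers it."""
    return len(explained_events) / complexity

# Two hypothetical "keys to the cryptograph":
simple_broad = plausibility({"e1", "e2", "e3", "e4"}, complexity=2)
complex_narrow = plausibility({"e1", "e2"}, complexity=4)

assert simple_broad > complex_narrow  # the simpler, broader key wins
```

The particular ratio is arbitrary; the point is only that Leibniz's two criteria pull in the directions a modern confirmation theory would also recognize.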
Leibniz sided with his rationalist colleagues, e.g., Descartes, in maintaining, contrary to the empiricists, that, since thought is an essential property of the mind, there is no time at which a mind exists without a thought, a perception. But Leibniz insisted on a distinction between having a perception and being aware of it. He argued forcefully, on both empirical and conceptual grounds, that finite minds have numerous perceptions of which they are not aware at the time at which they have them (Remnant and Bennett, 1981).
Leibniz’s rationalism in epistemology is most evident in his account of our a priori knowledge, that is, according to (3), our knowledge of necessary truths. One of Leibniz’s persistent criticisms of Locke’s empiricism is the thesis that Locke’s theory of knowledge provides no explanation of how we know of certain propositions that they are not only true, but necessarily true. Leibniz argued that Locke offered no adequate account of how we know propositions to be true whose justification does not depend upon experience: hence, that Locke had no acceptable account of our a priori knowledge. Leibniz’s diagnosis of Locke’s failing was straightforward: Locke lacked an adequate account of our a priori knowledge because, on Locke’s theory, all the material for the justification of beliefs must come from experience, thus overlooking what Leibniz took to be the source of our a priori knowledge, namely, what is innate to the mind. Leibniz summarized his dispute with Locke thus:
Our differences are on matters of some importance. It is a matter of knowing if the soul in itself is entirely empty like a writing tablet on which nothing has as yet been written . . . And if everything inscribed there comes solely from the senses and experience, or if the soul contains originally the sources of various concepts and doctrines that external objects merely reveal on occasion . . . (Remnant and Bennett, 1981).
Leibniz argued for the second alternative, the theory of innate doctrines and concepts. The thesis that some concepts are innate to the mind is crucial to Leibniz’s philosophy. He held that the most basic metaphysical concepts, e.g., the concepts of substance and causation, are innate. Hence, he was unmoved by the inability of empiricists to reconstruct full-blown versions of those concepts from the materials of sense experience.
Innate ideas have been variously defined by philosophers, either as ideas consciously present to the mind prior to sense experience (the non-dispositional sense), or as ideas which we have an innate disposition to form, though we need not be actually aware of them at any particular time, e.g., as babies (the dispositional sense).
Understood in either way, they were invoked to account for our knowledge of truths which outrun experiential verification, such as those of mathematics, or to justify versions of moral and religious claims which were held to be capable of being known by introspection of our innate ideas. Examples of such supposed truths might include ‘murder is wrong’ or ‘God exists’.
One difficulty with the doctrine is that it is sometimes formulated as a claim about concepts or ideas which are held to be innate, and at other times as a claim about a source of propositional knowledge. In so far as concepts are taken to be innate, the doctrine relates primarily to claims about meaning; our idea of God, for example, is taken as a source for the meaning of the word ‘God’. When innate ideas are understood propositionally, their supposed innateness is taken as evidence for their truth. This latter thesis clearly rests on the assumption that innate propositions have an unimpeachable source, usually taken to be God; but then any appeal to innate ideas to justify the existence of God is circular. Despite such difficulties, the doctrine of innate ideas had a long and influential history until the eighteenth century, and the concept is still employed in Noam Chomsky’s influential account of the mind’s linguistic capacities.
The attraction of the theory has been felt strongly by those philosophers who have been unable to give an alternative account of our capacity to recognize that some propositions are certainly true, where that recognition cannot be justified solely on the basis of an appeal to sense experience. Thus Plato argued that, for example, recognition of mathematical truths could only be explained on the assumption of some form of recollection. ‘Recollection’, or anamnesis, has several roles in Plato’s epistemology. In the ‘Meno’, it is invoked to explain the behaviour of an uneducated boy who answers a geometrical problem that he has never been taught; at the same time, it is used to solve a paradox about inquiry and learning. In the ‘Phaedo’, it is said to explain our possession of concepts, construed as knowledge of Forms, which we supposedly could not have gained from experience. Recollection also appears in the ‘Phaedrus’, but is notably absent from important presentations of Plato’s epistemological views in the ‘Republic’ and other works. Since there was no plausible post-natal source, the recollection must refer back to a pre-natal acquisition of knowledge. Thus understood, the doctrine of innate ideas supported the view that there were important truths innate in human beings, and that it was the senses which hindered their proper apprehension.
The ascetic implications of the doctrine were important in Christian philosophy throughout the Middle Ages, and the doctrine featured powerfully in scholastic teaching until its displacement by Locke’s philosophy in the eighteenth century. It had in the meantime acquired modern expression in the philosophy of Descartes, who argued that we can come to know certain important truths before we have any empirical knowledge at all. Our idea of God, for example, and our coming to recognize that God must necessarily exist, are, Descartes held, logically independent of sense experience. In England the Cambridge Platonists, such as Henry More and Ralph Cudworth, lent the doctrine considerable support.
Locke’s rejection of innate ideas, and his alternative empiricist account, was powerful enough to displace the doctrine from philosophy almost totally. Leibniz, in his critique of Locke, attempted to defend it with a sophisticated dispositional version of the theory, but it attracted few followers.
The empiricist alternative to innate ideas as an explanation of the certainty of propositions lay in the direction of construing all necessary truths as analytic. Kant’s refinement of the classification of propositions with the fourfold distinction analytic/synthetic and a priori/a posteriori did nothing to encourage a return to the innate ideas doctrine, which slipped from view. The doctrine may fruitfully be understood as the product of a confusion between explaining the genesis of ideas or concepts and justifying the claim that some propositions are necessarily true.
Chomsky’s revival of the term in connection with his account of human speech acquisition has once more made the issue topical. He claims that the principles of language and ‘natural logic’ are known unconsciously and are a precondition for language acquisition. But for his purposes innate ideas must be taken in a strongly dispositional sense - so strong that it is far from clear that Chomsky’s claims are in conflict with empiricists’ accounts, as some (including Chomsky) have supposed. Quine, for example, sees no clash with his own version of empirical behaviourism, in which old talk of ideas is eschewed in favour of dispositions to observable behaviour.
There are various ways of distinguishing types of foundationalism by use of the variations that have been enumerated. Plantinga has put forward an influential conception of ‘classical foundationalism’, specified in terms of limitations on the foundations. He construes this as a disjunction of ‘ancient and medieval foundationalism’, which takes the foundations to comprise what is ‘self-evident’ and ‘evident to the senses’, and ‘modern foundationalism’, which replaces ‘evident to the senses’ with ‘incorrigible’ - in practice taken to apply only to beliefs about one’s present states of consciousness. Plantinga himself developed this notion in the context of arguing that items outside this territory, in particular certain beliefs about God, could also be immediately justified. A popular recent distinction is between what are variously called ‘strong’ or ‘extreme’ foundationalism and ‘moderate’, ‘modest’ or ‘minimal’ foundationalism, the distinction depending on whether various epistemic immunities are required of foundations. Moderate foundationalism requires of a foundation only that it be immediately justified; the plausibility of the stronger requirement arguably stems from a ‘level confusion’ between beliefs on different levels.
Emerging sceptical tendencies come forth in the 14th-century writings of Nicholas of Autrecourt. His criticisms of any certainty beyond the immediate deliverances of the senses and basic logic, and in particular of any knowledge of either intellectual or material substances, anticipate the later scepticism of Bayle and Hume. The latter distinguished between Pyrrhonistic or excessive scepticism, which he regarded as unlivable, and a more mitigated scepticism which accepts everyday or commonsense beliefs (not as the deliverances of reason, but as due more to custom and habit), while remaining duly wary of the power of reason to give us much more. Mitigated scepticism is thus closer to the attitude fostered by ancient scepticism from Pyrrho through to Sextus Empiricus. Although the phrase ‘Cartesian scepticism’ is sometimes used, Descartes himself was not a sceptic, but in the method of doubt uses a sceptical scenario in order to begin the process of finding a secure mark of knowledge. Descartes himself trusts a category of ‘clear and distinct’ ideas, not far removed from the phantasia kataleptiké of the Stoics.
Scepticism should not be confused with relativism, which is a doctrine about the nature of truth, and may be motivated by the attempt to avoid scepticism. Nor is it identical with eliminativism, which counsels abandoning an area of thought altogether, not because we cannot know the truth, but because there are no truths capable of being framed in the terms we use.
Descartes’s theory of knowledge starts with the quest for certainty, for an indubitable starting-point or foundation on the basis of which alone progress is possible. This is eventually found in the famous ‘Cogito ergo sum’: I am thinking: therefore I am. By locating the point of certainty in my own awareness of my own self, Descartes gives a first-person twist to the theory of knowledge that dominated the following centuries in spite of various counter-attacks on behalf of social and public starting-points. The metaphysics associated with this priority is the famous Cartesian dualism, or separation of mind and matter into two distinct but interacting substances. Descartes rigorously and rightly sees that it takes divine dispensation to certify any relationship between the two realms thus divided, and to prove the reliability of the senses invokes a ‘clear and distinct perception’ of highly dubious proofs of the existence of a benevolent deity. This has not met general acceptance: as Hume drily puts it, ‘to have recourse to the veracity of the supreme Being, in order to prove the veracity of our senses, is surely making a very unexpected circuit’.
In his own time, Descartes’s conception of the entirely separate substance of the mind was recognized to give rise to insoluble problems about the nature of the causal connection between the two substances. It also gives rise to the problem, insoluble in its own terms, of other minds. Descartes’s notorious denial that non-human animals are conscious is a stark illustration of the problem. In his conception of matter Descartes also gives preference to rational cogitation over anything derived from the senses. Since we can conceive of the matter of a ball of wax surviving changes to its sensible qualities, matter is not an empirical concept, but eventually an entirely geometrical one, with extension and motion as its only physical nature. Descartes held, in a thought later reflected in Leibniz, that the qualities of sense experience have no resemblance to the qualities of things, so that knowledge of the external world is essentially knowledge of structure rather than of filling. On this basis Descartes erects a remarkable physics. Since matter is in effect the same as extension, there can be no empty space or ‘void’; and since there is no empty space, motion is not a question of occupying previously empty space, but is to be thought of in terms of vortices (like the motion of a liquid).
Although the structure of Descartes’s epistemology, theory of mind, and theory of matter has been rejected many times, its relentless exposure of the hardest issues, its exemplary clarity, and even its initial plausibility all contrive to make him the central point of reference for modern philosophy.
The self, as Descartes presents it in the first two Meditations, is aware only of its own thoughts and capable of disembodied existence, neither situated in a space nor surrounded by others. This is the pure self, the ‘I’, that we are tempted to imagine as a simple unique thing that makes up our essential identity. Descartes’s view that he could keep hold of this nugget while doubting everything else was criticized by Lichtenberg and Kant, and by most subsequent philosophers of mind.
Descartes holds that we do not have any knowledge of any empirical proposition about anything beyond the contents of our own minds. The reason, roughly put, is that there is a legitimate doubt about all such propositions, because there is no way to deny justifiably that our senses are being stimulated by some cause (an evil spirit, for example) that is radically different from the objects we normally think affect our senses.
He also points out that the senses (sight, hearing, touch, and so forth) are often unreliable, and ‘it is prudent never to trust entirely those who have deceived us even once’; he cites such instances as the straight stick that looks bent in water and the square tower that looks round from a distance. This argument from illusion has not, on the whole, impressed commentators, and some of Descartes’s contemporaries pointed out that since such errors become known as a result of further sensory information, it cannot be right to cast wholesale doubt on the evidence of the senses. But Descartes regarded the argument from illusion as only the first stage in a softening-up process which would ‘lead the mind away from the senses’. He admits that there are some cases of sense-based belief about which doubt would be insane, e.g., the belief that ‘I am sitting here by the fire, wearing a winter dressing gown’.
Descartes came to realize that there was nothing in this view of nature that could explain, or provide a foundation for, the mental, or for what we know from direct experience as distinctly human. In a mechanistic universe, he said, there is no privileged place or function for mind, and the separation between mind and matter is absolute. Descartes was also convinced that the immaterial essences that gave form and structure to this universe were coded in geometrical and mathematical ideas, and this insight led him to invent analytic geometry.
A scientific understanding of these ideas, Descartes asserted, could be derived with the aid of precise deduction; he also claimed that the contours of physical reality could be laid out in three-dimensional coordinates. Following the publication of Newton’s Principia Mathematica in 1687, reductionism and mathematical modelling became the most powerful tools of modern science, and the dream that the entire physical world could be known and mastered through the extension and refinement of mathematical theory became the central feature and guiding principle of scientific knowledge.
Epistemology, having knowledge as its concern, takes as its central questions the origin of knowledge, the place of experience in generating knowledge and the place of reason in doing so, the relationship between knowledge and certainty and between knowledge and the impossibility of error, the possibility of universal scepticism, and the changing forms of knowledge that arise from new conceptualizations of the world. All of these issues link with other central concerns of philosophy, such as the nature of truth and the natures of experience and meaning.
Foundationalism was associated with the ancient Stoics and, in the modern era, with Descartes (1596-1650), who found his foundations in the ‘clear and distinct’ ideas of reason. Its main opponent is coherentism, the view that a body of propositions may be known without a foundation in certainty, but by their interlocking strength, much as a crossword puzzle may be known to have been solved correctly even if each answer, taken individually, admits of uncertainty. Difficulties at this point led the logical positivists to abandon the notion of an epistemological foundation altogether and to flirt with the coherence theory of truth. It is widely accepted that trying to make the connection between thought and experience through basic sentences depends on an untenable ‘myth of the given’.
Still, in spite of these concerns, the problem of defining knowledge in terms of true belief plus some favoured relation between the believer and the facts began with Plato’s view in the “Theaetetus” that knowledge is true belief plus some logos. Naturalized epistemology, by contrast, is the enterprise of studying the actual formation of knowledge by human beings, without aspiring to certify those processes as rational, as proof against scepticism, or even as apt to yield the truth; naturalized epistemology would therefore blend into the psychology of learning and the study of episodes in the history of science. The scope for ‘external’ or philosophical reflection of the kind that might result in scepticism or its refutation is markedly diminished. Distinguished exponents of the approach include Aristotle, Hume, and J. S. Mill.
The task of the philosopher of a discipline would then be to reveal the correct method and to unmask counterfeits. Although this belief lay behind much positivist philosophy of science, few philosophers now subscribe to it. It places too much confidence in the possibility of a purely a priori ‘first philosophy’, a viewpoint beyond that of working practitioners from which their best efforts can be measured as good or bad. Such a standpoint now seems to many philosophers to be fanciful; the more modest task actually adopted at various historical stages of investigation into different areas aims not so much at criticism as at systematization of the presuppositions of a particular field at a particular time. There is still a role for local methodological disputes within the community of investigators of some phenomenon, with one approach charging that another is unsound or unscientific; but logic and philosophy will not, on the modern view, provide an independent arsenal of weapons for such battles, which indeed often come to seem more like political bids for ascendancy within a discipline.
This is an approach to the theory of knowledge that sees an important connection between the growth of knowledge and biological evolution. An evolutionary epistemologist claims that the development of human knowledge proceeds through some natural selection process, the best example of which is Darwin’s theory of biological natural selection. There is a widespread misconception that evolution proceeds according to some plan or direction, but it has neither, and the role of chance ensures that its future course will be unpredictable. Random variations in individual organisms create tiny differences in their Darwinian fitness. Some individuals have more offspring than others, and the characteristics that increased their fitness thereby become more prevalent in future generations. At some point, a mutation occurred in a human population in tropical Africa that changed the haemoglobin molecule in a way that provided resistance to malaria. This enormous advantage caused the new gene to spread, with the unfortunate consequence that sickle-cell anaemia came to exist.
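The dynamic described here, a small fitness advantage tending to spread through a population while chance can still intervene, can be sketched in a toy simulation. This is my own illustration, not a model drawn from the evolutionary epistemology literature; the parameter values are arbitrary.

```python
import random

def selection_sim(p0, fitness_advantage, generations, pop_size, seed=0):
    """Toy Wright-Fisher-style sketch: an allele with a small fitness
    advantage tends to spread, but chance (drift) can still eliminate it."""
    rng = random.Random(seed)
    p = p0  # frequency of the advantaged allele
    for _ in range(generations):
        # Selection: the advantaged allele contributes proportionally more.
        w = p * (1 + fitness_advantage)
        p_sel = w / (w + (1 - p))
        # Drift: the next generation is a finite random sample.
        count = sum(1 for _ in range(pop_size) if rng.random() < p_sel)
        p = count / pop_size
        if p in (0.0, 1.0):  # fixation or loss ends the process
            break
    return p

# A 5% advantage, starting rare, in a population of 2,000 individuals:
final = selection_sim(p0=0.01, fitness_advantage=0.05,
                      generations=500, pop_size=2000)
print(round(final, 2))
```

Running this with different seeds shows the point about unpredictability: the same advantage sometimes spreads to fixation and sometimes is lost by chance while still rare.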
Chance can influence the outcome at each stage: first, in the creation of genetic mutations; second, in whether the bearer lives long enough to show their effects; third, in the chance events that influence the individual’s actual reproductive success; fourth, in whether a gene, even if favoured in one generation, is by happenstance eliminated in the next; and finally, in the many unpredictable environmental changes that will undoubtedly occur in the history of any group of organisms. As the Harvard biologist Stephen Jay Gould has vividly expressed it, were the process run over again, the outcome would surely be different. Not only might there not be humans, there might not even be anything like mammals.
Biologists often emphasize the elegance of traits shaped by natural selection, but the common idea that nature creates perfection needs to be analysed carefully. The extent to which evolution achieves perfection depends on exactly what you mean. If you mean ‘Does natural selection always take the best path for the long-term welfare of a species?’, the answer is no. That would require adaptation by group selection, and this is unlikely. If you mean ‘Does natural selection create every adaptation that would be valuable?’, the answer again is no. For instance, some kinds of South American monkeys can grasp branches with their tails. The trick would surely also be useful to some African species, but, simply because of bad luck, none have it. Some combination of circumstances started some ancestral South American monkeys using their tails in ways that ultimately led to an ability to grab onto branches, while no such development took place in Africa. Mere usefulness of a trait does not guarantee that it will evolve.
The three major components of the model of natural selection are variation, selection, and retention. According to Darwin’s theory of natural selection, variations are not pre-designed to perform certain functions. Rather, those variations that happen to perform useful functions are selected, while those that do not are not selected. In the modern theory of evolution, genetic mutations provide the blind variations: blind in the sense that variations are not influenced by the effects they would have (the likelihood of a mutation is not correlated with the benefits or liabilities that mutation would confer on the organism); the environment provides the filter of selection; and reproduction provides the retention. Fitness is achieved because those organisms with features that make them less adapted for survival do not survive in competition with other organisms in the environment that have better-adapted features. Evolutionary epistemology applies this blind-variation and selective-retention model to the growth of scientific knowledge and to human thought processes overall.
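The variation-selection-retention loop just described can be made concrete in a short sketch. This is my own illustrative toy (the bitstring ‘candidate’, the fitness function, and all parameters are invented for the example), not a model from the evolutionary epistemology literature:

```python
import random

def bvsr(fitness, genome_len=20, steps=200, seed=1):
    """Blind variation and selective retention, in miniature.

    A candidate (a bitstring standing in for a trait or a theory) is varied
    blindly: the mutation site is random and the flip is made without regard
    to its effect. Selection retains a variant only if it scores at least
    as well as the current candidate; reproduction is the carry-over of the
    retained candidate into the next round.
    """
    rng = random.Random(seed)
    current = [rng.randint(0, 1) for _ in range(genome_len)]
    for _ in range(steps):
        variant = current[:]
        i = rng.randrange(genome_len)   # blind: the site is chosen at random
        variant[i] = 1 - variant[i]     # and the flip ignores its consequences
        if fitness(variant) >= fitness(current):
            current = variant           # selective retention
    return current

# The "environment": fitness is simply the number of 1-bits carried.
best = bvsr(fitness=sum)
print(sum(best))
```

Note that nothing in the loop anticipates which flip will help; improvement emerges purely from retaining whatever blind changes happen to score well, which is the analogy the evolutionary epistemologist relies on.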
The parallel between biological evolution and conceptual or ‘epistemic’ evolution can be seen as either literal or analogical. The literal version of evolutionary epistemology treats biological evolution as the main cause of the growth of knowledge. On this view, called the ‘evolution of cognitive mechanisms program’ by Bradie (1986) and the ‘Darwinian approach to epistemology’ by Ruse (1986), the growth of knowledge occurs through blind variation and selective retention because biological natural selection itself is the cause of epistemic variation and selection. The most plausible version of the literal view does not hold that all human beliefs are innate, but rather that the mental mechanisms that guide the acquisition of non-innate beliefs are themselves innate and the result of biological natural selection. Ruse (1986) defends a version of literal evolutionary epistemology that he links to sociobiology (Rescher, 1990).
On the analogical version of evolutionary epistemology, called the ‘evolution of theories program’ by Bradie (1986) and the ‘Spencerian approach’ (after the nineteenth-century philosopher Herbert Spencer) by Ruse (1986), the development of human knowledge is governed by a process analogous to biological natural selection, rather than by an instance of the mechanism itself. This version of evolutionary epistemology, introduced and elaborated by Donald Campbell (1974) as well as Karl Popper, sees the fit between theories and the world as explained by a mental process of trial and error known as epistemic natural selection.
Both versions of evolutionary epistemology are usually taken to be types of naturalized epistemology, because both take some empirical facts as a starting point for their epistemological project. The literal version begins by accepting evolutionary theory and a materialist approach to the mind and, from these, constructs an account of knowledge and its development. In contrast, the analogical version does not require the truth of biological evolution: it simply draws on biological evolution as a source for the model of natural selection. For this version to be true, the model of natural selection need only apply to the growth of knowledge, not to the origin and development of species. Crudely put, evolutionary epistemology of the analogical sort could still be true even if creationism were the correct theory of the origin of species.
Although they do not begin by assuming evolutionary theory, most analogical evolutionary epistemologists are naturalized epistemologists as well; their empirical assumptions come, at least implicitly, from psychology and cognitive science, not evolutionary theory. Sometimes, however, evolutionary epistemology is characterized in a seemingly non-naturalistic fashion. Campbell (1974) says that when expanding one’s knowledge beyond what one knows, one must ‘explore without the benefit of wisdom’, i.e., blindly. This, Campbell admits, makes evolutionary epistemology close to being a tautology (and so not naturalistic). Evolutionary epistemology does assert the analytic claim that when expanding one’s knowledge beyond what one knows, one must proceed from something that is already known; but, more interestingly, it also makes the synthetic claim that when expanding one’s knowledge beyond what one knows, one must proceed by blind variation and selective retention. This claim is synthetic because it can be empirically falsified; the central claim of evolutionary epistemology is synthetic, not analytic. Campbell is right that evolutionary epistemology has the analytic feature he mentions, but he is wrong to think that this is a distinguishing feature, since any plausible epistemology has the same analytic feature (Skagestad, 1978).
Two further issues animate the literature. The first concerns ‘realism’: what metaphysical commitment must an evolutionary epistemologist make? The second concerns progress: according to evolutionary epistemology, does knowledge develop toward a goal? With respect to realism, many evolutionary epistemologists endorse what is called ‘hypothetical realism’, a view that combines a version of epistemological scepticism with tentative acceptance of metaphysical realism. With respect to progress, the problem is that biological evolution is not goal-directed, but the growth of human knowledge seems to be. Campbell (1974) worries about the potential disanalogy here but is willing to bite the bullet and admit that epistemic evolution progresses toward a goal (truth) while biological evolution does not. Others have argued that evolutionary epistemologists must give up the ‘truth-tropic’ sense of progress because a natural selection model is in essence non-teleological; as an alternative, following Kuhn (1970), they embrace a non-teleological notion of progress compatible with evolutionary epistemology.
Among the most frequent and serious criticisms levelled against evolutionary epistemology is that the analogical version of the view is false because epistemic variation is not blind (Skagestad, 1978; Ruse, 1986). Stein and Lipton (1990) have argued, however, that this objection fails because, while epistemic variation is not random, its constraints come from heuristics that are themselves the product of blind variation and selective retention. Further, Stein and Lipton conclude that heuristics are analogous to biological pre-adaptations: evolutionary precursors, such as a half-wing, a precursor to a wing, which have some function other than the function of their descendant structures. The existence of heuristics that guide epistemic variation is, on this view, not a source of disanalogy but the source of a more articulated account of the analogy.
Many evolutionary epistemologists try to combine the literal and the analogical versions (Bradie, 1986; Stein and Lipton, 1990), saying that those beliefs and cognitive mechanisms which are innate result from natural selection of the biological sort, while those that are not innate result from natural selection of the epistemic sort. This is reasonable as long as the two parts of the hybrid view are kept distinct. An analogical version of evolutionary epistemology with biological variation as its only source of blindness would be a null theory: this would be the case if all our beliefs were innate, or if our non-innate beliefs were not the result of blind variation. An appeal to biological evolution is therefore not a legitimate way to rescue a hybrid version of evolutionary epistemology, since doing so trivializes the theory. For similar reasons, such an appeal will not save an analogical version of evolutionary epistemology from arguments to the effect that epistemic variation is not blind (Stein and Lipton, 1990).
Although it is a new approach to the theory of knowledge, evolutionary epistemology has attracted much attention, primarily because it represents a serious attempt to flesh out a naturalized epistemology by drawing on several disciplines. If science is relevant to understanding the nature and development of knowledge, then evolutionary theory is among the disciplines worth a look. Insofar as evolutionary epistemology looks there, it is an interesting and potentially fruitful epistemological programme.
What makes a belief justified, and what makes a true belief knowledge? It is natural to think that whether a belief deserves one of these appraisals depends on what caused the subject to have the belief. In recent decades a number of epistemologists have pursued this plausible idea with a variety of specific proposals. Some causal theories of knowledge have it that a true belief that ‘p’ is knowledge just in case it has the right causal connection to the fact that ‘p’. Such a criterion can be applied only to cases where the fact that ‘p’ is of a sort that can enter into causal relations; this seems to exclude mathematical and other necessary facts, and perhaps any fact expressed by a universal generalization, and proponents of this sort of criterion have usually supposed that it is limited to perceptual knowledge of particular facts about the subject’s environment.
Armstrong (1973) proposed that a belief of the form ‘This perceived object is F’ is non-inferential knowledge if and only if the belief is a completely reliable sign that the perceived object is F; that is, the fact that the object is F contributed to causing the belief, and its doing so depended on properties of the believer such that the laws of nature dictate that, for any subject ‘χ’ and perceived object ‘y’, if ‘χ’ has those properties and believes that ‘y’ is F, then ‘y’ is F. (Dretske (1981) offers a rather similar account, in terms of the belief’s being caused by a signal received by the perceiver that carries the information that the object is F.)
Goldman (1986) has proposed an importantly different causal criterion, namely, that a true belief is knowledge if it is produced by a type of process that is both ‘globally’ and ‘locally’ reliable. A process is globally reliable if its propensity to cause true beliefs is sufficiently high. Local reliability has to do with whether the process would have produced a similar but false belief in certain counterfactual situations alternative to the actual situation. This way of marking off true beliefs that are knowledge does not require the fact believed to be causally related to the belief, and so it could in principle apply to knowledge of any kind of truth.
Goldman requires the global reliability of the belief-producing process for the justification of a belief; he requires it also for knowledge, because justification is required for knowledge. What he requires for knowledge, but does not require for justification, is local reliability. His idea is that a justified true belief is knowledge if the type of process that produced it would not have produced it in any relevant counterfactual situation in which it is false. The theory of relevant alternatives can be viewed as an attempt to provide a more satisfactory response to this tension in our thinking about knowledge. It attempts to characterize knowledge in a way that preserves both our belief that knowledge is an absolute concept and our belief that we have knowledge.
According to the theory, we need to qualify rather than deny the absolute character of knowledge. We should view knowledge as absolute, but relative to certain standards (Dretske, 1981; Cohen, 1988). That is to say, in order to know a proposition, our evidence need not eliminate all the alternatives to that proposition; rather, our evidence need only eliminate all the relevant alternatives, where the set of relevant alternatives (a proper subset of the set of all alternatives) is determined by some standard. Moreover, according to the relevant alternatives view, the standards determine that the alternatives raised by the sceptic are not relevant. If this is correct, then the fact that our evidence cannot eliminate the sceptic’s alternatives does not lead to a sceptical result. Since knowledge requires only the elimination of the relevant alternatives, the relevant alternatives view preserves both strands in our thinking about knowledge: knowledge is an absolute concept, but because the absoluteness is relative to a standard, we can know many things.
The interesting thesis that counts as a causal theory of justification (in the sense of ‘causal theory’ intended here) is this: a belief is justified just in case it was produced by a type of process that is ‘globally’ reliable, that is, whose propensity to produce true beliefs, which can be defined (to a good approximation) as the proportion of the beliefs it produces (or would produce) that are true, is sufficiently great.
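Read this way, global reliability is simply a truth ratio over the beliefs a process type produces (or would produce), with ‘sufficiently great’ left vague. A minimal sketch follows; the 0.9 threshold is my own stand-in for that vague standard, not a figure from Goldman, and the example process types are invented:

```python
def truth_ratio(outcomes):
    """Proportion of true beliefs among those a process type produces."""
    return sum(outcomes) / len(outcomes)

def globally_reliable(outcomes, threshold=0.9):
    """A process type counts as globally reliable when its truth ratio is
    'sufficiently great' -- modelled here as an explicit cut-off."""
    return truth_ratio(outcomes) >= threshold

# Vision in good light: 19 of 20 beliefs true (ratio 0.95).
print(globally_reliable([True] * 19 + [False]))        # prints True
# Wishful thinking: 2 of 10 beliefs true (ratio 0.2).
print(globally_reliable([True] * 2 + [False] * 8))     # prints False
```

The sketch also makes visible the questions pressed below: everything turns on which beliefs count as outputs of ‘the’ process type, and in which world the ratio is computed.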
This proposal will be adequately specified only when we are told (i) how much of the causal history of a belief counts as part of the process that produced it, (ii) which of the many types to which the process belongs is the relevant type for purposes of assessing its reliability, and (iii) relative to which world or worlds the reliability of the process type is to be assessed: the actual world, the closest worlds containing the case being considered, or something else. Let us look at the answers suggested by Goldman, the leading proponent of a reliabilist account of justification.
(1) Goldman (1979, 1986) takes the relevant belief-producing process to include only the proximate causes internal to the believer. So, for instance, when I recently believed that the telephone was ringing, the process that produced the belief, for purposes of assessing reliability, includes just the causal chain of neural events from the stimulus in my ear inward, together with other concurrent brain states. To believe a proposition is to hold it to be true. The philosophical problem is to understand what kind of state of a person constitutes belief. Is it, for example, a simple disposition to behaviour? Or is it a more complex state that resists identification with any such disposition? Is a verbal skill or verbal behaviour essential to belief, in which case what is to be said about pre-linguistic infants, or non-linguistic animals? An evolutionary approach asks how the cognitive success of possessing the capacity to believe things relates to practical success. Further topics include discovering whether belief differs from other varieties of assent, such as acceptance; discovering whether belief is an all-or-nothing matter, or to what extent degrees of belief are possible; understanding the ways in which belief is controlled by rational and irrational factors; and discovering its links with other properties, such as the possession of conceptual or linguistic skills.
Nevertheless, it is not obvious why the process on which a belief’s justification depends should be restricted to internal causes proximate to the belief. Why? Goldman does not tell us. One answer some philosophers might give is that a belief’s being justified at a given time can depend only on facts directly accessible to the believer’s awareness at that time (for, if a believer ought to hold only beliefs that are justified, she should be able to tell at any given time what beliefs would then be justified for her). However, this cannot be Goldman’s answer, because he wishes to include in the relevant process neural events that are not directly accessible to consciousness.
(2) Once the reliabilist has told us how to delimit the process producing a belief, he needs to tell us which of the many types to which it belongs is the relevant type. Consider, for example, the process that produces your current belief that you see a book before you. One very broad type to which that process belongs would be specified by ‘coming to a belief as to something one perceives as a result of activation of the nerve endings in some of one’s sense-organs’. A narrower type to which the process belongs would be specified by ‘coming to a belief as to what one sees as a result of activation of the nerve endings in one’s retinas’. A still narrower type would be given by inserting in the last specification a description of a particular pattern of activation of the retinas’ particular cells. Which of these or other types to which the token process belongs is the relevant type for determining whether the type of process that produced your belief is reliable?
If we select a type that is too broad, we will count as having the same degree of justification various beliefs that intuitively seem to have different degrees of justification. Thus the broadest type we specified for your belief that you see a book before you applies also to perceptual beliefs where the object seen is far away and seen only briefly, which are intuitively less justified. On the other hand, if we are allowed to select a type that is as narrow as we please, then we can make it out that an obviously unjustified but true belief was produced by a reliable type of process. For example, suppose I see a blurred shape through the fog far off in a field and unjustifiedly, but correctly, believe that it is a sheep: if we include enough detail about my retinal image in specifying the type of the visual process that produced that belief, we can specify a type likely to have only that one instance and therefore to be 100 percent reliable. Goldman conjectures (1986) that the relevant process type is ‘the narrowest type that is causally operative’. Presumably, a feature of the process producing a belief was causally operative in producing it just in case, had some alternative feature been present instead, it would not have led to that belief. (We need to say ‘some’ here rather than ‘any’ because, for example, when I see an oak tree, the particular shape of my retinal image is causally operative in producing my belief that what I see is a tree, even though there are alternative shapes, for example ‘pinish’ or ‘birchish’ ones, that would have produced the same belief.)
(3) Should the justification of a belief in a hypothetical, non-actual example turn on the reliability of the belief-producing process in the possible world of the example? That leads to the implausible result that in a world run by a Cartesian demon (a powerful being who causes the other inhabitants of the world to have rich and coherent sets of perceptual and memory impressions that are all illusory), the perceptual and memory beliefs of those inhabitants are all unjustified, for they are produced by processes that are, in that world, quite unreliable. If we say instead that it is the reliability of the processes in the actual world that matters, we get the equally undesired result that if the actual world is a demon world, then our perceptual and memory beliefs are all unjustified.
Goldman’s solution (1986) is that the reliability of the process types is to be gauged by their performance in ‘normal’ worlds, that is, worlds consistent with ‘our general beliefs about the world . . . about the sorts of objects, events and changes that occur in it’. This gives the intuitively right results for the problem cases just considered, but it makes justification implausibly relative. If there are people whose general beliefs about the world are very different from mine, then there may, on this account, be beliefs that I can correctly regard as justified (ones produced by processes that are reliable in what I take to be a normal world) but that they can correctly regard as not justified.
However these questions about the specifics are dealt with, there are reasons for questioning the basic idea that the criterion for a belief’s being justified is its being produced by a reliable process. Doubt about the sufficiency of the reliabilist criterion is prompted by a sort of example that Goldman himself uses for another purpose. Suppose that being in brain-state ‘B’ always causes one to believe that one is in brain-state ‘B’. Here the reliability of the belief-producing process is perfect, but ‘we can readily imagine circumstances in which a person goes into brain-state B and therefore has the belief in question, though this belief is by no means justified’ (Goldman, 1979). Doubt about the necessity of the condition arises from the possibility that one might know that one has strong justification for a certain belief and yet that knowledge is not what actually prompts one to believe. For example, I might be well aware that, having read the weather bureau’s forecast that it will be much hotter tomorrow, I have ample reason to be confident that it will be hotter tomorrow; yet I irrationally refuse to believe it until Wally tells me that he feels in his joints that it will be hotter tomorrow. What prompts me to believe, in that case, is not my justification but my trust in Wally.
My belief is thus not produced by a reliable process, yet it is not beyond the reach of justification. Given my knowledge of the weather bureau’s prediction and of its evidential force, I cannot plausibly claim that I should not be holding the belief; indeed, given my justification, and given that there is nothing untoward about the weather bureau’s prediction, my belief, if true, can be counted as knowledge. This sort of example raises doubt whether any causal condition, be it a reliable process or something else, is necessary for either justification or knowledge.
Philosophers and scientists alike have often held that the simplicity or parsimony of a theory is one reason, all else being equal, to view it as true. This goes beyond the unproblematic idea that simpler theories are easier to work with and have greater aesthetic appeal.
One theory is more parsimonious than another when it postulates fewer entities, processes, changes or explanatory principles. The simplicity of a theory depends on essentially the same considerations, though parsimony and simplicity are not obviously the same thing. What makes one theory simpler or more parsimonious than another demands clarification before the justification of these methodological maxims can be addressed.
If we set this descriptive problem to one side, the major normative problem is as follows: What reason is there to think that simplicity is a sign of truth? Why should we accept a simpler theory instead of its more complex rivals? Newton and Leibniz thought that the answer was to be found in a substantive fact about nature. In “Principia,” Newton laid down as his first Rule of Reasoning in Philosophy that ‘nature does nothing in vain . . . for Nature is pleased with simplicity and affects not the pomp of superfluous causes’. Leibniz hypothesized that the actual world obeys simple laws because God’s taste for simplicity influenced his decision about which world to actualize.
The tragedy of the Western mind, described by Koyré, is a direct consequence of the stark Cartesian division between mind and world. We discovered the ‘certain principles of physical reality’, said Descartes, ‘not by the prejudices of the senses, but by the light of reason, and which thus possess so great evidence that we cannot doubt of their truth’. The question of why there is something and not nothing, where everything real belongs to the domain of ‘Being’, has traditionally been treated as the deepest of metaphysical questions. But modern logic gives little comfort to such speculations, and prompts the suspicion that the question is either ill-formed or profitless, since any intelligible answer will merely invite the same question. A central mistake in the area is to treat Being as a noun that identifies a particularly deep subject-matter. This is parallel to treating ‘Nothing’ as the name of a particular thing, perhaps an object of dread or fear. The modern logical treatment of these notions by means of ‘quantifiers’ and ‘variables’ provides a defence against this error and others. The less abstract part of the study of being concerns the kinds of things whose existence we have to acknowledge: abstract entities, possibilities, numbers, and so on, and disputes over their reality form the subject of ‘ontology’. Since what exists external to ‘us’ could be represented only in the quantitative terms of mathematics, Descartes concluded that all qualitative aspects of reality could be traced to the deceitfulness of the senses.
Once again, the nonexistence of all things is a concept that can be frightening, fascinating or dismissed as the product of the logical confusion of treating the term ‘nothing’ as itself a referring expression instead of a quantifier. This confusion leads the unwary to think that a sentence such as ‘nothing is all around us’ talks of a special kind of thing that is all around us, when in fact it merely denies that the predicate ‘is all around us’ has application. The feeling that led some philosophers and theologians, notably Heidegger, to talk of the experience of Nothing is not properly the experience of anything, but rather the failure of a hope or expectation that there would be something of some kind at some point. This may arise in quite everyday cases, as when one finds that the article of furniture one expected to see as usual in the corner has disappeared. The difference between ‘existentialist’ and ‘analytical’ philosophers on the point is that the former are afraid of Nothing, while the latter think that there is nothing to be afraid of. A rather different set of concerns arises when actions are specified in terms of doing nothing: saying nothing may be an admission of guilt, and doing nothing in some circumstances may be tantamount to murder. Other substantive problems arise over conceptualizing empty space and time.
Critics reply that omissions can be as deliberate and immoral as commissions: if I am responsible for your food and fail to feed you, my omission is surely a killing. ‘Doing nothing’ can be a way of doing something; in other words, absence of bodily movement can also constitute acting negligently or deliberately, and depending on the context may be a way of deceiving, betraying, or killing. Nevertheless, criminal law often finds it convenient to distinguish discontinuing an intervention, which is permissible, from bringing about a result, which may not be, if, for instance, the result is the death of a patient. The question is whether the difference, if there is one, between acting and omitting to act can be described or defined in a way that bears general moral weight.
The most fundamental aspect of the Western intellectual tradition is the assumption that there is a fundamental division between the material and the immaterial world, or between the realm of matter and the realm of pure mind or spirit. The metaphysical framework based on this assumption is known as ontological dualism. As the word dual implies, the framework is predicated on an ontology, or a conception of the nature of God or Being, that assumes reality has two distinct and separable dimensions. The concept of Being as continuous, immutable, and having a prior or separate existence from the world of change dates from the ancient Greek philosopher Parmenides. These same qualities were associated with the God of the Judeo-Christian tradition, and they were considerably amplified by the role played in theology by Platonic and Neoplatonic philosophy.
Nicolas Copernicus, Galileo, Johannes Kepler, and Isaac Newton were all inheritors of a cultural tradition in which ontological dualism was a primary article of faith. Hence the idealization of mathematics as a source of communion with God, which dates from Pythagoras, provided a metaphysical foundation for the emerging natural sciences. This explains why the creators of classical physics believed that doing physics was a form of communion with the geometrical and mathematical forms resident in the perfect mind of God. This view would survive in a modified form in what is now known as Einsteinian epistemology, and accounts in no small part for the reluctance of many physicists to accept the epistemology associated with the Copenhagen Interpretation.
At the beginning of the nineteenth century, Pierre-Simon Laplace, along with a number of other French mathematicians, advanced the view that the science of mechanics constituted a complete view of nature. Since this science, by observing its epistemology, had revealed itself to be the fundamental science, the hypothesis of God was, they concluded, entirely unnecessary.
Laplace is recognized for eliminating not only the theological component of classical physics but the ‘entire metaphysical component’ as well. The epistemology of science requires that we proceed by inductive generalization from observed facts to hypotheses that are ‘tested by observed conformity of the phenomena’. What was unique about Laplace’s view of hypotheses was his insistence that we cannot attribute reality to them. Although concepts like force, mass, motion, cause, and laws are obviously present in classical physics, they exist in Laplace’s view only as quantities. Physics is concerned, he argued, with quantities that we associate as a matter of convenience with concepts, and the truths about nature are only the quantities.
As this view of hypotheses and of the truths of nature as quantities was extended in the nineteenth century to a mathematical description of phenomena like heat, light, electricity, and magnetism, Laplace’s assumptions about the actual character of scientific truths seemed correct. This progress suggested that if we could remove all thoughts about the ‘nature of’ or the ‘source of’ phenomena, the pursuit of strictly quantitative concepts would bring us to a complete description of all aspects of physical reality. Subsequently, figures like Comte, Kirchhoff, Hertz, and Poincaré developed a program for the study of nature that was quite different from that of the original creators of classical physics.
The seventeenth-century view of physics as a philosophy of nature, or as natural philosophy, was displaced by the view of physics as an autonomous science that was ‘the science of nature’. This view, which was premised on the doctrine of positivism, promised to subsume all of nature under a mathematical analysis of entities in motion, and claimed that the true understanding of nature was revealed only in the mathematical description. Since the doctrine of positivism assumes that the knowledge we call physics resides only in the mathematical formalism of physical theory, it disallows the prospect that the vision of physical reality revealed in physical theory can have any other meaning. The irony in the history of science is that positivism, which was intended to banish metaphysical concerns from the domain of science, served to perpetuate a seventeenth-century metaphysical assumption about the relationship between physical reality and physical theory.
Epistemology since Hume and Kant has drawn back from this theological underpinning. Indeed, the very idea that nature is simple (or uniform) has come in for a critique. The view has taken hold that a preference for simple and parsimonious hypotheses is purely methodological: It is constitutive of the attitude we call ‘scientific’ and makes no substantive assumption about the way the world is.
A variety of otherwise diverse twentieth-century philosophers of science have attempted, in different ways, to flesh out this position; two examples must suffice here (see Hesse, 1969, for summaries of other proposals). Popper (1959) holds that scientists should prefer highly falsifiable (improbable) theories, and he tries to show that simpler theories are more falsifiable. Quine (1966), in contrast, sees a virtue in theories that are highly probable, and he argues for a general connection between simplicity and high probability.
Both these proposals are global. They attempt to explain why simplicity should be part of the scientific method in a way that spans all scientific subject matters. No assumption about the details of any particular scientific problem serves as a premiss in Popper’s or Quine’s arguments.
Newton and Leibniz thought that the justification of parsimony and simplicity flows from the hand of God; Popper and Quine try to justify these methodological maxims without assuming anything substantive about the way the world is. In spite of these differences in approach, they have something in common: they assume that all uses of parsimony and simplicity in the separate sciences can be encompassed in a single, unified justifying argument. Recent developments in confirmation theory suggest that this assumption should be scrutinized. Good (1983) and Rosenkrantz (1977) have emphasized the role of auxiliary assumptions in mediating the connection between hypotheses and observations. Whether a hypothesis is well supported by some observations, or whether one hypothesis is better supported than another by those observations, crucially depends on empirical background assumptions about the inference problem at hand. The same point applies to the idea of prior probability (or prior plausibility). If one hypothesis is preferred over another even though they are equally supported by current observations, this must be due to an empirical background assumption.
Principles of parsimony and simplicity mediate the epistemic connection between hypotheses and observations. Perhaps these principles are able to do this because they are surrogates for an empirical background theory. It is not that there is one background theory presupposed by every appeal to parsimony; this has the quantifier order backwards. Rather, the suggestion is that each parsimony argument is justified only to the degree that it reflects an empirical background theory about the subject-matter. Once this theory is brought out into the open, the principle of parsimony is entirely dispensable (Sober, 1988).
This ‘local’ approach to the principles of parsimony and simplicity resurrects the idea that they make sense only if the world is one way rather than another. It rejects the idea that these maxims are purely methodological. How defensible this point of view is will depend on detailed case studies of scientific hypothesis evaluation and on further developments in the theory of scientific inference.
An inference is a (perhaps very complex) act of thought by virtue of which (1) I pass from a set of one or more propositions or statements to a proposition or statement, and (2) it appears that the latter is true if the former is or are. This psychological characterization has been repeated in the literature with little more than inessential variation. It is natural to desire a better characterization of inference, yet attempts to construct a fuller psychological account fail to capture the grounds on which an inference is objectively valid, a point forcefully made by Gottlob Frege. Attempts to understand the nature of inference through the device of representing inferences by formal-logical calculations or derivations do no better: they (1) leave us puzzled about the relation of formal-logical derivations to the informal inferences they are supposed to represent or reconstruct, and (2) leave us worried about the sense of such formal derivations. Are these derivations inferences? Are not informal inferences needed in order to apply the rules governing the construction of formal derivations (inferring that this operation is an application of that formal rule)? These are concerns cultivated by, for example, Wittgenstein.
Coming up with an adequate characterization of inference, and even working out what would count as an adequate characterization here, is by no means a resolved philosophical problem.
Traditionally, a categorical proposition is one that is not a ‘conditional’. As with the ‘affirmative’ and the ‘negative’, modern opinion is wary of the distinction, since what appears categorical may vary with the choice of a primitive vocabulary and notation. Apparently categorical propositions may also turn out to be disguised conditionals: ‘X is intelligent’ (categorical?) is equivalent to ‘if X is given a range of tasks, she does them better than many people’ (conditional?). The problem is not merely one of classification, since deep metaphysical questions arise when facts that seem categorical, and therefore solid, come to seem by contrast conditional, or purely hypothetical or potential.
If ‘p’ is a necessary condition of ‘q’, then ‘q’ cannot be true unless ‘p’ is true; if ‘p’ is a sufficient condition of ‘q’, then the truth of ‘p’ guarantees the truth of ‘q’. Thus steering well is a necessary condition of driving in a satisfactory manner, but it is not sufficient, for one can steer well but drive badly for other reasons. Confusion may result if the distinction is not heeded. For example, the statement that ‘A’ causes ‘B’ may be interpreted to mean that ‘A’ is itself a sufficient condition for ‘B’, or that it is only a necessary condition for ‘B’, or perhaps a necessary part of a total sufficient condition. Lists of conditions to be met for satisfying some administrative or legal requirement frequently attempt to give individually necessary and jointly sufficient sets of conditions.
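The two directions of dependence can be checked mechanically over a list of cases. A minimal sketch (the steering/driving predicates and the case list are invented stand-ins for the example above):

```python
def sufficient(p, q, cases):
    """p is sufficient for q: in every case where p holds, q holds."""
    return all(q(c) for c in cases if p(c))

def necessary(p, q, cases):
    """p is necessary for q: in every case where q holds, p holds."""
    return all(p(c) for c in cases if q(c))

# Each case: (steers_well, drives_well). We exclude the impossible case of
# driving well while steering badly, since good driving requires good steering.
cases = [(True, True), (True, False), (False, False)]

steers = lambda c: c[0]
drives = lambda c: c[1]

print(necessary(steers, drives, cases))   # True: no good driving without good steering
print(sufficient(steers, drives, cases))  # False: one can steer well yet drive badly
```

Swapping the roles of the two predicates shows the converse: driving well is sufficient, though not necessary, for steering well.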
Consider, further, any proposition of the form ‘if p then q’. The condition hypothesized, ‘p’, is called the antecedent of the conditional, and ‘q’ the consequent. Various kinds of conditional have been distinguished. The weakest is that of ‘material implication’, which merely says that either ‘not-p’ or ‘q’ is the case. Stronger conditionals include elements of ‘modality’, corresponding to the thought that if ‘p’ is true then ‘q’ must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy whether conditionals are better treated semantically, yielding different kinds of conditionals with different meanings, or pragmatically, in which case there should be one basic meaning with surface differences arising from other implicatures.
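Material implication, the weakest conditional, is fully captured by a truth table: ‘if p then q’ read materially fails only when ‘p’ is true and ‘q’ is false. A short sketch:

```python
from itertools import product

def material_implication(p, q):
    """'if p then q' read materially: equivalent to (not p) or q."""
    return (not p) or q

# Print the full truth table over the four assignments to p and q.
for p, q in product([True, False], repeat=2):
    print(f"p={p!s:5} q={q!s:5} ->  {material_implication(p, q)}")

# Only the row p=True, q=False comes out False.
```

The stronger, modal conditionals mentioned above are not captured by this table; they require quantifying over possibilities, as the next sketch illustrates.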
It follows from the definition of ‘strict implication’ that a necessary proposition is strictly implied by any proposition, and that an impossible proposition strictly implies any proposition. If strict implication corresponds to ‘q follows from p’, then this means that a necessary proposition follows from anything at all, and anything at all follows from an impossible proposition. This is a problem if we wish to distinguish between valid and invalid arguments with necessary conclusions or impossible premises.
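The paradoxes of strict implication can be made vivid by modeling propositions as sets of possible worlds: ‘p’ strictly implies ‘q’ when every world where ‘p’ holds is a world where ‘q’ holds. The three-world model and the propositions below are illustrative only.

```python
worlds = {"w1", "w2", "w3"}

def strictly_implies(p, q):
    """p strictly implies q: q holds in every world where p holds."""
    return p <= q  # subset relation on sets of worlds

necessary_prop  = set(worlds)  # true in every world
impossible_prop = set()        # true in no world
contingent_prop = {"w1", "w2"}

# A necessary proposition is strictly implied by *any* proposition...
print(strictly_implies(contingent_prop, necessary_prop))   # True
# ...and an impossible proposition strictly implies *any* proposition.
print(strictly_implies(impossible_prop, contingent_prop))  # True
```

Both results are vacuously correct by the subset definition, which is precisely why strict implication sits awkwardly with the intuitive notion of one proposition following from another.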
The Humean problem of induction supposes that there is some property ‘A’ pertaining to an observational or experimental situation, and that out of a large number of observed instances of ‘A’, some fraction m/n (possibly equal to 1) have also been instances of some logically independent property ‘B’. Suppose further that the background circumstances, not specified in these descriptions, have been varied to a substantial degree, and that there is no other information available concerning the frequency of B’s among the A’s or concerning causal or nomological connections between instances of ‘A’ and instances of ‘B’.
In this situation, an ‘enumerative’ or ‘instantial’ inductive inference would move from the premise that m/n of observed A’s are B’s to the conclusion that approximately m/n of all A’s are B’s. (The usual probability qualification will be assumed to apply to the inference, rather than being part of the conclusion.) Here the class of A’s should be taken to include not only unobserved A’s and future A’s, but also possible or hypothetical A’s. (An alternative conclusion would concern the probability or likelihood of the next observed ‘A’ being a ‘B’.)
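The schema can be given a toy numerical form: estimate the proportion of all A’s that are B’s by the observed proportion m/n. Everything below (the true proportion, the sample size, the random sampling) is invented for illustration; the true proportion is, of course, exactly what the reasoner does not know.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TRUE_PROPORTION = 0.8  # unknown to the reasoner: fraction of all A's that are B's

# Observe n instances of A and record whether each is also a B.
n = 1000
observations = [random.random() < TRUE_PROPORTION for _ in range(n)]

m = sum(observations)
estimate = m / n  # the enumerative-inductive conclusion: ~m/n of all A's are B's

print(f"observed {m}/{n} A's were B's; conclude roughly {estimate:.2f} of all A's are B's")
```

Nothing in the code answers Hume's question: the step from the sample to the whole class is simply built in, which is exactly the inference whose rational credentials are at issue.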
The traditional or Humean problem of induction, often referred to simply as ‘the problem of induction’, is the problem of whether and why inferences that fit this schema should be considered rationally acceptable or justified from an epistemic or cognitive standpoint, i.e., whether and why reasoning in this way is likely to lead to true claims about the world. Is there any sort of argument or rationale that can be offered for thinking that conclusions reached in this way are likely to be true if the corresponding premises are true, or even that their chances of truth are significantly enhanced?
Hume’s discussion of this issue deals explicitly only with cases where all observed A’s are B’s, but his argument applies just as well to the more general case. His conclusion is entirely negative and sceptical: inductive inferences are not rationally justified, but are instead the result of an essentially arational process, custom or habit. Hume (1711-76) challenges the proponent of induction to supply a cogent line of reasoning that leads from an inductive premise to the corresponding conclusion, and offers an extremely influential argument in the form of a dilemma (sometimes referred to as ‘Hume’s fork’): any such reasoning must be either demonstrative or experimental, and neither kind can do the job.
Such reasoning would, he argues, have to be either deductively demonstrative reasoning concerning relations of ideas or ‘experimental’, i.e., empirical, reasoning concerning matters of fact or existence. It cannot be the former, because all demonstrative reasoning relies on the avoidance of contradiction, and it is no contradiction to suppose that ‘the course of nature may change’, that an order observed in the past will not continue into the future. But it cannot be the latter either, since any empirical argument would appeal to the success of such reasoning in past experience, and the justifiability of generalizing from experience is precisely what is at issue, so that any such appeal would be question-begging. Hence, Hume concludes that there can be no such reasoning (1748).
An alternative version of the problem may be obtained by formulating it with reference to the so-called Principle of Induction, which says roughly that the future will resemble the past or, somewhat better, that unobserved cases will resemble observed cases. An inductive argument may be viewed as enthymematic, with this principle serving as a suppressed premiss, in which case the issue is obviously how such a premiss can be justified. Hume’s argument is then that no such justification is possible: the principle cannot be justified a priori, because it is not contradictory to suppose it false; and it cannot be justified by appeal to its having held in past experience without obviously begging the question.
The predominant recent responses to the problem of induction, at least in the analytic tradition, in effect accept the main conclusion of Hume’s argument, namely, that inductive inferences cannot be justified in the sense of showing that the conclusion of such an inference is likely to be true if the premise is true, and thus attempt to find another sort of justification for induction. Such responses fall into two main categories: (1) pragmatic justifications or ‘vindications’ of induction, mainly developed by Hans Reichenbach (1891-1953), and (2) ordinary language justifications of induction, whose most important proponent is Peter Frederick Strawson (1919- ). In contrast, some philosophers still attempt to reject Hume’s dilemma by arguing either (3) that, contrary to appearances, induction can be inductively justified without vicious circularity, or (4) that an a priori justification of induction is possible after all.
(1) Reichenbach’s view is that induction is best regarded not as a form of inference but rather as a ‘method’ for arriving at posits regarding, for example, the proportion of A’s that are also B’s. Such a posit is not a claim asserted to be true, but is instead an intellectual wager analogous to a bet made by a gambler. Understood in this way, the inductive method says that one should posit that the observed proportion is, within some measure of approximation, the true proportion, and then continually correct that initial posit as new information comes in.
The gambler’s bet is normally an ‘appraised posit’, i.e., he knows the chances or odds that the outcome on which he bets will actually occur. In contrast, the inductive bet is a ‘blind posit’: we do not know the chances that it will succeed, or even that success is possible. What we are gambling on when we make such a bet is the value of a certain proportion in the independent world, which Reichenbach construes as the limit of the observed proportion as the number of cases increases to infinity. Nevertheless, we have no way of knowing that there is even such a limit, and no way of knowing that the proportion of A’s that are B’s converges in the end on some stable value rather than varying at random. If we cannot know that this limit exists, then we obviously cannot know that we have any definite chance of finding it.
What we can know, according to Reichenbach, is that if there is a truth of this sort to be found, the inductive method will eventually find it. That this is so is an analytic consequence of Reichenbach’s account of what it is for such a limit to exist. The only way that the inductive method of making an initial posit and then refining it in light of new observations can fail eventually to arrive at the true proportion is if the series of observed proportions never converges on any stable value, which means that there is no truth to be found concerning the proportion of A’s that are B’s. Thus, induction is justified, not by showing that it will succeed, or indeed that it has any definite likelihood of success, but only by showing that it will succeed if success is possible. Reichenbach’s claim is that no more than this can be established for any method, and hence that induction gives ‘us’ our best chance for success, our best gamble in a situation where there is no alternative to gambling.
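Reichenbach’s method of repeated correction can be sketched as follows: posit the running observed frequency, revising with each new observation. If the sequence of observed frequencies converges to a limit at all, the posits converge with it. The Bernoulli data stream below is an invented stand-in for any source of observations; a real positing agent would not know the limiting frequency.

```python
import random

random.seed(1)  # fixed seed so the run is reproducible

def straight_rule_posits(stream):
    """Yield the running observed frequency after each observation:
    Reichenbach's repeatedly corrected 'blind posit'."""
    successes = 0
    for i, outcome in enumerate(stream, start=1):
        successes += outcome
        yield successes / i

LIMIT = 0.6  # the limiting frequency, unknown to the positing agent
stream = (random.random() < LIMIT for _ in range(20000))

posits = list(straight_rule_posits(stream))
print(posits[9], posits[-1])  # compare an early posit with the final one
```

The guarantee this illustrates is exactly as weak as the text says: if the frequencies never settle down, the posits never settle down either, and nothing in the method tells us in advance which situation we are in.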
This pragmatic response to the problem of induction faces several serious problems. First, there are indefinitely many other ‘methods’ for arriving at posits for which the same sort of defence can be given: methods that yield the same result as the inductive method in the long run but differ arbitrarily in the short run. Despite the efforts of others, it is unclear that there is any satisfactory way to exclude such alternatives, in order to avoid the result that any arbitrarily chosen short-term posit is just as reasonable as the inductive posit. Second, even if there is a truth of the requisite sort to be found, the inductive method is only guaranteed to find it, or even to come within any specifiable distance of it, in the indefinite long run; yet any actual application of inductive results takes place in the short run, making the relevance of the pragmatic justification to actual practice uncertain. Third, and most important, it needs to be emphasized that Reichenbach’s response to the problem simply accepts the claim of the Humean sceptic that an inductive premise never provides the slightest reason for thinking that the corresponding inductive conclusion is true. Reichenbach himself is quite candid on this point, but this does not alleviate the intuitive implausibility of saying that we have no more reason for thinking that our scientific and commonsense inductive conclusions are true than, to use Reichenbach’s own analogy (1949), a blind man wandering in the mountains who feels an apparent trail with his stick has for thinking that following it will lead him to safety.
An approach to induction resembling Reichenbach’s, in claiming that particular inductive conclusions are posits or conjectures rather than the conclusions of cogent inferences, is offered by Popper. However, Popper’s view is even more overtly sceptical: it amounts to saying that all that can ever be said in favour of the truth of an inductive claim is that the claim has been tested and not yet been shown to be false.
(2) The ordinary language response to the problem of induction has been advocated by many philosophers. Strawson claims that the question whether induction is justified or reasonable makes sense only if it tacitly involves the demand that inductive reasoning meet the standards appropriate to deductive reasoning, i.e., that the inductive conclusion be shown to follow deductively from the inductive premise. Such a demand cannot, of course, be met, but only because it is illegitimate: inductive and deductive reasoning are simply fundamentally different kinds of reasoning, each possessing its own autonomous standards, and there is no reason to demand or expect that one of these kinds meet the standards of the other. If induction is assessed by inductive standards, the only ones that are appropriate, then it is obviously justified.
The problem here is to understand what this allegedly obvious justification of induction amounts to. In his main discussion of the point (1952), Strawson claims that it is an analytic truth that believing a conclusion for which there is strong evidence is reasonable, and an analytic truth that inductive evidence of the sort captured by the schema presented earlier constitutes strong evidence for the corresponding inductive conclusion, thus apparently yielding the analytic conclusion that believing a conclusion for which there is inductive evidence is reasonable. Nevertheless, he also admits, indeed insists, that the claim that inductive conclusions will be true in the future is contingent, empirical, and may turn out to be false (1952). Thus, the notion of reasonable belief and the correlative notion of strong evidence must apparently be understood in ways that have nothing to do with likelihood of truth, presumably by appeal to the standards of reasonableness and strength of evidence that are accepted by the community and are embodied in ordinary usage.
Understood in this way, Strawson’s response to the problem of induction does not speak to the central issue raised by Humean scepticism: the issue of whether the conclusions of inductive arguments are likely to be true. It amounts to saying merely that if we reason in this way, we can correctly call ourselves ‘reasonable’ and our evidence ‘strong’, according to our accepted community standards. But on the underlying issue of whether following these standards is a good way to find the truth, the ordinary language response appears to have nothing to say.
(3) The main attempts to show that induction can be justified inductively have concentrated on showing that such a defence can avoid circularity. Skyrms (1975) formulates perhaps the clearest version of this general strategy. The basic idea is to distinguish different levels of inductive argument: a first level at which induction is applied to things other than arguments; a second level at which it is applied to arguments at the first level, arguing that they have been observed to succeed so far and hence are likely to succeed in general; a third level at which it is applied in the same way to arguments at the second level; and so on. Circularity is allegedly avoided by treating each of these levels as autonomous and justifying the arguments at each level by appeal to an argument at the next level.
One problem with this sort of move is that even if circularity is avoided, the movement to higher and higher levels will clearly eventually fail simply for lack of evidence: a level will be reached at which there have not been enough successful inductive arguments to provide a basis for inductive justification at the next higher level, and if this is so, then the whole series of justifications collapses. A more fundamental difficulty is that the epistemological significance of the distinction between levels is obscure. If the issue is whether reasoning in accord with the original schema offered above ever provides a good reason for thinking that the conclusion is likely to be true, then it still seems question-begging, even if not flatly circular, to answer this question by appeal to another argument of the same form.
(4) The idea that induction can be justified on a purely a priori basis is in one way the most natural response of all: it alone treats an inductive argument as an independently cogent piece of reasoning whose conclusion can be seen rationally to follow, although perhaps only with probability, from its premise. Such an approach has, however, only rarely been advocated (Russell, 1913, and BonJour, 1986), and is widely thought to be clearly and demonstrably hopeless.
Many of the reasons for this pessimistic view depend on general epistemological theses about the possibility or nature of a priori cognition. Thus if, as Quine alleges, there is no a priori justification of any kind, then obviously an a priori justification for induction is ruled out. Or if, as more moderate empiricists have held, a priori justification is limited to claims that are analytic, then again an a priori justification for induction seems to be precluded, since the claim that if an inductive premise is true then the conclusion is likely to be true does not fit the standard conceptions of 'analyticity'. A full consideration of these matters is beyond the scope of the present discussion.
There are, however, two more specific and quite influential reasons for thinking that an a priori approach is impossible, and these can be briefly considered. First, there is the assumption, originating with Hume but widely adopted since, that a successful a priori defence of induction would have to involve 'turning induction into deduction', i.e., showing, per impossibile, that the inductive conclusion follows deductively from the premise, so that it is a formal contradiction to accept the latter and deny the former. However, it is unclear why an a priori approach need be committed to anything this strong. It would be enough if it could be argued that it is a priori highly unlikely that such a premise be true and the corresponding conclusion false.
Second, Reichenbach defends his view that a pragmatic justification is the best that is possible by pointing out that a completely chaotic world, in which there is simply no true conclusion to be found as to the proportion of A's that are B's, is neither impossible nor unlikely from a purely a priori standpoint, the suggestion being that therefore there can be no a priori reason for thinking that such a conclusion is true. Nevertheless, there is a gap in this reasoning: that a chaotic world is not a priori impossible, in the absence of any further evidence, does not show that such a world is not a priori unlikely; and a world containing such-and-such a regularity might a priori be somewhat likely in relation to the occurrence of a long-run pattern of evidence in which a certain stable proportion of observed A's are B's, an occurrence, it might be claimed, that would be highly unlikely in a chaotic world (BonJour, 1986).
Goodman's 'new riddle of induction' asks us to suppose that before some specific time 't' (perhaps the year 2000) we observe a large number of emeralds (property A) and find them all to be green (property B). We proceed to reason inductively and conclude that all emeralds are green. Goodman points out, however, that we could have drawn a quite different conclusion from the same evidence. If we define the term 'grue' to mean 'green if examined before t and blue if examined after t', then all of our observed emeralds will also be grue. A parallel inductive argument will yield the conclusion that all emeralds are grue, and hence that all those examined after the year 2000 will be blue. Presumably the first of these conclusions is genuinely supported by our observations and the second is not. Nevertheless, the problem is to say why this is so and to impose some further restriction upon inductive reasoning that will permit the first argument and exclude the second.
The obvious alternative suggestion is that 'grue' and similar predicates do not correspond to genuine, purely qualitative properties in the way that 'green' and 'blue' do, and that this is why inductive arguments involving them are unacceptable. Goodman, however, claims to be unable to make clear sense of this suggestion, pointing out that the relations of formal definability are perfectly symmetrical: 'grue' may be defined in terms of 'green' and 'blue', but 'green' can equally well be defined in terms of 'grue' and 'bleen' (where 'bleen' means blue if examined before 't' and green if examined after 't').
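Goodman's symmetry point can be sketched in code. This is only a toy illustration: the cutoff date, the colour strings, and the function names are all invented for the example, not drawn from Goodman.

```python
from datetime import date

# Assumed cutoff time t (the text's example uses the year 2000).
T = date(2000, 1, 1)

def grue(colour, examined_on):
    # grue: green if examined before t, blue if examined after t
    return colour == "green" if examined_on < T else colour == "blue"

def bleen(colour, examined_on):
    # bleen: blue if examined before t, green if examined after t
    return colour == "blue" if examined_on < T else colour == "green"

def green_via_grue(colour, examined_on):
    # Goodman's symmetry: 'green' is definable from 'grue' and 'bleen'
    # exactly as 'grue' is definable from 'green' and 'blue'.
    return grue(colour, examined_on) if examined_on < T else bleen(colour, examined_on)

# An emerald observed before t satisfies both 'green' and 'grue':
print(grue("green", date(1999, 6, 1)))            # -> True
# and the roundabout definition of 'green' agrees with the direct one:
print(green_via_grue("green", date(2005, 6, 1)))  # -> True
```

Neither pair of definitions is formally simpler than the other; the asymmetry, if there is one, must lie elsewhere, which is Goodman's point.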
The grue paradox demonstrates the importance of categorization. Something is grue if examined before future time 't' and green, or not so examined and blue. Even though all emeralds in our evidence class are grue, we ought not to infer that all emeralds are grue. For 'grue' is non-projectible, and cannot transmit credibility from known to unknown cases. Only projectible predicates are right for induction. Goodman considers entrenchment the key to projectibility: having a long history of successful projection, 'green' is entrenched; lacking such a history, 'grue' is not. A hypothesis is projectible, Goodman suggests, only if its predicates (or suitably related ones) are much better entrenched than those of its rivals. Past successes do not guarantee future ones, so induction remains a risky business. The rationale for favouring entrenched predicates is pragmatic: of the possible projections from our evidence class, the one that fits with past practices enables us to utilize our cognitive resources best. Its prospects of being true are no worse than its competitors', and its cognitive utility is greater.
A paradox arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions. To solve a paradox will involve showing either that there is a hidden flaw in the premises, or that the reasoning is erroneous, or that the apparently unacceptable conclusion can, in fact, be tolerated. Paradoxes are therefore important in philosophy, for until one is solved it shows that there is something about our reasoning and our concepts that we do not understand. Famous families of paradoxes include the 'semantic paradoxes' and 'Zeno's paradoxes'. At the beginning of the 20th century, Russell's paradox and other set-theoretic paradoxes led to the complete overhaul of the foundations of set theory, while the 'Sorites paradox' led to the investigation of the semantics of 'vagueness' and 'fuzzy logics'.
Returning to induction: the term is most widely used for any process of reasoning that takes us from empirical premises to empirical conclusions supported by the premises, but not deductively entailed by them. Inductive arguments are therefore kinds of ampliative arguments, in which something beyond the content of the premises is inferred as probable or supported by them. Induction is, however, commonly distinguished from arguments to theoretical explanations, which share this ampliative character, by being confined to inferences in which the conclusion involves the same properties or relations as the premises. The central example is induction by simple enumeration, where from premises telling us that Fa, Fb, Fc . . ., where a, b, c . . . are all of some kind 'G', it is inferred that G's from outside the sample, such as future G's, will be 'F', or perhaps that all G's are 'F'. In this way, finding that this and that person deceive them, children may infer that everyone is a deceiver. Different but similar inferences run from the past possession of a property by some object to the same object's future possession of the same property, or from the constancy of some law-like pattern in events and states of affairs to its future constancy. All objects we know of attract each other with a force inversely proportional to the square of the distance between them, so perhaps they all do so, and will always do so.
The rational basis of any such inference was challenged by Hume, who believed that induction presupposed belief in the uniformity of nature, but that this belief has no defence in reason, and merely reflects a habit or custom of the mind. Hume was not, however, sceptical about the propriety of the process of induction itself, but only about the role of reason in either explaining it or justifying it. Trying to answer Hume, and to show that there is something rationally compelling about the inference, is referred to as the problem of induction. It is widely recognized that any rational defence of induction will have to partition well-behaved properties for which the inference is plausible (often called projectible properties) from badly behaved ones, for which it is not. It is also recognized that actual inductive habits are more complex than those of simple enumeration, and that both common sense and science pay attention to such factors as variations within the sample giving us the evidence, the application of ancillary beliefs about the order of nature, and so on.
Nevertheless, the fundamental problem remains that any experience shows us only events occurring within a very restricted part of a vast spatial and temporal order, about which we then come to believe things.
Confirmation theory is the theory of the measure to which evidence supports a theory. A fully formalized confirmation theory would dictate the degree of confidence that a rational investigator might have in a theory, given some body of evidence. The grandfather of confirmation theory is Gottfried Leibniz (1646-1716), who believed that a logically transparent language of science would be able to resolve all disputes. In the 20th century a fully formal confirmation theory was a main goal of the logical positivists, since without it the central concept of verification by empirical evidence itself remains distressingly unscientific. The principal developments were due to Rudolf Carnap (1891-1970), culminating in his 'Logical Foundations of Probability' (1950). Carnap's idea was that the required measure would be the proportion of logically possible states of affairs in which the theory and the evidence both hold, compared to the number in which the evidence itself holds: the probability of a proposition, relative to some evidence, is the proportion of the range of possibilities under which the proposition is true, compared to the total range of possibilities left by the evidence. The difficulty with the theory lies in identifying sets of possibilities so that they admit of measurement. It demands that we can put a measure on the 'range' of possibilities consistent with theory and evidence, compared with the range consistent with the evidence alone.
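Carnap's range-of-possibilities idea can be illustrated with a toy model. This is a sketch only: the tiny finite 'language', the raven atoms, and the uniform measure over states are simplifying assumptions for illustration; Carnap's actual confirmation measures were considerably more refined.

```python
from itertools import product

# A toy Carnap-style measure: the degree of confirmation of hypothesis
# h by evidence e is the proportion of possible states of affairs
# satisfying e in which h also holds (uniform measure assumed).
ATOMS = ["raven_a_black", "raven_b_black", "raven_c_black"]

def states():
    # every logically possible assignment of truth-values to the atoms
    return [dict(zip(ATOMS, vals))
            for vals in product([True, False], repeat=len(ATOMS))]

def confirmation(h, e):
    e_states = [s for s in states() if e(s)]   # states where evidence holds
    both = [s for s in e_states if h(s)]       # ... and the hypothesis too
    return len(both) / len(e_states)

# evidence: ravens a and b are black; hypothesis: all three are black
e = lambda s: s["raven_a_black"] and s["raven_b_black"]
h = lambda s: all(s.values())
print(confirmation(h, e))  # -> 0.5
```

Even this toy shows the central difficulty the text goes on to mention: the measure depends entirely on how the space of states is carved up, i.e., on the language chosen.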
Among the obstacles the enterprise meets is the fact that while evidence covers only a finite range of data, the hypotheses of science may cover an infinite range. In addition, confirmation proves to vary with the language in which the science is couched, and the Carnapian programme has difficulty in separating genuinely confirming variety of evidence from less compelling repetition of the same experiment. Confirmation also proved to be susceptible to acute paradoxes. Finally, scientific judgement seems to depend on such intangible factors as the problems facing rival theories, and most workers have come to stress instead the historically situated sense of what counts as a plausible claim to scientific knowledge at a given time.
A paradox, we have said, arises when a set of apparently incontrovertible premises gives unacceptable or contradictory conclusions. Speaking somewhat loosely, a paradox is a compelling argument from unacceptable premises to an unacceptable conclusion; more strictly speaking, a paradox is specified to be a sentence that is true if and only if it is false. A standard example is the sentence: 'The displayed sentence is false.'
Seeing that this sentence is false if true, and true if false, is easy. A paradox, in either of the senses distinguished, presents an important philosophical challenge. Epistemologists are especially concerned with various paradoxes having to do with knowledge and belief. For example, the Knower paradox is an argument that begins with apparently impeccable premises about the concepts of knowledge and inference and derives an explicit contradiction. The origin of the reasoning is the 'surprise examination paradox': a teacher announces that there will be a surprise examination next week. A clever student argues that this is impossible. 'The test cannot be on Friday, the last day of the week, because it would not be a surprise: we would know the day of the test on Thursday evening. This means we can also rule out Thursday, for after we learn that no test has been given by Wednesday, we would know the test is on Thursday or Friday, and we would already know that it is not on Friday, by the reasoning previously given. The remaining days can be eliminated in the same manner.'
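The student's backward elimination can be mechanized, under the student's own (contested) assumption that a day is ruled out once it is the only candidate left, since the exam could then be predicted the evening before. As noted below, every commentator since 1950 agrees the argument is unsound; the sketch merely shows the order in which the days get struck out.

```python
# Toy mechanization of the student's reasoning in the surprise
# examination paradox. The day names are the text's example week.
DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def elimination_order(days):
    candidates = list(days)
    struck = []
    while candidates:
        # The last remaining candidate would not be a surprise,
        # so the student strikes it and repeats on the rest.
        struck.append(candidates.pop())
    return struck

print(elimination_order(DAYS))  # -> ['Fri', 'Thu', 'Wed', 'Tue', 'Mon']
```

The interest of the paradox lies precisely in where this tidy-looking recursion goes wrong, which is the diagnostic controversy the next paragraphs describe.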
This puzzle has over a dozen variants. The first was probably invented by the Swedish mathematician Lennart Ekbom in 1943. Although the first few commentators regarded the backward elimination argument as cogent, every writer on the subject since 1950 agrees that the argument is unsound. The controversy has been over the proper diagnosis of the flaw.
Initial analyses of the student's argument tried to lay the blame on a simple equivocation. Their failure led to more sophisticated diagnoses. The general format has been an assimilation to better-known paradoxes. One tradition casts the surprise examination paradox as a self-referential problem, fundamentally akin to the Liar, the paradox of the Knower, or Gödel's incompleteness theorem. In this spirit, Kaplan and Montague (1960) distilled the following 'self-referential' paradox, the Knower. Consider the sentence: (S) The negation of this sentence is known (to be true). Suppose that (S) is true. Then its negation is known and hence true. However, if its negation is true, then (S) must be false. Therefore (S) is false; in other words, the negation of (S) is true. But this conclusion has itself been reached by apparently sound reasoning, so the negation of (S) is known, and hence (S) is true after all: a contradiction.
This paradox and its accompanying reasoning are strongly reminiscent of the Liar Paradox, which (in one version) begins by considering the sentence 'This sentence is false' and derives a contradiction. Versions of both arguments using axiomatic formulations of arithmetic and Gödel numbering to achieve the effect of self-reference yield important meta-theorems about what can be expressed in such systems. Roughly, these are to the effect that no predicate definable in formalized arithmetic can have the properties we demand of truth (Tarski's Theorem) or of knowledge (Montague, 1963).
Nevertheless, these meta-theorems still leave us with a problem: if we add to these formalized languages predicates intended to express the concepts of knowledge (or truth) and inference, as one might do if a logic of these concepts is desired, then the sentences expressing the leading principles of the Knower paradox will be true.
Explicitly, the assumptions about knowledge and inference are:
(1) If a sentence 'A' is known, then 'A'.
(2) (1) is known.
(3) If ‘B’ is correctly inferred from ‘A’, and ‘A’ is known, then ‘B’ is known.
To give an absolutely explicit derivation of the paradox by applying these principles to (S), we must add (contingent) assumptions to the effect that certain inferences have been performed. Still, as we go through the argument of the Knower, these inferences are performed. Even if we can somehow restrict such principles and construct a consistent formal logic of knowledge and inference, the paradoxical argument as expressed in natural language still demands some explanation.
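The derivation just described can be set out schematically (a sketch only, writing K(φ) for 'φ is known' and appealing to principles (1)-(3) above):

```latex
% (S) says: K(\neg S). A sketch of the paradoxical derivation.
\begin{align*}
&1.\ S \leftrightarrow K(\neg S)    && \text{definition of } S\\
&2.\ K(\neg S) \rightarrow \neg S   && \text{instance of principle (1)}\\
&3.\ S \rightarrow \neg S           && \text{from 1 and 2}\\
&4.\ \neg S                         && \text{from 3}\\
&5.\ K(\neg S)                      && \text{4 was correctly inferred, by (2) and (3)}\\
&6.\ S                              && \text{from 5, by the definition of } S\\
&7.\ S \wedge \neg S                && \text{from 4 and 6: contradiction}
\end{align*}
```

Step 5 is where the contingent assumption that the inference has actually been performed enters, which is why the text insists on adding such assumptions.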
There are a number of paradoxes of the Liar family. The simplest example is the sentence 'This sentence is false', which must be false if it is true, and true if it is false. One suggestion is that the sentence fails to say anything; but sentences that fail to say anything are at least not true. In that case, consider the sentence 'This sentence is not true', which, if it fails to say anything, is not true, and hence seems to say something true after all (this kind of reasoning is sometimes called the strengthened Liar). Other versions of the Liar introduce pairs of sentences, as in a slogan on the front of a T-shirt saying 'The sentence on the back of this T-shirt is false', and one on the back saying 'The sentence on the front of this T-shirt is true'. Each sentence individually is well formed, and were it not for the other, might have said something true. So any attempt to dismiss the paradox by saying that the sentences involved are meaningless will face problems.
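The T-shirt pair can be checked by brute force: if each sentence had a classical truth-value, some assignment of True/False to the pair would have to respect both sentences' truth conditions. A small sketch (the encoding of the truth conditions is ours):

```python
from itertools import product

# front: "The sentence on the back of this T-shirt is false."
# back:  "The sentence on the front of this T-shirt is true."
def consistent(front, back):
    # front is true exactly when the back sentence is false,
    # and back is true exactly when the front sentence is true.
    return front == (not back) and back == front

solutions = [(f, b) for f, b in product([True, False], repeat=2)
             if consistent(f, b)]
print(solutions)  # -> []  (no consistent assignment exists)
```

All four assignments fail, which is just the paradox restated: no classical truth-value assignment to the pair is coherent, even though each sentence taken alone looks unproblematic.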
Even so, two approaches that have some hope of adequately dealing with this paradox are 'hierarchy' solutions and 'truth-value gap' solutions. According to the first, knowledge is structured into 'levels'. It is argued that there is not one unitary notion expressed by the verb 'knows', but rather a whole series of notions: knows-zero, knows-one, and so on (perhaps into the transfinite). Stated in terms of predicates expressing such 'ramified' concepts and properly restricted, (1)-(3) lead to no contradictions. The main objections to this procedure are that the meaning of these levels has not been adequately explained, and that the idea of such subscripts, even implicit ones, in a natural language is highly counterintuitive. The 'truth-value gap' solution takes sentences such as (S) to lack truth-value: they are neither true nor false, and do not express propositions. This defeats a crucial step in the reasoning used in the derivation of the paradoxes. Kripke (1975) has developed this approach in connection with the Liar, and Asher and Kamp (1986) have worked out some details of a parallel solution to the Knower. The principal objection is that 'strengthened' or 'super' versions of the paradoxes tend to reappear when the solution itself is stated.
Since the paradoxical deduction uses only the properties (1)-(3), and since the argument is formally valid, any notion that satisfies these conditions will lead to a paradox. Thus, Grim (1988) notes that 'is known' may be read as 'is known by an omniscient God' and concludes that there is no coherent single notion of omniscience. Thomason (1980) observes that with some different conditions, analogous reasoning about belief can lead to paradoxical consequences.
Overall, it looks as if we should conclude that knowledge and truth are ultimately intrinsically 'stratified' concepts. It would seem that we must simply accept the fact that these (and similar) concepts cannot be assigned any single fixed level, finite or infinite. Still, the meaning of this idea certainly needs further clarification.
To what extent, however, can analysis be informative? This is the question that gives rise to what philosophers have traditionally called 'the' paradox of analysis. Thus, consider the following proposition:
(1) To be an instance of knowledge is to be an instance of justified true belief not essentially grounded in any falsehood.

(1), if true, illustrates an important type of philosophical analysis. For convenience of exposition, let us assume that (1) is a correct analysis. The paradox arises from the fact that if the concept of justified true belief not essentially grounded in any falsehood is the analysans of the concept of knowledge, it would seem that they are the same concept, and hence that:

(2) To be an instance of knowledge is to be an instance of knowledge.

would have to be the same proposition as (1). But then how can (1) be informative when (2) is not? This is what is called the first paradox of analysis. Classical writings on analysis suggest a second paradox of analysis (Moore, 1942).
(3) An analysis of the concept of being a brother is that to be a brother is to be a male sibling.

If (3) is true, it would seem that the concept of being a brother would have to be the same concept as the concept of being a male sibling, and that:
(4) An analysis of the concept of being a brother is that to be a brother is to be a brother.

would also have to be true, and in fact would have to be the same proposition as (3). Yet (3) is true and (4) is false.
Both these paradoxes rest upon the assumptions that analysis is a relation between concepts, rather than one involving entities of other sorts, such as linguistic expressions, and that in a true analysis, analysans and analysandum are equivalent concepts. Both these assumptions are explicit in Moore, but some of Moore's remarks hint at a solution: that a statement of an analysis is a statement partly about the concepts involved and partly about the verbal expressions used to express them. He says he thinks a solution of this sort is bound to be right, but fails to suggest one because he cannot see a way in which the analysis can be even partly about the expressions (Moore, 1942).
One solution to the second paradox, along these lines, is to restate (3) as:

(5) An analysis is given by saying that the verbal expression 'χ is a brother' expresses the same concept as is expressed by the conjunction of the verbal expressions 'χ is male' when used to express the concept of being male and 'χ is a sibling' when used to express the concept of being a sibling (Ackerman, 1990).

An important point about (5) is as follows. Stripped of its philosophical jargon ('analysis', 'concept', 'χ is a . . .'), (5) seems to state the sort of information generally stated in a definition of the verbal expression 'brother' in terms of the verbal expressions 'male' and 'sibling', where this definition is designed to draw upon listeners' antecedent understanding of the verbal expressions 'male' and 'sibling', and thus to tell listeners what the verbal expression 'brother' really means, instead of merely providing the information that two verbal expressions are synonymous without specifying the meaning of either one. Thus, this solution to the second paradox seems to make the sort of analysis that gives rise to the paradox a matter of specifying the meaning of a verbal expression in terms of separate verbal expressions already understood, and saying how the meanings of these separate, already-understood verbal expressions are combined. This corresponds to Moore's intuitive requirement that an analysis should both specify the constituent concepts of the analysandum and tell how they are combined. But is this all there is to philosophical analysis?
To answer this question, we must note that, in addition to there being two paradoxes of analysis, there are two types of analysis that are relevant here. (There are also other types of analysis, such as reformatory analysis, where the analysans is intended to improve on and replace the analysandum. But since reformatory analysis involves no commitment to conceptual identity between analysans and analysandum, it does not generate a paradox of analysis and so will not concern us here.) One way to recognize the difference between the two types of analysis that concern us is to focus on the difference between the two paradoxes. This can be done by means of the Frege-inspired sense-individuation condition, which is the condition that two expressions have the same sense if and only if they can be interchanged 'salva veritate' whenever used in propositional attitude contexts. If the expressions for the analysans and the analysandum in (1) met this condition, (1) and (2) would not raise the first paradox; but the second paradox arises regardless of whether the expressions for the analysans and the analysandum meet this condition. The second paradox is a matter of the failure of such expressions to be interchangeable salva veritate in sentences involving such contexts as 'an analysis is given . . .'. Thus, a solution (such as the one offered above) that is aimed only at such contexts can solve the second paradox. This will not do for the first paradox, however, which applies to all pairs of propositions expressed by sentences in which expressions for pairs of analysantia and analysanda raising the first paradox are interchanged. For example, consider the following proposition:
(6) Mary knows that some cats lack tails.
It is possible for John to believe (6) without believing:
(7) Mary has justified true belief, not essentially grounded in any falsehood, that some cats lack tails.
Yet this possibility clearly does not mean that the proposition that Mary knows that some cats lack tails is partly about language.
One approach to the first paradox is to argue that, despite the apparent epistemic inequivalence of (1) and (2), the concept of justified true belief not essentially grounded in any falsehood is still identical with the concept of knowledge (Sosa, 1983). Another approach is to argue that in the sort of analysis raising the first paradox, the analysans and analysandum are concepts that are different but that bear a special epistemic relation to each other. One development of this approach suggests that the analysans-analysandum relation has the following facets:
(i) The analysans and analysandum are necessarily coextensive, i.e., necessarily every instance of one is an instance of the other.
(ii) The analysans and analysandum are knowable a priori to be coextensive.
(iii) The analysandum is simpler than the analysans, a condition whose necessity is recognized in classical writings on analysis (e.g., Langford, 1942).
(iv) The analysans does not have the analysandum as a constituent.
Condition (iv) rules out circularity. But since many valuable quasi-analyses are partly circular, e.g., knowledge is justified true belief supported by known reasons not essentially grounded in any falsehood, it seems best to distinguish between full analysis, for which (iv) is a necessary condition, and partial analysis, for which it is not.
These conditions, while necessary, are clearly insufficient. The basic problem is that they apply to too many pairs of concepts that do not seem closely enough related epistemologically to count as analysans and analysandum, such as the concept of being 6 and the concept of being the fourth root of 1296. Accordingly, a solution should turn upon what actually seems epistemologically distinctive about analyses of the sort under consideration, which is a certain way they can be justified. This is the philosophical example-and-counterexample method, which in general terms goes as follows. 'J' investigates the analysis of K's concept 'Q' (where 'K' can but need not be identical to 'J') by setting 'K' a series of armchair thought experiments, i.e., presenting 'K' with a series of simple described hypothetical test cases and asking 'K' questions of the form 'If such-and-such were the case, would this count as a case of Q?' 'J' then contrasts the descriptions of the cases to which 'K' answers affirmatively with the descriptions of the cases to which 'K' does not, and 'J' generalizes upon these descriptions to arrive at the concepts (if possible not including the analysandum) and their mode of combination that constitute the analysans of K's concept 'Q'. Since 'J' need not be identical with 'K', there is no requirement that 'K' himself be able to perform this generalization, to recognize its result as correct, or even to understand the analysans that is the result. This is reminiscent of Walton's observation that one can simply recognize a bird as a swallow without realizing just what features of the bird (beak, wing configuration, and so forth) form the basis of this recognition; the philosophical significance of this way of recognizing is discussed in Walton, 1972. 'K' answers the questions based solely on whether the described hypothetical cases strike him as cases of 'Q'. 'J' observes certain strictures in formulating the cases and questions.
He makes the cases as simple as possible, to minimize the possibility of confusion and to minimize the likelihood that 'K' will draw upon his philosophical theories (or quasi-philosophical ones, rudimentary notions if he is unsophisticated philosophically) in answering the questions. If different cases yield conflicting results, the conflict should, other things being equal, be resolved in favour of the simpler case. 'J' makes the series of described cases wide-ranging and varied, with the aim of having it be a complete series, where a series is complete if and only if no case that is omitted is such that, if included, it would change the analysis arrived at. 'J' does not, of course, use as a test-case description anything complicated and general enough to express the analysans. There is no requirement that the described hypothetical test cases be formulated only in terms of what can be observed. Moreover, using described hypothetical situations as test cases enables 'J' to frame the questions in such a way as to rule out extraneous background assumptions to a degree. Thus, even if 'K' correctly believes that all and only P's are R's, the question of whether the concepts of P, R, or both enter the analysans of his concept 'Q' can be investigated by asking him such questions as 'Suppose (even if it seems preposterous to you) that you were to find out that there was a P that was not an R. Would you still consider it a case of Q?'
Taking all this into account, the fifth necessary condition for this sort of analysans-analysandum relation is as follows: if 'S' is the analysans of 'Q', the proposition that necessarily all and only instances of 'S' are instances of 'Q' can be justified by generalizing from intuitions about the correct answers to questions of the sort indicated, about a varied and wide-ranging series of simple described hypothetical situations.

An antinomy occurs when we are able to argue for, or demonstrate, both a proposition and its contradictory. Roughly speaking, a contradictory of a proposition 'p' is one that can be expressed in the form 'not-p', or, if 'p' can be expressed in the form 'not-q', then a contradictory is one that can be expressed in the form 'q'. Thus, e.g., if 'p' is 2 + 1 = 4, then 2 + 1 ≠ 4 is a contradictory of 'p', for 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). If 'p' is 2 + 1 ≠ 4, then 2 + 1 = 4 is a contradictory of 'p', since 2 + 1 ≠ 4 can be expressed in the form not (2 + 1 = 4). Mutually contradictory propositions, then, can be expressed in the forms 'r' and 'not-r'. The principle of contradiction says that mutually contradictory propositions cannot both be true and cannot both be false. Thus, by this principle, since if 'p' is true, 'not-p' is false, no proposition 'p' can be at once true and false (otherwise both 'p' and its contradictory would be true, and both would be false). In particular, for any predicate 'p' and object 'χ', it cannot be that 'p' is at once true of 'χ' and false of 'χ'. This is the classical formulation of the principle of contradiction. In an antinomy, nonetheless, we cannot at present fault either demonstration; we would hope eventually 'to solve the antinomy' by managing, through careful thinking and analysis, to fault one or both demonstrations.
Paradoxes are an easy source of antinomies. Zeno, for example, gave some famous logical-cum-mathematical arguments that might be interpreted as demonstrating that motion is impossible. But our eyes, as it were, demonstrate motion (exhibit moving things) all the time. Where did Zeno go wrong? Where do our eyes go wrong? If we cannot readily answer at least one of these questions, then we are in an antinomy. In the ‘Critique of Pure Reason’, Kant gave demonstrations of this kind -though obviously not of the same sort as Zeno’s -of both members of certain contradictory pairs, e.g., that the world has a beginning in time and space, and that the world has no beginning in time or space. He argued that both demonstrations are at fault because they proceed on the basis of ‘pure reason’ unconditioned by sense experience.
At this point we turn to the theory of experience. Experience is not possible to define in an illuminating way; however, we know what experiences are through acquaintance with some of our own, e.g., a visual experience of an after-image, a feeling of physical nausea, or a tactile experience of an abrasive surface (which might be caused by an actual surface -rough or smooth -or might be part of a dream, or the product of a vivid sensory imagination). The essential feature of experience is that it feels a certain way -that there is something that it is like to have it. We may refer to this feature of an experience as its ‘character’.
Another core feature of the sorts of experiences with which we are concerned is that they have representational ‘content’. (Unless otherwise indicated, ‘experience’ will be reserved for experiences with such content.) The most obvious cases of experiences with content are sense experiences of the kind normally involved in perception. We may describe such experiences by mentioning their sensory modalities and their contents, e.g., a gustatory experience (modality) of chocolate ice cream (content), but we do so more commonly by means of perceptual verbs combined with noun phrases specifying their contents, as in ‘Macbeth saw a dagger’. This is, however, ambiguous between the perceptual claim ‘There was a (material) dagger in the world that Macbeth perceived visually’ and ‘Macbeth had a visual experience of a dagger’ (the reading with which we are concerned, since it can be true even if the dagger was a figment of imagination or hallucination).
As in the case of other mental states and events with content, it is important to distinguish between the properties that an experience ‘represents’ and the properties that it ‘possesses’. To talk of the representational properties of an experience is to say something about its content, not to attribute those properties to the experience itself. Like every other experience, a visual experience of a pink square is a mental event, and it is therefore not itself either pink or square, even though it represents those properties. It is, perhaps, fleeting, pleasant or unusual, even though it does not represent those properties. An experience may represent a property that it possesses, e.g., a rapidly changing (complex) experience may represent something as changing rapidly. However, this is the exception and not the rule.
Which properties can be [directly] represented in sense experience is subject to debate. Traditionalists include only properties whose presence could not be doubted by a subject having appropriate experiences, e.g., colour and shape in the case of visual experience, and apparent shape, surface texture, hardness, etc., in the case of tactile experience. This view is natural to anyone who has an egocentric, Cartesian perspective in epistemology, and who wishes the pure data of experience to serve as logically certain foundations for knowledge -in particular, the immediate objects of perceptual awareness, sense-data such as colour patches and shapes, which are usually supposed distinct from the surfaces of physical objects. The qualities of sense-data are supposed to be distinct from physical qualities because their perception is more relative to conditions, more certain, and more immediate, and because sense-data are private and cannot appear other than they are. They are objects that change in our perceptual field when conditions of perception change, whereas physical objects remain constant.
Others do not think that this wish can be satisfied, and are more impressed with the role of experience in providing organisms with ecologically significant information about the world around them. They claim that sense experiences represent properties, characteristics and kinds that are much richer and much more wide-ranging than the traditional sensory qualities. We do not see only colours and shapes, they tell ‘us’, but also earth, water, men, women and fire; we do not smell only odours, but also food and filth. There is no space here to examine the factors relevant to a choice between these alternatives. Yet it is arguable that character and content are not entirely independent: there is a close tie between them. For one thing, the relative complexity of the character of a sense experience places limitations upon its possible content, e.g., a tactile experience of something touching one’s left ear is just too simple to carry the same amount of content as a typical everyday visual experience. Moreover, the content of a sense experience of a given character depends on the normal causes of appropriately similar experiences, e.g., the sort of gustatory experience that we have when eating chocolate would not represent chocolate unless it were normally caused by chocolate. Granting the contingency of the ties between the character of an experience and its possible causal origins, it again follows that its possible content is limited by its character.
Character and content are none the less irreducibly different, for the following reasons. (1) There are experiences that completely lack content, e.g., certain bodily pleasures. (2) Not every aspect of the character of an experience with content is relevant to that content, e.g., the unpleasantness of an aural experience of chalk squeaking on a board may have no representational significance. (3) Experiences in different modalities may overlap in content without a parallel overlap in character, e.g., visual and tactile experiences of circularity feel completely different. (4) The content of an experience with a given character may vary according to the background of the subject, e.g., a certain aural experience may come to have the content ‘singing bird’ only after the subject has learned something about birds.
According to the act/object analysis of experience (which is a special case of the act/object analysis of consciousness), every experience involves an object of experience even if it has no material object. Two main lines of argument may be offered in support of this view, one ‘phenomenological’ and the other ‘semantic’.
In outline, the phenomenological argument is as follows. Whenever we have an experience, even if nothing beyond the experience answers to it, we seem to be presented with something through the experience (which is itself diaphanous). The object of the experience is whatever is so presented to ‘us’, be it an individual thing, an event, or a state of affairs.
The semantic argument is that objects of experience are required in order to make sense of certain features of our talk about experience, including, in particular, the following. (i) Simple attributions of experience, e.g., ‘Rod is experiencing a pink square’, seem to be relational. (ii) We appear to refer to objects of experience and to attribute properties to them, e.g., ‘The after-image that John experienced was certainly odd’. (iii) We appear to quantify over objects of experience, e.g., ‘Macbeth saw something that his wife did not see’.
The act/object analysis faces several problems concerning the status of objects of experience. Currently the most common view is that they are sense-data -private mental entities that actually possess the traditional sensory qualities represented by the experiences of which they are the objects. But the very idea of an essentially private entity is suspect. Moreover, since an experience may apparently represent something as having a determinable property, e.g., redness, without representing it as having any subordinate determinate property, e.g., any specific shade of red, a sense-datum may have a determinable property without having any determinate property subordinate to it. Even more disturbing is that sense-data may have contradictory properties, since experiences can have contradictory contents. A case in point is the waterfall illusion: if you stare at a waterfall for a minute and then immediately fixate on a nearby rock, you are likely to have an experience of the rock’s moving upward while it remains in the same place. The sense-datum theorist must either deny that there are such experiences or admit contradictory objects.
These problems can be avoided by treating objects of experience as properties. This, however, fails to do justice to the appearances, for experience seems to present ‘us’ not with bare properties but with properties embodied in individuals. The view that objects of experience are Meinongian objects accommodates this point. It is also attractive in so far as (1) it allows experiences to represent properties other than traditional sensory qualities, and (2) it allows for the identification of objects of experience and objects of perception in the case of experiences that constitute perception.
According to the act/object analysis of experience, every experience with content involves an object of experience to which the subject is related by an act of awareness (the event of experiencing that object). This is meant to apply not only to perceptions, which have material objects (whatever is perceived), but also to experiences like hallucinations and dream experiences, which do not. Such experiences none the less appear to represent something, and their objects are supposed to be whatever it is that they represent. Act/object theorists may differ on the nature of objects of experience, which have been treated as properties, as Meinongian objects (which may not exist or have any form of being), and, more commonly, as private mental entities with sensory qualities. (The term ‘sense-data’ is now usually applied to the latter, but has also been used as a general term for objects of sense experiences, as in the work of G. E. Moore.) Act/object theorists may also differ on the relationship between objects of experience and objects of perception. Sense-datum theorists must hold that objects of perception (of which we are ‘indirectly aware’) are always distinct from objects of experience (of which we are ‘directly aware’); Meinongians, however, may treat objects of perception as existing objects of experience. A sense-datum theorist must either deny that there are experiences with contradictory contents or admit contradictory objects, and most philosophers will feel that the Meinongian’s acceptance of impossible objects is too high a price to pay for these benefits.
A general problem for the act/object analysis is that the question of whether two subjects are experiencing one and the same thing (as opposed to having exactly similar experiences) appears to have an answer only on the assumption that the experiences concerned are perceptions with material objects. But on the act/object analysis the question must have an answer even when this condition is not satisfied. (The answer is always negative on the sense-datum theory; it could be positive on other versions of the act/object analysis, depending on the facts of the case.)
In view of the above problems, the case for the act/object analysis should be reassessed. The phenomenological argument is not, on reflection, convincing, for it is easy enough to grant that any experience appears to present ‘us’ with an object without accepting that it actually does. The semantic argument is more impressive, but is none the less answerable. The seemingly relational structure of attributions of experience is a challenge dealt with below in connection with the adverbial theory. Apparent reference to and quantification over objects of experience can be handled by analysing them as reference to experiences themselves and quantification over experiences tacitly typed according to content. Thus, ‘The after-image that John experienced was colourful’ becomes ‘John’s after-image experience was an experience of colour’, and ‘Macbeth saw something that his wife did not see’ becomes ‘Macbeth had a visual experience that his wife did not have’.
Pure cognitivism attempts to avoid the problems facing the act/object analysis by reducing experiences to cognitive events or associated dispositions, e.g., Susy’s experience of a rough surface beneath her hand might be identified with the event of her acquiring the belief that there is a rough surface beneath her hand, or, if she does not acquire this belief, with a disposition to acquire it that has somehow been blocked.
This position has attractions. It does full justice to the cognitive contents of experience, and to the important role of experience as a source of belief acquisition. It would also help clear the way for a naturalistic theory of mind, since there seems to be some prospect of a physicalist/functionalist account of belief and other intentional states. But pure cognitivism is completely undermined by its failure to accommodate the fact, noted above, that experiences have a felt character that cannot be reduced to their content.
The adverbial theory is an attempt to undermine the act/object analysis by suggesting a semantic account of attributions of experience that does not require objects of experience. Unfortunately, the oddities of explicit adverbializations of such statements have driven off potential supporters of the theory. Furthermore, the theory remains largely undeveloped, and attempted refutations have traded on this. It may, however, be founded on sound intuitions, and there is reason to believe that an effective development of the theory (which can only be hinted at here) is possible.
The relevant intuitions are (1) that when we say that someone is experiencing ‘an A’, or has an experience ‘of an A’, we are using this content-expression to specify the type of thing that the experience is especially apt to fit, (2) that doing this is a matter of saying something about the experience itself (and perhaps about the normal causes of like experiences), and (3) that there is no good reason to suppose that it involves describing an object of which the experience is an experience. Thus the effective role of the content-expression in a statement of experience is to modify the verb it complements, not to introduce a special type of object.
Perhaps the most important criticism of the adverbial theory is the ‘many-property problem’, according to which the theory does not have the resources to distinguish between, e.g.,
(1) Frank has an experience of a brown triangle
and:
(2) Frank has an experience of brown and an experience of a triangle.
The second is entailed by (1) but does not entail it. The act/object analysis can easily accommodate the difference between (1) and (2) by claiming that the truth of (1) requires a single object of experience that is both brown and triangular, while that of (2) allows for the possibility of two objects of experience, one brown and the other triangular. However, (1) is equivalent to:
(1*) Frank has an experience of something’s being both brown and triangular.
And (2) is equivalent to:
(2*) Frank has an experience of something’s being brown and an experience of something’s being triangular, and the difference between these can be explained quite simply in terms of logical scope without invoking objects of experience. The adverbialist may use this to answer the many-property problem by arguing that the phrase ‘a brown triangle’ in (1) does the same work as the clause ‘something’s being both brown and triangular’ in (1*). This is perfectly compatible with the view that it also has the ‘adverbial’ function of modifying the verb ‘has an experience of’, for it specifies the experience more narrowly just by giving a necessary condition for the satisfaction of the experience (the condition being that there be something both brown and triangular before Frank).
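The scope point can be made explicit in first-order notation. In this sketch ‘B’ and ‘T’ are predicates for being brown and being triangular, with the quantifiers understood as restricted to what is before Frank:

```latex
% (1*): one and the same thing is both brown and triangular:
\exists x\,(Bx \land Tx)
% (2*): something is brown and something (possibly distinct) is triangular:
\exists x\,Bx \;\land\; \exists y\,Ty
% The first entails the second but not conversely; the difference lies
% only in the scope of the existential quantifiers.
```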
A final position that should be mentioned is the state theory, according to which a sense experience of an ‘A’ is an occurrent, non-relational state of the kind that the subject would be in when perceiving an ‘A’. Suitably qualified, this claim is no doubt true, but its significance is subject to debate. Here it is enough to remark that the claim is compatible with both pure cognitivism and the adverbial theory, and that state theorists are probably best advised to adopt adverbialism as a means of developing their intuitions.
Sense-data, taken literally, are that which is given by the senses. But in response to the question of what exactly is so given, sense-data theories posit private showings in the consciousness of the subject. In the case of vision this would be a kind of inner picture show which only indirectly represents aspects of the external world. The view has been widely rejected as implying that we really only see extremely thin coloured pictures interposed between our mind’s eye and reality. Modern approaches to perception tend to reject any conception of the eye as a camera or lens simply responsible for producing private images, and stress the active life of the subject in and of the world as the determinant of experience.
Nevertheless, the argument from illusion is usually intended to establish that certain familiar facts about illusion disprove the theory of perception called naïve or direct realism. There are, however, many different versions of the argument that must be distinguished carefully. Some of these distinctions centre on the content of the premises (the nature of the appeal to illusion); others centre on the interpretation of the conclusion (the kind of direct realism under attack). Let ‘us’ begin by distinguishing the importantly different versions of direct realism which one might take to be vulnerable to familiar facts about the possibility of perceptual illusion.
A crude statement of direct realism might go as follows. In perception, we sometimes directly perceive physical objects and their properties; we do not always perceive physical objects by perceiving something ‘else’, e.g., a sense-datum. There are, however, difficulties with this formulation of the view. For one thing, a great many philosophers who are ‘not’ direct realists would admit that it is a mistake to describe people as actually ‘perceiving’ something other than a physical object. In particular, such philosophers might admit, we should never say that we perceive sense-data. To talk that way would be to suppose that we should model our understanding of our relationship to sense-data on our understanding of the ordinary use of perceptual verbs as they describe our relation to the physical world, and that is the last thing paradigm sense-datum theorists would want. Many of the philosophers who object to direct realism would prefer to express what they are objecting to in terms of a technical (and philosophically controversial) concept such as ‘acquaintance’. Using such a notion, we could define direct realism this way: in ‘veridical’ experience we are directly acquainted with parts, e.g., surfaces, or constituents of physical objects. A less cautious version of the view might drop the reference to veridical experience and claim simply that in all experience we are directly acquainted with parts or constituents of physical objects.
The expressions ‘knowledge by acquaintance’ and ‘knowledge by description’, and the distinction they mark between knowing ‘things’ and knowing ‘about’ things, are generally associated with Bertrand Russell (1872-1970), who held that scientific philosophy required analysing many objects of belief as ‘logical constructions’ or ‘logical fictions’. The programme of analysis that this inaugurated dominated the subsequent philosophy of logical atomism, and then that of other philosophers. In Russell’s “The Analysis of Mind” the mind itself is treated, in a fashion reminiscent of Hume, as no more than a collection of neutral perceptions or sense-data that make up the flux of conscious experience and that, looked at another way, also make up the external world (neutral monism); “An Inquiry into Meaning and Truth” (1940) represents a more empirical approach to the problem. Philosophers have perennially investigated this and related distinctions using varying terminology.
The distinction in our ways of knowing things was highlighted by Russell and forms a central element in his philosophy after the discovery of the theory of ‘definite descriptions’. A thing is known by acquaintance when there is direct experience of it. It is known by description if it can only be described as a thing with such-and-such properties. In everyday parlance, I might know my spouse and children by acquaintance, but know someone as ‘the first person born at sea’ only by description. However, for a variety of reasons Russell shrinks the area of things that can be known by acquaintance until eventually only current experience, perhaps my own self, and certain universals or meanings qualify; anything else is known only as the thing that has such-and-such qualities.
Because one can interpret the relation of acquaintance or awareness as one that is not ‘epistemic’, i.e., not a kind of propositional knowledge, it is important to distinguish the aforementioned views, read as ontological theses, from a view one might call ‘epistemological direct realism’: in perception we are, on at least some occasions, non-inferentially justified in believing a proposition asserting the existence of a physical object. The realism here lies in holding that these objects exist independently of any mind that might perceive them, thereby ruling out all forms of idealism and phenomenalism, which hold that there are no such independently existing objects. Its being ‘direct’ realism rules out those views defended under the rubric of ‘critical realism’ or ‘representational realism’, in which there is some nonphysical intermediary -usually called a ‘sense-datum’ or a ‘sense impression’ -that must first be perceived or experienced in order to perceive the object that exists independently of this perception. Often the distinction between direct realism and other theories of perception is explained more fully in terms of what is ‘immediately’, rather than ‘mediately’, perceived. What relevance does illusion have for these two forms of direct realism?
The fundamental premise of the argument from illusion is the thesis that things can appear to be other than they are. Thus, for example, a straight stick immersed in water looks bent, a penny viewed from certain perspectives appears elliptical, and something yellow placed under red fluorescent light looks red. In all of these cases, one version of the argument goes, it is implausible to maintain that what we are directly acquainted with is the real nature of the object in question. Indeed, it is hard to see how we can be said to be aware of the real physical object at all. In the above illusions the things we were aware of actually were bent, elliptical and red, respectively. But, by hypothesis, the real physical objects lacked these properties. Thus, we were not aware of the physical objects themselves.
So far, if the argument is relevant to any of the direct realisms distinguished above, it seems relevant only to the claim that in all sense experience we are directly acquainted with parts or constituents of physical objects. After all, even if in illusion we are not acquainted with physical objects, their surfaces, or their constituents, why should we conclude anything about the nature of our relations to the physical world in veridical experience?
We are supposed to discover the answer to this question by noticing the similarities between illusory experience and veridical experience and by reflecting on what makes illusion possible at all. Illusion can occur because the nature of the illusory experience is determined not just by the nature of the object perceived, but also by other conditions, both external and internal. But all of our sensations are subject to these causal influences, and it would be gratuitous and arbitrary to select, from the indefinitely many and subtly different perceptual experiences, some special ones as those that get ‘us’ in touch with the ‘real’ nature of the physical world. Red fluorescent light affects the way things look, but so does sunlight. Water refracts light, but so does air. We have no unmediated access to the external world.
The philosophy of science and scientific epistemology are not the only areas where philosophers have lately urged the relevance of neuroscientific discoveries. Kathleen Akins argues that a "traditional" view of the senses underlies the variety of sophisticated "naturalistic" programs about intentionality. Current neuroscientific understanding of the mechanisms and coding strategies implemented by sensory receptors shows that this traditional view is mistaken. The traditional view holds that sensory systems are "veridical" in at least three ways. (1) Each signal in the system correlates with a small range of properties in the external (to the body) environment. (2) The structure of the relevant relations between the external properties to which the receptors are sensitive is preserved in the structure of the relations between the resulting sensory states. (3) The sensory system reconstructs, without fictive additions or embellishments, the external events. Using recent neurobiological discoveries about the response properties of thermal receptors in the skin as an illustration, Akins shows that sensory systems are "narcissistic" rather than "veridical": all three traditional assumptions are violated. These neurobiological details and their philosophical implications open novel questions for the philosophy of perception and for the appropriate foundations for naturalistic projects about intentionality. Armed with the known neurophysiology of sensory receptors, for example, our "philosophy of perception" or of "perceptual intentionality" will no longer focus on the search for correlations between states of sensory systems and "veridically detected" external properties. This traditional philosophical (and scientific) project rests upon a mistaken "veridical" view of the senses.
Neuroscientific knowledge of sensory receptor activity also shows that sensory experience does not serve the naturalist well as a "simple paradigm case" of an intentional relation between representation and world. Once again, available scientific detail shows the naivety of some traditional philosophical projects.
Focusing on the anatomy and physiology of the pain transmission system, Valerie Hardcastle (1997) urges a similar negative implication for a popular methodological assumption. Pain experiences have long been philosophers' favorite cases for analysis and theorizing about conscious experience generally. Nevertheless, every position about pain experiences has been defended recently: eliminativist views, a variety of objectivist views, relational views, and subjectivist views. Why so little agreement, despite agreement that pain experience is the place to start an analysis or theory of consciousness? Hardcastle urges two answers. First, philosophers tend to be uninformed about the neuronal complexity of our pain transmission systems, and build their analyses or theories on a single component of a multiple-component system. Second, even those who understand some of the underlying neurobiology of pain tend to advocate gate-control theories. But the best existing gate-control theories are vague about the neural mechanisms of the gates. Hardcastle instead proposes a dissociable dual system of pain transmission, consisting of a pain sensory system closely analogous in its neurobiological implementation to other sensory systems, and a descending pain inhibitory system. She argues that this dual system is consistent with recent neuroscientific discoveries and accounts for all the pain phenomena that have tempted philosophers toward particular (but limited) theories of pain experience. The neurobiological uniqueness of the pain inhibitory system, contrasted with the mechanisms of other sensory modalities, renders pain processing atypical. In particular, the pain inhibitory system dissociates pain sensation from stimulation of nociceptors (pain receptors).
Hardcastle concludes from the neurobiological uniqueness of pain transmission that pain experiences are atypical conscious events, and hence not a good place to start theorizing about or analyzing the general type.
Developing and defending theories of content is a central topic in current philosophy of mind. A common desideratum in this debate is a theory of cognitive representation consistent with a physical or naturalistic ontology. Several neurophilosophers have contributed to this literature.
When one perceives or remembers that he is out of coffee, his brain state possesses intentionality or "aboutness." The percept or memory is about one's being out of coffee, and it represents one as being out of coffee. The representational state has content. A psychosemantics seeks to explain what it is for a representational state to be about something: to provide an account of how states and events can have specific representational content. A physicalist psychosemantics seeks to do this using resources of the physical sciences exclusively. Neurophilosophers have contributed to two types of physicalist psychosemantics: the Functional Role approach and the Informational approach.
Functional role semantics holds that a representation has its content in virtue of relations it bears to other representations. Its paradigm application is to the concepts of truth-functional logic, like the conjunctive ‘and’ or disjunctive ‘or’: a physical event instantiates the ‘and’ function just in case it maps two true inputs onto a single true output. Thus an expression bears the relations to others that give it the semantic content of ‘and’. Proponents of functional role semantics propose similar analyses for the content of all representations (Block 1986). A physical event represents birds, for example, if it bears the right relations to events representing feathers and others representing beaks. By contrast, informational semantics ascribes content to a state according to the causal relations obtaining between the state and the object it represents. A physical state represents birds, for example, just in case an appropriate causal relation obtains between it and birds. At the heart of informational semantics is a causal account of information. Red spots on a face carry the information that one has measles because the red spots are caused by the measles virus. A common criticism of informational semantics holds that mere causal covariation is insufficient for representation, since information (in the causal sense) is, by definition, always veridical, while representations can misrepresent. A popular solution to this challenge invokes a teleological analysis of ‘function’: a brain state represents χ by virtue of having the function of carrying information about χ (Dretske 1988). These two approaches do not exhaust the popular options for a psychosemantics, but they are the ones to which neurophilosophers have contributed.
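The truth-functional paradigm can be sketched in a few lines of Python. This is only an illustrative model, not anything from the neurophilosophical literature: a two-input physical event, modelled as an input-output mapping, instantiates the ‘and’ function just in case it maps two true inputs to a true output and every other input pair to a false output. The gate names below are hypothetical.

```python
from itertools import product

def instantiates_and(event):
    """True iff the two-input event realizes the 'and' truth-function:
    a both-true input pair maps to true, every other pair to false."""
    return all(event(a, b) == (a and b)
               for a, b in product([True, False], repeat=2))

# Hypothetical physical events modelled as input-output mappings:
def coincidence_gate(a, b):   # fires only when both inputs are active
    return a and b

def threshold_gate(a, b):     # fires when either input is active ('or')
    return a or b

print(instantiates_and(coincidence_gate))  # True
print(instantiates_and(threshold_gate))    # False
```

On a functional role view, it is this pattern of input-output relations, not any intrinsic property of the event, that confers the content ‘and’.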
Jerry Fodor and Ernest LePore raise an important challenge to Churchland's psychosemantics. Location in a state space alone seems insufficient to fix a state's representational content. Churchland never explains why a point in a three-dimensional state space represents a colour, as opposed to any other quality, object, or event that varies along three dimensions. Churchland's account achieves its explanatory power through the interpretation imposed on the dimensions. Fodor and LePore allege that Churchland never specifies how a dimension comes to represent, e.g., degree of saltiness, as opposed to yellow-blue wavelength opposition. One obvious answer appeals to the stimuli that form the 'external' inputs to the neural network in question. Then, for example, the individuating conditions on neural representations of colours are that opponent-processing neurons receive input from a specific class of photoreceptors. The latter in turn have electromagnetic radiation (of a specific portion of the visible spectrum) as their activating stimuli. However, this appeal to 'external' stimuli as the ultimate individuating conditions for representational content makes the resulting approach a version of informational semantics. Is this approach consonant with other neurobiological details?
The neurobiological paradigm for informational semantics is the feature detector: one or more neurons that (i) are maximally responsive to a particular type of stimulus, and (ii) have the function of indicating the presence of that stimulus type. Examples of such stimulus types for visual feature detectors include high-contrast edges, motion direction, and colours. A favorite feature detector among philosophers is the alleged fly detector in the frog. Lettvin et al. (1959) identified cells in the frog retina that responded maximally to small shapes moving across the visual field. The idea that these cells' activity functioned to detect flies rested upon knowledge of the frog's diet. Using experimental techniques ranging from single-cell recording to sophisticated functional imaging, neuroscientists have recently discovered a host of neurons that are maximally responsive to a variety of stimuli. However, establishing condition (ii) on a feature detector is much more difficult. Even some paradigm examples have been called into question. David Hubel and Torsten Wiesel's (1962) Nobel Prize-winning studies of the receptive fields of neurons in striate cortex are often interpreted as revealing cells whose function is edge detection. However, Lehky and Sejnowski (1988) have challenged this interpretation. They trained an artificial neural network to distinguish the three-dimensional shape and orientation of an object from its two-dimensional shading pattern. Their network incorporates many features of visual neurophysiology. Nodes in the trained network turned out to be maximally responsive to edge contrasts, but did not appear to have the function of edge detection.
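The asymmetry between the two conditions is worth making explicit. Condition (i), maximal responsiveness, can be operationalized directly from response data; condition (ii), having the indicating function, cannot. A toy sketch (the Gaussian tuning curve and its parameters are illustrative assumptions, not measured physiology):

```python
import math

def gaussian_tuning(preferred: float, width: float = 15.0):
    """Return a toy firing-rate function peaked at a preferred stimulus
    value (e.g., an edge orientation in degrees). Illustrative only."""
    return lambda s: math.exp(-((s - preferred) ** 2) / (2 * width ** 2))

def maximally_responsive_stimulus(rate, stimuli):
    # Condition (i) on a feature detector: the stimulus type that
    # elicits the cell's peak response.
    return max(stimuli, key=rate)
```

Nothing in this computation settles condition (ii): that the cell's *function* is to detect its preferred stimulus is a further, teleological claim, which is precisely where the Lehky and Sejnowski result bites.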
Kathleen Akins (1996) offers a different neurophilosophical challenge to informational semantics and its affiliated feature-detection view of sensory representation. We saw in the previous section how Akins argues that the physiology of thermoreceptors violates three necessary conditions on 'veridical' representation. From this fact she draws doubts about looking for feature-detecting neurons to ground a psychosemantics generally, including thought contents. Human thoughts about flies, for example, are sensitive to numerical distinctions between particular flies and the particular locations they can occupy. But the ends of frog nutrition are well served without a representational system sensitive to such ontological refinements. Whether a fly seen now is numerically identical to one seen a moment ago need not, and perhaps cannot, figure into the frog's feature-detection repertoire. Akins' critique casts doubt on whether details of sensory transduction will scale up to an adequately unified psychosemantics. It also raises new questions for human intentionality. How do we get from activity patterns in "narcissistic" sensory receptors, keyed not to "objective" environmental features but only to the effects of stimuli on the patch of tissue innervated, to the human ontology replete with enduring objects bearing stable configurations of properties and relations, types and their tokens (as the "fly-thought" example above reveals), and the rest? And how did the development of a stable and rich ontology confer survival advantages on our human ancestors?
Consciousness has reemerged as a topic in philosophy of mind and the cognitive and brain sciences over the past three decades. Instead of ignoring it, many physicalists now seek to explain it (Dennett 1991). Here we focus exclusively on ways that neuroscientific discoveries have impacted philosophical debates about the nature of consciousness and its relation to physical mechanisms. Thomas Nagel argues that conscious experience is subjective, and thus permanently recalcitrant to objective scientific understanding. He invites us to ponder 'what it is like to be a bat' and urges the intuition that no amount of physical-scientific knowledge (including neuroscientific knowledge) supplies a complete answer. Nagel's intuition pump has generated extensive philosophical discussion. At least two well-known replies make direct appeal to neurophysiology. John Biro suggests that part of the intuition pumped by Nagel, that bat experience is substantially different from human experience, presupposes systematic relations between physiology and phenomenology. Kathleen Akins (1993) delves deeper into existing knowledge of bat physiology and reports much that is pertinent to Nagel's question. She argues that many of the questions about bat subjectivity that we still consider open hinge on questions that remain unanswered about neuroscientific details. One example of the latter is the function of various cortical activity profiles in the active bat.
More recently, David Chalmers (1996) has argued that any possible brain-process account of consciousness will leave open an 'explanatory gap' between the brain process and the properties of conscious experience. This is because no brain-process theory can answer the "hard" question: why should that particular brain process give rise to conscious experience? We can always imagine ("conceive of") a universe populated by creatures having those brain processes but completely lacking conscious experience. A theory of consciousness requires an explanation of how and why some brain process causes consciousness replete with all the features we commonly experience. That this question remains unanswered shows, Chalmers contends, that we will probably never get a complete explanation of consciousness at the level of neural mechanisms. Paul and Patricia Churchland have recently offered the following diagnosis and reply. Chalmers offers a conceptual argument, based on our ability to imagine creatures possessing brains like ours but wholly lacking in conscious experience. But the more one learns about how the brain produces conscious experience (and a literature is beginning to emerge: e.g., Gazzaniga 1995), the harder it becomes to imagine a universe consisting of creatures with brain processes like ours but lacking consciousness. This is not a bare assertion. The Churchlands appeal to neurobiological detail. For example, Paul Churchland (1995) develops a neuroscientific account of consciousness based on recurrent connections between thalamic nuclei (particularly "diffusely projecting" nuclei like the intralaminar nuclei) and the cortex. Churchland argues that this thalamocortical recurrency accounts for the selective features of consciousness, for the effects of short-term memory on conscious experience, for vivid dreaming during REM (rapid-eye-movement) sleep, and for other "core" features of conscious experience.
In other words, the Churchlands are claiming that when one learns about activity patterns in these recurrent circuits, one can't "imagine" or "conceive of" this activity occurring without these core features of conscious experience. (Other than just mouthing the words, "I am now imagining activity in these circuits without selective attention/the effects of short-term memory/vivid dreaming . . . ")
A second focus of sceptical arguments about a complete neuroscientific explanation of consciousness is sensory qualia: the introspectable qualitative aspects of sensory experience, the features by which subjects discern similarities and differences among their experiences. The colours of visual sensations are a philosopher's favorite example. One famous puzzle about colour qualia is the alleged conceivability of spectral inversions. Many philosophers claim that it is conceptually possible (if perhaps physically impossible) for two humans to be neurophysiologically alike while the colour that fire engines and tomatoes appear to have to one subject is the colour that grass and frogs appear to have to the other (and vice versa). A large amount of neuroscientifically informed philosophy has addressed this question. A related area where neurophilosophical considerations have emerged concerns the metaphysics of colours themselves (rather than colour experiences). A longstanding philosophical dispute is whether colours are objective properties existing external to perceivers or are rather identifiable with, or dependent upon, minds or nervous systems. Some recent work on this problem begins with characteristics of colour experiences: for example, that colour similarity judgments produce colour orderings that align on a circle. With this resource, one can seek mappings of phenomenology onto environmental or physiological regularities. Identifying colours with particular frequencies of electromagnetic radiation does not preserve the structure of the hue circle, whereas identifying colours with activity in opponent-processing neurons does.
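The point about the hue circle can be illustrated with a toy two-channel opponent code. The mapping of a hue angle onto red-green and blue-yellow channel activities below is a deliberate simplification for illustration, not a physiological model:

```python
import math

def opponent_coords(hue_deg: float):
    # Represent a hue as activity in two opponent channels,
    # red-green (x) and blue-yellow (y): a simplified sketch.
    r = math.radians(hue_deg)
    return (math.cos(r), math.sin(r))

def opponent_distance(h1: float, h2: float) -> float:
    # Similarity between hues as distance in the opponent space.
    (x1, y1), (x2, y2) = opponent_coords(h1), opponent_coords(h2)
    return math.hypot(x1 - x2, y1 - y2)
```

On a linear wavelength-like scale, hues at 350 and 10 degrees sit at opposite ends; in the opponent representation their distance is small while that between 350 and 180 degrees is large, preserving the closure of the hue circle that an identification with electromagnetic frequency fails to capture.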
Such a tidbit is not decisive for the colour objectivist-subjectivist debate, but it does convey the type of neurophilosophical work being done on traditional metaphysical issues beyond the philosophy of mind.
We saw in the discussion of Hardcastle (1997) two sections above that neurophilosophers have entered disputes about the nature and methodological import of pain experiences. Two decades earlier, Dan Dennett (1978) took up the question of whether it is possible to build a computer that feels pain. Noting tensions between neurophysiological discoveries and common-sense intuitions about pain experience, he too suspects that the incommensurability between the scientific and common-sense views is due to incoherence in the latter. His attitude is wait-and-see. But foreshadowing Churchland's reply to Chalmers, Dennett favours scientific investigation over conceivability-based philosophical arguments.
Neurological deficits have attracted philosophical interest. For thirty years philosophers have found implications for the unity of the self in experiments with commissurotomy patients. In carefully controlled experiments, commissurotomy patients display two dissociable seats of consciousness. Patricia Churchland scouts the philosophical implications of a variety of neurological deficits. One deficit is blindsight. Some patients with lesions to primary visual cortex report being unable to see items in regions of their visual fields, yet perform far better than chance in forced-guess trials about stimuli in those regions. A variety of scientific and philosophical interpretations have been offered. Ned Block (1988) worries that many of these conflate distinct notions of consciousness. He labels these notions 'phenomenal consciousness' ('P-consciousness') and 'access consciousness' ('A-consciousness'). The former is the 'what it is like'-ness of experience; the latter is the availability of representational content to self-initiated action and speech. Block argues that P-consciousness is not always representational, whereas A-consciousness is. Dennett and Michael Tye are sceptical of non-representational analyses of consciousness in general. They provide accounts of blindsight that do not depend on Block's distinction.
Many other topics are worth neurophilosophical pursuit. We mentioned commissurotomy and the unity of consciousness and the self, which continues to generate discussion. Qualia beyond those of colour and pain have begun to attract neurophilosophical attention, as has self-consciousness. The first issue to arise in the 'philosophy of neuroscience' (before there was a recognized area) was the localization of cognitive functions to specific neural regions. Although the 'localization' approach had dubious origins in the phrenology of Gall and Spurzheim, and was challenged severely by Flourens throughout the early nineteenth century, it reemerged in the study of aphasia by Bouillaud, Auburtin, Broca, and Wernicke. These neurologists made careful studies (where possible) of linguistic deficits in their aphasic patients, followed by brain autopsies postmortem. Broca's initial study of twenty-two patients in the mid-nineteenth century confirmed that damage to the left cortical hemisphere was predominant, and that damage to the second and third frontal convolutions was necessary to produce speech-production deficits. Although the anatomical coordinates of Broca's postulated 'speech production centre' do not correlate exactly with the damage producing such deficits, both this area of frontal cortex and the speech-production deficits still bear his name ('Broca's area' and 'Broca's aphasia'). Less than two decades later Carl Wernicke published evidence for a second language centre. This area is anatomically distinct from Broca's area, and damage to it produced a very different set of aphasic symptoms. The cortical area that still bears his name ('Wernicke's area') is located around the first and second convolutions in temporal cortex, and the aphasia that bears his name ('Wernicke's aphasia') involves deficits in language comprehension.
Wernicke's method, like Broca's, was based on lesion studies: a careful evaluation of the behavioural deficits followed by postmortem examination to find the sites of tissue damage and atrophy. Lesion studies suggesting more precise localization of specific linguistic functions remain a cornerstone of aphasia research to this day.
Lesion studies have also produced evidence for the localization of other cognitive functions: for example, sensory processing and certain types of learning and memory. However, localization arguments for these other functions invariably include studies using animal models. With an animal model, one can perform careful behavioural measures in highly controlled settings, then ablate specific areas of neural tissue (or use a variety of other techniques to block or enhance activity in these areas) and remeasure performance on the same behavioural tests. But since we lack an animal model for (human) language production and comprehension, this additional evidence isn't available to the neurologist or neurolinguist. This fact makes the study of language a paradigm case for evaluating the logic of the lesion/deficit method of inferring functional localization. Philosopher Barbara Von Eckardt (1978) attempts to make explicit the steps of reasoning involved in this common and historically important method, and to show that the method is justifiable. Her analysis begins with Robert Cummins' early analysis of functional explanation, but she extends it into a notion of structurally adequate functional analysis. These analyses break down a complex capacity C into its constituent capacities C1, C2, . . . , Cn, where the constituent capacities are consistent with the underlying structural details of the system.
For example, human speech production (complex capacity C) results from formulating a speech intention, then selecting appropriate linguistic representations to capture the content of the speech intention, then formulating the motor commands to produce the appropriate sounds, then communicating these motor commands to the appropriate motor pathways (constituent capacities C1, C2, . . . , Cn). A functional-localization hypothesis has the form: brain structure S in an organism (type) O has constituent capacity ci, where ci is a function of some part of O. For example: Broca's area (S) in humans (O) formulates motor commands to produce the appropriate sounds (one of the constituent capacities ci). Such hypotheses specify aspects of the structural realization of a functional-component model. They are part of the theory of the neural realization of the functional model.
Armed with these characterizations, Von Eckardt argues that inference to a functional-localization hypothesis proceeds in two steps. First, a functional deficit in a patient is hypothesized based on the abnormal behaviour the patient exhibits. Second, localization of function in normal brains is inferred on the basis of the functional deficit hypothesis plus evidence about the site of brain damage. The structurally adequate functional analysis of the capacity connects the pathological behaviour to the hypothesized functional deficit. This connection suggests four adequacy conditions on a functional deficit hypothesis. First, the pathological behaviour P (e.g., the speech deficits characteristic of Broca's aphasia) must result from failing to exercise some complex capacity C (human speech production). Second, there must be a structurally adequate functional analysis of how people exercise capacity C that involves some constituent capacity ci (formulating motor commands to produce the appropriate sounds). Third, the operation of the steps described by the structurally adequate functional analysis, minus the operation of the component performing ci (Broca's area), must result in pathological behaviour P. Fourth, there must not be a better available explanation for why the patient does P. Argument to a functional deficit hypothesis on the basis of pathological behaviour is thus an instance of inference to the best available explanation. When postulating a deficit in a normal functional component provides the best available explanation of the pathological data, we are justified in drawing the inference.
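The four adequacy conditions can be schematized. The sketch below is a toy rendering of Von Eckardt's conditions with hypothetical names, not her formalism: it filters candidate deficit hypotheses by the first three conditions and then applies the fourth as a comparative choice of best available explanation.

```python
from dataclasses import dataclass

@dataclass
class DeficitHypothesis:
    """A toy encoding of the four adequacy conditions on a functional
    deficit hypothesis. Field names are illustrative assumptions."""
    behaviour_results_from_failed_capacity: bool  # condition 1
    structurally_adequate_analysis_exists: bool   # condition 2
    removing_component_yields_behaviour: bool     # condition 3
    explanatory_score: float                      # basis for condition 4

def best_available(hypotheses):
    # Conditions 1-3 are gates; condition 4 (no better available
    # explanation) is a comparison among the hypotheses that pass them.
    adequate = [h for h in hypotheses
                if h.behaviour_results_from_failed_capacity
                and h.structurally_adequate_analysis_exists
                and h.removing_component_yields_behaviour]
    return max(adequate, key=lambda h: h.explanatory_score, default=None)
```

The scheme makes vivid why the inference is only as strong as the pool of candidates: if the available explanations are impoverished, the 'best' one wins by default, which is exactly the worry about theoretical imagination raised below.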
Von Eckardt applies this analysis to a neurological case study involving a controversial reinterpretation of agnosia. Her philosophical explication of this important neurological method reveals that most challenges to localization arguments either argue only against the localization of a particular type of functional capacity, or argue against generalizing from localization of function in one individual to all normal individuals. (She presents examples of each from the neurological literature.) Such challenges do not impugn the validity of standard arguments for functional localization from deficits. It does not follow that such arguments are unproblematic. But they face difficult factual and methodological problems, not logical ones. Furthermore, the analysis of these arguments as involving a type of functional analysis and inference to the best available explanation carries an important implication for the biological study of cognitive function. Functional analyses require functional theories, and structurally adequate functional analyses require checks imposed by the lower-level sciences investigating the underlying physical mechanisms. Arguments to the best available explanation are often hampered by a lack of theoretical imagination: the available explanations are often severely limited. We must seek theoretical inspiration from any level of theory and explanation. Hence, making explicit the 'logic' of this common and historically important form of neurological explanation reveals the necessity of joint participation from all scientific levels, from cognitive psychology down to molecular neuroscience. Von Eckardt anticipated what came to be heralded as the 'co-evolutionary research methodology,' which remains a centerpiece of neurophilosophy to the present day.
Over the last two decades, evidence for localization of cognitive function has come increasingly from a new source: the development and refinement of neuroimaging techniques. The form of localization-of-function argument appears not to have changed from that employing lesion studies (as analysed by Von Eckardt). Instead, these imaging technologies resolve some of the methodological problems that plague lesion studies. For example, researchers do not need to wait until the patient dies, and in the meantime probably acquires additional brain damage, to find the lesion sites. Two functional imaging techniques are prominent: positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). Although these measure different biological markers of functional activity, both now have a resolution down to around one millimetre. As these techniques increase the spatial and temporal resolution of functional markers and continue to be used with sophisticated behavioural methodologies, the possibility of localizing specific psychological functions to increasingly specific neural regions continues to grow.
What we now know about the cellular and molecular mechanisms of neural conductance and transmission is spectacular. The same evaluation holds for all levels of explanation and theory about the mind/brain: maps, networks, systems, and behaviour. This is a natural outcome of increasing scientific specialization. We develop the technology, the experimental techniques, and the theoretical frameworks within specific disciplines to push forward our understanding. Still, a crucial aspect of the total picture gets neglected: the relationships between the levels, the 'glue' that binds knowledge of neuron activity to subcellular and molecular mechanisms, network activity patterns to the activity of and connectivity between single neurons, and behaviour to network activity. This problem is especially glaring when we focus on the relationship between 'cognitivist' psychological theories, which postulate information-bearing representations and processes operating over their contents, and the activity patterns in networks of neurons. Co-evolution between explanatory levels still seems more like a distant dream than an operative methodology.
It is here that some neuroscientists appeal to ‘computational’ methods. If we examine the way that computational models function in more developed sciences (like physics), we find the resources of dynamical systems constantly employed. Global effects (such as large-scale meteorological patterns) are explained in terms of the interaction of ‘local’ lower-level physical phenomena, but only by dynamical, nonlinear, and often chaotic sequences and combinations. Addressing the interlocking levels of theory and explanation in the mind/brain using computational resources that have worked to bridge levels in more mature sciences might yield comparable results. This methodology is necessarily interdisciplinary, drawing on resources and researchers from a variety of levels, including higher levels like experimental psychology, ‘program-writing’ and ‘connectionist’ artificial intelligence, and philosophy of science.
However, the use of computational methods in neuroscience is not new. Hodgkin, Huxley, and Katz incorporated values of voltage-dependent potassium conductance they had measured experimentally in the squid giant axon into an equation from physics describing the time evolution of a first-order kinetic process. This equation enabled them to calculate best-fit curves for modelled conductance-versus-time data that reproduced the S-shaped (sigmoidal) function suggested by their experimental data. Using equations borrowed from physics, Rall (1959) developed the cable model of dendrites. This theory provided an account of how the various inputs from across the dendritic tree interact temporally and spatially to determine the input-output properties of single neurons. It remains influential today, and has been incorporated into the GENESIS software for programming neurally realistic networks. More recently, David Sparks and his colleagues have shown that a vector-averaging model of activity in neurons of the superior colliculus correctly predicts experimental results about the amplitude and direction of saccadic eye movements. Working with a more sophisticated mathematical model, Apostolos Georgopoulos and his colleagues have predicted the direction and amplitude of hand and arm movements based on the averaged activity of 224 cells in motor cortex. Their predictions have been borne out under a variety of experimental tests. We mention these particular studies only because we are familiar with them. We could easily multiply examples of the fruitful interaction of computational and experimental methods in neuroscience a hundredfold. Many of these extend back before 'computational neuroscience' was a recognized research endeavour.
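The Hodgkin-Huxley result can be illustrated with a minimal numerical sketch. The rate constants below are illustrative placeholders, not the fitted values Hodgkin and Huxley reported; the point is only that first-order gating kinetics, raised to the fourth power, yield the sigmoidal conductance rise their data showed.

```python
def gate_timecourse(alpha=0.1, beta=0.125, n0=0.0, dt=0.01, steps=5000):
    """Euler-integrate the first-order kinetic equation
        dn/dt = alpha * (1 - n) - beta * n
    for a potassium gating variable n. Rate constants are illustrative."""
    n, trace = n0, []
    for _ in range(steps):
        n += dt * (alpha * (1.0 - n) - beta * n)
        # K+ conductance varies as n^4, producing a sigmoidal rise
        # even though n itself relaxes exponentially.
        trace.append(n ** 4)
    return trace
```

The trace starts with near-zero slope, accelerates, and then saturates toward the steady-state value (alpha / (alpha + beta))^4: the S-shape comes entirely from the fourth-power relation between the gating variable and the conductance.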
February 9, 2010