The minimal theory states that the concept of truth is exhausted by the equivalence principle: the principle that for any proposition ‘p’, it is true that ‘p’ if and only if ‘p’. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and by supporters of truth-conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth-conditional account of meaning (Davidson, 1990; Dummett, 1959; Horwich, 1990). If the claim that ‘Paris is beautiful’ is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence’s meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by Ramsey, Ayer, the later Wittgenstein, Quine, Strawson and Horwich, among others. But is the minimal theory correct?
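Schematically (our notation, not the text’s), the equivalence principle is the schema

\[
T(\ulcorner p \urcorner) \leftrightarrow p
\]

where ‘T’ is the truth predicate and the corner quotes form a name of the proposition that p.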
The minimal or redundancy theory treats instances of the equivalence principle as definitional of truth for a given sentence. But in fact, it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as
‘London is beautiful’ is true if and only if London is beautiful
can be explained are precisely A1 and A3 above. This would be a pseudo-explanation if the fact that ‘London’ refers to London consisted in part in the fact that ‘London is beautiful’ has the truth condition it does. But that is very implausible: it is, after all, possible to understand the name ‘London’ without understanding the predicate ‘is beautiful’. The idea that facts about the reference of particular words can be explanatory of facts about the truth conditions of sentences containing them in no way requires any naturalistic or any other kind of reduction of the notion of reference. Nor is the idea incompatible with the plausible point that singular reference can be attributed at all only to something that is capable of combining with other expressions to form complete sentences. That still leaves room for facts about an expression’s having the particular reference it does to be partially explanatory of the particular truth condition possessed by a given sentence containing it. The minimal theory thus treats as definitional or stipulative something that is in fact open to explanation. What makes this explanation possible is that there is a general notion of truth that has, among the many links that hold it in place, systematic connections with the semantic values of sub-sentential expressions.
A second problem with the minimal theory is that it seems impossible to formulate it without at some point relying implicitly on features and principles involving truth that go beyond anything countenanced by the minimal theory. If the minimal theory treats truth as a predicate of anything linguistic, be it utterances, types-in-a-language, or whatever, then the equivalence schema will not cover all cases, but only those in the theorist’s own language. Some account has to be given of truth for sentences of other languages. Speaking of the truth of language-independent propositions or thoughts will only postpone, not avoid, this issue, since at some point principles have to be stated associating these language-independent entities with sentences of particular languages. The defender of the minimal theory is likely to say that if a sentence ‘S’ of a foreign language is best translated by our sentence ‘p’, then the foreign sentence ‘S’ is true if and only if ‘p’. Now the best translation of a sentence must preserve the concepts expressed in the sentence. Constraints involving a general notion of truth are pervasive in a plausible philosophical theory of concepts. It is, for example, a condition of adequacy on an individuating account of any concept that there exist what is called a ‘Determination Theory’ for that account - that is, a specification of how the account contributes to fixing the semantic value of that concept. The notion of a concept’s semantic value is the notion of something that makes a certain contribution to the truth conditions of thoughts in which the concept occurs. But this is to presuppose, rather than to elucidate, a general notion of truth.
It is also plausible that there are general constraints on the form of such Determination Theories, constraints that involve truth and that are not derivable from the minimalist’s conception. Suppose that concepts are individuated by their possession conditions. A concept is something that is capable of being a constituent of contents: a way of thinking of something - a particular object, or property, or relation, or another entity.
One such plausible general constraint is then the requirement that when a thinker forms beliefs involving a concept in accordance with its possession condition, a semantic value is assigned to the concept in such a way that the beliefs are true. Some general principles involving truth can indeed, as Horwich has emphasized, be derived from the equivalence schema using minimal logical apparatus. Consider, for instance, the principle that ‘Paris is beautiful and London is beautiful’ is true if and only if ‘Paris is beautiful’ is true and ‘London is beautiful’ is true. This follows logically from three instances of the equivalence principle: ‘Paris is beautiful and London is beautiful’ is true if and only if Paris is beautiful and London is beautiful; ‘Paris is beautiful’ is true if and only if Paris is beautiful; and ‘London is beautiful’ is true if and only if London is beautiful. But no logical manipulations of the equivalence schema will allow the derivation of that general constraint governing possession conditions, truth and the assignment of semantic values. That constraint can of course be regarded as a further elaboration of the idea that truth is one of the aims of judgement.
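The derivation Horwich appeals to can be set out explicitly (our formalization). Abbreviating ‘Paris is beautiful’ as P and ‘London is beautiful’ as L, the three equivalence instances are:

\[
\begin{aligned}
&T(\ulcorner P \text{ and } L \urcorner) \leftrightarrow (P \wedge L)\\
&T(\ulcorner P \urcorner) \leftrightarrow P\\
&T(\ulcorner L \urcorner) \leftrightarrow L
\end{aligned}
\]

From these three biconditionals, propositional logic alone yields \(T(\ulcorner P \text{ and } L \urcorner) \leftrightarrow (T(\ulcorner P \urcorner) \wedge T(\ulcorner L \urcorner))\).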
We now turn to the other question: ‘What is it for a person’s language to be correctly describable by a semantic theory containing a particular axiom, such as the axiom A6 above for conjunction?’ This question may be addressed at two depths of generality. At the shallower level, the question may take for granted the person’s possession of the concept of conjunction, and be concerned with what has to be true for the axiom correctly to describe his language. At a deeper level, an answer should not duck the issue of what it is to possess the concept. The answers to both questions are of great interest: we will take the shallower level of generality first.
When a person means conjunction by ‘and’, he is not necessarily capable of formulating the axiom A6 explicitly. Even if he can formulate it, his ability to formulate it is not the causal basis of his capacity to hear sentences containing the word ‘and’ as meaning something involving conjunction. Nor is it the causal basis of his capacity to mean something involving conjunction by sentences containing the word ‘and’. Is it then right to regard a truth theory as part of an unconscious psychological computation, and to regard understanding a sentence as involving a particular way of deriving a theorem from a truth theory at some level of unconscious processing? One problem with this is that it is quite implausible that everyone who speaks the same language has to use the same algorithms for computing the meaning of a sentence. In the past thirteen years, thanks particularly to the work of Davies and Evans, a conception has evolved according to which an axiom like A6 is true of a person’s language only if there is a common component in the explanation of his understanding of each sentence containing the word ‘and’, a common component that explains why each such sentence is understood as meaning something involving conjunction (Davies, 1987). This conception can also be elaborated in computational terms: for an axiom like A6 to be true of a person’s language is for the unconscious mechanisms which produce understanding to draw on the information that a sentence of the form ‘A and B’ is true if and only if ‘A’ is true and ‘B’ is true (Peacocke, 1986). Many different algorithms may equally draw on this information. The psychological reality of a semantic theory thus involves, in Marr’s (1982) famous classification, something intermediate between his level one, the function computed, and his level two, the algorithm by which it is computed. This conception of the psychological reality of a semantic theory can also be applied to syntactic and phonological theories. Theories in semantics, syntax and phonology are not themselves required to specify the particular algorithms that the language user employs. The identification of the particular computational methods employed is a task for psychology. But semantic, syntactic and phonological theories are answerable to psychological data, and are potentially refutable by them - for these linguistic theories do make commitments to the information drawn upon by mechanisms in the language user.
This answer to the question of what it is for an axiom to be true of a person’s language clearly takes for granted the person’s possession of the concept expressed by the word treated by the axiom. In the example of the axiom A6, the information drawn upon is that sentences of the form ‘A and B’ are true if and only if ‘A’ is true and ‘B’ is true. This informational content employs, as it has to if it is to be adequate, the concept of conjunction used in stating the meaning of sentences containing ‘and’. So the computational answer we have returned needs further elaboration if we are to address the deeper question, which does not want to take for granted possession of the concepts expressed in the language. It is at this point that the theory of linguistic understanding has to draw upon a theory of concepts. It is plausible that the concept of conjunction is individuated by the condition a thinker must satisfy in order to possess it.
Finally, this response to the deeper question allows us to answer two challenges to the conception of meaning as truth-conditions. First, there was the question left hanging earlier, of how the theorist of truth-conditions is to say what makes one axiom of a semantic theory correct rather than another, when the two axioms assign the same semantic values, but do so by means of different concepts. Since the different concepts will have different possession conditions, the dovetailing accounts, at the deeper level, of what it is for each axiom to be correct for a person’s language will be different accounts. Second, there is a challenge repeatedly made by minimalist theorists of truth, to the effect that the theorist of meaning as truth-conditions should give some non-circular account of what it is to understand a sentence, or to be capable of understanding all sentences containing a given constituent. For each expression in a sentence, the corresponding dovetailing account, together with the possession condition, supplies a non-circular account of what it is to understand any sentence containing that expression. The combined accounts for each of the expressions that comprise a given sentence together constitute a non-circular account of what it is to understand the complete sentence. Taken together, they allow the theorist of meaning as truth-conditions fully to meet the challenge.
The content of an utterance or sentence is what is expressed by it: the proposition or claim made about the world. By extension, the content of a predicate or other sub-sentential component is what it contributes to the content of sentences that contain it. The nature of content is the central concern of the philosophy of language, and mental states have contents too: a belief may have the content that the prime minister will resign. A concept is something that is capable of being a constituent of such contents. More specifically, a concept is a way of thinking of something - a particular object, or property, or relation, or another entity. Such a distinction was held in Frege’s philosophy of language, explored in “On Concept and Object” (1892). Frege regarded predicates as incomplete expressions, in the same way as a mathematical expression for a function, such as sin . . . or log . . ., is incomplete. Predicates refer to concepts, which are themselves ‘unsaturated’, and cannot be referred to by subject expressions (we thus get the paradox that the concept of a horse is not a concept). Although Frege recognized the metaphorical nature of the notion of a concept being unsaturated, he was rightly convinced that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
The general system of concepts with which we organize our thoughts and perceptions is a conceptual scheme. The outstanding elements of our everyday conceptual scheme include spatial and temporal relations between events and enduring objects, causal relations, other persons, meaning-bearing utterances of others, and so on. To see the world as containing such things is to share this much of our conceptual scheme. A controversial argument of Davidson’s urges that we would be unable to interpret speech from a different conceptual scheme as even meaningful. Davidson daringly goes on to argue that, since translation proceeds according to a principle of charity, and since it must be possible for an omniscient translator to make sense of us, we can be assured that most of the beliefs formed within the commonsense conceptual framework are true.
Concepts are to be distinguished from stereotypes and from conceptions. The stereotypical spy may be a middle-level official down on his luck and in need of money. Nonetheless, we can come to learn that Anthony Blunt, art historian and Surveyor of the Queen’s Pictures, is a spy; we can come to believe that something falls under a concept while positively disbelieving that the same thing falls under the stereotype associated with the concept. Similarly, a person’s conception of a just arrangement for resolving disputes may involve something as complex as contemporary Western legal systems. But whether or not that conception would be correct, it is quite intelligible for someone to reject it by arguing that it does not adequately provide for the elements of fairness and respect that are required by the concept of justice.
Basically, a concept is that which is understood by a term, particularly a predicate. To possess a concept is to be able to deploy a term expressing it in making judgements, an ability connected with such things as recognizing when the term applies and being able to understand the consequences of its application. The term ‘idea’ was formerly used in the same way, but is avoided because of its associations with subjective mental imagery, which may be irrelevant to the possession of a concept. In the semantics of Frege, a concept is the reference of a predicate, and cannot be referred to by a subject term; yet Frege recognized that some such notion is needed to explain the unity of a sentence, and to prevent sentences from being thought of as mere lists of names.
A theory of a particular concept must be distinguished from a theory of the object or objects it picks out. The theory of the concept is part of the theory of thought and epistemology; a theory of the object or objects is part of metaphysics and ontology. Some figures in the history of philosophy are open to the accusation of not having fully respected the distinction between these kinds of theory. Descartes appears to have moved from facts about the indubitability of the thought ‘I think’, containing the first-person way of thinking, to conclusions about the non-material nature of the object he himself was. But though the goals of a theory of concepts and a theory of objects are distinct, each theory is required to have an adequate account of its relation to the other theory. A theory of a concept is unacceptable if it gives no account of how the concept is capable of picking out the object it evidently does pick out. A theory of objects is unacceptable if it makes it impossible to understand how we could have concepts of those objects.
A fundamental question for the philosophy of concepts is what individuates a given concept - that is, what makes it the concept it is, rather than any other concept. One answer, which has been developed in great detail, is that it is impossible to give a nontrivial answer to this question (Schiffer, 1987). An alternative approach addresses the question by starting from the idea that a concept is individuated by the condition that must be satisfied if a thinker is to possess that concept and to be capable of having beliefs and other attitudes whose content contains it as a constituent. So, to take a simple case, one could propose that the logical concept ‘and’ is individuated by this condition: it is the unique concept ‘C’ such that, to possess it, a thinker has to find these forms of inference compelling, without basing them on any further inference or information: from any two premisses ‘A’ and ‘B’, ‘A C B’ can be inferred, and from any premiss ‘A C B’, each of ‘A’ and ‘B’ can be inferred. Again, a relatively observational concept such as ‘round’ can be individuated in part by stating that the thinker finds specified contents containing it compelling when he has certain kinds of perception, and in part by relating those judgements containing the concept which are not based on perception to those judgements that are. A statement that individuates a concept by saying what is required for a thinker to possess it can be described as giving the possession condition for the concept.
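Schematically, the three transitions mentioned in this possession condition can be displayed as introduction and elimination rules in a standard natural-deduction layout (our presentation, not the text’s):

\[
\frac{A \qquad B}{A\,C\,B} \qquad\qquad \frac{A\,C\,B}{A} \qquad\qquad \frac{A\,C\,B}{B}
\]

The condition says that ‘and’ is the unique concept ‘C’ for which a thinker must find these transitions primitively compelling.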
A possession condition for a particular concept may actually make use of that concept. The possession condition for ‘and’ does so. We can also expect to use relatively observational concepts in specifying the kinds of experiences that have to be mentioned in the possession conditions for relatively observational concepts. What we must avoid is mention of the concept in question as such within the content of the attitudes attributed to the thinker in the possession condition. Otherwise we would be presupposing possession of the concept in an account that was meant to elucidate its possession. In talking of what the thinker finds compelling, the possession conditions can also respect an insight of the later Wittgenstein: that a thinker’s mastery of a concept is inextricably tied to how he finds it natural to go on in new cases of applying the concept.
Sometimes a family of concepts has this property: it is not possible to master any one of the members of the family without mastering the others. Two of the families that plausibly have this status are these: the family consisting of the simple concepts 0, 1, 2, . . . of the natural numbers and the corresponding concepts of the numerical quantifiers (‘there are 0 so-and-so’s’, ‘there is 1 so-and-so’, . . .), and the family consisting of the concepts ‘belief’ and ‘desire’. Such families exhibit what has come to be known as ‘local holism’. A local holism does not prevent the individuation of a concept by its possession condition. Rather, it demands that all the concepts in the family be individuated simultaneously. So one would say something of this form: belief and desire form the unique pair of concepts C1 and C2 such that for a thinker to possess them is for him to meet such-and-such condition involving the thinker, C1 and C2. For these and other possession conditions to individuate properly, it is necessary that there be some ranking of the concepts treated. The possession conditions for concepts higher in the ranking must presuppose only possession of concepts at the same or lower levels in the ranking.
A possession condition may in various ways make a thinker’s possession of a particular concept dependent upon his relations to his environment. Many possession conditions will mention the links between a concept and the thinker’s perceptual experience. Perceptual experience represents the world as being a certain way. It is arguable that the only satisfactory explanation of what it is for perceptual experience to represent the world in a particular way must refer to the complex relations of the experience to the subject’s environment. If this is so, then mention of such experiences in a possession condition will make possession of that concept dependent in part upon the environmental relations of the thinker. Burge (1979) has also argued from intuitions about particular examples that, even though the thinker’s non-environmental properties and relations remain constant, the conceptual content of his mental state can vary if the thinker’s social environment is varied. A possession condition that properly individuates such a concept must take into account the thinker’s social relations, in particular his linguistic relations.
Concepts have a normative dimension, a fact strongly emphasized by Kripke. For any judgement whose content involves a given concept, there is a correctness condition for that judgement, a condition that is dependent in part upon the identity of the concept. The normative character of concepts also extends to the territory of a thinker’s reasons for making judgements. A thinker’s visual perception can give him good reason for judging ‘That man is bald’: it does not by itself give him good reason for judging ‘Rostropovich is bald’, even if the man he sees is Rostropovich. All these normative connections must be explained by a theory of concepts. One approach to these matters is to look to the possession condition for the concept, and consider how the referent of the concept is fixed from it, together with the world. One proposal is that the referent of the concept is that object (or property, or function, . . .) which makes the practices of judgement and inference mentioned in the possession condition always lead to true judgements and truth-preserving inferences. This proposal would explain why certain reasons are necessarily good reasons for judging given contents. Provided the possession condition permits us to say what it is about a thinker’s previous judgements that makes it the case that he is employing one concept rather than another, this proposal would also have another virtue. It would allow us to say how the correctness condition is determined for a judgement in which the concept is applied to newly encountered objects. The judgement is correct if the new object has the property that in fact makes the judgemental practices mentioned in the possession condition yield true judgements, or truth-preserving inferences.
The distinction between truths of reason and truths of fact is associated with Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, etc., or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them ‘truths of reason’ because the explicit identities are self-evident a priori truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, which are true in all possible worlds. Some examples are ‘All equilateral rectangles are rectangles’ and ‘All bachelors are unmarried’: the first is already of the form ‘AB is B’ and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believes, are ‘God exists’ and the truths of logic, arithmetic and geometry.
In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject. (This holds even for propositions like ‘Caesar crossed the Rubicon’: Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar.) And this containment relationship, which is eternal and unalterable even by God, guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary, and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps; in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions a priori, it does not appear to show that they could have been false. Intuitively, it seems a better ground for supposing that such a proposition is a necessary truth of a special sort. A related question arises from the idea that truths of fact depend on God’s decision to create the best of all possible worlds: if it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God’s decision to create this world, but God had the power to decide otherwise. Yet God is necessarily good and non-deceiving, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.
Necessary truths are ones that must be true, or whose opposite is impossible. Contingent truths are those that are not necessary and whose opposite is therefore possible. 1-3 below are necessary, 4-6 contingent.
1. It is not the case that it is raining and not raining.
2. 2 + 2 = 4.
3. All bachelors are unmarried.
4. It seldom rains in the Sahara.
5. There are more than four states in the USA.
6. Some bachelors drive Maseratis.
Plantinga (1974, p. 2) characterizes the sense of necessity illustrated in 1-3 as ‘broadly logical’. It includes not only truths of logic, but those of mathematics, set theory, and other quasi-logical truths. Yet it is not so broad as to include matters of causal or natural necessity, such as: nothing travels faster than the speed of light.
One would like an account of the basis of our distinction and a criterion by which to apply it. Some suppose that necessary truths are those we know a priori. But we lack a criterion for a priori truths, and there are necessary truths we do not know at all, e.g., undiscovered mathematical ones. Nor would it help to say that necessary truths are ones it is possible, in the broadly logical sense, to know a priori. Finally, Kripke (1972, p. 253) and Plantinga (1974, p. 8) argue that some contingent truths are knowable a priori. Similar problems face the suggestion that necessary truths are the ones we know with certainty: we lack a criterion for certainty, there are necessary truths we do not know, and (barring dubious arguments for scepticism) it is reasonable to suppose that we know some contingent truths with certainty.
Leibniz defined a necessary truth as one whose opposite implies a contradiction. Every such proposition, he held, is either an explicit identity (i.e., of the form ‘A is A’, ‘AB is B’, etc.) or is reducible to an identity by successively substituting equivalent terms. (Thus, 3 above might be so reduced by substituting ‘unmarried man’ for ‘bachelor’.) This account has several advantages over the ideas of the previous paragraph. First, it explicates the notions of necessity and possibility and seems to provide a criterion we can apply. Second, since explicit identities are self-evident a priori truths, the theory implies that all necessary truths are knowable a priori; but it does not entail that we actually know all of them, nor does it define ‘knowable’ in a circular way. Third, it implies that necessary truths are knowable with certainty, but does not preclude our having certain knowledge of contingent truths by means other than a reduction.
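A worked instance of the substitution (our layout):

\[
\text{All bachelors are unmarried} \;\longrightarrow\; \text{All unmarried men are unmarried}
\]

where ‘unmarried man’ replaces ‘bachelor’; taking A = ‘man’ and B = ‘unmarried’, so that the composite subject AB is ‘unmarried man’, the result is of the explicit-identity form ‘AB is B’.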
Nevertheless, this view is also problematic, and Leibniz’s examples of reductions are too sparse to prove a claim about all necessary truths. Some of his reductions, moreover, are deficient: Frege pointed out, for example, that his proof of ‘2 + 2 = 4’ presupposes the principle of association and so does not rest on the principle of identity alone. More generally, it has been shown that arithmetic cannot be reduced to logic, but requires the resources of set theory as well. Finally, there are other necessary propositions, e.g., ‘Nothing can be red and green all over’, which do not seem to be reducible to identities and which Leibniz does not show how to reduce.
Leibniz and others have thought of truth as a property of propositions, where the latter are conceived as things that may be expressed by, but are distinct from, linguistic items like statements. On another approach, truth is a property of linguistic entities, and the basis of necessary truth lies in convention. Thus A. J. Ayer, for example, argued that the only necessary truths are analytic statements and that the latter rest entirely on our commitment to use words in certain ways.
The slogan ‘the meaning of a statement is its method of verification’ expresses the empirical verificationist theory of meaning. It is more than the general criterion of meaningfulness: that a sentence is meaningful if and only if it is empirically verifiable. It says in addition what the meaning of a sentence is: it is given by the observations that would confirm or disconfirm the sentence. Sentences that would be verified or falsified by all the same observations are empirically equivalent, or have the same meaning. A sentence is said to be cognitively meaningful if and only if it can be verified or falsified in experience. This is not meant to require that the sentence be conclusively verified or falsified, since universal scientific laws or hypotheses (which are supposed to pass the test) are not logically deducible from any amount of actually observed evidence.
When one predicates necessary truth of a proposition, one speaks of modality de dicto. For one ascribes the modal property, necessary truth, to a dictum, namely, whatever proposition is taken as necessary. A venerable tradition, however, distinguishes this from necessity de re, wherein one predicates necessary or essential possession of some property of an object. For example, the statement ‘4 is necessarily greater than 2’ might be used to predicate of the object, 4, the property of being necessarily greater than 2. That objects have some of their properties necessarily, or essentially, and others only contingently, or accidentally, is a main part of the doctrine called ‘essentialism’. Thus, an essentialist might say that Socrates had the property of being bald accidentally, but that of being self-identical, or perhaps of being human, essentially. Although essentialism has been vigorously attacked in recent years, most particularly by Quine, it also has able contemporary proponents, such as Plantinga.
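The contrast can be put in standard modal notation (our gloss, not the text’s):

\[
\underbrace{\Box\,(4 > 2)}_{\textit{de dicto}}
\qquad\text{versus}\qquad
\underbrace{4 \text{ has the property } \lambda x.\,\Box\,(x > 2)}_{\textit{de re}}
\]

In the first, necessity qualifies the whole dictum; in the second, it qualifies an object’s possession of a property.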
Many philosophers have traditionally held that every proposition has a modal status as well as a truth value. Every proposition is either necessary or contingent, as well as either true or false. The issue of knowledge of the modal status of propositions has received much attention because of its intimate relationship to the issue of a priori knowledge. For example, some hold that all knowledge of necessary propositions is a priori. Others reject this claim by citing Kripke’s (1980) alleged cases of necessary a posteriori propositions. Such contentions are often inconclusive, for they fail to take into account the following tripartite distinction: ‘S’ knows the general modal status of ‘p’ just in case ‘S’ knows that ‘p’ is a necessary proposition or ‘S’ knows that ‘p’ is a contingent proposition; ‘S’ knows the truth value of ‘p’ just in case ‘S’ knows that ‘p’ is true or ‘S’ knows that ‘p’ is false; and ‘S’ knows the specific modal status of ‘p’ just in case ‘S’ knows that ‘p’ is necessarily true or ‘S’ knows that ‘p’ is necessarily false or ‘S’ knows that ‘p’ is contingently true or ‘S’ knows that ‘p’ is contingently false. It does not follow from the fact that knowledge of the general modal status of a proposition is a priori that knowledge of its specific modal status is also a priori. Nor does it follow from the fact that knowledge of the specific modal status of a proposition is a posteriori that knowledge of its general modal status is also a posteriori.
The distinction between truths of reason and truths of fact, again, is due to Leibniz, who declares that there are only two kinds of truths: truths of reason and truths of fact. The former are all either explicit identities, i.e., of the form ‘A is A’, ‘AB is B’, and so forth, or they are reducible to this form by successively substituting equivalent terms. Leibniz dubs them ‘truths of reason’ because the explicit identities are self-evident a priori truths, whereas the rest can be converted to such by purely rational operations. Because their denial involves a demonstrable contradiction, Leibniz also says that truths of reason ‘rest on the principle of contradiction, or identity’ and that they are necessary propositions, which are true in all possible worlds. Some examples are ‘All equilateral rectangles are rectangles’ and ‘All bachelors are unmarried’: the first is already of the form ‘AB is B’ and the latter can be reduced to this form by substituting ‘unmarried man’ for ‘bachelor’. Other examples, or so Leibniz believes, are ‘God exists’ and the truths of logic, arithmetic and geometry.
Truths of fact, on the other hand, cannot be reduced to an identity, and our only way of knowing them is a posteriori, by reference to the facts of the empirical world. Likewise, their denial does not involve a contradiction; their truth is merely contingent: they could have been otherwise, and they hold of the actual world, but not of every possible one. Some examples are ‘Caesar crossed the Rubicon’ and ‘Leibniz was born in Leipzig’, as well as propositions expressing correct scientific generalizations. In Leibniz’s view, truths of fact rest on the principle of sufficient reason, which states that nothing can be so unless there is a reason that it is so. This reason is that the actual world (by which he means the total collection of things past, present and future) is better than any other possible world and was therefore created by God.
In defending the principle of sufficient reason, Leibniz runs into serious problems. He believes that in every true proposition, the concept of the predicate is contained in that of the subject. (This holds even for propositions like ‘Caesar crossed the Rubicon’: Leibniz thinks anyone who did not cross the Rubicon would not have been Caesar.) And this containment relationship, which is eternal and unalterable even by God, guarantees that every truth has a sufficient reason. If truth consists in concept containment, however, then it seems that all truths are analytic and hence necessary, and if they are all necessary, surely they are all truths of reason. Leibniz responds that not every truth can be reduced to an identity in a finite number of steps: in some instances revealing the connection between subject and predicate concepts would require an infinite analysis. But while this may entail that we cannot prove such propositions a priori, it does not appear to show that they could have been false. Intuitively, it seems a better ground for supposing that such a proposition is a necessary truth of a special sort. A related question arises from the idea that truths of fact depend on God’s decision to create the best of all possible worlds: if it is part of the concept of this world that it is best, how could its existence be other than necessary? Leibniz answers that its existence is only hypothetically necessary, i.e., it follows from God’s decision to create this world, but God had the power to decide otherwise. Yet God is necessarily good, so how could he have decided to do anything else? Leibniz says much more about these matters, but it is not clear whether he offers any satisfactory solutions.
The modality of a proposition is the way in which it is true or false. The most important division is between propositions true of necessity and those true as things are: necessary as opposed to contingent propositions. Other qualifiers sometimes called ‘modal’ include the tense indicators ‘it will be the case that p’ and ‘it was the case that p’, and there are affinities between the deontic indicators, ‘it ought to be the case that p’ or ‘it is permissible that p’, and the logical modalities of necessity and possibility. Modal logic, which studies these notions, was of great importance historically, particularly in the light of various doctrines concerning the necessary properties of the deity, but was not a central topic of modern logic in its golden period at the beginning of the 20th century. It was, however, revived by C. I. Lewis, who added to a propositional or predicate calculus two operators, □ and ◊ (sometimes written N and M), meaning necessarily and possibly, respectively. Theses like p → ◊p and □p → p will be wanted. Controversial theses include □p → □□p (if a proposition is necessary, it is necessarily necessary, characteristic of the system known as S4) and ◊p → □◊p (if a proposition is possible, it is necessarily possible, characteristic of the system known as S5). The classical model theory for modal logic, due to Kripke and the Swedish logician Stig Kanger, involves valuing propositions not as true or false simpliciter, but as true or false at possible worlds, with necessity then corresponding to truth in all worlds, and possibility to truth in some world.
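The Kripke-Kanger picture can be made concrete with a small sketch. The following Python fragment is an illustrative toy of our own devising (the model, the world names and the function names are all assumptions, nothing from the literature): it evaluates □p and ◊p at a world by quantifying over the worlds accessible from it.

# A toy Kripke model: a set of worlds, an accessibility relation, and a
# valuation saying at which worlds each atomic proposition is true.
worlds = {"w1", "w2", "w3"}
access = {"w1": {"w2", "w3"}, "w2": {"w2"}, "w3": set()}
valuation = {"p": {"w2", "w3"}}

def holds(prop: str, world: str) -> bool:
    """Atomic proposition `prop` is true at `world` under the valuation."""
    return world in valuation.get(prop, set())

def box(prop: str, world: str) -> bool:
    """Necessity: `prop` holds at every world accessible from `world`."""
    return all(holds(prop, v) for v in access[world])

def diamond(prop: str, world: str) -> bool:
    """Possibility: `prop` holds at some world accessible from `world`."""
    return any(holds(prop, v) for v in access[world])

print(box("p", "w1"))      # True: p holds at both w2 and w3
print(diamond("p", "w3"))  # False: no world is accessible from w3

Note that at a dead-end world like w3, □p comes out vacuously true while ◊p comes out false; it is constraints on the accessibility relation (reflexivity, transitivity, and so on) that validate characteristic theses such as □p → p and the S4 and S5 axioms above.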
The doctrine advocated by David Lewis holds that different ‘possible worlds’ are to be thought of as existing exactly as this one does; thinking in terms of possibilities is thinking of real worlds where things are different. This view has been charged with misrepresenting modal thought: it leaves it unclear why it is good to save the child from drowning, since there is still a possible world in which she (or her counterpart) drowned, and from the standpoint of the universe it should make no difference which world is actual. Critics also charge that the view cannot be fitted either with a coherent theory of how we know about possible worlds, or with a coherent theory of why we are interested in them; but Lewis denies that any other way of interpreting modal statements is tenable.
The ‘standard analysis’ of propositional knowledge, suggested by Plato and Kant among others, implies that if one has a justified true belief that ‘p’, then one knows that ‘p’. The belief condition requires that anyone who knows that ‘p’ believes that ‘p’; the truth condition requires that any known proposition be true; and the justification condition requires that any known proposition be adequately justified, warranted or evidentially supported. Plato appears to be considering the tripartite definition in the “Theaetetus” (201c-202d), and to be endorsing its jointly sufficient conditions for knowledge in the “Meno” (97e-98a). This definition has come to be called ‘the standard analysis’ of knowledge, and it received a serious challenge in 1963, when Edmund Gettier published two counterexamples to this implication of the standard analysis.
(1) Smith and Jones have applied for the same job. Smith is justified in believing that (1) Jones will get the job, and that (2) Jones has ten coins in his pocket. On the basis of (1) and (2) Smith infers, and thus is justified in believing, that (3) the person who will get the job has ten coins in his pocket. As it turns out, Smith himself will get the job, and he also happens to have ten coins in his pocket. So, although Smith is justified in believing the true proposition (3), Smith does not know (3).
(2) Smith is justified in believing the false proposition that (a) Jones owns a Ford. On the basis of (a) Smith infers, and thus is justified in believing, that (b) either Jones owns a Ford or Brown is in Barcelona. As it turns out, Brown is in Barcelona, and so (b) is true. So, although Smith is justified in believing the true proposition (b), Smith does not know (b).
Gettier’s counterexamples are thus cases where one has justified true belief that ‘p’, but lacks knowledge that ‘p’. The Gettier problem is the problem of finding a modification of, or an alternative to, the standard justified-true-belief analysis of knowledge that avoids counterexamples like Gettier’s. Some philosophers have suggested that Gettier-style counterexamples are defective owing to their reliance on the false principle that false propositions can justify one’s belief in other propositions. But there are examples much like Gettier’s that do not depend on this allegedly false principle. Here is one example inspired by Keith Lehrer and Richard Feldman:
(3) Suppose Smith knows the following proposition, ‘m’: Jones, whom Smith has always found to be reliable and whom Smith has no reason to distrust now, has told Smith, his office-mate, that ‘p’: Jones owns a Ford. Suppose also that Jones has told Smith that ‘p’ only because of a state of hypnosis Jones is in, and that ‘p’ is true only because, unknown to himself, Jones has won a Ford in a lottery since entering the state of hypnosis. And suppose further that Smith deduces from ‘m’ its existential generalization, ‘q’: there is someone, whom Smith has always found to be reliable and whom Smith has no reason to distrust, who has told Smith, his office-mate, that he owns a Ford. Smith, then, knows that ‘q’, since he has correctly deduced ‘q’ from ‘m’, which he also knows. But suppose also that, on the basis of his knowledge that ‘q’, Smith believes that ‘r’: someone in the office owns a Ford. Under these conditions, Smith has justified true belief that ‘r’, knows his evidence for ‘r’, but does not know that ‘r’.
Gettier-style examples of this sort have proven especially difficult for attempts to analyse the concept of propositional knowledge. The history of attempted solutions to the Gettier problem is complex and open-ended. It has not produced consensus on any solution. Many philosophers hold, in light of Gettier-style examples, that propositional knowledge requires a fourth condition, beyond the justification, truth and belief conditions. Although no particular fourth condition enjoys widespread endorsement, there are some prominent general proposals in circulation. One sort of proposed modification, the so-called ‘defeasibility analysis’, requires that the justification appropriate to knowledge be ‘undefeated’ in the general sense that some appropriate subjunctive conditional concerning genuine defeaters of justification be true of that justification. One straightforward defeasibility fourth condition, for instance, requires of Smith’s knowing that ‘p’ that there be no true proposition ‘q’ such that if ‘q’ became justified for Smith, ‘p’ would no longer be justified for Smith (Pappas and Swain, 1978). A different prominent modification requires that the actual justification for a true belief qualifying as knowledge not depend in a specified way on any falsehood (Armstrong, 1973). The details proposed to elaborate such approaches have met with considerable controversy.
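The straightforward defeasibility condition just stated can be put schematically (our formalization, writing ‘J(S, x)’ for ‘x is justified for S’ and ‘□→’ for the subjunctive conditional):

\[
\neg\,\exists q\,\bigl(\,q \text{ is true} \;\wedge\; \bigl(J(S, q) \;\Box\!\rightarrow\; \neg J(S, p)\bigr)\bigr)
\]

That is, there is no truth whose becoming justified for Smith would rob ‘p’ of its justification.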
A fourth condition of evidential truth-sustenance offers one solution to the Gettier problem. More specifically, for a person ‘S’ to have knowledge that ‘p’ on justifying evidence ‘e’, ‘e’ must be truth-sustained in this sense: for every true proposition ‘t’ that, when conjoined with ‘e’, undermines S’s justification for ‘p’ on ‘e’, there is a further true proposition ‘t′’ that, when conjoined with ‘e’ and ‘t’, restores the justification of ‘p’ for ‘S’ in a way that leaves ‘S’ actually justified in believing that ‘p’. The gist of the proposal, put roughly, is that propositional knowledge requires justified true belief that is sustained by the collective totality of truths. It has been argued, in Knowledge and Evidence, that this condition handles not only Gettier-style examples such as (1)-(3), but various others as well.
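Schematically (our paraphrase of the condition just stated):

\[
\forall t\,\Bigl[\bigl(t \text{ is true} \,\wedge\, e \wedge t \text{ undermines } S\text{’s justification for } p\bigr) \;\rightarrow\; \exists t'\,\bigl(t' \text{ is true} \,\wedge\, e \wedge t \wedge t' \text{ restores } S\text{’s justification for } p\bigr)\Bigr]
\]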
Three features of this proposed solution merit emphasis. First, it avoids a subjunctive conditional in its fourth condition, and so escapes some difficult problems facing the use of such a conditional in an analysis of knowledge. Second, it allows for non-deductive justifying evidence as a component of propositional knowledge. An adequacy condition on an analysis of knowledge is that it not restrict justifying evidence to relations of deductive support. Third, the proposed solution is sufficiently flexible to handle cases describable as follows:
(4) Smith has a justified true belief that ‘p’, but there is a true proposition ‘t’ that undermines Smith’s justification for ‘p’ when conjoined with it, and that is such that it is either physically or humanly impossible for Smith to be justified in believing ‘t’.
Examples represented by (4) suggest that we should countenance varying strengths in notions of propositional knowledge. These strengths are determined by accessibility qualifications on the set of relevant knowledge-precluding underminers. A very demanding concept of knowledge assumes that it need only be logically possible for a knower to believe a knowledge-precluding underminer. Less demanding concepts assume that it must be physically or humanly possible for a knower to believe knowledge-precluding underminers. But even the less demanding concepts of knowledge need to rely on a notion of truth-sustained evidence if they are to survive a threatening range of Gettier-style examples. The needed fourth condition for a notion of knowledge is thus not a function simply of the evidence a knower actually possesses.
The controversial aftermath of Gettier’s original counterexamples has left some philosophers doubtful of the real philosophical significance of the Gettier problem. Such doubt, however, seems misplaced. One fundamental branch of epistemology seeks understanding of the nature of propositional knowledge. And understanding exactly what propositional knowledge is essentially involves our having a Gettier-resistant analysis of such knowledge. If our analysis is not Gettier-resistant, we will lack an exact understanding of what propositional knowledge is. It is epistemologically important, therefore, to have a defensible solution to the Gettier problem, however demanding such a solution is.
In 1963 the American philosopher Edmund Gettier provided a range of counterexamples to this formula. In his cases a belief is true, and the agent is justified in believing it. But the justification does not relate to the truth of the belief in the right way, so that it is relatively accidental, or a matter of luck, that the belief is true. For example, I see what I reasonably and justifiably take to be an event of your receiving a bottle of whisky, and so I come to believe that you drink whisky. On this occasion you were in fact taking delivery of a medical specimen; but, as it happens, you do drink whisky. In such a case my belief is true and justified, but I do not thereby know that you drink whisky, since this truth is only accidental relative to my evidence. The counterexample sparked a prolonged debate over the kinds of condition that might be substituted to give a better account of knowledge, or whether all suggestions would meet similar problems.
Propositional knowledge (PK) is the type of knowing whose instances are labelled by means of a phrase expressing some proposition, e.g., in English a phrase of the form ‘that h’, where some complete declarative sentence is substituted for ‘h’.
Theories of ‘PK’ differ over whether the proposition that ‘h’ is involved in a more intimate fashion, such as serving as a way of picking out a propositional attitude required for knowing, e.g., believing that ‘h’, accepting that ‘h’ or being sure that ‘h’. For instance, the tripartite or standard analysis treats ‘PK’ as consisting in having a justified, true belief that ‘h’: the belief condition requires that anyone who knows that ‘h’ believes that ‘h’, and the truth condition requires that any known proposition be true. In contrast, some theories treat ‘PK’ as the possession of specific abilities, capacities, or powers, and view the proposition that ‘h’ as needing to be expressed only in order to label a specific instance of ‘PK’.
Although most theories of propositional knowledge (PK) purport to analyse it, philosophers disagree about the goal of a philosophical analysis. Theories of ‘PK’ may differ over whether they aim to cover all species of ‘PK’ and, if they do not have this goal, over whether they aim to reveal any unifying link between the species that they investigate, e.g., empirical knowledge, and other species of knowing.
Very many accounts of ‘PK’ have been inspired by the quest to add a fourth condition to the tripartite analysis so as to avoid Gettier-type counterexamples to it, and by the resulting need to deal with other counterexamples provoked by these new analyses. One such fourth condition is the truth-sustenance condition described earlier: for every true proposition that, when conjoined with one’s evidence, undermines one’s justification, some further truth must restore that justification, so that justified true belief is sustained by the collective totality of truths. Since an adequate analysis of propositional knowledge must not restrict justifying evidence to relations of deductive support, we should countenance varying strengths in notions of propositional knowledge. These strengths are determined by accessibility qualifications on the set of relevant knowledge-precluding underminers. A very demanding concept of knowledge assumes that it need only be logically possible for a knower to believe a knowledge-precluding underminer; less demanding concepts assume that it must be physically or humanly possible for a knower to believe knowledge-precluding underminers. But even the less demanding concepts of knowledge need to rely on a notion of truth-sustaining evidence if they are to survive a threatening range of Gettier-style examples, for the needed fourth condition for a notion of knowledge is not a function simply of the evidence a knower actually possesses.
Keith Lehrer (1965) originated a Gettier-type example that has been a fertile source of important variants. It is the case of Mr Notgot, who is in one’s office and has provided some evidence, ‘e’, on the basis of which one forms the justified belief that Mr Notgot is in the office and owns a Ford, thanks to which one arrives at the justified belief that ‘h1’: ‘Someone in the office owns a Ford’. In the example, ‘e’ consists of such things as Mr Notgot’s presently showing one a certificate of Ford ownership while claiming to own a Ford, and his having been reliable in the past. Yet Mr Notgot has just been shamming, and the only reason that it is true that ‘h1’ is that, unbeknown to oneself, a different person in the office owns a Ford.
Variants on this example continue to challenge efforts to analyse species of ‘PK’. For instance, Alan Goldman (1988) has proposed that one has empirical knowledge that ‘h’ when the state of affairs (call it ‘h*’) expressed by the proposition that ‘h’ figures prominently in an explanation of the occurrence of one’s believing that ‘h’, where explanation is taken to involve one of a variety of probability relations concerning ‘h*’ and the belief state. But this account runs foul of a variant on the Notgot case akin to one that Lehrer (1979) has described. In Lehrer’s variant, Mr Notgot has manifested a compulsion to trick people into justifiedly believing truths yet falling short of knowledge by means of concocting Gettierized evidence for those truths. If we make the trickster’s neurosis highly specific to the type of information contained in the proposition that ‘h’, we obtain a variant satisfying Goldman’s requirement that the occurrence of ‘h*’ significantly raises the probability of one’s believing that ‘h’. (Lehrer himself (1990, pp. 103-4) has criticized Goldman by questioning whether, when one has ordinary perceptual knowledge that an object is present, the presence of the object is what explains one’s believing it to be present.)
In grappling with Gettier-type examples, some analyses proscribe specific relations between falsehoods and the evidence or grounds that justify one’s believing. A simple restriction of this type requires that one’s reasoning to the belief that ‘h’ not crucially depend upon any false lemma (such as the false proposition that Mr Notgot is in the office and owns a Ford). However, Gettier-type examples have been constructed in which one does not reason through any false belief, e.g., a variant of the Notgot case where one arrives at the belief that ‘h’ by basing it upon a true existential generalization of one’s evidence: ‘There is someone in the office who has provided evidence e’. In response to similar cases, Sosa (1991) has proposed that for ‘PK’ the ‘basis’ for the justification of one’s belief that ‘h’ must not involve one’s being justified in believing or in ‘presupposing’ any falsehood, even if one’s reasoning to the belief does not employ that falsehood as a lemma. Alternatively, Roderick Chisholm (1989) requires that if there is something that makes the proposition that ‘h’ evident for one and yet makes something else that is false evident for one, then the proposition that ‘h’ must be implied by a conjunction of propositions, each of which is evident for one and is such that whatever makes it evident for one makes no falsehood evident for one. Other types of analyses are concerned with the role of falsehoods within the justification of the proposition that ‘h’ (versus the justification of one’s believing that ‘h’). Such a theory may require that one’s evidence bearing on this justification contain no falsehoods, or it may require that no falsehoods be involved at specific places in a special explanatory structure relating to the justification of the proposition that ‘h’ (Shope, 1983).
A frequently pursued line of research concerning a fourth condition of knowing seeks what is called a ‘defeasibility’ analysis of ‘PK’. Early versions characterized defeasibility by means of subjunctive conditionals of the form ‘If ‘A’ were the case then ‘B’ would be the case’, but more recently the label has been applied to conditions about evidential or justificational relations that are not themselves characterized in terms of conditionals. Early versions of defeasibility theories advanced conditionals where ‘A’ is a hypothetical situation concerning one’s acquisition of a specified sort of epistemic status for specified propositions, e.g., one’s acquiring justified belief in some further evidence or truths, and ‘B’ concerned, for instance, the continued justified status of the proposition that ‘h’ or of one’s believing that ‘h’.
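The early conditional versions share a common skeleton, which might be rendered as follows (the notation is mine, introduced only for exposition: ‘J_e(h)’ abbreviates ‘the proposition that h is justified for one on the body of evidence e’):

\[
K(S,h) \;\Rightarrow\; \neg\exists d\,\big[\, d \text{ is true} \;\wedge\; \neg J_{e \cup \{d\}}(h) \,\big]
\]

That is, one’s justification must be undefeated: there must be no true proposition ‘d’ such that adding justified belief in ‘d’ to one’s evidence ‘e’ would destroy one’s justification for believing that ‘h’.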
A unifying thread connecting the conditional and non-conditional approaches to defeasibility may lie in the following facts: (1) what is a reason for being in a propositional attitude is in part a consideration, instances of the thought of which have the power to affect relevant processes of propositional-attitude formation; (2) philosophers have often hoped to analyse power ascriptions by means of conditional statements; and (3) arguments portraying evidential or justificational relations are abstractions from those processes of propositional-attitude formation and maintenance that manifest rationality. So even when some circumstance ‘R’ is a reason for believing or accepting that ‘h’, another circumstance ‘K’ may prevent the occasion from being one for a rational manifestation of the relevant power of the thought of ‘R’, and it will not be a good argument to base a conclusion that ‘h’ on the premiss that ‘R’ and ‘K’ obtain. Whether ‘K’ does play this interfering, ‘defeating’ role will depend upon the total relevant situation.
Accordingly, one of the most sophisticated defeasibility accounts, proposed by John Pollock (1986), requires that in order to know that ‘h’ one must believe that ‘h’ on the basis of an argument whose force is not defeated in the above way, given the total set of circumstances described by all truths. More specifically, Pollock defines defeat as a situation where (1) one believes that ‘p’, and it is logically possible for one to become justified in believing that ‘h’ by believing that ‘p’, and (2) one actually has a further set of beliefs ‘S’, logically consistent with the proposition that ‘h’, such that it is not logically possible for one to become justified in believing that ‘h’ by believing it on the basis of holding the set of beliefs that is the union of ‘S’ with the belief that ‘p’ (Pollock, 1986, pp. 36, 38). Furthermore, Pollock requires for ‘PK’ that the rational presumption in favour of one’s believing that ‘h’ created by one’s believing that ‘p’ be undefeated by the set of all truths, including considerations that one does not actually believe. Pollock offers no definition of what this requirement means, but he may intend roughly the following, where ‘T’ is the set of all true propositions: (I) one believes that ‘p’, and it is logically possible for one to become justified in believing that ‘h’ by believing that ‘p’; and (II) there are logically possible situations in which one becomes justified in believing that ‘h’ on the basis of having the belief that ‘p’ together with the beliefs in ‘T’. Thus, in the Notgot example, since ‘T’ includes the proposition that Mr Notgot does not own a Ford, one lacks knowledge because condition (II) is not satisfied.
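On this reading, Pollock’s two requirements can be set out schematically (the symbols are my gloss, not Pollock’s own notation: ‘B(p)’ for one’s believing that ‘p’, ‘◇J_X(h)’ for the logical possibility of becoming justified in believing that ‘h’ on the basis of the belief set ‘X’, ‘Con’ for logical consistency, and ‘T’ for the set of all truths):

\[
\begin{aligned}
\text{Defeat: }\;& B(p) \wedge \Diamond J_{\{p\}}(h) \wedge \exists S\,\big[\,S \text{ is a set of one's beliefs} \wedge \mathrm{Con}(S \cup \{h\}) \wedge \neg\Diamond J_{S \cup \{p\}}(h)\,\big] \\
\text{Undefeated by all truths: }\;& B(p) \wedge \Diamond J_{\{p\}}(h) \wedge \Diamond J_{T \cup \{p\}}(h)
\end{aligned}
\]

In the Notgot case the second schema fails, since ‘T’ contains the truth that Mr Notgot does not own a Ford.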
But given such an interpretation, Pollock’s account illustrates the fact that defeasibility theories typically have difficulty dealing with introspective knowledge of one’s beliefs. Suppose that some proposition, say that ƒ, is false, but one does not realize this and holds the belief that ƒ. Condition (II) then rules out one’s knowledge that h2: ‘I believe that ƒ’. At least this is so if one’s reason for believing that h2 includes the presence of the very condition of which one is aware, i.e., one’s believing that ƒ: it is incoherent to suppose that one retains the latter reason while also believing the truth that not-ƒ. This objection can be avoided, but at the cost of adopting a controversial view about introspective knowledge that ‘h’, namely, the view that one’s belief that ‘h’ is in such cases mediated by some mental state intervening between the mental state of which there is introspective knowledge and the belief that ‘h’, so that it is this mediating state, rather than the introspected state, that is included in one’s reason for believing that ‘h’. In order to avoid adopting this controversial view, Paul Moser (1989) has proposed a disjunctive analysis of ‘PK’, which requires that one either satisfy a defeasibility condition rather like Pollock’s or else believe that ‘h’ by introspection. However, Moser leaves obscure exactly why beliefs arrived at by introspection count as knowledge.
Early versions of defeasibility theories had difficulty allowing for the existence of evidence that is ‘merely misleading’, as in the case where one does know that ‘h3’: ‘Tom Grabit stole a book from the library’, thanks to having seen him steal it, yet where, unbeknown to oneself, Tom’s mother has out of dementia testified that Tom was far away from the library at the time of the theft. One’s justifiably believing that she gave the testimony would destroy one’s justification for believing that ‘h3’ if added by itself to one’s present evidence.
At least some defeasibility theories also cannot deal with the knowledge one has, while dying, that ‘h4’: ‘In this life there is no time at which I believe that ‘d’’, where the proposition that ‘d’ expresses some matter of detail, e.g., concerning the maximum number of blades of grass ever simultaneously growing on the earth. When it just so happens that it is true that ‘d’, defeasibility analyses typically treat the addition to one’s dying thoughts of a belief that ‘d’ in such a way as improperly to rule out actual knowledge that ‘h4’.
A quite different approach to knowledge, and one able to deal with some Gettier-type cases, involves developing some type of causal theory of propositional knowledge. The basic thesis of a causal theory of justification (in the meaning of ‘causal theory’ intended here) is that a belief is justified just in case it was produced by a type of process that is ‘globally’ reliable, that is, whose propensity to produce true beliefs, which can be defined (to a good enough approximation) as the proportion of the beliefs it produces (or would produce were it used as much as opportunity allows) that are true, is sufficiently high. Variations of this view have been advanced for both knowledge and justified belief. The first formulation of a reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief is knowledge if it is true, certain and obtained by a reliable process. P. Unger (1968) suggested that ‘S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the case that ‘p’. D.M. Armstrong (1973) said that a non-inferential belief qualifies as knowledge if the belief has properties that are nomically sufficient for its truth, i.e., guarantee its truth via the laws of nature.
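The global reliability spoken of here admits of a simple quantitative gloss (a sketch of the definition just described, not a formula found in Ramsey, Unger, or Armstrong):

\[
\mathrm{rel}(\tau) \;=\; \frac{\text{true beliefs that process type } \tau \text{ produces (or would produce)}}{\text{all beliefs that } \tau \text{ produces (or would produce)}}
\]

where justification (or knowledge) requires that rel(τ) be sufficiently high, the threshold being left vague by most versions of the view.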
Such theories require that one or another specified relation hold that can be characterized by mention of some aspect of causation concerning one’s belief that ‘h’ (or one’s acceptance of the proposition that ‘h’) and its relation to the state of affairs ‘h*’, e.g., h* causes the belief; h* is causally sufficient for the belief; or h* and the belief have a common cause. Such simple versions of a causal theory are able to deal with the original Notgot case, since it involves no such causal relationship, but they cannot explain why there is ignorance in the variants. Fred Dretske and Berent Enç (1984) have pointed out that sometimes one knows of ‘χ’ that it is ø by recognizing a feature merely correlated with the presence of ø-ness. Without endorsing a causal theory themselves, they suggest that it would need to be elaborated so as to allow that one’s belief that ‘χ’ has ø has been caused by a factor whose correlation with the presence of ø-ness has caused in us (e.g., by evolutionary adaptation in one’s ancestors) the disposition that one manifests in acquiring the belief in response to the correlated factor. Not only does this strain the unity of a causal theory by complicating it, but no causal theory without other shortcomings has been able to cover instances of deductively reasoned knowledge.
Causal theories of propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirement that one’s believing (accepting) that ‘h’ be justified. The same variation occurs regarding reliability theories, which present the knower as reliable concerning the issue of whether or not ‘h’, in the sense that some of one’s cognitive or epistemic states, θ, is such that, given further characteristics of oneself (possibly including relations to factors external to one, of which one may not be aware), it is nomologically necessary (or at least probable) that ‘h’. In some versions, the reliability is required to be ‘global’ in so far as it must concern a nomological (or probabilistic) relationship of states of type θ to the acquisition of true beliefs about a wider range of issues than merely whether or not ‘h’. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does thereby know that someone in the office owns a Ford, is the relevant type of state something broad, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices partly upon the basis of their relevant testimony?)
One important variety of reliability theory is the conclusive-reasons account, which includes the requirement that one’s reasons for believing that ‘h’ be such that, in one’s circumstances, if h* were not to occur then, e.g., one would not have the reasons one does for believing that ‘h’, or, e.g., one would not believe that ‘h’. Roughly, the latter is demanded by theories that treat a knower as ‘tracking the truth’, theories that include the further demand that, roughly, if it were the case that ‘h’, then one would believe that ‘h’. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a ‘method’ has been used to arrive at the belief that ‘h’, then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
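Nozick’s tracking analysis is standardly displayed as four clauses, with ‘□→’ for the subjunctive conditional (this is the familiar textbook rendering rather than a quotation):

\[
\begin{aligned}
&(1)\ h \text{ is true}; \qquad (2)\ S \text{ believes that } h;\\
&(3)\ \neg h \mathrel{\square\!\!\rightarrow} \neg(S \text{ believes that } h); \qquad (4)\ h \mathrel{\square\!\!\rightarrow} (S \text{ believes that } h).
\end{aligned}
\]

Where a method ‘M’ is employed, (3) and (4) are relativized: if ‘h’ were false and S were to use ‘M’ to settle the question, S would not believe, via ‘M’, that ‘h’; and correspondingly for (4).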
But unless more conditions are added to Nozick’s analysis, it will be too weak to explain why one lacks knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot’s compulsion is not easily changed, (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one, and (c) one arrives at one’s belief that ‘h’ not by reasoning through a false belief but by basing the belief that ‘h’ upon a true existential generalization of one’s evidence.
Nozick’s analysis is also too strong, since it does not permit anyone ever to know that ‘h5’: ‘Some of my beliefs about beliefs might be otherwise, e.g., I might have rejected them’. If I know that ‘h5’, then satisfaction of the antecedent of one of Nozick’s conditionals would involve its being false that ‘h5’, thereby thwarting satisfaction of the consequent’s requirement that I not then believe that ‘h5’. For the belief that ‘h5’ is itself one of my beliefs about beliefs (Shope, 1984).
Some philosophers think that the category of knowing for which true, justified believing (accepting) is a requirement constitutes only a species of propositional knowledge, construed as an even broader category. They have proposed various examples of ‘PK’ that do not satisfy the belief and/or justification conditions of the tripartite analysis. Such cases are often recognized by analyses of propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats ‘PK’ as merely the ability to provide a correct answer to a possible question. However, White may be equating ‘producing’ knowledge in the sense of producing ‘the correct answer to a possible question’ with ‘displaying’ knowledge in the sense of manifesting knowledge. The latter can be done even by very young children and some nonhuman animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that ‘h’ without believing or accepting that ‘h’ can be modified so as to illustrate this point. The example concerns an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horseraces. If the example is modified so that the hypothetical ‘seer’ never picks winners but only muses over whether those horses might win, or only reports that those horses will win, this behaviour should be as much of a candidate for the person’s manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.
These considerations expose limitations in Edward Craig’s analysis (1990) of the concept of knowing in terms of a person’s being a satisfactory informant in relation to an inquirer who wants to find out whether or not ‘h’. Craig realizes that counterexamples to his analysis appear to be constituted by knowers who are too recalcitrant to inform the inquirer, too incapacitated to inform, or too discredited to be worth considering (as with the boy who cried ‘Wolf’). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. One such alternative offers a recursive definition concerning one’s having the power to proceed in a way that represents the relevant state of affairs as causally involved in one’s proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.
Knowledge and belief: according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurate if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entails psychological certainty (Prichard, 1950 and Ayer, 1956), conviction (Lehrer, 1974), or acceptance (Lehrer, 1989). Nonetheless, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incompatibility thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, though the two may also coexist (the separability thesis).
The incompatibility thesis is sometimes traced to Plato (c. 429-347 BC) in view of his claim that knowledge is infallible while belief or opinion is fallible (“Republic” 476-9). But this claim would not support the thesis: belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps knowledge involves some factor that compensates for the fallibility of belief.
A. Duncan-Jones (1939; also Vendler, 1978) cites linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I do not believe she is guilty. I know she is’ and the like, which suggests that belief rules out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying ‘I do not just believe she is guilty, I know she is’, where ‘just’ makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: ‘You did not hurt him, you killed him’.
H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never does, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives us no good reason to grant that belief always involves uncertainty. Conscious beliefs clearly involve some level of confidence; to suggest that we cease to believe things about which we are completely confident is bizarre.
A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although it may also be accompanied by confidence. Woozley remarks that the test of whether I know something is ‘what I can do, where what I can do may include answering questions’. On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, ‘I am unsure whether my answer is true; still, I know it is correct’. But this tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something) and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.
Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history years prior, and yet he is able to give several correct responses to questions such as ‘When did the Battle of Hastings occur?’ Since he forgot that he took history, he considers his correct responses to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066: he would deny being sure, or having any right to be sure, that 1066 was the correct date. Radford would nonetheless insist that Jean knows when the Battle occurred, since he clearly remembers the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley, he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought at least to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.
Those who agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lacks beliefs about English history is plausible on this Cartesian picture, since Jean does not find himself with any beliefs about English history when he seeks them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?). Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.
D.M. Armstrong (1973) takes a different tack against Radford. Jean does know that the Battle of Hastings took place in 1066; Armstrong grants Radford that point. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught an incorrect date for the Battle, and subsequently ‘guessed’ that incorrect date, we would surely describe the situation as one in which Jean’s false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford’s original case as one in which Jean’s true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.
Armstrong’s response to Radford was to reject Radford’s claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha’s belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. But Radford’s examinee is in a similar position. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended with belief. Suppose that Jean’s memory had been sufficiently powerful to produce the relevant belief. As Radford says, Jean has every reason to suppose that his response is mere guesswork, and hence every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.
Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception: (1) it gives us knowledge of the world around us; (2) we are conscious of that world by being aware of ‘sensible qualities’: colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment; (3) such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound; and (4) there ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like ‘sense-data’ or ‘percepts’ exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include ‘scepticism’ and ‘idealism’.
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the apparently individual (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
Furthermore, perceptual knowledge is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something - that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up - by some sensory means. Seeing that the light has turned green is learning something - that the light has turned green - by use of the eyes. Feeling that the melon is overripe is coming to know a fact - that the melon is overripe - by one’s sense of touch. In each case the resulting knowledge is somehow based on, derived from, or grounded in the sort of experience that characterizes the sense modality in question.
Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas; see, by the newspapers, that our team has lost again; see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know, something about the gauge (what it says), one cannot be described as coming to know, by this means at least, that one needs gas. If one cannot hear that the bell is ringing, one cannot, in this way at least, hear that one’s visitors have arrived. In such cases one sees (hears, smells, etc.) that ‘a’ is ‘F’, coming to know thereby that ‘a’ is ‘F’, by seeing (hearing, etc.) that some other condition, ‘b’s’ being ‘G’, obtains. When this occurs, the knowledge (that ‘a’ is ‘F’) is derived from, or dependent on, the more basic perceptual knowledge that ‘b’ is ‘G’.
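The structure of such derived knowledge can be displayed schematically (my gloss on the dependence just described, not a formula from the literature):

\[
\underbrace{K(b \text{ is } G)}_{\text{primary, direct}} \;+\; \underbrace{b\text{'s being } G \text{ indicating } a\text{'s being } F}_{\text{connecting condition}} \;\Longrightarrow\; \underbrace{K(a \text{ is } F)}_{\text{secondary, derived}}
\]

In the gauge example, ‘b’s’ being ‘G’ is the gauge’s reading as it does, and ‘a’s’ being ‘F’ is one’s needing gas.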
A further topic of epistemological importance is explanation. Since at least the time of Aristotle, philosophers have emphasized the importance of explanatory knowledge: in the simplest terms, we want to know not only what is the case but also why it is. This consideration suggests that we define an explanation as an answer to a why-question. Such a definition would, however, be too broad, because some why-questions are requests for consolation (Why did my son have to die?) or for moral justification (Why should women not be paid the same as men for the same work?). It would also be too narrow, because some explanations are responses to how-questions (How does radar work?) or how-possibly questions (How is it possible for cats always to land on their feet?)
In its most general sense, ‘to explain’ means to make clear, to make plain, or to provide understanding. Definitions of this sort are philosophically unhelpful, for the terms used in the definiens are no less problematic than the term to be defined. Moreover, since a wide variety of things require explanation, and since many different types of explanation exist, it is useful to introduce a bit of technical terminology. The term ‘explanandum’ is used to refer to that which is to be explained; the term ‘explanans’ refers to that which does the explaining. The explanans and the explanandum taken together constitute the explanation.
One common type of explanation occurs when deliberate human actions are explained in terms of conscious purposes. ‘Why did you go to the pharmacy yesterday?’ ‘Because I had a headache and needed to get some aspirin.’ It is tacitly assumed that aspirin is an appropriate medication for headaches and that going to the pharmacy would be an efficient way of getting some. Such explanations are, of course, teleological, referring, as they do, to goals. The explanans is not the realization of a future goal: if the pharmacy happened to be closed for stocktaking, the aspirin would not have been obtained there, but that would not invalidate the explanation. Some philosophers would say that the antecedent desire to achieve the end is what does the explaining; others might say that the explaining is done by the nature of the goal and the fact that the action promoted the chances of realizing it (Taylor, 1964). It should not automatically be assumed that such explanations are causal. Philosophers differ considerably on whether these explanations are to be framed in terms of causes or reasons, though the distinction cannot by itself be used to show that the relation between reasons and the actions they justify is in no way causal, and there are many differing analyses of such concepts as intention and agency. Expanding the domain beyond consciousness, Freud maintained that much human behaviour can be explained in terms of unconscious wishes as well as conscious ones. Such Freudian explanations should probably be construed as basically causal.
Problems arise when teleological explanations are offered in other contexts. The behaviour of nonhuman animals is often explained in terms of purpose, e.g., the mouse ran to escape from the cat. In such cases the existence of conscious purpose seems dubious. The situation is still more problematic when a super-empirical purpose is invoked, e.g., the explanation of the existence of living species in terms of God’s purposes, or the vitalistic explanation of biological phenomena in terms of an entelechy or vital principle. In recent years an ‘anthropic principle’ has received attention in cosmology (Barrow and Tipler, 1986). All such explanations have been condemned by many philosophers as anthropomorphic.
Nevertheless, philosophers and scientists often maintain that functional explanations play an important and legitimate role in various sciences such as evolutionary biology, anthropology and sociology. For example, in the case of the peppered moth in Liverpool, the change in colour from the light phase to the dark phase and back again to the light phase provided adaptation to a changing environment and fulfilled the function of reducing predation on the species. In the study of primitive societies, anthropologists have ascertained that various rituals (e.g., the rain dance), which may be inefficacious in bringing about their manifest goals (producing rain), actually promote social cohesion at times of stress (often a drought). Philosophers who admit teleological and/or functional explanations in common sense and science often take pains to argue that such explanations can be analysed entirely in terms of efficient causes, thereby escaping the charge of anthropomorphism (Wright, 1976); again, however, not all philosophers agree.
Mainly to avoid the incursion of unwanted theology, metaphysics, or anthropomorphism into science, many philosophers and scientists, especially during the first half of the twentieth century, held that science provides only descriptions and predictions of natural phenomena, not explanations. Subsequently, a series of influential philosophers of science, including Karl Popper (1935), Carl Hempel and Paul Oppenheim (1948), and Hempel (1965), maintained that empirical science can explain natural phenomena without appealing to metaphysics or theology. It appears that this view is now accepted by the vast majority of philosophers of science, though there is sharp disagreement on the nature of scientific explanation.
The approach developed by Hempel, Popper and others became virtually a ‘received view’ in the 1960s and 1970s. According to this view, to give a scientific explanation of any natural phenomenon is to show how that phenomenon can be subsumed under a law of nature. A particular rupture in a water pipe can be explained by citing the universal law that water expands when it freezes, together with the fact that the temperature of the water in the pipe dropped below the freezing point. General laws, as well as particular facts, can be explained by subsumption: the law of conservation of linear momentum can be explained by derivation from Newton’s second and third laws of motion. Each of these explanations is a deductive argument: the explanans contains one or more statements of universal laws and, in many cases, statements describing initial conditions. This pattern of explanation is known as the deductive-nomological (D-N) model. Any such argument shows that the explanandum had to occur given the explanans.
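The D-N pattern is standardly displayed as a deductive schema (Hempel’s familiar diagram):

\[
\begin{array}{ll}
L_1, L_2, \ldots, L_m & \text{(universal laws)}\\
C_1, C_2, \ldots, C_n & \text{(statements of initial conditions)}\\
\hline
E & \text{(explanandum)}
\end{array}
\]

In the water-pipe case, the L’s include the law that water expands when it freezes, the C’s include the fact that the temperature of the water dropped below the freezing point, and ‘E’ describes the rupture.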
Many, though not all, adherents of the received view allow for explanation by subsumption under statistical laws. Hempel (1965) offers as an example the case of a man who recovered quickly from a streptococcus infection as a result of treatment with penicillin. Although not all strep infections clear up quickly under this treatment, the probability of recovery in such cases is high, and this, according to Hempel, is sufficient for legitimate explanation. This example conforms to the inductive-statistical (I-S) model. Such explanations are still viewed as arguments, but they are inductive rather than deductive: the explanans confers high inductive probability on the explanandum. An explanation of a particular fact satisfying either the D-N or the I-S model is an argument to the effect that the fact in question was to be expected by virtue of the explanans.
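Hempel’s streptococcus example fits the corresponding inductive schema, where the double line marks inductive rather than deductive support and ‘r’ is the (high) probability that the explanans confers on the explanandum:

\[
\begin{array}{ll}
P(R \mid S \wedge T) = r & \text{(statistical law)}\\
S(j) \wedge T(j) & \text{(particular facts)}\\
\hline\hline
R(j) & [r]
\end{array}
\]

Here ‘S(j)’ says that j had a streptococcus infection, ‘T(j)’ that j was treated with penicillin, and ‘R(j)’ that j recovered quickly.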
The received view has been subjected to strenuous criticism by adherents of the causal/mechanical approach to scientific explanation (Salmon, 1990). Many objections to the received view were engendered by the absence of causal constraints (due largely to worries about Hume’s critique) on the D-N and I-S models. Beginning in the late 1950s, Michael Scriven advanced serious counterexamples to Hempel’s models; he was followed in the 1960s by Wesley Salmon and in the 1970s by Peter Railton. According to the causal/mechanical view, one explains phenomena by identifying their causes (a death is explained as resulting from a massive cerebral haemorrhage) or by exposing underlying mechanisms (the behaviour of a gas is explained in terms of the motions of its constituent molecules).
A unification approach to explanation contains the basic idea that we understand our world more adequately to the extent that we can reduce the number of independent assumptions we must introduce to account for what goes on in it. Accordingly, we understand phenomena to the degree that we can fit them into an overall world picture or Weltanschauung. In order to serve in scientific explanation, the world picture must be scientifically well founded.
During the past half-century much philosophical attention has been focused on explanation in science and in history. Considerable controversy has surrounded the question of whether historical explanation must be scientific, or whether history requires explanations of different types. Many diverse views have been articulated; the foregoing brief survey does not exhaust the variety (Salmon, 1990).
In everyday life we encounter many types of explanation, which appear not to raise philosophical difficulties, in addition to those already mentioned. Prior to take-off a flight attendant explains how to use the safety equipment on the aeroplane. In a museum the guide explains the significance of a famous painting. A mathematics teacher explains a geometrical proof to a bewildered student. A newspaper story explains how a prisoner escaped. Additional examples come easily to mind; the main point is to remember the great variety of contexts in which explanations are sought and given.
Another item of importance to epistemology is the widely held notion that non-demonstrative inference can be characterized as inference to the best explanation. Given the variety of views on the nature of explanation, this popular slogan can hardly provide a useful philosophical analysis.
Early versions of defeasibility theories had difficulty allowing for the existence of evidence that was "merely misleading," as in the case where one does know that h3: "Tom Grabit stole a book from the library," thanks to having seen him steal it, yet where, unbeknown to oneself, Tom’s mother out of dementia gas testified that Tom was far away from the library at the time of the theft. One’s justifiably believing that she gave the testimony would destroy one’s justification for believing that h3, if added by itself to one’s present evidence.
A different approach to knowledge, and one able to deal with some Gettier-type cases, involves developing some type of causal theory of propositional knowledge. The interesting thesis that counts as a causal theory of justification (in the meaning of "causal theory": Intended here) is that of a belief is justified just in case it was produced by a type of process that is "globally" reliable, that is, its propensity to produce true beliefs-that can be defined (to a god enough approximations) as the proportion of the bailiffs it produces (or would produce where it used as much as opportunity allows) that are true-is sufficiently meaningful-variations of this view have been advanced for both knowledge and justified belief. The first formulation of reliability account of knowing appeared in a note by F.P. Ramsey (1931), who said that a belief was knowledge if it is true, certain can obtain by a reliable process. P. Unger (1968) suggested that 'S’ knows that ‘p’ just in case it is not at all accidental that ‘S’ is right about its being the casse that ‘p’. D.M. Armstrong (1973) said that a non-inferential belief qualified as knowledge if the belief has properties that are nominally sufficient for its truth, i.e., guarantee its truth through and by the laws of nature.
Such theories require that one or another specified relation hold that can be characterized by mention of some aspect of cassation concerning one’s belief that ‘h’ (or one’s acceptance of the proposition that ‘h’) and its relation to state of affairs ‘h*’, e.g., 'h' causes the belief: 'h' is causally sufficient for the belief 'h' and the beliefs have a common cause. Such simpler versions of a causal theory are able to deal with the original Notgot case. Since it involves no such causal relationship, but cannot explain why there is ignorance in the variants where Notgot and Berent Enç (1984) have pointed out that sometimes one knows of ‘χ’ that is ø recognizes a feature merely corelated with the presence of øne without endorsing a causal theory themselves. There suggest that it would need to be elaborated so as to allow that one’s belief that ‘χ’ has ø has been caused by a factor whose correlation with the presence of oneness has caused in itself, e.g., by evolutionary adaption in one’s ancestors, the disposition that one manifests in acquiring the belief in response to the correlated factor. Not only does this strain the unity of as causal theory by complicating it, but no causal theory without other shortcomings has been able to cover instances of deductively reasoned knowledge.
Causal theories of Propositional knowledge differ over whether they deviate from the tripartite analysis by dropping the requirements that one’s believing (accepting) that ‘h’ be justified. The same variation occurs regarding reliability theories, which present the Knower as reliable concerning the issue of whether or not ‘h’, in the sense that some of one’s cognitive or epistemic states, θ, is such that, given further characteristics of oneself-possibly including relations to factors external to one and which one may not be aware-it is nomologically necessary (or at least probable) that ‘h’. In some versions, the reliability is required to be ‘global’ in as far as it must concern a nomologically (probabilistic) relationship) relationship of states of type θ to the acquisition of true beliefs about a wider range of issues than merely whether or not ‘h’. There is also controversy about how to delineate the limits of what constitutes a type of relevant personal state or characteristic. (For example, in a case where Mr Notgot has not been shamming and one does know thereby that someone in the office owns a Ford, such as a way of forming beliefs about the properties of persons spatially close to one, or instead something narrower, such as a way of forming beliefs about Ford owners in offices partly upon the basis of their relevant testimony?)
One important variety of reliability theory is a conclusive reason account, which includes a requirement that one’s reasons for believing that ‘h’ be such that in one’s circumstances, if h* were not to occur then, e.g., one would not have the reasons one does for believing that ‘h’, or, e.g., one would not believe that ‘h’. Roughly, the latter are demanded by theories that treat a Knower as ‘tracking the truth’, theories that include the further demand that is roughly, if it were the case, that ‘h’, then one would believe that ‘h’. A version of the tracking theory has been defended by Robert Nozick (1981), who adds that if what he calls a ‘method’ has been used to arrive at the belief that ‘h’, then the antecedent clauses of the two conditionals that characterize tracking will need to include the hypothesis that one would employ the very same method.
But unless more conditions are added to Nozick’s analysis, it will be too weak to explain why one lack’s knowledge in a version of the last variant of the tricky Mr Notgot case described above, where we add the following details: (a) Mr Notgot’s compulsion is not easily changed, (b) while in the office, Mr Notgot has no other easy trick of the relevant type to play on one, and finally for one’s belief that ‘h’, not by reasoning through a false belief ut by basing belief that ‘h’, upon a true existential generalization of one’s evidence.
Nozick’s analysis is addition too strong to permit anyone ever to know that ‘h’: ‘Some of my beliefs about beliefs might be otherwise, e.g., I might have disapproved and rejected them’. If I know that ‘h5' then satisfaction of the antecedent of one of Nozick’s conditionals would involve its being false that ‘h5', thereby thwarting satisfaction of the consequent’s requirement that I not then believe that ‘h5'. For the belief that ‘h5' is itself one of my beliefs about beliefs (Shope, 1984).
Some philosophers think that the category of knowing for which is true. Justified believing (accepting) is a requirement constituting only a species of Propositional knowledge, construed as an even broader category. They have proposed various examples of ‘PK’ that do not satisfy the belief and/ort justification conditions of the tripartite analysis. Such cases are often recognized by analyses of Propositional knowledge in terms of powers, capacities, or abilities. For instance, Alan R. White (1982) treats ‘PK’ as merely the ability to provide a correct answer to some possible questions, however, White may be equating ‘producing’ knowledge in the sense of producing ‘the correct answer to a possible question’ with ‘displaying’ knowledge in the sense of manifesting knowledge. (White, 1982). The latter can be done even by very young children and some nonhuman animals independently of their being asked questions, understanding questions, or recognizing answers to questions. Indeed, an example that has been proposed as an instance of knowing that ‘h’ without believing or accepting that ‘h’ can be modified so as to illustrate this point. Two examples concern an imaginary person who has no special training or information about horses or racing, but who in an experiment persistently and correctly picks the winners of upcoming horseraces. If the example is modified so that the hypothetical ‘seer’ never picks winners but only muses over whether those horse’s might win, or only reports those horses winning, this behaviour should be as much of a candidate for the person’s manifesting knowledge that the horse in question will win as would be the behaviour of picking it as a winner.
These considerations now placed upon our table, least that we take to consider of their vulnerability, that is in regard to their limitation: Edward Craig’s analysis (1990) of the concept of knowing of a person’s being a satisfactory informant in relation to an inquirer who wants to find out whether or not ‘h’. Craig realizes that counterexamples to his analysis appear to be constituted by Knower who are too recalcitrant to inform the inquirer, or too incapacitate to inform, or too discredited to be worth considering (as with the boy who cried ‘Wolf’). Craig admits that this might make preferable some alternative view of knowledge as a different state that helps to explain the presence of the state of being a suitable informant when the latter does obtain. Such the alternate, which offers a recursive definition that concerns one’s having the power to proceed in a way representing the state of affairs, causally involved in one’s proceeding in this way. When combined with a suitable analysis of representing, this theory of propositional knowledge can be unified with a structurally similar analysis of knowing how to do something.
Knowledge and belief, according to most epistemologists, knowledge entails belief, so that I cannot know that such and such is the case unless I believe that such and such is the case. Others think this entailment thesis can be rendered more accurately if we substitute for belief some closely related attitude. For instance, several philosophers would prefer to say that knowledge entail psychological certainties (Prichard, 1950 and Ayer, 1956) or conviction (Lehrer, 1974) or acceptance (Lehrer, 1989). None the less, there are arguments against all versions of the thesis that knowledge requires having a belief-like attitude toward the known. These arguments are given by philosophers who think that knowledge and belief (or a facsimile) are mutually incompatible (the incomparability thesis), or by ones who say that knowledge does not entail belief, or vice versa, so that each may exist without the other, but the two may also coexist (the separability thesis).
The incompatibility thesis is sometimes traced to Plato (429-347 Bc) in view of his claim that knowledge is infallible while belief or opinion is fallible (“Republic” 476-9). But this claim would not support the thesis. Belief might be a component of an infallible form of knowledge in spite of the fallibility of belief. Perhaps, knowledge involves some factor that compensates for the fallibility of belief.
A. Duncan-Jones (1939: Also Vendler, 1978) cite linguistic evidence to back up the incompatibility thesis. He notes that people often say ‘I do not believe she is guilty. I know she is’ and the like, which suggest that belief rule out knowledge. However, as Lehrer (1974) indicates, the above exclamation is only a more emphatic way of saying "I do not just believe she is guilty, I know she is" where "just" makes it especially clear that the speaker is signalling that she has something more salient than mere belief, not that she has something inconsistent with belief, namely knowledge. Compare: "You do not hurt him, you killed him.'
H.A. Prichard (1966) offers a defence of the incompatibility thesis that hinges on the equation of knowledge with certainty (both infallibility and psychological certitude) and the assumption that when we believe in the truth of a claim we are not certain about its truth. Given that belief always involves uncertainty while knowledge never dies, believing something rules out the possibility of knowing it. Unfortunately, however, Prichard gives ‘us’ no goods reason to grant that states of belief are never ones involving confidence. Conscious beliefs clearly involve some level of confidence, to suggest that we cease to believe things about which we are completely confident is bizarre.
A.D. Woozley (1953) defends a version of the separability thesis. Woozley’s version, which deals with psychological certainty rather than belief per se, is that knowledge can exist in the absence of confidence about the item known, although might also be accompanied by confidence as well. Woozley remarks that the test of whether I know something is "what I can do, where what I can do may include answering questions." On the basis of this remark he suggests that even when people are unsure of the truth of a claim, they might know that the claim is true. We unhesitatingly attribute knowledge to people who give correct responses on examinations even if those people show no confidence in their answers. Woozley acknowledges, however, that it would be odd for those who lack confidence to claim knowledge. It would be peculiar to say, I am unsure whether my answer is true: Still, I know it is correct But this tension Woozley explains using a distinction between conditions under which we are justified in making a claim (such as a claim to know something), and conditions under which the claim we make is true. While ‘I know such and such’ might be true even if I am unsure whether such and such holds, nonetheless it would be inappropriate for me to claim that I know that such and such unless I were sure of the truth of my claim.
Colin Radford (1966) extends Woozley’s defence of the separability thesis. In Radford’s view, not only is knowledge compatible with the lack of certainty, it is also compatible with a complete lack of belief. He argues by example. In one example, Jean has forgotten that he learned some English history year’s priori and yet he is able to give several correct responses to questions such as "When did the Battle of Hastings occur?" Since he forgot that he took history, he considers the correct response to be no more than guesses. Thus, when he says that the Battle of Hastings took place in 1066 he would deny having the belief that the Battle of Hastings took place in 1066. A disposition he would deny being responsible (or having the right to be convincing) that 1066 was the correct date. Radford would none the less insist that Jean know when the Battle occurred, since clearly be remembering the correct date. Radford admits that it would be inappropriate for Jean to say that he knew when the Battle of Hastings occurred, but, like Woozley he attributes the impropriety to a fact about when it is and is not appropriate to claim knowledge. When we claim knowledge, we ought, at least to believe that we have the knowledge we claim, or else our behaviour is ‘intentionally misleading’.
Those that agree with Radford’s defence of the separability thesis will probably think of belief as an inner state that can be detected through introspection. That Jean lack’s beliefs about English history is plausible on this Cartesian picture since Jean does not find himself with any beliefs about English history when ne seek them out. One might criticize Radford, however, by rejecting that Cartesian view of belief. One could argue that some beliefs are thoroughly unconscious, for example. Or one could adopt a behaviourist conception of belief, such as Alexander Bain’s (1859), according to which having beliefs is a matter of the way people are disposed to behave (and has not Radford already adopted a behaviourist conception of knowledge?) Since Jean gives the correct response when queried, a form of verbal behaviour, a behaviourist would be tempted to credit him with the belief that the Battle of Hastings occurred in 1066.
D.M. Armstrong (1973) takes a different tack against Radford. Armstrong grants Radford the point that Jean knows that the Battle of Hastings took place in 1066. In fact, Armstrong suggests that Jean believes that 1066 is not the date the Battle of Hastings occurred, for Armstrong equates the belief that such and such is just possible, but no more than just possible, with the belief that such and such is not the case. However, Armstrong insists, Jean also believes that the Battle did occur in 1066. After all, had Jean been mistaught that the Battle occurred in 1066, and had he subsequently "guessed" that it took place in 1066, we would surely describe the situation as one in which Jean's false belief about the Battle became unconscious over time but persisted as a memory trace that was causally responsible for his guess. Out of consistency, we must describe Radford's original case as one in which Jean's true belief became unconscious but persisted long enough to cause his guess. Thus, while Jean consciously believes that the Battle did not occur in 1066, unconsciously he does believe it occurred in 1066. So, after all, Radford does not have a counterexample to the claim that knowledge entails belief.
Armstrong's response to Radford was to reject Radford's claim that the examinee lacked the relevant belief about English history. Another response is to argue that the examinee lacks the knowledge Radford attributes to him (cf. Sorenson, 1982). If Armstrong is correct in suggesting that Jean believes both that 1066 is and that it is not the date of the Battle of Hastings, one might deny Jean knowledge on the grounds that people who believe the denial of what they believe cannot be said to know the truth of their belief. Another strategy might be to compare the examinee's case with examples of ignorance given in recent attacks on externalist accounts of knowledge (needless to say, externalists themselves would tend not to favour this strategy). Consider the following case developed by BonJour (1985): for no apparent reason, Samantha believes that she is clairvoyant. Again, for no apparent reason, she one day comes to believe that the President is in New York City, even though she has every reason to believe that the President is in Washington, D.C. In fact, Samantha is a completely reliable clairvoyant, and she has arrived at her belief about the whereabouts of the President through the power of her clairvoyance. Yet surely Samantha's belief is completely irrational. She is not justified in thinking what she does. If so, then she does not know where the President is. Radford's examinee is in a comparable position. Even if Jean lacks the belief that Radford denies him, Radford does not have an example of knowledge that is unattended by belief, for it is doubtful that Jean has knowledge at all. Suppose that Jean's memory had been sufficiently powerful to produce the relevant belief. As Radford describes the case, Jean has every reason to suppose that his response is mere guesswork, and hence every reason to consider his belief false. His belief would be an irrational one, and hence one about whose truth Jean would be ignorant.
Perception is a fundamental philosophical topic, both for its central place in any theory of knowledge and for its central place in any theory of consciousness. Philosophy in this area is constrained by a number of properties that we believe to hold of perception. (1) It gives us knowledge of the world around us. (2) We are conscious of that world by being aware of 'sensible qualities': colours, sounds, tastes, smells, felt warmth, and the shapes and positions of objects in the environment. (3) Such consciousness is effected through highly complex information channels, such as the output of the three different types of colour-sensitive cells in the eye, or the channels in the ear for interpreting pulses of air pressure as frequencies of sound. (4) There ensues even more complex neurophysiological coding of that information, and eventually higher-order brain functions bring it about that we interpret the information so received. (Much of this complexity has been revealed by the difficulties of writing programs enabling computers to recognize quite simple aspects of the visual scene.) The problem is to avoid thinking of there being a central, ghostly, conscious self, fed information in the same way that a screen is fed information by a remote television camera. Once such a model is in place, experience will seem like a veil getting between us and the world, and the direct objects of perception will seem to be private items in an inner theatre or sensorium. The difficulty of avoiding this model is especially acute when we consider the secondary qualities of colour, sound, tactile feelings and taste, which can easily seem to have a purely private existence inside the perceiver, like sensations of pain. Calling such supposed items names like 'sense-data' or 'percepts' exacerbates the tendency, but once the model is in place, the first property, that perception gives us knowledge of the world around us, is quickly threatened, for there will now seem little connection between these items in immediate experience and any independent reality. Reactions to this problem include scepticism and idealism.
A more hopeful approach is to claim that the complexities of (3) and (4) explain how we can have direct acquaintance with the world, rather than suggesting that the acquaintance we do have is at best indirect. It is pointed out that perceptions are not like sensations, precisely because they have a content, or outer-directed nature. To have a perception is to be aware of the world as being such-and-such a way, rather than to enjoy a mere modification of sensation. But such direct realism has to be sustained in the face of the evident personal (neurophysiological and other) factors determining how we perceive. One approach is to ask why it is useful to be conscious of what we perceive, when other aspects of our functioning work with information determining responses without any conscious awareness or intervention. A solution to this problem would offer the hope of making consciousness part of the natural world, rather than a strange optional extra.
Perceptual knowledge, furthermore, is knowledge acquired by or through the senses, and it includes most of what we know. We cross intersections when we see the light turn green, head for the kitchen when we smell the roast burning, squeeze the fruit to determine its ripeness, and climb out of bed when we hear the alarm ring. In each case we come to know something - that the light has turned green, that the roast is burning, that the melon is overripe, that it is time to get up - by some sensory means. Seeing that the light has turned green is learning something - that the light has turned green - by use of the eyes. Feeling that the melon is overripe is coming to know a fact - that the melon is overripe - by one's sense of touch. In each case the resulting knowledge is somehow based on, derived from or grounded in the sort of experience that characterizes the sense modality in question.
Much of our perceptual knowledge is indirect, dependent or derived. By this I mean that the facts we describe ourselves as learning, as coming to know, by perceptual means are pieces of knowledge that depend on our coming to know something else, some other fact, in a more direct way. We see, by the gauge, that we need gas, see, by the newspapers, that our team has lost again, see, by her expression, that she is nervous. This derived or dependent sort of knowledge is particularly prevalent in the case of vision, but it occurs, to a lesser degree, in every sense modality. We install bells and other noise-makers so that we can, for example, hear (by the bell) that someone is at the door and (by the alarm) that it is time to get up. When we obtain knowledge in this way, it is clear that unless one sees, and hence comes to know, something about the gauge (that it reads a certain way), one cannot be described as coming to know, by this means at least, that one needs gas. If one cannot hear that the bell is ringing, one cannot, at least in this way, hear that one's visitors have arrived. In such cases one sees (hears, smells, etc.) that 'a' is 'F', coming to know thereby that 'a' is 'F', by seeing (hearing, etc.) that some other condition, 'b's' being 'G', obtains. When this occurs, the knowledge (that 'a' is 'F') is derived from, or dependent on, the more basic perceptual knowledge that 'b' is 'G'.
And finally, the Representational Theory of Mind (RTM) (which goes back at least to Aristotle) takes as its starting point commonsense mental states, such as thoughts, beliefs, desires, perceptions and images. Such states are said to have "intentionality": they are about or refer to things, and may be evaluated with respect to properties like consistency, truth, appropriateness and accuracy. (For example, the thought that cousins are not related is inconsistent, the belief that Elvis is dead is true, the desire to eat the moon is inappropriate, a visual experience of a ripe strawberry as red is accurate, an image of George W. Bush with dreadlocks is inaccurate.)
The Representational Theory of Mind defines such intentional mental states as relations to mental representations, and explains the intentionality of the former in terms of the semantic properties of the latter. For example, to believe that Elvis is dead is to be appropriately related to a mental representation whose propositional content is that Elvis is dead. (The desire that Elvis be dead, the fear that he is dead, the regret that he is dead, etc., involve different relations to the same mental representation.) To perceive a strawberry is to have a sensory experience of some kind which is appropriately related to (e.g., caused by) the strawberry. RTM also understands mental processes such as thinking, reasoning and imagining as sequences of intentional mental states. For example, to imagine the moon rising over a mountain is to entertain a series of mental images of the moon (and a mountain). To infer a proposition 'q' from the propositions 'p' and 'if p then q' is (among other things) to have a sequence of thoughts of the form 'p', 'if p then q', 'q'.
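This picture of inference as a rule-governed transition between sentence-like states can be conveyed with a minimal sketch. The sketch below is entirely illustrative and assumes nothing beyond the paragraph above; the Python names 'Rep' and 'modus_ponens' are my own hypothetical labels, not anything from the literature:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass(frozen=True)
    class Rep:
        """A sentence-like mental representation with a propositional content."""
        content: str

    def modus_ponens(p: Rep, conditional: Rep) -> Optional[Rep]:
        """Given tokens of 'p' and 'if p then q', produce a token of 'q'."""
        prefix = f"if {p.content} then "
        if conditional.content.startswith(prefix):
            return Rep(conditional.content[len(prefix):])
        return None

    # The inference is a sequence of states of the form 'p', 'if p then q', 'q'.
    states = [Rep("it rains"), Rep("if it rains then the street is wet")]
    states.append(modus_ponens(states[0], states[1]))
    print([s.content for s in states])

On an RTM-style construal, the rule is sensitive only to the structural (here, syntactic) properties of the tokens, while their contents explain why the transition is truth-preserving.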
Contemporary philosophers of mind have typically supposed (or at least hoped) that the mind can be naturalized - i.e., that all mental facts have explanations in terms of natural science. This assumption is shared within cognitive science, which attempts to provide accounts of mental states and processes in terms (ultimately) of features of the brain and central nervous system. In the course of doing so, the various sub-disciplines of cognitive science (including cognitive and computational psychology and cognitive and computational neuroscience) postulate a number of different kinds of structures and processes, many of which are not directly implicated by mental states and processes as commonsensically conceived. There remains, however, a shared commitment to the idea that mental states and processes are to be explained in terms of mental representations.
In philosophy, recent debates about mental representation have centred around the existence of propositional attitudes (beliefs, desires, etc.) and the determination of their contents (how they come to be about what they are about), and the existence of phenomenal properties and their relation to the content of thought and perceptual experience. Within cognitive science itself, the philosophically relevant debates have been focussed on the computational architecture of the brain and central nervous system, and the compatibility of scientific and commonsense accounts of mentality.
Intentional Realists such as Dretske (e.g., 1988) and Fodor (e.g., 1987) note that the generalizations we apply in everyday life in predicting and explaining each other's behaviour (often collectively referred to as "folk psychology") are both remarkably successful and indispensable. What a person believes, doubts, desires, fears, and so forth is a highly reliable indicator of what that person will do, and we have no other way of making sense of each other's behaviour than by ascribing such states and applying the relevant generalizations. We are thus committed to the basic truth of commonsense psychology and, hence, to the existence of the states its generalizations refer to. (Some realists, such as Fodor, also hold that commonsense psychology will be vindicated by cognitive science, given that propositional attitudes can be construed as computational relations to mental representations.)
Intentional Eliminativists, such as Churchland, (perhaps) Dennett and (at one time) Stich argue that no such things as propositional attitudes (and their constituent representational states) are implicated by the successful explanation and prediction of our mental lives and behaviour. Churchland denies that the generalizations of commonsense propositional-attitude psychology are true. He (1981) argues that folk psychology is a theory of the mind with a long history of failure and decline, and that it resists incorporation into the framework of modern scientific theories (including cognitive psychology). As such, it is comparable to alchemy and phlogiston theory, and ought to suffer a comparable fate. Commonsense psychology is false, and the states (and representations) it postulates simply don't exist. (It should be noted that Churchland is not an eliminativist about mental representation tout court.)
Dennett (1987) grants that the generalizations of commonsense psychology are true and indispensable, but denies that this is sufficient reason to believe in the entities they appear to refer to. He argues that to give an intentional explanation of a system's behaviour is merely to adopt the "intentional stance" toward it. If the strategy of assigning contentful states to a system and predicting and explaining its behaviour (on the assumption that it is rational - i.e., that it behaves as it should, given the propositional attitudes it should have in its environment) is successful, then the system is intentional, and the propositional-attitude generalizations we apply to it are true. But there is nothing more to having a propositional attitude than this.
Though he has been taken to be thus claiming that intentional explanations should be construed instrumentally, Dennett (1991) insists that he is a "moderate" realist about propositional attitudes, since he believes that the patterns in the behaviour and behavioural dispositions of a system on the basis of which we (truly) attribute intentional states to it are objectively real. In the event that there are two or more explanatorily adequate but substantially different systems of intentional ascriptions to an individual, however, Dennett claims there is no fact of the matter about what the system believes (1987, 1991). This does suggest an irrealism at least with respect to the sorts of things Fodor and Dretske take beliefs to be; though it is not the view that there is simply nothing in the world that makes intentional explanations true.
(Davidson 1973, 1974 and Lewis 1974 also defend the view that what it is to have a propositional attitude is just to be interpretable in a particular way. It is, however, not entirely clear whether they intend their views to imply irrealism about propositional attitudes.) Stich (1983) argues that cognitive psychology does not (or, in any case, should not) taxonomize mental states by their semantic properties at all, since attribution of psychological states by content is sensitive to factors that render it problematic in the context of a scientific psychology. Cognitive psychology seeks causal explanations of behaviour and cognition, and the causal powers of a mental state are determined by its intrinsic "structural" or "syntactic" properties. The semantic properties of a mental state, however, are determined by its extrinsic properties - e.g., its history, environmental or intra-mental relations. Hence, such properties cannot figure in causal-scientific explanations of behaviour. (Fodor 1994 and Dretske 1988 are realist attempts to come to grips with some of these problems.) Stich proposes a syntactic theory of the mind, on which the semantic properties of mental states play no explanatory role.
It is a traditional assumption among realists about mental representations that representational states come in two basic varieties (Boghossian 1995). There are those, such as thoughts, which are composed of concepts and have no phenomenal ("what-it's-like") features ("Qualia"), and those, such as sensory experiences, which have phenomenal features but no conceptual constituents. (Non-conceptual content is usually defined as a kind of content that states of a creature lacking concepts might nonetheless enjoy.) On this taxonomy, mental states can represent either in a way analogous to expressions of natural languages or in a way analogous to drawings, paintings, maps or photographs. (Perceptual states such as seeing that something is blue are sometimes thought of as hybrid states, consisting of, for example, a Non-conceptual sensory experience and a thought, or some more integrated compound of sensory and conceptual components.)
Some historical discussions of the representational properties of mind (e.g., Aristotle 1984, Locke 1689/1975, Hume 1739/1978) seem to assume that Non-conceptual representations-percepts ("impressions"), images ("ideas") and the like-are the only kinds of mental representations, and that the mind represents the world in virtue of being in states that resemble things in it. On such a view, all representational states have their content in virtue of their phenomenal features. Powerful arguments, however, focussing on the lack of generality (Berkeley 1975), ambiguity (Wittgenstein 1953) and non-compositionality (Fodor 1981) of sensory and imagistic representations, as well as their unsuitability to function as logical (Frege 1918/1997, Geach 1957) or mathematical (Frege 1884/1953) concepts, and the symmetry of resemblance (Goodman 1976), convinced philosophers that no theory of mind can get by with only Non-conceptual representations construed in this way.
Contemporary disagreement over Non-conceptual representation concerns the existence and nature of phenomenal properties and the role they play in determining the content of sensory experience. Dennett (1988), for example, denies that there are such things as Qualia at all; while Brandom (2002), McDowell (1994), Rey (1991) and Sellars (1956) deny that they are needed to explain the content of sensory experience. Among those who accept that experiences have phenomenal content, some (Dretske, Lycan, Tye) argue that it is reducible to a kind of intentional content, while others (Block, Loar, Peacocke) argue that it is irreducible.
There has also been dissent from the traditional claim that conceptual representations (thoughts, beliefs) lack phenomenology. Chalmers (1996), Flanagan (1992), Goldman (1993), Horgan and Tienson (2003), Jackendoff (1987), Levine (1993, 1995, 2001), McGinn (1991), Pitt (2004), Searle (1992), Siewert (1998) and Strawson (1994) claim that purely symbolic (conscious) representational states themselves have a (perhaps proprietary) phenomenology. If this claim is correct, the question of what role phenomenology plays in the determination of content arises again for conceptual representation, and the eliminativist ambitions of Sellars, Brandom and Rey would meet a new obstacle. (It would also raise prima facie problems for reductive representationalism.)
The representationalist thesis is often formulated as the claim that phenomenal properties are representational or intentional. However, this formulation is ambiguous between a reductive and a non-reductive claim (though the term 'representationalist' is most often used for the reductive claim). On one hand, it could mean that the phenomenal content of an experience is a kind of intentional content (the properties it represents). On the other, it could mean that the (irreducible) phenomenal properties of an experience determine an intentional content. Representationalists such as Dretske, Lycan and Tye would assent to the former claim, whereas phenomenalists such as Block, Chalmers, Loar and Peacocke would assent to the latter. (Among phenomenalists, there is further disagreement about whether Qualia are intrinsically representational (Loar) or not (Block, Peacocke).)
Most (reductive) representationalists are motivated by the conviction that one or another naturalistic explanation of intentionality is, in broad outline, correct, and by the desire to complete the naturalization of the mental by applying such theories to the problem of phenomenality. (Needless to say, most phenomenalists (Chalmers is the major exception) are just as eager to naturalize the phenomenal-though not in the same way.)
The main argument for representationalism appeals to the transparency of experience (cf. Tye 2000: 45-51). The properties that characterize what it's like to have a perceptual experience are presented in experience as properties of objects perceived: in attending to an experience, one seems to "see through it" to the objects and properties it is an experience of. They are not presented as properties of the experience itself. If they were nonetheless properties of the experience, perception would be massively deceptive. But perception is not massively deceptive. According to the representationalist, the phenomenal character of an experience is due to its representing objective, non-experiential properties. (In veridical perception, these properties are locally instantiated; in illusion and hallucination, they are not.) On this view, introspection is indirect perception: one comes to know what phenomenal features one's experience has by coming to know what objective features it represents.
In order to account for the intuitive differences between conceptual and sensory representations, representationalists appeal to their structural or functional differences. Dretske (1995), for example, distinguishes experiences and thoughts on the basis of the origin and nature of their functions: an experience of a property 'P' is a state of a system whose evolved function is to indicate the presence of 'P' in the environment; a thought representing the property 'P', on the other hand, is a state of a system whose assigned (learned) function is to calibrate the output of the experiential system. Rey (1991) takes both thoughts and experiences to be relations to sentences in the language of thought, and distinguishes them on the basis of (the functional roles of) such sentences' constituent predicates. Lycan (1987, 1996) distinguishes them in terms of their functional-computational profiles. Tye (2000) distinguishes them in terms of their functional roles and the intrinsic structure of their vehicles: thoughts are representations in a language-like medium, whereas experiences are image-like representations consisting of "symbol-filled arrays." (See the account of mental images in Tye 1991.)
Phenomenalists tend to make use of the same sorts of features (function, intrinsic structure) in explaining some of the intuitive differences between thoughts and experiences; nonetheless, they do not suppose that such features exhaust the differences between phenomenal and non-phenomenal representations. For the phenomenalist, it is the phenomenal properties of experiences - Qualia themselves - that constitute the fundamental difference between experience and thought. Peacocke (1992), for example, develops the notion of a perceptual "scenario" (an assignment of phenomenal properties to coordinates of a three-dimensional egocentric space), whose content is "correct" (a semantic property) if in the corresponding "scene" (the portion of the external world represented by the scenario) properties are distributed as their phenomenal analogues are in the scenario.
Another sort of representation championed by phenomenalists (e.g., Block, Chalmers (2003) and Loar (1996)) is the "phenomenal concept"-a conceptual/phenomenal hybrid consisting of a phenomenological "sample" (an image or an occurrent sensation) integrated with (or functioning as) a conceptual component. Phenomenal concepts are postulated to account for the apparent fact (among others) that, as McGinn (1991) puts it, "you cannot form [introspective] concepts of conscious properties unless you yourself instantiate those properties." One cannot have a phenomenal concept of a phenomenal property 'P', and, hence, phenomenal beliefs about P, without having experience of 'P', because 'P' itself is (in some way) constitutive of the concept of 'P'. (Jackson 1982, 1986 and Nagel 1974.)
Though imagery has played an important role in the history of philosophy of mind, the important contemporary literature on it is primarily psychological. In a series of psychological experiments done in the 1970s (summarized in Kosslyn 1980 and Shepard and Cooper 1982), subjects' response time in tasks involving mental manipulation and examination of presented figures was found to vary in proportion to the spatial properties (size, orientation, etc.) of the figures presented. The question of how these experimental results are to be explained has kindled a lively debate on the nature of imagery and imagination.
Kosslyn (1980) claims that the results suggest that the tasks were accomplished via the examination and manipulation of mental representations that themselves have spatial properties-i.e., pictorial representations, or images. Others, principally Pylyshyn (1979, 1981, 2003), argue that the empirical facts can be explained in terms exclusively of discursive, or propositional representations and cognitive processes defined over them. (Pylyshyn takes such representations to be sentences in a language of thought.)
The idea that pictorial representations are literally pictures in the head is not taken seriously by proponents of the pictorial view of imagery. The claim is, rather, that mental images represent in a way that is relevantly like the way pictures represent. (Attention has been focussed on visual imagery - hence the designation 'pictorial'; though of course there may be imagery in other modalities - auditory, olfactory, etc. - as well.)
The distinction between pictorial and discursive representation can be characterized in terms of the distinction between analog and digital representation (Goodman 1976). This distinction has itself been variously understood (Fodor & Pylyshyn 1981, Goodman 1976, Haugeland 1981, Lewis 1971, McGinn 1989), though a widely accepted construal is that analog representation is continuous (i.e., in virtue of continuously variable properties of the representation), while digital representation is discrete (i.e., in virtue of properties a representation either has or doesn't have) (Dretske 1981). (An analog/digital distinction may also be made with respect to cognitive processes. (Block 1983.)) On this understanding of the analog/digital distinction, imagistic representations, which represent in virtue of properties that may vary continuously (such as being more or less bright, loud, vivid, etc.), would be analog, while conceptual representations, whose properties do not vary continuously (a thought cannot be more or less about Elvis: either it is or it is not) would be digital.
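To fix ideas, here is a minimal sketch of the contrast as just drawn (my own illustration; the class names and field names are hypothetical): analog representations carry content in continuously variable properties, digital ones in properties a state simply has or lacks:

    from dataclasses import dataclass

    @dataclass
    class ImageRep:
        """Image-like (analog): these properties vary continuously."""
        brightness: float  # any value in [0.0, 1.0]
        vividness: float   # likewise continuous

    @dataclass
    class ThoughtRep:
        """Thought-like (digital): either about Elvis or not, no degrees."""
        about_elvis: bool

    dim = ImageRep(brightness=0.37, vividness=0.52)     # could be slightly brighter
    bright = ImageRep(brightness=0.91, vividness=0.52)
    thought = ThoughtRep(about_elvis=True)              # cannot be 0.91 about Elvis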
It might be supposed that the pictorial/discursive distinction is best made in terms of the phenomenal and nonphenomenal distinction, but it is not obvious that this is the case. For one thing, there may be nonphenomenal properties of representations that vary continuously. Moreover, there are ways of understanding pictorial representation that presuppose neither phenomenality nor analogicity. According to Kosslyn (1980, 1982, 1983), a mental representation is "quasi-pictorial" when every part of the representation corresponds to a part of the object represented, and relative distances between parts of the object represented are preserved among the parts of the representation. But distances between parts of a representation can be defined functionally rather than spatially-for example, in terms of the number of discrete computational steps required to combine stored information about them. (Rey 1981.)
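The functional construal of distance admits of a simple sketch (entirely my own; the part names and their adjacency are invented for illustration): the distance between two parts of a quasi-pictorial representation is the number of discrete computational steps needed to combine stored information about them, with no spatial metric anywhere in the representation:

    from collections import deque

    # Adjacency of stored parts of an imagined object (names are illustrative).
    parts = {
        "nose": ["eyes"],
        "eyes": ["nose", "forehead"],
        "forehead": ["eyes", "hairline"],
        "hairline": ["forehead"],
    }

    def functional_distance(a: str, b: str) -> int:
        """Breadth-first search: count the steps needed to relate stored
        information about part a to stored information about part b."""
        frontier, seen = deque([(a, 0)]), {a}
        while frontier:
            part, steps = frontier.popleft()
            if part == b:
                return steps
            for neighbour in parts[part]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    frontier.append((neighbour, steps + 1))
        raise ValueError("parts not connected")

    print(functional_distance("nose", "hairline"))  # 3 steps; no spatial metric used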
Tye (1991) proposes a view of images on which they are hybrid representations, consisting of both pictorial and discursive elements. On Tye's account, images are "(labelled) interpreted symbol-filled arrays." The symbols represent discursively, while their arrangement in arrays has representational significance (the location of each "cell" in the array represents a specific viewer-centred 2-D location on the surface of the imagined object).
The contents of mental representations are typically taken to be abstract objects (properties, relations, propositions, sets, etc.). A pressing question, especially for the naturalist, is how mental representations come to have their contents. Here the issue is not how to naturalize content (abstract objects can't be naturalized), but, rather, how to provide a naturalistic account of the content-determining relations between mental representations and the abstract objects they express. There are two basic types of contemporary naturalistic theories of content-determination, causal-informational and functional.
Causal-informational theories (Dretske 1981, 1988, 1995) hold that the content of a mental representation is grounded in the information it carries about what does (Devitt 1996) or would (Fodor 1987, 1990) cause it to occur. There is, however, widespread agreement that causal-informational relations are not sufficient to determine the content of mental representations. Such relations are common, but representation is not. Tree trunks, smoke, thermostats and ringing telephones carry information about what they are causally related to, but they do not represent (in the relevant sense) what they carry information about. Further, a representation can be caused by something it does not represent, and can represent something that has not caused it.
The main attempts to specify what makes a causal-informational state a mental representation are Asymmetric Dependency Theories (e.g., Fodor 1987, 1990, 1994) and Teleological Theories (Fodor 1990, Millikan 1984, Papineau 1987, Dretske 1988, 1995). The Asymmetric Dependency Theory distinguishes merely informational relations from representational relations on the basis of their higher-order relations to each other: informational relations depend upon representational relations, but not vice-versa. For example, if tokens of a mental state type are reliably caused by horses, cows-on-dark-nights, zebras-in-the-mist and Great Danes, then they carry information about horses, etc. If, however, such tokens are caused by cows-on-dark-nights, etc. because they were caused by horses, but not vice versa, then they represent horses.
According to Teleological Theories, representational relations are those a representation-producing mechanism has the selected (by evolution or learning) function of establishing. For example, zebra-caused horse-representations do not mean zebra, because the mechanism by which such tokens are produced has the selected function of indicating horses, not zebras. The horse-representation-producing mechanism that responds to zebras is malfunctioning.
Functional theories (Block 1986, Harman 1973) hold that the content of a mental representation is grounded in its (causal, computational, inferential) relations to other mental representations. They differ on whether the relata should include all other mental representations or only some of them, and on whether to include external states of affairs. The view that the content of a mental representation is determined by its inferential/computational relations with all other representations is holism; the view that it is determined by relations to only some other mental states is localism (or molecularism). (The view that the content of a mental state depends on none of its relations to other mental states is atomism.) Functional theories that recognize no content-determining external relata have been called solipsistic (Harman 1987). Some theorists posit distinct roles for internal and external connections, the former determining semantic properties analogous to sense, the latter determining semantic properties analogous to reference (McGinn 1982, Sterelny 1989).
(Reductive) representationalists (Dretske, Lycan, Tye) usually take one or another of these theories to provide an explanation of the (Non-conceptual) content of experiential states. They thus tend to be Externalists about phenomenological as well as conceptual content. Phenomenalists and non-reductive representationalists (Block, Chalmers, Loar, Peacocke, Siewert), on the other hand, take it that the representational content of such states is (at least in part) determined by their intrinsic phenomenal properties. Further, those who advocate a phenomenology-based approach to conceptual content (Horgan and Tienson, Loar, Pitt, Searle, Siewert) also seem to be committed to internalist individuation of the content (if not the reference) of such states.
Generally, those who, like informational theorists, think relations to one's (natural or social) environment are (at least partially) determinative of the content of mental representations are Externalists (e.g., Burge 1979, 1986, McGinn 1977, Putnam 1975), whereas those who, like some proponents of functional theories, think representational content is determined by an individual's intrinsic properties alone, are internalists (or individualists; cf. Putnam 1975, Fodor 1981).
This issue is widely taken to be of central importance, since psychological explanation, whether commonsense or scientific, is supposed to be both causal and content-based. (Beliefs and desires cause the behaviours they do because they have the contents they do. For example, the desire that one have a beer and the beliefs that there is beer in the refrigerator and that the refrigerator is in the kitchen may explain one's getting up and going to the kitchen.) If, however, a mental representation's having a particular content is due to factors extrinsic to it, it is unclear how its having that content could determine its causal powers, which, arguably, must be intrinsic. Some who accept the standard arguments for externalism have argued that internal factors determine a component of the content of a mental representation. They say that mental representations have both "narrow" content (determined by intrinsic factors) and "wide" or "broad" content (determined by narrow content plus extrinsic factors). (This distinction may be applied to the sub-personal representations of cognitive science as well as to those of commonsense psychology.)
Narrow content has been variously construed. Putnam (1975), Fodor (1982) and Block (1986), for example, seem to understand it as something like de dicto content (i.e., Fregean sense, or perhaps character, à la Kaplan 1989). On this construal, narrow content is context-independent and directly expressible. Fodor (1987) and Block (1986), however, have also characterized narrow content as radically inexpressible. On this construal, narrow content is a kind of proto-content, or content-determinant, and can be specified only indirectly, via specifications of context/wide-content pairings. On both construals, narrow contents are characterized as functions from context to (wide) content. The narrow content of a representation is determined by properties intrinsic to it or its possessor, such as its syntactic structure or its intra-mental computational or inferential role (or its phenomenology).
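The function-from-context-to-content construal can be rendered schematically as follows (a sketch of my own using the familiar Twin-Earth scenario; nothing here is drawn from the cited authors' formal apparatus):

    from typing import Callable

    Context = str       # the environment in which the thinker is embedded
    WideContent = str   # the truth-conditional content fixed in that context
    NarrowContent = Callable[[Context], WideContent]

    # The narrow content of a 'water' thought: intrinsic duplicates share it,
    # but it yields different wide contents in different environments.
    def water_narrow_content(context: Context) -> WideContent:
        if context == "Earth":
            return "about H2O"
        if context == "Twin Earth":
            return "about XYZ"
        return "undetermined in this context"

    print(water_narrow_content("Earth"))       # same narrow content...
    print(water_narrow_content("Twin Earth"))  # ...different wide content

On the inexpressibility construal, only the context/wide-content pairings themselves, not any self-standing description of the function, would be available to the theorist.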
Burge (1986) has argued that causation-based worries about externalist individuation of psychological content, and the introduction of the narrow notion, are misguided. Fodor (1994, 1998) has more recently urged that a scientific psychology might not need narrow content in order to supply naturalistic (causal) explanations of human cognition and action, since the sorts of cases it was introduced to handle, viz., Twin-Earth cases and Frege cases, are either nomologically impossible or dismissible as exceptions to psychological laws, which hold only non-strictly.
The leading contemporary version of the Representational Theory of Mind, the Computational Theory of Mind (CTM), claims that the brain is a kind of computer and that mental processes are computations. According to the computational theory of mind, cognitive states are constituted by computational relations to mental representations of various kinds, and cognitive processes are sequences of such states. The computational theory of mind develops the representational theory of mind by attempting to explain all psychological states and processes in terms of mental representation. In the course of constructing detailed empirical theories of human and animal cognition and developing models of cognitive processes implementable in artificial information-processing systems, cognitive scientists have proposed a variety of types of mental representations. While some of these may be suited to be the mental relata of commonsense psychological states, some - so-called "subpersonal" or "sub-doxastic" representations - are not. Though many philosophers believe that the computational theory of mind can provide the best scientific explanations of cognition and behaviour, there is disagreement over whether such explanations will vindicate the commonsense psychological explanations of prescientific representational theory of mind.
According to Stich's (1983) Syntactic Theory of Mind, for example, computational theories of psychological states should concern themselves only with the formal properties of the objects those states are relations to. Commitment to the explanatory relevance of content, however, is for most cognitive scientists fundamental (Fodor 1981, Pylyshyn 1984, Von Eckardt 1993). That mental processes are computations, that computations are rule-governed sequences of semantically evaluable objects, and that the rules apply to the symbols in virtue of their content, are central tenets of mainstream cognitive science.
Explanations in cognitive science appeal to many different kinds of mental representation, including, for example, the "mental models" of Johnson-Laird 1983, the "retinal arrays," "primal sketches" and "2½-D sketches" of Marr 1982, the "frames" of Minsky 1974, the "sub-symbolic" structures of Smolensky 1989, the "quasi-pictures" of Kosslyn 1980, and the "interpreted symbol-filled arrays" of Tye 1991 - in addition to representations that may be appropriate to the explanation of commonsense psychological states. Computational explanations have been offered of, among other mental phenomena, belief (Fodor 1975, Field 1978), visual perception (Marr 1982, Osherson, et al. 1990), rationality (Newell and Simon 1972, Fodor 1975, Johnson-Laird and Wason 1977), language learning (Chomsky 1965, Pinker 1989), and musical comprehension (Lerdahl and Jackendoff 1983).
A fundamental disagreement among proponents of the computational theory of mind concerns the realization of personal-level representations (e.g., thoughts) and processes (e.g., inferences) in the brain. The central debate here is between proponents of Classical Architectures and proponents of Connectionist Architectures.
The classicists (e.g., Turing 1950, Fodor 1975, Fodor and Pylyshyn 1988, Marr 1982, Newell and Simon 1976) hold that mental representations are symbolic structures, which typically have semantically evaluable constituents, and that mental processes are rule-governed manipulations of them that are sensitive to their constituent structure. The connectionists (e.g., McCulloch & Pitts 1943, Rumelhart 1989, Rumelhart and McClelland 1986, Smolensky 1988) hold that mental representations are realized by patterns of activation in a network of simple processors ("nodes") and that mental processes consist of the spreading activation of such patterns. The nodes themselves are, typically, not taken to be semantically evaluable; nor do the patterns have semantically evaluable constituents. (Though there are versions of Connectionism, "localist" versions, on which individual nodes are taken to have semantic properties (e.g., Ballard 1986, Ballard & Hayes 1984).) It is arguable, however, that localist theories are neither definitive nor representative of the Connectionist program (Smolensky 1988, 1991, Chalmers 1993).
Classicists are motivated (in part) by properties thought seems to share with language. Fodor's Language of Thought Hypothesis (LOTH) (Fodor 1975, 1987), according to which the system of mental symbols constituting the neural basis of thought is structured like a language, provides a well-worked-out version of the classical approach as applied to commonsense psychology. According to the language of thought hypothesis, the potential infinity of complex representational mental states is generated from a finite stock of primitive representational states, in accordance with recursive formation rules. This combinatorial structure accounts for the properties of productivity and systematicity of the system of mental representations. As in the case of symbolic languages, including natural languages (though Fodor does not suppose either that the language of thought hypothesis explains only linguistic capacities or that only verbal creatures have this sort of cognitive architecture), these properties of thought are explained by appeal to the content of the representational units and their combinability into contentful complexes. That is, the semantics of both language and thought is compositional: the content of a complex representation is determined by the contents of its constituents and their structural configuration.
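A minimal sketch can make the combinatorial point concrete (my own illustration; the names 'Prim', 'Comb' and 'content' are hypothetical, not Fodor's formalism): a finite stock of primitives plus a recursive formation rule yields an open-ended space of complex representations whose contents are fixed compositionally:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Prim:
        name: str  # a primitive representational state, e.g. JOHN, LOVES

    @dataclass(frozen=True)
    class Comb:
        """A complex representation built by a recursive formation rule."""
        head: "Prim | Comb"
        args: tuple

    def content(rep) -> str:
        """Compositional semantics: the content of a complex is a function of
        the contents of its constituents and their structural configuration."""
        if isinstance(rep, Prim):
            return rep.name.lower()
        return f"{content(rep.head)}({', '.join(content(a) for a in rep.args)})"

    # Systematicity: anyone who can form LOVES(JOHN, MARY) can form LOVES(MARY, JOHN).
    john, mary, loves = Prim("JOHN"), Prim("MARY"), Prim("LOVES")
    print(content(Comb(loves, (john, mary))))  # loves(john, mary)
    print(content(Comb(loves, (mary, john))))  # loves(mary, john)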
Connectionists are motivated mainly by a consideration of the architecture of the brain, which apparently consists of layered networks of interconnected neurons. They argue that this sort of architecture is unsuited to carrying out classical serial computations. For one thing, processing in the brain is typically massively parallel. In addition, the elements whose manipulation drives computation in Connectionist networks (principally, the connections between nodes) are neither semantically compositional nor semantically evaluable, as they are on the classical approach. This contrast with classical computationalism is often characterized by saying that representation is, with respect to computation, distributed as opposed to local: representation is local if it is computationally basic; and distributed if it is not. (Another way of putting this is to say that for classicists mental representations are computationally atomic, whereas for connectionists they are not.)
Moreover, connectionists argue that information processing as it occurs in Connectionist networks more closely resembles some features of actual human cognitive functioning. For example, whereas on the classical view learning involves something like hypothesis formation and testing (Fodor 1981), on the Connectionist model it is a matter of an evolving distribution of "weight" (strength) on the connections between nodes, and typically does not involve the formulation of hypotheses regarding the identity conditions for the objects of knowledge. The Connectionist network is 'trained up' by repeated exposure to the objects it is to learn to distinguish; though networks typically require many more exposures to the objects than humans do, this seems to model at least one feature of this type of human learning quite well.
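The flavour of 'training up' can be conveyed with a standard perceptron-style update rule, offered here only as an analogy for the weight-adjustment idea (the data and learning rate below are invented for illustration):

    # Each example: (input features, correct label 1 or 0).
    examples = [((1.0, 0.2), 1), ((0.1, 0.9), 0), ((0.8, 0.3), 1), ((0.2, 0.8), 0)]
    weights, bias, rate = [0.0, 0.0], 0.0, 0.1

    for _ in range(20):  # many repeated exposures to the same objects
        for (x1, x2), label in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = label - output
            # Learning is just weight adjustment, not hypothesis testing.
            weights[0] += rate * error * x1
            weights[1] += rate * error * x2
            bias += rate * error

    print(weights, bias)  # what was learned is distributed over the weights

No hypothesis about the identity conditions of the discriminated objects is ever formulated; the discrimination simply emerges in the weights.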
Further, degradation in the performance of such networks in response to damage is gradual, not sudden as in the case of a classical information processor, and hence more accurately models the loss of human cognitive function as it typically occurs in response to brain damage. It is also sometimes claimed that Conceptionist systems show the kind of flexibility in response to novel situations typical of human cognition-situations in which classical systems are relatively "brittle" or "fragile."
Some philosophers have maintained that Connectionism entails that there are no propositional attitudes. Ramsey, Stich and Garon (1990) have argued that if Connectionist models of cognition are basically correct, then there are no discrete representational states as conceived in ordinary commonsense psychology and classical cognitive science. Others, however (e.g., Smolensky 1989), hold that certain types of higher-level patterns of activity in a neural network may be roughly identified with the representational states of commonsense psychology. Still others (e.g., Fodor & Pylyshyn 1988, Heil 1991, Horgan and Tienson 1996) argue that language-of-thought style representation is both necessary in general and realizable within Connectionist architectures. (MacDonald & MacDonald 1995 collects the central contemporary papers in the classicist/Connectionist debate, and provides useful introductory material as well.)
Stich (1983) accepts that mental processes are computational but denies that computations are sequences of mental representations; others accept the notion of mental representation but deny that the computational theory of mind provides the correct account of mental states and processes.
Van Gelder (1995) denies that psychological processes are computational. He argues that cognitive systems are dynamic, and that cognitive states are not relations to mental symbols, but quantifiable states of a complex system consisting of (in the case of human beings) a nervous system, a body and the environment in which they are embedded. Cognitive processes are not rule-governed sequences of discrete symbolic states, but continuous, evolving total states of dynamic systems determined by continuous, simultaneous and mutually determining states of the systems' components. Representation in a dynamic system is essentially information-theoretic, though the bearers of information are not symbols, but state variables or parameters.
Horst (1996), on the other hand, argues that though computational models may be useful in scientific psychology, they are of no help in achieving a philosophical understanding of the intentionality of commonsense mental states. The computational theory of mind attempts to reduce the intentionality of such states to the intentionality of the mental symbols they are relations to. But, Horst claims, the relevant notion of symbolic content is essentially bound up with the notions of convention and intention. So the computational theory of mind involves itself in a vicious circularity: the very properties that are supposed to be reduced are (tacitly) appealed to in the reduction.
To say that a mental object has semantic properties is, paradigmatically, to say that it may be about, or be true or false of, an object or objects, or that it may be true or false simpliciter. Suppose I think that ocelots take snuff. I am thinking about ocelots, and if what I think of them (that they take snuff) is true of them, then my thought is true. According to the representational theory of mind, such states are to be explained as relations between agents and mental representations. To think that ocelots take snuff is to token in some way a mental representation whose content is that ocelots take snuff. On this view, the semantic properties of mental states are the semantic properties of the representations they are relations to.
Linguistic acts seem to share such properties with mental states. Suppose I say that ocelots take snuff. I am talking about ocelots, and if what I say of them (that they take snuff) is true of them, then my utterance is true. Now, to say that ocelots take snuff is (in part) to utter a sentence that means that ocelots take snuff. Many philosophers have thought that the semantic properties of linguistic expressions are inherited from the intentional mental states they are conventionally used to express (Grice 1957, Fodor 1978, Schiffer, 1972/1988, Searle 1983). On this view, the semantic properties of linguistic expressions are the semantic properties of the representations that are the mental relata of the states they are conventionally used to express.
It is also widely held that, in addition to having such properties as reference, truth-conditions and truth (so-called extensional properties), expressions of natural languages also have intensional properties, in virtue of expressing properties or propositions - i.e., in virtue of having meanings or senses, where two expressions may have the same reference, truth-conditions or truth value, yet express different properties or propositions (Frege 1892/1997). If the semantic properties of natural-language expressions are inherited from the thoughts and concepts they express (or vice versa, or both), then an analogous distinction may be appropriate for mental representations.
Søren Aabye Kierkegaard (1813-1855) was a Danish religious philosopher whose concern with individual existence, choice, and commitment profoundly influenced modern theology and philosophy, especially existentialism.
Søren Kierkegaard wrote of the paradoxes of Christianity and the faith required to reconcile them. In his book Fear and Trembling, Kierkegaard discusses Genesis 22, in which God commands Abraham to kill his only son, Isaac. Although God made an unreasonable and immoral demand, Abraham obeyed without trying to understand or justify it. Kierkegaard regards this “leap of faith” as the essence of Christianity.
Kierkegaard was born in Copenhagen on May 15, 1813. His father was a wealthy merchant and strict Lutheran, whose gloomy, guilt-ridden piety and vivid imagination strongly influenced Kierkegaard. Kierkegaard studied theology and philosophy at the University of Copenhagen, where he encountered Hegelian philosophy and reacted strongly against it. While at the university, he ceased to practice Lutheranism and for a time led an extravagant social life, becoming a familiar figure in the theatrical and café society of Copenhagen. After his father's death in 1838, however, he decided to resume his theological studies. In 1840 he became engaged to the 17-year-old Regine Olson, but almost immediately he began to suspect that marriage was incompatible with his own brooding, complicated nature and his growing sense of a philosophical vocation. He abruptly broke off the engagement in 1841, but the episode took on great significance for him, and he repeatedly alluded to it in his books. At the same time, he realized that he did not want to become a Lutheran pastor. An inheritance from his father allowed him to devote himself entirely to writing, and in the remaining 14 years of his life he produced more than 20 books.
Kierkegaard's work is deliberately unsystematic and consists of essays, aphorisms, parables, fictional letters and diaries, and other literary forms. Many of his works were originally published under pseudonyms. He applied the term existential to his philosophy because he regarded philosophy as the expression of an intensely examined individual life, not as the construction of a monolithic system in the manner of the 19th-century German philosopher Georg Wilhelm Friedrich Hegel, whose work he attacked in Concluding Unscientific Postscript (1846, trans., 1941). Hegel claimed to have achieved a complete rational understanding of human life and history; Kierkegaard, on the other hand, stressed the ambiguity and paradoxical nature of the human situation. The fundamental problems of life, he contended, defy rational, objective explanation; the highest truth is subjective.
Kierkegaard maintained that systematic philosophy not only imposed a false perspective on human existence but that it also, by explaining life in terms of logical necessity, becomes a means of avoiding choice and responsibility. Individuals, he believed, create their own natures through their choices, which must be made in the absence of universal, objective standards. The validity of a choice can only be determined subjectively.
In his first major work, Either/Or (2 volumes, 1843; trans. 1944), Kierkegaard described two spheres, or stages of existence, that the individual may choose: the aesthetic and the ethical. The aesthetic way of life is refined hedonism, consisting of a search for pleasure and a cultivation of moods. The aesthetic individual constantly seeks variety and novelty in an effort to stave off boredom but eventually must confront boredom and despair. The ethical way of life involves an intense, passionate commitment to duty, to unconditional social and religious obligations. In his later work, Stages on Life's Way (1845, trans., 1940), Kierkegaard discerned in this submission to duty a loss of individual responsibility, and he proposed a third stage, the religious, in which one submits to the will of God but in doing so finds authentic freedom. In Fear and Trembling (1843, trans., 1941) Kierkegaard focused on God's command that Abraham sacrifice his son Isaac (Genesis 22: 1-19), an act that violates Abraham's ethical convictions. Abraham proves his faith by resolutely setting out to obey God's command, even though he cannot understand it. This “suspension of the ethical,” as Kierkegaard called it, allows Abraham to achieve an authentic commitment to God. To avoid ultimate despair, the individual must make a similar “leap of faith” into a religious life, which is inherently paradoxical, mysterious, and full of risk. One is called to it by the feeling of dread (The Concept of Dread, 1844, trans., 1944), which is ultimately a fear of nothingness.
Toward the end of his life Kierkegaard was involved in bitter controversies, especially with the established Danish Lutheran church, which he regarded as worldly and corrupt. His later works, such as The Sickness Unto Death (1849; trans. 1941), reflect an increasingly somber view of Christianity, emphasizing suffering as the essence of authentic faith. He also intensified his attack on modern European society, which he denounced in The Present Age (1846, trans., 1940) for its lack of passion and for its quantitative values. The stress of his prolific writing and the controversies in which he engaged gradually undermined his health; in October 1855 he fainted in the street, and he died in Copenhagen on November 11, 1855.
Kierkegaard's influence was at first confined to Scandinavia and to German-speaking Europe, where his work had a strong impact on Protestant Theology and on such writers as the 20th-century Austrian novelist Franz Kafka. As existentialism emerged as a general European movement after World War I, Kierkegaard's work was widely translated, and he was recognized as one of the seminal figures of modern culture.
Since scientists during the nineteenth century were engrossed with uncovering the workings of external reality, and knew virtually nothing about the physical substrates of human consciousness, the business of examining the dynamic functions and structural foundations of mind became the province of social scientists and humanists. Adolphe Quételet proposed a 'social physics' that could serve as the basis for a new discipline called sociology, and his contemporary Auguste Comte concluded that a true scientific understanding of social reality was inevitable. Mind, in the view of these figures, was a separate and distinct mechanism subject to the lawful workings of a mechanical social reality.
More formal European philosophers, such as Immanuel Kant, sought to reconcile representations of external reality in mind with the motions of matter on the basis of the dictates of pure reason. This impulse was also apparent in the utilitarian ethics of Jeremy Bentham and John Stuart Mill, in the historical materialism of Karl Marx and Friedrich Engels, and in the pragmatism of Charles Sanders Peirce, William James and John Dewey. These thinkers were painfully aware, however, of the inability of reason to posit a self-consistent basis for bridging the gap between mind and matter, and each remained obliged to conclude that the realm of the mental exists only in the subjective reality of the individual.
The fatal flaw of pure reason is, of course, the absence of emotion, and purely rational explanations of the division between subjective reality and external reality had limited appeal outside the community of intellectuals. The figure most responsible for infusing our understanding of Cartesian dualism with emotional content was the 'death of God' theologian Friedrich Nietzsche (1844-1900). After declaring that God and 'divine will' did not exist, Nietzsche reified the 'existence' of consciousness in the domain of subjectivity as the ground for individual 'will', summarily dismissing all previous philosophical attempts to articulate the 'will to truth'. The problem, as he saw it, was that the 'will to truth' disguises the fact that all alleged truths are arbitrarily created in the subjective reality of the individual and are expressions of the individual's 'will'.
In Nietzsche’s view, the separation between mind and matter is more absolute and total than had previously been imagined. From the assumption that there is no necessary correspondence between linguistic constructions of reality in human subjectivity and external reality, he deduced that we are all locked in ‘a prison house of language’. The prison, as he conceived it, was also a ‘space’ where the philosopher can examine the ‘innermost desires of his nature’ and articulate a new message of individual existence founded on ‘will’.
Those who fail to enact their existence in this space, Nietzsche says, are enticed into sacrificing their individuality on the nonexistent altars of religious beliefs and democratic or socialist ideals and become, therefore, members of the anonymous and docile crowd. Nietzsche also invalidated the knowledge claims of science in the examination of human subjectivity. Science, he said, confines itself to natural phenomena and favors a reductionistic examination of phenomena at the expense of mind. It also seeks to reduce the separateness and uniqueness of mind with mechanistic descriptions that disallow any basis for the free exercise of individual will.
Nietzsche’s emotionally charged defense of intellectual freedom and his radical empowerment of mind as the maker and transformer of the collective fictions that shape human reality in a soulless mechanistic universe proved terribly influential on twentieth-century thought. Furthermore, Nietzsche sought to reinforce his view of the subjective character of scientific knowledge by appealing to an epistemological crisis over the foundations of logic and arithmetic that arose during the last three decades of the nineteenth century. Through a curious course of events, the attempt by Edmund Husserl (1859-1938), a German mathematician and a principal founder of phenomenology, to resolve this crisis resulted in a view of the character of consciousness that closely resembled that of Nietzsche.
The best-known disciple of Husserl was Martin Heidegger, and the work of both figures greatly influenced that of the French atheistic existentialist Jean-Paul Sartre. The work of Husserl, Heidegger, and Sartre became foundational to that of the principal architects of philosophical postmodernism, the deconstructionists Jacques Lacan, Roland Barthes, Michel Foucault, and Jacques Derrida. The obvious attribution of a direct linkage between the nineteenth-century crisis about the epistemological foundations of mathematical physics and the origin of philosophical postmodernism served to perpetuate the Cartesian two-world dilemma in an even more oppressive form. It also allows us better to understand the origins of the cultural ambience and the ways in which we might resolve that conflict.
The mechanistic paradigm of the late nineteenth century was the one Einstein came to know when he studied physics. Most physicists believed that it represented an eternal truth, but Einstein was open to fresh ideas. Inspired by Mach’s critical mind, he demolished the Newtonian ideas of space and time and replaced them with new, “relativistic” notions.
Jean-Paul Sartre (1905-1980) was a French philosopher, dramatist, novelist, and political journalist who was a leading exponent of existentialism. Sartre helped to develop existential philosophy through his writings, novels, and plays. Much of his work focuses on the dilemma of choice faced by free individuals and on the challenge of creating meaning by acting responsibly in an indifferent world. In stating that “man is condemned to be free,” Sartre reminds us of the responsibility that accompanies human decisions.
Sartre was born in Paris, June 21, 1905, and educated at the École Normale Supérieure in Paris, the University of Fribourg in Switzerland, and the French Institute in Berlin. He taught philosophy at various lycées from 1929 until the outbreak of World War II, when he was called into military service. In 1940-41 he was imprisoned by the Germans; after his release, he taught in Neuilly, France, and later in Paris, and was active in the French Resistance. The German authorities, unaware of his underground activities, permitted the production of his antiauthoritarian play The Flies (1943; trans. 1946) and the publication of his major philosophic work Being and Nothingness (1943; trans. 1953). Sartre gave up teaching in 1945 and founded the political and literary magazine Les Temps Modernes, of which he became the editor in chief. Sartre was active after 1947 as an independent Socialist, critical of both the USSR and the United States in the so-called cold war years. Later, he supported Soviet positions but still frequently criticized Soviet policies. Most of his writing of the 1950s deals with literary and political problems. Sartre rejected the 1964 Nobel Prize in literature, explaining that to accept such an award would compromise his integrity as a writer.
Sartre's philosophic works combine the phenomenology of the German philosopher Edmund Husserl, the metaphysics of the German philosophers Georg Wilhelm Friedrich Hegel and Martin Heidegger, and the social theory of Karl Marx into a single view called existentialism. This view, which relates philosophical theory to life, literature, psychology, and political action, stimulated so much popular interest that existentialism became a worldwide movement.
In his early philosophic work, Being and Nothingness, Sartre conceived humans as beings who create their own world by rebelling against authority and by accepting personal responsibility for their actions, unaided by society, traditional morality, or religious faith. Distinguishing between human existence and the nonhuman world, he maintained that human existence is characterized by nothingness, that is, by the capacity to negate and rebel. His theory of an existential psychoanalysis asserted the inescapable responsibility of all individuals for their own decisions and made the recognition of one's absolute freedom of choice the necessary condition for authentic human existence. His plays and novels express the belief that freedom and acceptance of personal responsibility are the main values in life and that individuals must rely on their creative powers rather than on social or religious authority.
In his later philosophic work Critique of Dialectical Reason (1960, trans., 1976), Sartre's emphasis shifted from existentialist freedom and subjectivity to Marxist social determinism. Sartre argued that the influence of modern society over the individual is so great as to produce serialization, by which he meant loss of self. Individual power and freedom can only be regained through group revolutionary action. Despite this exhortation to revolutionary political activity, Sartre himself did not join the Communist Party, thus retaining the freedom to criticize the Soviet invasions of Hungary in 1956 and Czechoslovakia in 1968. He died in Paris, April 15, 1980.
The part of semiotics that concerns the relationship between speakers and the signs they use, including the study of the principles governing appropriate conversational moves, is generally called pragmatics; applied pragmatics treats special kinds of linguistic interaction such as interviews and speech-making. Pragmatism, by contrast, is the philosophical movement that has had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism’s refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists’ denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the theories of the British biologist Charles Darwin, whose work suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning - in particular, the meaning of concepts used in science. The meaning of the concept “brittle,” for example, is given by the observed consequences or properties that objects called “brittle” exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. Many philosophers known as logical positivists, a group influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of earlier positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce’s doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called “the will to believe” and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist - someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey’s philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world view in which individuals and societies are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey’s writings, although he aspired to synthesize the two realms.
The pragmatist tradition was revitalized in the 1980s by the American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest has renewed in the classic pragmatists - Peirce, James, and Dewey - as an alternative to Rorty’s interpretation of the tradition.
In an ever-changing world, pragmatism has many benefits. It defends social experimentation as a means of improving society, accepts pluralism, and rejects dead dogmas. But a philosophy that offers no final answers or absolutes and that appears vague as a result of trying to harmonize opposites may also be unsatisfactory to some.
Semantics is one of the five branches into which semiotics is usually divided: the study of the meaning of words and of their relation to the objects they designate. A semantics is provided for a formal language when an interpretation or model is specified. Semantics (from the Greek semantikos, “significant”) is the study of the meaning of linguistic signs - that is, words, expressions, and sentences. Scholars of semantics try to answer such questions as “What is the meaning of (the word) X?” They do this by studying what signs are, as well as how signs possess significance - that is, how they are intended by speakers, how they designate (make reference to things and ideas), and how they are interpreted by hearers. The goal of semantics is to match the meanings of signs - what they stand for - with the process of assigning those meanings.
Semantics is studied from philosophical (pure) and linguistic (descriptive and theoretical) approaches, and an approach known as general semantics. Philosophers look at the behavior that goes with the process of meaning. Linguists study the elements or features of meaning as they are related in a linguistic system. General semanticists concentrate on meaning as influencing what people think and do.
These semantic approaches also have broader application. Anthropologists, through descriptive semantics, study what people categorize as culturally important. Psychologists draw on theoretical semantic studies that attempt to describe the mental process of understanding and to identify how people acquire meaning (as well as sound and structure) in language. Animal behaviorists research how and what other species communicate. Exponents of general semantics examine the different values (or connotations) of signs that supposedly mean the same thing (such as “the victor at Jena” and “the loser at Waterloo,” both referring to Napoleon). Also in a general-semantics vein, literary critics have been influenced by studies differentiating literary language from ordinary language and describing how literary metaphors evoke feelings and attitudes.
In the late 19th century Michel Jules Alfred Bréal, a French philologist, proposed a “science of significations” that would investigate how sense is attached to expressions and other signs. In 1910 the British philosophers Alfred North Whitehead and Bertrand Russell published Principia Mathematica, which strongly influenced the Vienna Circle, a group of philosophers who developed the rigorous philosophical approach known as logical positivism.
One of the leading figures of the Vienna Circle, the German philosopher Rudolf Carnap, made a major contribution to philosophical semantics by developing symbolic logic, a system for analyzing signs and what they designate. In logical positivism, meaning is a relationship between words and things, and its study is empirically based: Because language, ideally, is a direct reflection of reality, signs match things and facts. In symbolic logic, however, mathematical notation is used to state what signs designate and to do so more clearly and precisely than is possible in ordinary language. Symbolic logic is thus itself a language, specifically, a metalanguage (formal technical language) used to talk about an object language (the language that is the object of a given semantic study).
An object language has a speaker (for example, a French woman) using expressions (such as la plume rouge) to designate a meaning (in this case, a plume of the color red - rouge). The full description of an object language in symbols is called the semiotic of that language. A language's semiotic has the following aspects: (1) a semantic aspect, in which signs (words, expressions, sentences) are given specific designations; (2) a pragmatic aspect, in which the contextual relations between speakers and signs are indicated; and (3) a syntactic aspect, in which formal relations among the elements within signs (for example, among the sounds in a sentence) are indicated.
An interpreted language in symbolic logic is an object language together with rules of meaning that link signs and designations. Each interpreted sign has a truth condition - a condition that must be met in order for the sign to be true. A sign's meaning is what the sign designates when its truth condition is satisfied. For example, the expression or sign “the moon is a sphere” is understood by someone who knows English; however, although it is understood, it may or may not be true. The expression is true if the thing it refers to - the moon - is in fact spherical; to determine the sign's truth value, one must observe the moon for oneself.
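To make the notion of an interpreted language concrete, here is a minimal Python sketch, assuming a toy object language with a single sign; the dictionaries, the stand-in `world`, and the function name `truth_value` are hypothetical illustrations, not part of any standard formalism.

```python
# A toy "interpreted language": signs paired with designations and
# truth conditions. All names here are illustrative inventions.

world = {"moon": {"shape": "sphere"}}   # a stand-in for observable facts

# Rules of meaning: each sign is linked to a designation (what it is
# about) and a truth condition (a test the designated object must pass).
interpreted_signs = {
    "the moon is a sphere": {
        "designation": "moon",
        "truth_condition": lambda obj: obj["shape"] == "sphere",
    },
}

def truth_value(sign: str) -> bool:
    """A sign is true when the object it designates meets its truth condition."""
    entry = interpreted_signs[sign]
    designated_object = world[entry["designation"]]
    return entry["truth_condition"](designated_object)

print(truth_value("the moon is a sphere"))  # True, given this toy world
```

Looking up the designation in `world` plays the role of "observing the moon for oneself": the truth value depends on the world, while the rule of meaning stays fixed.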
The symbolic logic of logical positivist philosophy thus represents an attempt to get at meaning by way of the empirical verifiability of signs-by whether the truth of the sign can be confirmed by observing something in the real world. This attempt at understanding meaning has been only moderately successful. The Austrian-British philosopher Ludwig Wittgenstein rejected it in favor of his “ordinary language” philosophy, in which he asserted that thought is based on everyday language. Not all signs designate things in the world, he pointed out, nor can all signs be associated with truth values. In his approach to philosophical semantics, the rules of meaning are disclosed in how speech is used.
From ordinary-language philosophy has evolved the current theory of speech-act semantics. The British philosopher J. L. Austin claimed that, by speaking, a person performs an act, or does something (such as state, predict, or warn), and that meaning is found in what an expression does, in the act it performs. The American philosopher John R. Searle extended Austin's ideas, emphasizing the need to relate the functions of signs or expressions to their social context. Searle asserted that speech encompasses at least three kinds of acts: (1) locutionary acts, in which things are said with a certain sense or reference (as in “the moon is a sphere”); (2) illocutionary acts, in which such acts as promising or commanding are performed by means of speaking; and (3) perlocutionary acts, in which the speaker, by speaking, does something to someone else (for example, angers, consoles, or persuades someone). The speaker's intentions are conveyed by the illocutionary force that is given to the signs-that is, by the actions implicit in what is said. To be successfully meant, however, the signs must also be appropriate, sincere, consistent with the speaker's general beliefs and conduct, and recognizable as meaningful by the hearer.
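Searle's threefold classification can be pictured as a small data model. The sketch below is only an illustrative Python encoding; the class and field names (`Utterance`, `IllocutionaryForce`, and so on) are invented for the example and are not a standard formalization of speech-act theory.

```python
from dataclasses import dataclass
from enum import Enum

class IllocutionaryForce(Enum):
    # Acts performed by means of speaking (a partial, illustrative list).
    STATE = "state"
    PROMISE = "promise"
    COMMAND = "command"
    WARN = "warn"

@dataclass
class Utterance:
    locutionary_content: str       # what is said, with a sense and reference
    force: IllocutionaryForce      # the act performed in saying it
    perlocutionary_effect: str     # what saying it does to the hearer

u = Utterance("the moon is a sphere", IllocutionaryForce.STATE,
              "informs the hearer")
print(u.force.value)  # "state"
```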
What has developed in philosophical semantics, then, is a distinction between truth-based semantics and speech-act semantics. Some critics of speech-act theory believe that it deals primarily with meaning in communication (as opposed to meaning in language) and thus is part of the pragmatic aspect of a language's semiotic-that it relates to signs and to the knowledge of the world shared by speakers and hearers, rather than relating to signs and their designations (semantic aspect) or to formal relations among signs (syntactic aspect). These scholars hold that semantics should be restricted to assigning interpretations to signs alone-independent of a speaker and hearer.
Researchers in descriptive semantics examine what signs mean or designate in particular languages. They aim, for instance, to identify what constitutes nouns or noun phrases and verbs or verb phrases. For some languages, such as English, this is done with subject-predicate analysis. For languages without clear-cut distinctions between nouns, verbs, and prepositions, it is possible to say what the signs mean by analyzing the structure of what are called propositions. In such an analysis, a sign is seen as an operator that combines with one or more arguments (also signs) - often nominal arguments (noun phrases) - or relates nominal arguments to other elements in the expression (such as prepositional phrases or adverbial phrases). For example, in the expression ‘Bill gives Mary the book,’ ‘gives’ is an operator that relates the arguments ‘Bill’, ‘Mary’, and ‘the book.’
Whether using subject-predicate analysis or propositional analysis, descriptive semanticists establish expression classes (classes of items that can substitute for one another within a sign) and classes of items within the conventional parts of speech (such as nouns and verbs). The resulting classes are thus defined in terms of syntax, and they also have semantic roles; that is, the items in these classes perform specific grammatical functions, and in so doing they establish meaning by predicating, referring, making distinctions among entities, relations, or actions. For example, “kiss” belongs to an expression class with other items such as “hit” and “see,” as well as to the conventional part of speech “verb,” in which it is part of a subclass of operators requiring two arguments (an actor and a receiver). In “Mary kissed John,” the syntactic role of “kiss” is to relate two nominal arguments (“Mary” and “John”), whereas its semantic role is to identify a type of action. Unfortunately for descriptive semantics, however, it is not always possible to find a one-to-one correlation of syntactic classes with semantic roles. For instance, “John” has the same semantic role-to identify a person-in the following two sentences: “John is easy to please” and “John is eager to please.” The syntactic role of “John” in the two sentences, however, is different: In the first, “John” is the receiver of an action; in the second, “John” is the actor.
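The operator-argument analysis of ‘Bill gives Mary the book’ and the role assignment in ‘Mary kissed John’ can be made concrete with a short sketch. The `Proposition` class and the role labels below are hypothetical illustrations of the idea, not a standard linguistic formalism.

```python
from dataclasses import dataclass

# Propositional analysis: a sign as an operator over argument signs.
# Role labels ("actor", "receiver", "theme") are illustrative choices.
@dataclass
class Proposition:
    operator: str        # the verb acting as operator
    arguments: dict      # semantic role -> nominal argument

give = Proposition(
    operator="gives",    # a three-argument operator
    arguments={"actor": "Bill", "receiver": "Mary", "theme": "the book"},
)

kiss = Proposition(
    operator="kissed",   # a two-argument operator
    arguments={"actor": "Mary", "receiver": "John"},
)

# Syntactic arity (how many arguments the operator takes) is separate
# from the semantic roles those arguments play:
print(len(give.arguments))      # 3
print(kiss.arguments["actor"])  # "Mary"
```

The lack of a one-to-one correlation noted above would show up here as two sentences whose role dictionaries differ even though their surface syntax looks alike.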
Linguistic semantics is also used by anthropologists called ethnoscientists to conduct formal semantic analysis (componential analysis) to determine how expressed signs - usually single words as vocabulary items called lexemes - in a language are related to the perceptions and thoughts of the people who speak the language. Componential analysis tests the idea that linguistic categories influence or determine how people view the world; this idea is called the Whorf hypothesis after the American anthropological linguist Benjamin Lee Whorf, who proposed it. In componential analysis, lexemes that have a common range of meaning constitute a semantic domain. Such a domain is characterized by the distinctive semantic features (components) that differentiate individual lexemes in the domain from one another, and also by features shared by all the lexemes in the domain. Such componential analysis points out, for example, that in the domain ‘seat’ in English, the lexemes ‘chair’, ‘sofa’, ‘loveseat’, and ‘bench’ can be distinguished from one another according to how many people are accommodated and whether a back support is included, while all the lexemes share the common component, or feature, of meaning “something on which to sit.”
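The ‘seat’ domain lends itself to a feature-matrix sketch. In the toy encoding below, the particular capacity values assigned to each lexeme are illustrative guesses; only the two distinctive features named in the text (seating capacity and back support) are taken from the example.

```python
# The semantic domain "seat": lexemes share the component
# "something on which to sit" and differ in distinctive features.
seat_domain = {
    "chair":    {"capacity": 1, "has_back": True},
    "sofa":     {"capacity": 3, "has_back": True},
    "loveseat": {"capacity": 2, "has_back": True},
    "bench":    {"capacity": 3, "has_back": False},  # values are illustrative
}

def distinctive_features(lexeme_a: str, lexeme_b: str) -> dict:
    """Return the features on which two lexemes in the domain differ."""
    a, b = seat_domain[lexeme_a], seat_domain[lexeme_b]
    return {f: (a[f], b[f]) for f in a if a[f] != b[f]}

print(distinctive_features("sofa", "bench"))  # differ only on back support
```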
Linguists pursuing such componential analysis hope to identify a universal set of such semantic features, from which are drawn the different sets of features that characterize different languages. This idea of universal semantic features has been applied to the analysis of systems of myth and kinship in various cultures by the French anthropologist Claude Lévi-Strauss. He showed that people organize their societies and interpret their place in these societies in ways that, despite apparent differences, have remarkable underlying similarities.
Linguists concerned with theoretical semantics are looking for a general theory of meaning in language. To such linguists, known as transformational-generative grammarians, meaning is part of the linguistic knowledge or competence that all humans possess. A generative grammar as a model of linguistic competence has a phonological (sound-system), a syntactic, and a semantic component. The semantic component, as part of a generative theory of meaning, is envisioned as a system of rules that govern how interpretable signs are interpreted and determine that other signs (such as “Colorless green ideas sleep furiously”), although grammatical expressions, are meaningless-semantically blocked. The rules must also account for how a sentence such as “They passed the port at midnight” can have at least two interpretations.
Generative semantics grew out of proposals to explain a speaker's ability to produce and understand new expressions where grammar or syntax fails. Its goal is to explain why and how, for example, a person understands at first hearing that the sentence “Colorless green ideas sleep furiously” has no meaning, even though it follows the rules of English grammar; or how, in hearing a sentence with two possible interpretations (such as “They passed the port at midnight”), one decides which meaning applies.
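One way to picture how a semantic component might block a grammatical but meaningless sentence is a selectional-restriction check. The features in this sketch (`abstract`, `concrete`, `animate`) are invented for illustration; real generative-semantic rule systems are far richer than a subset test.

```python
# Toy lexicon: each noun's semantic features, and the features that
# adjectives and verbs demand of the noun they combine with.
# All feature assignments here are illustrative inventions.
noun_features = {"ideas": {"abstract"}, "dogs": {"concrete", "animate"}}
selectional = {"green": {"concrete"},   # a colour needs a concrete bearer
               "sleep": {"animate"}}    # sleeping needs an animate subject

def semantically_blocked(adjective: str, noun: str, verb: str) -> bool:
    """Blocked if the noun lacks a feature the adjective or verb demands."""
    have = noun_features[noun]
    need = selectional[adjective] | selectional[verb]
    return not need.issubset(have)

print(semantically_blocked("green", "ideas", "sleep"))  # True: blocked
print(semantically_blocked("green", "dogs", "sleep"))   # False: interpretable
```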
In generative semantics, the idea developed that all information needed to semantically interpret a sign (usually a sentence) is contained in the sentence's underlying grammatical or syntactic deep structure. The deep structure of a sentence involves lexemes (understood as words or vocabulary items composed of bundles of semantic features selected from the proposed universal set of semantic features). On the sentence's surface (that is, when it is spoken) these lexemes will appear as nouns, verbs, adjectives, and other parts of speech-that is, as vocabulary items. When the sentence is formulated by the speaker, semantic roles (such as subject, objects, predicate) are assigned to the lexemes; the listener hears the spoken sentence and interprets the semantic features that are meant.
Whether deep structure and semantic interpretation are distinct from one another is a matter of controversy. Most generative linguists agree, however, that a grammar should generate the set of semantically well-formed expressions that are possible in a given language, and that the grammar should associate a semantic interpretation with each expression.
Another subject of debate is whether semantic interpretation should be understood as syntactically based (that is, coming from a sentence's deep structure) or whether it should be seen as semantically based. According to Noam Chomsky, an American scholar who is particularly influential in this field, it is possible - in a syntactically based theory - for surface structure and deep structure jointly to determine the semantic interpretation of an expression.
The focus of general semantics is how people evaluate words and how that evaluation influences their behavior. Begun by the Polish American linguist Alfred Korzybski and long associated with the American semanticist and politician S. I. Hayakawa, general semantics has been used in efforts to make people aware of dangers inherent in treating words as more than symbols. It has been extremely popular with writers who use language to influence people's ideas. In their work, these writers use general-semantics guidelines for avoiding loose generalizations, rigid attitudes, inappropriate finality, and imprecision. Some philosophers and linguists, however, have criticized general semantics as lacking scientific rigor, and the approach has declined in popularity.
Positivism, system of philosophy based on experience and empirical knowledge of natural phenomena, in which metaphysics and theology are regarded as inadequate and imperfect systems of knowledge. The doctrine was first called positivism by the 19th-century French mathematician and philosopher Auguste Comte (1798-1857), but some of the positivist concepts may be traced to the British philosopher David Hume, the French social philosopher the Comte de Saint-Simon, and the German philosopher Immanuel Kant.
Comte chose the word positivism on the ground that it indicated the “reality” and “constructive tendency” that he claimed for the theoretical aspect of the doctrine. He was, in the main, interested in a reorganization of social life for the good of humanity through scientific knowledge, and thus through control of natural forces. The two primary components of positivism, the philosophy and the polity (or programs of individual and social conduct), were later welded by Comte into a whole under the conception of a religion, in which humanity was the object of worship. A number of Comte's disciples refused, however, to accept this religious development of his philosophy, because it seemed to contradict the original positivist philosophy. Many of Comte's doctrines were later adapted and developed by the British social philosophers John Stuart Mill and Herbert Spencer and by the Austrian philosopher and physicist Ernst Mach.
During the early 20th century a group of philosophers who were concerned with developments in modern science rejected the traditional positivist idea that personal experience is the basis of true knowledge and emphasized the importance of scientific verification. This group came to be known as logical positivists, and it included the Austrian philosopher Ludwig Wittgenstein and the British philosophers Bertrand Russell and G. E. Moore. It was Wittgenstein's Tractatus Logico-philosophicus (1921; German-English parallel text, 1922) that proved to be of decisive influence in the rejection of metaphysical doctrines as meaningless and the acceptance of empiricism as a matter of logical necessity.
Positivists today, who have rejected this so-called Vienna school of philosophy, prefer to call themselves logical empiricists in order to dissociate themselves from the earlier thinkers' emphasis on scientific verification. They maintain that the verification principle itself is philosophically unverifiable.
Bertrand Arthur William Russell (1872-1970), British philosopher, mathematician, and Nobel laureate, whose emphasis on logical analysis influenced the course of 20th-century philosophy. In the early 20th century Russell, along with the British mathematician and philosopher Alfred North Whitehead, attempted to demonstrate that mathematics and numbers can be understood as groups of concepts, or classes. Russell and Whitehead tried to show that mathematics is closely related to logic and, in turn, that ordinary sentences can be logically analyzed using mathematical symbols for words and phrases. This idea resulted in a new symbolic language, used by Russell in a field he termed philosophical logic, in which philosophical propositions were reformulated and examined according to his symbolic logic.
Born in Trelleck, Wales, on May 18, 1872, Russell was educated at Trinity College, University of Cambridge. After graduation in 1894, he traveled in France, Germany, and the United States and was then made a fellow of Trinity College. From an early age he developed a strong sense of social consciousness; at the same time, he involved himself in the study of logical and mathematical questions, which he had made his special fields and on which he was called to lecture at many institutions throughout the world. He achieved prominence with his first major work, The Principles of Mathematics (1903), in which he attempted to remove mathematics from the realm of abstract philosophical notions and to give it a precise scientific framework.
Russell then collaborated for eight years with the British philosopher and mathematician Alfred North Whitehead to produce the monumental work Principia Mathematica (3 volumes, 1910-1913). This work showed that mathematics can be stated in terms of the concepts of general logic, such as class and membership in a class. It became a masterpiece of rational thought. Russell and Whitehead proved that numbers can be defined as classes of a certain type, and in the process they developed logic concepts and a logic notation that established symbolic logic as an important specialization within the field of philosophy. In his next major work, The Problems of Philosophy (1912), Russell borrowed from the fields of sociology, psychology, physics, and mathematics to refute the tenets of idealism, the dominant philosophical school of the period, which held that all objects and experiences are the product of the intellect. Russell, a realist, believed that objects perceived by the senses have an inherent reality independent of the mind.
Russell criticized both sides in World War I (1914-1918), and for his uncompromising stand he was fined, imprisoned, and deprived of his teaching post at Cambridge. In prison he wrote Introduction to Mathematical Philosophy (1919), combining the two areas of knowledge he regarded as inseparable. After the war he visited the Russian Soviet Federated Socialist Republic, and in his book Practice and Theory of Bolshevism (1920) he expressed his disappointment with the form of socialism practiced there. He felt that the methods used to achieve a Communist system were intolerable and that the results obtained were not worth the price paid.
Russell taught at Beijing University in China during 1921 and 1922. From 1928 to 1932, after he returned to England, he conducted the private, highly progressive Beacon Hill School for young children. From 1938 to 1944 he taught at various educational institutions in the United States. He was barred, however, from teaching at the College of the City of New York (now City College of the City University of New York) by the state supreme court because of his attacks on religion in such works as What I Believe (1925) and his advocacy of sexual freedom, expressed in Marriage and Morals (1929).
Russell returned to England in 1944 and was reinstated as a fellow of Trinity College. Although he abandoned pacifism to support the Allied cause in World War II (1939-1945), he became an ardent and active opponent of nuclear weapons. In 1949 he was awarded the Order of Merit by King George VI. Russell received the 1950 Nobel Prize for Literature and was cited as “the champion of humanity and freedom of thought.” He led a movement in the late 1950s advocating unilateral nuclear disarmament by Britain, and at the age of 89 he was imprisoned after an antinuclear demonstration. He died on February 2, 1970.
In addition to his earlier work, Russell also made a major contribution to the development of logical positivism, a strong philosophical movement of the 1930's and 1940's. The major Austrian philosopher Ludwig Wittgenstein, at one time Russell's student at Cambridge, was strongly influenced by Russell's original concept of logical atomism. In his search for the nature and limits of knowledge, Russell was a leader in the revival of the philosophy of empiricism in the larger field of epistemology. In Our Knowledge of the External World (1926) and Inquiry into Meaning and Truth (1962), he attempted to explain all factual knowledge as constructed out of immediate experiences. Among his other books are The ABC of Relativity (1925), Education and the Social Order (1932), A History of Western Philosophy (1945), The Impact of Science upon Society (1952), My Philosophical Development (1959), War Crimes in Vietnam (1967), and The Autobiography of Bertrand Russell (3 volumes, 1967-1969).
Analytic and linguistic philosophy is a 20th-century philosophical movement, dominant in Britain and the United States since World War II, that aims to clarify language and analyze the concepts expressed in it. The movement has been given a variety of designations, including linguistic analysis, logical empiricism, logical positivism, Cambridge analysis, and “Oxford philosophy.” The last two labels are derived from the universities in England where this philosophical method has been particularly influential. Although no specific doctrines or tenets are accepted by the movement as a whole, analytic and linguistic philosophers agree that the proper activity of philosophy is clarifying language or, as some prefer, clarifying concepts. The aim of this activity is to settle philosophical disputes and resolve philosophical problems, which, it is argued, originate in linguistic confusion.
A considerable diversity of views exists among analytic and linguistic philosophers regarding the nature of conceptual or linguistic analysis. Some have been primarily concerned with clarifying the meaning of specific words or phrases as an essential step in making philosophical assertions clear and unambiguous. Others have been more concerned with determining the general conditions that must be met for any linguistic utterance to be meaningful; their intent is to establish a criterion that will distinguish between meaningful and nonsensical sentences. Still other analysts have been interested in creating formal, symbolic languages that are mathematical in nature. Their claim is that philosophical problems can be more effectively dealt with once they are formulated in a rigorous logical language.
By contrast, many philosophers associated with the movement have focused on the analysis of ordinary, or natural, language. Difficulties arise when concepts such as time and freedom, for example, are considered apart from the linguistic context in which they normally appear. Attention to language as it is ordinarily used is the key, it is argued, to resolving many philosophical puzzles.
Many experts believe that philosophy as an intellectual discipline originated with the work of Plato, one of the most celebrated philosophers in history. The Greek thinker had an immeasurable influence on Western thought. However, Plato’s expression of ideas in the form of dialogues - the dialectical method, used most famously by his teacher Socrates - has led to difficulties in interpreting some of the finer points of his thought. The issue of what exactly Plato meant to say has been addressed by scholars such as R. M. Hare.
Linguistic analysis as a method of philosophy is as old as the Greeks. Several of the dialogues of Plato, for example, are specifically concerned with clarifying terms and concepts. Nevertheless, this style of philosophizing has received dramatically renewed emphasis in the 20th century. Influenced by the earlier British empirical tradition of John Locke, George Berkeley, David Hume, and John Stuart Mill and by the writings of the German mathematician and philosopher Gottlob Frege, the 20th-century English philosophers G. E. Moore and Bertrand Russell became the founders of this contemporary analytic and linguistic trend. As students together at the University of Cambridge, Moore and Russell rejected Hegelian idealism, particularly as it was reflected in the work of the English metaphysician F. H. Bradley, who held that nothing is completely real except the Absolute. In their opposition to idealism and in their commitment to the view that careful attention to language is crucial in philosophical inquiry, they set the mood and style of philosophizing for much of the 20th-century English-speaking world.
For Moore, philosophy was first and foremost analysis. The philosophical task involves clarifying puzzling propositions or concepts by indicating less puzzling propositions or concepts to which the originals are held to be logically equivalent. Once this task has been completed, the truth or falsity of problematic philosophical assertions can be determined more adequately. Moore was noted for his careful analyses of such puzzling philosophical claims as “time is unreal,” analyses that then aided in determining the truth of such assertions.
Russell, strongly influenced by the precision of mathematics, was concerned with developing an ideal logical language that would accurately reflect the nature of the world. Complex propositions, Russell maintained, can be resolved into their simplest components, which he called atomic propositions. These propositions refer to atomic facts, the ultimate constituents of the universe. The metaphysical views based on this logical analysis of language and the insistence that meaningful propositions must correspond to facts constitute what Russell called logical atomism. His interest in the structure of language also led him to distinguish between the grammatical form of a proposition and its logical form. The statements “John is good” and “John is tall” have the same grammatical form but different logical forms. Failure to recognize this would lead one to treat the property “goodness” as if it were a characteristic of John in the same way that the property “tallness” is a characteristic of John. Such failure results in philosophical confusion.
Austrian-born philosopher Ludwig Wittgenstein was one of the most influential thinkers of the 20th century. With his fundamental work, Tractatus Logico-philosophicus, published in 1921, he became a central figure in the movement known as analytic and linguistic philosophy.
Russell’s work in mathematics attracted to Cambridge the Austrian philosopher Ludwig Wittgenstein, who became a central figure in the analytic and linguistic movement. In his first major work, Tractatus Logico-Philosophicus (1921; trans. 1922), in which he first presented his theory of language, Wittgenstein argued that “all philosophy is a ‘critique of language’” and that “philosophy aims at the logical clarification of thoughts.” The results of Wittgenstein’s analysis resembled Russell’s logical atomism. The world, he argued, is ultimately composed of simple facts, which it is the purpose of language to picture. To be meaningful, statements about the world must be reducible to linguistic utterances that have a structure similar to the simple facts pictured. In this early Wittgensteinian analysis, only propositions that picture facts - the propositions of science - are considered factually meaningful. Metaphysical, theological, and ethical sentences were judged to be factually meaningless.
Influenced by Russell, Wittgenstein, Ernst Mach, and others, a group of philosophers and mathematicians in Vienna in the 1920's initiated the movement known as logical positivism. Led by Moritz Schlick and Rudolf Carnap, the Vienna Circle initiated one of the most important chapters in the history of analytic and linguistic philosophy. According to the positivists, the task of philosophy is the clarification of meaning, not the discovery of new facts (the job of the scientists) or the construction of comprehensive accounts of reality (the misguided pursuit of traditional metaphysics).
The positivists divided all meaningful assertions into two classes: analytic propositions and empirically verifiable ones. Analytic propositions, which include the propositions of logic and mathematics, are statements the truth or falsity of which depends altogether on the meanings of the terms constituting the statement. An example would be the proposition “two plus two equals four.” The second class of meaningful propositions includes all statements about the world that can be verified, at least in principle, by sense experience. Indeed, the meaning of such propositions is identified with the empirical method of their verification. This verifiability theory of meaning, the positivists concluded, would demonstrate that scientific statements are legitimate factual claims and that metaphysical, religious, and ethical sentences are factually empty. The ideas of logical positivism were made popular in England by the publication of A. J. Ayer’s Language, Truth and Logic in 1936.
The positivists’ verifiability theory of meaning came under intense criticism by philosophers such as the Austrian-born British philosopher Karl Popper. Eventually this narrow theory of meaning yielded to a broader understanding of the nature of language. Again, an influential figure was Wittgenstein. Repudiating many of his earlier conclusions in the Tractatus, he initiated a new line of thought culminating in his posthumously published Philosophical Investigations (1953; trans. 1953). In this work, Wittgenstein argued that once attention is directed to the way language is actually used in ordinary discourse, the variety and flexibility of language become clear. Propositions do much more than simply picture facts.
This recognition led to Wittgenstein’s influential concept of language games. The scientist, the poet, and the theologian, for example, are involved in different language games. Moreover, the meaning of a proposition must be understood in its context, that is, in terms of the rules of the language game of which that proposition is a part. Philosophy, concluded Wittgenstein, is an attempt to resolve problems that arise as the result of linguistic confusion, and the key to the resolution of such problems is ordinary language analysis and the proper use of language.
Finally, Wittgenstein is of particular note for his contribution to the movement known as analytic and linguistic philosophy. Born in Vienna on April 26, 1889, Wittgenstein was raised in a wealthy and cultured family. After attending schools in Linz and Berlin, he went to England to study engineering at the University of Manchester. His interest in pure mathematics led him to Trinity College, University of Cambridge, to study with Bertrand Russell. There he turned his attention to philosophy. By 1918 Wittgenstein had completed his Tractatus Logico-Philosophicus (1921; trans. 1922), which he was convinced provided a ‘final solution’ to philosophical problems. Subsequently, he turned from philosophy and for several years taught elementary school in an Austrian village. In 1929 he returned to Cambridge to resume his work in philosophy and was appointed to the faculty of Trinity College. Soon he began to reject certain conclusions of the Tractatus and to develop the position reflected in his Philosophical Investigations (published posthumously 1953; trans. 1953). Wittgenstein retired in 1947; he died in Cambridge on April 29, 1951. A sensitive, intense man who often sought solitude and was frequently depressed, Wittgenstein abhorred pretense and was noted for his simple style of life and dress. The philosopher was forceful and confident in personality, however, and he exerted considerable influence on those with whom he came in contact.
Wittgenstein’s philosophical life may be divided into two distinct phases: an early period, represented by the Tractatus, and a later period, represented by the Philosophical Investigations. Throughout most of his life, however, Wittgenstein consistently viewed philosophy as linguistic or conceptual analysis. In the Tractatus he argued that “philosophy aims at the logical clarification of thoughts.” In the Philosophical Investigations, however, he maintained that “philosophy is a battle against the bewitchment of our intelligence by means of language.”
Language, Wittgenstein argued in the Tractatus, is composed of complex propositions that can be analyzed into less complex propositions until one arrives at simple, or elementary, propositions. Correspondingly, the world is composed of complex facts that can be analyzed into less complex facts until one arrives at simple, or atomic, facts. The world is the totality of these facts. According to Wittgenstein’s picture theory of meaning, it is the nature of elementary propositions logically to picture atomic facts, or “states of affairs.” He claimed that the nature of language required elementary propositions, and his theory of meaning required that there be atomic facts pictured by the elementary propositions. On this analysis, only propositions that picture facts - the propositions of science - are considered cognitively meaningful. Metaphysical and ethical statements are not meaningful assertions. The logical positivists associated with the Vienna Circle were greatly influenced by this conclusion.
Wittgenstein came to believe, however, that the narrow view of language reflected in the Tractatus was mistaken. In the Philosophical Investigations he argued that if one actually looks to see how language is used, the variety of linguistic usage becomes clear. Words are like tools, and just as tools serve different functions, so linguistic expressions serve many functions. Although some propositions are used to picture facts, others are used to command, question, pray, thank, curse, and so on. This recognition of linguistic flexibility and variety led to Wittgenstein’s concept of a language game and to the conclusion that people play different language games. The scientist, for example, is involved in a different language game than the theologian. Moreover, the meaning of a proposition must be understood in terms of its context, that is, in terms of the rules of the language game of which that proposition is a part. The key to the resolution of philosophical puzzles is the therapeutic process of examining and describing language in use.
Evolutionary psychology attempts to ground psychology in evolutionary principles, on which a variety of higher mental functions may be adaptations formed in response to selection pressures on human populations through evolutionary time. Candidates for such theorizing include maternal and paternal motivations, capacities for love and friendship, the development of language as a signalling system, cooperative and aggressive tendencies, our emotional repertoire, and our moral reactions, including the disposition to detect and punish those who cheat on agreements or who “free-ride” on the work of others, as well as our cognitive structures and many other traits. Evolutionary psychology goes hand in hand with neurophysiological evidence about the underlying circuitry in the brain that subserves the psychological mechanisms it claims to identify. The approach was foreshadowed by Darwin himself and by William James, as well as by the sociobiology of E. O. Wilson. The term is applied, more or less aggressively, especially to explanations offered in sociobiology and evolutionary psychology.
Another assumption that is frequently used to legitimate the real existence of forces associated with the invisible hand in neoclassical economics derives from Darwin's view of natural selection as a warlike competition between atomized organisms in the struggle for survival. In natural selection as we now understand it, however, cooperation appears to exist in a complementary relation to competition. From complementary relationships between such factors emerge self-regulating properties that are greater than the sum of the parts and that serve to perpetuate the existence of the whole.
According to E. O. Wilson, the “human mind evolved to believe in the gods” and people “need a sacred narrative” to have a sense of higher purpose. Yet it is also clear that the “gods” in his view are merely human constructs, and, therefore, there is no basis for dialogue between the world-view of science and that of religion. “Science for its part,” said Wilson, “will test relentlessly every assumption about the human condition and in time uncover the bedrock of the moral and religious sentiments.” The eventual result of the competition between the two world-views, he believes, will be the secularization of the human epic and of religious belief itself. Man has come to a threshold in his state of consciousness regarding his nature and his relationship to the Cosmos, in terms that reflect “reality.” By using the processes of nature as metaphor to describe the forces by which it operates upon and within Man, we come as close to describing “reality” as we can within the limits of our comprehension. Men will be very uneven in their capacity for such understanding, which, naturally, differs for different ages and cultures, and develops and changes over the course of time. For these reasons it will always be necessary to use metaphor and myth to provide “comprehensible” guides to living. In this way, Man's imagination and reasoning play an extremely vital role in his survival and evolution.
Since so much of life both inside and outside the study is concerned with finding explanations of things, it would be desirable to have a concept of what distinguishes a good explanation from a bad one. Under the influence of “logical positivist” approaches to the structure of science, it was felt that the criterion ought to be found in a definite logical relationship between the explanans (that which does the explaining) and the explanandum (that which is to be explained). This approach culminated in the covering law model of explanation, the view that an event is explained when it is subsumed under a law of nature, that is, when its occurrence is deducible from the law plus a set of initial conditions. A law would itself be explained by being deduced from a higher-order or covering law, in the way that Johannes Kepler's (1571-1630) laws of planetary motion were deducible from Newton's laws of motion. The covering law model may be adapted to include explanation by showing that something is probable, given a statistical law. Questions for the covering law model include whether covering laws are necessary to explanation (we explain everyday events without overtly citing laws); whether they are sufficient (it may not explain an event just to say that it is an example of the kind of thing that always happens); and whether a purely logical relationship is adequate to capture the requirements we place on explanations. These may include, for instance, that we have a ‘feel’ for what is happening, or that the explanation proceeds in terms of things that are familiar to us or unsurprising, or that we can give a model of what is going on, and none of these notions is captured in a purely logical approach. Recent work, therefore, has tended to stress the contextual and pragmatic elements in requirements for explanation, so that what counts as a good explanation given one set of concerns may not do so given another.
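The deductive-nomological idea can be caricatured in a few lines of Python: an explanation succeeds when the explanandum is deducible from a covering law plus initial conditions. The law of free fall used below is merely a convenient stand-in, not part of the philosophical literature being summarized.

```python
# Deductive-nomological sketch: an explanandum is "explained" when it
# is deducible from a covering law plus initial conditions.

def covering_law(initial_conditions: dict) -> float:
    """Illustrative law: a body falling from rest covers s = 1/2 * g * t**2."""
    g, t = initial_conditions["g"], initial_conditions["t"]
    return 0.5 * g * t ** 2

initial_conditions = {"g": 9.8, "t": 2.0}  # the particular facts of the case
explanandum = 19.6                         # the event to be explained (metres)

# Deduction step: the event follows from law + conditions.
assert abs(covering_law(initial_conditions) - explanandum) < 1e-9
print("explanandum subsumed under the covering law")
```

The objections listed above amount to observing that such a derivation can succeed while leaving us with no ‘feel’ for what happened, and can fail for perfectly good everyday explanations that cite no law.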
The argument to the best explanation is the view that once we can select the best of the competing explanations of an event, we are justified in accepting it, or even believing it. The principle needs qualification, since sometimes it is unwise to ignore the antecedent improbability of a hypothesis which would explain the data better than others: e.g., the best explanation of a coin falling heads 530 times in 1,000 tosses might be that it is biased to give a probability of heads of 0.53, but it might be more sensible to suppose that it is fair, or to suspend judgement.
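A quick calculation makes the qualification vivid. The sketch below (an illustration supplied here, not part of the original argument; the prior probabilities are invented for the example) compares the likelihood of the data under the two hypotheses and then weighs in the antecedent improbability of bias:

```python
from math import comb

def binomial_likelihood(p, heads=530, tosses=1000):
    """Probability of exactly `heads` heads in `tosses` tosses of a coin with bias p."""
    return comb(tosses, heads) * p**heads * (1 - p)**(tosses - heads)

fair = binomial_likelihood(0.5)     # likelihood under the fair-coin hypothesis
biased = binomial_likelihood(0.53)  # likelihood under the 0.53-bias hypothesis

# The bias hypothesis explains the data better (the ratio is roughly 6)...
print(f"likelihood ratio (biased/fair): {biased / fair:.2f}")

# ...but if fair coins are antecedently far more probable (invented prior),
# it remains more sensible to suppose the coin is fair:
prior_fair, prior_biased = 0.999, 0.001
posterior_fair = prior_fair * fair / (prior_fair * fair + prior_biased * biased)
print(f"posterior probability of fairness: {posterior_fair:.3f}")  # about 0.994
```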
The philosophy of language is the general attempt to understand the components of a working language, the relationship that the understanding speaker has to its elements, and the relationship they bear to the world. The subject therefore embraces the traditional division of semiotic into syntax, semantics, and pragmatics. The philosophy of language thus mingles with the philosophy of mind, since it needs an account of what it is in our understanding that enables us to use language. It likewise mingles with the metaphysics of truth and the relationship between sign and object. Much philosophy in the 20th century has been informed by the belief that the philosophy of language is the fundamental basis of all philosophical problems, in that language is the distinctive exercise of mind and the distinctive way in which we give shape to metaphysical beliefs. Particular topics include the problem of logical form and the basis of the division between syntax and semantics, as well as problems concerning the number and nature of specifically semantic relationships such as meaning, reference, predication, and quantification. Pragmatics includes the theory of speech acts, while problems of rule following and the indeterminacy of translation infect the philosophies of both pragmatics and semantics.
On this conception, to understand a sentence is to know its truth-conditions. The conception has remained central in a distinctive way: those who offer opposing theories characteristically define their positions by reference to it. The conception of meaning as truth-conditions need not, and should not, be advanced as a complete account of meaning. For instance, one who understands a language must have some idea of the range of speech acts conventionally performed by the various types of sentence in the language, and must have some idea of the significance of various kinds of speech act. The claim of the theorist of truth-conditions should rather be targeted on the notion of content: if indicative sentences differ in what they strictly and literally say, then this difference is fully accounted for by the difference in their truth-conditions.
The meaning of a complex expression is a function of the meanings of its constituents. This is indeed just a statement of what it is for an expression to be semantically complex. It is one of the initial attractions of the conception of meaning as truth-conditions that it permits a smooth and satisfying account of the way in which the meaning of a complex expression is a function of the meanings of its constituents. On the truth-conditional conception, to give the meaning of an expression is to state the contribution it makes to the truth-conditions of sentences in which it occurs. For singular terms - proper names, indexicals, and certain pronouns - this is done by stating the reference of the term in question. For predicates, it is done either by stating the conditions under which the predicate is true of arbitrary objects, or by stating the conditions under which arbitrary atomic sentences containing it are true. The meaning of a sentence-forming operator is given by stating its contribution to the truth-conditions of a complex sentence, as a function of the semantic values of the sentences on which it operates.
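A toy model makes the compositional picture concrete. The sketch below is a hypothetical illustration (the mini-language, its reference table, and its extension are all invented for the example): the semantic value of a name is its referent, the value of a predicate is a set of objects, and the value of a sentential operator is a function on truth values.

```python
# Toy compositional semantics for an invented two-word language.
reference = {"London": "london", "Paris": "paris"}   # singular terms -> referents
extension = {"is beautiful": {"paris"}}              # predicates -> sets of objects

def atomic(term, predicate):
    """An atomic sentence is true iff the term's referent falls in the predicate's extension."""
    return reference[term] in extension[predicate]

def negation(p):
    """A sentence-forming operator contributes a function on the truth values of its parts."""
    return not p

# The truth-condition of a complex sentence is computed from its constituents:
print(atomic("Paris", "is beautiful"))             # True
print(negation(atomic("London", "is beautiful")))  # True in this toy model
```

Nothing in the sketch presupposes a prior, non-truth-conditional notion of meaning: each entry states only the contribution the expression makes to truth-conditions.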
The theorist of truth conditions should insist that not every true statement about the reference of an expression is fit to be an axiom in a meaning-giving theory of truth for a language. Consider the axiom: 'London' refers to the city in which there was a huge fire in 1666. This is a true statement about the reference of 'London'. It is a consequence of a theory which substitutes this axiom for the simple axiom of our truth theory that 'London is beautiful' is true if and only if the city in which there was a huge fire in 1666 is beautiful. Since a subject can understand 'London' without knowing that last-mentioned truth condition, this replacement axiom is not fit to be an axiom in a meaning-specifying truth theory. It is, of course, incumbent on a theorist of meaning as truth conditions to state the constraint on acceptable axioms in a way which does not presuppose any previous, non-truth-conditional conception of meaning.
Among the many challenges facing the theorist of truth conditions, two are particularly salient and fundamental. First, the theorist has to answer the charge of triviality or vacuity; second, the theorist must offer an account of what it is for a person's language to be truly describable by a semantic theory containing a given semantic axiom.
Since the claim that 'Paris is beautiful' is true amounts to no more than the claim that Paris is beautiful, we can trivially describe understanding a sentence, if we wish, as knowing its truth-conditions; but this gives us no substantive account of understanding whatsoever. Something other than a grasp of truth conditions must provide the substantive account. The charge rests upon what has been called the redundancy theory of truth, the theory which, somewhat more discriminatingly, Horwich calls the minimal theory of truth: the concept of truth is exhausted by the fact that it conforms to the equivalence principle, the principle that for any proposition 'p', it is true that 'p' if and only if 'p'. Many different philosophical theories of truth will, with suitable qualifications, accept that equivalence principle. The distinguishing feature of the minimal theory is its claim that the equivalence principle exhausts the notion of truth. It is now widely accepted, both by opponents and supporters of truth conditional theories of meaning, that it is inconsistent to accept both the minimal theory of truth and a truth conditional account of meaning. If the claim that 'Paris is beautiful' is true is exhausted by its equivalence to the claim that Paris is beautiful, it is circular to try to explain the sentence's meaning in terms of its truth conditions. The minimal theory of truth has been endorsed by the Cambridge mathematician and philosopher Frank Plumpton Ramsey (1903-30), the English philosopher Alfred Jules Ayer, the later Wittgenstein, Quine, Strawson, Horwich and - confusingly and inconsistently if this article is correct - Frege himself. But is the minimal theory correct?
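The equivalence schema at issue can be displayed in standard notation (supplied here for clarity; the corner quotes form a name of the proposition):

$$
\mathrm{True}(\ulcorner p \urcorner) \;\leftrightarrow\; p
$$

The minimal theory claims that the instances of this schema, one for each proposition, are all there is to the concept of truth.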
The minimal theory treats instances of the equivalence principle as definitional of truth for a given sentence, but in fact, it seems that each instance of the equivalence principle can itself be explained. The truths from which such an instance as "'London is beautiful' is true if and only if London is beautiful" can be explained are the facts that 'London' refers to London and that 'is beautiful' is true of just the beautiful things. This would be a pseudo-explanation if the fact that 'London' refers to London consisted in part in the fact that 'London is beautiful' has the truth-condition it does. But that is very implausible: it is, after all, possible to understand 'London' without understanding the predicate 'is beautiful'.
The counterfactual conditional is sometimes known as the subjunctive conditional. A counterfactual conditional is a conditional of the form 'if p were to happen, q would,' or 'if p were to have happened, q would have happened,' where the supposition of 'p' is contrary to the known fact 'not-p.' Such assertions are nevertheless useful: 'if you had broken the bone, the X-ray would have looked different,' or 'if the reactor were to fail, this mechanism would click in' are important truths, even when we know that the bone is not broken or are certain that the reactor will not fail. It is arguably distinctive of laws of nature that they yield counterfactuals ('if the metal were to be heated, it would expand'), whereas accidentally true generalizations may not. It is clear that counterfactuals cannot be represented by the material implication of the propositional calculus, since that conditional comes out true whenever 'p' is false, so there would be no division between true and false counterfactuals.
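The truth table for material implication (standard propositional calculus, supplied here for reference) shows the problem: the conditional is automatically true whenever its antecedent is false, so every counterfactual, plausible or absurd, would count as true.

$$
\begin{array}{cc|c}
p & q & p \rightarrow q \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
$$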
Although the subjunctive form indicates a counterfactual, in many contexts it does not seem to matter whether we use a subjunctive or a simple conditional form: 'If you run out of water, you will be in trouble' seems equivalent to 'if you were to run out of water, you would be in trouble.' In other contexts there is a big difference: 'If Oswald did not kill Kennedy, someone else did' is clearly true, whereas 'if Oswald had not killed Kennedy, someone else would have' is most probably false.
The best-known modern treatment of counterfactuals is that of David Lewis, which evaluates them as true or false according to whether 'q' is true in the 'most similar' possible worlds to ours in which 'p' is true. The similarity-ranking this approach needs has proved controversial, particularly since it may need to presuppose some notion of sameness of laws of nature, whereas part of the interest in counterfactuals is that they promise to illuminate that notion. There is a growing awareness that the classification of conditionals is an extremely tricky business, and categorizing them as counterfactual or not may be of limited use.
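A minimal sketch of the Lewis evaluation rule is given below. Everything here is a toy: the worlds, the similarity metric (simple agreement-counting), and the propositions are invented for illustration, and they do not capture the subtleties of Lewis's actual system.

```python
# Toy Lewis-style evaluation: a world assigns truth values to propositions.
actual = {"p": False, "q": False}

worlds = [
    {"p": True, "q": True},
    {"p": True, "q": False},
    {"p": False, "q": True},
]

def similarity(world, base):
    """Invented metric: number of propositions on which world agrees with base."""
    return sum(world[k] == base[k] for k in base)

def counterfactual(p, q, worlds, base):
    """'If p were true, q would be true' holds iff q is true at every
    most-similar world at which p is true."""
    p_worlds = [w for w in worlds if w[p]]
    if not p_worlds:
        return True  # vacuously true if no p-world exists
    best = max(similarity(w, base) for w in p_worlds)
    return all(w[q] for w in p_worlds if similarity(w, base) == best)

print(counterfactual("p", "q", worlds, actual))  # False in this toy model
```

The controversy noted above lives inside `similarity`: change the metric (for instance, weight agreement on laws more heavily than agreement on particular facts) and the verdicts change with it.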
A conditional is any proposition of the form 'if p then q'; 'p' is called the antecedent of the conditional, and 'q' the consequent. Various kinds of conditional have been distinguished. The weakest is the material implication, which tells us merely that either 'not-p' or 'q'. Stronger conditionals include elements of modality, corresponding to the thought that if 'p' is true then 'q' must be true. Ordinary language is very flexible in its use of the conditional form, and there is controversy over whether this flexibility is semantic, yielding different kinds of conditionals with different meanings, or pragmatic, in which case there should be one basic meaning, with surface differences arising from other implicatures.
We now turn to a philosophy of meaning and truth especially associated with the American philosopher of science and of language Charles Sanders Peirce (1839-1914) and the American psychologist and philosopher William James (1842-1910). Pragmatism was given various formulations by both writers, but the core is the belief that the meaning of a doctrine is the same as the practical effects of adopting it. Peirce interpreted the meaning of a theoretical sentence as only that of a corresponding practical maxim (telling us what to do in some circumstance). In James the position issues in a theory of truth, notoriously allowing that a belief, including for example belief in God, is true if it works satisfactorily in the widest sense of the word. On James's view almost any belief might be respectable, and even true, provided it works (but working is no simple matter for James). The apparent subjectivist consequences of this were widely assailed by Russell (1872-1970), Moore (1873-1958), and others in the early years of the 20th century. This led to a division within pragmatism between those such as the American educator John Dewey (1859-1952), whose humanistic conception of practice remains inspired by science, and the more idealistic route taken especially by the English writer F.C.S. Schiller (1864-1937), embracing the doctrine that our cognitive efforts and human needs actually transform the reality that we seek to describe. James often writes as if he sympathizes with this development. For instance, in The Meaning of Truth (1909), he considers the hypothesis that other people have no minds (dramatized in the sexist idea of an 'automatic sweetheart' or female zombie) and remarks that the hypothesis would not work because it would not satisfy our egoistic craving for the recognition and admiration of others; the implication that this is what makes it true that other people have minds is the disturbing part.
Modern pragmatists such as the American philosopher and critic Richard Rorty (1931-) and, in some writings, the philosopher Hilary Putnam (1926-) have usually tried to dispense with an account of truth and to concentrate, as perhaps James should have done, upon the nature of belief and its relations with human attitude, emotion, and need. The driving motivation of pragmatism is the idea that belief in the truth on the one hand must have a close connection with success in action on the other. One way of cementing the connection is found in the idea that natural selection must have adapted us to be cognitive creatures because beliefs have effects: they work. Pragmatism can be found in Kant's doctrine of the primacy of practical over pure reason, and it continues to play an influential role in the theory of meaning and of truth.
Functionalism in the philosophy of mind is the modern successor to behaviourism. Its early advocates were Putnam (1926-) and Sellars (1912-89), and its guiding principle is that we can define mental states by a triplet of relations: what typically causes them, what effects they have on other mental states, and what effects they have on behaviour. The definition need not take the form of a simple analysis, but if we could write down the totality of axioms, or postulates, or platitudes that govern our theories about what things are apt to cause (for example) a belief state, what effects it would have on a variety of other mental states, and what effects it is likely to have on behaviour, then we would have done all that is needed to make the state a proper theoretical notion: it would be implicitly defined by these theses. Functionalism is often compared with descriptions of a computer, since mental descriptions correspond to a description of a machine in terms of software, which remains silent about the underlying hardware or 'realization' of the program the machine is running. The principal advantage of functionalism is its fit with the way we know of mental states both in ourselves and in others, which is via their effects on behaviour and on other mental states. As with behaviourism, critics charge that structurally complex items that do not bear mental states might nevertheless imitate the functions that are cited. According to this criticism, functionalism is too generous and would count too many things as having minds. It is also queried whether functionalism is too parochial, able to see mental similarities only when there is causal similarity, whereas our actual practices of interpretation enable us to ascribe thoughts and desires to creatures whose causal architecture may be very different from our own. It may then seem that beliefs and desires can be 'variably realized' in different causal architectures, just as they can be in different neurophysiological states.
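The software/hardware comparison can be made vivid with a small sketch (a hypothetical illustration only, with invented class names, not a serious model of mind): one and the same functional role, defined by its typical cause and its typical effect on behaviour, is realized by two quite different 'hardwares'.

```python
# Toy multiple realizability: one functional role, two realizations.
# The role is defined only by what causes the state and what it causes.

class NeuralRealization:
    """Realizes the role with a continuously varying activation level."""
    def __init__(self):
        self.activation = 0.0
    def perceive(self, evidence):      # typical cause: incoming evidence
        self.activation += evidence
    def acts(self):                    # typical effect: behaviour
        return self.activation > 1.0

class SiliconRealization:
    """Realizes the same role with a discrete counter."""
    def __init__(self):
        self.count = 0
    def perceive(self, evidence):
        if evidence > 0:
            self.count += 1
    def acts(self):
        return self.count >= 2

# Functionally identical at this grain of description, despite different "hardware":
for agent in (NeuralRealization(), SiliconRealization()):
    agent.perceive(0.7)
    agent.perceive(0.7)
    print(type(agent).__name__, agent.acts())  # both print True
```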
The philosophical movement of Pragmatism had a major impact on American culture from the late 19th century to the present. Pragmatism calls for ideas and theories to be tested in practice, by assessing whether acting upon the idea or theory produces desirable or undesirable results. According to pragmatists, all claims about truth, knowledge, morality, and politics must be tested in this way. Pragmatism has been critical of traditional Western philosophy, especially the notions that there are absolute truths and absolute values. Although pragmatism was popular for a time in France, England, and Italy, most observers believe that it encapsulates an American faith in know-how and practicality and an equally American distrust of abstract theories and ideologies.
The American psychologist and philosopher William James helped to popularize the philosophy of pragmatism with his book Pragmatism: A New Name for Some Old Ways of Thinking (1907). Influenced by a theory of meaning and verification developed for scientific hypotheses by the American philosopher C.S. Peirce, James held that truth is what works, or has good experimental results. In a related theory, James argued that the existence of God is partly verifiable because many people derive benefits from believing.
The Association for International Conciliation first published William James's pacifist statement, 'The Moral Equivalent of War', in 1910. James, a highly respected philosopher and psychologist, was one of the founders of pragmatism, a philosophical movement holding that ideas and theories must be tested in practice to assess their worth. James hoped to find a way to convince men with a long-standing history of pride and glory in war to evolve beyond the need for bloodshed and to develop other avenues for conflict resolution. Spelling and grammar in the statement represent standards of the time.
Pragmatists regard all theories and institutions as tentative hypotheses and solutions. For this reason they believe that efforts to improve society, through such means as education or politics, must be geared toward problem solving and must be ongoing. Through their emphasis on connecting theory to practice, pragmatist thinkers attempted to transform all areas of philosophy, from metaphysics to ethics and political philosophy.
Pragmatism sought a middle ground between traditional ideas about the nature of reality and radical theories of nihilism and irrationalism, which had become popular in Europe in the late 19th century. Traditional metaphysics assumed that the world has a fixed, intelligible structure and that human beings can know absolute or objective truths about the world and about what constitutes moral behavior. Nihilism and irrationalism, on the other hand, denied those very assumptions and their certitude. Pragmatists today still try to steer a middle course between contemporary offshoots of these two extremes.
The ideas of the pragmatists were considered revolutionary when they first appeared. To some critics, pragmatism's refusal to affirm any absolutes carried negative implications for society. For example, pragmatists do not believe that a single absolute idea of goodness or justice exists, but rather that these concepts are changeable and depend on the context in which they are being discussed. The absence of these absolutes, critics feared, could result in a decline in moral standards. The pragmatists' denial of absolutes, moreover, challenged the foundations of religion, government, and schools of thought. As a result, pragmatism influenced developments in psychology, sociology, education, semiotics (the study of signs and symbols), and scientific method, as well as philosophy, cultural criticism, and social reform movements. Various political groups have also drawn on the assumptions of pragmatism, from the progressive movements of the early 20th century to later experiments in social reform.
Pragmatism is best understood in its historical and cultural context. It arose during the late 19th century, a period of rapid scientific advancement typified by the work of British biologist Charles Darwin, whose theories suggested to many thinkers that humanity and society are in a perpetual state of progress. During this same period a decline in traditional religious beliefs and values accompanied the industrialization and material progress of the time. In consequence it became necessary to rethink fundamental ideas about values, religion, science, community, and individuality.
The three most important pragmatists are the American philosophers Charles Sanders Peirce, William James, and John Dewey. Peirce was primarily interested in scientific method and mathematics; his objective was to infuse scientific thinking into philosophy and society, and he believed that human comprehension of reality was becoming ever greater and that human communities were becoming increasingly progressive. Peirce developed pragmatism as a theory of meaning, in particular the meaning of concepts used in science. The meaning of the concept 'brittle', for example, is given by the observed consequences or properties that objects called 'brittle' exhibit. For Peirce, the only rational way to increase knowledge was to form mental habits that would test ideas through observation, experimentation, or what he called inquiry. The logical positivists, a group of philosophers influenced by Peirce, believed that our evolving species was fated to get ever closer to Truth. Logical positivists emphasize the importance of scientific verification, rejecting the assertion of positivism that personal experience is the basis of true knowledge.
James moved pragmatism in directions that Peirce strongly disliked. He generalized Peirce's doctrines to encompass all concepts, beliefs, and actions; he also applied pragmatist ideas to truth as well as to meaning. James was primarily interested in showing how systems of morality, religion, and faith could be defended in a scientific civilization. He argued that sentiment, as well as logic, is crucial to rationality and that the great issues of life - morality and religious belief, for example - are leaps of faith. As such, they depend upon what he called 'the will to believe', and not merely on scientific evidence, which can never tell us what to do or what is worthwhile. Critics charged James with relativism (the belief that values depend on specific situations) and with crass expediency for proposing that if an idea or action works the way one intends, it must be right. But James can more accurately be described as a pluralist, someone who believes the world to be far too complex for any one philosophy to explain everything.
Dewey's philosophy can be described as a version of philosophical naturalism, which regards human experience, intelligence, and communities as ever-evolving mechanisms. Using their experience and intelligence, Dewey believed, human beings can solve problems, including social problems, through inquiry. For Dewey, naturalism led to the idea of a democratic society that allows all members to acquire social intelligence and progress both as individuals and as communities. Dewey held that traditional ideas about knowledge, truth, and values, in which absolutes are assumed, are incompatible with a broadly Darwinian world-view in which individuals and society are progressing. In consequence, he felt that these traditional ideas must be discarded or revised. Indeed, for pragmatists, everything people know and do depends on a historical context and is thus tentative rather than absolute.
Many followers and critics of Dewey believe he advocated elitism and social engineering in his philosophical stance. Others think of him as a kind of romantic humanist. Both tendencies are evident in Dewey's writings, although he aspired to synthesize the two realms.
The pragmatic tradition was revitalized in the 1980s by the American philosopher Richard Rorty, who has faced similar charges of elitism for his belief in the relativism of values and his emphasis on the role of the individual in attaining knowledge. Interest in the classic pragmatists - Peirce, James, and Dewey - has renewed as an alternative to Rorty's interpretation of the tradition.
The philosophy of mind is the branch of philosophy that considers mental phenomena such as sensation, perception, thought, belief, desire, intention, memory, emotion, imagination, and purposeful action. These phenomena, which can be broadly grouped as thoughts and experiences, are features of human beings; many of them are also found in other animals. Philosophers are interested in the nature of each of these phenomena as well as their relationships to one another and to physical phenomena, such as motion.
The most famous exponent of dualism was the French philosopher René Descartes, who maintained that body and mind are radically different entities and that they are the only fundamental substances in the universe. Dualism, however, does not show how these basic entities are connected.
In the work of the German philosopher Gottfried Wilhelm Leibniz, the universe is held to consist of an infinite number of distinct simple substances, or monads. This view is pluralistic in the sense that it proposes the existence of many separate entities, and it is monistic in its assertion that each monad reflects within itself the entire universe.
Other philosophers have held that knowledge of reality is not derived from a priori principles, but is obtained only from experience. This type of metaphysic is called empiricism. Still another school of philosophy has maintained that, although an ultimate reality does exist, it is altogether inaccessible to human knowledge, which is necessarily subjective because it is confined to states of mind. Knowledge is therefore not a representation of external reality, but merely a reflection of human perceptions. This view is known as skepticism or agnosticism in respect to the soul and the reality of God.
The 18th-century German philosopher Immanuel Kant published his seminal work, the Critique of Pure Reason, in 1781. Three years later, he expanded on his study of the modes of thinking with an essay entitled 'What is Enlightenment?' In this 1784 essay, Kant challenged readers to 'dare to know', arguing that it was not only a civic but also a moral duty to exercise the fundamental freedoms of thought and expression.
Several major viewpoints were combined in the work of Kant, who developed a distinctive critical philosophy known as transcendental idealism. His philosophy is agnostic in that it denies the possibility of a strict knowledge of ultimate reality; it is empirical in that it affirms that all knowledge arises from experience and is true of objects of actual and possible experience; and it is rationalistic in that it maintains the a priori character of the structural principles of this empirical knowledge.